CHOICES WITHOUT BACKTRACKING

Johan de Kleer
Intelligent Systems Laboratory
XEROX Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304

The results of this paper are born out of frustration. Qualitative reasoning and constraint languages, the primary topics of my research, both require making choices among alternatives. Various packages for dealing with choice have been developed, primarily derived from (Doyle, 1979) and (McAllester, 1980). These systems have proven to be woefully inadequate for even simple qualitative reasoning tasks. The reasons for this are twofold. First, they are intrinsically incapable of working with multiple contradictory choices at once - something one needs to do all the time in qualitative reasoning. Second, they are very inefficient in both time and space. Very simple problems fill up all the memory of a Symbolics LM-2 or 3600 in short order (reported by various researchers in qualitative reasoning (Forbus, 1982) (de Kleer and Brown, 1984) (Williams, 1983)). Timing analysis shows that the reasoner spends the majority of its time in the backtracking algorithms. The term "non-monotonic" reasoning is a misnomer as far as memory space is concerned: the number of justifications grows monotonically as problem solving proceeds.

This paper presents an alternative to using a general choice mechanism such as a TMS. It is the result of carefully analyzing the kind of backtracking actually needed for a class of problem-solving tasks and designing a matching choice mechanism. This mechanism is as general as any TMS for the task, can handle multiple contradictory choices at once, and is extremely efficient. The technique is appropriate for tasks in which: (a) as in a standard TMS, it must be possible to attribute all conclusions to a small set of antecedents they depend on, otherwise no TMS can do much good; (b) the user is interested in many or all of the solutions which achieve the goal (if one is only interested in a single solution, a standard TMS is probably better); (c) few combinations of assumptions are consistent; (d) there are finitely many (but not necessarily bounded) solutions and choices. Additionally, the proposed scheme performs better if: (e) the number of alternatives for each choice is finite and exclusive; however, there is no necessity for the choices, the number of choices, or the alternatives of the choices to be known a priori; (f) there is no single solution which requires infinite time to explore (a standard TMS can often be controlled to avoid such holes). These six requirements hold for many kinds of constraint satisfaction problems, and in particular for qualitative reasoning. The strategy proposed in this paper has been implemented and used in (de Kleer, 1979) and (de Kleer and Brown, 1984) with great success (time spent handling assumptions is in the noise). It is also used as the mechanism for handling disjunction in a constraint language under development.

For the purposes of this paper an extremely simplified model of problem solving suffices. The reasoning system consists of a procedure for performing problem solving, and a data base for recording the state of the problem-solving process. Most of the problem-solving task consists of deriving new inferences from data and previously made inferences. All these are added to the data base.
Sometimes the problem solver must make choices among which there is no preferred option. Each choice may involve substantial additional work before it proves to be contradictory or fruitless. However, a choice must be made for problem solving to proceed. If a choice is subsequently discovered to be incorrect, problem solvers typically backtrack to some earlier state. The subject of this paper is how this process can be made more efficient.

Without additional information, choice cannot be avoided. However, improvements can be made in the preservation of relevant information when a choice is retracted. Suppose the problem solver has produced data-base state S1 using choices {A, B, C} and subsequently explores the implications of choices {A, B, D}. The question is how much of state S1 is valid in the data-base state that would result if the consequences of choices {A, B, D} were explored. This problem can be viewed as an "internal" version of the classic AI frame problem: how much of the description of the problem-solving state is changed when some action is performed.

TECHNIQUES FOR DEALING WITH CHOICE

Consider the task of the problem solver which performs the following sequence. (It must first do one of A or B, then one of C or D, then one of E or F, and then G. Admittedly, any well-designed problem solver would do G first as it doesn't require a choice - but this ordering best brings out the issues I want to address.)

A ∨ B
C ∨ D
E ∨ F
G

Assume the disjunctions are exclusive and the letters represent potentially complex operations. The simplest and most often used strategy is exponential: enumerate all the possibilities and try each one until a solution is found (or all solutions are found):

{A, C, E, G} {A, C, F, G} {A, D, E, G} {A, D, F, G} {B, C, E, G} {B, C, F, G} {B, D, E, G} {B, D, F, G}

Usually many of the combinations are inconsistent, and often these inconsistencies can be detected in small subsets of choices. This suggests a modification of the brute-force enumeration where each option is attempted, but whenever an inconsistency is detected, the problem solver backtracks to the most recent choice. If {A, C} and {D, F} are inconsistent the search order is:

{A} {A, C} {A, D} {A, D, E} {A, D, E, G} {A, D, F} {B} {B, C} {B, C, E} {B, C, E, G} {B, C, F} {B, C, F, G} {B, D} {B, D, E} {B, D, E, G} {B, D, F}

The advantage of this technique is that it explores far fewer complete sets of options than brute-force enumeration. In this example, it explores four complete sets, while enumeration explores eight. This strategy is called chronological backtracking and one of its early applications was in QA4 (Rulifson, Derksen and Waldinger, 1972).
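To make the contrast concrete, here is a minimal sketch (in modern Python, my own illustration rather than anything from the paper) of brute-force enumeration versus chronological backtracking over the A ∨ B, C ∨ D, E ∨ F, G example. The `inconsistent` predicate standing in for the problem solver's contradiction test is hypothetical.

```python
from itertools import product

choice_sets = [("A", "B"), ("C", "D"), ("E", "F"), ("G",)]

# Hypothetical contradiction test: {A, C} and {D, F} are inconsistent.
NOGOODS = [{"A", "C"}, {"D", "F"}]

def inconsistent(partial):
    return any(ng <= set(partial) for ng in NOGOODS)

# Brute-force enumeration: build every complete set, then test it.
def enumerate_all():
    return [set(combo) for combo in product(*choice_sets)
            if not inconsistent(combo)]

# Chronological backtracking: extend one choice at a time and retreat
# to the most recent choice as soon as the partial set is inconsistent.
def chronological(partial=(), depth=0):
    if inconsistent(partial):
        return []                      # backtrack to the last choice made
    if depth == len(choice_sets):
        return [set(partial)]
    solutions = []
    for option in choice_sets[depth]:
        solutions += chronological(partial + (option,), depth + 1)
    return solutions

print(enumerate_all())   # 4 consistent complete sets out of 8 candidates
print(chronological())   # the same 4 sets, with inconsistent prefixes cut early
```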
Chronological backtracking has three serious faults which result in it doing far more work than necessary. Consider the case where {C, G} is inconsistent. Chronological backtracking will search in the following order:

{A, C, E, G} {A, C, F} {A, C, F, G} {A, D} ... {B, C, E, G}

The second two steps are futile. As G is inconsistent with the choice of C from C ∨ D, backtracking to the last choice, i.e., E from E ∨ F, has no effect as it has no influence on the contradiction. When a contradiction is discovered the search should backtrack to a choice which contributed to the contradiction, not to the most recent choice. The second defect is illustrated by the third and last steps: once it is discovered that {C, G} is inconsistent, the problem solver should never explore that possibility again. The final defect of chronological backtracking is that it requires more problem-solving operations than necessary. Suppose that choosing the combination {E, G} results in a great deal of problem-solving effort that is solely dependent on E and G. Chronological backtracking might search the space of assumption sets as:

{B, C, E, G} {B, D} {B, D, E, G}

In doing so it will do any computations involving both E and G twice. It will add the inferences resulting from {E, G} to the data base, then backtrack to {B, D} erasing the inferences, and finally rederive these erased inferences while exploring {B, D, E, G}.

A solution to all three of these defects is enabled by maintaining records of the dependence of each inference on earlier ones. When a contradiction is encountered these dependency records are consulted to determine which choice to backtrack to. Reconsider the example where C is inconsistent with G when such records are available. The dependency records state that G is given, but C is a choice from C ∨ D. Thus the problem solver should backtrack to the choice C ∨ D. In general, the problem solver should immediately backtrack to the most recent choice which influences the contradiction. This technique is called dependency-directed backtracking.

Whenever a contradiction is discovered the dependency records are consulted to determine which choices caused the contradiction, so this choice can be avoided in the future. These are called the nogood sets (Steele, 1979) as they represent choices which are mutually contradictory.

Dependency records also solve the third problem of chronological backtracking (illustrated by working on {E, G} twice). The dependence of b on a is recorded with b; but b is a consequence of a and this is also recorded. Thus, whenever some option is included in the current set, the dependency records can be consulted to determine the consequences of those options. Thus, the consequences of {E, G} need only be determined once. This can be effected quite directly within the data base: entries are marked as temporarily unavailable (i.e., out) if they are not derivable from the set of choices currently being explored.

These techniques are the basis of the TMS strategies of (Doyle, 1979) and (McAllester, 1980). In the more general TMS strategies it is not necessary to specify the overall ordering of the search space (so far we have been presuming some simple-minded enumeration algorithm). They, in effect, choose their own enumeration, but this ordering can be controlled somewhat by specifying which parts of the search space to explore first. It is important to note that all these strategies are equivalent in the sets of options that are explored. The most sophisticated TMS will find as many consistent solutions as pure enumeration. The goal is to enhance efficiency without sacrificing completeness.
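A minimal illustration of the two ideas just described (my own sketch, not the paper's code): when a contradiction is found, record the choices actually responsible as a nogood set, and thereafter refuse to explore any candidate set containing a recorded nogood. The `blame` function standing in for the dependency records is hypothetical.

```python
# Assumed contradiction detector: returns the subset of choices actually
# responsible for the contradiction (per the dependency records), or None.
def blame(candidate):
    for culprit in ({"C", "G"}, {"A", "C"}):   # hypothetical dependencies
        if culprit <= candidate:
            return frozenset(culprit)
    return None

nogoods = set()

def consistent(candidate):
    # Never re-explore a superset of a known nogood.
    if any(ng <= candidate for ng in nogoods):
        return False
    culprits = blame(candidate)
    if culprits is not None:
        nogoods.add(culprits)   # remember *why* it failed, not just that it failed
        return False
    return True

for cand in [{"A", "C", "E", "G"}, {"A", "C", "F", "G"}, {"B", "C", "E", "G"}]:
    print(sorted(cand), consistent(cand))
# After the first failure the nogood {C, G} is recorded, so the later
# candidates containing both C and G are rejected without re-deriving anything.
```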
PROBLEMS WITH USING TMS

Truth maintenance systems are the best general-purpose mechanism for dealing with choice. However, they have certain limitations which in appropriate circumstances can be avoided.

The single state problem. Given a set of choices which admits multiple solutions, the TMS algorithms only allow one solution to be considered at a time. This makes it extremely difficult to compare two equally plausible solutions. For example, suppose A, D, E, G and B, C, E, G are both solutions. It is impossible to examine both of these states simultaneously. However, this is often exactly what one wants to do in problem solving - differential diagnosis to determine the best solution.

Overzealous contradiction avoidance. Suppose A and B are contradictory. In this case, the TMS will guarantee that if A is believed, B will not be, and if B is believed, A will not be. This is not necessarily the best problem-solving tactic. All a contradiction between A and B indicates is that any inference dependent on both A and B is of no value. But it is still important to draw inferences from A and B independently. A discovery of a contradiction between A and B will result in one of A or B being abandoned until another contradiction is encountered.

Switching states is difficult. Suppose that the problem solver decides to temporarily change a choice (i.e., not in response to a contradiction). There is no convenient mechanism to facilitate this. The only direct way to change the current choice set is to introduce a contradiction, but once added it cannot be removed, so the knowledge state of the problem solver is irreconcilably altered. Suppose the change of state was somehow achieved. There is no way to specify the target state. All a TMS can guarantee is that it is contradiction-free. So, in particular, there is no way to go back to a previous state. The reason for these oddities is that a TMS has no useful notion of global state. All a TMS guarantees is that each justification is satisfied in some way. One inelegant mechanism that is sometimes utilized to manipulate states is to take snapshots of the status and justifications of each assertion and then later reset the entire database from the snapshot. This approach is antithetical to the spirit of TMS for it reintroduces chronological backtracking. Information garnered within one snapshot is not readily transferred to another.

The dominance of justifications. A TMS solely uses justifications, not assumptions. Furthermore, what (Doyle, 1979) calls an assumption is context-dependent: an assumption is any node whose current supporting justification depends on some other node being out. Thus, as problem solving proceeds the underlying support justifications, and hence the assumptions underlying assertions, change. This is particularly problematic for problem solvers which must often consult the assumptions and justifications for assertions.

The machinery is cumbersome. The TMS algorithm, partly because it is very general, often spends a surprising amount of time finding a solution that satisfies all the justifications. A particularly expensive operation can result from the dependency-directed backtracking in response to a contradiction. The backtracking may require extensive search, and the resolution of the contradiction often results in other contradictions. Eventually, all contradictions are resolved, but only after much backtracking. During this time the status of some assertion may have changed between in (believed) and out (not believed) many times.
Unouting. Suppose the option set {B, C, E, G} is explored, a contradiction is discovered, and then some time later the option set {B, D, E, G} is explored. The use of dependency records assures that the inferences derived from {B, E, G} in the first set {B, C, E, G} will carry through to the second set {B, D, E, G} without additional computation. The situation is unfortunately not so simple. Suppose that in this example the set {C, E, G} is contradictory but that this was not discovered until after extensive problem-solving effort. Once the contradiction is discovered there is no longer any point to working on the current state and therefore dependency-directed backtracking begins. Work on the situation {B, C, E, G} was never permitted to go to conclusion. In particular, not all inferences based on {E, G} may have been made. When the set {B, D, E, G} is explored the earlier derived consequences of {E, G} can be included, but problem solving must continue for {E, G}. The difficult task, for which a TMS is of no aid, is how to fill the "gaps" of the consequences of {E, G} without redoing the entire computation. There are four styles of solutions to this task, none completely satisfactory. (1) Even though a contradiction occurs during analysis of {E, G}, this computation could be allowed to go on. The difficulty here is that a great deal of effort may be spent working on a set of choices that may be irrelevant to an overall solution. (2) Another technique is to store a snapshot of the problem solver's state (its pending task queue) which can be reactivated at a later time. (3) It is also possible to restart the computation from {E, G}, taking advantage of the previous result by looking at the consequences and examining which ones are missing. (4) The easiest technique is just to restart the computation from {E, G} without making any effort to see which consequences should be unouted. All expensive problem-solving steps are memoized so no time-consuming steps are repeated. For example, (de Kleer and Sussman, 1980) uses this technique to cache all symbolic GCD computations.

A GENERAL SOLUTION

My solution is to include with each assertion, in addition to its justifications, the set of choices (assumptions) under which it holds. For example, each assertion derived from assumption A is labeled with the set {A}, and each assertion derived from both assumptions A and B is labeled with the set {A, B}. Thus if x = 1 under assumption A and x + y = 0 under assumption B, then we deduce y = -1 under assumption set {A, B}. (For clarity, I'll call the combination of an assertion with its assumptions and justifications a value and notate it: <assertion, {assumptions}, justification(s)>. Thus, the preceding inference is written as: if <x = 1, {A},> and <x + y = 0, {B},>, then <y = -1, {A, B},>.)

Unlike a TMS where the same node can be brought in and out an arbitrary number of times, a value is removed from the data base only if its assumption set is found to be contradictory. For example, the database can contain both <x = 1, {A},> and <x = 0, {B},> without difficulty. x = 1 contradicts x = 0, but this provides no information about x = 1 or x = 0 individually. However, these two values imply that the assumption set {A, B} is contradictory. Thus, if the database contained <y = -1, {A, B},> it would be removed because the set {A, B} is contradictory. (It is not strictly necessary to remove it because, unlike a conventional logical system, a contradiction does not imply everything; for some tasks, as pointed out in (Martins and Shapiro, 1983), there is some utility in deriving further contradictory values.) As this scheme is primarily based on assumptions, not justifications, I term it assumption-based as opposed to justification-based TMS systems.
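The value notation can be made concrete with a small sketch (mine, written under the paper's definitions rather than taken from it): a value pairs an assertion with the assumption set it depends on, a nogood is recorded when two values assert incompatible facts, and values are erased only when their assumption set contains a nogood.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    assertion: str           # e.g. "x = 1"
    assumptions: frozenset   # e.g. frozenset({"A"})

database = {
    Value("x = 1", frozenset({"A"})),
    Value("x = 0", frozenset({"B"})),
    Value("y = -1", frozenset({"A", "B"})),   # derived from x = 1 and x + y = 0
}
nogoods = set()

def note_contradiction(v1, v2):
    """v1 and v2 assert incompatible facts; their combined assumptions are to blame."""
    nogoods.add(v1.assumptions | v2.assumptions)

def purge():
    """Erase exactly the values whose assumption sets contain some nogood."""
    global database
    database = {v for v in database
                if not any(ng <= v.assumptions for ng in nogoods)}

note_contradiction(Value("x = 1", frozenset({"A"})),
                   Value("x = 0", frozenset({"B"})))
purge()
print(sorted(v.assertion for v in database))
# ['x = 0', 'x = 1'] -- both survive individually; only <y = -1, {A, B}> is removed.
```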
Abstractly, one possible (but extremely inefficient) mode of interaction between the problem solver and the data base is as follows. A list is maintained of every assumption set discovered to be contradictory. Whenever the problem solver discovers two values to be contradictory, the combined assumption set is placed on this list, and every value based on it or any of its supersets is erased from the data base. Suppose every problem-solving step can be formulated as: from a and b determine f(a, b), where f takes some problem-solving work. Then the interaction with the data base of values should be: for all values of the form α : <a, A_a,> and β : <b, A_b,>, add the value <f(a, b), A_a ∪ A_b, (α, β)> unless the assumption set A_a ∪ A_b is a superset of some known contradictory set of assumptions. Note that this scheme does not have or require any notion of context. The equivalent notion in the assumption-based scheme is just a set of assumptions: implicitly, a set of assumptions selects all those values whose assumption set is a subset of the context's assumption set.

It is not necessary to be this extreme. A more sophisticated mode of interaction would be to explore only a part of the solution space, i.e., only perform inferences using those values whose assumptions are a subset of the current set of interesting assumptions. Then, when a contradiction is discovered, the set of interesting assumptions is changed but nothing is done to the data base. These are just two of many possible modes of interaction between the problem solver and the data base.
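The mode of interaction just described can be sketched as a small combination loop (an illustrative reconstruction, not the paper's implementation): every pair of values feeding a problem-solving step f is combined, the assumption sets are unioned, and the result is added unless that union contains a known nogood. The `solve_for_y` step is a hypothetical stand-in for f.

```python
from itertools import combinations

def combine_all(values, f, nogoods):
    """values: iterable of (assertion, assumption_frozenset) pairs.
    f: the problem-solving step; returns a new assertion or None if inapplicable."""
    new_values = []
    for (a, as_a), (b, as_b) in combinations(values, 2):
        union = as_a | as_b
        # Skip the work entirely if the union is subsumed by a nogood.
        if any(ng <= union for ng in nogoods):
            continue
        result = f(a, b)
        if result is not None:
            new_values.append((result, union))
    return new_values

# Hypothetical step: from "x = 1" and "x + y = 0" conclude "y = -1".
def solve_for_y(a, b):
    return "y = -1" if {a, b} == {"x = 1", "x + y = 0"} else None

values = [("x = 1", frozenset({"A"})), ("x + y = 0", frozenset({"B"}))]
print(combine_all(values, solve_for_y, nogoods=set()))
# one new value: ('y = -1', {'A', 'B'})
```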
Regardless of the mode of interaction, the basic assumption-based solution addresses the problems discussed earlier:

The single state problem. The assumption-based scheme allows arbitrarily many contradictory solutions to coexist. Thus, it is simple to compare two solutions.

Overzealous contradiction avoidance. The presence of two contradictory assertions does not terminate work on the overall knowledge state; rather, only those assertions are removed which depend on the two contradictory assertions. This is exactly the result desired from a contradiction - no more, no less.

Switching states is difficult. Changing state is now trivial or irrelevant. A state is completely specified by a set of assumptions. Problem solving can be restricted to a current context (i.e., a set of assumptions) or all states can be explored simultaneously. In either case, values obtained in one state are "automatically" transferred to another. For example, if <x = 1, {E, G},> is deduced while exploring {B, C, E, G}, then x = 1 will still be present while exploring {B, D, E, G}.

The dominance of justifications. As assumptions, not justifications, are the dominant representational mode, it is easy to compare the sets of assumptions underlying assertions. For example, it is easy to find the assertion with the most assumptions or the least; it is easy to determine whether the presence of an assertion implies the presence of another (a implies b if the assumptions of b are a subset of the assumptions of a). Also, the justifications underlying a value never change.

The machinery is cumbersome. The underlying mechanism is simple. There is no backtracking of any kind - let alone dependency-directed backtracking. The assumptions underlying a contradiction are directly identifiable. A value once added is never removed unless it is removed permanently; thus it is not necessary to explicitly mark entries as believed or disbelieved.

Unouting. The unouting problem is partially resolved. Consider the analog to the TMS problem of preserving assertions while moving from state {B, C, E, G} to {B, D, E, G} in response to a contradiction. In the simplest assumption-based scheme, all possibilities are explored simultaneously. Hence, a contradiction within {B, C, E, G} merely implies that any exploration of that state ceases, i.e., any inferences involving sets which contain {B, C, E, G} as a subset are avoided. Work on state {B, D, E, G} continues as if the contradiction never occurred. Assertions derived from {E, G} are automatically part of every superset, hence are also part of {B, D, E, G} (and the contradictory {B, C, E, G} for that matter).

Unfortunately, the assumption-based approach fails to address all of the unouting problem. Suppose that under the assumption set {E, G} the problem solver has determined that x = 1 and has consequently gone through a difficult derivation from it, and that x = 1 is also deduced under assumptions {G, H} (i.e., an independent derivation). Not all consequences of x = 1 using {E, G} carry over to {G, H}. In addition there are derivations from x = 1 possible under {G, H} but not under {E, G}. Consider a simplified example. Suppose the problem solver deduced α : <x = 1, {A},>, β : <x + y = 0, {B},>, γ : <z = 1, {A, B},>, and λ : <z = 0, {A, B},>. In this situation y = -1 is not derivable from α and β as the set {A, B} is contradictory. However, if ε : <x = 1, {C},> is discovered later, <y = -1, {C, B}, (β, ε)> is derivable. Thus, <x = 1, {C},> has the consequence y = -1, but <x = 1, {A},> does not. This unouting problem exists whether assumption-based or justification-based techniques are used. There is an inelegant fix to this problem which we have implicitly adopted earlier in this discussion: two values are considered the same only if both their assertion and their assumptions are the same. This is contrary to the way TMSs are usually used (two values are the same if their assertion is the same). Thus, the unouting problem produced by simply changing statuses is completely avoided, but unouting problems produced by adding significantly different justifications are still with us. More research is required to find more elegant solutions. Unouting is an open problem for both approaches; it just shows up in a different place in the assumption-based approach than in the justification-based approach. Depending on the characteristics of the problem-solving task, a system implementor must choose which inadequacy he can live with.

REDUNDANCY AND INCOHERENCY

The problem solver will invariably discover multiple derivations for some assertions. If the only goal is to identify the statuses of assertions, the problem solver should throw out all values whose assumptions are equal to or a superset of the assumptions of some alternate derivation. For many tasks the derivations of the assertions are as important as their statuses. For example, causal reasoning carefully analyzes the derivations for quantities to determine device functioning. For such tasks the problem solver cannot be so cavalier in throwing away derivations.
Strictly speaking, if the goal is to discover all possible derivations as well as all possible assertions, the problem solver should never throw away any derivation. Unfortunately, there is no guarantee that the rules of inference the problem solver is using are logically independent. Redundancy in the inference rules results in syntactically different but essentially identical derivations. This problem of logical independence is outside the scope of the TMS, but is one every problem solver which examines derivations must cope with. (A solution that has worked in practice: suppose A and B are two derivations for the same assertion, A derived under assumption set a and B under assumption set b. If a is a proper subset of b, B should be discarded. If a and b are the same, compute all the assertions A and B depend on; call these α and β. If α is a proper subset of β, B should be discarded. Otherwise, both derivations should be kept and used.)

Another problem that arises if derivations are important is that of incoherency. If the data base contained α : <x + x = y, {},>, β : <x = 1, {A},>, and γ : <x = 1, {A},>, three values would be deduced: <y = 2, {A}, (α, β, β)>, <y = 2, {A}, (α, γ, γ)>, and <y = 2, {A}, (α, β, γ)>. This last value is incoherent in that its derivation uses x twice with a different derivation for x each time. Thus it should be discarded.

One of the advantages of the assumption-based approach is that the notion of a global consistent state does not appear. With the inference algorithm suggested in this paper it is not even known how many globally consistent states, if any, there are. However, at the termination of the problem-solving effort some notion of global consistent state is often required. We call the global choice sets interpretations and the process of computing them interpretation construction. Most of the complexity of interpretation construction results from the goal of maintaining global coherence and is outside the scope of this paper. (de Kleer, 1979) and (de Kleer and Brown, 1984) use a very simple technique to manage the construction of interpretations. All non-contradictory inferences are permitted to proceed unchecked. After the data base reaches quiescence a second process is invoked to construct all possible globally consistent states. It can be viewed as a straightforward set-manipulation algorithm. Its task is to construct maximal sets of assumptions, such that the addition of any assumption results in selecting a contradiction or an incompatibility, and the removal of any assumption removes all values for some assertion.
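As a rough illustration of interpretation construction (my own sketch of the set-manipulation step, not de Kleer's code, and simplified so that every choice must be made): build every combination that picks one alternative per choice set, discard those containing a nogood, and keep only the maximal survivors.

```python
from itertools import product

def interpretations(choice_sets, nogoods):
    """choice_sets: list of tuples of mutually exclusive assumptions.
    Returns the maximal assumption sets that contain no nogood."""
    candidates = []
    for combo in product(*choice_sets):       # one alternative per choice set
        s = frozenset(combo)
        if not any(ng <= s for ng in nogoods):
            candidates.append(s)
    # Keep only maximal sets: none strictly contained in another survivor.
    return [s for s in candidates
            if not any(s < other for other in candidates)]

choices = [("A", "B"), ("C", "D"), ("E", "F")]
nogoods = {frozenset({"A", "C"}), frozenset({"D", "F"})}
for interp in interpretations(choices, nogoods):
    print(sorted(interp))
# ['A', 'D', 'E'], ['B', 'C', 'E'], ['B', 'C', 'F'], ['B', 'D', 'E']
```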
More research is required to determine which implementation techniques are best for assumption-based approaches. Here I present and evaluate some possible implementation options. The basic data structure is the set, and its representation can be optimized (e.g., as cdr-coded lists, arrays or bit-vectors). The same set can be arrived at by unioning many different combinations of other sets, so it is important to (a) uniquize sets, and (b) quickly determine whether a given set has been created earlier. These goals are achieved with a canonical form for sets and a hash table for these canonicalized sets. Thus, for example, once a set of assumptions is determined to be contradictory its unique structure can be marked as such.

The most common operations of the assumption-based algorithms are set operations; therefore they can be optimized by creating an explicit subset/superset lattice such that subset/superset computations can proceed quickly (this is only possible, of course, if the sets are uniquized). Like the justification-based approaches, assumption-based approaches must somehow record and access the nogood sets. The simplest technique is to maintain a list of all the contradictions, and whenever a new set is created by unioning two, it is checked to determine whether the new set is a superset of any known nogood set. As sets are uniquized this operation need only be performed once per set. Given a lattice data structure, contradiction manipulation can be surprisingly efficient. As every set is entered into the lattice structure as it is created, the fact that it is a superset of some nogood set is computed by a simple intersection of the nogood sets with the subsets of the new set (which takes linear time for ordered data structures). Furthermore, when a new contradiction is discovered it is a simple matter to mark all its supersets as contradictory and stop all problem solving on them. However, unless the problem is extremely large (i.e., tens of thousands of assumptions on an LM-2) these advantages do not outweigh the costs of maintaining the data structures in the first place. In our experience the extra effort incurred in maintaining this data structure does not turn out to be worth the cost (storing sets as ordered cdr-coded lists or bit-vectors speeds up subset computations sufficiently).

(Martins and Shapiro, 1983) uses the technique of marking each assertion with the supersets of its assumption set which are nogood. ((Martins and Shapiro, 1983) calls these supersets the restriction sets and the assumption set the origin set; in his actual implementation he subtracts the origin set out of each restriction set, but this is conceptually unnecessary.) This has the advantage that it is extremely simple to determine whether a newly created set is contradictory, as only these supersets need be consulted, not the entire set of contradictions. However, whenever a contradiction is discovered an extensive computation must take place to determine whether the restriction sets of any assertion must be updated (this is roughly equivalent to entering a set into a lattice).

Although the ordering of the inferences is irrelevant to completeness, it has a significant influence on efficiency. For efficiency, the problem solver should work on values with fewer assumptions first. The overall efficiency of the problem solving is roughly proportional to the number of assumptions and inferences (i.e., the number of constructed values). Fortunately, with the computational techniques outlined in this section, performance degrades very slowly with the number of assumptions. However, each inference involves a separate problem-solving step, so, roughly speaking, efficiency is linearly related to the number of inferences. This provides a strong motivation for reducing the number of inferences. There are two classes of inferences which are guaranteed to be futile: values whose assumption sets are later discovered to be contradictory, and values which are superseded by other values with identical assertions whose assumptions are a subset of the original assumptions. Both of these inferences are avoided by working on values with smaller assumption sets first. One way to achieve this is to introduce new assumptions as late as possible; this ensures that any values following from the new assumption will not be superseded by values with subset assumption sets and that no subsets are contradictory.
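One way to realize the canonicalized-set and nogood bookkeeping described in this section (a sketch under my own naming, not the paper's Lisp data structures): intern every assumption set in a table keyed by a canonical form, and cache the "contradictory" mark on the interned record so the superset test is paid once per distinct set.

```python
class SetTable:
    """Interns assumption sets so each distinct set has exactly one record."""
    def __init__(self):
        self._records = {}   # canonical frozenset -> record dict
        self._nogoods = []

    def intern(self, assumptions):
        key = frozenset(assumptions)         # canonical form
        rec = self._records.get(key)
        if rec is None:
            rec = {"set": key,
                   "contradictory": any(ng <= key for ng in self._nogoods)}
            self._records[key] = rec         # superset check runs once per new set
        return rec

    def add_nogood(self, assumptions):
        ng = frozenset(assumptions)
        self._nogoods.append(ng)
        for rec in self._records.values():   # mark all existing supersets
            if ng <= rec["set"]:
                rec["contradictory"] = True

table = SetTable()
table.add_nogood({"A", "B"})
print(table.intern({"A", "B", "C"})["contradictory"])   # True
print(table.intern({"A", "C"})["contradictory"])         # False
```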
The exclusive-or between choices produces many contradictions which tend to clutter the contradiction-recording mechanism. For example, the choice A ∨ B ∨ C introduces the four nogood sets {A, B}, {A, C}, {B, C} and {A, B, C}. A significant increase in efficiency can be obtained by marking each individual choice with the choice set it is a member of. Thus, two different choices of the same set can be trivially checked to see whether their combination results in a contradiction. Two choices are compatible if they are not members of the same choice set. This approach is used in (de Kleer, 1979) and (de Kleer and Brown, 1984).

RELATED WORK AND SAMPLE IMPLEMENTATIONS

There are many applications for which the architectural proposal of this paper is applicable. LOCAL (de Kleer, 1976) is a program for troubleshooting electronic circuits which was incorporated in SOPHIE III (Brown, Burton and de Kleer, 1983). It uses propagation of constraints to make predictions about device behavior from component models and circuit measurements. As the circuit is faulted, some component is not operating as intended. Thus, at some point, as the correct component model does not describe the actual faulty component, the predictions will become inconsistent. The assumptions are that individual components are functioning correctly. A contradiction implies that some underlying assumption is violated, hence the fault is localized to a particular component set (i.e., the nogood set). The best measurement to make next is the one that provides maximal information about the validity of the yet unverified assumptions. This program requires that the assumptions of an inference be explicitly available and that multiple contradictory propagations be simultaneously present in the data base. Hence, for this task the assumption-based approach is better.

QUAL (de Kleer, 1979) and ENVISION (de Kleer and Brown, 1984) produce causal accounts for device behavior. QUAL can determine the function of a circuit solely from its schematic. Qualitative analysis is inherently ambiguous, and thus multiple solutions are produced. However, for any particular situation a device has only one function. QUAL selects the correct one by explicitly comparing different solutions - something that is only possible using assumption-based schemes.

(Martins and Shapiro, 1983), (McDermott, 1983) and (Barton, 1983) all attempt to unify assumption-based and justification-based approaches. Each of these is powerful enough to formulate the scheme proposed in this paper. However, for many tasks, the complexities and inefficiencies introduced by a general scheme are unnecessary. No matter how making choices is formulated, it is important to first identify the essential problem-solving work it is providing - the topic of this paper.

MBR (Multiple Belief Reasoner) (Martins and Shapiro, 1983) is a general reasoning system which allows multiple, including contradictory and hypothetical, beliefs to be represented simultaneously. It is based on a relevance logic which explicitly takes into account the assumptions underlying wffs.
In this system multiple agents can interact with the same data base, each individually possessing consistent beliefs, but beliefs that may well contradict beliefs that other agents have entered into the data base.

McDermott (McDermott, 1983) has proposed a very generalized and relatively complicated scheme which unifies assumption-based and justification-based techniques. It uses constraint satisfaction among justifications and the assumptions marking contexts.

XRup (Barton, 1983), an equality-based reasoning system, uses an assumption-based context mechanism instead of the justification-based framework of Rup (McAllester, 1982). As a consequence it has many of the advantages of the assumption-based approach, e.g., easy switching between contexts. Interestingly, XRup's equality mechanism is also used to construct equivalence classes of assumptions, and as a consequence it is possible to identify syntactically different but essentially equivalent assumption sets. With the scheme presented in this paper, there is no necessity for contexts and their associated overhead.

The inefficiency of backtracking for constraint satisfaction problems was recognized quite early. (Mackworth, 1977) summarizes the difficulties and proposes a number of efficient techniques. Although not formulated in terms of assertions, justifications, and assumptions, his technique explores the solution space nearly as efficiently as TMS-like schemes.

ACKNOWLEDGMENTS

I thank Daniel Bobrow and Brian Williams who helped sort out many of the technical issues and forced me to clean up the implementation.

BIBLIOGRAPHY

[1] Barton, G.E., "A Multiple-Context Equality-based Reasoning System," Artificial Intelligence Laboratory, TR-715, Cambridge: M.I.T., 1983.
[2] Brown, J.S., R. Burton and J. de Kleer, "Pedagogical, natural language and knowledge engineering techniques in SOPHIE I, II and III," in Intelligent Tutoring Systems, edited by D. Sleeman and J.S. Brown, Academic Press, 1983.
[3] de Kleer, J. and J.S. Brown, "A Qualitative Physics Based on Confluences," to appear in Artificial Intelligence.
[4] de Kleer, J., "Causal and Teleological Reasoning in Circuit Recognition," Artificial Intelligence Laboratory, TR-529, Cambridge: M.I.T., 1979.
[5] de Kleer, J. and G.J. Sussman, "Propagation of Constraints Applied to Circuit Synthesis," Circuit Theory and Applications, Vol. 8, 1980.
[6] de Kleer, J., "Local Methods of Localizing Faults in Electronic Circuits," Artificial Intelligence Laboratory, AIM-394, Cambridge: M.I.T., 1976.
[7] Doyle, J., "A Truth Maintenance System," Artificial Intelligence, Vol. 12, No. 3, 1979.
[8] Forbus, K.D., "Qualitative Process Theory," Artificial Intelligence Laboratory, AIM-664, Cambridge: M.I.T., 1982.
[9] Mackworth, A.K., "Consistency in Networks of Relations," Artificial Intelligence, Vol. 8, No. 1, 1977.
[10] Martins, J.P. and S.C. Shapiro, "Reasoning in Multiple Belief Spaces," IJCAI-83, 1983 (see also: Department of Computer Science, Technical Report No. 203, Buffalo, New York: State University of New York, 1983).
[11] McAllester, D., "An Outlook on Truth Maintenance," Artificial Intelligence Laboratory, AIM-551, Cambridge: M.I.T., 1980.
[12] McAllester, D., "Reasoning Utility Package User's Manual," Artificial Intelligence Laboratory, AIM-667, Cambridge: M.I.T., 1982.
[13] McDermott, D., "Contexts and Data Dependencies: A Synthesis," 1983.
[14] Rulifson, J.F., J.A. Derksen, and R.J. Waldinger, "QA4: A Procedural Calculus for Intuitive Reasoning," Artificial Intelligence Center, Technical Note 73, Menlo Park: S.R.I., 1972.
[15] Steele, G., "The Definition and Implementation of a Computer Programming Language Based on Constraints," Artificial Intelligence Laboratory, TR-595, Cambridge: M.I.T., 1979.
[16] Williams, B.C., "Qualitative Analysis of MOS Circuits," M.I.T. AI Laboratory, TR-567, 1983.
Maintaining Diversity in Genetic Search

Michael L. Mauldin
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

Abstract

Genetic adaptive algorithms provide an efficient way to search large function spaces, and are increasingly being used in learning systems. One problem plaguing genetic learning algorithms is premature convergence, or convergence of the pool of active structures to a sub-optimal point in the space being searched. An improvement to the standard genetic adaptive algorithm is presented which guarantees diversity of the gene pool throughout the search. Maintaining genetic diversity is shown to improve off-line (or best) performance of these algorithms at the expense of poorer on-line (or average) performance, and to retard or prevent premature convergence.

1. Introduction

Genetic adaptive algorithms (GA's) are one solution to the black-box learning problem - given a domain of input structures and a procedure for determining the value of an objective function on those structures, find a structure which maximizes (or minimizes) the objective function. GA's are based on the observation that natural systems of evolving species are very efficient at adapting to changing environments. By simulating evolutionary processes, GA's can harness the power of population genetics to provide autonomous learning components for artificial systems. Genetic algorithms have been applied to widely varying problems in learning and adaptive control such as character recognition [6], state space learning [11], pattern tracking [10], discovery [7], maze running and poker betting [12], and gas pipeline operation [5].

2. Applicability of GA's

The most attractive feature of GA's is the flexibility of the technique. As long as there is an objective performance measure, genetic search through the function space will find better and better solutions. No initial knowledge of the domain is required, and as long as the objective function is not completely random, the underlying structure of the problem assures that GA's will outperform random search. Of course, some domains have objective functions which are amenable to more specialized and more efficient search techniques. For example, where the objective function is quadratic, special numerical analysis techniques can quickly find the optimum point in the space. If the function is differentiable, gradient search works equally fast. If the function is at least unimodal, hill climbing search is very effective. In complicated domains, though, these specialized techniques break down quickly. The conditions for which GA's perform well are much less rigid, and empirical studies have shown that on complicated domains, GA's outperform both specialized and random searches [2]. Bethke's thesis characterized the set of functions which are genetically optimizable in terms of the Walsh transforms of the function, and shows that the coefficients involved can be estimated during the search to determine whether the function can be optimized genetically [1].

3. The Basic Genetic Algorithm

The following steps are common to all genetic adaptive algorithms. They are motivated by the study of population genetics, and most of the same intuitions and terms apply.
1. Choose a representation language for describing the possible behaviors of the organisms you wish to study, and then encode this language in strings of binary digits (some genetic algorithms use other alphabets, but bit strings or bit strings with DON'T CARE symbols are the most common internal representations). Each string represents one point in the function space being optimized.

2. Choose an objective (or payoff) function which assigns a scalar payoff to any particular bit string, using the mapping you chose in step 1. This can be the cost of a solution to an economic problem, the final score of a game playing program, or some other measure of performance. This score is usually called a fitness rating.

3. Generate an initial population of strings (often at random, but the system can be given a priori knowledge by including some individual strings already known to perform well).

4. Evaluate each string using the payoff function to assign it a non-negative fitness. Better strings receive higher fitness ratings (when using GA's to minimize an objective function, a transformation is applied to the result to derive an increasing function).

5. Repeatedly generate a new population. Select one or more parent strings from the population using weighted probabilities so that the chance of being selected as a parent is proportional to the fitness of the string. Then apply one or more genetic operators to generate one or more new strings. There are many possible operators, but the two basic ones are crossover and mutation.

6. Select an equal number of strings in the current population to "die" and replace these with the newly generated strings. Some GA's generate only one new string at a time; others generate a whole new population at each step.

7. Now evaluate each of the new strings to assign each of them a fitness value, and go back to step 5.

The best source of information about GA's is Holland's Adaptation in Natural and Artificial Systems [6]. Holland uses terms borrowed from Mendelian genetics to describe the process:

- Each position in the string is called a gene.
- The possible values of each gene are called alleles.
- A particular string is called a genotype.
- The population of strings is also called the gene pool.
- The organism or behavior pattern specified by a genotype is called a phenotype.
- If the organism represented is a function with one or more inputs, these inputs are called detectors.

Each genotype represents a particular point in the function space being optimized, and the goal of the search is to find points in the space with the largest objective values (better performance). Although this formulation of genetic algorithms is very similar to earlier evolutionary models (e.g., [3], [4]), there is one subtle difference: the introduction of the crossover operator. Early programs were usually "random generation with preservation of the best." This corresponds to a GA where the only genetic operator used is mutation. But mutation does not take advantage of the knowledge already present in the gene pool. The crossover operator mixes building blocks (sets of alleles) which have been generated during the course of the search; this approach exploits regularities in the environment to produce new genotypes which are more plausible than mutation alone would provide. Holland uses the term schemata to describe these building blocks, and he showed that each schema tends to increase or decrease its presence in the gene pool in proportion to its past performance [6]. Since this happens for each subset of the space simultaneously, there is an immense amount of implicit parallelism in the search for better genotypes [12].
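Steps 1-7 can be condensed into a short sketch (an illustrative Python rendering, not Mauldin's program); the bit-string length, pool size, mutation rate, and the toy payoff function are all hypothetical choices made only for the example.

```python
import random

BITS, POOL, TRIALS = 16, 20, 500

def fitness(genotype):
    # Toy payoff: the count of 1 bits (a real payoff would decode the string
    # and evaluate the objective function at that point).
    return sum(genotype)

def crossover(mom, dad):
    cut = random.randrange(1, BITS)
    return mom[:cut] + dad[cut:]

def mutate(genotype, rate=0.02):
    return [b ^ 1 if random.random() < rate else b for b in genotype]

pool = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POOL)]
for _ in range(TRIALS):
    weights = [fitness(g) + 1 for g in pool]        # fitness-proportional selection
    mom, dad = random.choices(pool, weights=weights, k=2)
    child = mutate(crossover(mom, dad))
    worst = min(range(POOL), key=lambda i: fitness(pool[i]))
    pool[worst] = child                              # replacement of worst
print(max(fitness(g) for g in pool))
```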
4. Examples of GA's

Smith's maze and poker betting programs and Goldberg's gas pipeline program (mentioned in Section 1) all use sets of production rules as an internal representation. Production rules are encoded as bit strings; the left-hand side of each rule matches one or more input detectors, and the right-hand side emits a binary message which encodes the desired action. Holland's classifier systems [8] are also rule based, but the messages emitted by the right-hand side of each rule are fed back to a global message list. Messages on this list activate more production rules to give the system the ability to represent feed-back loops and state memory.

The poker betting problem was based on Waterman's work on draw poker betting [13]. For this problem, seven input detectors were specified: (1) the value of the hand, (2) the size of the pot, (3) the size of the last bet, (4) the likelihood that the opponent is bluffing, (5) the "pot odds," (6) the number of cards drawn by the opponent, and (7) a measure of the opponent's playing style. The right-hand side of each rule can specify one of four actions: drop, call, bet low, or bet high. Smith's system learned enough about poker betting over the course of 4000 trials to generate bets in accordance with accepted poker axioms 82% of the time. By contrast, Waterman's system achieved 86% agreement only with the help of an additional decision matrix not available to the genetic system [12].

5. Function Optimization as Sandbox

Much work on genetic algorithms has focused on function optimization. By using various test functions as environments, the effects of domain features such as linearity, differentiability, continuity, modality, and dimensionality can be studied in isolation [2]. When optimizing functions which map points in R^n to R, the following representation is commonly used: each point in the domain is an n-tuple of real numbers, each real number is represented as a fixed binary number, and the binary representations are concatenated together to form a bit string. The fitness of the string is the value of the function at the original point.

Two different performance measures are commonly used to analyze the effectiveness of function optimizers: on-line and off-line performance. On-line performance is simply the mean of all trials, while off-line performance is the mean of the best previous trial at each time (or trial) t. On-line performance is an appropriate measure for a task such as gambling or economics where learning must be done while performing the task at hand. Off-line performance only considers the best behavior of the system, and is more appropriate for systems which either train to solve a problem, or systems which have a model of the domain. More formally, if f_e(t) denotes the value of trial t on function f_e, and the goal is to minimize each function, we have the following definitions:

    On-line performance:   x_e(t)  = (1/t) · Σ_{i=1..t} f_e(i)
    Off-line performance:  x*_e(t) = (1/t) · Σ_{i=1..t} f*_e(i)
    Best so far:           f*_e(i) = min_{j=1..i} f_e(j)
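For concreteness, a small helper (not from the paper) computing both measures from a sequence of trial values for a minimization problem:

```python
def performance(trial_values):
    """trial_values: f_e(1), f_e(2), ... for one environment; smaller is better."""
    online, offline, best_so_far = [], [], float("inf")
    running_sum = running_best_sum = 0.0
    for t, value in enumerate(trial_values, start=1):
        best_so_far = min(best_so_far, value)        # f*_e(t)
        running_sum += value
        running_best_sum += best_so_far
        online.append(running_sum / t)               # x_e(t)
        offline.append(running_best_sum / t)         # x*_e(t)
    return online, offline

online, offline = performance([5.0, 3.0, 4.0, 1.0])
print(online[-1], offline[-1])   # 3.25 and (5 + 3 + 3 + 1) / 4 = 3.0
```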
6. Premature Convergence

In genetic search, the process converges when the elements of the gene pool are identical, or nearly so. Once this occurs, the crossover operator ceases to produce new individuals, and the algorithm allocates all of its trials in a very small subset of the space. Unfortunately, this often occurs before the true optimum has been found; this behavior is called premature convergence. The mutation operator provides a mechanism for reintroducing lost alleles, but it does so at the cost of slowing down the learning process.

DeJong suggests adding a crowding factor which affects the replacement algorithm. Rather than merely replacing a single individual, select a small subset of the gene pool and replace the string most similar to the newly generated string. This method has the advantage that it does not introduce wild mutations, but unfortunately it does not guarantee that alleles won't be lost; it merely reduces the probability of loss, delaying but not preventing premature convergence.

7. Diversity

The intuitive reason for premature convergence is that the individuals in the gene pool are too "alike." This realization suggests that one method for preventing this convergence is to assure that different members of the gene pool are different. Since each structure is represented as a 248 bit string, it suffices to check whenever a new structure is added to the pool that it differs from every other structure by at least one bit. If the new individual is identical to another member of the gene pool, randomly change one bit, and repeat until the result differs from every other member of the pool.

A more general method is to define a metric over the space of structures and assure at each point that the distance between any two structures is greater than some minimum distance. The most obvious metric is the Hamming distance between the bit strings representing each structure (i.e., the number of bits which do not match). So that a large uniqueness value does not preclude search in a small subspace at the end of the search, the uniqueness value of k bits is slowly decreased to one bit as the search proceeds. If the decrease is linear in the number of bits, we have the following equation for n trials:

    k bit decreasing uniqueness:   Hamming(g_i, g_j) > ⌈k · (n − t) / n⌉

Thus at the start of the search the space is sampled over a relatively coarse "grid," and as the search progresses, the grid size is gradually reduced until adjacent points are considered. This process bears a striking similarity to simulated annealing, with the minimum distance being analogous to the decreasing temperature used during the annealing process. But unlike simulated annealing, genetic search with decreasing uniqueness retains the parallel flavor of genetic search, while simulated annealing is a fundamentally serial process.
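The decreasing-uniqueness rule can be sketched as follows (my reconstruction of the mechanism just described; the mutate-until-unique repair loop is one plausible way to enforce it, not necessarily the implementation reported in [9]):

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def required_distance(k, t, n):
    """Minimum Hamming distance at trial t of n, decreasing linearly from k toward 1."""
    return max(1, round(k * (n - t) / n))

def enforce_uniqueness(child, pool, k, t, n):
    """Flip random bits in child until it is far enough from every pool member."""
    need = required_distance(k, t, n)
    while any(hamming(child, g) < need for g in pool):
        child = list(child)
        child[random.randrange(len(child))] ^= 1
    return child

pool = [[0] * 8, [1] * 8]
child = enforce_uniqueness([0] * 8, pool, k=4, t=100, n=5000)
print(child, min(hamming(child, g) for g in pool))   # at least 4 bits from both
```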
8. Methodology

To evaluate the usefulness of uniqueness, a learning program was written which implemented five search algorithms: (1) standard genetic search with replacement of worst (Holland R1), (2) k bit decreasing uniqueness, (3) DeJong's crowding factor, (4) random search, and (5) parallel hill climbing search. These last two algorithms were included as controls to verify the general utility of genetic algorithms. The hill climbing search used was simple random mutation with preservation of best. Since search is a stochastic process, each algorithm was run 10 times with 10 different random seeds. The initial population depended only on the random seed, not the specific algorithm; therefore for any one seed, every algorithm started with the same initial population. This reduced the chance that an unusual initial population distribution would favor one particular algorithm. Each of these 10 runs was for 5000 trials. The domain for the test was a set of five test functions used by DeJong in his study [2]. Complete descriptions of each function are given in [9]. This "environment" included functions which were continuous and discontinuous, convex and non-convex, unimodal and multimodal, quadratic and non-quadratic, of low, medium, and high dimensionality, and both with and without noise.

9. Results

Table 9-1 shows each algorithm's global performance, i.e., the sum of its scores on the five test functions. Since this was a minimization problem, smaller numbers indicate better performance. Figure 9-1 shows the "best so far" curves for each algorithm. This is simply a graph of the best point in the space found at that point (this is the first derivative of the off-line performance curve).

    Global Performance        On-line    Off-line     Best
    Random Search:            239.336      7.972      6.761
    Hill Climbing:             18.871      2.103     -2.810
    Holland R1:                 6.218      1.227     -0.207
    Crowding Factor 4:          3.886     -0.677     -2.361
    Uniqueness 1 bit:          16.938     -0.017     -2.481
    Uniqueness 2 bits:         15.803     -0.140     -2.209
    Uniqueness 4 bits:         17.766     -1.341     -3.646
    Uniqueness 8 bits:         31.300     -2.162     -4.779
    Uniqueness 12 bits:        50.447     -2.776     -6.287
    Crowding 4 + Uniq 12:       9.447     -2.689     -6.046

    Table 9-1: Global Performance

[Figure 9-1: Global Best Found - best-so-far curves over 5000 trials for Random Search, Hill Climbing, Holland R1, Crowding Factor 4, Uniqueness 1, 2, 4, 8 and 12 bits, and Crowding 4 + Uniq 12.]

The data in Table 9-1 clearly show that increasing the uniqueness parameter improves the off-line performance at the expense of poorer on-line performance. The only limit seems to be that the Hamming distance between two bit strings can be no greater than the length of the strings (so for a gene pool of size M and strings of k bits, the maximum uniqueness would be k - log2 M).

Figure 9-1 shows that the standard R1 algorithm learns very quickly until about 1000 trials, and then the curve levels off. Adding DeJong's crowding factor improves performance significantly, but the curve (marked c4) still levels off after 2950 trials and no improvement is found thereafter. The graph for uniqueness of 12 bits (marked u12) has the best off-line performance of any algorithm, and is still improving at the end of 5000 trials. One surprising result is that the combination of a crowding factor of 4 and a uniqueness of 12 bits performed almost as well off-line as uniqueness of 12 alone, and had a substantially improved on-line performance over simple uniqueness. What happens is this: using a crowding factor greater than 1 means that any new string is likely to be similar to the string it replaces. Since the string being replaced was unique, there is a high probability that the new string will also be unique. Thus fewer mutations are required to maintain diversity, and on-line performance is not as badly degraded.
10. Summary

This study confirms earlier work which demonstrated the robustness of genetic search as a tool for function optimization. It was shown that guaranteeing genetic diversity by means of a decreasing uniqueness measure provides significantly improved off-line performance at the expense of much poorer on-line performance. This degraded on-line performance can be ameliorated by combining DeJong's crowding factor with uniqueness to produce a genetic adaptive algorithm with superior off-line performance and moderate on-line performance.

One avenue for future research is to consider metrics other than Hamming distance for defining uniqueness. Another possible variation is to decode the bit strings into the corresponding real numbers and use Euclidean distance as a measure. This would tend to violate the black-box model of genetic learning, but could be viewed as a genetic heuristic search. Another possible improvement to uniqueness would be a mutation operator which is not uniform over the whole bit string. It might be that a mutation operator which always reintroduces a lost allele would provide another performance boost.

Another interesting prospect is the author's conjecture that a diverse gene pool would be helpful in optimizing time-varying functions. Pettit has studied the usefulness of genetic algorithms for tracking changing environments. She concluded that the standard genetic search performed very poorly in tracking even slowly changing environments [10]. One problem is obvious - if the gene pool ever converges (even at the correct optimum!) all future trials will be allocated at the same point, and the time-varying peak will simply "move out from under it." If, on the other hand, the gene pool is kept diverse, the crossover operator will continue to generate new strings, and should be much more able to track the peak.

11. Acknowledgments

I would like to thank Stephen Smith for his suggestions and insights into the world of genetic learning, and especially for access to his collection of hard-to-find literature on genetic algorithms.

References

[1] Bethke, A.D., Genetic Algorithms as Function Optimizers, PhD dissertation, University of Michigan, January 1981.
[2] DeJong, K.A., Analysis of the Behavior of a Class of Genetic Adaptive Systems, PhD dissertation, University of Michigan, August 1975.
[3] Fogel, L.J., Owens, A.J., and Walsh, M.J., Artificial Intelligence Through Simulated Evolution, Wiley, New York, 1966.
[4] Friedberg, R.M., "A Learning Machine, Part 1," IBM Journal of Research and Development, Vol. 2, 1958.
[5] Goldberg, D.E., Computer-Aided Gas Pipeline Operation Using Genetic Algorithms and Rule Learning, PhD dissertation, University of Michigan, 1983.
[6] Holland, J.H., Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
[7] Holland, J.H., "Adaptive Algorithms for Discovering and Using General Patterns in Growing Knowledge Bases," Intl. Journal of Policy Analysis and Information Systems, Vol. 4, No. 2, 1980.
[8] Holland, J.H., "Escaping Brittleness," Proceedings of the Second International Machine Learning Conference, July 1983.
[9] Mauldin, M.L., "Using Diversity to Improve Off-line Performance of Genetic Search," Tech. report, Computer Science Department, Carnegie-Mellon University, 1984.
[10] Pettit, E. and Swigger, K.M., "An Analysis of Genetic-Based Pattern Tracking and Cognitive-Based Component Models of Adaptation," Proceedings AAAI-83, August 1983.
[11] Rendell, L.A., "A Doubly Layered Genetic Penetrance Learning System," Proceedings AAAI-83, August 1983.
[12] Smith, S.F., "Flexible Learning of Problem Solving Heuristics Through Adaptive Search," Proceedings IJCAI-83, August 1983.
[13] Waterman, D.A., "Generalized Learning Techniques for Automating the Learning of Heuristics," Artificial Intelligence, Vol. 1, 1970.
The Use of Continuity in a Qualitative Physics

Brian C. Williams
Artificial Intelligence Laboratory
Massachusetts Institute of Technology

The ability to reason about a series of complex events over time is essential in analyzing physical systems. This paper discusses the role of continuity in qualitative physics and its application in a system for analyzing the behavior of digital MOS circuits that exhibit analog behavior. The discussion begins with a brief overview of the reasoning steps necessary to perform a qualitative simulation using Temporal Qualitative (TQ) Analysis. The discussion then focuses in on the use of continuity and the relationship between quantities and their higher order derivatives in describing how physical quantities change over time.

INTRODUCTION

The ability to reason about behavior at the qualitative level is essential to perform such tasks as designing, modeling, analyzing and trouble-shooting physical systems. One objective of a qualitative physics is to provide a theory for this type of reasoning. Over the last few years a framework for a qualitative physics has been evolving which includes mechanisms for both device centered (de Kleer and Brown, 1984) and process centered ontologies (Forbus, 1983), through the use of a qualitative algebra for expressing physical interactions. This paper examines the role of continuity in reasoning about change, drawing from a few simple theorems of calculus relevant to a qualitative physics. The discussion begins with a brief overview of the reasoning steps necessary in performing a qualitative simulation using Temporal Qualitative (TQ) Analysis, a system for analyzing the large signal behavior of MOS circuits. The discussion then focuses on the use of continuity and the relationship between quantities and their derivatives in describing the behavior of physical quantities over time.

Temporal Qualitative Analysis describes the causal qualitative behavior of a circuit in response to an input over time, where time is viewed as a set of intervals in which devices move through different operating regions. The qualitative reasoning process, modeled by TQ Analysis, is best illustrated by a simple example. Figure 1 shows a parallel RC circuit which exhibits the following behavior:

[Figure 1: RC Circuit - diagram not reproduced.]

Assume that at instant t1 the voltage across the capacitor (V_IN) is positive. This causes the voltage across the resistor to be positive, producing a positive current through the resistor, which begins to discharge the capacitor and decrease V_IN. V_IN decreases for an interval of time and eventually reaches zero.¹ At this point the current stops flowing and the circuit has reached a steady state at zero volts. This description is marked by a series of events, such as V_IN being initially positive or V_IN moving to zero, which break the description into a series of time intervals. Two types of reasoning are required to analyze the circuit during each interval. One type of reasoning involves determining the instantaneous response of the circuit to a set of primary causes which mark the event; for example, "A positive voltage across the resistor produces a positive current through the resistor ..." The mechanism corresponding to this type of reasoning in TQ Analysis is Causal Propagation.
The second type of reasoning determines the long term effects of these qualitative inputs; for example, "V_IN decreases for an interval of time and eventually reaches zero." This type of reasoning is modeled by Transition Analysis.

To provide a mechanism for analyzing circuits, a representation for the circuit and its resulting behavior is needed. Quantitatively, a circuit is represented as a network of devices. The functionality of each type of device is described by a device model, and the interactions between devices are described by a set of network laws. A device model consists of a set of algebraic relations between state variables associated with the device's terminals (e.g., current, voltage, charge and their derivatives). The relevant equations constraining the circuit's behavior in the above example are:

  V_R = I_R * R          Resistor Model
  I_C = C * dV_IN/dt     Capacitor Model
  I_R = -I_C             Kirchhoff's Current Law

The behavior of the overall circuit is inferred from the network laws and device models and is expressed as a function of time. The behavior of V_IN in the RC circuit is:

  V_IN = V_initial * e^(-t/RC)   for t > 0

Qualitatively, the space of values which a quantity of interest can take on is broken into a set of open intervals or regions separated by a set of boundaries. Time is represented as a sequence of open intervals, separated by instants, and the circuit's state variables are represented by their sign, using zero as a boundary between positive and negative. (The sign of a quantity X is denoted [X].) State variables are then combined into a set of relations using a qualitative algebra consisting of addition, subtraction and multiplication on signs. For example, the sum of two negative numbers is negative ((-) + (-) = -), while the sum of a positive and a negative number is unknown ((+) + (-) = ?) (de Kleer, 1979), (Forbus, 1983). The qualitative equivalent of the above models and laws are:

  [V_R] = [I_R]          Resistor Model
  [I_C] = [dV_IN/dt]     Capacitor Model
  [I_R] = -[I_C]         Kirchhoff's Current Law

An analogous set of equations may also be created for the first and higher order derivatives of current and voltage. The number of higher order derivatives used in the analysis depends on the level of detail of behavior which must be observed in the particular analysis task. For the analysis of performance MOS circuits we have found it adequate to examine first and second derivatives, making it possible to recognize minimums, maximums and inflection points in the circuit behavior.² For simplicity, we only keep track of quantities and their first derivatives in the RC example. The circuit's overall behavior, in response to a set of inputs, is described by a sequence of intervals and the qualitative values of the circuit's state variables for each interval. During an interval each quantity of interest remains within a single qualitative region (e.g., "the voltage is positive" or "the mosfet is in saturation during the interval"). The end of the interval and the beginning of the next is marked by one or more quantities transitioning between qualitative regions.

¹ Since V_IN is a decaying exponential, it is positive for t < ∞ and reaches zero at ∞.
² This differs from earlier qualitative reasoning systems, which focused only on first derivatives (de Kleer, 1979).
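The qualitative algebra on signs described above is small enough to write down directly. The following sketch (Python, added for illustration; the encoding of signs as the characters '+', '-', '0', '?' is an assumption, not part of the paper) implements qualitative negation, addition and multiplication, and uses them to evaluate the confluences for the RC circuit at instant t1.

    def q_neg(a):
        return {'+': '-', '-': '+', '0': '0', '?': '?'}[a]

    def q_add(a, b):
        # (-) + (-) = -, while (+) + (-) is unknown.
        if '?' in (a, b):
            return '?'
        if a == '0':
            return b
        if b == '0' or a == b:
            return a
        return '?'

    def q_mul(a, b):
        if '?' in (a, b):
            return '?'
        if '0' in (a, b):
            return '0'
        return '+' if a == b else '-'

    assert q_add('-', '-') == '-' and q_add('+', '-') == '?'

    # Evaluating the RC confluences with [V_IN] = + at instant t1:
    V_IN = '+'
    I_R = V_IN            # [V_R] = [I_R]       (resistor model; V_R = V_IN here)
    I_C = q_neg(I_R)      # [I_R] = -[I_C]      (Kirchhoff's current law)
    dV_IN = I_C           # [I_C] = [dV_IN/dt]  (capacitor model)
    assert (I_R, I_C, dV_IN) == ('+', '-', '-')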
CAUSAL PROPAGATION

Causal Propagation occurs at the start of a time interval, when a set of qualitative inputs (referred to as primary causes) are propagated forward, using the device models and network laws, to determine their instantaneous effect on other circuit quantities. This may be viewed as a qualitative small signal analysis.³

In the RC explanation, it is given that V_IN is positive at instant t1. Using V_IN as the primary cause, Causal Propagation produces the following result (where "A -> B" reads "A causes B"):

  [V_IN]     = +   Given
  [I_R]      = +   Resistor Model
  [I_C]      = -   Kirchhoff's Current Law
  [dV_IN/dt] = -   Capacitor Model
  [dI_R/dt]  = -   Resistor Model
  [dI_C/dt]  = +   Kirchhoff's Current Law

TRANSITION ANALYSIS

Causal Propagation predicts the instantaneous response of the circuit, but does not describe how quantities change over time. Transition Analysis determines whether or not a quantity transitions between two qualitative regions (e.g., moving from positive to zero, or from saturation to cutoff) at the end of a time interval, and may be viewed as a qualitative large signal analysis.⁴ Transition Analysis is broken into two steps: Transition Recognition and Transition Ordering. Transition Recognition determines whether or not a quantity is moving towards another qualitative region or boundary (e.g., the positive charge on the capacitor is decreasing towards zero, or a mosfet is moving from the boundary between ON and OFF to the region ON). Transition Recognition often determines that more than one quantity is moving towards another region or boundary. Transition Ordering determines which subset of these quantities will transition into a new region or boundary first, marking the end of that interval. Although this article only discusses transitions across zero, the mechanism described here is easily extended to recognize transitions across boundaries other than zero (e.g., transitions between device operating regions) and is described in (Williams, 1984).

TRANSITION RECOGNITION

The basic assumption underlying Transition Recognition and Transition Ordering is:

  The behavior of real physical systems is continuous.⁵

More precisely, it is the functions which describe a physical system that are continuous. There are a number of simple theorems of calculus which describe the behavior of continuous functions over time intervals. In this section we discuss the intuition which these theorems provide in determining how quantities move between and within qualitative regions. These theorems are then used to derive two rules about qualitative quantities: the Continuity Rule and the Integration Rule.

³ Causal Propagation is similar to de Kleer's Incremental Qualitative Analysis (de Kleer, 1979), except that the quantities being propagated are not restricted to first derivatives, but may include quantities and higher order derivatives.
⁴ Alternative approaches to describing the behavior of quantities across qualitative region boundaries have been proposed by (de Kleer and Brown, 1984), (Forbus, 1983) and (Kuipers, 1982a).
⁵ Continuity: "The function f is continuous if a small change in x produces only a small change in f(x), and if we can keep the change in f(x) as small as we wish by holding the change in x sufficiently small." (Loomis, 1977)
The first rule requires that a quantity is continuous over the interval of interest, while the second assumes that a quantity is both continuous and differentiable.⁶

The Intermediate Value Theorem

In order to describe the behavior of some quantity over time, a set of rules is needed for determining how a quantity changes from one interval or instant to the next. If, for example, a quantity is positive during some interval of time, will it be positive, zero or negative during the next time interval? The Intermediate Value Theorem states that:

  If f is continuous on the closed interval [a, b] and if l is any number between f(a) and f(b), then there is at least one point x in [a, b] for which f(x) = l. (Loomis, 1977)

Intuitively, this means that a continuous quantity will always cross a boundary when moving from one qualitative open region to another. Thus each state variable must cross zero when moving between the positive and negative regions. In the above example, the positive quantity may be positive or zero during the next time interval; however, it cannot be negative.

State Variables and Time

By assuming that quantities are continuous and by using the results of the Intermediate Value Theorem, a relationship can be drawn between the representations for state variables and time. Recall that the representation for time consists of a series of instants separated by open intervals. An instant marks a quantity moving from an open region to a boundary or from a boundary to an open region. Also, recall that the range of a state variable is represented by the open regions positive (0, ∞) and negative (-∞, 0), separated by the boundary zero, which we denote +, -, and 0, respectively.

⁶ The notation (a, b) denotes the open interval between a and b, while [a, b] denotes the closed interval between a and b inclusive.
If some quantity Q is positive at some time instant t1 (Q@t1 = ε where ε > 0), then there exists some finite open interval (ε, 0) separating the value of Q from zero (any two distinct points are separated by an open interval). If we assume that Q is described by a continuous function of time, then it will take some finite interval of time {(t1, t2) where t1 ≠ t2} to move from ε to 0, traversing the interval (ε, 0). Similarly, it will take a finite interval of time to move from 0 to some positive value ε. Furthermore, we can say that a quantity moving from 0 to ε will leave zero at the beginning of an open interval of time, arriving at ε at the end of the interval. Conversely, a quantity moving from ε to 0 will leave ε at the beginning of an open interval and arrive at 0 at the end of the open interval. Another way of viewing this is that a quantity will move through an open region during an open interval of time, and a quantity will remain on a boundary for some closed interval of time (possibly for only an instant). This notion of continuity is captured with the following rule:

Continuity Rule
1. If some quantity Q is positive (negative) during an instant, it will remain positive (negative) for some open interval of time immediately following that instant.
2. If some quantity Q is zero during some open interval of time, it will remain zero during the instant following the open interval.

Returning to the RC example, we deduced by Causal Propagation that all of the circuit's state variables were positive or negative during instant t1. Using the first part of the Continuity Rule, we predict that each state variable must remain positive or negative during the open interval immediately following t1 (interval I2). They may, however, transition to zero at the instant following I2.

In addition to looking at the continuity of quantities, information can also be derived by looking at the relationship between quantities and their derivatives. The following two corollaries of the Mean Value Theorem (Thomas, 1968) are of particular interest to TQ Analysis:

1. If a function f has a derivative which is equal to zero for all values of x in an interval (a, b), then the function is constant throughout the interval.
2. Let f be continuous on [a, b] and differentiable on (a, b). If f'(x) is positive throughout (a, b), then f is an increasing function on [a, b], and if f'(x) is negative throughout (a, b), then f is decreasing on [a, b].

By combining these two corollaries with the Intermediate Value Theorem, the behavior of a state variable is described over an interval (instant) in terms of its value during the previous instant (interval) and its derivative. At the qualitative level, this is similar to integration and is captured by the following rule:

Qualitative Integration Rule

Transitions to Zero
1. If a quantity is positive and decreasing (negative and increasing) over an open time interval, then it will move towards zero during that interval and possibly transition to zero at the end of the interval.
2. If a quantity is positive but not decreasing (negative and not increasing) over an open time interval, then it cannot transition to zero and will remain positive (negative) during the following instant.

Transitions Off Zero
3. If a quantity is increasing (decreasing) during some open time interval and was zero during the previous instant, then it will be positive (negative) during the interval.
4. If a quantity is constant during some open time interval and was zero during the previous instant, then it will be zero during that interval.

It is interesting to note that, while in the first two parts of the rule the derivative of the quantity affects how it behaves during the following instant, in the last two parts the derivative of a quantity affects that quantity during the same interval. For example, suppose that a quantity Q is resting at zero at some instant t1 (i.e., [Q]@t1 = 0 and [dQ/dt]@t1 = 0). If dQ/dt becomes positive for the next open interval (I2), then it will cause Q to increase during that interval and become positive. Furthermore, Q moves off zero instantaneously; thus Q is also positive during I2. In the above case, the causal relationship between a quantity and its derivative is similar to that between two different quantities related by a qualitative expression (e.g., in a resistor a change in current instantaneously causes a change in voltage). If we are interested in analyzing a system which includes a number of higher order derivatives, then the Integration Rule may also be applied between each derivative and the next higher order derivative. For example, suppose the system being analyzed involves the position (x), velocity (v) and acceleration (a) of a mass (where dx/dt = v and dv/dt = a), and that all three quantities are constant at some instant (t1). If a becomes positive for the next open interval (I2), then it will cause an increase in v, making it positive for I2. Similarly, positive v causes an increase in x, making it positive for I2. Thus the Integration Rule uses the relation between each quantity and its derivative to locally propagate the effects of changes along a chain from higher order derivatives down towards the lower order derivatives.
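The Qualitative Integration Rule can be read as a small decision procedure. The sketch below (Python, illustrative only; the return strings and the handling of an unknown derivative are assumptions rather than the paper's implementation) maps the sign of a quantity and the sign of its derivative over an open interval to what the rule permits.

    def qualitative_integration(q_sign, dq_sign):
        # q_sign:  sign of Q over the open interval ('+', '-', or '0')
        # dq_sign: sign of dQ/dt over the same interval ('+', '-', '0', or '?')
        if q_sign == '+':
            if dq_sign == '-':
                return 'moving toward zero; may transition to 0'         # part 1
            return 'remains +' if dq_sign in ('+', '0') else 'unknown'   # part 2
        if q_sign == '-':
            if dq_sign == '+':
                return 'moving toward zero; may transition to 0'         # part 1
            return 'remains -' if dq_sign in ('-', '0') else 'unknown'   # part 2
        # Q was zero at the previous instant: transitions off zero.
        if dq_sign == '+':
            return 'becomes + during the interval'                       # part 3
        if dq_sign == '-':
            return 'becomes - during the interval'                       # part 3
        return 'stays 0' if dq_sign == '0' else 'unknown'                # part 4

    assert qualitative_integration('+', '-') == 'moving toward zero; may transition to 0'

Applied in the RC example, [V_IN] = + with [dV_IN/dt] = - yields the "moving toward zero" verdict that Transition Ordering uses later.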
As we have seen above, the Integration Rule describes the direction a quantity is moving with respect to zero (e.g., towards or away from zero). If a quantity is zero and increasing or decreasing during the next interval, then the quantity must transition from zero. If, however, a quantity A is moving towards zero for some interval of time, it may or may not reach zero by the end of the interval. Suppose some other quantity B reaches zero first and B causes dA/dt to become zero; then A will not reach zero. Thus we need a mechanism for determining which quantity or set of quantities will reach zero first during an open interval of time.

TRANSITION ORDERING

As a result of Transition Recognition we have divided the set of all quantities into 1) those which may transition (they are moving towards zero), 2) those which can't transition (they are not moving towards zero), and 3) those whose status is unknown (their direction is unknown). Next we want to determine which subsets of these quantities can transition without leading to 1) quantities which are inconsistent with the set of qualitative relations (e.g., [A] = + and [B] = 0 when [A] = [B]), and 2) quantities which violate the Intermediate Value Theorem and thus are discontinuous (e.g., Q is caused to jump from + to - without crossing 0).

The simplest solution to this is to enumerate all sets of possible transitions and test each for the above two criteria. However, the number of sets of possible transitions grows exponentially with the number of quantities which can transition, so this solution becomes intractable for large systems. (de Kleer and Brown, 1984) use a similar approach, but only need to consider the transitions of the independent state variables. Instead, Transition Ordering uses 1) the direction each quantity is moving with respect to zero, and 2) the qualitative relations between these quantities as a set of constraints, to determine which quantities can transition first and still satisfy the criteria of consistency and continuity. If, in the worst case, every qualitative relation is used during Transition Ordering, then this solution grows linearly with the number of relations in the system.

If the derivative of a non-zero quantity Q is unknown, then its direction cannot be determined by Transition Recognition. In this case a qualitative relation associated with Q, along with the directions of the other quantities involved in that relation, can sometimes be used to determine Q's direction. The qualitative relations used in modeling devices consist of equality, negation, addition and multiplication. Thus for each of these operations Transition Ordering contains a set of rules which place constraints on the direction (e.g., toward zero) and transition status (e.g., can't transition) of each quantity involved in the operation.
The next section provides a few examples of these rules for each type of operation. A complete list of Transition Ordering rules is presented in (Williams, 1984).

Transition Ordering Rules

If the signs of two continuous quantities are equivalent (i.e., A = kB, where k is a positive constant) over the open interval of interest and the following instant, then we know that 1) they are moving in the same direction, and 2) if one of the quantities transitions to zero then the other quantity must transition at the same time. This may be viewed simply as a consistency check on equality. The above rule also holds for negation (i.e., A = -kB), since negating a quantity does not change its direction with respect to zero.

The case where a quantity is the sum or difference of two other continuous quantities is more interesting. For example, assume that quantities A and C are moving towards zero and B is constant, where C = k1*A + k2*B. If A, B and C are positive, then A will transition to zero before C, and C can be eliminated from the list of potential transitions.⁸ On the other hand, if B is negative, then C will transition before A; and finally, if B is zero, then A and C will transition at the same time (since C = k1*A). Also, consider the case where A and C are positive and B is negative but the direction of C is unknown. If B is known to be constant and A is moving towards zero, then C must also be moving towards zero and will reach zero before A. Finally, for multiplication (e.g., A x B = kC) we know that, if A and/or B transitions to zero, then C will transition to zero at the same time; otherwise, neither A nor B is transitioning and C won't transition. Thus, Transition Ordering 1) factors the quantities into sets which transition at the same time, and 2) creates an ordering between these sets according to which transitions precede other transitions.

⁸ If instead we had said that C transitioned to zero first, then A would have to jump from plus to minus without crossing zero (i.e., [A] = [C] - [B] = (0) - (+) = -). This violates the Intermediate Value Theorem and, therefore, cannot occur.

Applying the Transition Ordering Rules

Transition Ordering rules are applied using a constraint propagation mechanism similar to the one used in propagating qualitative values. If, as the result of applying these inference rules, it is determined that 1) all the remaining potential transitions will occur at the same time, and 2) the direction of these quantities is known to be toward zero, then the transitions occur at the end of the current interval. Otherwise, an ordering may be externally provided for the remaining potential transitions, or the system can try each of the remaining sets of possible transitions. More quantitative techniques which help resolve the remaining sets of possible transitions are currently being explored.

Returning to the RC circuit, we have deduced thus far that the capacitor has a positive voltage across it and is discharging through the resistor. Next it must be determined whether or not any quantities will transition to zero at the end of interval I2. By applying the Integration Rule to [V_IN] = + and [dV_IN/dt] = -, we know that V_IN is moving towards zero. Using a similar argument, we determine that I_R and I_C are also moving towards zero. The direction of [dV_IN/dt], [dI_R/dt] and [dI_C/dt], however, cannot be determined using the Integration Rule, since their derivatives are unknown. The direction of each of these quantities can be determined using the Transition Ordering rule for equivalences described above. For example, we know that [dV_IN/dt] is moving towards zero, since [I_C] is moving towards zero and [I_C] = [dV_IN/dt] from the capacitor model. In addition, it is deduced from KCL and the resistor model, which are both equivalences, that [dI_C/dt] and [dI_R/dt] are also moving towards zero. Finally, since all of the quantities are qualitatively equivalent, they will all transition to zero at the same time. Since no other potential transitions exist, each of these quantities will transition to zero at the end of interval I2. Thus the voltage, currents and their derivatives are zero at the next instant.
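The equivalence and sum rules used in the RC example can be sketched as follows (Python, illustrative only; the way quantities and their "moving toward zero" status are encoded is an assumption, not Williams's implementation). The sum rule covers the case C = k1*A + k2*B with A and C positive and A moving toward zero, as discussed above.

    def order_equivalence(a_toward_zero):
        # A = k*B (k > 0): both quantities move in the same direction and must
        # transition to zero at the same time.
        return ('B moves toward zero and transitions with A'
                if a_toward_zero else 'B does not transition')

    def order_sum(sign_B):
        # C = k1*A + k2*B, with [A] = [C] = '+', A moving toward zero, and B
        # constant.  Returns which quantity reaches zero first.
        if sign_B == '+':
            return 'A transitions before C'
        if sign_B == '-':
            return 'C transitions before A'
        if sign_B == '0':
            return 'A and C transition at the same time'
        return 'unknown'

    assert order_sum('0') == 'A and C transition at the same time'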
Both Causal Propagation and Transition Analysis have been implemented and used to correctly predict the behavior of many RLC and mosfet circuits such as high and low pass filters, oscillators and bootstrap circuits. Temporal Qualitative Analysis is currently being extended to incorporate more quantitative information, allowing it to make more precise predictions about complex physical systems. In addition, TQ Analysis is being incorporated into a system for designing and debugging high performance MOS circuits.

Two components of Temporal Qualitative Analysis have been discussed: Causal Propagation determines the incremental response of a system to a change in an input or its higher order derivatives, while Transition Analysis determines the long term effect of these changes. By assuming that physical quantities are modeled by continuous functions, we have been able to develop a few rules to determine how state variables move between qualitative regions. These rules capture one's intuitive notion of continuity and integration.

I thank Howie Shrobe, Rich Zippel, Johan de Kleer, Daniel Bobrow, Ramesh Patil and Dan Weld for many insightful comments.

REFERENCES

[1] de Kleer, J. and Brown, J.S., "A Qualitative Physics Based on Confluences," to appear in Artificial Intelligence.
[2] de Kleer, J. and Bobrow, D., "Qualitative Reasoning with Higher Order Derivatives," in Proceedings of the National Conference on Artificial Intelligence, Austin, Texas, August 1984.
[3] de Kleer, J., "Causal and Teleological Reasoning in Circuit Recognition," TR-529, MIT Artificial Intelligence Laboratory, Cambridge, Massachusetts, September 1979.
[4] Forbus, K.D., "Qualitative Process Theory," AIM-664A, MIT Artificial Intelligence Laboratory, Cambridge, Massachusetts, May 1983.
[6] Kuipers, B., "Getting the Envisionment Right," in Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, Penn., August 1982, pp. 209-212.
[7] Loomis, L., Calculus, Addison-Wesley, Reading, Massachusetts, 1977.
[8] Williams, B.C., "Qualitative Analysis of MOS Circuits," to appear in Artificial Intelligence.
Constraint-based Generalization:
Learning Game-Playing Plans from Single Examples

Steven Minton
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

Constraint-based Generalization is a technique for deducing generalizations from a single example. We show how this technique can be used for learning tactical combinations in games and discuss an implementation which learns forced wins in tic-tac-toe, go-moku, and chess.¹

1 Introduction

During the last decade "learning by examples", or concept acquisition, has been intensively studied by researchers in machine learning [1]. In this paradigm, the learner induces a description of a concept after being shown positive (and often negative) instances of the concept. A limitation of many existing concept acquisition systems is that numerous examples may be required to teach a concept. People, on the other hand, can make accurate generalizations on the basis of just one example. For instance, a novice chess player who is shown just one example of a "skewer" (Figure 1) will later be able to recognize various forms of the skewer principle in different situations. Understanding how and why a particular example works allows him to arrive at a generalized concept description. However, most existing concept acquisition systems are completely data-driven; they operate in relative isolation without the benefit of any domain knowledge.

Constraint-based Generalization is a technique for reasoning from single examples in which generalizations are deduced from an analysis of why a training instance is classified as positive. A program has been implemented that uses this technique to learn forced wins in tic-tac-toe, go-moku and chess. In each case, learning occurs after the program loses a game. The program traces out the causal chain responsible for its loss, and by analyzing the constraints inherent in the causal chain, finds a description of the general conditions under which this same sequence of events will occur. This description is then incorporated into a new rule which can be used in later games to force a win or to block an opponent's threat. Following a discussion of this implementation, a domain-independent formulation of Constraint-based Generalization will be introduced.

2 Learning Plans for Game-playing

In game-playing terminology, a tactical combination is a plan for achieving a goal where each of the opponent's moves is forced. Figure 1 illustrates a simple chess combination, called a "skewer". The black bishop has the white king in check. After the king moves out of check, as it must, the bishop can take the queen.

[Figure 1: A Skewer - chess diagram not reproduced.]

A student who has had this particular instance demonstrated to him can find an appropriate generalization by analyzing why the instance worked. Such an analysis can establish that while the pawns are irrelevant in this situation, the queen must be "behind" the king for the plan to succeed. Ultimately, a generalized set of preconditions for applying this combination can be found. In future games this knowledge can be used to the student's advantage. Presumably, he will be less likely to fall into such a trap, and may be able to apply it against his opponent. The learning algorithm we propose models this reasoning process. There are three stages:

1. Recognize that the opponent achieved a specific goal.
2. Trace out the chain of events which was responsible for realization of the goal.
3. Derive a general set of preconditions for achieving this goal on the basis of the constraints present in the causal chain.

3 The Game Playing System

This section describes a game-playing system that has learned winning combinations in tic-tac-toe, go-moku and chess. A forcing state is a configuration for which there exists a winning combination - an offensive line of play that is guaranteed to win. Figure 2 illustrates a winning combination in go-moku, a game played on a 19x19 board. The rules are similar to tic-tac-toe except that the object of the game is to get 5 in a row, either vertically, horizontally or diagonally. If, in state A, player X takes the square labeled 2, then player O can block at 1 or 6, but either way X will win. If O had realized this prior to X's move, he could have pre-empted the threat by taking either square 2 or 6. The game-playing system learns descriptions of forcing states and the appropriate offensive move to make in each such state by analyzing games that it has lost.

[Figure 2: A Winning Combination in Go-moku - four board states: State A (X to move), State B (O to move), State C (X to move), State D (X wins); diagrams not reproduced.]

The game-playing system is organized into several modules, including a Top-Level module that interacts with the human player and a Decision module that chooses the computer's moves. A set of features which describes the current board configuration is kept in a data structure called Game-State. Most of the system's game-specific knowledge is partitioned into two sets of rules:

- A set of State-Update Rules provided by the programmer for adding and deleting features from Game-State after each turn.
- A set of Recognition rules employed by the Decision module to detect forcing states. Initially this set is empty. The Learning module produces more recognition rules whenever the program loses.

Features in Game-State are represented by predicates. For example, in tic-tac-toe is-empty(square1) might be used to indicate that square1 is free. The State-Update Rules form a production system that updates the Game-State as the game progresses. The IF-part or left-hand side of each rule is a description: a conjunction of features possibly containing variables. (Angle brackets are used to denote variables, e.g. <x>.) The right-hand side of a rule consists of an add-list and a delete-list specifying features to be added and deleted from Game-State when the rule is activated. Figure 3 shows some State-Update rules that were used for go-moku.² In the present implementation, only one State-Update rule can be applicable at any time, so no conflict resolution mechanism is necessary. Whenever a rule fires, it leaves behind a State-Update-Trace, indicating the features it matched in Game-State.

¹ This research was supported in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551, and in part by a Bell Laboratories Ph.D Scholarship.
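As an illustration of how such a production might be represented and fired, the sketch below (Python, added for illustration; the tuple encoding of features and the function name are assumptions, and only the Create-win1 rule shown in Figure 3 below is encoded) matches that rule's left-hand side against a Game-State of ground feature tuples and returns its add-list.

    def fire_create_win1(game_state):
        # Create-win1: an input move onto an empty square that extends a
        # four-in-a-row produces won(<p>).  Returns the features to ADD.
        adds = []
        for feature in game_state:
            if feature[0] != 'input-move':
                continue
            _, square, p = feature
            if ('is-empty', square) not in game_state:
                continue
            for other in game_state:
                if (other[0] == 'four-in-a-row' and other[2] == p
                        and ('extends', other[1], square) in game_state):
                    adds.append(('won', p))
        return adds

    state = {('input-move', 'sq7', 'X'), ('is-empty', 'sq7'),
             ('four-in-a-row', 'row3', 'X'), ('extends', 'row3', 'sq7')}
    assert fire_create_win1(state) == [('won', 'X')]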
Derive a general set of preconditions for achieving this goal on the basis of the constraints present in the causal chain, 3 The Game Playing System This section describes a game-playing system that has learned winning combinations in tic-tat-toe, go-moku and chess. A forcing state is a configuration for which there exists a winning combination - an an offensive line of play that is guaranteed to win. Figure 2 illustrates a winning combination in go-moku, a game played on a 19x19 board. The rules are similar to tic-tac- toe except that the object of the game is to get 5 in a row, either vertically, horizontally or diagonally. If, in state A, player X takes the square labeled 2, then player 0 can block at 1 or 6, but either way X will win. If 0 had realized this prior to X’s move, he could have pre-empted the threat by taking either square 2 or 6. The game-playing system learns descriptions of forcing states and the the appropriate offensive move to make in each such state by analyzing games that it has lost. 1 Th1.s research was supported in part by the Defense Advanced Projects Agency (DOD) Arpa Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F3361578-C-1551. and in part by a Bell Laboratories Ph.D Scholarship. 251 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. ’ 2 3)(4)(5)(6 + ’ 2x3x4x5x6 -+ ’ *x3x4x5x60 J$++j# State A X to move State B 0 to move State C X to move Figure 2: A Winning Combination in Go-moku The game-playing system is organized into several modules, including a Top-Level module that interacts with the human player and a Decision module that chooses the computer’s moves. A set of features which describes the current board configuration is kept in a data structure called Game-State. Most of the system’s game-specific knowledge is partitioned into two sets of rules: l A set of State-Update Rules provided by the programmer for adding and deleting features from Game-State after each turn. l A set Recognifion rules employed by the Decision module to detect forcing states. Initially this set is empty. The Learning module produces more recognition rules whenever the program loses. Features in Game-State are represented by predicates. For example, in tic-tat-toe is-empty(square1) might be used to indicate that square1 is free. The State-Update-Rules form a production system that updates the Game-State as the game progresses. The IF-part or left-hand side of each rule is a description: a conjunction of features possibly containing variables. (Angle bracke?s are used to denote variables, eg. <x>). The right-hand side of a rule consists of an add-hst and a delete- ltst specifying features to be added and deleted from Game-State when the rule is activated. Figure 3 shows some State-Update rules that were used for go-moku.2 In the present implementation, only one State-Update rule can be applicable at any time, so no conflict resolution mechanism is necessary. Whenever a rule fires, it leaves behind a State-Update- Trace, indicating the features it matched in Game-State. 
RULE Create-win1 RULE Create-four-in-a-row IF input-move(<square>, <p>) IF input-move(<square>,<p>) Is-empty(<square>) is-empty(<square>) four-in-a-row(<4position>,<p>) three-in-a-row(<3position>, <p>) extends(<4position>, <square>) extends(<3position>, <square>) THEN composes(<newposition>, ADD won(<p>) THEN <square>,<3position>) DELETE three-in-a-row(<3position>,<p>) Input-move(<square>,<p>) ADD four-in-a-row(<newposition>,<p>) Figure 3: Some State-Update rules for Go-rnoku An INPUT-MOVE feature is added to Game-State after each player moves (see Figure 3). The State-Update system is then allowed to run until no rules can fire, at which point Game-State should accurately reflect the new board configuration. When a player <p> wins, the State-Update system adds a feature WON(<p>) to Game-State. The Decision module relies on the set of Recognition rules to identify forcing states. (See Figure 4 for some representative recognition rules). The right-hand side of each recognition rule indicates the appropriate move to initiate the combination3. When a recognition rule indicates that the opponent is threatening to win, then the computer blocks the threat (unless it can win before the opponent). The blocking move is classified as 2 Extends(<positionX>,<squareV>) is true when <squareY> is adjacent to, and in the same line as, the sequence of squares at <positionX). Composes(<positionX>,<squareV>,<posltionZ>) is true when <squareY) and <positionZ> can be joined to form <positionX). State D X wins forced, and the name of the recognition rule and the features it matched are recorded in a data-structure called DECISION- TRACE. The Decision module also contains a simple procedure for deciding on the best move if no recognition rule is applicable; For example, in go-moku the program merely picks the move which extends its longest row. RECOG-RULE Recog-Four RECOG-RULE Recog-Open-Three IF tour-in-row(<position>,<player>) IF three-in-row(<3position>,<player>) Is-empty(<square>) extends(<position>, <square>) is-empty(tsquareC>) RECOMMENDED-MOVE extends(<3position>,<squareC>) input-move(<square>,<player>) composes(<4position>,<squareC), <3position>) is-empty(<squareB>) extends(<4position>,<squareB>) is-empty(<squareA>) extends(<4position>,(squareA>) RECOMMENDED-MOVE Input-move(<squareC>,<player>) Figure 4: Recognition Rules Learned in Go-moku Initially, the system has no recognition rules. Whenever it loses a game the learning module analyzes why the loss occurred; A new recognition rule can be introduced if the state occurring after the computer’s last non-forced move - the critical state - can be shown to be a forcing state. If this analysis is successful, a new rule will be built so that this state. and others like it, can be recognized as forcing states in future games. The learning module must identify the features in the critical state that allowed the opponent !o win. It accomplishes this by examining the sequence of state-update rules and recognition rules which fired between each of the opponents moves and which eventually added the the feature WON(opponent to Game-State. Assuming that the threats recognized by the computer were independent4 then the critical state must have been a forcing state. indeed, any state in which this same sequence of rules will fire, resulting in a wtn, must be a forcing state. To build the new recognition rule, the learning module finds a generalized description of the critical state such that the constraints defined by the sequence of rules are satisfied. 
A procedure named Back-Up accomplishes this by reasoning backward through the rule sequence. In order to traverse rules in the backward direction, Back-Up takes a description of a set of post-features, and finds the most general set of pre-features such that if the pre-features are true before the rules fire, the’ post- features will be true afterwards. This operation is an instance of “constraint back-propagation” [12]: Dijkstra formalizes this method ot reasoning backwards in his discussion of weakest preconditions for proving program correctness [3]. in order to illustrate how Back-Up operates, we will consider how Hecog-Open-Three (Figure 4) is acquired after the computer loses the position shown in Figure 2. Recog-Open-Three recognizes an “open three”, which consists of a “three-in-a-row” with two free squares on one side and one free square on the other. In order for Recog-Open-Three to be learned, the computer must have previously learned Recog-Four which states that a four-in-a-row with an adjacent open square constitutes a forcing state. After the opponent (player X) takes square 2, the computer (player 0) finds two instanttations of Recog-Four in state 8 (one for each way for X to win). Since only one of these 3 Instead of listing all the subsequent moves in the combination, a separate recognition rule exists for each step. 4 Threats are independent if there is no way to block them simultaneously. 252 can be blocked, the computer arbitrarily moves to square 6, recording that the move was forced by the particular instantiation of Recog-Four. Then the opponent proceeds to win by taking the fifth adjacent square on the other side. The learning module is then invoked to build a new recognition rule. By examining the State-Update-Trace, the program finds that an instantiation of Rule Create-Win1 (Figure 3) was responsible for adding Won(opponent) to Game-State after the opponent made his last move. Back-propagation identifies the pre-features necessary for this rule to produce a post-feature matching Won(<player>): input-move(<squareA>, <player>) 8f four-in-row(<4position>, <player>) & is-empty{ <squareA>) & extends(<4position>, <squareA>) Deleting the input-move feature gives a generalized description of state C, the forcing state existing prior to the opponent’s last move. Since the computer’s move previous to this (from State B to State C) was in response to the independent threat identified by Recog-Four, the system continues backing- up. The left-hand side of Recog-Four is combined with the preconditions above to arrive at a generalized description of state 6. This is a state with two independent threats: four-in-row(<4position>, <player>) & is-empty(<squareA>). & is-empty(<squareB>) & extends(<4position>, <squareA>) & extends(<4position>, <squareB>) Continuing, Back-Up finds that the opponent’s move (into square X) caused rule Create-four-in-a-row to fire, producing the four-in-a-row feature in this description. Back-propagating across this rule allows us to restate the pre-conditions as show in Recog-Open-Three (Figure 4). The Recommended-Move is the input-move precondition corresponding to X’s move from state A to State B. The left-hand side of Recog-Open-Three describes the relevant features in state A which allowed X to force a win. 4 Discussion Murray and Elcock [9] present a go-moku program that learned patterns for forcing states by analyzing games that it had lost. A similar program by Koffman 161 learned forcing states for a class of games. 
Pitrat [lOJ describes a program that learned chess combinations by analyzing single examples. In each of these programs, generalizations were produced either by explicit instruction, or through the use of a representation that only captured specific information. The approach outlined in this paper is similar m spirit to these earlier programs, but more powerful, since generalizations are deduced from a declarative set of domain-specific rules. After being taught approximately 15 examples, the system plays go-moku at a level that is better than novice, but not expert. Based the performance of Elcock and Murray’s go-moku learning program, it seems likely that the system could be brought to expert level by teaching it perhaps 15 more examples. However, as more complex rules are learned the system slows down dramatically, despite the use of a fast pattern matcher (a version of the rete algorithm [5]). The problem is that the complexity of each new rule, in terms of the number of features in its left-hand side, grows rapidly as the depth of the analysis is extended. In order to overcome this, the complex left-hand side descriptions should be converted into domain-specific patterns that can be efficiently matched. This has not been implemented. In addition to learning combinations for winning tic-tat-toe and go-moku, the system (with modifications to the decision module) has learned patterns for forced matings in chess. While we believe that this implementation demonstrates the generality of the learning technique, it does not provide a practical means for actually playing chess. The patterns learned are inefficient and represent only a fraction of the knowledge required to play chess [13]. 5 Requirements for Learning With many learning systems, it is necessary to find some “good” set of features before learning can occur. An important aspect of this system is that we can specify exactly what is necessary for the system to be able to learn. In particular, if a State-Update system can be written that satisfies the following requirements, it can be shown that correct recognition rules will be acquired for tic-tat-toe, go-moku, or any other game in which the concept of a forcing state can be appropriately formalized [7]. 1. FORMAT REQUIREMENT: the State-Update rules must conform to the format specified in section 3. 2. APPLICABILITY REQUIREMENT: The State-Update rules must indicate when the game has been lost by adding a Won feature to Game-State. 3. LEGALITY REQUIREMENT: The Update-System must only accept legal moves. Informally speaking, the FORMAT requirement guarantees that back-propagation can be used to find the preconditions of a sequence of rules; The APPLICABILITY requirement guarantees that the system can identify when to begin backing up; The LEGALITY requirement guarantees that only legal Recommended-moves will be found. While there will exist many Update-Systems that meet these requirements for any particular game, with any such system the learning algorithm can learn patterns describing forcing states. However, the particular choice of features and rules will influence the generaiity of the learned patterns. The more general the State-Update rules are, the more general the learned patterns will be. In the previous section a recognition rule for Go-moku was learned; The generality of this rule was directly attributable to the level of generality in the State-Update rules. If instead, a large set of very specific State-Update rules was provided (eg. 
listing all 1020 ways to win) a much less general recognition rule would be learned from the exact same example. It is possible to extend the system so that preconditions for other events besides forced wins can be learned, provided that such events are describable given the features used by the State- Update system. For example, learning to capture pieces in checkers is only possible it one is able to describe a capture in the description language. In order to [earn recognition rules for arbitrary events, the definition of a forcing state must be modified. We define a state S to be a forcing state for player P with respect to event ,5 iff P can make a move in S that is guaranteed to produce an event at least as good as E. Unfortunately, recognition rules for arbitrary events may cause more harm than good if they are used indiscriminately. A player may be able to force E, but then find himself in a situation where he is worse off in other respects. 6 Comparing Constraint-based Generalization systems Within the past 2 years, a considerable amount of research has been presented on systems that learn from single examples 18, 14, 11, 21. In addition, there exists an older body of related work [4,6, 91. Each of these systems is tailored to a particular domain: game playing [6, 91. natural language understanding [2], visual recognition of objects [14], mathematical problem solving [8, 1 l] and planning [4]. In order to characterize what these systems have in common, we present the following domain-independent description of Constraint- based Generalization: 253 Input: A set of rules which can be used to classify an instance as either positive or negative AND a positive instance. Generalization Procedure: Identify a sequence of rut& that can be used to classify the instance as positive. Employ backward reasoning to find the weakest preconditions of this sequence of rules such that a positive classification will result. Restate the preconditions in the description language. Each of the systems alluded to earlier can be viewed as using a form of Constraint-based Generalization although they differ in their description languages, formats for expressing the rules and examples, and criteria for how far to back-propagate the preconditions. In order to substantiate this claim, we will show how two well-known systems fit into this view. Winston, Binford, Katz and Lowry [14] describe a system that takes a functional description of an object and a physical example and finds a physical description of the object. In their system, the rules are embedded in precedents. Figure 5 shows some precedents, a functional description of a cup, and a description of a particular physical cup. (The system converts natural language and visual input into semantic nets.) The physical example is used to identify the relevant rules (precedents), from which a set of preconditions is established. The system uses the preconditions to build a new rule as shown in Fin. 6. A cup is a stable liftable Functional Descriotion a a CUD: open-vessel. Phvsrcal Examole of a CUD: E is a red object. The objects body IS small. Its bottom IS flat. The object has a handle and an upward-pointing concavity. l A Brick: The brick is stable because i!s bottom is flat. The brick is hard. l A Suitcase: The suitcase is liftable because it is graspable and because it is Irght. The suitcase is useful because it is a portable container for clothes. l A bowl: The bowl is an open-vessel because it has-an upward pointing concavity. 
The bowl contains tomato soup.

Figure 5: Functional Description, Example, Precedents

IF     [object9 is light] & [object9 has concavity7] & [object9 has handle4]
       & [object9 has bottom7] & [concavity7 is upwardpointing] & [bottom7 is flat]
THEN   [object9 isa Cup]
UNLESS [[object9 isa openvessel] is FALSE] or [[object9 is liftable] is FALSE]
       or [[object9 is graspable] is FALSE] or [[object9 is stable] is FALSE]

Figure 6: New Physical Description, in Rule Format

The LEX system learns heuristics for solving symbolic integration problems. Mitchell, Utgoff and Banerji [8] describe a technique that allows LEX to generalize a solution after being shown a single example. A solution is a sequence of problem-solving operators that is applied to the initial problem state (Fig. 1). In this system, the example serves to identify a sequence of operators that can be used to solve a particular problem. The system then back-propagates the constraints through the operator sequence to arrive at a generalized description of the problems that can be solved by applying this operator sequence. Below is a problem and a solution sequence provided to LEX:

  ∫ 7(x^2) dx  ==OP1==>  7 ∫ (x^2) dx  ==OP3==>  7 x^3/3

Back-propagation establishes that the initial expression must match ∫ a(x^r) dx in order for this sequence of operators to be applicable.

  OP1: ∫ r f(x) dx  ==>  r ∫ f(x) dx
  OP2: ∫ sin(x) dx  ==>  -cos(x) + C
  OP3: ∫ x^r dx     ==>  x^(r+1)/(r+1) + C

Table 1: Some Operators Used by LEX

7 Conclusions

Constraint-based generalization is a form of meta-reasoning in which generalizations are deduced from a single example. The example serves to isolate a sequence of rules that identify positive instances. By finding the weakest preconditions of these rules that produce a positive classification, a generalization can be made. The power of this technique stems from the focus that the example provides for the analysis process.

8 Acknowledgements

Tom Mitchell and his colleagues' research on LEX suggested many of the ideas presented here. I thank Murray Campbell, Jaime Carbonell, Hans Berliner and Pat Langley for their suggestions.

References

1. Carbonell, J., Michalski, R. and Mitchell, T. An Overview of Machine Learning. In Machine Learning, Carbonell, J., Michalski, R. and Mitchell, T., Eds., Tioga Publishing Co., 1983.
2. DeJong, G. An Approach to Learning by Observation. Proceedings, International Machine Learning Workshop, 1983.
3. Dijkstra, E. A Discipline of Programming. Prentice Hall, 1976.
4. Fikes, R., Hart, P. and Nilsson, N. "Learning and Executing Generalized Robot Plans." Artificial Intelligence 3, 4 (1972).
5. Forgy, C. "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Matching Problem." Artificial Intelligence 19, 1 (1982).
6. Koffman, E. "Learning Through Pattern Recognition Applied to a Class of Games." IEEE Trans. Sys. Sciences and Cybernetics SSC-4, 1 (1968).
7. Minton, S. A Game-Playing Program that Learns by Analyzing Examples. Tech report, Computer Science Dept., Carnegie-Mellon University, forthcoming.
8. Mitchell, T., Utgoff, P. and Banerji, R. Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics. In Machine Learning, Carbonell, J., Michalski, R. and Mitchell, T., Eds., Tioga Publishing Co., 1983.
9. Murray, A. and Elcock, E. Automatic Description and Recognition of Board Patterns in Go-Moku. In Machine Intelligence 2, Dale, E. and Michie, D., Eds., Elsevier, 1968.
10. Pitrat, J. Realization of a Program Learning to Find Combinations at Chess.
In Computer Oriented Learning Processes, Simon, J., Ed., Noordhoff, 1976.
11. Silver, B. Learning Equation Solving Methods from Worked Examples. Proceedings of the International Machine Learning Workshop, 1983.
12. Utgoff, P. Adjusting Bias in Concept Learning. Proceedings, International Machine Learning Workshop, 1983.
13. Wilkins, D. "Using Patterns and Plans in Chess." Artificial Intelligence 14 (1980).
14. Winston, P., Binford, T., Katz, B. and Lowry, M. Learning Physical Descriptions from Functional Definitions, Examples and Precedents. Proceedings of the National Conference on Artificial Intelligence, AAAI, 1983.
Generalization for Explanation-based Schema Acquisition

Paul O'Rorke
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

ABSTRACT

This paper is about explanation-based learning for heuristic problem solvers which "build" solutions using schemata (frames like scripts) as both "bricks" and "mortar". The heart of the paper is a description of a generalization method which is designed to extract as much information as possible from examples of successful problem solving behavior. A related generalizer (less powerful but more efficient) has been implemented as part of an experimental apprentice.*

I INTRODUCTION

Many knowledge-based AI systems have used schemata (knowledge packets such as frames or scripts) as the basis of computational models of understanding [1], planning [2] or other problem solving [3], but very few of these systems have been capable of generating their own schemata. As a result, most schema-based systems have been unable to automatically profit from their experiences, so that the main way of improving their performance has been by laboriously hand-coding new schemata. At the University of Illinois, a small group led by Prof. Gerald DeJong has been exploring and automating a solution to this knowledge-acquisition bottleneck: a particular brand of explanation-based learning called "explanatory schema acquisition (ESA)." [4, 5]

This paper describes the explanation and generalization methods underlying explanatory schema acquisition in the context of our first complete implementation of an experimental apprentice. The apprentice (named MA) contains a heuristic search schema-based problem solver specializing in interactive human-oriented theorem proving. Like any other apprentice, MA starts life with very limited problem solving ability. Initially, MA can only make tiny contributions to most problem solving efforts; its master must supply the insights which lead to successful proofs. At first, MA "merely" observes the master's behavior, but MA recognizes when this behavior leads to success. Then by generalization based on analysis of the reasons for success MA learns new schemata and heuristics for their use.

II PROBLEM SOLVING WITH SCHEMATA

Schema-based problem solvers are goal directed systems which aim to construct schemata satisfying given constraints. Of course, the details of schemata will vary from one domain to another, but in general schemata used in explanation-based schema acquisition systems are comprised of parameters (variables), constraints on parameters (which may function as "slots"), and dependency relations between the constraints. A schema-based problem solver makes progress toward its goal by instantiating general schemata called prototypes. Instantiation is accomplished by invoking a prototype (copying it with new, unique names for parameters) and binding parameters. Parameters may only be bound to or identified with objects subject to the constraints. Two kinds of prototype are used to build solutions: primitive schemata and schematic forms. A schematic form is a schema which is used to combine existing schemata into a new composite schema; it has parameters which are constrained to be filled by other schemata.

MA has a primitive proof prototype which plays the role of the assumption axiom schema of Manna's Gentzen style natural deduction system [6]. Given a set of hypotheses, the assumption axiom can be used to infer a desired conclusion when it is a member of the given set. The parameters associated with AssumptionAxiom schemata are Self, A and Gamma. A is constrained to be a WFF (well-formed formula) and Gamma must be a SET-OF-WFFS. The Self parameter has the constraint (PROOF Self OF A FROM (Union Gamma {A})), which depends on the other constraints. The constraints associated with an instance of a prototype are represented as assertions in a database, and the dependencies between the constraints are represented as data-dependencies as described and illustrated in [7]. The dependency graph associated with an instance of assumption axiom represents the fact that the instance (denoted by "Self") is a complete proof (of A from the set of WFFs derived by adding A to Gamma) if and only if A is a WFF and Gamma is a set of WFFs.

* This report describes work done in the AI group of the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. The work was supported by the National Science Foundation under grant NSF IST 81-20254.
MA has a primitive proof prototype which plays the role of the assumption axiom schema of Manna's Gentzen style natural deduction system [61. Given a set of hypotheses, the assumption axicxn can be used to infer a desired conclusion when it is a member of the given set. The parame- ters associated with AssumptionAxiom schemata are Self, A and Gamma. A is constrained to be a WFF (well-formed formula) and Gamma must be a SET-OF- WFFS. The Self parameter has the constraint (PROOF Self OF A FROM (Union Gamma {A)) which depends on the other constraints. The constraints associated with an instance of a prototype are represented as assertions in a database and the dependencies between the constraints are represented as data-dependencies as described and illustrated in [7]. The dependency graph associ- ated with an instance of assumption axiom represents the fact that the instance (denoted by ftSelflt) is a complete proof (of A from the set of WFFs derivea by adding A to Gamma) if and only if A is a WFF and Gamma is a set of WFFs. 260 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. (PROOF AnInstanceOfAssumptionAx. OF A ON Gamma,A) A (WFF A)- (SET-OF-WFFS Gamma) Fig. 1: JLSS~DtiOnAXiOm Constraints & DeDendencies Not all MA's schemata are primitive like assumption axiom. Complex schemata can be con- structed using "inferential forms." For example, MA has schematic forms (corresponding to the "Or Introductionn rules of [6]) which facilitate the construction of proofs of disjunctions. If we have a proof of A we can plug it into an instance of OrIntroductionA to get a proof' of A OR B for any WFF B. A similar OrIntroductionB schema can be used to construct a proof of a disjunction out of a proof of the second disjunct. (PROOF AnInstanceOfOrIntro. OF (OR A B) ON Gamma) (PROOF Pf OF A ON (WFF B) (SET-OF-WFFS Gamma) (WFF A) Fig. 2: OrIntroduction Constraints & DeDendencies MA also has an inferential form corresponding to the Elimination of Assumption rule. If we have two proofs of A, one based on some assumptions plus B and another on the same assumptions plus NOT B then we can plug the proofs into this form to get a proof of A that doesn't aepend on B. Of' course, it's not enough simply to have a collection of passive schemata: one must also know how to use them! MA is a heuristic schema based problem-solver because heuristics control the instantiation of schemata. The heuristics have conditions which only allow invocation of a schema when it is sure to help achieve the goal. For (PROOF AnInstOfElimOfAssumption OF A ON Gamma) (PROOF Pfl 0F A ON Gamma,B) \ (PROOF Pf2 OF A ON Gamma,(NOT B)) Fig. 3: EliminationOfAssumDtion Constraints Figure 4: A proof that P OR (NOT P) is a tautology example, MA has a heuristic which causes it to instantiate AssumptionAxiom to achieve goals of the form (PROOF GoalPF OF A FROM (Union Gamma {A))) because AssumptionAxiom is sure to achieve such goals. OrIntroduction schemata are invoked for goals of the form (PROOF GoalPF OF (OR A B) FROM Gamma) because they reduce such goals to simpler goals of proving one or the other dis- junct. On the other hand, just because a schema can be applied in a given situation is no reason that it should be, so MA does not instantiate EliminationOfAssumption just because it sees a goal of the form (PROOF GoalPF OF A FROM Gamma). 
Narrowing the conditions under which schemata are invoked has the advantage of minimizing search, but we pay a price in generality (in this case our theorem prover is rendered incomplete). In other words, when the problem solver can solve a problem it will do so efficiently, but there will be soluble problems which it cannot solve. Initially MA (like many people) will not know what to do if you ask it to prove that P OR NOT P is a tautology (i.e., to construct a schema named GoalPF under the constraint (PROOF GoalPF OF (OR P (NOT P)) FROM Empty-Set)). None of MA's heuristics are applicable to this goal, so the apprentice program gives up and waits for its master to give it a hint.

To set the stage for the next section, assume that the user sees how to build the desired proof. Assuming the user applies EliminationOfAssumption toward achieving the goal, MA will fill in the needed OrIntroduction (A and B) and AssumptionAxioms, hooking them up as illustrated in figure 4. More importantly, the achievement of the goals involved will trigger the generalization algorithm, which will make sure MA is not at a loss when confronted with similar problems and subproblems in the future!

Figure 4: A proof that P OR (NOT P) is a tautology

III GENERALIZATION

The goal of ESA learning is to improve the performance of schema-based problem solvers. One way of maximizing this improvement is by extracting as much knowledge as possible from given examples of success. When knowledge is encoded in schemata and heuristics representing associations between different goals and methods of achieving them, ESA can enable the problem solver to achieve as many new goals as possible by maximizing 1) the number and 2) the generality of the heuristics and schemata extracted from each example.

Extracting many schemata can enable the problem solver to achieve subgoals which were achieved during the construction of complex examples. In the P OR NOT P example, we should not only have a heuristically invoked schema generalized from the proof of figure 4, but we should also extract general schemata from subproofs such as the proof of P OR NOT P from P. In addition, it is sometimes desirable to extract new schematic forms from example schemata. This amounts to learning new ways of combining schemata to achieve goals, and it makes it possible for MA to learn new "inferential forms" (ways of forming new proofs from existing proofs) like the derived rules of inference in [6]. Please see the expanded version of this report for a discussion of these issues [8].

This section focuses on maximizing the quality (generality) rather than the quantity of new heuristics and schemata. Maximizing generality enables the problem solver to achieve many goals which differ in insignificant ways from the goals successfully achieved in an example. For example, consider the composite schema used in the proof of P OR NOT P from the Empty-Set. The same composition of instances of EliminationOfAssumption, OrIntroduction (A and B), and AssumptionAxiom could be used to prove conclusions other than P OR NOT P from sets of hypotheses other than the Empty-Set; most people realize this composition could be used to prove any conclusion of the form A OR NOT A (A need not be the particular WFF P) from an arbitrary (possibly non-empty) set of hypotheses. ESA maximizes generality by minimizing the constraints associated with examples: dropping all irrelevant details due to idiosyncrasies of the example while retaining the important facts.

ESA uses explanations of the reasons for success as the basis for generalization and for determining the condition under which the generalized schemata should be invoked. The explanations are embodied in dependency networks generated during the process of solving a problem. Figure 5 shows part of the dependency network underlying the P OR NOT P example. An ESA algorithm computes the condition which determines when a novel schema should be invoked by examining this sort of explanation and collecting the facts crucial to the success of the schema. The following taxonomy is used to separate the important "wheat" from the irrelevant "chaff."

A TAXONOMY OF CONSTRAINTS

Essential constraints form integral parts of explanations of success and must be incorporated into the results of generalization. There are two types of essential constraint:

Essential inter-schema constraints connect component schemata together into a complex schema. Technically, these are the assertions which support the goal achievement by identifying parameters of instances of schematic forms with instances of prototypes.

Essential intra-schema constraints ensure that each component of a complex schema is "complete." Technically, these are the immediate supporters of the "self" constraints which support the achievement of the goal.

Optional constraints are "forced" or implied by essential constraints. They need not be included, but they should be: they do not alter generality, but they improve efficiency. Technically, any assertion which has some justification depending purely on essential constraints is in this class.

Extraneous constraints include most instantiation bindings and all implications based at least in part on extraneous constraints. Technically, these are just defined to be the non-essential, non-optional constraints.

Fig. 5: Dependency Net Underlying the P OR NOT P Example

In the P OR NOT P example, extraneous constraints include the identification of the conclusion as P OR NOT P and the identification of the set of hypotheses as the Empty-Set. In fact, one may use an arbitrary set of hypotheses, and the conclusion does not even have to be a disjunction of the form A OR NOT A. "Anything goes" so long as the essential constraints (which hold the composite schema together and which ensure that each component is legally instantiated) are not violated. It turns out that ambiguous constraints allow the ESA generalization method sketched in this paper to learn more from the P OR NOT P example than most people [8]. This is because most people don't realize that the same composite schema applied to proving P OR NOT P from Empty-Set can also be used to construct proofs:

OF (A OR B) FROM (Union Gamma {A})
OF (A OR B) FROM (Union Gamma {B})
OF (A OR B) FROM (Union Gamma {A} {B})

Unfortunately, our first implementation shares this fault: it only learns to invoke the composition when a proof of A OR NOT A from Gamma is desired (where A is an arbitrary WFF and Gamma is any SET-OF-WFFS).

IV RELATION TO PREVIOUS WORK

This paper continues research on ESA initiated by Prof. Gerald DeJong in [4].
This work is closely related to the hybrid analytical/empirical learning methods of Mitchell et al [q], but while Mitchell's methods are res- tricted to learning new heuristic conditions specifying when existing operators should be applied, the generalization method described in this paper provides new DrOblem solving operators as well as new heuristic conditions. The construc- tion of new operators out of combinations of old ones makes our system similar to the MACROPS learning procedure of STRIPS [IO] but our method is more "human oriented" and avoids reconstructing solutions during the generalization process. This is chiefly possible because we record data depen- dencies during problem solving. L71 l Also, STRIPS and LEX were self contained and automatic but somewhat autistic. They based learning on the results of very general (but inefficient) automatic problem solving methods whereas our emphasis is on apprentice-like systems which learn by observing the goal directed actions of effi- cient human experts. V CONCLUSION Explanation-based learning methods promise to turn examples of problem solving behavior into dramatic improvements in problem solving ability. This paper discussed generalization for improving schema-based problem solvers. ACKNOWLEDGEMENTS I hereby sincerely express my gratitude to Prof. Gerald DeJong for ideas, encouragement and advice. Thanks also to my office mate Jude Shavlik for improving the graphics interface and imple- menting a "Reason Maintenance System" in Interlisp on our XEROX 1100. REFERENCES [II G. F. DeJong, "Skimming Stories in Real Time: An Experiment in Integrated Understanding," 158, Yale University Dept. of Comp. Sci., New Haven, Conn., May 1979. C21 R. Wilensky, Planning & Understandi-, Addison Wesley, Reading, Mass., 1983. [3] G. S. Novak, "Computer Understanding of Physics Problems Stated in Natural Language," Technical Report NL-30, Department of Computer Science, University of Texas at Austin, 1976. C41 G. DeJong, "Generalizations Based on Explanations," -International Joint Conference on Artificial -Intelligence-&L, Vancouver, B.C., Canada, August 24-28, 1981, 67-70. C51 G. DeJong, "Acquiring Schemata Through Understanding and Generalizing Plans," Jnternational Joint Conference on Artificial Jntellinence-fi, Karlsruhe, West Germany, August 8-12, 1983, 462-464. [6] Z. Manna, Mathematical Theory of Commutation, McGraw-Hill, New York, 1974. [7] E. Charniak, C. Riesbeck and D. McDermott, "Data Dependencies," in Artificial Intellinence Programming Associates, Hillsdale, N.;., Lawrence Erlbaum 1980, 193-226. [8] P. O'Rorke, "Generalization for Explanation- based Schema Acquisition," Working Paper 51, univ. of Illinois Coordinated Science Laboratory Artificial Intelligence Group, Urbana, IL, 1984. [g] T. M. Mitchell, "Toward Combining Empirical and Analytical Methods for Inferring Heuristics," LCSR-Technical Report-27, Lab. for Computer Science Research, Rutgers: the State University of New Jersey, New Brunswick, New Jersey, March 1982. [lOI R. Fikes, P. Hart and N. Nilsson, "Learning and Executing Generalized Robot Plans," Artificial Intelligence 3, 4 (19721, 251-288. 263
Learning Operator Transformations Bruce W. Ported, Dennis F. Kibler Information and Computer Science Department University of California at Irvine Irvine, Ca 92717*+ Abstract A relational model representation of the effect of op- erators is learned and used to improve the acquisition of heuristics for problem solving. A model for each operator in a problem solving domain is learned from example ap- plications of the operator. The representation is shown to improve the rate of learning heuristics for solving symbolic integration problems. I. Introduction Machine learning research in problem solving domains has focus& on acquiring heuristics to guide the applica- tion of operators. Predominantly, researchers have assumed an operator representation (e.g. program code) which hides the operator semantics [5,6,7,10]. We call this operator rep- resentat ion opaque in that the transformation performed by the operator is not explicit. In contrast, transparent opera- tor representations (e.g. STRIPS-like) enable the learning agent to reason with operator definitions. This research examines two issues: o how to learn transparent operator representations from opaque represent ations. o how to improve the process of acquiring problem solv- ing heuristics by using transparent operator represen- t ations. We demonstrate the approach with a PROLOG imple- mentation, named PET, which learns to solve symbolic in- tegration problems. Section 3 formalizes the representation for operators used by PET and describes an algorithm for learning the representation. We call this representation of an opera- tor a relational model. We discuss a two step algorithm for learning a relational model for an opaque operator OP from example applications of OP. First PET induces a gen- eral form, PRE, for states in which OP is usefully applied and a general form, POST, for states resulting from the application. Then PET selects relations from background knowledge [12] which link features of PRE with features of POST. Discovering a good relational model is formulated as a state space search. Section 4 discusses how relational models improve the process of learning problem solving heuristics. The repre- sentation reveals features of heuristics which may be overly * N ew address: Computer Science Department, The University of Texas at Austin. ** This research was supported by the Naval Ocean Systems Center under con- tract N00125-81-1165. specific. Further, the representation suggests training in- stances which test these features, thereby guiding general- ization. For preliminaries, section 2 briefly reviews our past re- search on PET which serves as a “testbed” for experiment- ing with operator representations. II. The PET System [4,5 This section presents an overview of the PET system I . Two central features of PET are episodic learning of use ul problem solving macros and perturbation to auto- matically generate training instances. Episodic learning is an incremental approach to learn- ing heuristics which recommend problem solving opera- tors and sequences. The LEX system [7,8] learns heuristic rules which recommend individual operators. The heuris- tics learned are an accurate compilation of past problem solving experience, but, taken together, may not enable ef- ficient problem solving. The contextual information of an operator’s position in problem solving seQuences is not cap tured by LEX. 
MACROPS [3], on the other hand, learns operator sequences but does not acquire heuristics to select useful sequences to apply to particular problem states. Generally useful sequences are not identified, and reuse of the macros during problem solving results in combinatorial explosion [2].

PET learns heuristics for operator sequences by incrementally learning new sub-goals. PET can only learn a heuristic for an operator if the purpose of the operator is understood. Initially, this restricts PET to learning heuristics for operators which achieve a goal state. Problem states covered by these heuristics are learned as sub-goals. Now PET learns heuristics for operators which achieve the sub-goals. Operator sequences thus grow incrementally.

Perturbation is a technique for reducing teacher involvement during training by automatically generating near examples and near-misses. The role of the teacher in learning from examples is to generate and classify training instances. This role is diminished by shifting responsibility to the student. Given a positive instance POS for operator OP, PET generates and classifies further instances POS' by:

- generation: make a minimal modification of POS by applying perturbation operators to POS. These operators select a feature F of POS and generate POS' by deleting F from POS or by replacing F by a sibling in a concept hierarchy tree.

- classification: POS' is a positive instance for operator OP if applying OP to POS' yields the same (sub)goal as applying OP to POS.

Viewed abstractly, episodic learning of problem solving involves learning why individual operators are useful, and perturbation is useful in learning when operators should be applied. Sections 3 and 4 demonstrate the importance of learning an explicit representation of what individual operators do during problem solving.

III. Learning Relational Models

This section formalizes the relational model representation of operators and presents an algorithm for learning the representation from examples.

A relational model of an operator OP is an augmentation of a heuristic rule for OP. Following Amarel [1], a heuristic for OP is a production rule which explicitly represents OP's pre and post conditions. The form of the rule is

    PRE --OP--> POST

and has the interpretation: IF the current state, S, matches PRE, and the state resulting from apply(OP,S) matches POST, THEN OP is recommended in S. The pre and post state conditions are represented as parse trees of problem states. The following is an example production rule which recommends the operator OP in the state ∫ x² dx ("+C" is dropped for simplicity):

    ∫ x² dx  --OP-->  x³ / 3

Note that the state resulting from the operator application, POST, is explicitly represented as the RHS of the rule.

Heuristic rules are generalized using "standard" generalization techniques. For example, the candidate elimination algorithm [7] is used by LEX to form generalizations of heuristic rules of the form PRE --> OP. Applying the algorithm to states resulting from OP's application yields a generalization of POST. For each operator OP in a problem solving domain, PET uses the dropping-conditions and climbing-hierarchy-tree generalization operators to induce general forms both for states in which OP is recommended and for states resulting from recommended applications.*

Relational models are an augmentation of heuristic rules with background knowledge.
The background knowl- edge consists of domain specific relations. In the do- main of mathematics, PET uses the relations equal( X,Y suc(N,M), sum(L,M,N), product(L,M,N), and derivative( X,Y) . A relational model is a tuple (OP, PRE, POST, AUG). The augmentation, AUG, is a set of relations {refl, . . , rel,} l This research resentations which does not present a novel generalization technique. improve existing techniques are proposed. Instead, rep from background knowledge. Each relation reZ; E AUG has a relation name, or functor, and m 2 2 arguments, {a17 a2, * - * 9 a,}. The purpose of the augmentation is to re- late subexpressions of PRE with subexpressions of POST, thereby “linking” PRE to POST. To establish these links, each aj is constrained to be a subexpression of ei- ther PRE or POST, such that not all aj are from the same source. (Actually, this is a simplification. By allow- ing aj to be a subexpression of an argument of another relation in AUG, composites of relational descriptors can be formed by “daisy-chaining” a link between PRE and POST through multiple descriptors. For example, the re- lation that PRE is the double derivative of POST is repre- sented by derivutive(PRE,X),derivative(X,POST). See [9] for a description of the algorithm which permits chaining and its ramifications.) An evaluation function c estimates the “quality” of a relational model by measuring the coverage of PRE and POST by AUG. Intuitively, coverage is a measure of the number of nodes of PRE and POST which are in argu- ments of AUG. Formally, c((OP, PRE, POST, AUG)) = 1 SI 1 + 1 S2 1 where ] S ] is the cardinality of set S and S1 = {nodes n in PRE: 3rel(ul, a2.. . , a,) E AUGA 3i, 1 5 i < m, such that descenduntof(n, a;)} S2 is similarly defined for nodes in POST. Note that an in- dividual node in PRE or POST can contribute to coverage at most once since Sr and S2 are sets not bags. In addition to representing the transformation per- formed by an operator, a relational model constrains the interpretation of the heuristic. For example, the rule on page 3 is augmented with eg and sue relations, yielding the relation al model: I . T -suc(2,3)- The interpretation of the heuristic is: IF the current state, S, matches s x2dx and the state resulting from apply(OP,S) matches f such that the relations in the augmentation hold, THEN OP is recommended in S. Given an unaugmented rule R = (OP, PRE, POST), a relational model of R is constructed by searching for the set of instantiated augmentation relations, AUG, which best covers R. This search is implemented in PET as a beam- search through the space of candidate augmentations. In this space, nodes are represented by the tuple where Pool is the set of subexpressions of PR B AUG, Pool and POS d not covered by AUG. In particular, the initial state is (nil, {PREuPOST}). Th ere is one operator in this search which is described by: Given a state (AUG, Pool), SELECT a relational descriptor, D, from the set of background concepts. INSTANTIATE D with members of Pool or their sub-expressions. REMOVE selected Pool members from Pool, yielding Pool’. ADD instantiated descriptor to AUG, yielding AUG’. Generate new state (AUG’, Pool’). The search terminates with AUG when continued search fails to improve coverage. Built-in biases reduce the non-determinism of the search for an augmentation with maximal coverage and minimal complexity. 
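The two ingredients of that search, the coverage measure c and the single search operator, can be sketched as follows. The argument representations (relations as tuples, caller-supplied nodes_of and descendants accessors, and an instantiation generator) are assumptions made for illustration rather than PET's actual PROLOG encoding.

```python
def coverage(model, nodes_of, descendants):
    """Evaluation function c: count the distinct parse-tree nodes of PRE and of POST that
    fall under some argument of some relation in AUG (each node contributes at most once)."""
    op, pre, post, aug = model
    args = [a for (functor, *rest) in aug for a in rest]
    def covered(tree):
        return {n for n in nodes_of(tree) if any(n in descendants(a) for a in args)}
    return len(covered(pre)) + len(covered(post))

def expand(state, descriptors, instantiations):
    """One step of the beam search over augmentations: select a relational descriptor,
    instantiate it from the pool of uncovered subexpressions, remove what it used,
    and add the instantiated relation to AUG."""
    aug, pool = state
    for d in descriptors:
        for relation, used in instantiations(d, pool):   # caller enumerates legal instantiations
            yield (aug + [relation], pool - used)
```

Treating coverage as a simple node count keeps the search criterion cheap to evaluate at every beam expansion, which is consistent with preferring simple models of maximal coverage.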
In the selection of a relational descriptor, pref- erence is given to more primitive relations, such as equal and sue, over more complex relations, such as product. F’ur- t her, there are semantic constraints on the subexpressions selected to instantiate a relation. For example, the first pa- rameter in the derivative relation must contain a variable of differentiation. Finally, note that the algorithm tries large subexpressions from PRE and POST before small subex- pressions, thereby maximizing the coverage of the augmen- tation. If two relational models have the same coverage, then the one with fewer relations is preferred. This section introduced relational models and briefly described how they are constructed. It demonstrated that relational models can be built using the general techniques of state-space search. This search is constrained by built- in bias toward simple models with maximum coverage and by semantic constraints on relational descriptors. Section 4 describes an application of relational models. IV. Using Relational Models This section describes how PET uses relational models to improve learning of problem solving heuristics. Rela- tional models explicitly represent the transformation per- formed by operators. This enables PET to reason with operator semantics to guide the generation of training in- st antes. As described in section 2, PET applies perturbation op- erators to a single teacher-supplied training instance to gen- erate and classify multiple near-examples and near-misses.* Perturbation automates part of the teacher’s role, but not the task of selectivefy generating training instances which are most useful in enabling concept convergence. Relational models present an alternative to naively generating all pos- si ble training inst antes. PET selectively generates training instances which test features of a concept suspected to be spurious or overly specific. Spurious features are removed with the dropping conditions generalization operator. Rather than test the relevance of every feature, PET heuristically selects can- didates. Given relational model (OP, PRE, POST, AUG) the heuristic states: Features of PRE which are not transformed by OP may be irrelevant to the rule recommending OP. Those features of PRE which are not transformed are exactly those linked by the eq relation to features of POST. This heuristic identifies candidate irrelevant features which can be tested with perturbation. Relational models also guide the generation of training instances which test features suspected to be overly specific. Again, the selection of candidate perturbation operators is heuristically guided. The heuristic relies on two sources of . - mformat Ion: o relational models - which represent the transformation performed by an individual rule application. o episodes - which represent the “chaining” of individual rules into a useful problem solving sequence. Consider an episode E consisting of rule applications rl,r27---,fn- Each rule r; is represented with relational model (OP;, PRE;, POST;, AUG;). AUG; represents the “intra-rule” links between PRE; and POST;. “Inter-rule” links are implicit in E. As reviewed in section 2, r; is added to an episode if it enables r;+l. This establishes an implicit link between POST; and PREi+l. Constraints imposed on r; by rj, i < j, are discovered by following inter-rule links through E and intra-rule links through rules. These constraints suggest perturbation operators for r;. 
The heuristic of locating overly specific features by propagating constraints through episodes is motivated by this observation: due to the incremental growth of episodes, for any pair of rules r_i and r_j, i < j in E, the size of the training set for r_j exceeds the size of the training set for r_i, because every training instance for r_i is also a training instance for r_j. This suggests that features of PRE_j and POST_j are more general than features of PRE_i and POST_i. PET selects perturbation operators which capitalize on this observation by back-propagating general features of PRE_j to potentially overly-specific features of PRE_i.

We illustrate this back-propagation with an example from Utgoff [11]. Assume that from prior training for the operator

    OP1 : sin²x → 1 − cos²x

PET has acquired a relational model whose PRE and POST parse trees are linked by the augmentation. Note that this model has been generalized from ground instances such that PRE_op1 matches states of the form ∫ (sin²x)^k sin x dx for any nonzero integer k. Now PET is given the training instance ∫ sin⁶x sin x dx with the advice to apply the opaque operator

    OP2 : sinⁿx → (sin²x)^(n/2)

PET applies the operator, yielding ∫ (sin²x)³ sin x dx. As reviewed in section 2, PET can only learn a rule for this training instance if it achieves a known (sub)goal (allowing the rule to be integrated into an existing episode). In this example, the training instance achieves the subgoal defined by PRE_op1. The relational model for the training instance, linking PRE_op2 and POST_op2 through the augmentation relation product(2,3,6), is built by the state-space algorithm in section 3.

Now that episodic learning has associated the relational models for OP1 and OP2, perturbation operators are applied to generalize the model for OP2. The relaxed constraint in PRE_op1 is regressed through the episode with the potential of identifying a feature of PRE_op2 which can be relaxed (generalized). The inter-rule link implicit in episodes connects the relational model of OP2 with the relational model of OP1. Matching POST_op2 with PRE_op1 binds variable n1 with 3. This suggests that the relational model for OP2 is overly specific. Perturbation tests relaxing this constraint by generating a training instance with the feature slightly modified. This is done by traversing intra-rule links represented by the augmentation. Specifically, PET generates a useful training instance by the following steps:

1. Locate the relation r in AUG_op2 with an argument of 3 from POST_op2. In this case, r = product(2,3,6).

2. Perturb r to generate a slight variant, r'. This is done in three steps. First, replace the argument with a neighboring sibling in a concept hierarchy tree; in this case, replace 3 with 4. Second, locate an argument p in r such that p is a sub-expression of PRE_op2 and replace it by a free variable x; in this case, p = 6. Third, evaluate the resulting partially instantiated descriptor to uniquely bind x to p'. In this example, p' = 8 and r' = product(2,4,8).

3. Generate PRE'_op2, a perturbation of PRE_op2, by replacing p by p'. Here, PRE'_op2 = ∫ sin⁸x sin x dx.

4. Classify PRE'_op2 as an example or near-miss of a state in which OP2 is useful. As reviewed in section 2, PRE'_op2 is an example if apply(OP2, PRE'_op2) achieves the same subgoal as apply(OP2, PRE_op2). In this example, PRE'_op2 is an example which achieves the subgoal of PRE_op1.

Note that the product(2, n1, n2) augmentation descriptor corresponds to the concept even-integer(n2).

* LEX uses a similar technique. See [8].
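The relation-perturbation step in the example above can be sketched numerically as follows. The sibling and solve helpers are deliberately simplified stand-ins (a real concept hierarchy and descriptor evaluator would take their place), so this is a sketch of the idea rather than PET's implementation.

```python
def perturb_relation(rel, sibling, solve):
    """Perturb an augmentation relation such as product(2, 3, 6): replace the POST-side
    argument by a hierarchy sibling (3 -> 4), free the PRE-side argument, and re-evaluate
    the descriptor to bind it again (6 -> 8), giving product(2, 4, 8)."""
    functor, a, b_post, p_pre = rel
    b_new = sibling(b_post)              # step 1: neighbouring sibling in the hierarchy
    p_new = solve(functor, a, b_new)     # steps 2-3: re-solve for the PRE-side argument
    return (functor, a, b_new, p_new), p_new

rel2, p_prime = perturb_relation(("product", 2, 3, 6),
                                 sibling=lambda n: n + 1,
                                 solve=lambda f, x, y: x * y)
print(rel2, p_prime)   # ('product', 2, 4, 8) 8 -> perturbed instance: integral of sin^8(x) sin(x) dx
```

The perturbed exponent 8 is exactly what licenses generalizing the OP2 rule toward the even-integer concept mentioned above.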
PET uses this relational model to guide the incremental refinement of the rule with subsequent training (see [9]). In addition to this example, PET learns relational mod- els for eighteen other operators. The longest episode is a sequence of seven operations. We are currently examining metrics for measuring performance of learning algorithms. Representational adequacy is of major importance. For heuristic accuracy, description languages for rules should represent relations observed among features during train- ing. Relational models address this concern. V. Summary This paper examines the effect of operator represent a- tion on the acquisition of heuristics for problem solving. Opaque operator represent at ions, which conceal the trans- formation performed by the operator, are frequently used. Transparent operator representations reveal the transfor- mation, allowing reasoning about operator effects. How- ever, it is unreasonable to assume transparency in “real- world” learning domains. This paper presents an approach to learning transpar- ent representations from examples of opaque operator ap- plications. The transparent representation is called a rela- tional model. Domain-specific background knowledge, rep resented as a set of relations, augments rules which model the transformation of each operator. The learning algo- rithm is described as a state-space search for an augmen- tation which is simple yet predictive. Once learned, a rela- tional model for an operator OP is also a heuristic which identifies states in which OP is recommended. Lastly, the paper examines an advantage of the rela- tional model representation over “traditional” opaque rep resentat ions. The representation reveals features of heuris- tics which are candidates for generalization. A method for automatically generating training instances which test these candidates is presented. The research ideas are implemented in a system which learns to solve symbolic integration problems. Please re- fer to [9] for a more complete description of this research including an algorithm for generalizing over a set of rules represented as relational models. Finally, PET generalizes the original training instance with examples generated by perturbation. The following relational model is the minimal generalization of this (2 member) training set: Reference8 [I] Amarel, S. “On Representations of Problems of Rea- soning About Actions,” in Machine Intelligence 3, D. Michie (Ed.), 131-171, 1968, Edinburgh Univ. Press. [2] Carbonell, J. “Learning by Analogy: Formulating and Generalizing Plans from Past Experience,” in Machine Learning, Michalski, Carbonell, Mitchell (Eds.), l37- 162, 1983, Tioga Press. [3] Fikes, R., Hart, P. and Nilsson, N. “Learning and Executing Generalized Robot Plans,” Artificial Inteffi- gence, 3, 251-288, 1972, North-Holland Publishing Co. [4] Kibler, D. and Porter, B. “Perturbation: A Means for Guiding Generalization,” Proceedings of Interna- tional Joint Conference on Artificial Intelligence, 415- 418, 1983. [S] Kibler, D. and Porter, B. “Episodic Learning,” Pro- ceedings of National Conference on Artificial Inteffi- gence, 191-196, 1983. [6] Langley, P. “Learning Effective Search Heuristics,” Pro- ceedirlgs of In terna tionaf Joint Conference on Artificial Intelligence, 419-421, 1983. [7] Mitchell, T. Version Spaces: An Approach to Concept Learning, PhD Dissertation, Stanford University Com- puter Science Dept, December 1978, CS-78-711. (81 Mitchell, T., Utgoff, P., and Banerji, R. 
“Learning by Experimentation: Acquiring and Refining Problem Solving Heurist its,” Machine Learning, Michalski, Car- bone& Mitchell (Eds.), 163-190, Tioga Press, 1983. [9] Porter, B. Learning Problem Solving, PhD Disserta- tion, University of California at Irvine, Information and Computer Science Dept, (forthcoming). [IO] Silver, B. “Learning Equation Solving Methods from Worked Examples,” International Machine Learning Worfcshop, 99-104, June 22-24,1983, Monticello, Illi- nois. [ll] Utgoff, P. “Adjusting Bias in Concept Learning,” In- ternational Machine Learning Workshop, 105-109, June 22-24,1983, Mont icello, Illinois. [l2] Vere, S.A. “Induction of Relational Productions in the Presence of Background Information,” Proceedings of fnternationaf Joint Conference on Artificial fnteffi- gence, 349-355, 1977. [l3] Waldinger, R. “Achieving Several Goals Simultane- ously!” Machine Intelligence 8, 1977 Elcock, E. W. and Michle D. (eds.), New York: Halstead and Wiley. 282
Constraint Limited Generalization: Acquiring Procedures From Examples Peter M. Andreae M.I.T. Artificial Intelligence Laboratory 545 Technology Sq., Cambridge, MA 02139 Abstract Generallzatlon IS an essential part of any system that can acquire knowledge from exanIples. l argue that generallzatlon must be limited by a variety of constraints tn order to be useful This paper gives three pnnclples on how generallzatron processes should be constramed. It also describes a system for acquiring procedures from examples which IS based on these pnnclples and IS used to illustrate them. 2. Acquiring Procedures from Examples. In the standard concept acqulsttlon task. a teacher provldcs the learner with a series of examples (and possibly non-examples) of a concept. The learner must generalize these examples to obtain a descnptlon of the concept from which the examples were derived. The procedure acquisition task IS s1mIIar. a teacher provides the learner wrath a senes of traces of the execution of a procedure. Each trace wtll show the operation of the procedure in one partrcular Set of circumstances. The learner must generalize the traces to obtain a __-- ‘This paper reports bvolk done at the Altlflclal lntelllgence Laboratory of the Massachusetts lnstrlute 01 Technology .%p[JOrt for ihe laboratory’s altlf!cEil Ili!elllgence research IS provided In part by the Advanced Research Profccts Agency of the Department of Defense under OffIce of Naval Research contract NO014-8&C-0505 “Procedure Matcher and Acquirer descrlptlon of the procedure that will apply under all circumstances-the procedure that the teacher was using to generate the traces. For example, the teacher may snow a robot how to assemble a device In several different cases: perhaps the normal case. the case when the parts are not found rn the usual position, the case when the washer sticks during assembly. and the case when the screw holes are not aligned correctly. For each case. the teacher will lead the robot through the entire assembly task. and each trace WIII consist of the sequence of actions and the feedback paiterns after each action. From this, the robot should acquire the complete assembly procedure. Several people (e g., Mltchelt [19X3], Langley [1983] arid Anderson [1983]) have approached related problems using a prodlrctlon system representation of the procedure being acquired. Here, we wash to ac- quire procedures with explicrt control structure. This control structure- sequencing. branching, loops. and variable reference-is not present in the example traces and must be Inferred. Therefore. we cannot use the generalization methods used in in either concept or production system acquisition In a straightforward manner. Acqulsltion of procedures with exptlclt control structure has also been studied by Van Lehn [1983] (a multi-column subtractlcn procedure) and Latombe [1983] (a robot “peg-m-hole” procedure). The Induction of finite stale automata from regular strings. and the induction of functions from Input/output pairs (see Anglum and Smith [1982]) IS related to procedure acquisition, but the goals and methods in these tasks are sufficiently different that they will not be discussed here. 2.1 Domain. PMA embodtes a procedure acqulsltlon algorithm that IS intended t0 apply to a wide variety of dnrnalns. For each domain. there is a set of legal acf~ons that can be performed In the domain and the feedback patterns thdt will result from the actions. 
PMA IS deslyned to acquire procedures in any dornam In which the actlons are specified by an actlon type and a set of parameters, (not necessarily numertcal), and the patterns consist of a set of pattern components. each component being speclfled by a pattern type and a list of parameters All the examples glven below wtll be tal\r?n from a simple two drmen- slona! robot world which meets these crlterla. The pnmnltlve actions of this robot domain Include IlOVE. ~IOVE-U~ITiL.-C(II~TACT, ROTATE, GRASP and UNGRASP. The pararneters of the HOVE and KOVE-UNTIL-CONTACT actions consist of a vector specif;,lr-,g the distance and dlrcction of the move. the parameter of the ROTATE actlon IS an angle, and the other two actions have an empty parameter list. The pafterr? that the robot world returns In response to an action has three components: the new POSITION of the robot, given In x-y coordinates; its ORIENTATION, specified by an angle: and the CONTACT, if any, between the robot and an obstacle, specified by direction of the obstacle. 2.2 Representation. The traces (or examples) gtven to FMA by the teacher are se- quences uf alternating nct/or?s and pafler~s starting with a STAKT action and endlng with a STOP actlon They WIII be represcnled by a sequence of eve/Its. each event containing a pattern and the following action. Figure 1 shows two traces the teacher might provide to teach a Simple turtle procedure for clrcumnavlgatlng obstacles. 7 he turtle procedure IS “Move towards goal; if you hit something, move perpendic- ularly away from the obstacle 1 step, move to the side 1 step, and try again.” The first trace results from the application of the procedure when no obstacles are present. and the second, when one small obstacle is present. 6 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. /_I START at: (0, -3.5) L-J MOVE 1 @O” Figure 1. Two Turtle Traces. 7 he procedures that PMA must Infer from the traces may have condItional branchmg, lteratlon (loops). vanables, and generaltzed ac- tlons that may specify a class of primttlve actlons. We will represent these procedures by a directed graph rather lake the usual graphlcal representation of a finite state automata. Each node of the graph is marked with an event which specifies the condition under which control can pass to the event trom the previous event, and an act/on to perform If control does reach the event. The conditions are generalizations of the patterns In the traces. The condltlon may also assign variables to parameters of the pattern for use In later actions. One event, which has no links into tt. is dlstlngurshed as the sfart event and contains a ilull condition and a START action. Conditional branchmg is represented by multiple edges or I&s proccedmg from one event, and iteration or loopmg IS represented as a cycle In In the graph. Generalrzed actions are represented in the same way as the pnmltive actlons, i.e., by an actlon type and associated parameters. Figure 2 shows the representatton for the turtle prpcedure. The MOVE-WTIL-CONTACT-TOWARD (0. I)) actlon IS an example of a gen- eralized action--lt IS not one of the pnmltlve actions of the domain. It specifies whatever HOVE-UNTIL-CONTACT actlon will move from the current posItton toward the posltlon (0.0). The event containing this actton IS followed by a conditional branch: If the positton after the MOVE- UNTIL-CONTACT-TOWARD action is [at: (O.(J)] then the left branch will be taken and the START action performed. 
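One plausible rendering of this event-based trace representation is sketched below. The namedtuple layout and the particular coordinates are illustrative assumptions, not a transcription of figure 1.

```python
from collections import namedtuple

Action = namedtuple("Action", ["type", "params"])    # e.g. Action("MOVE", ((1, 90),)) -> distance @ angle
Pattern = namedtuple("Pattern", ["components"])      # position / orientation / contact components
Event = namedtuple("Event", ["pattern", "action"])   # the pattern seen, then the action taken

# A short obstacle-free trace fragment: start, step toward the goal, stop.
trace = [
    Event(Pattern({"at": (0.0, -2.0), "orientation": 90}), Action("START", ())),
    Event(Pattern({"at": (0.0, -2.0), "orientation": 90}), Action("MOVE", ((1, 90),))),
    Event(Pattern({"at": (0.0, -1.0), "orientation": 90}), Action("MOVE", ((1, 90),))),
    Event(Pattern({"at": (0.0, 0.0), "orientation": 90}), Action("STOP", ())),
]
```

Keeping the feedback pattern with the action that followed it is what later lets the matcher treat each event as a (condition, action) pair.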
If the position is anywhere other than (0. O), and there IS a contact at some angle [contact: (any- angle)], then the actual angle of contact will be stored In the variable 0 and the right branch WIII be taken, entenng the loop. If neither condition IS met, the procedure fails. The directions of two MOVE actions in the loop are specified In terms of functions of the angle 0. - START 1 at : (anywhere) MOVE-UNTIL-CONTACT-TOWARD (0,O) at: (0,O) STOP at: (anywhere) contact: (any.angle)(8] I, MOVE .5 @(0 - 180”) L at : (anywhere) MOVE 1 a(0 - 90”) Figure 2. Turtle Procedure PMA can infer procedures like that of figure 2 from traces like those of figure 1. The following sections outline the matching and generalization methods that it uses. 2.3 Matching and Generalizing. PMA operates incrementally on two levels. Like WInston’s [1970] concept learner. PMA builds its descnptlon of the goal procedure in- crementally. taking one new trace at a time and gcnerallzlng Its current descnptlon of the procedure to incorporate the new trace. Its initial description of the procedure will just be the first trace. PMA also processes each new trace Incrementally. To incorporate a new trace, PMA matches the current procedure and the new trace to find a pairing between the procedure events and the trace events. It notes any differences and generalizes the procedure to eliminate the differences. However, the matching and generalizing IS done in several stages. The in&al stage of matching the procedure and the trace does no generalization of the mdlvldual events In the procedure and finds only a skeleton match that pairs procedure and trace events that match exactly. This provides the context for the second matching stage that does generalize procedure events, If necessary, to find a more complete pairing of procedure and trace events. This, in turn, provides the context for further stages which can perform more powerful generalizations in :he appropriate circumstances. Tlus Incremental generalization is based on the principle of context limited generalization-the more powerful generalization methods are only applied In the context of the match produced by the less powerful methods. Since the later stages depend upon the correctness of the earlier stages, It IS Important that the earlier stages do not fmd any spurious patnngs of events. Therefore, PMA must only attempt to match two events when there is good justification for doing so. To avoid spurious pairings in the skeleton match, PMA only searches for pairings mvolvmg the events of the procedure for which reliable matches can be found-the key events. The START and STOP events are obvious candidates for the key events Figure 3 shows the skeleton match of the traces of figure 1 using these key events. In more complex procedures, the key events may also include cJnque events (events of which the action type occurs only once in the procedure), and bottle- neck events (sequences of events at the merging of several branches through which control always flows). 1 1 b at: (O,-6) at: (O,-6) MOVE 6 @So’ MOVE 3 890° P 1 1 c at: (0,O) STOP Figure 3. Skeleton Match 2.4 Second Stage-Propagation and Event Generalization. The second stage builds on the skeleton match by pairing pro- cedure and trace events found by propagating through the procedure and trace startmg from the pairs found in the skeleton match. 
The propagation exploits the sequential structure of procedures In order to fmd justlflable pairs m much the same way as WInston’s analogy pro- gram [Winston 19841 exploits the causal structure of stones. Figure 4 Illustrates this. building on the skeleton match of figure 3 The pair a-ct was found In the skeleton match. Smce b and 3 were the rcspectlve successors of a and u, PMA paired b and ,f?. Propagating from b-,9. PMA attempted to pair c and 5. PMA also propagated backwards from c-; to find the pair b--E, and attempted to pair a and n. For this stage, if the procedure and trace events being paired only match partially, PMA attempts to find a generalization of the two events to place In the new procedure. If no generalrzatlon can be found, then the pair IS abandoned. In figure 4, b and fl did not match exactly, but PMA found a generalization of them. as shown at the bottom of the figure. When it attempted to pair c and 7, not only were they not equal. but there was no possible generallzatlon of the two events, so the pairing was abandoned. Stmllarly, In propagating backwards, PMA found a generalization of b and E but not for a and d. Figure 4. Propagation and Event Generalization lhe propagatron stage IS compicted by several bookkeepmg steps. Parrs that involve the same events are grouped and generalized, and events from the procedure and the trace that have not been paired are collected. In the example of figure 4, the event h IS Involved In two pairs, (b-/3 and b-f), which are then matched and generalized. The events y and Cs were not parred with any other events, so they are srmply mstalled Into the new procedure along with b-(i-r and the skeleton pairs a-u and c-c. The new procedure IS shown In figure 5. - a-u START I I b-R- 6 & L at (anywhere) 1 MOVE-UNTIL CONTACT TOWARD (0. (1) c- !I at (0. 0) at. (O-3) contact: 00 STOP MOVE .5 r~-!Xl T I ’ II Figure 5. New Procedure The matching and generalrzrng of events IS done wrth reference to the action and condition hierarchies. These hierarchies are partiaily oraered graphs where each node dt =scrrbes a generalized actron or condrtron. Each actron hrerarchy corresponds to one of the classes of action specrfred by the domain, and the base of the hre:archy is the prrmrtive action of that class. Srmllarly, each condrtron hierarchy corresponds to one component of the pattern specified by the domain, and tne base of a condrtion hierarchy is a pnmitrve pattern that occurs in the traces. Every higher node of a hrerarchy IS a generalized action type or condition. Figure 6 shows part of the MOVE action hrerarchy. Each node describes the type of the action or conditron and the parameters associated with it. For example. the HOVE-TO node in figure 6 has a position parameter (z. 71). Also attached to each node, but not shown in figure 6, are procedures for determining whether an instance of the node is a generalizatron of an instance of a lower node and constructor procedures for creating generalizations of two instances of lower nodes. MOVE-UNTIL-CONTACT-TOWARD (z, y) Figure 6. EYIOVE Action Hierarchy Figure 8. Procedure with Parallel Segments If PMA IS given two actions to match, It will first determlne If they are of the same class. If not, the match rmmedlately falls. Otherwise, it will determme therr types, and the relative positron of the nodes of those types in the appropriate action hierarchy. If the actions are both instances of the same node (i.e.. they are of the same type), it simply tests equa!rtv of the parameters. 
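A hierarchy node could be represented roughly as follows; the field names and the idea of storing the two attached procedures as callables are assumptions for illustration.

```python
class HierarchyNode:
    """A node of an action (or condition) hierarchy: a generalized action type, its parameter
    names, its direct superior, and the two attached procedures described above."""
    def __init__(self, name, params, superior=None, generalizes=None, construct=None):
        self.name = name                  # e.g. "MOVE-TO"
        self.params = params              # e.g. ("x", "y")
        self.superior = superior          # the node directly above this one, if any
        self.generalizes = generalizes    # test: is an instance of this node a generalization of a lower instance?
        self.construct = construct        # build a generalization of two instances of lower nodes
```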
!f one node IS a direct supenor of the other, it ~11 invoke the appropriate procedure attached to the higher node to determme whether the first action is a direct generalrzatton of the second. If ether of these tests fall, or neither node IS a drrect superior of the other, rt will search for a node that is a d,rect superior of both nodes and :nvoke the constructor procedure to create a generalization of the two actrons. Often the constructor procedure will not return a generaltzatron, in which case the match fails. The same process is followed in matching patterns and/or condrtions. Each new domain in which PMA IS to be used will requtre a different set of action and condrtron hlerarchles. snce they are obviously domain dependent. However, the structure of the hlerarctires and the way they are used in matching and generalizing events remarns tne same across domarns. Furthermore. the hrcrarchrcs thenlselves could be acqurred from the traces provided by the teacher. How thrs might be done WIII be discussed bnefly in a later sectron. The same mechanrsm could also be used to extend hierarchies whrch have been prcvlousty specified to incorporate new generalized actions or conditions that are needed for particular classes of procedures. 2.5 Third Stage-Function Induction. The third stage of matching and generalrzlng searches for a partrc- ular configuration-parallci segmenrs--in the description of the proce- dure produced by tne first two stages. There are no parallel segments in the procedure of figure 5,. but there are in the procedure of figure 8 which was generated by applymy the first two stages to the procedure of figure 5 and the new trace shown In figure 7 A segment IS a sequence of connected events with no branching. Two segments of a procedure are parallel If they start and end at the same events. they contain the same number of events. the corresponding conditions match. and the corresponorng actions are of the same type. Paral!el segments rep- rescn! events that the second stage attempted to pair but abandoned because It could find no generalizations of the actions usng the action goal. (0.0) ( at: (0.4) MOVE 3 090” 1 at: (O,-3) contact: -135” Figure 7. Third Turtle Trace at: (0, -3.5) Parallel at : (.35, -3.35) MOVE 1 BO” Segments MOVE 1@45” s, s, / hterarchy. The corresponding IJOVE actlons In the parallel segments of ftgure 8 cannot be merged wlthout reference to the contact angle in an earlier pattern upon whtch the dIrectIons of the MOVE’s depend. No generalization in the action hierarchy could express this dependency. The identical context of the parallel segments suggests that they play the same role in the procedure. Wtth this justlftcatlon, the third Stage applies a more powerful generalizatton method which attempts to match the events by searching for functional dependencies of actions upon earl:er patterns. in the example of figure 8. the two pairs of MOVE actions should be generalized to HOVE’s whose direction is given by the earlier contact angle menus 180 and g0 respectively, as shown in the procedure of figure 2. These functional expressions are simple and are found readily When the parallel segments are merged by this third stage, the resulting procedure IS exactly the goal procedure of figure 2. FIndIng the functlonal dependenctes involves a double search to find both an earlier pattern component on which the actions may depend and also the function relating the pattern component to the action. 
To avord finding spurious functional relatlonshlps, PMA searches for the condltlon closest to the actions for which It can tmd a functional relation. For each candidate condition component that the first search con- slders. PMA searches the space of possible functions that fit the past values of the condition components and the correspondmg values of the actions bemg merged. (Note that this requires that PMA retain a certain amount of InformatIon about the past values of the patterns and actlons from which the generalized conditions and actions were con- structed). 1 he space of functtons IS searched by incrementally building expressions from a known set of operators. The choice of operators is constrained bv the type of the input and output values (posltlons, angles, numbers, Ilsts, etc.). which requires that the types of the arguments and ranges of every operator must be known. The algorithm Initially considers expressions containing a single operator applied to the domain values (from the condition) which returns the range values (from the action). If none are found, It WIII recursively apply any appropriate operators to the domam and range values, and search tor an “connecting” operator which returns the new range values when applied to the new domain values. The resulting expresslon will be the composition of the inverses of the operators applied to the range values, the connecting operator. and the operators applied to the domain values. The search falls when it cannot find an expresslon within some complexity limit. Functions lnvolvtng constants pose a problem for function induc- tion. snce It IS not possible to search the space of all possible values of constants rf the space IS Infinite. as In a domain Involving real-valued parameters such as the robot domatn. PMA’s algorithm solves this prob- lem by only considering one new constant for each expression. Such a constant can be found If applying an operator to each pair of the domain and range values produces a constant value. For example, when the difference operator IS applied to the pairs of angles (00 . -00”) and (135 . -45-). the result IS 180 for both pairs. The required expression can be found by inverting the difference operator and using the con- stant 180 I to obtain the expression: mowedrecfion = ( - contact-angle 180 ). If there are any constants with predetermmed values which may be relevant to the functional dependency, these can also be included In the candidate expressions. One source for such known constants IS the condition immediately preceding the actions being merged. The possible relevance of the parameters of this condition IS justified by the fact that this condition represents information about the state of the world m which the action is to be performed. The algorithm for searching through the space of functions relies only on bemg provided with a set of Invertible operators whose domain and range types are specified. The operators need not be numeric (although they are for the robot domain) and the algorithm is therefore quite domain independent. The generalization 6f the third stage is more powerful than that of the second stage, both because it involves two events simultaneously and because the space of possible functions is very large. In fact, if the space IS unconstrained, it will be possible to find a functlonal relation t!etween almost any pattern and action. 
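The constant-discovery trick can be sketched as follows; the operator table and the sample angles are illustrative (the pairs simply reproduce a constant difference of 180, as in the move-direction example above).

```python
def find_constant_relation(domain_values, range_values, operators):
    """Apply each invertible operator to corresponding (domain, range) pairs; if one operator
    gives the same value for every pair, that value is a usable constant and the dependency is
    recovered by inverting the operator."""
    for name, forward, inverse in operators:
        images = [forward(d, r) for d, r in zip(domain_values, range_values)]
        if len(set(images)) == 1:
            k = images[0]
            return name, k, (lambda d, inv=inverse, k=k: inv(d, k))
    return None

ops = [("difference", lambda d, r: d - r, lambda d, k: d - k)]
name, k, f = find_constant_relation([90, 135], [-90, -45], ops)
print(name, k, f(135))   # difference 180 -45  ->  move_direction = contact_angle - 180
```

Because only one new constant is admitted per expression, the pairwise test above stays cheap even over real-valued parameters.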
For this reason, the functional generalization is only applied in the context of parallel segments and the Complexity of the expressIons that are considered must be const:ained by the number of data points available. At this pomt we note that some of the generalized actions (e.g., MOVE-TO (z. y)) are actually primitive actions whose parameters are a function of the immediately preceding pattern. These nodes in the action hierarchy are essentially memolzed forms of these “local” func- tional relationships. The action hierarchy can be augmented by noting reoccurring actions wltn the same local functions and constructing the memo;zed form of the function. This has not ye: been Implemented in PMA. 2.6 Final Stage-Consistency checking. The final stage of PMA checks that the description produced by the first three stages satlsfles the constraint that valid procedures must be determmlstlc, i e , at every step, the procedure must specify exactly one action. This constraint may be violated If the condltlons at a conditional branch are not sufficiently distinct. If there are no possible patterns that would match more than one of the branching condltlons, then the branch satisfies the constraint. It there IS a pattern which matches two condltlons. then the branch may be Indeterminate. We adopt the convention that It one of the conditions IS a strict generalization of the other, then control passes to the most speclflc. This conventlon eliminates the need for condltlons with complex exception clauses. If this IS not the case-either the two condttrons are are identical or part of one condition IS a generalization and part IS a speclaltzatlon of the other condltlon-then the branch violates the constramt, and must be rectified. There are several ways a non-detcrrnlnistlc branch could arise, each representing a dltferent way of resolving the non-determmlsm. One source IS that the second stage was not able to find a generalization of the two actions of the events involving the conflicting condrtions. PMA therefore attempts to generalize the actions by searching for a funcilonal dependency as in the third stage. If this IS successful, the events can be combmed, and the indeterminacy removed. If this IS not successful, it will attempt to specialize the conditions on the assurnption that the second stage may have over-generalized them. This is done by searchmg In Ihe condition hierarchy for a node lower than the current condition. For example. it might be that some action should be performed only when the posltlon IS within some circular region. If the mrtlal traces contain the action occurring In several posl!ions, PMA will generalize the condition to [at: (anywhere)]. When later traces show a different action occurring at other positions, PMA will have to specialize the ongmal condition to [inside: (circle-l)]. If no speclallzatlon node IS found in the con&Ion hierarchy, it may be possible to create a new node using standard concept acquisition techniques. For example, if there were no circle node, one could be created, using the positions associated with the tlrst action as the positive examples of the new concept and the posl:lons associated with the other action as the near misses. In a domain like the robot world involvmg numerical and geometric parameters, it may be possible to use an algorithm similar to the function induction algorithm to create the expressions representing the new concepts. This has not been implemented in PMA. 
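The determinacy test driving this final stage can be sketched as a predicate over a pair of branching conditions; may_overlap and strictly_more_specific are assumed, caller-supplied tests rather than PMA's own routines.

```python
def branch_is_deterministic(cond_a, cond_b, may_overlap, strictly_more_specific):
    """A branch on two conditions is acceptable if no pattern can match both, or if one
    condition strictly specializes the other (control then goes to the most specific);
    otherwise the branch is indeterminate and must be rectified."""
    if not may_overlap(cond_a, cond_b):
        return True
    return strictly_more_specific(cond_a, cond_b) or strictly_more_specific(cond_b, cond_a)
```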
If neither of these methods eliminates the indeterminate branch, the pairrng that created the branch event will be “undone”. This may have to be repeated until all indeterminate branches are removed. 3. Discussion. The need for constraints on generalization IS not a new Idea. Winston [1970] constrained his concept learner to always choose the most speclfrc generallzatlon consistent with the examples. Furthermore, It was constrained to Ignore negative examples unless there was exactly one difference from the current concept, indicating an unarnblguous change to the current concept. These constraints reduced the search by avoiding the need for backtracking, which IS very expensive. MItchelI’s 119821 version space algorithm relaxed these constraints by provldmg an efficient characterization of an entire set of generalizations consistent with the examples. This falls, however. If dIsjunctions are allowed in tne descnptlons, and further constramts are necessary. Efficiency IS not the only reason for constraints. In some cases, the generalization task IS so under-specified that addItional constraints must be found in order to perform the task at all. A good example is Berwlck’s language learner [1982] which acquires grarnmar rules when given only grammatical sentences and no negative examples. It was only by adopting a particular parser and the very strict constraints on the form of its rules that It was possible to learn any grammar rules stnctly from posltlve examples. The three principles stated In the Introduction describe three classes of constraints on generalization which apply to any generalization task. 3.1 Domain Constrained Generalization. Exploiting the constramts of the domain IS an important and es- tnbllshed technique for all areas of Al. Domain constramts may reduce the search space by ellmmatmg descriptions that can vaildly be gen- 9 erated by the representation language, but descnbe situations that are Illegal in the domatn. PMA exploits the constraint that procedures must be determtnlstlc to eliminate any descnptions with non-deterministic branches. This determtnacy constraint also reduces the space of legal generalized actrons. Although the action [MOVE-TO (0, O)] represents many possible primitive MOVES, In any particular situation (i e., from any particular position) it specifies exactly one. However, the action [MOVE 1 @(any-angle)] is indetermmate in that it never speciftes a particular prlmltlve action and the determlnacy constraint therefore eliminates it from consideration. Domain constraints may also be used to guide the generalization process in ways other than simply reducing the search space. For example, it is a particular property of the robot domam that MOVES and MOVE-UNTIL-CONTACTSs are very closely related. though they are ac- tually different primitive actions. PMA exploits this relation by treating ROVE-UNTIL-CONTACT as a generalization of IItOVE, and is able to deter- mine when a particular MOVE made by the teacher was intended to be a HOVE-UNTIL-CONTACT. This type of generalization is very domain spe- cific, but illustrates the way in which particular properties of a domain can be used to Increase the power of the generalizer. 3.2 Undesirability Ordering. In order to guide the generalization processes, some ordering must be placed on the space of possible descriptions. Generally, out of a set of descriptions that are all valid generalizations of a set of examples, one chooses the descrrptlon that IS lowest In the ordering. 
This is particularly important for acquisition tasks in which no negative examples are given. In most concept acquisition programs, this ordering has been based on either generality or complexity: the more general (or complex) the description, the more undesirable it is. This is sufficient for restricted domains, such as those in which all the concepts that need to be considered can be described in terms of a conjunctive list of properties of, and relations between, objects. In domains involving descriptions based on a more powerful description language, however, this undesirability ordering must involve more than just generality or complexity.

For example, if the description language allows disjunction, there will always be a generalization of any two examples consisting of the disjunction of the descriptions of the two examples. This is the most specific generalization possible, but it is seldom a useful or desirable generalization. Similarly, there is always an (n - 1) degree polynomial that fits n points on the plane, but it is seldom a useful generalization of the points. Neither of these generalizations is useful because the existence of such a generalization was a foregone conclusion, whether the relation between the examples was significant or entirely random. If, however, there were a conjunctive description, or a low degree polynomial, this would describe a relation between the examples which would not be true of a random set of examples. Both disjunctive descriptions and high order polynomials are necessary at times, and cannot be eliminated from the search space entirely, but they should be placed high on the undesirability ordering.

The common element of these two undesirable generalizations is that they use representation constructs that are very "powerful" in the sense that they allow one to construct descriptions of any set of items, whereas conjunctive descriptions or 2nd degree polynomials can only describe some sets of items. In other words, the space of possible descriptions is very much wider if disjunctions, or other powerful constructs, are allowed than if they are prohibited. The undesirability ordering must therefore take into account the descriptive power of the components of the representation language, and place descriptions using the more powerful constructs higher than those with more restrictive constructs.

PMA must be able to acquire procedures that involve conditional branching, which is a form of disjunction. However, following this principle of undesirability ordering, it always chooses a procedure without branches over one with branches, even at the cost of more general events or actions containing functional expressions. Similarly, although an explicit functional expression in an action is not any more general than an action from the action hierarchy, PMA always prefers an action from the hierarchy, if one exists, because the description language for actions in the action hierarchies is less powerful than that for arbitrary functions, and therefore lower in the undesirability ordering.

3.3 Context Limited Generalization.

However, it is not sufficient to simply order the generalizations by undesirability and choose the least undesirable. Matching two descriptions involves finding a pairing between the elements of the descriptions. With a sufficiently powerful description language, a generalization can be found for any pair of the elements. But most of these pairings will be spurious.
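Returning for a moment to the polynomial example of the undesirability ordering above, the idea can be made concrete with a minimal sketch (an illustration added to this rewrite, not part of PMA): prefer the lowest-degree polynomial that accounts for the data, resorting to the always-available degree-(n - 1) interpolant only as a last resort.

    # Illustrative sketch (not PMA code): prefer the lowest-degree polynomial
    # that accounts for the data; the degree-(n-1) interpolant always exists
    # but sits at the top of the undesirability ordering.
    import numpy as np

    def least_undesirable_polynomial(xs, ys, tolerance=1e-6):
        """Return (degree, coefficients) of the lowest-degree fit whose
        maximum residual is within tolerance."""
        n = len(xs)
        for degree in range(n):              # degree n-1 always fits exactly
            coeffs = np.polyfit(xs, ys, degree)
            residual = np.max(np.abs(np.polyval(coeffs, xs) - ys))
            if residual <= tolerance:
                return degree, coeffs
        return n - 1, np.polyfit(xs, ys, n - 1)

    # Four collinear points: the line is chosen even though a cubic would
    # also "explain" them.
    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = 2.0 * xs + 1.0
    print(least_undesirable_polynomial(xs, ys))   # (1, array([2., 1.]))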
We need to place some limit on the degree of undesirability to which we are prepared to go, but this limit must not eliminate undesirable generalizations that really are part of the match. The solution is to use a limit that varies with the justification for believing that a generalization exists. If a teacher has asserted that two situations match, then there is good justification for resorting to a very undesirable generalization of the two situations. On the other hand, when searching a data base for situations possibly relevant to a problem at hand, only generalizations low in the order should be considered. When matching two structures, the partially completed match consisting of pairings of very similar components may provide a context that justifies considering a very undesirable generalization of two components that fill corresponding positions according to the pairings found so far. In general, the generalization must be limited to a level of undesirability consistent with the context in which the generalization takes place.

The several stages of PMA illustrate this principle well. In the first stage, there is no context to suggest what pairings should be made, and therefore no event generalization at all is allowed. In the second stage, the context of the perfectly matched pairs gives more justification to the pairs found by propagation, so generalized events are considered. The function induction is only considered in the highly restricted contexts of parallel segments or indeterminate branches. If function induction were allowed in the second stage it would be very likely to find spurious generalizations. But with this context limited generalization, PMA is able to use powerful generalization methods without producing spurious matches.

References.

Anderson, J.R. [1983]; Acquisition of Proof Skills in Geometry; in "Machine Learning", eds. Michalski, Carbonell, Mitchell, Tioga Pub. Co., Palo Alto, California.
Angluin, D. and C.H. Smith [1982]; A Brief Survey of Inductive Inference; Technical Report 250, Dept. Comp. Sci., Yale University.
Berwick, R.C. [1982]; Locality Principles and the Acquisition of Syntactic Knowledge; PhD Thesis, M.I.T.
Langley, P. [1983]; Learning Effective Search Heuristics; Proceedings of IJCAI-83, Vol. 1, 419-421.
Latombe, J-C. and Dufay, B. [1983]; An Approach to Automatic Robot Programming Based on Inductive Learning; Robotics Workshop, M.I.T.
Mitchell, T.M. [1982]; Generalization as Search; Artificial Intelligence, Vol. 18, 203-226.
Mitchell, T.M. [1983]; Learning and Problem Solving; Proceedings of IJCAI-83, Vol. 2, 1139-1151.
Michalski, R.S. [1983]; A Theory and Methodology of Inductive Learning; in "Machine Learning", eds. Michalski, Carbonell, Mitchell, Tioga Pub. Co., Palo Alto, California.
Winston, P.H. [1970]; Learning Structural Descriptions From Examples; PhD Thesis, M.I.T.
Winston, P.H. [1984]; Artificial Intelligence; Ch. 12, Addison-Wesley, Reading, Massachusetts.
VanLehn, K. [1983]; Felicity Conditions for Human Skill Acquisition: Validating an AI-based Theory; PhD Thesis, M.I.T.
LEARNING PROBLEM CLASSES BY MEANS OF EXPERIMENTATION AND GENERALIZATION Agustln A. Araya Departamento de Ciencia de la Computacibn P. Universidad Catolica de Chile Casilla 114-D, Santiago , Chile ABSTRACT We discuss a method of learning by practice based on the idea of determining classes of problems that can be solved in simplified ways, A description of a class is obtained by processes that hypothesize descriptions, generate and classify problem variations, and test the hypotheses against them. The approach has been implemented in a system that learns by practice in a domain of elementary physics. The system has two main components, a Problem Solver and a Learning Agent. The Problem Solver handles the problems in the domain and the Learning Agent does the actual learning. To perform its tasks the Learning Agent utilizes algorithms, heuristics, and domain knowledge, and for this reason it can be regarded as an expert system whose expertise resides in being able to learn by experimentation and generalization. 1. INTRODUCTION In recent years learning has been increasingly recognized as an important area for Artificial Intelligence research ([61,[11lL While human expert behavior is characterized by the ability to increase expertise by learning in the course of solving problems, current expert systems lack many important capabilities in this respect: i> They are not capable of analyzing their own solutions, and thus, they cannot determine if a solution can be *‘improved” (e.g., by simplifying it). ii) They don’t try “mental experiments”, that is, imaginary problems in which they could apply new methods or heuristics that might improve their problem solving capabilities. iii) They are not capable of remembering. If they are given a problem similar, or even identical, to one previously solved, they will not recognize this fact and will repeat the solution process. -------------------- This work was partially done at the Departments of Computer Science of the University of Texas at Austin, and the P.Universidad Catdlica de Chile. It was supported in part by grant 216/82 of the Direccidn de Investigaci6n, P.Universidad Catblica de Chile. We report research on a method of learning by practice that provides these capabilities to a certain degree. The method is embodied in a system called DARWIN that functions in a domain of elementary physics in which interacting ideal rigid bodies are in equilibrium. Starting with a general method of solving problems in the domain, the system learns problem classes and their corresponding specialized methods of solution by means of experimentation and generalization. This presentation is organized as follows : after characterizing the problem under study an overview of the learning approach is given, the processes used to obtain a description of a problem class are explained, and some of the limitations and difficulties encountered are discussed. 2. THE PROBLEM In the description of the ISAAC system [IO], that solves physics problems stated in natural language, it was noted that for some of the problems that it solved a human expert would have produced a simpler solution. This observation lead us to consider the following situation. Let us suppose that a system has a general method for solving problems in its domain. 
One of the things it can learn by practice is that there are problems that can be solved in simplified ways, that is: IF problem belongs-to problem-classj THEN use special-methodj Thus, two things must be learned: a description of a problem class and its corresponding special method. One way of doing this is by applying a deductive approach. If the system had a theory of the domain it could deduce that for problems having certain characteristics a simplified solution could be obtained. In this paper, on the other hand, we take an empirical approach in which the system tries to determine descriptions of problem classes by performing experiments. Thus, a theory of the domain is not necessary and this represents an important advantage because the experimentation and generalization methods can be made relatively independent of the domain. The price one pays, 11 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. however, is diminished certainty of the acquired knowledge. Since exhaustive experimentation is impossible, this knowledge represents only plausible conjectures. Research on learning by practice and discovery has been reported in [41, [61 and [91, generalization methods are presented in [21,[71 and [81, a method of experimentation based on perturbation is proposed in [31, and expertise in solving problems in physics has been analyzed in [l] and [5l. 3. OVERVIEW OF THE LEARNING APPROACH The approach has been implemented in a system with two main components: a Problem Solver (PS) and a Learning Agent (LA). To emphasize the fact that the Learning Agent is responsible for the evolution of the problem solver by developing specialized solution methods to cope with problems in its domain, we refer to it as the DARWIN system. Analyze solution of problem Determine problem class description: cycle Hypothesize descriptions Generate problem variations Classify variations Test descriptions against variations end cycle Build specialized method and integrate it in PS Fig.1: Outline Process of the Learning The PS starts with a general method that allows it to solve problems in its elementary physics domain. The method is based on the principles of equilibrium of forces and moments. The LA acts as a supervisor of the PS and its task can be divided into the following stages (See Fig.1): 1) Determine if a given problem can be solved in a simplified way: using knowledge about the domain, the solution is analyzed to see if it has some special characteristics that could lead to simplifications (e.g., that one or more equations produced by the general method are not actually needed to solve the problem). 2) Determine a description of the problem class: when it has been found that the solution of a problem can be simplified the LA uses experimentation and generalization processes to obtain a description of the class, in such a way that all the problems belonging to it can be solved in the same simplified way. 3) Derive the specialized method and integrate it into the PS: using knowledge about the problem domain and about information consumed and produced at each step of the general solution method, the system builds the specialized solution method and adds it to the PS. The new method can be obtained by eliminating steps from the general method and by replacing some steps by simpler ones. The second stage is 12 the most important and complex and will be analyzed in detail in the next section. 
Due to space limitations the other two stages will not be given further consideration.

4. DETERMINING A DESCRIPTION OF A PROBLEM CLASS

The learning situation under study is characterized by an important fact: the system has made a single observation that a specific problem can be solved in a simplified way. This fact is in itself useless, because the likelihood that exactly the same problem will be encountered by the system in the future is nil, so that this knowledge will most likely never be used! This implies that the system must perform some kind of generalization to determine a class of problems to which the specialized solution applies. In order for the system to obtain a generalization from only one observation it needs to perform experiments to gather additional information. Thus, the determination of the description of the problem class is carried out by two highly intertwined processes: experimentation and generalization. To perform these tasks the LA utilizes algorithms, heuristics, and domain knowledge, and for this reason it can be regarded as an expert system whose expertise resides in being able to learn by experimentation and generalization.

The description of the class will be expressed in terms of first order logic and will be based on the problem description and the background knowledge available to the system [12]. The latter kind of information can be classified as follows: i) predicates that refer to relations between positions of objects (e.g., "symmetrically located", "at left end"); ii) predicates that refer to relations between attributes of objects (e.g., "perpendicular", "same size"); and iii) predicates that refer to attributes of objects (e.g., "angle of force equal to x"). To constrain the search space we have decomposed the process so that at each stage different kinds of predicates are determined in the order indicated by the previous classification.

We illustrate the process with an example: "A horizontal lever 15 ft long is supported at the left and right ends by a pivot and rolling pivot, respectively. Two forces are applied to the lever, one which has a magnitude of 120 lb, an angle of 270 degrees, and is applied at a point 3 ft from the left end; and another which has a magnitude of 120 lb, an angle of 270 degrees, and is applied at a point 12 ft from the left end. Determine the forces exerted by the pivots so that the lever is in equilibrium." (See Fig. 2a.)

[Fig. 2a: Initial Problem. Fig. 2b: Most General Problem.]

After preprocessing the initial description the following information is obtained:

lever:  weight=0   angle=0    length=15
force1: magn=120   angle=270  pos=3    desunk=False
force2: magn=120   angle=270  pos=12   desunk=False
pivot:  pos=0      desunk=True
rpivot: pos=15     desunk=True
(rpivot: rolling pivot; desunk: desired unknown; pos: position)

The PS produces a solution to the problem which is then analyzed by the LA. The following special characteristics are detected: the horizontal component of the force applied by the pivot is zero, the vertical components of the forces applied by the pivot and the rolling pivot have the same magnitude, and all the torques due to horizontal components of the forces are zero.

4.1 Determining Position and Attribute Relations

The system starts by determining which relations between positions and attributes of objects are important.
4.1.1 Hypothesize Descriptions: The system uses background knowledge and hypothesizes that the set of predicates that evaluate to True in the initial problem constitutes a description of the class. The following predicates defined in the background knowledge are true in the initial problem:

(1) symloc(pivot,rpivot) (*)
(2) atleftend(pivot)
(3) atrightend(rpivot)
(4) symloc(force1,force2)
(5) sameangles(force1,force2)
(6) perpendicular(force1,lever)
(7) perpendicular(force2,lever)
(8) samesize(force1,force2)
(9) mirrorangles(force1,force2)

(*) (symloc: symmetrically located with respect to the center of the lever)

4.1.2 Generate Problem Variations: Some of these relations may not be necessary, and since we want to obtain as general a description as possible, the system should try to eliminate them. This is accomplished by proposing variations of the original problem and seeing how the predicates behave with respect to them. To determine interesting problem variations the system uses a method based on what we call the "mutual support principle" (MSP). Let us assume that the description of the problem class will be expressed as a conjunction of predicates. (In general, it will consist of a disjunction of conjuncts, but the MSP can be easily restated to cover that case.) The MSP simply says that conjuncts "support" each other in the sense that negative variations not rejected by one conjunct must be rejected by others, and positive variations must be accepted by all predicates (see 4.1.3, below). Using this principle, problem variations are generated as follows: for every predicate found true in the initial problem generate, if possible, "true" and "false" variations (T-vars and F-vars, respectively). A T-var with respect to a predicate is a variation in which the predicate evaluates to true; an F-var is a variation in which the predicate evaluates to false. T-vars and F-vars are generated using functions in the background knowledge associated with each predicate.

In general there may be several ways of generating T- and F-variations from a predicate. Let us consider the predicate mirrorangles(force1,force2). F-variations can be generated as follows: a) leave the angle of force1 fixed and select a random angle for force2; b) fix the angle of force2 and select a random angle for force1; c) select random angles for both force1 and force2. In all of these variations the predicate evaluates to False, as desired. The other predicates are affected in different ways. For instance, perpendicular(force2,lever) will evaluate to False in a) and c), but will evaluate to True in b). We can conclude from this that in order to obtain useful variations, as many different ways of generating them as possible must be considered.

4.1.3 Classify Variations: The variations generated must be classified as positive (POS) or negative (NEG) according to whether their solutions do or do not have the same special characteristics that were detected in the solution of the initial problem.

4.1.4 Test the Descriptions: Finally, the predicates are tested against the sets of POSs and NEGs. In addition, since the system is trying to obtain as general as possible a description, a "most general" variation must be picked from all the POS variations. The criterion used is to select the one that is rejected by more predicates, because that means that it satisfies fewer constraints.
(In general, this criterion may give more than one “most general” variation, meaning that there are alternative most general structures. In the current implementation the system picks any one of them). Once a most general variation is selected, the predicates that reject it are eliminated, as long as all the NEGs are still re jetted . Continuing with the example, it turns out that predicates (2),(3),(5),(6) and (7) can be eliminated, which means that the initial problem satisfied more constraints than needed. Thus, we obtain a new, more **general** problem, in which the forces don’t have to be perpendicular to the lever, the pivots don’t need to be at the ends of the lever, and whose solution has the same special characteristics as the solution of the initial problem (See Fig.2b). 4.2 Considering problems with a different number -- of objects c- So far in our analysis the number of objects has remained constant. But in the domain we are considering there may be several forces applied to a rigid body, so that it is worth exploring such cases. 4.2.1 Hypothesize Descriptions: The approach consists of replacing the constants that appear in the predicates by variables and then of quantifying these variables so that they range over sets of objects. The system uses heuristics to constrain the number of quantified predicates that are generated. For the example above, let P = [symloc(forcel,force2) and samesize(forcel,force2) and mirrorangles(forcel,force2)1. That is, P represents the conjunction of all the predicates involving forces that were true in the most general problem. The system generates all predicates of the form: (quantl fl (quant2 f2 P>> in which quantl and quant2 can be the quantifiers “for all”, “there isI1 and **there is one**. (Additional predicates are generated by adding the clause notequal(fl,f2) to some of those quantified predicates). 4.2.2 Generate Problem Variations: T and F-variations are -ted from each hypothesis, using heuristics that depend on the quantifiers and basic predicates that appear in them. For instance, consider the predicate PI = (forall fl (thereisone f2 (symloc(f1 ,f2) and samesize(f1 ,f2) and mirrorangles(fl,f2)) 1) T-variations from this predicate can be generated by adding two forces, one of them with arbitrary attribute values, and the other such that all the basic predicates are satisfied. F-variations can be obtained by adding one or two arbitrary forces. 4.2.3 Classify the variations: Similar to the process describFin Section 4.1.3. 4.2.4 Test the Hypotheses: The system tries to determ= aminimal set of predicates that accept all POSs and reject all NEGs, This is carried out by a process which is a modification of one proposed in II71 to obtain a disjunctive description of a concept. After performing this process for the example, predicate PI defined above accepts all POSs and rejects all NEGs generated at this stage, so that it constitutes a partial description of the class. 4.3 Considering Values of Attributes of Objects In the previous stages the system obtained a partial description of the problem class that takes into account relations between positions and attributes of different objects. In this last stage it is necessary to take into account the values of attributes of single objects. Additional background knowledge about *lspeciall* and “non-special** values of attributes of objects is used. For instance, the **weight** attribute of a lever has **O** as special value. Any other value is non-special. 
Using this knowledge new predicates are hypothesized and tested. At the end of the whole process the following predicates were obtained for the problem class under study: (forall fl (thereisone f2 (symloc(f1 ,f2) and samesize(f1 ,f2) and mirrorangles(f1 ,f2)) >>, (forall pivot (forall rpivot symloc(pivot,rpivot) >>, (forall lever (lever.angle = O)), one(pivot), one(rpivot). 5. DISCUSSION AND CONCLUSIONS In a typical inductive situation a set of positive and negative instances of a concept is given. In the approach described above this information is lacking, so that the search for the description of the problem class develops in two spaces : the space of descriptions and the space of variations. The experiments we have per formed show that the process of classifying the problem variations is the most expensive. For the example given above, approximately 300 problem variations had to be classified. (The exact number depends on the background knowledge available to the system). In the current (interpreted) implementation of the problem solver it takes an average of 9 seconds to solve a problem. In order to classify the proposed variations of the original problem they must first be solved. Thus, for the system to learn this new class it would need 2700 seconds plus the (substantially smaller) time required to carry out the other processes involved. To lower this cost we have followed a **mixed** approach in which the variations are classified by the instructor (who is informed by the system of the special characteristics their solutions should satisfy). If, however, the instructor is in doubt about any specific variation, he lets the system to classify it by itself. In the experiments performed, the DARWIN system has learned several problem classes that have simplified solutions. It has also learned problem classes in which the unknown(s) take special values which are remembered by the system, and problem classes in which heuristics that 14 transform a problem into a simpler one, can be validly applied. The classes that can be learned by the system are essentially determined by the kinds of special characteristics that it can detect in a solution and by the information available in the background knowledge. If the system is not capable of detecting some interesting characteristics of a solution then it will not even start a learning episode. On the other hand, once a learning episode is triggered, the corresponding class will be learned only if the system has the appropriate predicates in the language to express it. (The lack of a predicate will be detected by the system during the testing phase, because there will be negative variations that will not be rejected). Also, the exploration capabilities of the system are determined by the functions and heuristics that it uses to generate variations. The inability to generate certain kinds of variations may lead to learning less general descriptions (i.e., subclasses of problems) or, even, incorrect descriptions. A detailed discussion of this point, however, is beyond the scope of this paper. In order to apply the learning method to another domain it will be necessary to replace the background knowledge that the system currently has by the background knowledge appropriate for the new domain. This can be done without difficulty. There are certain characteristics of the current domain, however, that are more deeply embedded in the system, so that it may be necessary to make some non-trivial changes to it. 
An example of this is the order in which different aspects of a problem are examined. More importantly, in some domains it may be difficult or even impossible to have the system classify the variations by itself. In those cases a human instructor will have to perform that task. Also, the **well formedness** of the variations generated by the system may become an issue. By the nature of the problems in the physics domain it was very easy to generate **legal** problems, but this may not be the case for other domains so that complex rules of formation may have to be introduced. When comparing the method proposed here with inductive methods in which sets of positive and negative variations are given by an instructor, certain advantages and disadvantages can be discerned. The main advantage is that a system implementing this method is more autonomous than other systems because instead of being a passive receptor of instances it is an active explorer of the domain, and so is less dependent on an instructor. Disadvantages are that, i> as the system assumes more of the burdens of the learning process its complexity is increased, and iij the method may not be applicable to some domains, as was indicated above. A detailed comparison between the two kinds of approaches constitutes an important topic for research, because it may very well be that the advantages of the new method far outweigh its disadvantages, at least for certain domains. ACKNOWLEDGMENTS I would especially like to thank Gordon Novak for many discussions of the ideas presented here. I would also like to thank Julian Gevirtz for his comments and his assistance in editing earlier drafts of this paper. REFERENCES 1. Chi,M.T., Feltovich,P.J. and Glaser,R. **Representation of Physics Knowledge by Experts and Novices*' Tech.Rep. 2. Learning Research and Development Center, University of Pittsburgh, 1980. 2. Dietterich,T. and Michalsky,R. **Learning and Generalization of Characteristic Descriptions: Evaluation Criteria and Comparative Review of Selected Methods*' In Proc. IJCAI-79, Tokyo, Japan, August 1979. 3. Kibler,D. and Porter,B. "Perturbation: A Means for Guiding Generalization** In Proc. IJCAI-83, Karlsruhe, Germany, August 1983. 4. Langley,P., Bradshaw,G.L. and Simon,H.A. l*Rediscovering Chemistry with the BACON system** In Machine Learning, Michalsky, Carbonel, Mitm(Eds), Tioga Publishing Company, 1983. 5. Larkin,J.L, McDermott,J., Simon,D.P. and Simon,H.A. **Models of Competence in Solving Physics Problems**. Tech.Rep. CIP 408, Dept.of Psychology, Carnegie-Mellon University, 1979. 6. Lenat,D.B. "The Role of Heuristics in Learning by Discovery: Three Case Studies" In Machine- Learning, - Michalsky, Carbonell, Mitchell (Eds), Tioga Publishing Company, 1983. 7. Michalsky,R.S. **A Theory and Methodology of Inductive Learning** Artificial Intelligence 20:2 (1983) 111-161. 8. Mitchel1,T.M. **Generalization as Search** Artificial Intelligence 18:2 (1982) 203-226. 9. IO. 11. 12. Mitchell,T.M., Utgoff,P.E., Nudel,B. and Banerji,R. **Learning Problem Solving Heuristics through Practice" In Proc. IJCAI-81, Vancouver, August 1981. Novak,G.S. "Computer Understanding of Physics Problems Stated in Natural Language" American Journal of Computational %nche 3, 1976. Linguistics, Schank,R.C. **The Current State of AI: One Man's Opinion I* AI Magazine 4:l (1983) 3-8. - Vere,S.A. '*Induction of Relational Productions in the Presence of Background Information** In Proc. IJCAI-77, Cambridge,Mass., August 1977. 15
Learning About Systems That Contain State Variables

Thomas G. Dietterich
Department of Computer Science
Stanford University
Stanford, CA 94305

Abstract

It is difficult to learn about systems that contain state variables when those variables are not directly observable. This paper formalizes this learning problem and presents a method called the iterative extension method for solving it. In the iterative extension method, the learner gradually constructs a partial theory of the state-containing system. At each stage, the learner applies this partial theory to interpret the I/O behavior of the system and obtain additional constraints on the structure and values of its state variables. These constraints can be applied to extend the partial theory by hypothesizing additional internal state variables. The improved theory can then be applied to interpret more complex I/O behavior. This process continues until a theory of the entire system is obtained. Several sufficient conditions for the success of this method are presented, including (a) the observability and decomposability of the state information in the system, (b) the learnability of individual state transitions in the system, (c) the ability of the learner to perform synthesis of straight-line programs and conjunctive predicates from examples, and (d) the ability of the learner to perform theory-driven data interpretation. The method is being implemented and applied to the problem of learning UNIX file system commands by observing a tutorial interaction with UNIX.

1. Introduction

Many important learning tasks involve forming theories about systems that contain state variables. Virtually all software systems, for example, contain state variables that are difficult to observe. Examples include operating systems, editors, and mail programs. These systems contain state variables such as mode switches, default settings, initialization files, and checkpoint mechanisms. Many problems in the sciences also involve learning about systems that contain state variables. In molecular biology, for example, the "state" of an organism includes the sequence of its DNA molecules. Existing techniques of molecular genetics provide only very indirect means for observing this state information.

Learning about a system that has internal state variables is difficult because the system does not always produce the same outputs when given the same inputs. Hence, in addition to solving the inherently underdetermined problem of guessing the relationship between the inputs and the outputs, the learner must also face the problem of guessing the structure and value of the state information and the relationship between the state information and the inputs and outputs. This paper presents a method, called the iterative extension method, for learning about certain state-containing systems.

The problem of learning about systems with state can be formalized as follows. A state-containing system, M, is a function from D x S to K x S, where D is the domain set of possible input values, K the range set of output values, and S the set of internal states of M. When M is given an input value and a state, it produces an output value and a new state. Let I be a sequence of input values, <i1, i2, ..., in>, and s0 be the initial state. Then the sequence, O, of output values is generated as M(i1, s0) = (o1, s1), M(i2, s1) = (o2, s2), and so on. The learning task is to develop a theory of M given only the sequence I of inputs and the sequence O of corresponding outputs.
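A minimal sketch of this formalization (added here as an illustration; it is not code from the paper) treats M as a function from an (input, state) pair to an (output, new-state) pair and generates O from I and a hidden initial state:

    # Minimal sketch of the formalization: M maps (input, state) to
    # (output, new state); the learner sees only I and O, never the states.
    from typing import Any, Callable, List, Tuple

    System = Callable[[Any, Any], Tuple[Any, Any]]   # M : D x S -> K x S

    def run(m: System, inputs: List[Any], s0: Any) -> List[Any]:
        """Generate O from I and the hidden initial state s0:
        M(i1, s0) = (o1, s1), M(i2, s1) = (o2, s2), ..."""
        outputs, state = [], s0
        for i in inputs:
            o, state = m(i, state)
            outputs.append(o)
        return outputs

    # A toy M with one hidden counter: it reports how many inputs it has seen.
    def toy_m(i, state):
        return state + 1, state + 1

    print(run(toy_m, ["a", "b", "c"], 0))    # [1, 2, 3]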
This theory must be strong enough to predict oj and sj given any input ij and previous state sj-1. In the most general case, this learning problem is unsolvable. However, if I, O, and M satisfy certain conditions, then it is possible for a learning system to form a theory of M by employing the iterative extension method.

During iterative extension, the learning system develops a sequence of ever more accurate theories, Ti. Each theory explains additional behavior of M either by (a) proposing additional procedures that access and modify known state variables or (b) proposing the existence of additional state variables. At each step i, the learning system applies theory Ti to infer the values of the known state variables of M at as many points in the training sequence as possible. Given this knowledge of the state of M, the learning system then examines the remaining, unexplained data points, looking for some points that can be explained by simple extensions to Ti. First, the learner looks for points that can be explained by proposing additional procedures that compute some function of the inputs and the known state and produce the observed output values and state changes. If no such points can be found, then the learning system looks for data points that could be explained by hypothesizing the existence of an additional state variable inside M. A new theory Ti+1 is developed by extending Ti to include either the new procedure or the new state variable, and the process is repeated. The key assumption underlying this method is that the learner will be able to identify such procedures and state variables at each point in the learning process. Notice that this is a greedy algorithm that attempts to minimize the number of state variables introduced.

This learning method is significant because it demonstrates how a learning system can exhibit something other than "one-shot" learning. Most existing learning systems start with some body of knowledge T, move to a larger body of knowledge T', and halt. In this iterative extension method, each partial theory Ti is applied to interpret the data so that the next partial theory Ti+1 can be developed. There is no point at which the method necessarily halts.

The outline of the paper is as follows. First, previous research on learning about systems with state is reviewed and compared to the present effort. Second, a detailed example of the iterative extension technique is presented and its underlying assumptions are formalized. Third, a system, called EG, is described that applies the method to form theories of 13 UNIX file system commands from a trace of a tutorial session with UNIX. The paper concludes with a summary of the main issues.

2. Review of previous work

Most research on learning has focused on the learning of pure functions, predicates, and procedures. All of the concept learning work, for example, has dealt with the problem of determining a definition for a predicate concept in terms of some concept description language (e.g., Mitchell, 1977; Michalski, 1969, 1983; Quinlan, 1982; Winston, 1975). Research on automatic programming from examples has, for the most part, focused on the learning of functions in pure Lisp (Hardy, 1975; Shaw, Swartout, & Green, 1975), pure Prolog (Shapiro, 1981), and similar languages (Amarel, 1983; Bauer, 1975; Siklossy & Sykes, 1975; Sussman, 1975).
The main problems addressed by this body of research are (a) generalization (determining the class of input values for which the procedure is defined), (b) loop introduction (determining when to introduce a loop or recursive call), (c) subroutine introduction (determining when to create a subroutine to share code among different parts of the system), (d) conditional induction (determining which boolean function of the inputs should be tested at a particular choice point in the program), and (e) planning (determining a sequence of actions (or a functional expression) that will compute the output as a function of the input). These are very difficult problems, but they are orthogonal to the problem of learning about state. For these authors, there are no state variables that retain their values from one invocation of the system to the next. If there are any variables at all. they serve only as temporary variables that disappear when each output is produced. One body of research that is superficially similar to the current effort is research on automatic programming from traces (Biermann & Krishnaswamy, 1976; Neves, 1981; VanLehn, 1983). The goal of such work is identical to the work described above-namely, to synthesize a pure procedure. The similarity with the state-learnmg task stems from the fact that each individual step within a trace takes place in the context of some global variables. However. the values of such global variables are provided to the learner at each point, so this body of research is not relevant to the present task. The body of research most similar to that described in this paper is the work on synthesis of Turing machines and finite-state machines from traces (Biermann, 1972; Biermann and Feldman, 1972). In the case of Turing machines, for exampie, the contents of the tape and the action of the machine at each step are given. The learning task is to infer the finite-state controller for the machine. This involves hypothesizing the number of states and the state transition matrix. It is appropriate to view these systems as having a single state variable whose value gives the current state of the controller. The internal state bears a particularly simple relationship to the output. The output is a simple table lookup given the current state and the input. Hence, the kind of learning taking place is rote learning of I/O pairs subject to the organization imposed by the attempts to minimize the number of states in the finite-state machine. The bulk of the state of the system is stored on the tape-and that state information is known to the learner. Hence, these methods are not relevant to the present task either. The conclusion to be drawn from this review of the literature is that little or no progress has been made on the problem of learning about systems that contain state variables. Now that we have reviewed the literature. we present the iterative extension method, which can be employed to learn about certain kinds of state-containing systems. 3. The iterative extension method The easiest way to describe the iterative extension method is by example. Suppose that the system to be learned, M, is the following PASCAL-like program that computes the balance of some checking account. The account has an overdraft limit, and if a check would cause the balance to go below this limit. then it is refused and a message is printed. The special input “OK” causes an additional $100 to be added to the overdraft limit. 
BALANCE := 0; LIMIT := -100;
WHILE TRUE DO
BEGIN
  READ(I);
  IF I=0 THEN
    PRINT(BALANCE)
  ELSE IF (I<0) AND (BALANCE+I<LIMIT) THEN
    PRINT("CHECK REFUSED")
  ELSE IF I="OK" THEN
    BEGIN LIMIT := LIMIT - 100; PRINT("OK") END
  ELSE
    BEGIN BALANCE := BALANCE + I; PRINT("NEXT?") END;
END;

This system contains two state variables: BALANCE and LIMIT. BALANCE is directly observable when I=0, but LIMIT can only be observed indirectly by knowing BALANCE and I when the message CHECK REFUSED is printed. Now suppose the learning system is given the following sequence of I/O pairs (ij, oj):

<(0, 0) (0, -100) (0, 0) (-6, CHECK REFUSED) (10, NEXT?)
 (0, -100) (OK, OK) (-110, NEXT?) (-6, NEXT?) (-96, NEXT?)
 (0, -100) (0, -200) (-1, CHECK REFUSED) (-2, CHECK REFUSED)>

Given this I/O sequence, the following paragraphs present one possible path of inferences that the learner might make in applying the iterative extension method. Many other paths are possible.

The iterative extension process begins with a null partial theory, T0.* The learner looks for points in the sequence for which a simple theory can be developed. The first two I/O pairs provide such a point. The learner can propose that whenever a 0 is given to M, a 0 is printed. This is theory T1. Of course, T1 is immediately contradicted by the fourth and fifth I/O pairs. However, this case triggers one of the learner's state introduction heuristics. This heuristic, called the constant-change-constant rule, says: if M exhibits one constant behavior and then shifts to another constant behavior, hypothesize that there is a state variable responsible for the behavior and that its value has changed. Hence, the learner guesses that there is a state variable (SV1) that is printed whenever ij = 0. The input of ij = 10 changed the value of SV1. This is theory T2.

Now, by applying this theory, it is possible to interpret several points in the I/O sequence and infer the value of SV1. In particular, the learner can determine that after the step in which ij = 10, SV1 = 10, and before that step, SV1 = 0. This is very nice, because it reduces the problem of learning about state-containing systems to the problem of synthesizing pure programs from I/O pairs. In this case, the inputs are x = 10 and y = 0, and the output is z = 10. Existing methods of expression induction (Langley, 1980; VanLehn, 1983) can be employed at this point to guess that z = x + y. Translating this back into the program M, the learner can obtain theory T3 that M is performing the operation SV1 := SV1 + I. Now, T3 is employed to interpret the I/O sequence. T3 seems to hold true for every point in the sequence at which NEXT? is printed.

*In addition to T0, the learning system must have some prior knowledge and bias about the space of possible programs.
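The expression-induction step mentioned above can be pictured with a small sketch (an illustration added to this rewrite, not the method of the cited work): enumerate simple arithmetic combinations of the observed quantities and keep one that reproduces every example.

    # Illustrative sketch of expression induction over a tiny hypothesis
    # space: given (x, y, z) observations, return a simple expression for z.
    # This stands in for the induction methods cited in the text.
    import operator

    CANDIDATES = [
        ("x + y", operator.add),
        ("x - y", operator.sub),
        ("x * y", operator.mul),
    ]

    def induce(examples):
        """Return the first candidate consistent with every (x, y, z) example."""
        for name, fn in CANDIDATES:
            if all(fn(x, y) == z for x, y, z in examples):
                return name
        return None

    # From the interpreted trace: x = 10, y = 0, z = 10  =>  z = x + y,
    # i.e. the update rule SV1 := SV1 + I.
    print(induce([(10, 0, 10)]))             # x + y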
This example shows how the learner can form theories about the “easy cases” and then apply these theories to simplify the remaining learning problem in order to expose additional easy cases. This is the key idea behind the iterative extension method. 4. Conditions on the applicability of the method What must be true in order for the iterative extension method to work? This section attempts to formalize (a) the conditions that must hold for the system M, (b) the conditions that must be satisfied by the training sequences I and 0. and (c) the capabilities required of the learning system. Three sufficient conditions on M can be stated: (a) state observability, (b) learnability of individual state transitions, and (c) state decomposability. The condition of state observability says that every distinct point in the state space, S, of M must lead to an observable behavioral difference. That is, given any two distinct states, sa and +, there must be some sequence of inputs I’ such that the sequence of outputs Ola obtained by placing M in sa and feeding it I’ differs from the sequence of outputs O’, obtained when M is started in sb. To describe the remaining two conditions, several auxiliary definitions are needed. First, let us define the decision-free decomposition of M as follows. Assume that the true theory of M is known. Rewrite M to gather all conditional tests into a decision tree and all actions into straight-line programs in the leaves of that tree. Define a new subprogram, Mi, for each leaf of this tree. The new subprogram contains a long conditional test for applicability, C,, (obtained by traversing the decision tree from the root down to the leaf) plus the straight-line program, Pi taken from the leaf of the tree. This decomposition can be viewed as representing M as a set of production rules such that, in any given situation, the antecedent of exactly one rule is satisfied**. In the checking account example of the previous section, the decision- tree decomposition is Ml: IF I=0 THEN PRINT(BALANCE) M,: IF (I<O) AND (BALANCE+I<LIMIT) THEN PRINT( “CHECK REFUSED”) M,: IF I=“OK” THEN LIMIT :- LIMIT - 100; PRINT( “OK”) M4: IF I>0 OR (I<O) AND (BALANCE+I>=LIMIT) THEN BALANCE := BALANCE + I; PRINT( ‘*NEXT?“). Roughly speaking, at each point in the iterative extension process at least one Mi is accessible because enough is known about the state variables of M for the learner to observe the effects of M,. Hence. the learner is able to form a partial theory of at least one M1 at each **This can always be accomplished, even for embedded loops and subrouunes- by encoding control information in additional state vanabies iteration. This partial theory is then applied to interpret more data so that additional Ml’s will be made accessible. It should be emphasized that the decision-tree decomposition is an analytical fiction developed from a privileged viewpoint. The learner need not represent its theories in production-rule or decision-tree form. From the decision-tree decomposition, we can define an interaction graph as follows. The nodes of the graph are the subprograms, {Ml). Two nodes Mj and M, are connected by a directed edge from M, to M, if M. modifies state information that is accessed by M,. The interaction h grap for the checking account example is M, f- M, +- M, M2 f Ir Since M satisfies the state observability condition, it follows that the interaction graph can be spanned by a root-directed forest. In this case. the roots of the forest are M, and M,. 
The iterative extension process begins by developing (partial) theories of the roots of this forest and then working backwards along the edges until all nodes have been learned. Two more definitions are needed, Define S, to be that portion of the state information that is directly observable according to theory TJ. In the checking account example, for instance, S, includes only the state variable BALANCE, but not LIMIT. Also, define Ml/S to be the partial theory of Mi involving only the state information o S . In the f checking account example, M,IS,! is the rule IF I<>d THEN BALANCE := BALANCE + I; PRINT(“NEXT? “) in which all mention of LIMIT has been removed. condition and action parts of M,[S,. C,ISI and P,lS, denote the Given these definitions, the second condition-learnability of state transitions-can be defined as follows. For each j such that T, is a partial theory, there must exist some M, such that PIIS is learnable from examples. The intuition behind this condition is that giben only the state information in Sj, it must be possible to form d straight-line procedure for PilSj. The third condition-state decomposability-is the most interesting. Its role is to ensure that each Ci can be learned and additional state variables can be hypothesized. In order to learn each such conditional, it is important to be able to gather examples of situations in which it is true and false. Such examples can be gathered by establishing known prior states, {s,), exercising Mi, and then observing the resulting states {s,+~). But, the process of establishing known prior states and observing the resulting states requires that the learner apply its current theory Tj. Since this theory is a partial theory, there might be unknown side-effects of it that would interact with M, and hence confUse the process. The state decomposability condition guarantees that this will not happen. It requires that for each MI in the context of Sj it must be possible to force the overall system M into a region Q, of state space in which Ci is true and all changes wrought by P, or any of the (Mk} in T, either change state information in Sj or else change state information that does not take M out of the region Q,,. This condition as stated is difficult to understand. The problems to look for are cases in which Pi (or one of the {Mk} in Tj) changes a state variable that is tested by C . If this state variable IS not observable according to the current pakial theory, then it must be possible-by controlling the inputs and the values of other state variables-to keep Ci true. Each of the Mi in the checking account example satisfies this decomposability condition. Notice in particular that although C, tests the value of BALANCE and P4 modifies this value, BALANCE is 98 already observable according to T,. Suppose for a moment that BALANCE is not observable or is modified by M,. M, would still satisfy the condition because C, does not test BALA6ICE <n the region of state space for which DO. We can obtain a system that violates the decomposability condition by modifying M, to read IF (I<O) AND (BALANCE+I<LIMIT) THEN LIMIT*=LIMIT . - 20; PRINT( “CHECK REFUSED”). This rule M, violates the condition because it modifies LIMIT in such a way that & condition C, is no longer true, and LIMIT is not in Si. Every time the learner t&s to observe the value of LIMIT, it change;. This makes it impossible for the iterative extension approach to succeed. 
Now that we have described the requirements for the learned system, M, we turn our attention to the requirements for the training sequence. The training sequence of I/O pairs must exercise a directed spanning forest of the interaction graph. Furthermore, at the point in the training sequence where the learning system is attempting to form a theory of C,l!$, the training sequence must force M into the region Q,, where valid’ training examples can be obtained and the sequence must include appropriate surrounding inputs so that the states before and after M, can be inferred. In other words, the training sequence must include ‘controlled experiments for each C,IS,. The exact requirements for the training sequence depend somewhat on the power of the learning system. The requirements for the learning system are quite stringent. First, the learner must be able to perform theory-driven data interpretation. In other words, given a partial theory Ti, the learning system must be able to apply that theory to interpret the ‘training data and thereby infer the values of the observable state variables. In the case of procedural theories, this involves reasoning both forwards and backwards through a partial program to obtain constraints on the state variables accessed by that program. This problem is very difficult because of the combinatorial explosion of alternative interpretations of the partial program when the values of the state variables are unknown and because programs are generally not invertible. Second, the learner must be able to perform program synthesisfrom I/O pairs. The principle difficulty here is the straight-line planning task of finding a sequence of actions Pi that will produce the outputs from the inputs. Most of the problems encountered in standard AI planning tasks are met here (e.g., goal interaction, the desire to plan a single act to achieve multiple goals, combinatorial explosion of operator choices). Third, the learner must be able to induce the conditions C, under which the Pi occur. This is an instance of concepf learning with the additional twist that the learner is permitted to introduce new state variables. The learner must have a set of state-introduction heuristics similar to the constant-change-constant heuristic. Two other heuristics deserve mention here. One is the toggle rule. It says: If repeated inputs i seem to shift the system from one behavior to another and back again. /hen propose that i. causes a boolean state variable to be toggled and that the Mi’s test this variable. Another heuristic is the information flow rule: /f unusual input 5 appears as an output at a later time. then suggest the existence of a new state variable that stores the value of $. 5. An application of the method: forming theories of UNIX A program, called EG, is being developed that applies the iterative extension strategy to the task of learning the file system commands of the UNIX operating system. This section gives a brief overview of the 99 UNIX learning task and of the two principle components of EG: the program reasoner and the theory-formation engine. The UNIX learning task is shown in Figure 5-1. This task was selected with the goal of developing an automatic knowledge acquisition system for the Stanford IA project. The IA project is an attempt to build an intelligent front-end for the diverse operating systems of the Arpanet UNIX is notoriously difficult to learn (Norman, 1981). 
Nonetheless, this learning task satisfies the conditions of learnability set forth in the preceding section. Given: ‘A programming language and a set of primitive operations The syntax for 13 UNIX file system commands (Is, mv. cp, rm, In, mkdir. rmdir. chmod, umask, create. type, pwd, cd) A partial theory for 2 of these commands (1s. type) A tutorial session with UNIX where each of the commands is exercised in detail Find: Procedural theories for each of the 13 commands. Figure S-1: The UNIX learning task UNIX is clearly a state-containing system. The principle state variables are (a) the file system (including the directory structure, the attributes and contents of every file, and so on), (b) the current working directory, and (c) the default file protection code. Within the file system there is some state information that is only indirectly observable. For example, information indicating which files are alias file names for one another is not printed by the default 1 s command. This information can be observed by, for example, modifying one file and then checking the contents of the other files. Also information about the configuration of the file system across several disk devices is not directly observable. UNIX commands are sufficiently complex that the training sequence must be carefully designed to guarantee that the conditions described in the previous section are met. Notice in Figure 5-l that EG is given some information besides the I/O training sequence (tutorial session). In particular, EG is given an initial theory of the 1s and type commands. This was necessary because EG does not have a state-introduction heuristic capable of guessing the structure of the file system merely by observing the training sequence. Indeed, for most applications of the iterative extension method, it will be necessary to provide the learner with a starting theory that connects some part of the internal state information to some observable output. EG is also given the syntax of the UNIX commands. This simplification is intended to insulate EG from user interface issues so that the basic problem of learning about state can be addressed. The EG program contains two major subprograms: the program reasoner and the theory-formation engine. The program reasoner is a general interpreter and symbolic executor for programs expressed in the language of the programmer’s apprentice “deep plans” representation (Rich and Shrobe, 1976). It operates in a manner similar to the EL system (Stallman and Sussman, 1977). EG uses the program reasoner to perform theory-driven data interpretation. Given a theory T. and some input and output values, the program reasoner is invoked td propagate the input and output values through Tj to infer the values of UNIX state variables. For example, given the output of a directory listing and a theory of the directory listing command, the program reasoner can infer the names and attributes of the files in the given directory. The program reasoner operates by propagating input and output values through the partial program just as EL propagates values through a circuit. As with EL. when the program reasoner cannot propagate a value, it creates a variable and propagates expressions involving that variable around the program. One important difference between EL and the EG program reasoner is that in EG, constraints on the possible values of the variable can also be propagated. Hence, EG may not know the exact value of a list, but it may know that the list begins with (A B C). 
Another important departure from EL is that the program reasoner pursues several interpretations in parallel. This is essential, because it is a rare case that the I/O data admit of only one interpretation. The theory-formation engine is a means-ends analysis planner similar to NOAH (Sacerdoti, 1977). Given starting and ending states of UNIX, it attempts to construct a plan that will get from the starting state to the ending state. The operators available to the planner are the primitive operators in the language (e.g., operators to manipulate lists, sets, and finite mappings) and any procedures that were included in one of the previous theories, Tj. EG is capable of developing conditional plans, but not loops or recursive programs. 6. Summary and Concluding Remarks The problem of learning about state-containing systems is difficult to solve because, in addition to solving standard problems of induction from I/O pairs, the learner must also hypothesize the structure and values of the internal state variables of the system. For systems that satisfy the three conditions of state observability, state decomposability, and state-transition learnability, the iterative extension strategy can be applied to learn them, The iterative extension method shows how a learning system can go beyond “one-shot” learning. Prior knowledge is applied to acquire further knowledge. The way in which the prior knowledge aids the learning process is by enabling the learner to interpret additional data from the training sequence. Theory T, can be applied to interpret additional data so that T, + 1 can be developed. A critical condition for the success of the iterative extension method is that the training sequence be properly structured. An important question for future research is whether a learning system can be built that develops its own training sequence by performing controlled experiments. What additional constraints on the learned system must hold in order for experimentation to succeed? A system, called EG, is being constructed that applies the iterative extension strategy to learn the semantics of UNIX file system commands. 7. Acknowledgments I wish to thank James Bennett and Bruce Buchanan for valuable criticism of drafts of this paper. Advice from Bruce Buchanan and Mike Genesereth has been extremely valuable in guiding this research. I thank IBM for supporting this research through an IBM graduate fellowship. 8. References Amarel, S., Program synthesis as a theory formation task--problem representations and solution methods, Rep. No. CBM-TR-135, Dept. of Computer Science, Rutgers University, 1983. Bauer. M., A basis for the acquisition of procedures from protocols, IJCAI-4,226-231, 1975. Biermann, A. W., On the inference of Turing machines from sample computations, Artificial Intelligence. Vol. 3, 181-198. 1972. Biermann, A. W., and Feldman, J. A., On the synthesis of finite-state machines from samples of their behavior, IEFF Transactions on Computers, Vol. C-21,592-597. 1972. Biermann, A. W., and Krishnaswamy, R., Constructing programs from example computations, IEEE Transacttons on Software Engineering, Vol. SE-2, 141-153, 1976. Hardy, S., Synthesis of LISP functions from examples, lJCA1 4, 240-245, 1975. Langley, P. W., Descriptive discovery processes: Experiments in Baconian science. Rep. No. CS-80-121, Computer Science Department, Carnegie-Mellon University, 1980. Michalski, R. S. 
On the quasi-minimal solution of the general covering problem, in V International Symposium on Information Processing, FCIP 69, Yugoslavia, Vol. A3, 2-12, 1969. Michalski. R. S., A theory and methodology of inductive learning, Artificial Intelligence. Vol. 20, 111-161, 1983. Mitchell, T. M. Version spaces: an approach to concept learning. Rep. No. STAN-CS-78-711, Computer Science Dept., Stanford University. (Doctoral dissertation.) 1977. Neves, D. M., Learning procedures from examples. Unpublished doctoral dissertation, Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA, 1981. Norman, D., The trouble with UNIX, Datamarion, 139-150. November, 1981. Quinlan. J. R. Learning efficient classification procedures and their application to chess end-games, in Machine Learning, Michalski, R. S., Carbonell, J. G., and Mitchell, T. M., eds.. Palo Alto: Tioga, 1982. Rich, C., and Shrobe, H. E., Initial report on a Lisp programmer’s apprentice, Rep. No. AI-TR-354. Artifcial Intelligence lab. MIT, 1976. Sacerdoti, E. D.. A structure for plans and behavior. North-Holland. 1977. Shapiro, E. Y., Inductive inference of theories from facts. Res. Rept. 192, Department of Computer Science, Yale University, 1981. Shaw, D. E.. Swartout, W. R., Green, C. C., Inferring LISP programs from examples, IJCAI4, 351-356, 1975. Siklossy, L, and Sykes, D., Automatic program synthesis from examples problems, IJCAI-4,268-273, 1975. Stallman. R. M., and Sussman. G. J., Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis, Artificial Intelligence. Vol. 9. No. 2, 1977. Sussman, G. J., A computer model of skill acquisition, New York: American Elsevier, 1975. Utgoff, P. E., Adjusting bias in concept learning, Proceedmgs of the International Machine Learning Workshop, Department of Computer Science, University of Illinois, Urbana. 1983. VanLehn, K., Felicity conditions for human skill acquisition: validating an AI-based theory, Rep. No., CIS-21, Xerox Palo Alto Research Center, 1983. Winston, P. H., Learning structural descriptions from examples, in The psychology of computer vision, New York: McGraw-Hill, 157-209, 1975.
Towards Chunking as a General Learning Mechanism John E. Laird, Paul S. Rosenbloom and Allen Newell Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 ABSTRACT Chunks have long been proposed as a basic organizational unit for human memory. More recently chunks have been used to model human learning on simple perceptual-motor skills. In this paper we describe recent progress in extending chunking to be a general learning mechanism by implementing it within a general problem solver. Using the Soar problem-solving architecture, we take significant steps toward a general problem solver that can learn about all aspects of its behavior. We demonstrate chunking in Soar on three tasks: the Eight Puzzle, Tic-Tat-Toe, and a part of the RI computer-configuration task. Not only is there improvement with practice, but chunking also produces significant transfer of learned behavior, and strategy acquisition. 1 Introduction Chunking was first proposed as a model of human memory by Miller [8], and has since become a major component of theories of cognition. More recently it has been proposed that a theory of human learning based on chunking could model the ubiquitous power law of practice [12]. In demonstrating that a practice mechanism based on chunking is capable of speeding up task performance, it was speculated that chunking, when combined with a general problem solver, might be capable of more interesting forms of learning than just simple speed ups [14]. In this paper we describe an initial investigation into chunking as a general learning mechanism. Our approach to developing a general learning mechanism is based on the hypothesis that all complex behavior - which includes behavior concerned with learning - occurs as search in problem spaces [ll]. One image of a system meeting this requirement consists of the combination of a performance system based on search in problem spaces, and a complex, analytical, learning system also based on search in problem spaces [lo]. An alternative, and the one we adopt here, is to propose that all complex behavior occurs in the problem-space-based performance system. The learning component is simply a recorder of experience. It is the experience that determines the form of what is learned. Chunking is well suited to be such a learning mechanism because it is a recorder of goal-based experience [13, 141. It caches the processing of a subgoal in such a way that a chunk can substitute for the normal (possibly complex) processing of the subgoal the next time the same subgoal (or a suitably similar one) is genera&d. It is a task-independent mechanism that can be applied to all subgoals of any task in a system. Chunks are created during performance, through experience with the goals processed. No extensive analysis is required either during or after performance. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory Under Contract F33615-81-K-1539. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or impiied, of the Defense Addanced Research Projects Agency or the US Government. The essential step in turning chunking into a general learning mechanism is to combine it with a general problem-space problem solver. 
One candidate is Soar, a reflective problem-solving architecture that has a uniform representation and can create goals to reason about any aspect of its problem-solving behavior [5]. Implementing chunking within Soar yields four contributions towards chunking as a general learning mechanism. 1. Chunking can be applied to a general problem solver to speed up its performance. 2. Chunking can improve all aspects of a problem solver’s behavior. 3. Significant transfer of chunked knowledge is possible via the implicit generalization of chunks. 4. Chunking can perform strategy acquisition, leading to qualitatively new behavior. Other systems have tackled individual points, but this is the first attempt to do all of them. Other work on strategy acquisition deals with the learning of qualitatively new behavior [6, lo], but it is limited to learning only one type of knowledge. These systems end up with the wandering bottle-neck problem - removal of a performance bottleneck from one part of a system means that some other locale becomes the bottleneck [lo]. Anderson [l] has recently proposed a scheme of knowledge compilation to be a general learning mechanism to be applied to all of cognition, although it has not yet been used on complex problem solving or reasoning tasks that require learning about all aspects of behavior. 2 Soar - A General Problem-Solving Architecture Soar is a problem solving system that is based on formulating all activity (both problems and routine tasks) as heuristic search in problem spaces. A problem space consists of a set of Hates and a set of operators that transform one state into another. Starting from an initial state the problem solver applies a sequence of operators in an attempt to reach a desired state. Soar uses a production system’ to implement elementary operators, tests for goal satisfaction and failure, and search control - information relevant to the selection of goals, problem spaces, states, and operators. It is possible to use a problem space that has no search control, only operators and goal recognizers. Such a space will work correctly, but will be slow because of the amount of search required. In many cases, the directly available knowledge may be insufficient for making a search-control decision or applying an operator to a state. When this happens, a difficulty occurs that results in the automatic creation of a subgoal to perform the necessary function. In the subgoal, Soar treats the difficulty as just another problem to solve; it selects a problem space for the subgoal 1 A modified versions of OpsS [3], which admits parallel execution of all satisfied productions. 188 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. in which goal attainment is interpreted as finding a state that resolves the difficulty. Thus, Soar generates a hierarchy of goals and problem spaces. The diversity of task domains is reflected in a diversity of problem spaces. Major tasks, such as configuring a computer will have a corresponding problem space, but so also will each of the various subtasks. In addition, problem spaces will exist in the hierarchy for performing tasks generated by problems in the system’s own behavior, such as the selection of an operator to apply, the application of an operator to a state, and testing for goal attainment. With such an organization, all aspects of the system’s behavior are open to problem solving when necessary. We call this property universal subgoaling (51. 
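As a rough illustration of this control structure, the sketch below (a simplification in Python, not Soar code; the names and the stand-in evaluation function are assumptions made for the example) shows a decision procedure that, when its preferences cannot single out one candidate, creates a selection subgoal and resolves the tie by evaluating each candidate.

def decide(candidates, preferences, make_subgoal):
    """Pick one object; on a tie the difficulty itself becomes a subgoal."""
    top = max(preferences.get(c, 0) for c in candidates)
    best = [c for c in candidates if preferences.get(c, 0) == top]
    if len(best) == 1:
        return best[0]
    return make_subgoal(best)                 # universal subgoaling: recurse on the difficulty

def selection_subgoal(tied_operators):
    # Selection problem space: evaluate each tied candidate and return the best.
    return max(tied_operators, key=evaluate)

def evaluate(operator):
    # Stand-in for an evaluate-operator subgoal: apply the operator to a copy
    # of the state and score the resulting state.
    return {"up": 1, "down": 3, "left": 2}.get(operator, 0)

print(decide(["up", "down", "left"], preferences={}, make_subgoal=selection_subgoal))   # down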
Figure 1 shows a small example of how these subgoals are used in Soar. This is the subgoal/problem-space structure that gets generated while trying to take steps in a task problem space. Initially (A), the problem solver is at State1 and must select an operator. If search control is unable to uniquely determine the next operator to apply, a subgoal is created to do the selection. In that subgoal (IS), a selection problem space is used that reasons about the selection of objects from a set. In order to break the tie between objects, the selection problem space has operators to evaluate each candidate object. A. F. Task goal r, Select Operator tatel B. peratorl EY perator2 perator3 Task goal r-1 peratorl El-+ perator2 perator3 s&?ct . . . Evaluate[Opl(Statel)] State1 Er State2 perator 1 perator2 pera tot-3 Evaluate[Op2(Statel)] State1 peratorl El+ perator2 perator3 State4 Figure 1: Eight Puzzle subgoal/problem space structure. Evaluating an operator, such as Operator1 in the task space, is a complex problem requiring a new subgoal. In this subgoal (C), the original task problem space and state (Statel) are selected. Operator1 is applied, creating a new state (State2). The evaluation for State2 is used to compare Operator1 to the other operators. When Operator1 has been evaluated, the subgoal terminates, and then the whole process is repeated for the other two operators (Operator2 and Operator3 in D and E): If, for example, Operator2 creates a state with a better evaluation than the other operators, it will be designated as better than them. The selection subgoal will terminate and the designation of Operator2 will lead to its selection in the original task goal and problem space, At this point Operator2 is reapplied to State1 and the process continues (F). 3 Chunking in Soar Chunking was previously defined [14] as a process that acquired chunks that generate the results of a goal, given the goal and its parameters. The parameters of a goal were defined to be those aspects of the system existing prior to the goal’s creation that were examined during the processing of the goal. Each chunk was represented as a set of three productions, one that encoded the parameters of a goal, one that connected this encoding in the presence of the goal to (chunked) results, and a third production that decoded the results. These chunks were learned bottom-up in the goal hierarchy; only terminal goals - goals for which there were no subgoals that had not already been chunked - were chunked. These chunks improved task performance by substituting efficient productions for complex goal processing. This mechanism was shown to work for a set of simple perceptual- motor skills based on fixed goal hierarchies [13]. At the moment, Soar does away with two of the features of chunking that existed for psychological modeling purposes: the three production chunks, and the the bottom-up nature of chunking. In Soar, single-production chunks are built for every subgoal that terminates. The power of chunking in Soar stems from Soar’s ability to automatically generate goals for problems in any aspects of its problem-solving behavior: a goal to select among alternatives leads to the creation of a production that will later control search; a goal to apply an operator to a state leads to the creation of a production that directly implements the operator; and a goal to test goal-satisfaction leads to a goal-recognition production. As search-control knowledge is added, performance improves via a reduction in the amount of search. 
If enough knowledge is added, there is no search; what is left is a method - an efficient algorithm for a task. In addition to reducing search within a single problem space, chunks can completely eliminate the search of entire subspaces whose function is to make a search- control decision, apply an operator, or recognize goal-satisfaction. The conditions of a chunked production need to test everything that was used in creating the results of the subgoal and that existed before the subgoal was invoked. In standard problem solvers this would consist of the name of the goal and its parameters. However, in Soar there are no fixed goal names, nor is there a fixed set of parameters. Once a subgoal is selected, all of the information from the prior goal is still available. The problem solver makes use of the information about why the subgoal was created and any of the other information that it needs to solve the problem. For each goal generated, the architecture maintains a condition-list of all data that existed before the goal was created and which was accessed in the goal. A datum is considered accessed if a production that matched it fires. Whenever a production is fired, all of the data it accessed that existed prior to the current goal are added to the goal’s condition-list. When a goal terminates (for whatever reason), the condition-list for that goal is used to build the conditions of a chunk. Before being turned into conditions, the data is selectively variablized so that the conditions become tests for object descriptions instead of tests for the specific objects experienced. These variables are restricted so that two distinct variables can not match the same object. The actions of the chunk should be the results of the goal. In traditional architectures, a goal produces a specific predefined type of result. However, in Soar, anything produced in a subgoal can potentially be of use in the parent goal. Although the potential exists for all objects to be relevant, the reality is that only a few of them will actually be useful. In figuring out the actions of the chunk, Soar starts with everything created in the goal, but then prunes away the information that does not relate directly to objects in any supergoal. What is left is turned into production actions after being variablized in accordance with the conditions. At first glance, chunking appears, to be simply a caching mechanism with little hope of producing results that can be used on other than exact duplicates of tasks it has already aJtempted. However, if a given task shares subgoals with another task, a chunk learned for one task can apply to the other, yielding across-task 2 Those that are pruned are also removed from memory because they are intermedtate results that wdl never be used again. transfer of learning. Within-trial transfer of learning can occur when a subgoal arises more than once during a single attempt on a task. Generality is possible because a chunk only contains conditions for the aspects that were accessed in the subgoal. This is an implicit generalization, by which many aspects of the context -the irrelevant ones -are automatically ignored by the chunk. 4 Demonstration In this section we describe the results of experiments on three tasks: the Eight Puzzle, Tic-Tat-Toe, and computer configuration (a part of the Rl expert-system implemented in Soar[15]). 
These tasks exhibit: (1) speed-ups with practice; (2) within-trial transfer of learning; (3) across-task transfer of learning; (4) strategy acquisition (the learning of paths through search spaces); (5) knowledge acquisition in a knowledge-intensive system; and (6) learning of qualitatively different aspects of behavior. We conclude this section with a discussion of how chunking sometimes builds over-general productions.

4.1 Eight Puzzle

The states for the Eight Puzzle, as implemented in Soar, consist of different configurations of eight numbered tiles in a three-by-three grid; the operators move the blank space up (U), down (D), left (L) and right (R) [5]. Search-control knowledge was built that computed an evaluation of a state based on the number of tiles that were moved in and out of the desired positions from the previous state. (To avoid tight loops, search control was also added that avoided applying the inverse of the operator that created a given state.) At each state in the problem solving, an operator must be selected, but there is insufficient search-control knowledge to intelligently distinguish between the alternatives. This leads to the selection being made using the set of selection and evaluation goals described in Section 2.

The first column of Figure 2 shows the behavior of Soar without chunking in the Eight Puzzle problem space. All of the nodes off the main path were expanded in evaluate-operator subgoals (nodes on the main path were expanded once in a subgoal, and once after being selected in the top goal). At two points in the search the correct operator had to be selected manually because the evaluation function was insufficient to pick out the best operator; our purpose is not to evaluate the evaluation function, but to investigate how chunking can be used in conjunction with search-control knowledge.

Figure 2: Within-trial and Across-task Transfer in Eight Puzzle. (Five search trees: Task 1 with no learning, while learning, and after learning; Task 2 while learning; and Task 1 after learning on Task 2.)

When Soar with chunking is applied to the task, both the selection and evaluation subgoals are chunked. During this run (second column of Figure 2), some of the newly created chunks apply to subsequent subgoals in the search. This within-trial transfer of learning speeds up performance by dramatically reducing the amount of search. The third column in the figure shows that after one run with learning, the chunked productions completely eliminate search. To investigate across-task learning, another experiment was conducted in which Soar started with a learning trial for a different task - the initial and final states are different, and none of the intermediate states were the same (the fourth column). The first task was then attempted with the productions learned from the second task, but with chunking turned off so that there would be no additional learning (the final column). The reduced search is caused by across-task transfer of learning - some subgoals in the second trial were identical in all of the relevant ways to subgoals in the first trial. This happens because of the interaction between the problem solving only accessing information relevant to the result, and the implicit generalization of chunking only recording the information accessed.
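For concreteness, the sketch below renders the Eight Puzzle operators and the evaluation just described in Python (our own rendering; the goal configuration and scoring details are assumptions consistent with the text, not the actual Soar productions): an operator is scored by how many tiles it moves into or out of their desired positions relative to the previous state.

GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)                               # 0 marks the blank

MOVES = {"U": -3, "D": 3, "L": -1, "R": 1}     # index offset applied to the blank

def apply_move(state, move):
    b = state.index(0)
    t = b + MOVES[move]
    if not 0 <= t < 9 or (MOVES[move] in (-1, 1) and b // 3 != t // 3):
        return None                            # the move would leave the board
    s = list(state)
    s[b], s[t] = s[t], s[b]
    return tuple(s)

def tiles_in_place(state):
    return sum(1 for i, v in enumerate(state) if v != 0 and v == GOAL[i])

def evaluation(prev_state, new_state):
    # +1 for each tile moved into its desired position, -1 for each moved out.
    return tiles_in_place(new_state) - tiles_in_place(prev_state)

state = (1, 2, 3,
         8, 4, 0,
         7, 6, 5)
for m in "UDLR":
    nxt = apply_move(state, m)
    if nxt is not None:
        print(m, evaluation(state, nxt))       # U -1, D -1, L +1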
4.2 Tic-Tac-Toe

The implementation of Tic-Tac-Toe includes only the basic problem space - the state includes the board and who is on move, the operators make a mark on the board for the appropriate player and change who is on move - and the ability to detect a win, loss or draw [5]. With just this knowledge, Soar searches depth-first through the problem space by the sequence of: (1) encountering a difficulty in selecting an operator; (2) evaluating the operators in a selection subgoal; (3) applying one of the operators in an evaluation subgoal; (4) encountering a difficulty in selecting an operator to apply to the resulting state; and (5) so on, until a terminal state is reached and evaluated.

Chunking in Tic-Tac-Toe yields two interesting results: (1) the chunks detect board symmetries, allowing a drastic reduction in search through within-trial transfer; (2) the chunks encode search-control knowledge so that the correct moves through the space are remembered. The first result is interesting because there is no knowledge in the system about the existence of symmetries, and without chunking the search bogs down terribly by re-exploring symmetric positions. The chunks make use of symmetries by ignoring orientation information that was not used during problem solving. The second point seems obvious given our presentation of chunking; however, it demonstrates the strategy acquisition [6, 10] abilities of chunking. Chunking acquires strategic information on the fly, using only its direct experience, and without complex post-processing of the complete solution path or knowledge learned from other trials. The quality of this path depends on the quality of the problem solving, not on the learning.

4.3 R1

Part of the R1 expert system [7] was implemented in Soar to investigate whether Soar can support knowledge-intensive expert systems [15]. Figure 3 shows the subgoal structure that can be built up through universal subgoaling, including both subgoals that implement complex operators (heavy lines) and subgoals that select operators (thin lines to Selection subgoals). Each box shows the problem-space operators used in the subgoal. The actual subgoal structure extends much further wherever there is an ellipsis (...). This subgoal structure does not pre-exist in Soar, but is built up as difficulties arise in selecting and applying operators.

Figure 3: Subgoal Structure in R1-Soar.

Table 1 presents statistics from the application of R1-Soar to a small configuration task. The first three runs (Min. S-C) are with a minimal system that has only the problem spaces and goal detection defined. This base system consists of 232 productions (95 productions come with Soar, 137 define R1-Soar). The final three runs (Added S-C) have 10 additional search-control productions that remove much of the search. In the table, the number of search-control decisions is used as the time metric because decisions are the basic unit of problem solving.

Run Type                    Initial Prod.   Final Prod.   Decisions
Min. S-C                    232             232           1731
Min. S-C with chunking      232             291           485
Min. S-C after chunking     291             291           7
Added S-C                   242             242           150
Added S-C with chunking     242             254           90
Added S-C after chunking    254             254           7

Table 1: Run Statistics for R1-Soar.

The first run shows that with minimal search control, 1731 decisions are needed to do the task. If chunking is used, 59 productions are built during the 485 decisions it took to do this task. No prior chunking had occurred, so this shows strong within-trial transfer. After chunking, rerunning the same task takes only 7 decisions. When Soar is run with 10 hand-crafted search-control rules, it only takes 150 decisions. This is only a little more than three times faster than Soar without those rules when chunking was used. When chunking is applied to this situation - where the additional search control already exists - it still helps by decreasing to 90 the number of decisions for the first trial. A second trial on this task once again takes only 7 decisions.

4.4 Over-generalization

The within-trial and across-task transfer in the tasks we have examined was possible because of implicit generalization. Unfortunately, implicit generalization leads to over-generalization when there is special-case knowledge that was almost used in solving a subgoal. In Soar this would be a production for which most but not all of the conditions were satisfied during a problem-solving episode. Those conditions that were not satisfied either tested for the absence of something that is available in the subgoal (using a negated condition) or for the presence of something missing in the subgoal (using a positive condition). The chunk that is built for the subgoal is over-general because it does not include the inverses of these conditions - negated conditions for positive conditions, and positive conditions for negated conditions. During a later episode, when all of the conditions of a special-case production would be satisfied in a subgoal, the chunk learned in the first trial bypasses the subgoal. If the special-case production would lead to a different result for the goal, the chunk is over-general and produces an incorrect result.

Figure 4 contains an example of how the problem solving and chunking in Soar can lead to over-generalization. Consider the situation where O is to move in state 1. It already has the center (E), while X is on a side (B). A tie arises between all the remaining moves (A, C, D, F-I), leading to the creation of a subgoal. The Selection problem space is chosen, in which each of the tying moves is a candidate to be evaluated. If position I is evaluated first, it leads to a line of play resulting in state 2, which is a win for O because of a fork. On return to the Selection problem space, move I is immediately chosen as the best move, the original tie-subgoal terminates, move I is made, and O goes on to win. When returning from the tie-subgoal, a chunk is created, with conditions sensitive to all aspects of the original state that were tested in productions that fired in the subgoals. All positions that have marks were tested (A-C, E, I), as well as those positions that had to be clear for O to have a fork (G, F). However, positions D and H were not tested. To see how this production is over-general, consider state 3, where O is to move. The newly chunked production, being insensitive to the X at position D, will fire and suggest position I, which leads to a loss for O.

Figure 4: Over-generalization in Tic-Tac-Toe.

On a Symbolics 3600, Soar usually runs at about 1 second per decision. Chunking adds an overhead of approximately 15%, mostly to compile new productions.
The increased number of Productions has no affect on the overall rate if the chunked Productions are fully integrated into the existing production-match network. Over-generalization is a serious problem for Soar if we want to encode real tasks that are able to improve with experience. However, over-generalization is a problem for any learning system that works in many different environments and it leads to what is called negative-transfer in humans. We believe that the next step in handling over-generalization is to investigate how a problem solver can recover from over-general knowledge, and then carry out problem solving activities so that new chunks can be learned that will override the over-general chunks. This would be similar to John Anderson’s work on discrimination learning using knowledge compilation [l]. 191 5 Conclusion In this paper we have taken several steps towards the establishment of chunking as a general learning mechanism. We have demonstrated that it is possible to extend chunking to complex tasks that require extensive problem solving. In experiments with the Eight Puzzle, Tic-Tat-Toe, and a part of the RI computer-configuration task, it was demonstrated that chunking leads to performance improvements with practice. We have also contributed to showing how chunking can be used to improve many aspects of behavior. Though this is only partial, as not all of the different types of problem solving arose in the tasks we demonstrated, we did see that chunking can be used for subgoals that involve selection of operators and application of operators. Chunking has this generality because of the ubiquity of goals in Soar. Since all aspects of behavior are open to problem solving in subgoals, all aspects are open to learning. Not only is Soar able to learn about the task (chunking the main goal), it is able to learn about how to solve the task (chunking the subgoals). Because all aspects of behavior are open to problem solving, and hence to learning, Soar avoids the wandering bottle-neck problem. In addition to leading to performance speed ups, we have shown that the implicit generalization of chunks leads to significant within- trial and across-task transfer of learning. This was demonstrated most strikingly by the ability of chunks to use symmetries in Tic- Tat-Toe positions that are not evident to the problem solving system. And finally, we have demonstrated that chunking, which on first glance is a limited caching function, is capable of strategy acquisition. It can acquire the search control required to turn search-based problem solving into an efficient method. Though significant progress has been made, there is still a long way to go. One of the original goals of the work on chunking was to model human learning, but several of the assumptions of the original model have been abandoned on this attempt, and a better understanding is needed of just why they are necessary. We also need to understand better the characteristics of problem spaces that allow interesting forms of generalization, such as use of symmetry to take place. We have demonstrated several forms of learning, but others, such as concept formation [9], problem space creation [4], and learning by analogy [2] still need to be covered before the proposal of chunking as a general learning mechanism can be firmly established. References 1. Anderson, J. Ft. Knoweldge compilation: The general learning mechanism. Proceedings of the 1983 Machine Learning Workshop, 1983. 2. Carbonell, J. G. 
Learning by analogy: Formulating arrd generalizing plans from past experience. In Machine Learning: An ArtificiaI intelligence Approach, Ft. S. Michalski, J. G. Carbonell, & T. M. Mitchell, Eds., Tioga, Palo Alto, CA, 1983. 3. Forgy, C. L. OPS5 Manual. Computer Science Department, Carnegie-Mellon University, 1961. 4. Hayes, J. R. and Simon, H. A. Understanding complex task instructions. In Cognition and Instruction, Klahr, D., Ed.,Erlbaum, Hillsdale, NJ, 1976. 5. Laird, J. E. Universal Subgoaling. Ph.D. Th., Computer Science Department, Carnegie-Mellon University, 1983. 6. Langley, P. Learning Effective Search Heuristics. Proceedings of IJCAI-83, IJCAI, 1983. 7. McDermott, J. ‘Xl: A rule-based configurer of computer systems,” Artificial intelligence 79 (1982), 39-88. 8. Miller, G. A. “The magic number seven, plus or minus two: Some limits on our capacity for processing information.” Psychological Review 63 (1956) 81-97. 9. Mitchell, T. M. Version Spaces: An approach to concept /earning. Ph.D. Th., Stanford University, 1978. 10. Mitchell, T. M. Learning and Problem Solving. Proceedings of IJCAI-83, IJCAI, 1983. 11. Newell, A. Reasoning, problem solving and decision processes: The problem space as a fundamental category. In Attention and Performance VIII, R. Nickerson, Ed.,Erlbaum, Hillsdale, NJ, 1980. 12. Newell, A. and Rosenbloom, P. Mechanisms of skill acquisition and the law of practice. In Learning and Cognition, Anderson, J. A., Ed.,Erlbaum, Hillsdale, NJ, 1981. 13. Rosenbloom, P. S. The Chunking of Goal Hierarchies: A Mode/ of Practice and Stimulus-Response Compatibility. Ph.D. Th., Carnegie-Mellon University, 1983. 14. Rosenbloom, P. S., and Newell, A. The chunking of goal hierarchies: A generalized model of practice. Proceedings of the 1983 Machine Learning Workshop, 1983. 15. Rosenbloom, P. S., Laird, J. E., McDermott, J. and Newell, A. Rl -SOAR: An Experiment in Knowledge-Intensive Programming in a ProblemSolving Architecture. Department of Computer Science, Carnegie-Mellon University, 1984. 192
Task Frames in Robot Manipulation Dana H. Ballard Department of Computer Science University of Rochester Rochester, NY 14627 Abstract Most robotics computations refer to a single world- based frame of reference; however, several advantages accrue with the introduction of a second frame, termed a task frame. A task frame is a coordinate frame that can be attached to different objects that are to be manipulated. The task frame is related to the world-based coordinate frame by a simple geometric transformation. The virtues of such a frame are: (1) certain actions that are difficult to specify in the world frame are easily expressed in the task frame: (2) the task-frame to task-uorld transformation provides a formalism for describing physical actions; and (3) the task frame can be related to the world frame by proprioception. *This research was supported in part by the National Science Foundation under Grant WCS-8203920. 1. Introduction Robotics problems are best considered at different levels of abstraction. This is because many problems can be analyzed effectively vvithin a given abstraction level without appealing to other levels. For example, in robot planning it is often helpful to consider actions symbolically without involving details of the servomechanisms that implement such actions. The standard symbolic description for the action move xfiom y to z in a STRIPS-like expression [Fikes and Nilsson, 19711 is: MOVW, Y, d Preconditions: CLEAR(x): CLEAR(z): On(x,y) Postconditions: ON(x,z); CLEAR(y). The principal advantage of such a system is that symbolic plans involving several actions which achieve a set of goal conditions can be created systematically. The disadvantage of this level is that important geometric details are suppressed. For example, the predicate CLEAR(x) may depend on the geometry of the environment and the manipulator. Objects that might be CLEAR with respect to a multiple degree-of-freedom manipulator might not be CLFAR to a loh-degree-of- freedom Cartesian manipulator. Similar kinds of arguments can be made for the lowest level of abstraction, the servomechanism itself. The basic problem of the servomechanism is to exert forces on objects in the world and transport them along desired trajectories. The control problem of “given a trajectory, find the actuator torques required to follou it,” can be solved independently from the symbolic plan using only the inverse dynamics of the manipulator. At the symbolic level, problems can be solved independent of the details; just as important, at the servomechanism level problems can be solved independent of the context. The natural level of abstraction to introduce between the symbolic level and servo level is a geometric level. The geometric level provider an explicit representation of space that includes geometrical and mechanical structure. For example, while CLEAR(x) may be a necessary property for an action at the symbolic level, the geometrical level contains the necessary structure that allows this property to be established. Another element of the geometrical level is that it must be able to communicate with the servomechanism level. More specifically, we argue that the geometrical level must contain structure that functions as a command language for the servomechanism level. To first order, we argue that manipulators need only these three levels, which are described in Table 1. 16 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. 
Table 1: Levels of Abstraction in Robot Manipulation 9 (b) Trajectory as a transformation locus. Level Description symbolic STRIPS-like description of actions; planning done by chaining appropriate actions to change current state to goal state geometric servomechanism representation of space in which actions take place: command language for servomechanism level detailed description of manipulator: inertial parameters, friction models, manipulator internal geometry The central idea of this paper is that of a task fww. A task frame is described at the geometric level and is a geometric coordinate frame that is attached to the object being manipulated. To understand the notation for a task frame, one must appreciate that current robot manipulation and control strategies tend to refer computations to a single reference frame, termed the wor/Li frame. An example of such a strategy is that of compliance [Paul, 19811, where the manipulator is constrained to move along given geometric surfaces. Such surfaces have been termed C-surfaces [Mason, 19811. For example, when turning a crank. the crank handle will traverse a given path in the world frame, as shoun by Figure la, which is taken from [Brady et al.. 19821. Figure 1: (a) Trajectory as a world-frame locus. - -u- The crux of the rest of the paper is to describe these advantages in detail. Section 2 contains the basic notions from geometry and mechanics needed to understand the subsequent material. Section 3 describes the interface between the geometric and symbolic levels. The focus is on the recognition and implementation of actions that \ I .a involve mechanics. Section 4 describes the interface A task frame is related to C surfaces but is different in a crucial way. Rather than thinking about the C- surface in the world frame, the task frame is intrinsically fixed to the object being manipulated (for the duration of the task) and is related to the world frame by the obvious geometric transformation. That is, given any two of the set (task frame, world frame, task-world transformation), the third is easily computed. Figure lb shows the characterization of the task frame for the problem of turning the crank. The two representations are equivalent in the sense that either one could be transformed into the other. The important difference is that uithin the task frame formalism, the problem of turning the crank can be simply described as “push in the el-direction until hou meet resistence.” The expedient of introducing an intrinsic frame and separating thz intrinsic frame from its transformation has the following adv an tages: 1) 2) 3) many actions that are difficult to express in the world frame have very srmplr: expressions with respect to the task frame and tranbfortnation (in fact, they can be described by invariants with respect to these two entities); the frame-transformation decoupling allous us to relate simply geomtitric and mechanical changes in the world with corresponding symbolic descriptions; and the transformation betueen the task frame and world frame can be ccjmputed via both visic~n and proprioception. 17 between the geometric and servomechanism levels. The focus is on the self-calibration necessary to relate the task and world frames, and the automatic relations between the two levels introduced by the task frame formalism. 2. Geometry and Mechanics A. 
Geometry An orthogonal geometric coordinate frame consists of three vectors, el, e2, e3, such that any two pair are mutually perpendicular (ei * ej = 0, for i f j), they form a right-handed coordinate system (e3 = el x e2), and all the vectors are unit vectors (ei . ei = 1). To denote the task frame the subscript t is used, i.e., elt, e2t, e3t, and to denote the world frame the subscript w is used. Each frame has an origin x = (x, y, z), so that xt is the origin of the task frame and xw is the origin of the world frame. The world frame can be thought of as described in terms of master coordinates. In this case xw = 0, elw = (1, 0, 0), e2w = (0, 1, 0), and e3w = (0, 0, 1). To denote a frame (x, eL, e2, e3) we use E. Given the task frame and world frame. the transformation between them can be specified by a rotation and a translation. The understanding is that the rotation is done first since rotation and translation do not commute. The transformation is specified by an origin change Ax = (AK, A}‘, AZ) and a rotation (n, 8). The later notation stands for a rotation 8 about a unit vector n where n is expressed in world coordinates. The transformation can be computed directly as: Ax = xt - xw and, assuming a quaternion representation for rotations [Per-tin and Webb, 19831: n = Normalize((elt - eLw) x (e2t - eZ,)) e = (-(n x q)(n x qw>) l/2 B. Mechanics A robot manipulator is a series of links. Each link is independently controlled by its own servomotor. The links can be described by joint angles e (rotary joints) which are controlled by applying torques 7. One such configuration is the two-link. planar manipulator suggested by [Horn, 19751. Figure 2 shows the manipulator geometry. The force and torque applied at the tip of the manipulator can be described by a vector(f, n). The external force and torque can be related to the joint torques by general dynamic equations F (1, 8, f, n) = 0 (1) To drive the arm, S, f, n are assumed known and Equation 1 is solved for the control torques 1. This way of solving (1) is known as the inverse dynamics. That is, the torques 7 can be related to (f, n) by a set of equations: I. = f-l@, f, n). Recently developed solution techniques have made it practical to solve the inverse dynamics equations in real time [Luh et al., 1980: Hollerbach, 1980). The easier problem is readily also solved: that is, given 2, f, and n, determine 8. ‘T Figure 2: Two-link planar manipulator. Besides the dynamics problem there is the kinematics problem. Given the state of the system in terms of @, d@/dt, d2g/dt2, one must determine the motion of the manipulator tip xp, dxp/dt, d2xp/dt2. This is the easy part. The reverse problem--given x find &-is harder but can be solved analytically by designing manipulators with special geometries. One such geometry is a spherical wrist [Feather-stone, 19831. In c’ery simple manipulator geometries both the inverse dynamics and inverse kinematics may have analytical solutions. For example, in the two-link planar manipulator, the joint torques 71 and 72 may be expressed as 71 = A + B + Cfx + Dfy + n ?2 - - E + F + Gf, + Hfy + n (2) 18 where A, B, C, D, E, F, G, and H are expressions involving the manipulator joint angles S, joint angular velocities and accelerations, mass, and inertial parameters. The letters A and E denote terms dependent on velocity and acceleration: the letters B and F denote terms dependent on gravity. These equations show that the problem of controlling (n, f) in this case is underdetermined. 
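As a concrete illustration of the static part of equation (2), the sketch below assumes the wrist torque n is zero and drops the velocity- and gravity-dependent terms A, B, E and F; what remains is the familiar Jacobian-transpose relation between a desired tip force and the joint torques of the two-link planar arm. The link lengths and the closed-form Jacobian are textbook assumptions introduced for the example, not quantities given in the paper.

from math import cos, sin

def static_joint_torques(theta1, theta2, fx, fy, l1=0.5, l2=0.5):
    """tau = J^T f for the two-link planar arm, statics only, wrist torque n = 0."""
    s1, c1 = sin(theta1), cos(theta1)
    s12, c12 = sin(theta1 + theta2), cos(theta1 + theta2)
    # Jacobian of the tip position (x, y) with respect to (theta1, theta2).
    J = [[-l1 * s1 - l2 * s12, -l2 * s12],
         [ l1 * c1 + l2 * c12,  l2 * c12]]
    tau1 = J[0][0] * fx + J[1][0] * fy         # first column of J dotted with f
    tau2 = J[0][1] * fx + J[1][1] * fy         # second column of J dotted with f
    return tau1, tau2

print(static_joint_torques(0.3, 0.9, fx=2.0, fy=0.0))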
However, if the problem is simplified slightly, e.g., the gripper is a finger such that n = 0, then the external forces at the tip can be directly related to the control torques. Our example assumes such a gripper. 3. Geometry and Symbols: Recognizing and Implementing Actions This section shows how the geometrical notions of a task frame and transform can be of general use in a symbolic planner. In particular: (1) the framework allows the interrelation of symbolic and geometric descriptions of actions; (2) the task frame and transformation allows a simple description of tasks in terms of invariants; and (3) the task frame allows checking for collisions between objects. To start with the first point. consider the description of falling. If an object is falling then its origin is approaching that of the world frame origin such that the z-velocity is negative. In other words, FALLING(obj) < = > (vz < 0) where the understanding is that expressions involving positions and orientations and changes in such are statements about the world-frame to task-frame transformation. Expressing the process of falling as a rate of change has the effect of making it comparable to a static situation. The logical expression (vz < 0) must hold throughout the falling process. The task frame orientation may also play a role in the description. For example: RIGHT-SIDE-UP(obj) < = > ALIGNED(e3,, e3w) In this as in the previous example, a first order logic syntax is assumed with expressions consisting of predicates denoted by upper case, terms denoted by lower case. The point of these examples is that for problems involving Newtonian mechanics, the task-frame to world-frame transformation provides the basis for systematically relating symbolic expressions and geometrical expressions. 19 The task frame structure leads naturally to a formalism at the geometric level for describing actions. This formalism has three principal advantages: (1) its elements are all invariants: (2) the formalism is sufficiently abstract that the same action can be used in a variety of contexts; and (3) its structure can bc interpreted by the servomechanism. The format that we adopt for representing actions has a STRIPS-like syntax. Each action has a set of preconditions that must be true for the ,action to be applicable, a set of while conditions that must hold during the execution of the action, and a set of stopping conditions. Thus an action is described as: ActionName(params) if (preconditions) then do ( whileconditions} until (stoppingconditions) where the parameters are used by the various conditions. This structure takes advantage of the previous development which related symbolic constraints and geometric constraints. Consider the example of closing a door. This can be expressed symbolically as: DOORCLOSI%G(door) if TOUCHING(door) then do PUSH(door) until ARRESTED(door) but also can be expressed geometrically as DOORCLOSING(door) if (E,O,O) then do (f,O,O) & .4LIGNED(Et,Ehandle) until (O,O,O) This syntax illustrates a number of important points which we will nou elaborate. In the first place. note that we have been able to decouple the force constraints from the geometric constraints. (A similar decoupling is seen in C-surfaces for the cases of pure force or pure position control [Mason, 19811.) Thus rather than specify that force control of the handle in the world frame [Mason, 19311 uhere it has a varying locus, it is specified in the task frame where it has a very simple structure. 
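One possible rendering of this action format is sketched below; the data structures, the execute loop, and the crude door model are assumptions made for the illustration, not taken from the paper. The door-closing instance follows the geometric form above: a small contact force as precondition, alignment of the task frame with the handle frame as an invariant, and arrested motion as the stopping condition.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    preconditions: List[Callable]     # must hold before starting
    whileconditions: List[Callable]   # invariants maintained during execution
    stopconditions: List[Callable]    # execution ends when all of these hold

def execute(action, world, servo_step, max_steps=1000):
    if not all(p(world) for p in action.preconditions):
        return "not applicable"
    for _ in range(max_steps):
        if all(s(world) for s in action.stopconditions):
            return "done"
        if not all(w(world) for w in action.whileconditions):
            return "failure"           # an invariant was violated
        servo_step(world)              # one control step at the servomechanism level
    return "timeout"

EPS = 0.05                             # small contact force epsilon
door_closing = Action(
    "DOORCLOSING",
    preconditions=[lambda w: abs(w["contact_force"]) >= EPS],            # (eps, 0, 0)
    whileconditions=[lambda w: w["aligned"]],                            # ALIGNED(Et, Ehandle)
    stopconditions=[lambda w: w["pushing"] and w["door_velocity"] == 0.0])

def push_step(w):                      # crude stand-in for force control along e1
    w["pushing"] = True
    w["door_angle"] = max(0.0, w["door_angle"] - 0.1)
    w["door_velocity"] = 0.0 if w["door_angle"] == 0.0 else -0.1

world = {"contact_force": 0.1, "aligned": True, "pushing": False,
         "door_angle": 0.4, "door_velocity": -0.1}
print(execute(door_closing, world, push_step))    # done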
The notation ft = (fl, f2, f3) stands for exertjbrce ft in the task frame coordinate s?lstem. Ihe quantity E is a small contact force, less than that required to move the object, v+ hereas f is large enough to start the object moving. The key virtue of the task frame is that the force ft can be an invariant during the action. The second part of the while condition for closing the door expresses the relationship between the task frame and some other frame expressed in world frame coordinates. For the earlier discussion one can appreciate that given Et and Ehandle. *e predicate ALIGNED(E,,Eh) is easy to compute. The following scenario is imagined for the action DOORCLOSING. Given that the handle is grasped, the servomechanism applies a force in the elt direction of the task frame to move the door. The door moves until it bumps into the door frame, at which time the frame exerts a force f to cancel the manipulator torques. This satisfies the stopping condition. Details such as the microdynamics of the contact are left to the servomechanism, and we will defer the discussion of these details until the next section. The above strategy does not address the problem of slamming the door. This can happen when the force is too large. To deal with this example we will add one condition and change notation slightly. First we add the while condition (11~11 E [YO - AU, ~0 f A”]). This states that the speed of the task frame with respect to the world frame is to be constrained in the interkal YO -t AU. As a shorthand, we use capital letters to specify interFals, i.e., vo f AU = VO. The second change we will make is to relax the force specification in the task frame to just the specification of the axis to be controlled, in this case ‘1. The while condition becomes: {whilecondition) = FORCECO\TROL(etl) and (11~11 f L’o) and ALIGhED(E,,Eh) The understanding is that the servomechanism can use this to generate the appropriate commands. One simplistic possibility is: if llvll > YO + AV then fl : = fl + A if llvll < ~0 + AU then fl : = fl - A Notice that although the while conditions have become more complex, their essential structure has been maintained in that they are invariants with respect to the action. We now turn to the second advantage of the formalism, which is that, once appropriate bindings have been established, a wide variety of different situations can be described by the same task frame description of the action. Figure 3 shows two different tasks which can be handled by DOORCLOSING. In the first, gravity is assumed to be perpendicular to the plane, i.e., the figure shows a top view. Figure 3: Different tasks which can be handled by DOORCLOSING. In these examples, recognizing that the described action is one of DOORCLOSING from the geometric features would be difficult but perhaps not impossible. More plausibly, the relevant geometric features of the problem. which in this case are specified by Et, may already be known. In any case, the geometric level is the essential starting point from which the relevant constraints can be synthesized. We note that some details are being finessed at this lebel of description. For example, what if the masses in these examples are such that the servomechanism cannot achieve II\11 E Vg? This case of failure has to be resolved at the planning level, and aside from characterizations of the failure mode, we are not addressing these kinds of problems in this paper. 20 Another problem is that of collision detection. 
Task frames provide a partial mechanism for handling this problem. First instantiate all the geometrical objects with respect to the task frame, and then use the details of the geometric representation, e.g., constructive solid geometry, to check for solid material from two or more objects occupying the same physical space. 4. Geometry and Servomechanisms: Self-Calibration In order for the task frame scheme to work there must be some way of computing the transformation between the task frame and the world frame. [r-r this section we discuss ways of doing this and show how they can be integrated into the real-time control program of the servomechanism. One way of establishing the desired transformation is through visual input. This much-researched problem can be done in constant time on a parallel machine if suitable visual features can be identified [Ballard and Sabbah, 1983; Hrechanyk and Ballard, 19831. But from a robotics context, a more interesting method is to use the inverse dynamics and kinematics of the servomechanism itself. To see how this might work, let us reconsider the problem of closing the door. From the inverse kinematics of the manipulator it is possible to calculate the end effector velocity. In the normal case of door closing, once the door moves, its velocity vector is available in world- frame coordinates. In this case of compliant motion, the door can only move in the direction oj’ the el axis of the task frame. Thus, el can be computed as el = Normalize(v), where v can be measured from the kinematic equations. Since el is the crucial axis in the closing task, the other axes may not need to be updated beyond enforcing the orthogonality condition. If they should be updated, an additional constraint is that, for two different times tl and t2, then e3 can be computed as e3 = Norrnalize(v(tl) x v(t2)), where the two velocities must have different directions. This is a natural constraint in the door closing situation. In pushing the block the velocity needs to wobble arbitrarily while being pushed to establish the second direction vector. Another way to calculate the transform is via force proprioception. Assuming the velocity parallel to the door surface is clamped at zero, the transformation parameter, which in 2D is a single angle (denoted by, OL in Figure 3) can be readily computed. The transform is assumed to be initialized at the beginning of the action and continuously updated during the action. One way of updating is: 1) use I(t), f(t) to solve for e(t): 2) use g(t) to solve for x,(t); 3) a’ = tan-l((dyp/dt)/(dxp/dt)); 4) if la - all < ~1, then CI := CX’: else failure. (3) Now we turn to the stopping condition. If stopped, B = F = 0 in (2). This leads to: 1) compute zs using B = F = 0 and f, measured from proprioception: 2) if ]x - zs] < ~2, then stop. T ; I I B-e can d. start CO&O/ SC0p Siyngl + Low 4 Sr’q nal c I 1& Toverse %f namics Figure 4 Details of servomechanism level. c Prop rio - Ception 9 c 21 These kinds of computations can be utilized by a servo controller in the manner depicted in Figure 4. To see how the controller works, consider again the door closing action. First, at the symbolic level, the symbolic description of the action can be automatically translated into task-frame constraints. 
Second, actions at the geometric level can be automatically translated into servomechanism commands: the if conditions are utilized to generate a start signal, the while conditions are utilized to synthesize a control function, and the until conditions are utilized in a termination monitor. In other words, the invariants at the geometric level become set points for the controller at the servomechanism level. Third, at the servo level, the controller computes a command signal in the task frame. This command is translated into world-frame coordinates by the task-world transformation. The inverse dynamics allows the actuator torques to be synthesized from the desired control signal. The actuator torques have an effect on the plant which is monitored by proprioception. Proprioception uses the inverse dynamics but assumes the torques and system state are known in order to estimate the world forces and velocities. These are checked against the termination conditions and also used to update the task-world transformation. The termination condition is propagated to the symbolic level where ARRESTED(door) is set to TRUE.

5. Summary

Task frames make many issues that arise in robot planning and manipulation simpler. The change from earlier work has been inspired in part by recent work at the servomechanism level which has allowed the development of dynamically accurate plant models [Mukerjee et al., 1984] and the aforementioned fast solutions to the problems of inverse kinematics and dynamics. This means that the main portion of manipulator control can be carried out as an open loop rather than a closed loop. Before these advances, manipulator control had to be segregated into a planning phase and an acting phase, and the dynamics of the acting phase could not be introspected during the planning phase. With accurate plant models and open-loop control strategies, the planning and acting phases can be more intimately linked.

It is important to acknowledge that this paper does not tackle many issues that must be solved to make robot manipulation practical. Some of these are: trajectory planning, recovering from failures, and the representation of large amounts of detailed spatial information. The exposition is limited to characterizing single actions and showing how they may be characterized as geometrical and mechanical invariants. Hopefully this representational strategy will make the solution of the other problems easier.

6. References

Ballard, D.H. and D. Sabbah, "Viewer independent shape recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence 5, 6, November 1983.

Brady, M., J.M. Hollerbach, T.L. Johnson, T. Lozano-Perez, and M.T. Mason. Robot Motion: Planning and Control. Cambridge, MA: The MIT Press, 1982.

Featherstone, R., "Position and velocity transformations between robot end effector coordinates and joint angles," Int. J. Robotics Research 2, 1983.

Fikes, R.E. and N.J. Nilsson, "STRIPS: A new approach to the application of theorem proving to problem solving," Artificial Intelligence 2, 3/4, 189-208, 1971.

Hollerbach, J.M., "A recursive formulation of Lagrangian manipulator dynamics," IEEE Trans. Systems, Man, Cybernetics SMC-10, 11, 730-736, 1980.

Horn, B.K.P., "Kinematics, statics, and dynamics of two-D manipulators," MIT AI Lab, Working Paper 99, June 1975.

Hrechanyk, L.M. and D.H. Ballard, "A connectionist model for shape perception," Computer Vision Workshop, Rindge, NH, August 1982; also appeared as "Viewframes: A connectionist model of form perception," DARPA Image Understanding Workshop, Washington, D.C., June 1983.

Luh, J.Y.S., M.W. Walker, and R.P.C. Paul, "On-line computational scheme for mechanical manipulators," J. Dynamic Systems, Measurement, Control 102, 69-76, 1980.

Mason, M.T., "Compliance and force control for computer controlled manipulators," IEEE Trans. Systems, Man, and Cybernetics 11, 6, 418-432, 1981.

Mukerjee, A., R.C. Benson, and D.H. Ballard, "Towards self-calibration in robot manipulator systems: Dynamics enhancement through trajectory deviation analysis," Working Paper, Depts. of Mechanical Engineering and Computer Science, U. Rochester, March 1984.

Paul, R.P. Robot Manipulators: Mathematics, Programming, and Control. Cambridge, MA: MIT Press, 1981.

Pervin, E. and J.A. Webb, "Quaternions in computer vision and robotics," Proc., IEEE Computer Vision and Pattern Recognition Conf., 382-383, Washington, DC, June 1983.
Path Relaxation: Path Planning for a Mobile Robot
Charles E. Thorpe
Computer Science Department, Carnegie-Mellon University

Abstract. Path Relaxation is a method of planning safe paths around obstacles for mobile robots. It works in two steps: a global grid search that finds a rough path, followed by a local relaxation step that adjusts each node on the path to lower the overall path cost. The representation used by Path Relaxation allows an explicit tradeoff among length of path, clearance away from obstacles, and distance traveled through unmapped areas.

1. Introduction

Path Relaxation is a two-step path-planning process for mobile robots. It finds a safe path for a robot to traverse a field of obstacles and arrive at its destination. The first step of path relaxation finds a preliminary path on an eight-connected grid of points. The second step adjusts, or "relaxes", the position of each preliminary path point to improve the path.

One advantage of path relaxation is that it allows many different factors to be considered in choosing a path. Typical path-planning algorithms evaluate the cost of alternative paths solely on the basis of path length. The cost function used by Path Relaxation, in contrast, also includes how close the path comes to objects (the further away, the lower the cost) and penalties for traveling through areas out of the field of view. The effect is to produce paths that neither clip the corners of obstacles nor make wide deviations around isolated objects, and that prefer to stay in mapped terrain unless a path through unmapped regions is substantially shorter. Other factors, such as sharpness of corners or visibility of landmarks, could also be added for a particular robot or mission.

Path Relaxation is part of Fido, the vision and navigation system of the CMU Rover mobile robot [7]. The Rover, under Fido's control, navigates solely by stereo vision. It picks about 40 points to track, finds them in a pair of stereo images, and calculates their 3D positions relative to the Rover. The Rover then moves about half a meter, takes a new pair of pictures, finds the 40 tracked points in each of the new pictures and recalculates their positions. The apparent change in position of those points gives the actual change in the robot's position.

Fido's world model is not suitable for most existing path-planning algorithms. They usually assume a completely known world model, with planar-faced objects. Fido's world model, on the other hand, contains only the 40 points it is tracking. For each point, the model records its position, the uncertainty in that position, and the appearance of a small patch of the image around that point. Furthermore, Fido only knows about what it has seen; points that have never been within its field of view are not listed in the world model. Also, the vision system may fail to track points correctly, so there may be phantom objects in the world model that have been seen once but are no longer being tracked. All this indicates the need for a data structure that can represent uncertainty and inaccuracy, and for algorithms that can use such data.

Section 2 of this paper outlines the constraints available to Fido's path planner. Section 3 discusses some common types of path planners, and shows how they are inadequate for our application. The Path Relaxation algorithm is explained in detail in Section 4, and some additions to the basic scheme are presented in Section 5.
Finally, Section 6 discusses shortcomings of Path Relaxation and some possible extensions. 2. Constraints in intclllgcnt path planner needs to bring lots of information to bear on the problem. This section discusses some of the information ~rxfi~l for mobile robot path planning, and shows how the constraints for mobile robot paths differ from those for manipulator trajectories. Low dirncnsionality. A ground-based robot vehicle is constrained to three degrees of freedom: x and y position and orientation. In particular, the CMU Rover has a circular cross-section, so for path planning the orientation does not matter. This makes path planning only a 2D problem, as compared to a 6 dimensional problem for a typical manipulator. Imprecise control. Even under the best of circumstances, a mobile robot is not likely to be very accurate: perhaps a few inches, compared to a few thousandths of an inch for manipulators. The implication for path planning is that it is much less important to worry about exact fits for mobile robot p&hs. If the robot could, theoretically, just barely fit through a certain opening, then in practice that’s probably not a good way to go. Computational resources are better spent exploring alternate paths rather than worrying about highly accurate motion calculations. Cumul;lti~c error. Errors in a dead-reckoning system tend to accumulate: a small error in heading, for instance, can give rise to a large error in position as the vehicle moves. The only way to reduce error is to periodically measure position against some global stiindard, which can be time-consuming. The Rover, for example, does its measurement by stereo vision, taking a few minutes to compute its exact position. So a slightly longer path that stays farther away from obstacles, and allows longer motion between stops for measurement, may take less time to travel than a shorter path that requires more frequent stops. In contrast, a manipulator can reach a location with approximately the same error rcgardlcss of what path is taken to arrive there. There is no cumulative error, and no time spent in reorientation. Unknown arcas. Robot manipulator trajectory planners usually know about all the obstacles. The Rover knows only about those that it has seen. This leaves unknown areas outside its field of view and behind obstacles. It is usually prcfcrable to plan a path that traverses only known empty regions. but if that path is much longer than the shortest path it may bc worth looking at the unknown regions, Fur/ly ohjcrts. Not only do typical m,uriptrlator path-planners know about all the objects, they know precisely hhcre each object is. ‘I’his information might come, for instance. from the CAD system that designed the robot workstatlon. Mobile robots. on the other hand, usually sense the world as they go. Fide. instead of having precise 318 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. bounds for objects, knows only about fuzzy points. ‘I’hc location of a point is only known to the precision of‘ the stcrco vision system, and the extent of an object beyond the point is cntircly unknown. In summary, a good system for mobile robot path planning will be quite different from a manipulator path planner. Mobile robot path planners need to handle uncertainty in the sensed world model and errors in path execution. They do not have to worry about high dimcnsionality or extremely high accuracy. 
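To make the fuzzy-point world model concrete, here is a small Python sketch of the kind of record Fido could keep for each tracked point. The field names and the fade() helper (corresponding to the phantom-object "fading" mentioned later, in Section 5) are illustrative assumptions, not the actual Fido data structures.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class TrackedPoint:
        """One of the ~40 tracked points: a 3D position relative to the
        robot, the uncertainty of that position from the stereo match,
        and a small image patch used to re-find the feature later."""
        position: np.ndarray     # (x, y, z) relative to the Rover
        uncertainty: float       # radius of the positional error
        patch: np.ndarray        # grey-level window around the feature
        confidence: float = 1.0  # lowered when in view but not re-found

    def fade(point: TrackedPoint, decay: float = 0.5) -> TrackedPoint:
        # Phantom handling: reduce the weight of a point each step it lies
        # inside the field of view but is not tracked by the vision module.
        point.confidence *= decay
        return point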
Section 3 of this paper discusses some existing path planning algorithms and their shortcomings. Section 4 then presents the algorithms used by Path Relaxation, and shows how they address these problems. short paths and obstacle avoidance is the Regular Grid m&hod. This covers the world with a regular grid of points, each connected with its 4 or 8 neighbors to form a graph. In existing regular grid impli.mentations, the only information stored at a node is whether it is inside an object or not. Then the graph is searched, and the shortest grid path returned. This straightforward grid search has many of the same “too close” problems as the vertex graph approaches. 3. Approaches to Path Planning This section outlines several approaches to path planning and some of the drawbacks of each approach. All of these methods except the potential fieids approach abstract the search space to a graph of possible paths. This graph is then searched by some standard search tcchniquc, such as breadth-first or A* IS], and the shortest path is returned. The important thing to note in the following is the information made explicit by each representation and the information thrown away. Free Space methods. [2, 3,9] One type of p&h planner explicitly deals with the space between obstacles. Paths are forced to run down the middle of the corridors between obstacles, for instance on the Voronoi diagram of the free space. Free space algorithms suffer from two related problems, both resulting from a data abstraction that throws away too much information. The first problem is that paths always run down the middle of corridors. In a narrow space, this is desirable, since it allows the maximum possible robot error without hitting an object, But in some cases paths may go much filrther out of their way than necessary. The second problem is that the algorithms do not use clearance information. The shortest path is always selected, &en if it involves much closer tolerances than a slightly longer path. Path Relaxation combines the best features of grid search and potential fields. Using the rolling marble analogy, the first step is a global grid search that finds a good valley for the path to follow. The second step is a local relaxation step, similar to the potential field approach, that moves the nodes in the path to the bottom of the valley in which they lie. The terrain (cost function) consists of a gradual slope towards the goal, hills with sloping sides for obstacles, and plateaus for unexplored regions. The height of the hills has to do with the confidence that there really is an object there. Hill diamctcr depends on robot precision: a more precise robot can drive closer to an object, so the hills will be tall and narrow, while a less accurate vehicle will need more clearance, requiring wide, gradually tapering hillsides. This section first presents results on how large the grid size can be without missing paths. It next discusses the mechanism for assigning cost to the nodes and searching the grid. Finally, it presents the relaxation step that adjusts the positions of path nodes. Grid Size. How large can a grid be and still not miss any possible paths? That depends on the number of dimensions of the problem, on the connectivity of the grid, and on the size of the vehicle. It also depends on the vehicle’s shape: in this section, we discuss the simplest case, which is a vchiclc with a circular cross-section. 319 Vertex Graphs. [S, 10,6] Another class of algorithms is based on a graph connecting pairs of vertices. 
For each pair of vertices, if the line between them does not intersect any obstacle, that line is added to the graph of possible paths. Vertex graph algorithms suffer from the "too close" problem: in their concern for the shortest possible path, they find paths that clip the corners of obstacles and even run along the edges of some objects. It is, of course, possible to build in a margin of error by growing the obstacles by an extra amount; this may, however, block some paths. Another approach that could explicitly represent the conflicts between short paths and obstacle avoidance is the Regular Grid method described earlier.

Both free space and vertex graph methods throw away too much information too soon. All obstacles are modeled as polygons, all paths are considered either open or blocked, and the shortest path is always best. There is no mechanism for trading a slightly longer path for more clearance, or for making local path adjustments. There is also no clean way to deal with unmapped regions, other than to close them off entirely.

The Potential Fields [1, 4] approach tries to make those tradeoffs explicit. Conceptually, it turns the robot into a marble, tilts the floor towards the goal, and watches to see which way the marble rolls. Obstacles are represented as hills with sloping sides, so the marble will roll a prudent distance away from them but not too far, and will seek the passes between adjacent hills. The problem with potential field paths is that they can get caught in dead ends: once the marble rolls into a box canyon, the algorithm has to invoke special-case mechanisms to cut off that route, backtrack, and start again. Moreover, the path with the lowest threshold might turn out to be a long and winding road, while a path that must climb a small ridge at the start and then has an easy run to the goal might never be investigated.

The area to be traversed can be covered with a grid in which each node is connected to either its four or its eight nearest neighbors. For a four-connected grid, if the spacing were r, there would be a chance of missing diagonal paths. At left in Figure 1, for instance, there is enough room for the robot to move from (1,1) to (2,2), yet both nodes (1,2) and (2,1) are blocked. To guarantee that no paths are missed, the grid spacing must be reduced to r * sqrt(2) / 2, as in the center of Figure 1. That is the largest size allowable that guarantees that if diagonally opposite nodes are covered, there is not enough room between them for the robot to safely pass. Note that the converse is not necessarily true: just because there is a clear grid path does not guarantee that the robot will fit. At this stage, the important thing is to find all possible paths, rather than to find only possible paths. If the grid is eight-connected, as in the right of Figure 1 (each node connected to its diagonal, as well as orthogonal, neighbors), the problem with diagonal paths disappears. The grid spacing can be a full r, while guaranteeing that if there is a path it will be found.

4. Path Relaxation

Figure 1: Grid Size Problems

Grid Search. Once the grid size has been fixed, the next step is to assign costs to paths on the grid and then to search for the best path along the grid from the start to the goal. "Best", in this case, has three conflicting requirements: shorter path length, greater margin away from obstacles, and less distance in uncharted areas. These three are explicitly balanced by the way path costs are calculated.
A path’s cost is the sum of the costs of the nodes through which it passes, each multiplied by the distance to the adjacent nodes. (In a 4-connected graph all lengths are the same, but in an &connected graph we have to distinguish between orthogonal and diagonal links.) The node costs consist of three parts to explicitly represent the three conflicting criteria. 1. Cost for distance. Each node starts out with a cost of one unit, for length traveled. unlikely to bc exactly on a grid point. If the grid path is topologicaily cquivalcnt to the optimal path (i.c. goes on the same side of each object), the grid path can bc iteratively improved to approximate the optimal path (see Section 5). But if the grid path at any point goes on the “wrong” side of an obstacle, then no amount of local adjustment will yield the optimal path. The chance of going on the wrong side of an obstacle is rclatcd to the sic.e of the grid and the shape of the cost VS. distance function. For a given grid size and cost firnction, it is possible to put a limit on how much worse the path found could possibly be than the optimal path. If the result is too imprecise, the grid size can bc decreased until the additional computation time is no longer worth the improved path. 2. Cost for near objects. Each object near a node adds to that node’s cost. The ncarcr the obstacle, the more cost it adds. ‘I’hc exact slope of the cost function will dcpcnd on the accuracy of the vchiclc (a more accurate vchiclc can afford to come closer to object\), and the \&clc’s speed (a faster vehicle can afford to go farther out of iis way), among other factors. 3. Cost for within or near an unmapped region. The cost for traveling in an unmapped region will depend on the vehicle’s mission. If this is primarily an exploration trip, for example, the cost might be relatively low. ‘I’hcrc is also a cost added for being near an unmapped region, using the same sort of function of distance as is used for obstacles. This provides a buffer to keep paths from coming too close to potentially unm‘lppcd hazards. 320 The first step of Path Relaxation is to set up the grid and read in the list of obstacles and the vehicle’s current position and field of view. The system can then calculate the cost at each node, based on the distances to nearby obstacles and whether that node is within the field of view. The next step is to create links from each node to its 8 neighbors. The start and goal locations do not necessarily lie on grid points, so special nodes need to be crcatcd for them and linked into the graph. Links that pass through an obstacle, or between two obstacles with too little clearance fo{ the vehicle, can bc detected and deleted at this stage. A few details on the shape of the cost fimction deserve mention. Many different cost functions will work, but some shapes are harder to handle properly. ‘fhc first shape we tried was linear. This had the advantage of being easy to calculate quickly, but gave problems when two objects were close together. l’hc sum of the costs from two nearby objects was equal to a linear function of the sum of the distances to the objects. This creates ellipses of equal cost, including the dcgeneratc ellipse on the line between the two objects. In that case, there was no reason for the path to pick a spot midway between the objects, as we had (incorrectly) expected. Instead, the only change in cost came from changing distance, so the path went wherever it had to to minimize path length. 
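As a concrete reading of the cost rule above, the sketch below combines the three node-cost components and weights each link by its length. The obstacle falloff is left as a caller-supplied function, since the following paragraphs discuss which falloff shape works best, and for brevity the extra buffer cost for being near (rather than inside) an unmapped region is omitted. All names and constants here are illustrative assumptions, not code from the Fido system.

    import math

    C_UNMAPPED = 3.0   # extra cost for a node in an unmapped region (assumed value)

    def node_cost(node, obstacles, in_unmapped, falloff):
        cost = 1.0                                   # one unit for distance traveled
        for obs in obstacles:                        # proximity penalty per obstacle
            cost += falloff(math.dist(node, obs))
        if in_unmapped(node):                        # penalty for uncharted areas
            cost += C_UNMAPPED
        return cost

    def path_cost(path, obstacles, in_unmapped, falloff):
        """Sum of node costs weighted by link length (1 or sqrt(2) on an
        8-connected grid); each link is charged the mean of its endpoint
        costs, one reasonable reading of the rule above."""
        total = 0.0
        for a, b in zip(path, path[1:]):
            link = math.dist(a, b)
            total += 0.5 * (node_cost(a, obstacles, in_unmapped, falloff)
                            + node_cost(b, obstacles, in_unmapped, falloff)) * link
        return total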
In our first attempt to rcmcdy the situation we replaced the linear slope with an exponentially decaying value. This had the desired effect of creating a saddle between the two peaks. and forcing the path towards the midpoint between the objects. The problem with hxponentials is that they never reach zero. For a linear tinction, there was a quick test to see if a given object was close enough to a given point to have any influence. If it was too far away, the function did not have to be evaluated. For the exponential cost tinction. on the other hand, the cost tinction had to be calculated for every obstacle for each point. We tried cutting off the size of the exponential, but this left a small ridge at the cxtremum of the function, and paths tended to run in nice circular arcs along those ridges. A good compromise, and the function WC finally scttlcd on, is a cubic function that ranges from 0 at some maximum distance, set by the user, to the obstacle’s maximum cost at 0 distance. This has both the advantages of having a good saddle between neighboring obstacles and of being easy to compute and bounded in a local area. The system then searches this graph for the minimum-cost path from, the start to the goal. The search itself is a standard A” [8] search. The estimated total cost of a path, used by A* to pick which node to expand next, is the sum of the cost so far plus the straight-line distance from the current location to the goal. This has the effect. in regions of equal cost, of finding the path that most closely approximates the straight-line path to the goal. The path found is guaranteed to be the lowest-cost path on the grid, but this is not necessarily the overall optimal path. First of all, even in areas lvith no obstacics the grid path may bc longer than a straight-line path simply because it has to follow grid lines. For a 4-connected grid, the worst case is diagonal lines, where the grid path is sqrt(2) times as long as the straight-lint path. For an 8-ionnccted grid, the cquivalcnt worst case is a path that goes equal distances forward and diagonally. This gives a path about 1.08 times as long as the straight-line path. In cases where the path curves around several obstacles, the extra path length can be even more significant. Secondly, if the grid path goes bctwecn two obstacles, it may bc non-optimal because a node is placed closer to one obstacle than to the other. A node placed exactly half way between the two obstacles would, for most types of cost functions, have a lower cost. The placement of the node that minimizes the overall path cost will dcpcnd both on node cost and on path length, but in any case is No& motion has to be rcstrictcd. If nodes were allowed to move in any direction. they would all end up at low cost points, with many nodes bunched together and a few long links bctwccn them. This would not give a very good picture of the actual cost along the path. So in order to keep the nodes spread out, a node’s motion is restricted to be perpendicular to a line between the preceding and following nodes. Furticrmore, at any one step a node is allowed to move no more than one unit. AS a node moves, all three factors of cost arc affcctcd: distance traveled (from the preceding node, via this node, to the next node), proximity to objects, and relationship to unmapped regions. The combination of thcsc factors makes it difficult to directly solve for minimum cost node Reluxatiun. 
Grid search finds an approximate path; the next step is an optimization step that fine-tunes the location of each node on the path to minimize the total cost. One way to do this would be to precisely define the cost of the path by a set of non-linear equations and solve them simultaneously to analytically determine the optimal position of each node. This approach is not, in general. computationally feasible. The approach used here is a relaxation method. Each node’s position is adjusted in turn, using only local information to minimize the cost of the path sections on either side of that node. Since moving one node may affect the cost of its neighbors, the entire procedure is repeated until no node moves farther than some small amount. position. Instead, a binary search is used to find that position to whatever accuracy is desired. The relaxation step has the cffcct of turning jagged lines into straight ones where possible, of finding the “saddle” in the cost function between two objects, and of curving around isolated objects. It also does the “right thing” at region boundaries. The least cost path crossing a border between different cost regions will follow the same path as a ray of light refracting at a boundary between media with different transmission velocities. The relaxed path will approach that path. 5. Additions to the Basic Scheme One extension we have tried is to vary the costs of individual obstacles. The current vision system sometimes reports phantom objects, and sometimes loses real objects that it had been tracking correctly. The solution to this is to “fade” objects by decreasing their cost each step that they are within the field of view but not tracked by the vision module. Another extension implcmcnted is to rc-use existing paths whenever possible. At any one step, the kehiclc will usually move only a fraction of the length of the planned path. If no new objects are seen during that step, and if the vehicle is not too far off its planned course, it is possible to USC the rest of the path as planned. Only if new objects have been seen that block the planned path is it necessary to replan from scratch. The relaxation step can bc greatly speeded up if it runs in parallel on several computers. Although an actual parallel implementation has not yet been done, a simulation has been written and tested. 6. Remaining Work Path Relaxation would be easy to extend to higher dimensions. It could be used, for example, for a 3D search to bc used by underwater vehicles maneuvering through a drilling platform. Another use for higher-dimensional scarchcs would be to include rotations for asymmetric vehicles. Yet another application would bc to model moving obstacles; then the third dimension becomes time, with the cost of a grid point having to do with disL?nce to all objects at that time. This would have a slightly different flavor than the other higher-dimensional extensions; it is possible to go both directions in x, y, z, and theta, but only one direction in the time dimension. Another possible cxtcnsion has to do with smoothing out sharp corners. All wheels on the Rover steer, so it c,m follow a path with sharp corners if necessary. h4any other vehicles. arc not so maneuverable; they may steer like a car, with a minimum possible turning radius. In order to accommodate those vehicles, it would be necessary to restrict both the graph search and relaxation steps. A related problem is to use a smoothly curved path rather than a series of linear segments. 
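The relaxation pass described in Section 4 is compact enough to sketch in a few lines of Python. The local_cost argument is any function scoring the two links around a node (for instance, built from the node costs sketched earlier); a bracketed ternary search stands in for the paper's binary search over the perpendicular offset. Everything here is an illustrative reconstruction under those assumptions, not the CMU implementation.

    import math

    def relax_node(prev, node, nxt, local_cost, tol=1e-3):
        """Slide node along the perpendicular to the prev->nxt chord, at most
        one grid unit either way, to the offset minimizing local_cost."""
        dx, dy = nxt[0] - prev[0], nxt[1] - prev[1]
        norm = math.hypot(dx, dy) or 1.0
        px, py = -dy / norm, dx / norm              # unit perpendicular direction

        def at(t):
            return (node[0] + t * px, node[1] + t * py)

        lo, hi = -1.0, 1.0                          # at most one unit of motion per pass
        while hi - lo > tol:
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if local_cost(prev, at(m1), nxt) < local_cost(prev, at(m2), nxt):
                hi = m2
            else:
                lo = m1
        return at(0.5 * (lo + hi))

    def relax_path(path, local_cost, eps=1e-2):
        """Adjust each interior node in turn; repeat until no node moves
        farther than eps (endpoints stay fixed)."""
        path = list(path)
        moved = True
        while moved:
            moved = False
            for i in range(1, len(path) - 1):
                new = relax_node(path[i - 1], path[i], path[i + 1], local_cost)
                if math.dist(new, path[i]) > eps:
                    moved = True
                path[i] = new
        return path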
An interesting direction to pursue is multiple-precision grids. This could make it possible to spend more effort working on precise motion through cluttered areas, and less time on wide open spaces. Path relaxation, as well as almost all existing path planners, deals only with geometric information. A large part of a robot’s world knowledge, bowever, may be in partially symbolic form. For example, a map assembled by the vehicle itself may have very precise local patches, each mcasurcd from one robot location. The relations between patches, though, will probably be much less precise, since they depend on robot motion from one step to the next. Using such a mixture of constraints is a hard problem. Aclcnon~led~ert~errts ‘I‘hanks to tI,ms Moravcc. I>arry Matthics, and Rich Wallace ti)r advice and cncouragcmcnt. This rcscarch was partially supported by Office of Naval Research contract N00014-81-K-0503. Example Run. Figure 2 is a run from scratch, using real data extracted from images by the Fido vision system. The circles are obstacles, where the size of the circle is the uncertainty of the stereo vision system. The dotted line surrounds the arca out of the field of view. The start position of the robot is approximately (0, -.2) and the goal is (0, 14.5). The grid path found is marked by 0’s. After one iteration of relaxation. the path is marked by l’s, and after the second (and, in this case, last) relaxation, by 2’s. --- ---. References 1. J. Randolph Andrcws. Impedance Control as a Framework for Implcrncnting Obstacle Avoidance in a Manipulator. Master Th., MIT, 1983. 2. Rodney I3rooks. Solving the Find-Path Problem by Representing Free Space as Generalized Cones. Al Memo 674, Massachusetts Institute of ‘l‘cchnology, h/lay, 1982. 3. Georgcs Giralt, Ralph Sobck, and Raja Chatila. A Multi-Level Planning and Navigation System for a Mobile Robot; A First Approach to Hilarc. Proceedings of IJCAI-6, August, 1979. 4. Oussama Khatib. Dynamic Control of Manipulators in Operational Space. Sixth CISM-IFToMM Congress on Theory of Machines and mechanisms, New Delhi, India, Dcccmber, 1983. 5. Tomas I.ozano-Perez and Michael A. Wesley. “An Algorithm for Planning Collision-Free Paths Among Polyhedral Obstacles.” CACM 22,10 (October 1979). 6. Hans Moravcc. Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover. Tech. Rept. CMU-RI-TR-3, Carnegie- Mellon Univesity Robotics Institute, September, 1980. 7. Hans Moravec. The CM U Rover. Proceedings of AAAI-82, August, 1982. 8. N. Nilsson. Problem Solving Methods in Arfificial Infelligence. McGraw-Hill, 1971. 9. Colm O’Dunlaing, Micha Sharir, and Chee Yap. Retraction: a new approach to motion-planning. Courant Institute, November, 1982. 10. Alan M. Thompson. The Navigation System of the JPL Robot. Proceedings of IJCAI-$1977. 321
‘l‘hrer findpath problcrns are considered. First, ihe probkrn of finding a collision frrc trajectory for a tentacle niariipulator is examirlcd. Sec- Abstract Three Findpath Problems Richard S. Wallace Department of Computer Science Carnegie-Mellon Univeristy Pittsburgh, PA 15213 Olld, a nrw firidpalh nlgorithrn for a mobile robot rover is prcsl~Jltcl~. ‘I’h is :~lgorilhrr~ dill& froJn clarlicr ones in its USC Oi- qll:tJl tilativc in- formation about the uncertainty in the position of the robot to keep the robot away from obstaclrs without going too near them and with- out going too far out of its way to avoid them. Third, a method for coordinating two rrioving arms so thilt Lhcy avoid collisions with each other is presented. The two-arJn fiudpath algorithm here is restricted to cases where coordirlated collision- free trajectories can bc found by controlling the velocities of the arnls. Each of these findpath problems suggests a hc>uristic end of the paper. to find its solution and these are discussed at the 0. Introduction With the rnnturaLion of the theory of Configuration Space[l] and the fouutf~t,ional work of Schwartz and Sharir on the ‘Piano Movers’ problcrJli2] it is tcmpGJlg to conclude that the final word on the findpath probleJu has been said. Indeed for the case of a two-dimensional robot moving around fixed plarlar obstacles (or a three-dimensional robot who can be reduced to a two-dimensional one by projection) many al- gorithms for Lhc findpath problem have been proposed [3,4,5,6,7,8,9], no one of which is obviously best for simple cases. I3ut the theory of configuration space tr:ls us that for an n-dof robot we must search an n dimensional Euclidean space for a collision-free path when the ob- stacles are fixed. Also, theoretical work on the findpath problem for linkages indicates that its algorithm is Nh’P-complete[lO]. So the issue for computer scicrlce becomes not solving the findpath problem in gcn- era1 but examining special cases one by one, to see if WC can find an etlicicnt solution for any. Three !iJidpaLh problems are exarnirled here: for a planar tentacle Jnanipulator; for a mobile robot, with tljc additional twist that we don’t want, the robot to come Loo near any obstacles nor to navigate too far away from them when avoiding them and; for two coordinated Jnanipulafor arms. Each solutions to a findpath problem discussed here suggests a heuristic that may bc useful for solving other findpath prGblcms. The l.ccristirs are discussttd at the end of the paper. 1. Tentacle Findpath Problem Spiral fuiictioris (i.e. monotonically increasiJig polar furlctioris of the form r = f(0)) app ear to be reasonable carldidntes for modeling ten- tar&. The important ronsideratiorl from the robot theoretical point of view is that the total length of such a r,lanipulator is constant. The diagram in figure 1 illustrates a tentacle marlipulator modeled by a logarithmic spiral (a function of the form r = CO). This tentacle robot can “spiral out” or “wind up” arid also rotat,c at its shoulder about the origin. The parameters for this type of arm are its rotational orienta- tion q5 and a parameter a, which dcterrniJ!es how much thr JnaJGpula%or has “spiraled out.” For the findpath problcrn, the logarithmic spiral tentacle has the advantage that a closed form of its inverse kinematic solution is obtained easily [L2]. 
The method used to solve the tentacle Lindpath problem is an approx- ~J~~;r.Liorl Iilolhod, in which obstacles in the iij;irjipul:tLor’s l)o:;itir,jj sp;jcc are hounded by instanccas of a class of obstacles, called tentacle obata- cles, which arc simple to snap into tho robot’s configuration SURCC. The obstacles under considcrntion have four sic&. The IL/~ side is the side first illtcrscctcd by the ~~r~taclc if iL is rotating clockwisr arourltl the origin. The right side is the side First intcrscctod by the tcrltaclc as it rotates counter-clockwise. The near side is the side first intersected by the tentacle as it holds its $ value corlstanl arid increases u, that is, ~hc side nearest the origin ‘I’l~c fur side is the side of Lhc obstacle farthest from the origin. For rcasoIjs described below, the near and far sides are always circular arcs and the right and left sides are particular polyliiies. The near side of a tentacle obstacle is a circular arc between two angles 41 and 42 so that when ~$1 5 (b 5 42 the tentacle parameter a must be sufficiently small to ensure that the point on the tentacle furthest from tbc origin is closer to the origin than the circular arc. In figure 2 the near sides of the two obstacles correspond with thr sides of the configuration space obstacles nearest the 4 axis and parallel to it. The relationship between a and the point on the tentacle furthest from the origin is non-linear but is bounded by a linear function which is asymptotically equal to the precise relation. Figure 1. Tentacle JnanipJJlator JJlod&d by a logarithmic spiral func- tion showrl in several configurations. These drawings were produced by a program which solves the irlversc kiiicrrlatics of this type of arm. Figure 2. A collisiorl free path for a tcntaclc robot manipulator. IkCXI obstacles in lhc tc:Jlt.ack!‘~ psiLioJt space arc hounded by tentacle o6stcafc.s that look like the 01~s Jrlarkcd OHSI arid OIH2 hero. Thcsc obstacle map into thr tcnteclc’s configuration space a8 shown. 111 the configuratiorl space the vertical boundary amaz rcprcscnts the maxi- JJIUTJI reach of the tcrltach! in ariy direction. 326 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. Figure 3 illustrates the area swept out by the tentacle as its (z value varies while its r$ value remains fixed. A polygon is shown which closely bounds this area. In the diagram shown call the boundary of the poly-’ gon above the x axis the top side and those below the bottom side. The right side of a tentacle obstacle is a rotated portion of the top side and the left side of a tentacle obstacle is a rotated portion of the right side. This can be seen by comparing the contours of the obstacles in figure 2 with the sides of the bounding polygon in figure 3. These contours cor- respond to the sides of the configuration space obstacles parallel to the u axis. In other words, they represent linlits on the value of (b outside of which the manipulator is guaranteed to not intersect the obstacle, for any value of a. Given the forcgoirlg understanding of the left, right, and near sides of a tentacle obstacle it, is relatively sirJJple to plan collisioJJ--free trajecte ries for the manipulator around these obstacles, because (ignoring for the monJeJJt the far sides) these obstacles map into the configuration space as rectangles (the obstacles in the configuratiorl space of figure 2 wilhout the V-shaped notches or1 top). 
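Before turning to the V-shaped notches, a small sketch may help fix the spiral arm model introduced above. It assumes the spiral takes the form r = exp(a*theta) (one concrete choice of logarithmic spiral) and uses the constant-total-length property to determine how far the arm extends for a given spiral parameter; the function name and sampling scheme are illustrative assumptions, not the C program mentioned in the text.

    import math

    def tentacle_points(phi, a, arm_length, n=50):
        """Sample world-frame points along a tentacle of fixed arc length,
        modelled by the logarithmic spiral r = exp(a*theta) rotated by the
        shoulder angle phi.  Arc length from 0 to T for this spiral is
        sqrt(1 + a*a)/a * (exp(a*T) - 1), so a fixed arm_length determines
        how far the arm has 'spiraled out' for a given a > 0."""
        t_max = math.log(1.0 + a * arm_length / math.sqrt(1.0 + a * a)) / a
        pts = []
        for i in range(n + 1):
            t = t_max * i / n
            r = math.exp(a * t)
            pts.append((r * math.cos(t + phi), r * math.sin(t + phi)))
        return pts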
There is, however, the addi- tional problem of planning paths so that the manipulator Jnay reach points on or beyond the top side of an obstacle. Exarnirling the ten- tacle obstacles in figure 2 it can be seen that for a given value of 4 there might be a range of values of (I for which the hand is beyond the top of the obstacle but for which the arm does riot irltcrscct the obstacle. As a becomes suffJciently small or sufflcicntly large the arm will intersect the obslacle, however. These considerations give rise to the V-shaped notches in Ihe configuration space obstacles. The exact relationship between 4 and a for which the hand reaches beyond the obstacle and the arm doesn’t intersect the obstacle is non-linear, but can be conservatively bounded by the linear function represented by the V-shape in figure 2. A program to map tentacle obstacles into configuration space ha9 been implcrncnted in C. Using this program the collision-free trajectory il- iust,r:rtcJI in Jigurc 2 wan found. 2. “Not Too Near, Not Too Far” Findpnth Problem ColJsidcr;lble work hits bccrl donc on the problcrn of IiJJtiiJlg a collision - free trajectory for a rigid two-dimensional robot nJoving arouJJt1 1&d obstacles in a plane. A new algorithm called the “Not too Near, Not too Far” algorithm has been developed and iJJJI)lemenlcd. Where this algorithm dilrers from others previously developed is in its quantitative assessment of the uncertainty in the position of the rover and how that inforJnation is used to plan a path for the robot that takes it, “not too Jlcar” obstacles but “not too far” front them tither. The robot rovers built in the Mobile Robotics Laboratory at, C-MU C~JJ be viewed as rigid two-dimensional robots’ moving around in a plane if the robot aJJd obstacles are projected onto the floor. Many algorithms have been developed t,o solve the findpath problem for this siJnp1c case. 1t can be observed, however, that they all share one or another basic fl;lw. A two-diJncJJsional findpath algorithm may find a collision-free pat,lJ for a robot, but the path may not be suitable for a real robot either because it brings the robot so close to obstacles that the robot might actually hit them if there is any uncertainty about its position or, coversely, the robot may be sent, far out of its way to avoid very small obstacles. The problem then becomes to write a findpath program which keeps the robot ‘<not too near” to obstacles but “not too far” from them. Figure 3. A polygon bounds the region W,X:~L out, by t,ho Icrltaclc m 4 k hC!ld COJd2iIlt (hro # = 0) ;rnd (L v;irics. ‘rhc “t,~~~ si&y or this I)O1ygoJl are USCC~ Lo construct, thr: “rigtIC” sides of hIltac]c obstacles and h “bottom” sides am used Lo corlstruct i,lJe “1efL” sides of tc!JJtacle obstacles. Figure 4. A PatAt four~d by a V-graph .scarctl algorithm. One of the earliest fiJJdpath algorithms is the visibility graph or V- graph algoritIim developed at SRI[5] for the robot Shakey in the late 1960’s. The V-graph algorithm works like this: Given the obstacles are a11 fixed convex polygons and the robot, is a moving point (if the robot is’. not a point, grow the obstacles by including within their walls all points” less thar~ or equal to thr radius of the robot and shrink the robot to a poiJJt), construct the set of line segrncnts linking all the vertices of the polygons with each other and with the start and goal positions. Delctc l’rorn I,his srt all scgrnenls which intersect polygons (but not, the srgn~cnts lying along the edges of polygons). 
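For comparison with the grown-obstacle method that follows, here is a minimal sketch of the V-graph construction just described. The geometric test for whether a segment cuts an obstacle is supplied by the caller as a blocked(p, q) predicate, and the resulting graph is searched with Dijkstra's algorithm; the names and structure are assumptions for illustration, not the SRI implementation.

    import heapq, math

    def vgraph_shortest_path(start, goal, vertices, blocked):
        """V-graph sketch: candidate edges join start, goal and all obstacle
        vertices; blocked(p, q) says whether the segment p-q passes through
        an obstacle (the geometric test itself is not shown).  Dijkstra's
        algorithm then returns the shortest unblocked path, or None."""
        nodes = [start, goal] + list(vertices)
        adj = {p: [] for p in nodes}
        for i, p in enumerate(nodes):
            for q in nodes[i + 1:]:
                if p != q and not blocked(p, q):
                    d = math.dist(p, q)
                    adj[p].append((q, d))
                    adj[q].append((p, d))
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, p = heapq.heappop(heap)
            if p == goal:
                break
            if d > dist.get(p, math.inf):
                continue
            for q, w in adj[p]:
                if d + w < dist.get(q, math.inf):
                    dist[q], prev[q] = d + w, p
                    heapq.heappush(heap, (d + w, q))
        if goal != start and goal not in prev:
            return None
        path, p = [], goal
        while p != start:
            path.append(p)
            p = prev[p]
        return [start] + path[::-1]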
The renlaining scgnicnts forJJJ a V-graph, which may be searched for a collision-free palh frorn start to goal. Figure 4 illustrates a path found by a V-graph algorithm. The V-graph algorithm clearly has the property that the robot comes too near the obslaclcs, in fact it often must, follow path right, along the walls of obstacles. (Imagine walking through the corridors of a building while staying as close s possihlc to the walls). The obvious solution to this “too near” problem is to grow the obstacles. But what is a reasonable amourit of “growth?” An answer to this question appears below. A more recrnt findpath algorithm is the “freeway” algorithJn devel- oped by Rod Brooks[6]. In tl Jis algorithm grnernlizcd ribbons (roughly syJnmctric polygoJJs with a well-defined axis) are fit to the free space betwcrtn obslaclt s. Path-planning for a simple pointlike robot consists or finding the nc:lrest gc:nrralisetl ribbon axis and following a conJJected set of the LYCS to a point on aJ1 axis nearest. the goal. Figure 5 illus- trates a collision free path found by this algorithm. The freeway algorithm has the unfortunate drawback that for cases where there arc few obstacles and they are sparsely distributed the c0liisltifl-l’rcc pallI;; pl:lIlJlc~d Illilj !,akc the rob,)! far out of it’s way. (IJnngine w;tlkirJg through .a gyJJJn;lsiuJJJ-sixcd ~-00111 from one corner to its di;\gon;rl opposite but walking along a long IAiaped path to avoid a sri~;~ll box: placed in the ccJitcr.) -1--------i-- Figure 5. A Path found by a Preewsy Algorithm. 327 ‘l’hcl “Not too Near, Not too Far” algorithm has two p.arts. First, obstacles are grown into uirtuul obataclev such that the obstacles nearer the robot arc grown less than those furthrr away, taking into account the fact that the certainty in the position of the rover dcgradcs with search distancr away applied Lo find from its start a path around posi Con. , bSecorid, a the virtual obstacles. program is I assume that the robot is a circle capable of omnidirectional motion in tht: plane and that the obstacles are convex polygons. This type of robot, was sclccted in part because the Pluto rover in the Mobile Robotics Laboratory is cylinder shaped and omnidirectional. When Pluto and obstacles are projected onto the floor, this algorithm models the situation exactly. IL is assumed also that the uncertainty in the position of thr robot inrrcxsrs according to a linear function of distance away from the robot’s start posilion. ’ Illquivalcntly, wc can say that the size of the robot grows as R = kd + T wbcrc R is the radius of the grown robot, d is distance from the start position and r is the radius of the actual robot and k is some empirically selected constant. This assumption is of course only an approximation of the uncertainty in the position of the robot, which actually yaries as a function of distance traveled by the robot rather than distance from the start. Ijut distance traveled cannot be known before a path is selected so we :~ssume that the uncertainty in the robot’s position can at least be bounded by the linear function above. Over the short distances traveled by the robots’ in the Mobile Robotics Laboratory this assumption is ccrt:Gnly valid. Given the linear-growth assumption we proceed to expand the obsta- cles and to shrink the robot. The robot transforms to a point. 
If an obsLacle has an edge e and p is a point along e then we look for a point p‘ d011g a perpendicular to e from p so that the distance from p to pw is R = kd -I r where k and r arc as above and d is the dist.ance from the robot’s start position to p*. It is cnsy to see t!laL u~ilcsu the wall is aligned with a ray from the robot start position the transformed virt#ual obstacle wall will have a complicated shape. Fortunately, the virtual obstacle walls can bc approxirnatcd in the following way: At each vertex of the original polygonal obstacle consLruct circles of ra- dius II’ = kd +- r where k and r are again as before and d is now the di&aricc from the start posilion Lo the vcrkx. The approxirnatcd walls of the virtual obstacle are the outside tangents bctwccn these circles. The Lransformation is illustrated in figure 6. The second part of the algorithm involves searching for a collision- free path around the transformed obst,acles. It is easi!y seen that if the shortest collision-fret path lies along segments linking the start and goal positions with the obstacles such that the segments arc tangent to 328 Figure 6. Transformation of an obstacle into a virLua1 obstacle by “Not too Near, Not too Far” algorithm. The original obstacle is the polygon whose vertices are the centers of the circles. The virtual ob- stacle is bounded by the circular arcs and Langcntial cdgcs shown in bold. The dashed line indicates the path selected by the program. the circles at the verticrs of the obstacles (this can be proved with a string-tighLening argument), or along a straight -line path from start to goal. The algorithm proceeds by constructing the tangential lint segments and eliminating those which intersect obstaclm. A search graph is construct,ed so that each node represents either a circle or lhe start or goal. Links in the graph rcprcsent tangential segments which may be foilowcd from one circle Lo another. Thcsc Langcntial scgrncnta may be cilher edges of virtual obstacles or tangents between virtual obstacles. WC assume that the robot won’t “back up” along a circle, that is, when iL reaches a circle by a Larigcnl it will continue around the circle unlil iL rcaclics a sccolid Larrgc‘riL along which it can srrioothly rxiL. Thus each circle is represented as two circlrs, a clockwise one and a counterclockwise one. From the clockwise circle Lhe counterclockwise one may not be rcachcd directly, but any other circle rnay be reached. The resultanL starch graph may bt> scarchrd using any of a variety of conventional search procedures. A version of the “not too near, not too far” algorithrn has been imple- mentcd in C. The diagram in figure 6 was produced by this program. 3. Two-Arm Findpath Problem ~yslca~s of multiple robot rnanipulators must be coordinated so that the arms are not always crashing into each other. The general findpath problem for multiple arms is computationally expensive, but in certain special cases very inexpensive solutions may be obtained. Developed here are some thcorelical approaches to the multiarm collision avoid- ance problem, based on some trajectory planning work done by Kamal Kant at McGill University[ll]. The ideas were used to implementing a simple two-arm path planner discussed below. Kamal Kant has done some interesting theoretical work on the find- path problem for a mobile robot in an environment with moving obsta- cles. 
He considers the special case of this problem in which the mobile robot’s path is fixed in advance, and the trajectories of moving obsta- clcs are known. The pararneter in this system is the speedof the robot. mo”emlfi&;k ---______Figure 7(a). The trajectory of two planar manipulators. The rn* nipulators start in configurations given by the solid lines and move to configurations given by the dashed lines. The dotted line follows the trajectory of the hand. Figure 7(b). Th e s x t space constraint corresponding Lo figure 7(a). It is easy Lo visualize the problem by considering a train moving nlone; a fixed track which is being crossed by, say, pcoplc and au toy. If the vcloritirs of the people and autos :Irc known then WC c;trl plim a velocity profile for Lhc train so that it avoids collisions with Lhcm. Applying this idea to the two-arm findpath prohlcm, we begin by fiiirlg t.hr Lrajcctorics of each arm. For simplicity, wr Lake thcsr trn- - jccLorics I,O he straight line scgrncrlL paths in joint space (for a 2 joint m;tnipulator such ;ts the orlc illustratrd in figure 7 there is a 2 dirnen- siorlai joiI!L space). Also, for one of the arms WC assume a constant velocity. Thus the joint ~pacc pnLh of one arm can be parametcrized with ampiL~aIILCtcI~ t -rcprescnting tirne, such Lhat 0 5 t < 1. Tile other m;tnipulator is paramct.crizcd with a pararnetcr a, 0 5 s < 1 so that we can cunLro1 the speed ds/dt of the sccoud manipulalor. If WC now construct the space which is s x t WC can plan the velocity profile for the secorld manipulator. In figure 8(a) we see two fixed trajectories for each of two manipulators, ml and mg. We fix the trajectory of ml and rn2 but allow ourselves to vary the speed of m2. The two paths cross each other at some initial to corresponding to some .90 and overlap until tl which has a corresponding .~l. This constraint is represented as a rectangular obstacle in a x t space. The obstacle conslrains path from (‘3, t) = (0,O) to (.9, t) = (I,]) so that m2 must move slowly so that it avoids ml, or, a3 the program suggests, that ml must cornpletc its motion first. and ~1 is vtbrp difficult. The problem can be approxim:~Lcd by consid- ering c;lnonical casts of obstacles in a X t space (i.e. box against 3 axis, Lox agairlsf, t :ixib, box ag;Grlsit no axis etc.), Ijy classifying a particular problrrrr iilto one of Lhcse canonical types WC car1 find not Lhe exact volociLy of the second arm, but at least get an inclicatioll of which arm to move first. A program in Lisp was implumentcd which plans collision-free paths for Lhc sirnplo two-arm system illustrated in figures 7 and 8. Figure 8 illustrates some of the results of this algorithm. Of course, fixing the trajectory and varying the velocity of robot arms will not result in collision-free paths in all cases (see figure 8). out it is interesting to consider using an approach such as this as a front--end to some more complete findpath solver. In the cases where the 3 x t space approach works, it will find solutions very quickly. 4. Conclusions and Future Work ‘l’hrce findpath problems were discussed here. Each problem suggested a hcurisiic for its solution. It may be possible to use these heuristics in ~11~ search for efficient solutions to other findpath problems. For a particular findpat. problem, we could ask: no cf path by this algorithm Figure 8. Some motion strategies for the two-arm path planning problem suggostcd by Lhc program. See the description of figure 7. 1. 
Does the configuration space naturally generate any LLinteresting” obstacles? If so, can these obstacles be used to bound obstacles that occur in position space? 2. ban information about the uncertainty in the position of the robot be used to constrain the space in which we search for a cf path? 3. Can the dimensionality of the findpa& problem be reduced by searching in spaces other than the actual configuration space of the- robot? Can parameters such as the velocity of the robot be ised to rcd1lc.c thr size of the search space? ln Lhis article we reported cases in which these heuristics lead to effi- cicnt solutions to particular findpath problems. Using the Grst heuristic it w;1s sho~vr~ that the LentncIe’s configuratjon space contained some ea+ ily construcLcd obstacles which could be used to bound real obstacles in position space. In the rover “Not too Near, Not too Far” program we used the second heuristic to find a manageable search spaccin which to search for real-world paths. The third heuristic suggested the solution to the constrainrd two-arm findpath problem, in which the trajectory paths of each robot is already selected and the speed of the arms is con- trolled to prcvcnt collisions. More work needs to be done, however, to see if Lhrse heuristics are helpful in Gnding solutions to other findpath problems. Future work on the particular findpath problems exarnined here in- cludes solving t,entacle findpath problems for more interesting tentacles, experimentally evaluating the performance of the “Not too Near, Not too Far” prograrn on a real robot, and extending the two-arm solution to work for manipulators with polyhedral links. Bibliography [I] I,OZaIlo Perez, Thorn;~s Autondic Planning of~lunipdator Trana- jer Movements MIT A1 Memo 606, December, 1980. [‘21 Schwart)z, JWCJL 'I' . ;lIItl MiClliJ Sllarir OII the ‘l’i(l:bJ Ai%,Uers’ Prob- km 1. The case of u l’wo dimensional Rigid t’olygvnal Ijody Moving Amidst 1’V[ygo?lnf llnrriers Corripulcr Science Dcpartnicnt, CouranL In- slitute of hlathcrn:~ticnl Scirrlccs, l<cbport No. 39. October, 1981. [:I] M oravc(‘, I!:IIIs I’. Obstuclr AwxXance rlntf Naoigation in ihc Ileal Wvrld by a seeing Ilobot Rover, S(,anford Artificial InLelIigence Labo- ratory Memo AIM 340, Septcrnber, 1980. [4] Rowat, Peter I;orbrs Representing Spatial Experience and Solving Spatial Planning I’rvblems in a Simuluted i{obot Environment, Ph. D. thesis, University of British Columbia, Department of Computer Sci- ence, October, 1979. [5] Nilsson, N. J. arid Raphael, B. “Preliminary Design of an Intelligent Robot”, Computer and Information Scicncea vol. 7 no. 13 pp. 235-- 259. 1967. [6] Brooks, R. A. “Solving the Findpath Problem by Good Representa- tion of Free Spare” in MAJ-82, Procrcdings of the National Conference on Artilical Intclligcnce, pp. 381-386. August, 1982. [7] Thornpson, A. M. “The Navigation System of the JPL Robot” in Proceedings of I.JCAl-5, August, 1977. [8] Udupa, Shriram M. “Collision Detection and Avoidance in Computer- Controlled Manipulators” in Proceedings of IJCAI-5, August, 1977. [9] Thorpe, Charles E. Path Relaxation: Path Planning for u Mobile Robot, Department of Computer Science, Carnegie-Mellon University, in preparation, 1984. [IO] Hopcroft, J., Joseph, D., and Whitcsides, S. “On the Movcmcnt of Robot Arms in 2-Dimensional Hounded Regions” in ~‘~3rd Annual Sym- posium on Foundations of Computer Science, IEEE Computer Society, pp. 281-289. November, 1982. 
(I I] Kant, Kamal “Trajectory Planning Problems, I: Determining Ve- locity along a Fixrd Path” iu CSGSI 84 (J’ roccrdings of the Fifth Na- tional Conference of the Canadiau Society for Cornputationxl Studies of Intelligence), May, 1984. [12] Wallace, Richard S. “Three l’indpath Problenls”, extended version of this paper. /orthcoming. 329
A MECHANICAL SOLUTION OF SCHUBERT'S STEAMROLLER BY MANY-SORTED RESOLUTION

Christoph Walther
Institut für Informatik I
UNIVERSITÄT KARLSRUHE
Postfach 6380
D-7500 Karlsruhe 1

Abstract

We demonstrate the advantage of using a many-sorted resolution calculus by a mechanical solution of a challenge problem. This problem, known as "Schubert's Steamroller", had been unsolved by automated theorem provers until now. Our solution clearly demonstrates the power of a many-sorted resolution calculus. The proposed method is applicable to all resolution-based inference systems.

1. SCHUBERT'S PROBLEM

In 1978, L. Schubert of the University of Alberta set up the following challenge problem:

Wolves, foxes, birds, caterpillars, and snails are animals, and there are some of each of them. Also there are some grains, and grains are plants. Every animal either likes to eat all plants or all animals much smaller than itself that like to eat some plants. Caterpillars and snails are much smaller than birds, which are much smaller than foxes, which in turn are much smaller than wolves. Wolves do not like to eat foxes or grains, while birds like to eat caterpillars but not snails. Caterpillars and snails like to eat some plants. Therefore there is an animal that likes to eat a grain-eating animal.

This problem became well known since in spite of its apparent simplicity it turned out to be too hard for existing theorem provers because the search space is just too big. Using the following predicates as an abbreviation:

A(x) - x is an animal
W(x) - x is a wolf
F(x) - x is a fox
B(x) - x is a bird
C(x) - x is a caterpillar
S(x) - x is a snail
G(x) - x is a grain
P(x) - x is a plant
M(xy) - x is much smaller than y
E(xy) - x likes to eat y

we obtain the following set of clauses as a predicate logic formulation of the problem (¬ denotes negation):

(1) {W(w)}   (2) {F(f)}   (3) {B(b)}   (4) {C(c)}   (5) {S(s)}   (6) {G(g)}
(7) {¬W(x1), A(x1)}   (8) {¬F(x1), A(x1)}   (9) {¬B(x1), A(x1)}
(10) {¬C(x1), A(x1)}  (11) {¬S(x1), A(x1)}  (12) {¬G(x1), P(x1)}
(13) {¬A(x1), ¬P(x2), ¬A(x3), ¬P(x4), E(x1x2), ¬M(x3x1), ¬E(x3x4), E(x1x3)}
(14) {¬C(x1), ¬B(x2), M(x1x2)}   (15) {¬S(x1), ¬B(x2), M(x1x2)}
(16) {¬B(x1), ¬F(x2), M(x1x2)}   (17) {¬F(x1), ¬W(x2), M(x1x2)}
(18) {¬F(x1), ¬W(x2), ¬E(x2x1)}  (19) {¬G(x1), ¬W(x2), ¬E(x2x1)}
(20) {¬B(x1), ¬C(x2), E(x1x2)}   (21) {¬B(x1), ¬S(x2), ¬E(x1x2)}
(22) {¬C(x1), P(h(x1))}          (23) {¬C(x1), E(x1h(x1))}
(24) {¬S(x1), P(i(x1))}          (25) {¬S(x1), E(x1i(x1))}
(26) {¬A(x1), ¬A(x2), G(j(x1x2))}
(27) {¬A(x1), ¬A(x2), ¬E(x1x2), ¬E(x2j(x1x2))}

where w, f, b, c, s and g are Skolem constants, x1, x2, x3 and x4 are universally quantified variables and h, i and j are Skolem functions.

Figure 1.1 Schubert's problem in clause notation

In the fall of 1978 L. Schubert spent his sabbatical at the University of Karlsruhe and a first-order axiomatization of his problem was given to the Markgraf Karl Refutation Procedure (MKRP) [BES81], a resolution-based automated theorem prover under development at the University of Karlsruhe. The system generated the clause set of figure 1.1, but failed to find a refutation. Though several significant
But there exists a refutation asit can be seen from Schubert's hand computed deduction of the empty clause CSch78,Wa184al Looking at the clause set of figure 1.1 and the handcomputed refutation of the problem, the reason for the difficulties of anauto- mated theorem prover in computing a solut- ion become apparent: *The size of the initial search space (we can compute 102 distinct clauses, 94 re- solvents and 8 factors already inthefirst generation) and *the search depth necessary to compute the empty clause (which is 20 in Schubert's handcomputed solution) leads to such a *rapidly growing search space that the time and/or space boundaries of an automated theorem prover are exceeded before the empty clause can be deduced. This holds true even if we use some refine- ments, like for instance set-of-support CWRC651, which reduces the initial search space to 28 potential resolvents and 2 potential factors. 2, A MANY-SORTED SOLUTION The first-order axiomatization in figure 1.1 reflects a specific view of the given prob- lem: We consider an unstructured universe, the objects of which are associated with properties (expressed by unary predicates) - for instance "is a wolf", "is an animal", "is a grain" etc. - and where relations between these properties are given by im- plications. But there is another, more natural way of looking at the given scenario, which, in- cidentally, enables a human to find a solution: Given a many-sorted universe, which consists of sorts of objects like wolves, animals, grains, plants etc. and certain relations between these objects, e.g. wolves are animals and grains are plants, everything which is true foranimals (or plants), automatically holds for wolves (or grains respectively). In this scenario we talk about the preferences of woZves of eating grains and not about these prefer- ences of a22 objects, which satisfy "is a wolf" and "is a grain". Hence a many-sorted first-order calculus is more suitable for a formalization of Schubert's problem. In such a calculus the domains and ranges of functions, predicates and variables are restricted to certain sub- sets of the universe (which are given as a hierarchy of sorts) where these restrictions are respected by the inference rules. In a many-sorted axiomatization the problem reads (in clause notation) as follows: (1) R;ype w:w (2) type f:F (3) Xgpe b:B (4) ;type c:c (5) -type s:s (6) fypc g:G (7) aoti W<A (8) au&X F<A (9) hoti B<A (10) auti C<A (11) hoti %A (12) noti G<P (13) {E(a,p,),fi(a2a,),E(a2p2),E(a,a2)~ (14) iM(c,b,)) (15) {M(s,b,)) (16) (M(blfl)) (17) INflW,)I (18) {E(w,f$) (19) mw,g, 11 (20) 03 b,c-, 1) (21) IE(b,s,)} (22) -type h(C):P (23) 02 (c,h k, ) ) 1 (24) type i(S):P (25) (E(s,i(s.,))) (26) Xype j(AA):G (27) IE(a,a2) ,E(a,j (a.,a2) 1 Figure 2.1 The many-sorted version of Schubert's problem in clause notation In this axiomatization the symbols W,F,B,C, S,A,G and P are used as sort symbols which are ordered by the subsort order according to the subsort declarations (7) - (12), i.e. 331 W,F,... ,S are subsorts of A and G is a sub- sort of P. The type declarations (1) - (6), e.g. Xyp& w:W, define a signature in which for instance w is a constant of sort W. The type declarations (22), (24) and (26) denote an extension of the given signature computed by the system for the skolem- functions h, i and j, e.g. h is an unary function of sort P with domainsort C. The subscripted lower case letters, e.g. alfa2, PI . . . . 
are universally quantified variables of the sort denoted by the corresponding upper case letter, e.g. A,P,... . The MKRP-system was extended to a many- sorted theorem prover on the basis of the many-sorted calculus as proposed in CWa1831, In this calculus, the subsortorder and the signature cause a restriction of the unifi- cation procedure CWa184bl: A variable xcan only be unified with a term t iff the sort of t (which is determined as the sort of the outermost symbol of t) is a subsort of or equals the sort of x. For instance we can resolve upon the literals 20(l) and 27(l) in figure 2.1 using the most general unifier {a,+b.,,a2+c11 (but not {b,+a,,c,+a21). However there is no such resolvent upon the literals 20 (1) and 21(l) in the many-sorted resolution calculus since there is no sub- sort relation between C and S. As a conse- quence the variables c1 and s1 are not unifiable. Using the clause set of figure 2.1 the MKRP- system computed the following refutation within 10 resolution steps: (28) rE(a,p,),i;i(a2a,),E(a2j(a,a2))} ;13(4) + 27(l) (29) {f(w,p,),E(f,j(w,f,))) (30) {E(f,j(w,f,))l (31) {~(flp,),E(b,j(f,h~))} (32) IE(b,j(f,~,))l _ (33) IE_(b2p,),;(s1b2),E(s,p2)1 (34) {M(s,b,),E(s,p,)1 (35) IE(s.,p, 11 (36) i 1 ;17(1) + 28(2) :19(l) + 29(l) ;16(1) + 28(2) ;30(1) + 31(l) ;13(4) + 21(l) :32(l) + 33(l) ;15(1) + 34(l) :25(l) + 35(l) continue next page Figure 2.2 The MKRP-solution of the many- version of Schubert's problem For this proof the system uses the replace- ment principle [Rob651 (cf. clause 28) and the set-of-support strategy CWRC651 with clause 27 as the set of support. Having computed the 5th resolvent, i.e. clause 32, the control of the search was taken over by the terminator module C~0831, which had found a unit-refutation for the remaining clause set. But why does the system find a solution for the many-sorted formulation, when it didnot find one for the unsorted type? The reason is the significantly reduced search space as cornparted to the clause set of figure 1.1: For the many-sorted case there are only 12 clauses with 16 literals instead of 27 clauses with 65 literals. The resulting search space is further reduced by the constraints imposed on the unification procedure: For instance we can compute the resolvent upon the literals 20(3) and 21(3) in figure 1.1 yielding I~(x,),c(x2),S(x2)1 from which we obtain {?(x2),s(x2)> by re- solution with clause 3. But c(x2)Cz(x2)1 can only be resolved upon 4(l) C5(1)1 yielding a pure clause z(c) C?(s)1 ineither step. In the many-sorted case these deadends are impossible: the correspondins resolution step upon the literals 20( 1) and 21(l) in figure 2.1 is blocked because the variables c1 and s 1 are not unifiable. As a result the size of the initial search space is totally reduced to 12 potential resolvents (compared to 94 potential resol- vents and 8 potential factors), which again can be reduced to 3 potential resolvents (compared to 28 plus 2 potential factors) if the set-of-support strateqy is used. The following diagram compares the statistical values of both solutions, where the values of the handcomputed solution are given in the black boxes. 
The relation between the size of the corresponding boxes is propor- 332 tional to the ratio of the values: initial search space search depth clauses generated literals generated deductions performed deduced clauses in proof length of refutation I 8 1 Figure 2.3 The statistical values of both solutions 3, TtIE EENERAL SOLUTION Having found a solution of a many-sorted version of Schubert's steamroller, we have to verify that this solution also solves the original problem. It is well known how to compare a many- sorted calculus with its unsorted counter- part by so-called sort axioms andrezativi- zations (cf. CObe62, Wa1831): The sort axioms serve to express the signature and the subsort order in terms of first-order formulas (viz. implications). The relativi- zation of a formula expresses the sort of each variable by atomic formulas using sort symbols as unary predicates. In clause notation we obtain for instance clause 1 of figure 1.1 as the sort axiom corresponding to the type declaration 1 of figure 2.1 and we obtain clause 7 of figure 1.1 as the sort axiom for the subsort decla- ration 7 of figure 2.1. The relativization of a clause is obtained by extending the clause with all literals of form Q(x), where x is a variable of sort Q in the given clause . For instance clause 13 of figure 1.1 is a relativization of clause 13 in figure 2.1. Defining S as the set of all clauses of fi- gure 2.1, !? as the set of all relativized clauses of S and AC as the set of all sort axioms for the signature and the subsort order defined in figure 2.1, it is easily verified that (t U A') is the set of all clauses of figure 1.1 (up to variable re- naminqs). From the Soundness-, the Complete- ness- and the Sort-Theorem for the many- sorted resolution calculus CWa1831 we obtain S 17" iff (^s U A') I- q (where 1, q denotes a refutation in themany- sorted calculus and I- q denotes a refutat- ion in the ordinary resolution calculus). Moreover one direction of this equivalence is constructive, i.e. there exists an algo- rithm which translates each refutation of S into a refutation of (6 U A'). Hence by solving the many-sorted version of Schubert's problem, a solution of the original problem is also obtained using the above transfor- mations. 4, CONCLUSION Most mathematical problems have a many-sor- ted structure and it is not a mere accident that almost all mathematical textbooks are written in a many-sorted language (albeit often very implicit). The advantage of many-sorted theorem pro- ving was also recoqnized by CHay71, Hen72, Wey77, Cha78, BM79, Coh831. Many-sorted first-order calculi were investigated by LHer30, Sch38, Sch51, Wan52, Hai57, Gi158, Obe62, Ide641 and CWa1831 extends the re- sults to the resolution calculus with para- modulation. The advantage of this calculus for automated theorem proving was demonstrated here using Schubert's steamroller. Of course, the real 333 power of a many-sorted theorem prover is only obtained, if the problem to be solved has a many-sorted structure: It turned out in several example runs (cf. CWa1831) that the performance of the system increases with an increasing cardinality of the sub- sort order relation. Often problems with a many-sorted struc- ture are presented in an unsorted axioma- tization. For such problems an algorithm has been developed which translates an un- sorted axiomatization into an equivalent many-sorted axiomatization CSch841. Acknowledgement I would like to thank J. 
Siekmann for his helpful criticism and support which greatly contributedtothepresentform ofthispaper. References C~0831 CBES811 CBM791 CCha781 [coh831 [Gil581 [Hai [Hay711 [Hen721 [Her301 Antoniou, G. and H.J. Ohlbach Terminator. Proc. of the 8th Intern. Joint Conference on Artificial Intelligence (IJCAI-83), Karlsruhe (1983) BZisius, K., Eisinger, N., Siekmann, J., SmoZka, G., HeroZd, A., and C. Walther The Markgraf Karl Refutation Procedure. Proc.of the 7th International Joint Conf. on Arti- ficial Intelligence (IJCAI-81), Vancouver (1981) Boyer, R.S. and J S. Moore A Computational Logic. Academic Press (1979) Champeaux, D. de A Theorem Prover Dating a Semantic Network. Proc. of AISB/GI Conf., Hamburg (1978) Cohn, A.G. Improving the Expressiveness of Many-Sorted Logic. Proc. of the 3rd Nat. Conf. on Artificial Intelligence (AAAI-83), Washington (1983) GiZmore, P.C. An Addition to "Logic of Many- Sorted Theories". Compositio Mathematics 13 (1958) Hailperin, T. A Theory of Restricted Quantification I. The Journal of Symbolic Logic 22 (1957) Hayes, P. A Logic of Actions. Machine In- telligence 6, Metamathematics Unit, Univ. of Edinburgh (1971) Henschen, L.J. N-Sorted Logic for Automatic Theorem Proving in Higher-Order Logic. Proc. ACM Conference, Boston (1972) Herbrand, J. Recherches sur la theorie dela demonstration (These Paris), Warsaw (1930) chapter 3. Also in "Logical Writings" (W.D. Goldfarb ed.), D.Reidel Publ.Co.(l971) CIde641 Ideison. A.V. Calculi of Constructive Logic with Subordinate Variables. American Mathe- matical Society Translations (2) 99 (1972) - translation of Trudy Mat. Inst. Steklov 72 (1964) CObe621 OberscheZp, A. Untersuchungen zur mehrsor- tigen Quantorenlogik. Mathematische Anna- len 145 (1962) [Rob651 Robinson, J.A. A Machine-Oriented Logic Based on the Resolution Principle. JACM 12 (1965) , also in Csw831 CSch381 Schmidt, A. uber deduktive Theorien mitmeh- reren Sorten von Grunddingen. Mathematische Annalen 115 (1938) [Sch511 Schmidt, A. Die Zulassigkeit der Behandlung mehrsortiger Theorien mittels der Dblichen einsortigen Pradikatenlogik. Mathematische Annalen 123 (1951) CSch781 Schubert, L. Private Communication Csch841 Schmidt-Schauss, M. Mechanical Generation of Sorts in Clause Sets. Interner Bericht, Fachber. Informatik, Universitat Kaisers- lautern (forthcoming 1984) [SW831 Siekmann, J. and G. Wrightson (eds.) Auto- mation of Reasoning - Classical Papers on Computational Logic, vol 1, Springer-Ver- lag (1983) Cwa1831 Walther, C. A Many-Sorted Calculus based on Resolution and Paramodulation. Proc. of the 8th International Joint Conference on Artificial Intelligence (IJCAI-83), Karls- ruhe (1983) CWa184al WaZther, C. Schubert's Steamroller - A Case CWal84bJ [Wan52 1 CWey771 CWRCSSI Study in Many-Sorted Resolution. Interner Bericht 5/84, Institut fur Informatik I, Universitat Karlsruhe (1984) Walther, C. Unification in Many-Sorted Theories. Proc. of the 6th European Conf. on Artificial Intelligence (ECAI-84), Pisa (forthcoming 1984) Wang, H. Logic of Many-Sorted Theories. The Journal of Symbolic Logic 17 (1952) Weyhrauch, R.W. FOL A Proof Checker for First-Order Logic. MEMO AIM-235.1, Stan- ford Artificial Intelligence Laboratory, Stanford University (1977) Wos, L.T., Robinson, G.A. and D.F. Carson Efficiency and Completeness of the Set of Support Strategy in Theorem Proving. JACM 12 (1965), also in [SW831 334
1984
3
314
RECONSTRUCTING A VISIBLE SURFACE A. Blake Computer Science Department, University of Edinburgh, King's Buildings, Mayfield Rd, Edinburgh, Scotland. ABSTRACT We address the problem of reconstructing the visible surface in stereoscopic vision. We point out the need for viewpoint invariance in the reconstruction scheme and demonstrate the undesirable "wobble" effect that can occur when such invariance is lacking. The design of an invariant scheme is discussed. J. INTRODUCTION In this paper we consider aspects of the task of generating geometrical information from stereo vision. The aim is to derive as rich a geometric description as possible of the visible surfaces of the scene - a "viewer-centred representation of the visible surfaces" (Marr, 1982). Principally this is to consist of information about surface discontinuities and surface orientation and curvature. Ideally it would be desirable to label discontinuities, and generate smooth surfaces between them, all in a single process. Some preliminary work has been done towards achieving this (Blake, 1983) but here we restrict discussion to reconstruction of smooth surfaces. Grimson (Grimson, 1982) discusses the task of interpolating smooth surfaces inside a known contour (obtained from stereo e.g. (Mayhew and Frisby, 19811, (Marr, 791, (Grimson, 19821, (Baker, 1981)). He shows how surface interpolation can be done by minimising a suitably defined surface energy, the "quadratic variation". The interpolating surface that results is biharmonic and under most conditions is defined uniquely. Terzopoulos (Terzopoulos, 1983) derives, via finite elements, a method of computing a discrete representation of the surface; the computation uses relaxation which is widely favoured for minimisation problems in computer vision (Ullman, 19791, largely because of its inherent parallelism. Both Grimson and Terzopoulos suggest that the surface computed represents the configuration of a thin plate under constraint or load. In this paper we first point out that the faithfulness of the computation to the physical thin plate holds only under stringent assumptions - assumptions that do not apply for the intended use in representing visible surfaces. It is argued that physical thin plates do not anyway have the right properties for surface interpolation - it is not desirable to try and model one. Secondly, the effect of biharmonic interpolation is investigated in its own right. We show that it lacks 3-D viewpoint invariance and demonstrate, with 2-D examples, that this results in an appreciable flwobblen of the reconstructed surface as the viewpoint is varied. An alternative method of surface reconstruction is proposed that does have the requisite viewpoint-invariance. n m THIN PLATE Accurate mathematical modelling of a thin plate is fraught with difficulties and, in general, generates a somewhat intractable, non-linear problem. Under certain assumptions however the energy density on the plate can be approximated by a quadratic expression; minimising the total energy in that case is equivalent to solving a linear partial differential equation with linear boundary conditions. The partial differential equation determines the displacement f(x,y) of the plate, in the z-direction (the viewer direction), that interpolates a set of matched points. These matched points are assumed to be available as the output of stereopsis. 
With an approximate representation of the plate in a discrete (sampled) space, using finite differences or finite elements, the linear differential equation becomes a set of simultaneous linear equations. These can be solved by relaxation. The assumptions necessary to approximate the surface energy by quadratic variation are analysed in (Landau and Lifshitz, 1959) and we enumerate them: 1. The plate is thin compared with its extent. 2. The displacements of the nlate from its equilibrium position z=o are substantially in the z-direction; transverse displacement is negligible. 3. The normal to the plate is everywhere approximately in the z-direction. 23 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. 4. The deflection of the plate is everywhere small compared with its extent. 5. The deflection of the plate is everywhere small compared with its thickness Assumption 1 is acceptable - indeed intuitively it is preferable to use a thin plate that yields willingly to the pull of the stereo-matched points. Assumption 2 may also be acceptable if the pull on the plate from each matched point is normal to the plate. The remaining assumptions 3-5 are the ones which prove to be stumbling blocks for reconstruction of visible surfaces. Assumption 3 is clearly unacceptable: any scene (for example, a roan with walls, floor, table-tops etc.) is liable to contain surfaces at many widely differing orientations. By no means will they all be in or near the frontal plane (i.e. normal to the z-direction), though it seems that human vision may have a certain preference for surfaces in the frontal plane (Marr, 1982). In particular, surfaces to which the z-axis is almost tangential are of considerable interest: it is important to be able to distinguish, in a region of large disparity gradient, between such a slanted surface and a discontinuity of range (caused by occlusion). Assumption 4 and the even stronger assumption 5 are again unacceptably restrictive. In fact assumption 5 can be removed at the cost of introducing non-linearity that makes the problem considerably harder; the non-linear formulation takes into account the stretching energy of the plate as well as it bending energy. It is this energy that represents the unwillingness of a flat plate to conform to the surface of a sphere rather than to, say, a cylindrical or other developable surface. Even without assumption 5, assumption 4 on its own is still too strong because it requires the scene to be relatively flat - to have an overall variation in depth that is small compared with its extent in the xy plane. This is clearly inapplicable in general. One conclusion from the foregoing review of assumptions is that that faithfulness of visible surface reconstruction to a physical thin plate model is undesirable. This is because of the stretching energy discriminating against spherical surfaces, which is not generally appropriate in surface reconstruction. In fact, happily enough, we saw that quadratic variation is not an accurate description of the surface energy of a thin plate precisely because it omits stretching energy, so biharmonic interpolation does not exhibit this discraination. Y An alternative formulation attaches the surface f(X,Y) to ma tched points by springs, allowing some deviation of the surface from the points. We now declare ourselves free from any obligation to adhere to a physical thin plate model and will explore the geometrical properties of biharmonic interpolation. 
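As a concrete, heavily simplified illustration of the "simultaneous linear equations" just mentioned, the sketch below works the 1-D analogue of minimising the quadratic variation: the free samples are chosen to minimise the sum of squared second differences while the constrained (stereo-matched) samples are held fixed. The grid size, the constraint values and the use of a direct least-squares solve instead of relaxation are choices made only to keep the example short; this is not Grimson's or Terzopoulos's implementation.

```python
import numpy as np

def quadratic_variation_interpolate(n, constraints):
    """1-D analogue of minimising the quadratic variation: find the n samples of f
    minimising sum_i (f[i-1] - 2 f[i] + f[i+1])^2 while passing through the
    constrained samples.  The small linear system is solved directly (lstsq)
    rather than by relaxation, purely for brevity."""
    D = np.zeros((n - 2, n))
    for j in range(n - 2):                    # discrete second-difference operator
        D[j, j], D[j, j + 1], D[j, j + 2] = 1.0, -2.0, 1.0
    fixed = sorted(constraints)
    free = [i for i in range(n) if i not in constraints]
    rhs = -D[:, fixed] @ np.array([constraints[i] for i in fixed])
    f_free, *_ = np.linalg.lstsq(D[:, free], rhs, rcond=None)
    f = np.zeros(n)
    f[fixed] = [constraints[i] for i in fixed]
    f[free] = f_free
    return f

# Three hypothetical "stereo-matched" depth samples; the profile between them is filled in.
print(np.round(quadratic_variation_interpolate(11, {0: 0.0, 5: 1.0, 10: 0.0}), 3))
```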
III Ui&RMONIC INTERPOJ,ATION We now examine biharmonic interpolation in its own right. A variety of forms of such interpolation are possible and the one preferred by Grimson (Grimson,l982) is to construct that surface z=f(x,y) that (uniquely) minimises the quadratic variation F = I fxx2 + fyy2 dx dy (1) subject to the constraints that l f(x,y) passes through the stereo-matched points. In (Landau and Lifshitz, 1959) , the solution to this minimisation is given by the biharmonic equation A' = 0, where (2) under certain boundary conditions. For instance when the edges of the surface are fixed (constrained, for example, by stereo-matched points) the condition is that f is fixed, and b2f/bn2 = 0 (3) <b/an denotes differentiation along the normal to the boundary). Consider the effect on a simple shape such as a piece of the curved wall of a cylinder, assuming that the surface is fixed on the piece's boundary. It is easy to show that a cylindrical surface defined by f(⌧,y) q ,/(a2 - x2> (4) does not satisfyh2f = 0, so we cannot expect the surface to be interpolated exactly. Grimson (Grimson, 1982) demonstrates this: his interpolation of such a boundary conforms to the cylindrical surface near the boundary ends but sags somewhat in the middle. To return to the definition in (l), a serious objection to using quadratic variation to define surface energy is that it is not invariant under change of 3D coordinate frame. As (Brady and Horn, 1983) point out, it is isotropic in 2D - invariant under rotation of axes in the x-y plane. However, under a change of coordinate frame in which the z- axis also moves, the quadratic variation proves not to be invariant. Is it altogether obvious that 3D invariance is required? Certainly the situation is not entirely isotropic in that the visible surface is single valued in z - any line perpendicular to the image plane intersects the visible surface only once 24 - the z-direction is special. On the other hand it is also desirable that the interpolated surface should be capable of remaining the same over a wide range of viewpoints. Specifically, given a scene and a set of viewpoints over which occlusion relationships in the scene do not alter, so that the points matched by stereo do not change, the reconstructed surface should remain the same over all those positions. Such a situation is by no means a special case and is easy to generate: imagine, for example, looking down the axis of a "beehive". There is no of occlusion over a range of viewer directions that lie a certain - cone. We want the reconstructed surfaces of both beehive and table to remain changes. viewpoints, static in 3D as viewing position The point is that, over such a set of the available information about the surface does not change; neither then should there be any change in the estimate of its shape. Without invariance, a moving viewer would perceive a wobbling surface. To demonstrate the wobble effect, surface interpolation using quadratic variation has been simulated in 2-D (fig 1) over a range of viewpoints. In the 2-D case, biharmonic viewer OZO --W-M Figure 1: Biharmonic interpolation scheme. Here is an example of the interpolation scheme reformulated as follows: first interpolation defined for an arbitrary 3D surface, defined by operating in 2-D rather 3-D. curve interpolates 3 points (marked by circles). As the viewer direction varies from 0 to 30 degrees there is marked movement of the interpolating curve. Clearly the scheme is far from invariant to change of viewpoint. 
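The wobble of Figure 1 can be reproduced, at least qualitatively, in a few lines: fit a natural cubic spline (the curve that minimises the integral of squared second derivative, i.e. the 2-D reduction of biharmonic interpolation) through three fixed scene points expressed in a rotated viewer frame, map the reconstruction back to scene coordinates, and measure how far it moves as the viewing direction changes. The three points, the 30-degree rotation and the crude displacement measure below are illustrative choices of ours, not the code used to produce Figure 1.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.distance import cdist

# Three matched scene points, chosen so the interpolant is strongly non-planar.
pts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])

def reconstruct(points, theta):
    """Fit z = f(x) in a viewer frame rotated by theta (natural cubic spline) and
    return the reconstructed curve expressed back in scene coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    view = points @ R.T                                  # matched points in the viewer frame
    order = np.argsort(view[:, 0])
    spline = CubicSpline(view[order, 0], view[order, 1], bc_type="natural")
    xs = np.linspace(view[:, 0].min(), view[:, 0].max(), 200)
    return np.column_stack([xs, spline(xs)]) @ R         # back to scene coordinates

curve_0 = reconstruct(pts, 0.0)
curve_30 = reconstruct(pts, np.radians(30.0))
wobble = cdist(curve_0, curve_30).min(axis=1).max()      # crude curve-to-curve distance
print(f"displacement between the 0- and 30-degree reconstructions: {wobble:.3f}")
```

A non-zero displacement here is exactly the lack of viewpoint invariance discussed in the text: the matched points have not changed, only the coordinate frame has.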
- EdS is invariant wi of coordinate frame interpolation simply fits a piecewise cubic polyncmial to set of points. There is continuity of second derivative at those points and the second derivative is zero at the end-points. In other words, interpolation in 2-D reduces simply to fitting cubic splines. As expected, the wobble effect is strong when boundary conditions are such that the reconstructed surface is forced to be far from planar. XYA VIEWPOINT-INVARIANT~ENERGY In order to obtain the desired invariance to viewpoint while still constraining the surface to be single valued along the direction of projection, the interpolation problem can be th respect to change - A dx dy = E dS in one coordinate frame but not in certain others therefore A dx dy cannot be invariant under change of coordinate frame. The original energy (1) has a unique minimum (Grimson, 1982) but with the new energy (7) the situation is more complicated. To understand this we will consider, for simplicity, a 2-D form of (7): rsx J a F= E(f,,f,,)ds x: at, (9) where E(t,u) = u2(1+t2)-3 and ds = w(f,)dx, where w(t>=(l+t2)1/2. 25 A standard result from the calculus of variations (Akhiezer, 1962) states certain sufficient conditions for a minimum of F to exist, one of which is that: there exist a>O, p>l, b s.t. for all t,u E(t,u)w(t) >= alulP + b. This condition is not satisfied by (9) because the term in t becomes arbitrarily small for large enough t. This problem can be circumvented by restricting f to a family of functions whose normal is nowhere perpendicular to the line of sight- say at most 85' away. Now the term in t is bounded below. There remains a uniqueness problem: w(t)E(t,u) fails to satisfy a certain sufficient condition for uniqueness (Troutman, 1983): it is not convex. This too can be remedied by replacing E in (9) by E+P, where P is a positive constant, representing the energy of a flexible rod under a stretching load. Now, for t-values in a certain range Itl<=T (T depends on P and may be made arbitrarily large by choosing a large enough P), w(t)(E(t,u)+P) becomes convex. Thus the energy functional (7) is convex in f,, f,, provided that, for all x in the appropriate interval, If xx :<=T. (10) The consequence is that any admissible function f for which the functional F (9) is stationary uniquely minimises F. This suggests that, in a discrete version of the problem suitable .for computation, optimisation by gradient descent (using relaxation) could succeed in finding the surface f(x) that has minimum energy. If, for this f(x), there is equality in condition (10) then viewpoint invariance is lost. But provided T is chosen sufficiently large this will occur only .for reconstructed curves of extreme slope and/or curvature. The case of extreme slope, for example, occurs at extremal boundaries - for which changing viewpoint affects occlusion - in which case reconstruction is not expected to be viewpoint invariant. 1. Biharmonic interpolation does not accurately model a thin plate and, in any case, a thin plate model would be inappropriate for use in surface interpolation. 2. Biharmonic interpolation of the visible surface is not viewpoint invariant and that, in specific 2-D cases, this lack of invariance certainly causes significant surface wobble. 3. A possible alternative reconstruction scheme uses an energy that is a function of surface curvature and area. This method is viewpoint invariant and certainly possesses the necessary existence and uniqueness properties, in the 2D case. 
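Conclusion 3 (and the 2-D functional of Section IV) admits a quick numerical check: the integrand E(fx, fxx) ds is the squared curvature of the graph of f times the element of arclength, so its discrete sum should agree, up to discretisation error, in any rotated frame in which the curve remains single valued. The test curve, sampling density and 20-degree rotation below are arbitrary illustrative choices and the sketch is 2-D only.

```python
import numpy as np

def invariant_energy(x, z):
    """Discrete form of the integral of E(f_x, f_xx) ds with E = f_xx^2 (1 + f_x^2)^-3,
    i.e. squared curvature integrated over arclength, for samples of z = f(x)."""
    fx = np.gradient(z, x)
    fxx = np.gradient(fx, x)
    integrand = (fxx ** 2) * (1.0 + fx ** 2) ** -3 * np.sqrt(1.0 + fx ** 2)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))  # trapezoid rule

x = np.linspace(-1.0, 1.0, 2001)
z = 0.5 * x ** 2 + 0.2 * np.sin(3.0 * x)           # an arbitrary smooth test curve
pts = np.column_stack([x, z])

theta = np.radians(20.0)                            # arbitrary change of viewing direction
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
xr, zr = (pts @ R.T).T                              # the same curve seen from the new frame

print(invariant_energy(x, z), invariant_energy(xr, zr))  # agree up to discretisation error
```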
It remains to extend these results to 3D and to develop and test a discrete computation to perform the reconstruction. ACKNOWLEDGEMENT The author is grateful to Professor J. Ball and to Dr J. Mayhew and Professor J. Frisby for valuable discussion. He is indebted to the University of Edinburgh for the provision of facilities. REFERENCES Akhiezer,N.I. (1962). m calculus a variations. Blaisdell, New York. Baker,H.H. (1981). Depth from edge and intensity based stereo. IJCAIconf. 1981, 583-588. Blake,A. (1983). Parallel Dnutation j.~ m-level vision. Ph.D. Thesis, University of Edinburgh, Scotland. Brady,M. and Horn,B.K.P. symmetric operators for suyfZ)* Rotational1y interpolation. ComD. vis. Granh. tiaae Proc., 22, 70-94. Grimson,W.E.L. (1982). From m ti surfaces. MIT Press, Cambridge, USA. Landau,L.D. and Lifschitz,E.M. (1959). The ti elastiu. Pergamon. Marr,D. and Poggio,T. (1979). A computational theory of human stereo vision. Pro% &. sot Land. a, 204, 301-328. Marr,D. (1982). Vision. Freeman, San Francisco. Mayhew,J.E.W and Frisby,J.P. (1981). Towards a computational and psychophysical theory of stereopsis. AI JouTnal, 17, 349-385. Terzopoulos,D. (1983). The role of constraints and discontinuities in visible-surface reconstruction. IJCAIB, 1073-1077. Thorpe,J.A. (1979). wntarp tonics ti differeu seanet=. Springer-Verlag, New York. Troutman,J.L. (1983). Variational Calculus with elementarv convexity. Springer-Verlag, New York. Ullman,S. (1979). Relaxed and constrained optimisation by local processes. Computer . leg aimane Drocessinq, 10, 115-125.
It remains to extend these results to 3D and to develop and test a discrete computation to perform the reconstruction.

ACKNOWLEDGEMENT

The author is grateful to Professor J. Ball and to Dr J. Mayhew and Professor J. Frisby for valuable discussion. He is indebted to the University of Edinburgh for the provision of facilities.

REFERENCES

Akhiezer, N.I. (1962). The Calculus of Variations. Blaisdell, New York.
Baker, H.H. (1981). Depth from edge and intensity based stereo. Proc. IJCAI 1981, 583-588.
Blake, A. (1983). Parallel computation in low-level vision. Ph.D. Thesis, University of Edinburgh, Scotland.
Brady, M. and Horn, B.K.P. (1983). Rotationally symmetric operators for surface interpolation. Comp. Vis. Graph. Image Proc., 22, 70-94.
Grimson, W.E.L. (1982). From Images to Surfaces. MIT Press, Cambridge, USA.
Landau, L.D. and Lifshitz, E.M. (1959). Theory of Elasticity. Pergamon.
Marr, D. and Poggio, T. (1979). A computational theory of human stereo vision. Proc. R. Soc. Lond. B, 204, 301-328.
Marr, D. (1982). Vision. Freeman, San Francisco.
Mayhew, J.E.W. and Frisby, J.P. (1981). Towards a computational and psychophysical theory of stereopsis. AI Journal, 17, 349-385.
Terzopoulos, D. (1983). The role of constraints and discontinuities in visible-surface reconstruction. IJCAI-83, 1073-1077.
Thorpe, J.A. (1979). Elementary Topics in Differential Geometry. Springer-Verlag, New York.
Troutman, J.L. (1983). Variational Calculus with Elementary Convexity. Springer-Verlag, New York.
Ullman, S. (1979). Relaxed and constrained optimisation by local processes. Computer Graphics and Image Processing, 10, 115-125.
1984
30
315
A SYSTEM OF PLANS FOR CONNECTED SPEECH RECOGNITION Renato DE MO& Yu F. MONG Ooncordia Univeruity, Department of Computer Science, 1466, de MAsonneuve Blvd. Montreal, Quebec. H3G lM8, Canada Abrt?tLCt A planning system for recognising connected letters is described and some preliminary experimental results are reported. 1. Motlv~tla~r and RelatIona with Pllerlour Work8 A number of researches on Automatic Speech Recognition (ASR) have been carried out using a recognition model based on feature extraction and classification. With such an approach, the same set of features are extracted at fixed time intervals (typically every 10 msecs.) and classification is based on distances between feature patterns and prototypes &EVINSON 81) or likelihoods computed from a Markov model of a source of symbols generated by matching centisecond speech patterns and prototypes [BAHL 831. These methods are usualIy speaker-dependent and are made speaker independent by clustering prototypes among many speakers. The classifier is not capable of making reliable decisions on phonemes or phonetic features, rather it may generate scored competing hypotheses that are combined together to form scored word and sentence candidates. If the protocol exhibits enough redundancy it is likely that the cumulative score of the tight candidate is remarkably higher than the scores of competing candidates. If there is little redundancy in the protocols, like in the case of connected letters or digits or in the case of a large lexicon, then it is important that ambiguities at the phonetic level are solved before hypotheaes are generated. Evidences of these difficulties are reported in recent literature fBAHL 84, RABINER 841. For example, in the case of connected letters, in order to distinguish between /p/ and /tf the place of articulation is the only distiuctive feature and its detection may require the execution of special sensory procedures on a limited portion of the signal with a time resolution finer than 10 msec. The need for a hierarchical application of recognition algorithms for plosive consonant recognition has been recently pointed out by many authors ~EMICHELIS 83,KOPEC 841. This suggested that computer perception of speech can be modelied with a collection of operators for extracting and describing acoustic properties.Operator application is conditioned by the verification of some preconditions in the database that contains already generated descriptions of the signal under analysis. Sequences of operators belong to a system of plans where goals are the extraction and interpretation of various aspects of speech patterns. The input to the system is made of descriptions of acoustic properties obtained by hybrid (parametric and syntactic) pattern recognition algorithms and the outputs are hypotheses about phonetic features. The recognition of unconstrained sequences of connected letters is a problem unsolved so far. Using a redundant 8s of. acoustic properties for characterizing place and manner of articulation of some sounds makes it possible to have an accurate phoneme ‘lypothesination even in difficult protocols. Nevertheless, ,&he extraction of such properties requires the application of sequences of operators, some of them can be applied only if some preconditions are met. Useful sequences of operator applications is decided based on specific knowledge which establishes precedence relations depending on contextual constraints. 
For example, burst properties are useful for hypothesising the place of articulation of plosive sounds [DEMICBELIS 831, but the operator that extracts them can be applied onIy after the successful application of another operator that detects and locates a plosive butst. Following a slightly different approach, Kopec VOPEC 841 has shown that the point of consonant release has to be detected before applying effkient plosive recognition algorithms. Preconditions for operator application are logical expressions of predicates. Predicates are defined over relations between acoustic properties. Both precondition expressions and predicate definitiona are not known a priori and have to be learned. Learning is based on examples and is a search for plausible general descriptions of situations in which it is worth to apply an operator. This paper describes the application of the planning concept and of AI learning methodologies to the conception of a system for the recognition of spoken connected letters belonging to the following set : El :- ( P,TJGUW,GV&3 }. 8. Overview of the Sydiem of Plana The speech signal is fiat analyned on the basis of loudness, rero-crossing rates and broad-band energies using an expert system described in IDE MORI 821. The result of this analysis is a string of symbols and attributes. 92 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. Symbols belong to an alphabet of Primary Acoustic Cues (PAC) whose definition is recalled in Table I. A Semantic-Syntax Directed Translation scheme operates on PAC descriptions and through the use of sensory procedures identifies the vocalic and the consonantal segments of syllable nuclei and for the vocalic segments hypothesises the place of articulation of the vowel Plans rue then applied for interpreting the consonantal segment of every syllable. An overview of the plan for the recognition of the El-set is shown in Fig.1. p (dh / phoneme) . I , Hypdd~ Fmmtion I Deecriptioa DatcBmo PAC deecAption 1 I ,I 1 - Plan PI311 PIsn PI312 Plan PEIS Plnn PEl4 FIN PE16 1 Detection and Dotecth BiiJ Beer-brr deurlptioa Bunt and snrslope detection ol temporal bmlyti dercription detection and creak at the of rpectral dewziption vowel oaeet traneientr t L Fig. 1 The plan s subdivided (PEll,PEl2,PE13,P;l4,PE15). into sub-plans PEll produce8 an envelope description by analyzing the signal amplitude before and after preemphasis. Envelope samples are obtained every msec by taking the absolute value of the difference between the absolute maximum and the absolute minimum of the signal in a 3 m8ec interval. The envelope description is made by the following alphabet (- represents negation) : EDA = {SHORT-STEP(ST), LONG-STEP(LST), NO-STEP(NST), STEP WITH HIGH LOW FREQUENCY ENERGY(BZ), BURST-PEAK(BUR), POSSIBLE-BURST(PBU), NBZ=-BZ, NBU =‘BUR, NPB=-PBU.} PEl2 detect8 a buss-bar by analyring the shape of FFT spectra before the voice onset. The alphabet of the descriptions it produces is : BZA = {NOB,BUl,BUQ,BU} NOB mean8 no buss and the other three symbol8 describe degree8 of buss-bar evidence (BUl : little evidence, BU : strong evidence) PE13 analyses temporal events at the voice onset. These events are related to voice onset time. They are : D : the delay between the onset of low and high frequency energies, ZQ : the duration of the largest zero-crossing interval of the signal at the onset, ZR : the number of zero-crossing counts in the largest sequence of sucessive aero-crossing intervals with duration less than 0.5 msecs. 
PE14 and PE15 perform respectively burst and formanf transition analysis as described in [DEMICHELIS 831. Precondition8 for plan execution are learned with a general-purpose algorithm whose details are given in [DEMORI 841. The highlights of this algorithm are summariaed in the next section. 8. Learning Methodology Learning rule8 from example8 can be 8een a8 the process of generaliaing description8 of positive and negative example8 and previously learned rules to form new candidate rules. When applied incrementally this methodology can produce results which depend on the order in which example8 are supplied and on the occurence of examples which are exceptions to the relevant rules. Incremental learning of rules ha8 to come out with a set of rule8 that is the most consistent with the example8 encountered so far. In order to allow dynamic preservation of consistency among the set of rules, an algorithm ha8 been conceived which Use8 the Truth Maintenance System formalism [DOYLE 791 and which is reminiscent of previous work by Whitehill [WHITEHILL 801. The choice of a description language for examples and rules along with that of the generalizing algorithm8 is critical in a learning system in the sense that it may or may not allow the learning of relevant rules. A description language and rule generalization heuristic8 have been defined based on knowledge about rule-based Automatic Speech Recognition (ASR). A relevant aspect of the learning system developped for ASR is that generalisation rule8 are not constrained by the Maximally Common Generaliaation property introduced in ~HITEHILL 80). Positive and negative facts used for learning operator8 precondition8 are described by their relevant concept and a conjunction of predicate expressions. Each predicate expression or selector [MICHALSKI 831 assert8 that an acoustic property ha8 been detected or that an acoustic parameter ha8 been extracted with some specified value. A generalization rule derive8 from two conjunction8 Cl and C2 a conjunction C3 that is more general than both Cl and C2 i.e. Cl +C3 and C2 =+C3. The generaliaed rules themselves are the node8 of a TMS [DOYLE 791. Each node represents a rule of left-hand-side (LHS) CONJ and right-hand-side (RHS) CONC, having a support list SL whose IN and OUT parts are respectively the list of node8 with RHS CONC and LHS less general than CONJ and the list of node8 with RHS different of CONC and LHS lese general than CONC. With each node are kept the list8 of consistent example8 (PE for positive evidence) and unconsistent example8 (NE for negative evidence). Lastly each node ha8 a STATUS property which is IN when the corresponding rule is believed to be true and OUT 93 otherwise. A node is IN i.e. its STATUS is IN if and only if all the nodes in the IN part and all the nodes in the OUT part of its SL are respectively IN and OUT and the numbers of examples in PE and NE satisfy a given predicate P (for example NE 2 2.PE). As the numbers PE and NE keep changing during learning, a generalization can be true at a certain moment and it can become false later. This justifies the use of TMS. When a new example is learned a new node is created if necessary and this node is generalized with the existing ones to generate new nodes that are themselves generalized with other ones. Then the PE and NE of concerned nodes are updated and STATUS properties are modified when necessary and propagated through the network in order to maintain consistency. 
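A minimal data-structure sketch of such a rule network is given below. The class, the belief predicate and the propagation loop are illustrative assumptions that follow the description above (a node is IN when every node in the IN part of its support list is IN, every node in the OUT part is OUT, and its PE/NE counts satisfy the predicate P); the particular inequality used for P and the two example rules are assumptions of ours, not the actual implementation.

```python
class RuleNode:
    """One generalised rule (LHS -> RHS) with a TMS-style support list and the
    counts of consistent (PE) and inconsistent (NE) examples seen so far."""

    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
        self.in_part = []        # nodes with the same RHS and a less general LHS
        self.out_part = []       # nodes with a different RHS and a less general LHS
        self.pe, self.ne = 0, 0
        self.status = "OUT"

    def evidence_ok(self):
        # Belief predicate P; this particular inequality is only an assumed example.
        return self.pe > 0 and self.ne <= 2 * self.pe

def propagate(nodes):
    """Recompute statuses until the (assumed acyclic) network is consistent again;
    called after evidence counts or support lists have changed."""
    changed = True
    while changed:
        changed = False
        for n in nodes:
            new = ("IN"
                   if n.evidence_ok()
                   and all(m.status == "IN" for m in n.in_part)
                   and all(m.status == "OUT" for m in n.out_part)
                   else "OUT")
            if new != n.status:
                n.status, changed = new, True

# Tiny usage example with two hypothetical rules for the same concept.
r1 = RuleNode(lhs="BZ * BUR", rhs="B")
r2 = RuleNode(lhs="BZ * (BUR + PBU)", rhs="B")     # a more general left-hand side
r2.in_part.append(r1)
r1.pe, r2.pe, r2.ne = 3, 4, 1
propagate([r1, r2])
print(r1.status, r2.status)                        # IN IN under the assumed predicate
```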
The stability of this process is guaranted together by the definitions of SL and the predicate P. For each concept encountered so far a characteristic rule is derived from the network of nodes whose LHS is the disjunction of LHSs of all IN nodes with corresponding RHS. Using the just outlined learning algorithm, the following precondition expressions PRj (1 ( j 5 5) have been inferred. PRl = (LDD + SDD) * (LPK + SPK + MNS LPK) PR2 = LDD * (LPK + SPK) PR3 = (LDD + SDD) * (LPK + SPK) PR4 = SPK + BUR + PBU PR5 = LDD + LPK Notice that ‘+’ represents logical disjunction and A*B means that A preceeds B in time. The two curves in Fig. 2a represent the time evolution of the signal energy (--) and the zero-crossings counts (--) in successive intervals of 10 msec of the first derivative of the signal. The phrase is the sequence of letters and digit E3G PCB. Fig. 2b shows the corresponding PAC description. Time unit is 0.01 sec. Total enwgy - Zero crorrlng 4 :: No. of Cram-r Fig. 2a PAC tb LDD LPK LNS SPK LPK SDD baas LPK LDD LPK SMD LNS LVI SDD LPK 8 19 w Iz Cl1 w 75 90 ml 112 114 I27 142 1% ta r aa a8 41 00 es 11 w w 111 11s II 141 146 188 Fig. 2b 4. Hypothesis Generation Rules Expressions made of symbols extracted by snbplans PEll and PE12 and representing positive and negative information have been inferred for each PAC description and for each phoneme using the outlined learning algorithm. An example of such rules is given in the following : E := NOB NBZ NST NBU NPB B := BU BZ NST NBU PBU There are 96 of such rules in the system. A PAC description is used for indexing a set of rules that is matched against the input description produced by the plan. As rules and descriptions contain the same number of symbols, a similarity index between a rule and a description is computed in closed form. The parameters extracted by PE13, PE14 and PE15 are used in fuzzy relations. There is a fuzzy relation for each phoneme and the invocation of a fuzzy relation is conditioned by PAC, PEll and PE12 descriptions. Fuzzy relations are conjunctions of disjunctions of fuzay sets. A fuzzy relation computes another similarity index between phonemes and data. A fuzzy relation can be seen as a conjunction of clauses. Each clause contains a disjunction of fuzzy sets defined over a parameter extracted by the planning system. Fuaay sets have been derived from a-priori probability distributions of parameters. A similarity index is computed by using the max operator for disjunctions and by summing the contributions of each clause and then dividing the sum by the number of clauses. An example of fuzzy relations is the following : E := short D short ZQ low ZR K := long D short ZQ high ZR where ‘%hort, long, high, 1ow”are fuzzy sets. There are 43 of such relations. A-priori probabilities of the two similarity indices are inferred from experiments fur every phoneme. These probabilities can be supplied to the language model for further preprocessing. 94 A simple recognition strategy based on similarity indices haa been used for the experiment described in the next section. Its details are omitted for the sake of brevity. 6. Results and Conclus¶o~~ Experiments on 500 samples of sequences of -bob in the El set pronounced by two male and one female speaker8 have given an error rate of 0.5% in segmentation without requiring any speaker adaptation. The proposed approach has been tested on a protocol of 400 connected pronounciations of symbols of the El set in strings of five symbols each. 
The strings were pronounced by one male speaker, the voice of which WAS used for deriving the rules. As the recognition algorithm ia syllable based, it is not constrained by the number of syllables. Error Analysis in the Recognition of the El Set Contribution to the Type of Error Overall Error (%) Confusion among p, t, k 13 Confusion among b, d 6 Confusion between cognate consonants 4 Confusion between b, v 13 Confusion between e and p or t and vice-versa 41 Confusion among g, c, 3 10 Other 3 Fig. 4 Nevertheless, the idea of using a number of phonetically significant Properties in a recognition system based on the planning paradigm appears very promising. The analysis of the behavior of each plan and of the errors generated by their application suggests the actions that have to be taken in order to improve recognition accuracy. performances are shown in Fig. 3 (speaker #I). The curve shows the error rates obtained in the following cases : Acknowledgements This research was supported by the Natural Sciences and Engineering Research Council of Canada With grant no. A243g. top 1 : there is an error when the right candidate B. Delgutte, (CNET, Lannlon A) suggested the introduction of temporal is not ranked in the first position; cues for characterleing plosive phonemes against vowels. A. Fran (ENST, Paris) wrote the program for extracting top 2 : there is an error when the right candidate these cues. M.Gilloux (CNET Lrnnion A) wrote the learning is not ranked in the first two positiona; program. The Authors wish to thank all of them. top 3 : l rrc., 1 x 20 there is an error when the right candidate is not ranked in the first three positions. 15.. \ I lOPl top2 IOP’ caw Fig. 3 Performances on the voice of a new male speaker are also shown by the curve labelled speaker #2. An analysis of the most frequent errors is summariaed in Fii. 4. The results seem interesting even if a large population of speakers has to be analyzed before deriving robust furry sets capable of giving the same performances on diiferent speakers. References [BAHL 831 Bahl L. R., Jellnek F., Mercer R. L. : A Maximum Likelihood Approach to Continuous Speech Recognition; IEEE Trans. on Pattern Analysis and Mae hi ne Intelligence, Vol. PAMI-6, Num. 2, March 1983. [BAHL 841 Bahl L. R., Das S. K., De Sousa P. V., Jelinek F., Kate S., Mercer R. L., Pichenx M. A. : Some Experiments with Large-Vocabulary Isolated Word Sentence Recognition; Proc. IEEE Conference on Acoustic Speech and Signal Processl.ng, San Diego, Col. 2661 - 2663. [DEMICHELIS 831 Demlchelis P., De Mori R., Laface P. and O’Kane M. : Computer Recognition of Plosive Sounds Using Contextual Information; IEEE Transactlons on Acoustic Speech and Slgnal Processing, Vol. ASSP-31, Num. 2, p 369-377, April 1983. [DE MORI 821 De Mori R., Glordana A., Laface P., Saitta L. : An Expert System for Interpreting Speech Patterns; Proc. of AAAI-82, p. 187-110, 1982. [DE MORI 831 De Mori R. : Computer Models of Speech Using Fuzzy Algorithms; Plenum Press N.Y. 1983. [DE MORI 841 De Mori R. and Gllloux M. : Inductive Learning of Phonetic Rules for Automatic Speech Recognition; Proc. CSCSI-84, London, Ontario, p. 103-106, 1984. [DOYLE 791 Doyle J. : A Truth Maintenance System; Artificial Intelligence, Vol. 12, Num. 3, p 231-272, 1979. [KOPEC 841 Kopec G. E. : Voiceless Stop Consonant Identification Using LPC Spectra; Proc. IEEE Conference on Acoustic Speech and Signal Processing, San Diego, Cal. 4211 - 4214. [LEVINSON 811 Levlnson S., Rablner L. R. 
: Isolated and Connected Word Recognition Theory and Selected Applications; IEEE Trans. on Communications, Vol. COM-29, Num. 6, p. 621-669, May 1981. [MICHALSKI 831 Michalski R. S. : A Theory and Methodology of Inductive Learning; in Mac hi ne Learning : an Artlflclal Intelligence Approach, Tloga, p. 83-134, 1983. [RABINER 841 Rabiner L. R., Wllpon J. G., Terrace S. G. : A Directory Listing Retrieval System Based on Connected Letter Recognition; Proc. IEEE Conference on Acoustic Speech and Signal Processing, San Diego, Cal. 3641 - 3644. [WHITEHILL SO] Whltehlll 5. B. : Self Correcting Generalleatlon; Proc. of AAAI-80, p. 240-242, 1980. 95
1984
31
316
A significant problem in image understanding (IU) is to represent objects as models stored in a DOHAIN INDEPENDENT OBJECT DESCRIPTION AND DECOKPOSITION Tad S. Levitt Advanced Information L Decision Systems 201 San Antonio Circle, Suite 286 Mountain View, C4 94040 ABSTRACT machine environment for-IU systems to use in model driven pattern matching for object recognition. This paper presents a technique for autonomous machine description of objects presented as spatial data, i.e., data presented as point sets in Euclidean n-space. This general definition of objects as spatial data encompasses the cases of explicit listings of points, lines or other spatial features, objects defined by light pen in a CAD system, generalized cone representations, polygonal boundary representations, quad-trees, etc. The description technique decomposes an object into component sub-parts which are meaningful to a human being. It is based upon a measure of symmetry of point sets. Most spatial data has no global sym- metry. In order to arrive at a reasonable descrip- tion of a point set, we attempt to decompose the data into the fewest subsets each of which is as symmetric as possible. The technique is based upon statistics which capture the opposing goals of fewest pieces and most symmetry. An algorithm is proposed which operates sequentially in polynomial time to reach an optimal (but not necessarily unique) decomposition. The semantic content of the descriptions which the technique produces agrees with results of experiments on qualitative human perception of spatial data. In particular, the technique provides a step toward a quantitative measure of the old perceptual Gestalt school of psychology's concept of "goodness of figure". 1. I.HTRODlJCTION A significant problem in image understanding (IU) is to represent objects as models stored in a machine environment for IU systems to use in model driven pattern matching for object recognition. This paper presents a technique for autonomous machine description of objects presented as spatial data, i.e., data presented as point sets in Euclidean n-space, En. This general definition of objects as spatial data encompasses the cases of explicit listings of points, lines or other spatial features, objects defined by light pen in a CAD system, generalized cone representations (Brooks, 19811, polygonal boundary representations, quad- trees (Samet and Webber, 19831, etc. The description technique decomposes an object into component sub-parts which are meaningful to a human being. It is based upon a measure of sym- metry of point sets in En. Symmetry is quantified as the reciprocal of the number of reflections and rotations under which the point set is invariant relative to some fixed point (the center of reflec- tion or rotation). 2 For instance, the regular k- sided polygon in E remains invariant under k rota- tions and k reflections relative to its center for a total of 2k invariant transformations. Thus2 the s mmetry measure of the regular k polygon in E is 9 -. Note that the identity transformation (the %ll rotation) leaves any point set fixed so that the symmetry value of any point set is always defined and bounded above by one. A circle, rela- tive to its center, has infinitely many invariant transformations, resulting in a minimum possible symmetry measure of zero. Most spatial data has no global symmetry. In order to arrive at a reasonable description of a point set, we attempt to decompose the data into the fewest subsets each of which is as symmetric as possible. 
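The symmetry measure sketched above can be computed directly for small point sets. The brute-force check below is an illustrative sketch only, written for the 2-D case and assuming the points are distinct and not all coincident with the chosen centre; it generates candidate rotations and reflections from the point set itself, counts those that leave the set invariant, and confirms the value 1/(2k) quoted for the regular k-sided polygon.

```python
import numpy as np

def symmetry_order(points, c, tol=1e-6):
    """o(sym(S, c)) for a finite 2-D point set S: the number of rotations and
    reflections about c that map S onto itself.  Candidate angles are generated
    from the points themselves, so only finitely many need to be tested."""
    P = np.asarray(points, float) - np.asarray(c, float)
    r = np.hypot(P[:, 0], P[:, 1])
    ang = np.arctan2(P[:, 1], P[:, 0])

    def invariant(Q):
        # Every transformed point must land (within tol) on some original point.
        d = np.linalg.norm(Q[:, None, :] - P[None, :, :], axis=2)
        return bool((d.min(axis=1) < tol).all())

    i = int(np.argmax(r > tol))                        # a reference point off the centre
    mates = np.nonzero(np.abs(r - r[i]) < tol)[0]      # points at the same radius
    count = 0
    for j in mates:
        t = ang[j] - ang[i]                            # candidate rotation angle
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        count += invariant(P @ R.T)
        p = (ang[j] + ang[i]) / 2.0                    # candidate reflection-axis angle
        M = np.array([[np.cos(2*p), np.sin(2*p)], [np.sin(2*p), -np.cos(2*p)]])
        count += invariant(P @ M.T)
    return count

k = 5
k_gon = [(np.cos(2*np.pi*t/k), np.sin(2*np.pi*t/k)) for t in range(k)]
order = symmetry_order(k_gon, c=(0.0, 0.0))
print(order, 1.0 / order)    # 2k = 10 invariant transformations, so the measure is 1/(2k)
```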
The technique is based upon statistics which capture the opposing goals of fewest pieces and most symmetry. An algorithm is proposed which operates sequentially in polynomial time to reach an optimal (but not necessarily unique) decomposi- tion. Standard applications of the technique are in E2 for 2D images and E for 3D object modeling. The technique allows a machine to provide a more meaningful description of spatial data than a sim- ple list of points. This description can be represented as a model and may subsequently be used to match the model to observed instances of the object in imagery. The technique could potentially relieve humans of the need to manually indicate sub-parts of objects which we want the machine to model. Its effectiveness depends upon the semantic reality of the decomposition, and the relationship of that decomposition to autonomous image segmenta- tion techniques. That is, the machine should pro- duce descriptions of objects which indicate much the same "parts" decomposition which a human would provide, and, if matching objects in imagery is desired, the spatial data output from segmentation must lead to a decomposition which is capable of being matched to the stored model. 207 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. The semantic content of the descriptions which the technique produces agrees with the results of experiments on qualitative human perception of spa- tial data. In particular, the technique provides a step toward a quantitative measure of the old per- ceptual Gestalt school of psychology’s concept of “goodness of figure” (Allport, 1955). In a machine environment , we can combine this technique for quantifying the qualitative aspects of perception with the physiologically based structuralist approach to image segmentation by utilizing points of greatest change (i.e. most information) extracted during segmentation as the input to the description and matching process. This differs from the wholly structuralist approach to IU, as typified by Mar-r and his associates (Marr, 19791, which uses the principle of minimum energy to obtain object descriptions from high information point sets. The (mathematical) relationship between the symmetry-based and information-based approaches is an open question, although Hoffman’s work (Hoffman, 19661, provides an excellent first step. The second section of this paper presents the definition of quantization of symmetry of a single point set along with some mathematical prelim- inaries. In Section 3, the notion of symmetry is extended to multiple point sets, and2we show condi- tions for spatial data imbedded in E under which decompositions into regular-polygonal subsets are optimal . We use the optimality criterion of regular-polygonal decompositions to derive an algo- rithm in Section 4 for con tructing decompositions in finite points sets in E 2 . 2. SYMMETRY OF SINGLE POINT SETS Many techniques exist to quantify various aspects of regularity in spatial data including texture measures (Wechsler, 19801, analysis in the frequency domain (Crowley , 1984) and information theoretic approaches (Green and Courtis, 1966). Symmetry is one of the most striking forms of regu- larity in human perception, yet it is not well quantified by any of the methods mentioned above. Most previous attempts (Birkoff, 1932, Eysenck, 19681, to quantify symmetry focused on brute-force delineation of point, side and angle relationships in polygons. 
Weyl (Weyl, 1952) provided a mathematical description of symmetry of a point set, but even this was not a general enough frame- work for machine description (nor was that the intention of the work), and his research did not attempt to define symmetry in multiple point sets. Symmetry may be defined in terms of three types of linear transformations in En: ref lec- tions, rotations and translations. A point set in En is symmetric with respect to a linear transfor- mation if it remains invariantn unde; that transfor- mation. That is, if T maps E to E , and S is a subset of En, S C En, then we say S is symmetric with respect to T if T(S) = S. For brevity we will use “wrt” to mean “with respect to”. Rotations leave only the point in En which is the center of rotation fixed, and reflections have only the line or (hyper) plane of reflection fixed. Note that the line or plane of reflection uniquely determines the corresponding transformation. En, Non-null translations have no fixed points in and it can be shown that no finite point set in En is symmetric wrt a translation. other than En, (Some spaces such as a torus obtained by identi- fying opposite sides of a rectangle as in CRT screen wrap-around, admit finite symmetric subsets wrt translation, but we do not address those issues here. > Finally, by allowing centers of rotation to be translated about, we may also subsume any need for separate translation transformation in defining symmetry. For these reasons we do not explicitly consider translations further in the quantification of symmetry. We denote the set of reflections and rotations of En relative to some fixed point, c, by O(n, c>. When c is the origin this is more commonly denoted by O(n) for the orthogonal group of (rigid) linear transformations in En. Stretching and contract ion are not permitted in this subset of the larger set of all linear transformations which map En onto En. Note that O(n,c> is just another copy of O(n) with the origin translated to c, so arguments regarding O(n) apply to O(n,c>. Possession of the properties of closure, iden- tity , inverse, and associativity defines O(n) to be a group in the mathematical sense. The order of a group is the number of distinct elements it con- tains. If G is a group, then we write o(G) for the order of G. (See (Weyl, 1939)) or other standard texts for detail on O(n) and groups.) The struc- ture of the group O(n) and its subgroups is well- understood. If a connection between the descrip- tion of spatial data and O(n) is made, then the structure of O(n) may provide additional insight into the description. This is precisely our pro- gram in the following. Let S be a point set in En, and c E En. Let sym(S,c) = {T c O(n,c>lT(S> = S), then sym(S,c) is the set of orthogonal transformations under which S is invariant relative to the point c. It is not hard to show that sym(S,c) is a subgroup of O(n,c>. .(See (Weyl, 1939, 19521.) We define the symmetry measure of S wrt a point c c En, m(S,c> , to be the reciprocal of the order of the group of invariant orthogonal transformations of S. -1 hat is m(S,c) = [o(sym(S,c>>l . The purpose of the reciprocal is to obtain a bounded measure. Since we always have I c sym(S,c > , we know o(sym(S,c>> L 1. If o(sym(S,c)) is not finite then we define m(S,c> = 0. It follows that for any S and c, 0 5 m(S,c> ( 1. To see the usefulness of quantifying symmetry within a well-known mathematical qbject, consider the case where S is a square in E and c is its center. 
There are four reflections which leave S invariant, one each across the vertical and hor- izontal bisectors of the sides, and one across each diagonal. There are also four 90’ rotations (including the identity) which leave 7 invariant. Thus, o(sym(S,c>> = 8, and m(S,c> = s. 208 For the case of a square, sym(S) is a (Other criterion such as relative size and cluster- mathematical object well studied in the nineteenth ing of subsets could also be considered, but are century, called the dihedral group and denoted beyond the scope of this paper.) D(8). As in all groups, the subgroups of D(8) form a partially ordered hierarchy. For D(8) this This approach suggests that we search for a hierarchy is pictured in Figure 2-1. The point is decomposition of S into k subsets each with associ- that subgroups correspond to different ways to ated point c.: {(S., c.) 1 i=l to k} which minimizes decompose S into parts. The vertical and horizon- the evaluatihn funktioh: tal reflections together imply quartering the square, while each alone implies halving it into E({(Si,ci)l i=l to k)) = rectangles. The diagonal reflections similarly give rise to spatial partitions of the square into k k -= two or four triangles. Thus, we can map the struc- ture of D(8) back to S to extract the structure k [m(Si,ci)l-l k C o(sym(Si,ci)) inherent in the square. i=l i=l . Note that since [m(Si,ci)]-' = o(sym(S., 1 'i)) 1. ', I / \ we have g [m(Si,ci)]-1 1. k so that, - I * i=l Figure 2-l: Hierarchical Decomposition Example In fact, if we consider, rather than the whole square, just the corner points (which have high information content) and then include all midpoints between pairs of corner points in order to encode reflectively information, we obtain the dot diagram in Figure 2-l. The decompositions which humans tend to make of this diagram (Zusne, 1970) correspond nicely to those predicted by the subgroup structure of D(8). However, regardless of the nature of human perception, this technique pro- vides an approach to more semantically based machine perception of objects. 3. DECOHPOSITION OF SPATIAL DATA Most point sets in En have no global symmetry. The technique outlined in Section 2 is therefore not sufficient by itself to allow a machine to derive a meaningful description of complex spatial data. Our objective is to decompose the spatial data into subsets which are inherently more iden- tifiable than the object represented by the total data. For instance, the side view of a car might be roughly described as a smaller rectangle, (the roof and windows), over a larger rectangle (the car body), over two circles (the wheels). The tech- nique presented here allows a machine to generate this description for itself. Following guidelines suggested by research in qualitative visual perception, we seek a decomposi- tion of a set S C E into as few subsets as possi- ble, each of which is as symmetric as possible. E({(Si,ci)l i=l to k)) = k k _ L " C m(Si,ci) -1 i=l Since k and m(S.,c.) are always positive, we have O< E({(S.,c.)l i=lito k))Ll. Thus, E is a bounded evaluati& t unction. E is not the only function we might choose. E is in fact a mean statistic satis- fying the properties of means as defined, for instance, in (Mays, 1983). Any mean of the set {m(Si,ci)l i=l to k), such as the standard arithmetic mean, would serve as well, although results will differ. The mean E, as defined above, however, is particularly tractable because of its linearity in the o(sym(Si,ci)). We take advantage of this in the following p5oposition. 
Recall that the regular n-polygon in E^2 is the polygon which has n equal sides.

Proposition: Let S ⊆ E^2 be a finite set and let D = {(S_i, c_i) | i = 1 to k} be a decomposition of S with none of the S_i being a regular polygon. If P = {(P_i, d_i) | i = 1 to m} is another decomposition of S where all the P_i are regular polygons, then E(P) < E(D). Furthermore, if P = {(P_i, d_i) | i = 1 to m} and Q = {(Q_i, e_i) | i = 1 to n} are two different regular polygonal decompositions, then the one with fewer subsets will have the smaller evaluation function.

The proof of this proposition is omitted due to lack of space.

This proposition shows that any regular polygonal decomposition of a set is better than any decomposition which has no regular polygons, and that the regular polygonal decomposition with fewest polygons is superior to other regular polygonal decompositions. Figure 3-1 shows the value of E for several possible decompositions of the point set S pictured in Figure 3-1a. Notice that E is minimized by the regular polygonal decomposition into two squares. However, the case of decompositions with some, but not all, regular polygons is subtler. The measure captures a trade-off between the number of objects and the relative symmetry of the objects. It favors more objects with higher individual symmetry, if their symmetries are close to the same value, but favors fewer objects if there must be a great disparity in their relative symmetry.

Figure 3-1: Evaluation of Point Set Decompositions. (Panel a shows the original point set; panels b through e show the value of E for four candidate decompositions.)

These statements can be explicitly quantified by observing the value of E on decompositions of a set S of N points. Suppose S has a regular polygonal decomposition into m polygons. Then this decomposition has E equal to m/(2N), since each regular polygon of p points contributes o(sym) = 2p and the point counts sum to N. If we lump m−k of the polygons into a single set (which is not a regular polygon), then the new decomposition has E equal to (k+1)/(2n+x), where n is the number of points left in regular polygons and x, the order of the symmetry group of the lumped set, is between 1 and 4. (This fact is a by-product of the omitted proofs.) We see that (k+1)/(2n+x) < m/(2N) exactly when 2N(k+1) < m(2n+x). Notice that m−k measures how many more objects there are in the regular polygonal decomposition, while n (n < N) is the number of points left in regular polygons. So, for instance, if it is possible to lump together at least half the regular polygons using no more than half the total points, it always pays to do so, according to the measure E.

4. ALGORITHM FOR DECOMPOSITIONS OF 2D FINITE POINT SETS

In this section, the evaluation function, E, is as defined in Section 3. We also assume familiarity with the Hough transform method of line finding in spatial data (Duda and Hart, 1972).

The decomposition algorithm presented here depends upon the proposition in Section 3, and also on the following observation. If S is symmetric with respect to a reflection, then the midpoints of the pairs of points in S which are reflected onto each other will lie on the line of reflection. Furthermore, the line joining any two reflection-related points will be perpendicular to the line of reflection.

These observations motivate the following outline of the algorithm. We take the set of midpoints of all N(N−1)/2 pairs of points in S, and associate with each midpoint the orientation of the line of reflection it would lie on if the pair of points associated with the midpoint were in a subset of a decomposition induced by a reflection.
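The following is a minimal sketch of this midpoint-and-orientation construction, i.e., of the features that the Hough line finder in the next step would accumulate. The quantization of orientations, the tagging of the original points with all orientations, and the treatment of coincidental midpoints (discussed next) are omitted, and the names are our own.

```python
import itertools
import numpy as np

def midpoint_features(S):
    """For every pair of points return (midpoint, axis_orientation): the midpoint
    lies on the candidate reflection axis, whose orientation is perpendicular to
    the segment joining the pair.  Orientations are reduced modulo pi."""
    feats = []
    for p, q in itertools.combinations(np.asarray(S, dtype=float), 2):
        mid = (p + q) / 2.0
        seg = np.arctan2(q[1] - p[1], q[0] - p[0])   # direction of the pair
        axis = (seg + np.pi / 2) % np.pi             # candidate reflection-axis direction
        feats.append((mid, axis))
    return feats

# Example: the four corners of a square yield midpoints lying on its two
# bisectors and two diagonals, each tagged with that axis's orientation.
for mid, axis in midpoint_features([(1, 1), (-1, 1), (-1, -1), (1, -1)]):
    print(mid, round(np.degrees(axis), 1))
```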
Two coincidental midpoints, as illustrated in Figure 4-1, where points A and B have the same midpoint and orientation as points C and D, must also be distinguished by an appropriate data-handling mechanism. To points already in S we associate all possible (quantized, and therefore finitely many) orientations. We then apply the Hough transform line-finding algorithm to this set of N + N(N−1)/2 points, formed from the original N points and their midpoints with their associated (possibly multiple) orientations. Lines found by the Hough algorithm are candidates for axes of reflection. Error can be adjusted here, in the quantization cells of the Hough transform, to allow detection of reflectional symmetry to be as loose or as tight as desired. Lines with high Hough transform values are good candidates for axes of reflectional symmetry because many midpoints lie on them.

Figure 4-1: Coincidental Midpoints Must Be Distinguished. (a: coincidental midpoints of a single orientation; b: coincidental midpoints of multiple orientations.)

To each line found by the Hough technique, a subset of the points of S can be uniquely associated which have reflectional symmetry with respect to that line, namely the points which lie on the line together with those pairs of points whose associated midpoint(s) lie on the axis of reflection. We now cluster lines of reflection by grouping lines associated with midpoints which appear with multiple orientations.

In each line group we compute the pairwise composition of the reflections across each pair of lines. Since the composition of two reflections is a rotation, this gives a set of angles of rotation associated with each line group. These angles are searched to determine sequences of multiples, i.e., sequences of angles (θ, 2θ, 3θ, ..., kθ = π). The intersections of the subsets of points associated with the lines associated with these angles yield either regular k-polygons or, if k = 1, lines of points in the original set S.

Sort the subsets of S by their symmetry measures. We now choose a decomposition of S by beginning with the subset of smallest measure, and adding subsets in order of increasing measure as long as at least one point not already included is added. When all points are in at least one chosen subset, we have, say, m subsets which together contain N points. We now sequentially remove the subsets of largest measure until the condition relating N, the number k of subsets removed, and the number n of points left in the remaining subsets is met. (This is an application of the "lumping together" criterion from Section 3.) An optimal decomposition is given by the remaining regular polygons and lines, together with the single subset obtained by lumping together all the removed subsets. The worst-case step in this procedure is of order (N(N−1)/2)^2, so the algorithm can be completed in polynomial (O(N^4)) time.

5. SUMMARY

We have developed a mathematical model of the concept of the symmetry of objects presented as spatial data. This model provides a domain-independent approach to autonomous machine decomposition of objects into component parts. A polynomial-time algorithm for performing such decompositions on finite point sets in E^2 was presented. Furthermore, there is reason to believe, based on simple examples and on studies in qualitative perception by experimental psychologists, that these decompositions are similar to those which humans would choose.

REFERENCES

[1] Allport, F.H., Theories of Perception and the Concept of Structure, Wiley & Sons, Inc., New York, 1955.
[2] Birkhoff, G.D., Aesthetic Measure, Harvard University Press, Cambridge, MA, 1932.

[3] Brooks, R., "Symbolic Reasoning Among 3-Dimensional Models and 2-Dimensional Images," Artificial Intelligence, 1981.

[4] Crowley, J.L., "A Representation of Shape Based on Peaks and Ridges in the Difference of Low-Pass Transform," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6, No. 2, March 1984.

[5] Duda, R.O. and Hart, P.E., "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, Vol. 15, No. 1, January 1972.

[6] Eysenck, H.J., "An Experimental Study of Aesthetic Preference for Polygonal Figures," The Journal of General Psychology 79, 1968.

[7] Green, R.I. and Courtis, M.C., "Information Theory and Figure Perception: The Metaphor that Failed," Acta Psychologica 25, North-Holland Publishing Co., 1966.

[8] Hoffman, W.C., "The Lie Algebra of Visual Perception," Journal of Mathematical Psychology, Vol. 3, 1966.

[9] Marr, D., "Representing and Computing Visual Information," in Artificial Intelligence: An MIT Perspective, Vol. 2, P. Winston and R.H. Brown [Eds.], MIT Press, 1979.

[10] Mays, M.E., "Functions Which Parametrize Means," American Mathematical Monthly, Vol. 90, No. 10, December 1983.

[11] Samet, H. and Webber, R.E., "Using Quadtrees to Represent Polygonal Maps," Proceedings of Computer Vision and Pattern Recognition 83, Washington, D.C., June 1983.

[12] Wechsler, H., "Texture Analysis - A Survey," Signal Processing, Vol. 2, North-Holland Publishing Co., 1980.

[13] Weyl, H., The Classical Groups, Princeton University Press, 1939.

[14] Weyl, H., Symmetry, Princeton University Press, 1952.

[15] Zusne, L., Visual Perception of Form, Academic Press, New York, 1970.
A REPRESENTATION FOR IMAGE CURVES Pnvid H. Ivl,larirnont Artificial Intc~lligcuc*c L&oratory Stanford TJnivcrsit,y Stanford, California 04305 ABSTRACT A rcpresmltation for image curves and an algorithm for its complltntion arc introducrd. The representation is designed to facilitate matching of image curves to completely specified motlcl plane curves and estimation of t,hcir oricnt,ation in space, despite the presence of noise. variable resolution, or partial oc- clusion. This is an important subproblem of model-based vision. A curve may bc represented at a variety of scales, and a strat- egy for s&ctiiig natural scales is proposed. At each scale, the rcprrscntntion is simply a list of positions in the plane, with tan- gent directions and curvatures specified at each position; each ctlrvature is cithcr a zero or an extremllm (hereafter critical points). The algorithm for computing the representation in- volvcs smoothing with gaussians at different scales: extracting tile critical points from the smoothed curves. and using dynamic programming to construct a list of critical points which best ap- proximate the curve for each length of list possible. We propose to examine the tradeoff between the error of the approximation and length of the lists to find natural scales. I. INTRODUCTION In this paper we describe a rc>prcscntntion for image curves designed to serve as input to the following complltation: given a database of model plane curves. and an image containing the projection of one or more of them. decide which model curves it contains and cstirnate their positions and orientations in space. This is model-l)nscd vision applic>tl to plane curves rather than to arbitrary tllrclc-cliillcnsionnl objects, as in [Brooks 10811 or [Goad 19831: cvcn this drastic rcstric’tion is still an important problem, sincr the edges of tllrcc~-dinlcnsiorl;Il models and their bounding contours arc oftcln plane cllrves. At an abstract level, ollr design mrthodology has two phases. The first is to identify those characteristics of image cluves which (~rlable computing a desired lcvc~l of reliability in 1nodc1 matches and viewpoint estimates at minimum cost. Next, a representation for those characteristics is selected to serve as input to a program which matches models and cstirnates view- points. Representations are judged by the rxtcnt to which they irialrc it possible for ;I progratn, at least in tlic~ory, to achieve ally specified reliability at minimum rest. Thcso considerations lcad to the following design criteria: 1. The representation must exhibit partial invariance with re- spect to viewpoint. so that matching can take place by comparing models to representations, rather than compar- ing models projcxctcd at all possible viewpoints to repre- * This report describes work done at the Stanford Artificial Intelligence Laboratory. It was supported by the Advanced Rc- search Projects Agency of the Department of Defense under contract N00030-80-C-0250. sent,ations. The space of possible viewpoints is simply too large for the latter approach to be feasible computationally. The representat,ion must deal c~ff(~ctivcly with chnngcs in the resolution of the image curve: since the curve can appear at my distance from the camera. the resolution at which it is imaged can vary widely. so that details that are clear in the model may be unavailable in the image. The representation nnlst be insensitive to noise introduced by imaging. which botll obscures fine details and introduces spurious ones. 
The representation must be robust with respect to partial occlusion of the model cllrve to be useful in any real nppli- cation. The representation must provide a range of scales of de- scription for image clirvcs for reasons of computational economy. Coarse descriptions can be used when error toler- ances arc high rnollgli to jiistify C~lilllini~ting irrelwant de- tail which netdlt&y overh~udens tllc complltation, while fine descriptions are ~availablc when the dcmnnd for the higher qlmlity results they produce justifies the added com- putational cost. II. OVERVIEW OF THE REPRESENTATION The rcprcscntation described here is designed with these requirements in mind. The reprcscntntion has multiple scales. At each scale. it consists of a list of points in the plane, with tangent direction and signet1 curvature specified at each point; each curvature is eith(>r a zero or an cxtrrmum. (We refer to such points as critical points, and following spline terminology, we call each clcmrnt of these lists a knot.) The automatic se- lection of “natural” scales is being explored. A curvatllre-based rcprc~entation has attributes which help make it insc>nsitivt to c.hangcs in viewpoint. In the plane, cur- vature is invariant with respect to rotation and translation, and curvature ratios arc’ invariant with rcppect to scale. The use of cxtrema and zeros of nlrvaturc provides insensitivity with respect to thr projection of a plane curve orientcsd arbitrarily in space. A rc>presc>rlt ation of an image curve bas;ckd OII these features will I)(> iIlVill+k~lt in sonic respects as a function of vicw- point of tlic niodcl cilrve md deform slowly or predictably in others, thus facilitating mntchiug of image curves to models and estimation of viewpoint. The availability of multiple. llatllral scales of reprcsenta- tion serves several purposes. It, hchlps provide insensitivity to changes in the resolution of image curves. It provides flexibil- ity in meeting the quality-cost tradeoff demands of a particular task. Finally, it helps discount the effect of noise, which may influence the representation at a very fine scale, but usually not 237 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. at coarser ones. Irisensitivity to partial occlusion is provided by the fart that each knot in the lists of knots comprisin g the representation has local support, so that it contains information only about that portion of the curve between the knots adjacent to it. Thus if part of the image curve is absent, the rc>prcsentation of the parts which remain is not necessarily affected. The sensitivity of matching and location estimation to partial occlusion of the model curve then depends on how effectively these operations can proceed based only on a subset of the information available from an unoccluded curve. The algorithm for computing the representation begins with a list of points in the plane, perhaps the output of an edge detector. We will refer to this list as the original sam- pled curve. The sampled curve is smoothed with gaussinns at scvernl different resolutions. Critical points on these smooth curves are folmd, and position. tangent dircc*tion, and curva- ture are estimated at each. These knots from different scales of smoothing are rnntZir1nte.s for inchlsion in the lists of knots that will ultimately represent the curve. 
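A simplified sketch of the smoothing-and-critical-point step just described is given below. It assumes roughly uniform spacing between samples and estimates curvature by finite differences, whereas the paper normalizes for varying inter-sample distance, oversamples, and interpolates critical-point locations; the function names and the ellipse example are our own.

```python
import numpy as np

def gaussian_smooth(values, sigma):
    """Smooth a 1-D coordinate function with a sampled gaussian kernel
    (uniform sample spacing assumed; ends handled by reflection)."""
    radius = int(3 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.concatenate([values[radius:0:-1], values, values[-2:-radius - 2:-1]])
    return np.convolve(padded, kernel, mode="valid")

def curvature_knots(points, sigma):
    """Smooth each coordinate independently, then return the signed curvature
    and the indices of curvature sign changes (zeros) and local maxima of
    |curvature| (extrema) -- the candidate knots at this scale of smoothing."""
    pts = np.asarray(points, dtype=float)
    x = gaussian_smooth(pts[:, 0], sigma)
    y = gaussian_smooth(pts[:, 1], sigma)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    zeros = np.where(np.sign(kappa[:-1]) != np.sign(kappa[1:]))[0]
    extrema = np.where((np.abs(kappa[1:-1]) >= np.abs(kappa[:-2])) &
                       (np.abs(kappa[1:-1]) >= np.abs(kappa[2:])))[0] + 1
    return kappa, sorted(set(zeros) | set(extrema))

# Example: a noisy ellipse; a larger sigma yields a smoother curve and fewer knots.
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
noisy = np.c_[2 * np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
kappa, knots = curvature_knots(noisy, sigma=4.0)
print(len(knots))
```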
All the knots are considered together without regard to scale of smoothing in a graph structure which represents all possible lists of knots covering the entire sarnplcd curve, One pass of dynamic programming is used to find each possible fixed- length list of knots which whm considered as knots in a splint best approximates the origin,11 curve. That is, for each possible number of knots, that set of knots which minimizes the npprox- imntion error is chosen from the candidnt,cs. Thus smoothing at different scales produces candidate knots, while an approxi- mation error criterion selects from them and combines them in the final rcpresc>ntntion. Those lists of knots which correspond to natllral scales of representation will ultimately be selected by examining the tradeoff between the length of the lists and their approximation error. III. EXTREMA AND ZEROS OF CURVATURE Claims for the relevance of extremn and zeros of curvature to the perception of curves have come from both psychology and computer vision. [Attnenvc 1054] tlt~monstratctl experimentally the importance of curvature maxima in recognizing known ob- jects. !Hoffblar~ 1982] suggested segmeuting curves at (signed) curvature mininla, provided experimental evidence that humans did so, and implemented a program to segment curves on this basis. Others who have sllggcstctl the USC of critical points include [Duda and Hart 19731, [&ady 19821, and [Hollerbach 1975). Our claim for the relevance of critical points follows from t,he mathematics of the specific computational task for which the representation is to serve as input. In this section we present a few results which demonstrate why z&os and extrema of curva- ture provide information useful for recognizing and estimating the orientation of known plane curves in space. Even thollgh the image curves to be rcprcscntcd are per- spective projections of plane curves in space, oilr analysis is based on orthoyraphic projection, which for our purposes is a suitable approximation for analyzing the behavior of of curva- ture extrcma and zeros. The basic imaging sitllation consists of the image plane containin g the image curve and the object plane cant ainin g the object curve (a curve from the database of rl~otld plane curves). The object curve is projected onto the image plane by dropping the normal from the object point to the image plane. The relationship between curvature in the object and cur- vature in the image is the heart of the analysis. While its dcriva- tion is beyond the scope of this paper, the most important con- scqucnc*ts can be stated quite simply. First, zeros of curvature in the object curve always project to zeros of curvature in the image. (This is the difFerentia1 form of the well-known fact that straight lines in space always project to straight lines in the image.) I I Critical Points I Figure 1: The stability of critical points under orthographic projection. Left, the critical points of a plane curve. On the right, the curve is projected orthographically at various orientations and the critical points of the resulting curves are marked. The stability of their critical points aids in matching the curves to models and estimating their orientation. 238 Second, as long the object plane is viewed “from above,” that is, if the angle between the normals to the image and the object planes is less than K, the sign of curvature of an object point does not change under projection. If the curve is viewed “from below ,” the sign of curvature always reverses. 
(In the degenerate case, when the object curve is viewed “edge on,” with the object plane orthogonal to the image plane, the object curve projects to a straight lint and all image curvatures are zero.) This means that the pattern of curvature sign changes along a curve is invariant under projection, except in the degenerate case. Also, since it follows that zeros of curvature are never introduced by the projection, except in the degenerate case, they too arc invariant under projection. The analysis of curvature extrema proceeds by differenti- ating the relationship between curvature in the object and cur- vature in the image. The interpretation of the result is more difficult and still continuing. bllt our preliminary conclusions are that t,hat curvature cxtrema in the image move about sta- bly and prtadictably as a function of viewpoint, that new ones do not appear. and old ones do not disappear, except in isolated or degenerate cases. Furthermore, as an extrcmum becomes more pronounced, becoming either locally straight on the one hand or a tangent discontinuity (a clasp or corner) on the other, the more invariant under projection the location of the extremum becomes. (Here a zero of curvature is considered a minimum of unsigned cur- vature.) This is not surprising. since where a curve is locally straight, curvature is zero, which as we have seen is a projective invariant. Cusps or corners. of course. remain cusps or corners from any viewpoint, and so are projective invarinnts as ~011. IV. J’tONOTONICITY OF CURVATUR& This section would be unnecessary but for an unfortunate lIli~t.lI~IIlatiCill reality: given two positions in the plane, each with a tangent direction and curvature, it is not always possi- blc to draw a smooth path between the positions which agrees with the information at the endpoints and contains no curva- ture extrema. Thus, precautions must be taken wlIcrI knots are assembled into lists to cnsurc that smooth pat,hs monotonic in curvature can be drawn between adjacent knots. Otherwise, the representation itself implicitly introduces spurious curvature extrcma. The test for this monotonicity curvature relation bctwccn knots is quite simple. First, since there are knot,s at both zeros and extrema, we can narrow the problem somewhat, since paths never need be drawn between knots with curvatures of opposite sign. Consider the case when both curvatures are positive, and recall that the osculating circle at a point on a curve is that circle tangent to the curve at the point with radius equal to one over the curvature at the point. and lying to the same side of the tangent as the curve itself. Two knots define two osculating circles. It is not hard to show that to draw a monotone curvature path interpolating the knots, the larger osculating circle must completely contain the smaller, as in the leftmost subfigure of Figure 2. This test checks thca feasibility of a monotone curvature path between two knots with the same sign of curvature. When one of the two knots to bc t&cd has zero curvature, its cur- vature is approximated with an arbitrarily small number of the same sign as the curvature at the other knot and the test pro- ceeds as before. In addition to testing two knots for the feasibility of a monotone curvature path, it is sometimes necessary to interpo- late such a path. In the figures in this paper, and for measuring the error in using two knots to represent a portion of an image curve, a spline consistin, m of three circular arcs is used. 
The spline agrees with the knot s at its endpoints in position, tan- gent direction, and curvature, except when curvature at a knot is zero, in which case its curvature is approximated. The splint is continuous, continuous in tangent direction, and a monotonic step function in curvature: that is, the curvature of the middle arc is between that of the first and last arcs. We shall refer to this spline as the monotone curvature spline. See Figure 2 for an example. V. SMOOTHING WITH GAUSSIANS. In this section the algorithm for finding knots which are canditlatt~s for assc~mbly into the final lists is tlcacribcd. The Figure 2: Monotone curvature splines. Left, two knots which can be interpolated with a monotone curvature path. The square and the triangle indicate the positions, the arrows tangent directions, and the circles curvatures. Center, a monotone curvature spline consisting of three circular arcs interpolates the knots. The first and last arcs coincide with the knots’ osculating circles. The vertices of the “V”-shaped polygonal arc are the centers of the three circular arcs. Right, the position markers and the spline are displayed alone. 239 Figure 3: Smoothing two-dimensional curves with gauasians. Top left, a hand-drawn sampled curve. The other curves are smoothed versions of the sampled curve, with the gaussian’s scale parameter increasing from top right, to bottom left, to bottom right. goal is to estimate position, tangent direction. and curvature at critical points along the sampled curve. Unfortunately, tan- gent direction and curvature arc not defined for sampled curves. Further, since the goal is to represent the curve at a variety of scales, they must include position, tangent direction, and cur- vature somehow measllrcd at a variety of scales. Another constraint is that knots cstimatcd at one scale should be consistent in the scnsc that it be possible to draw a monotone curvature pnth intc>rpolat,ing them. This suggests that f>xtracting critical points from curvature cstimatcd by lo- cally fitting circles AS in [Brady and Asatla 19841 is inadequate for this purpose. since thcrc is no gunrnntcc that the curvature monotonicity relation will hold bctwccn adjacent critical points. One way to avoid this problem is to map from thcx sampled curve to a smooth one and then to detect critical point,s in the smooth curve. Smoothing the sampled curve with gaussians at varying res- olutions meets thcsc rcquiremcnts. The smoothing technique discussed in this section prodllccs nn infinitely differentiable curve. so that a scalt> of smoothing dcfincs a map from the sam- pled curve to a smooth curve (in the scmse of infinitely differen- tiable), and critical points can then be detected in the smooth curve. Varyin g the scale of smoothing varies the scale at which position, tangent direr tion. and cluvnturc arc mcnsurcd. Figllre 3 is a example of a simlpled curve smoothed at several diffcrcnt scales. [Witkin 19831 has taken this approach in filtering one- dimensional sampled curves. He points out that zero crossings of the second derivative, which are closely related to zeros of curvature, can disappear as the scale of smoothing increases, but new ones can never appear. While we have no correspond- ing claim for critical points of two-dimensional curves, it is at least intuitively plausible that they exhibit the same behavior, and our experimental evidence is consistent with this conjecture. 
One desirable consequence is that shorter lists of knots can be used to describe a curve if the scale of smoothing is increased sufficiently. The basic approach of the smoothing algorithm is to smooth each coordinate function independently after defining it as a function of the straightline distance between adjacent points. At each point: the smoothed value of the coordinate function is a weighted average of the values of the coordinate function at nearby samples; the weights decrease with distance from the point being smoothed. The weighted average is com- puted by convolving the coordinate function with a gaussian, -ad normalizing the result at each point to correct for the fact that intersample distances vary along the curve. The normal- ized result turns out to be infinitely differentiable, so that it is possible to compute position, tangent direction, and curvature of the smoothed curve defined by the two smoothed coordinate functions. The critical points on the smoothed curve do not neces- sarily lie at points corresponding to samples of the original curve. The method used to find critical points oversamples the smoothed curve at a rate that depends on the range of inter- sample distances and computes position, tangent direction, and curvature at each oversamplcd location. The pattern of sampled curvatures indicate when a critical point lies between samples, and an iterative interpolation method is used to find its loca- tion as accurately as necessary. Figure 4 illustrates the critical points of a smoothed curve found by this method. Given a scale parameter for the gaussian. this algorithm specifies how to obtain a list of critical points, with position, tangent direction, and curvature at each, describing the curve smoothed at that scale. The choice of the range of scales for which smoothing should be performed to obtain these lists has not yet been automated; ultimately it will be based on the range of intersample distances, noise, and expected size of image curve features. VI. ASSEMBLING KNOTS INTO LISTS The next step is to assemble the knots obtained from smoothing the curve at different scales into the lists of knots which best approximate the curve. The approximation here refers to some measure of the distance between the original sampled curve and the monotone curvature spline which inter- polates the knots OII the list. Dynamic programming is used to find for each number of knots the list of knots which best approximates the curve. Note that scale is used in two senses here. The scale of smoothing refers to the scale parameter of the gaussian. The scale of the representation refers to the number of knots on a list which approximates the curve. The two may be different because a list of knots output by the dynamic programming algorithm may contain knots obtained from various scales of smoothing. This is in part a consequence of the definition of approxi- mation error of a list of knots. The error between a consecutive pair of knots and the corresponding portion of the original sam- pled curve is defined as the area between the monotone curva- ture spline which interpolates the knots and that portion of the sampled curve. The error for a list of knots is the maximum of these consecutive knot errors. Thus the error for a list bounds 240 the error between any consecutive pair of knots. This rechms the sensitivity of a list to partial occlusion, since the error of most subsets of the list have the same error as the list itself. Mortb global ~IWFIII‘CS of error. 
like the sum of consecutive kiiot errors. do not have this property. Thus the rcprcscntation of subsets of the curve achieving a given approximation error is more likely to be stable with respect to how much of the curve outside the subset is present. As a portion of the curve 1.. ‘q smoot!ird morr and more, the error in using knots ol~tained frown it to approximate the sarrl- pled curve 011 the average increases. But the rate of increase in any region of the curve dcpc~ls on the behavior of the curve in that region. For example. shallow undulations along a ba- sically linear portion of the curve will result in many knots to capture the small changes in clirvatiire at the smallest scale of smoothing: but perhaps just a knot or two whc*n the scale of smoothing is increasing at a very small cost in increased error in the approximation. At a sharp corner. however, smoothing tends to increase error dramatically as the corner bccornes more rounded, but there is no corresponding savings in the number of knots required to describe that portion of the curve. Thus the tradeoff between error, the number of knots, and their scale of smoothing can vary alo11g ‘a curve. It follows that minimizing the error achieved by a list of n knots can result in knots obtained from different scales of smoothing. [Plass and Stone 19831 USC dynamic programming to find the best list of knots to approximate a sampled clirvc with para- metric cubic splints. The basic idea is to construct a graph which represents all possible lists of knots and to find the mini- mum error list using the optimal search strategy. Our problem is slightly different, . since our goal is to find the brst list of knots for each feasible length list. A new algorithm has been devel- oped which finds all such lists in one pass through the graph; Figure 5 displays an example of its output for a curve smoothed at one scale. Each curve is the best approximation to the orig- inal curve for its number of knots. VII. FUTURE RESEAR,CH The integration of knots from different scales of smooth- ing into t,he same list has in some cases posed problems at those locations 011 t,hc c11rvc where the optimal scale for the curve is changing rnpi’lly. The likelihood that a inonotonc cur- vature transition between acl.jart>nt knots will b(t feasible dc- creases when tlic knots arc from widely separated scales, since they come from two possibly quite different curves. The current solution is to ensure that the spacin, v in scales is dense enough to guarantee the possibility of a monotone curvature transition between adjacent knots from different scales. If scale is chang- ing quickly enough even in one part of the curve. this may force smoothing at many scales and therefore generate many sets of candidate knots for the final representation. The dynamic pro- gramming technique used to assemble the knots into lists, which performs the (most efficient) exhaustive search, has complexity . The aiitoniat ion of the sclcc~tion of nntliral sc,nles is ongoiiig. The strategy is to postlilatc a iit ility filnction of the> quality and cost of computin g with a rcprc~scIit;ltiorl. aricl choosc~ sc;\lcs of representation which arc local maxin~a of 111 ility. A prtlirtiinary version of this i~lq~roach has been iniplerliciitecl which uses the approxirnalioii errc)r of n list of knots .a’ 5 a proxy for qii;ility. and the length of the list as it proxy for cost. The jiistificat ion is that nI’l)roxinl;Ltioli error 1. 
‘r: T('liLtC(l to wroh in Inotlcl 111iLt(‘lliIlg ilIld viewpoint WtiIlliLl ion. nrltl tlitt cost of niatchiiig and estiitintion is in part a function of the volume of information on which the rcqiiired cornpiitations are based. So filr; the irnplt~riic~ntntion of this irpI”.oil(.li with simple iitility functions has given mixed results, and snore work is necdcd. -~__-- r-- I Critical Points q : max rc A : min K, K # 0 +: rc=o Figure 4: The critjcal points of a smoothed curve. Left, a sampled curve produced by a simple edge detection progrram written by the author and run on a real image. Center, the curve smoothed with a gaussian. Right, t2le same smoothed curve with critical points marked. Monotone curvature splines interpolate the critical points in the rightmost two figures. 241 The ultimate test of the representation will be how well the model-matching and viewpoint estimation algorithm performs using the rrprestutation as input. This goal guided the design of the rcprcscntntion, and while the design and implementation of this algorithm is far from complete, it is a crucial part of this research and will be the t,opic of future papers. ACKNOWLEDGMENTS The author thanks Rod Brooks, David Lowe, Brian Wan- dell, and Andy Witkin for their helpful comments on an earlier draft of this paper. REFERENCES [l] Attneave, Fred, “Some informational aspects of visual per- ception,” Psychological Review, 61 (1954), 183-193. [ 21 Brady, Michael. “Parts description and acquisition using vision,” Proceedings oJ the Society of Photo-opticul and In- strumentntion Engineers, 1082. 131 Brady. M ic lze , and Haruo Asada, “Smoothed Local Sym- 1 c 1 metrics and Their Implementation,” The First Interna- tional Symposium on Robotics Research, Michael Brady and R.P.Paul, eds., MIT Press, Cambridge, Mass., 1984 (to appear). [4] Brooks, Rodney A., “Symbolic reasoning among 3-D mod- els and 2-D images,” Artificial Intelligence, 17 (1981), 285- 348. [5] Duda. Richard O., and Pctcr E. Hart, I’cLttern Classijicn- tion cbncl Scene Ancllysis, Wiley-IIltcrscic~nce; 1973. (01 Goad, Chris. “Special purpose automatic programming for 3D motlt~l-bast~l vision,” Proceedings ARPA Image Under- stnndiny U’orkshop, 1983. [7] Hoffman. Donald D., Representing Shapes for Visual Recognition, Ph.D. Thesis, Massachusetts Institute of Technology (May 1983). [ 81 Hollerbach, J., ‘*Hierarchical shape description of objects by selection and modification of prototypes,” MIT-AI- TR- $46, 1975. [9] Plass. Michncl. and Maureen Stone, “Curve-Fitting with Pic>ccwiscs Para.mc+ric Cubits,” Computer Graphics, 17:3 (1983), 229-239. [lo] Witkin, Autlrcw I’., “Scale space filtering,” Proceedings of the Eioht Internation Joint Conference on Artificial Intel- ligence,, 1983, pp. 1019-1022. Critical Points Cl : max K. A: minrc, lc#O +: n=O Figure 5: Finding the best sets of knots to approximate a sampled curve. Each curve above is a set of knots interpolated by the monotone curvature splint. In this example (the same curve as iu Figure 41, only one scale of smoothing produced the candidate knots, although the algorithm cau handle more scales. A dynamic programming algorithm was used to f?nd the best set of knots to approximate the original sampled curve for each possible number of knots; some of the sets are displayed here. The number of knots decreases most rapidly across rows from left to right and then down columns. 242
SHADING INTO TEXTURE Alex P. Pentland Artificial Intelligence Center, SRI International 333 Ravenswood Ave., Menlo Park, California 94025 ABSTRACT Shape-from-shading and shape-from-texture methods have the serious drawback that they are applicable only to smooth surfaces, while real surfaces are often rough and crumpled. TO extend such methods to real surfaces we must have a model that also applies to rough surfaces. The fractal surface model [Pentland 831 provides a for- malism that is competent to describe such natural 3-D surfaces and, in addition, is able to predict human perceptual judgments of smooth- ness versus roughness - thus allowing the reliable application of shape estimation techniques that assume smoothness. Thia model of surface shape has been used to derive a technique for 3-D shape estimation that treats shading and texture in a uni6ed manner. I. INTRODUCTION The world that surrounds us, except for man-made environments, is typically formed of complex, rough, and jumbled surfaces. Current representational schemes, in contrast, employ smooth, analytical primi- tives - e.g., generalized cylinder8 or splinee - to describe tbree- dimensional shapes. While such smooth-surfaced representations func- tion well in man-made, carpentered environments, they break down when we attempt to describe the crenulated, crumpled surfaces typical of natural objects. This problem is most acute when WC attempt to develop techniques for recovering 3-D shape, for how can we expect to extract 3-D information in a world populated by rough, crumpled surfaces when all of our models refer to smooth surfaces only? The lack of a 3-D model for such naturally occurring surface8 ha8 generally restricted image-understanding efforts to a world populated exclusively by smooth objects, a sort of “Play-Doh” world [l] that is not much more general than the blocks world. Standard shape-from-shading (2,3] methods, for instance, all employ the heuristic of u8moOthne88” to relate neighboring points on a surface. Shape-from-texture [4,5] method8 make similar assumptions: their models are concerned either with marking8 on a smooth surface, or discard three-dimensional notion8 entirely and deal only witb ad hoc measurements of the image. Before WC can reliably employ such tech- niques in the natural world, we must be able to determine which sur- faces are smooth and which arc not - or else generalize our techniques to include the rough, crumpled eurfaces typically found in nature. To accomplish this, we must have rccour8e to a 3-D model com- petent to describe both crumpled surface8 and smooth ones. Ideally, we would like a model that capture8 the intuition that smooth surfaces are the limiting case of rough, textured one8, for such a model might allow us to formulate a unified framework for obtaining ehape from both shading (smooth surfaces) and texture (rough surfaces, markings on smooth surfaces). * The research reported herein wa8 eupported by National Science Foundation Grant No. DCR-83-12768 and the Defense Advanced Research Project8 Agency under Contract No. MDA 903-83-C-0027 (monitored by the U.S. Army Engineer Topographic Laboratory) Figure 1. Surfaces of Increasing Fractal Dimension. The fractal model of surface shape [6,7] appears to possess the required properties. Evidence for this comes from recently conducted surveys of natural imagery [6,8]. These survey found that the fractal model of imaged 3-D surfaces furnishes an accurate description of most textured and shaded image regions. 
Perhaps even more convincing, however, is the fact that fractals look like natural surfaces [9,10,11]. This is important information for workers in computer vision, because the natural appearance of fractals is strong evidence that they capture all of the perceptually relevant shape structure of natural surfaces. II. FRACTALS AND THE FRACTAL MODEL During the last twenty years, Benoit B. Mandelbrot ha8 devel- oppd and popularized a relatively novel class of mathematical func- tions known as fractals [9,10]. Fractals are found extensively in nature [9,10,12]. Mandelbrot, for instance, shows that fractal surfaces are produced by many basic physical processes. The defining characteristic of a fractal is that it has a fractional dimension, from which we get the word “fractal.” One genera1 characterization of fractals i8 that they are the end result of physical processes that modify shape through lo- cal action. After innumerable repetitions, such processes will typically produce a fractal surface shape. The fractal dimension of a surface correspond8 quite closely to our intuitive notion of roughness. Thus, if we were to generate a series of scenes with the same 3-D relief but with increarring fractal dimension Z), we would obtain a sequence of surface8 with linearly increasing perceptual roughness, a8 is shown in Figure 1: (a) shows a flat plane (D = Z), (b) rolling countryside (D w 2.1), (c) an old, worn mountain range (D ti 2.3), (d) a young, rugged mountain range (D m 2.5), and, finally (e), a stalagmite-covered plane (D w 2.8). EXPERIMENTAL NOTE: Ten naive subjects (natural- 269 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. language researchers) were shown sets of fifteen 1-D curves and 2-D surfaces with varying fractal dimension but constant range (e.g., see Figure 1), and asked to estimate roughness on a scale of one (smoothest) to ten (roughest). The mean of tbe subject’s estimates of rougbness bad a nearly perfect 0.98 correlation (i.e., 96% of tbe variance was accounted for) (p < 0.001) witb the curve’s or surfaces’s fractal dimension. Tbe frsctal measure of perceptual rougbness is tberefore almost twice as accurate as any otber reported to date, e.g., 1131. Fractal Brownian finctions. Virtually all fractals encountered in physical models have two additional properties: (1) each segment is statistically similar to all others; (2) they are statistically invariant over wide transformations of scale. The path of a particle exhibiting Brownian motion is the canonical example of this type of fractal; the discussion that follows, therefore, will be devoted exclusively to frac- tal Brownian functions, which are a mathematical generalization of Brownian motion. A random function I(z) is a fractal Brownian function il for all z and AZ pr ( Jb + *4 - w < y ll*41H 1 = F(g) 0) where F(y) is a cumulative distribution function 171. Note that zc and I(z) can be interpreted as vector quantities, thus providing an extension to two or more topological dimensions. If I(z) is scalar, the fractal dimension D of the graph described by I(z) is D = 2-H . If H = l/2 and F(y) comes from a zero-mean Gaussian with unit variance, then I(z) is the classical Brownian function. The fractal dimension of these functions can be measured either directly from I(z) by using* of Equation 1, or from I(x)‘B Fourier power spectrum ** P(l), as the spectral density of a fractal Brownian function is proportional+ to I-2H-1, Properties 0fFZactalBrownian Functions. 
Fractal functions must be stable over common transformations if they are to be useful as a descriptive tool. Previous reports [6,7] have shown that the fractal dimension of a surface is invariant with respect to linear transforma- tions of the data and to transformations of scale. Estimates of fractal dimension, therefore, may be expected to remain stable over smooth, monotonic transformations of the image data and over changes of scale. A. The Fractd Surface Model And The Imaging Process Before we can use a fractal model of natural surfaces to help us understand images, we must determine how the imaging process maps a fractal surface shape into an image intensity surface. The first step is to define our terms carefully. DEFINITION: A frrctal Brownlrn murface is a continuous function that obeys the statistical description given by Equation (l), with z as *We rewrite Equation (1) to obtain the following description of the manner in which the second-order statistics of the image change with scale: E(~A~~,~)I]Az]I-~ = E(IA1*,,1[) where E(lAla,l) is the ex- pected value of the change in intensity over distance Ax. To estimate H, and thus D, we calculate the quantities E(IAIA,I) for various AZ, and use a least-squares regression on the log of our rewritten Equation (0 **That is, since the power spectrum P(j) is proportional to /-2H-1, we may use a linear regression on the log of the observed power spectrum as a function off (e.g., a regression using log(P(I)) - -(2H+l)log(j)+k for various values of /) to determine the power H and thus the fractal dimension. +Diacussion of the rather technical be found in Mandelbrot [lo]. proof of this proportionality may a two-dimensional vector at all scales (i.e., values of AZ) between some smallest (Azmin) and largest, (AZ,,,) scales. DEFINITION: A spatially isotropic fractal Brownlan surface is a surface in which the components of the surface normal N = (N,, N,, N,) are themselves fractal Brownian surfaces of identical frac- tal dimension. Our previous papers [6,7] h ave presented evidence showing that most natural surfaces are spatially isotropic fractals, with A2,in and AX 7na2 being the size of the projected pixel and the size of the examined surface patch, respectively. This flnding has since been confirmed by others [S]. Furthermore, it is interesting to note that practical fractal generation techniques, such as those used in computer graphics, have had to constrain the fractal-generating function to produce spatially isotropic fractal Brownian surfaces in order to obtain realistic imagery [ 111. Thus, it appears that many real 3-D surfaces are spatially isotropic fractals, at least over a wide range of scales* . With these definitions in hand, we can now address the problem of how 3-D fractal surfaces appear in the 2-D image. Proposition 1. A 3-D surface with a spatially isotropic fractal Brownian shape produces an image whose intensity surface is fractal Brownian and whose fractal dimension is identical to that of the com- ponents of the surface normal, given a Lambertian surface reflectance function and constant illumination and albedo. This proposition (proved in 171) demOnBhahB that the fractal dimension of the surface normal dictates the fractal dimension of the image intensity surface and, of course, the dimension of the physical surface. 
Simulation of the imaging process with a variety of imag- ing geometries and reflectance functions indicates that this proposition will hold quite generally; the “roughness” of the surface seems to dic- tate the “roughnessn of the image. If we know that the surface is homogeneous,** we can estimate the fractal dimension of the surface by measuring the fractal dimension of the image data. What we have developed, then, is a method for inferring a basic property of the 3-D surface - i.e., its fractal dimension - from the image data. The fact that fractal dimension has also been shown to correepond closely to our intuitive notion of roughness confirms the fundamental importance of the measurement. EXPERIMENTAL NOTEtFifteen naive subjects (mostly Ian- guage researchers) were shown digitized images of eight natural textured surfaces drawn from Brodatz 1141. They were asked “if you were to draw your ffnger horizontally along tbe surface pic- tured bere, bow rough or smootb would tbe surface feel?’ - i.e., they were asked to estimate tbe 3-D rougbness/smootbness of tbe viewed surfaces. A scale of one (smoothest) to ten (roughest) was used to indicate 3-D rougbness/smootbness. Tbe mean of tbe subject’s estimates of 3-D roughness bad an excellent 0.91 correla- tion (i.e., 83% of the variance accounted was for) (p < 0.001) witb rougbnesses predicted by use of tbe image’s 2-D fractal dimension and Proposition 1. This result supports tbe general validity of Proposition 1. B. Identlflcstion of Shrdlng Ver~r I’bxture Fractal functions with H FJ 0 do not change their statistics as a function of scale. Such surfaces are planar except for random varia- tions described by the function F(y) in Equation (1). If the variance of F(y) is small people judge these surfaces to be “smooth”; thus, the fractal model with small values of H is appropriate for modeling *This does not mean that the surfaces are completely isotropic, mearly that their fractal (metric) properties are isotropic. **Perhaps determined by the uze of imaged color. 270 smooth, shaded regions of the image. If the surface has significant local fluctuations, i.e., if F(y) is large, the surface is seen a8 being smooth but textured, in the sense that marking8 or Borne other 2-D effect is modifing the appearance of the underlying smooth surface. In contrast, fractals with H > 0 are not perceived a8 smooth, but rather a8 being rough or three-dimensionally textured. The fractal model can therefore encompass shading, 2-D texture, and 3-D texture, with shading a8 a limiting case in the spectrum of 3-D texture granularity. The fractal model thus allows us to make a reasonable, rigorous and perceptually plausible definition of the cate- gories “textured” versus “shaded, n “rough” versus “smooth,” in terms that can be measured by using the image data. The ability to differentiate between U8moothn and “roughn 8ur- faces is critical to the performance of current shape-from-shading and shape-from-texture techniques. For surfaces that, from a perceptual standpoint, are smooth (H w 0) and not 2-D textured (Var(F(y)) small), it seem8 appropriate to apply shading techniques.* For sur- faces that have 2-D texture it is more appropriate to apply available texture measures. Thus, u8e of the fractal surface model to infer qualitative 3-D shape (namely, smoothness/roughness), ha8 the poten- tial of significantly improving the utility of many other machine vision methods. relationship: E(I~I)=E(IP:~~~~)I)pE(lldNll) (4 where E(z) denotes the expected value [mean] of z. 
That is, we can estimate how crumpled and textured the surface is (i.e., the average magnitude of the surface normal’8 second derivative) by observing www Equation (4) provides us with a measure of 3-D texture that is (on average and under the above assumptions) independent of illuminant effects. This measure is affected by foreshortening, however, which acts to increase the apparent frequency of variation8 in the surface, e.g., the average magnitude of &N. We can, therefore, obtain an estimate of surface orientation by employing the approach adopted in other texture work [S]: if we assume that the 3-D surface texture is isotropic, the surface tilt* is simply the direction of maximum E&PI/I]) and the surface slant** can be derived from the ratio between rnw E(]d21/1]) and mine E(l&I/I]), h w ere # designates the [implicit] direction along which the texture measure is evaluated. Specifically, the surface slant is the arc cosine of ZN, the z-component of the surface normal, and for isotropic textures zN is equal to the square root of this ratio. The square-root factor is necessitated by the use of second-derivative terms. One of the advantage8 of this shape-from-texture technique is that III. Shape EatSmater From Texture And Shadlng The fractal surface model allow8 u8 to do quite a bit better than simply identifying smooth versus textured surfaces and applying pre- viously discovered techniques. Because we have a unified model of shading, 2-D texture and 3-D texture, we can derive a shape estimation procedure that treats shaded, two-dimensionally textured, and three- dimensionally textured eurfaces in a eingle, unitled manner. A. Development of a Roburt Texture Meuure Let us assume that: (1) albedo and illumination are constant in the neighborhood being examined, and (2) the surface reflect8 light isotropically (Lambert’a law). We are then led to this simple model of image formation: I=pX(N.L) (2) where p is surface albedo, X ie incident flux, N is the [three-dimensional] unit surface normal, and L is a [three-dimensional] unit vector point- ing toward the illuminant. The Brst assumption mean8 that the model holds only within homogeneous region8 of the image, e.g., regions without self-shadowing. The second assumption is an idealization of matte, diffusely reflecting surfaces and of shiny surfaces in region8 that are distant from highlight8 and specularities [3]. In Equation (2), image inteneity is dependent upon the surface normal, a8 all other variable8 have been aerrumed constant. Similarly, the second derivative of image intensity is dependent upon the second derivative of the surface normal, i.e., &I = pX(d2N - L) (3) not only can it be applied to the 2-D texture8 addressed by other researchers [4,5] (by simply using this texture frequency measure in place of theirst ), but it can alao be applied to surfaces that are three-dimensionally textured - and in exactly the 8ame manner. This texture measure, therefore, allows u8 to extend existing shape-from- texture methods beyond 2-D texture8 to encompass 3-D texture8 a8 well. B. Development of a Roburt Shape Eatlmator These shape-from-texture technique8 are critically dependent upon the assumption of isotropy: when the texture8 are anisotopic (stretched), the error is substantial. Estimate8 of the fractal dimension of the viewed surface [6,7], by virtue of their independence with respect to multiplicative transforms, o5er a partial solution to this problem. 
Because foreshortening is a multiplicative effect, the computed fractal dimension is not a5ected by the orientation of the 8urface.tt Thus, if we measure the fractal dimension of an isotropically textured sur- face along the z and y directions, the measurements must be identical. If, however, we find that they are unequal, we then have prima facie evidence of anisotropy in the surface. This method of identifying anisotropic texture8 is most e5ective when each point on the surface ha8 the game direction and magnitude of anisotropy, for in these ca8e8 we can accurately discriminate change8 in fractal dimension between the z and y directions. When the surface texture is variable, however, thie indicator of anisotropy becomes fess useful. Thus, local variation in the surface texture remain8 a major source of error in our estimation techniques; it is therefore important to develop a method of estimating surface orientation that is robust with respect to local variation in the surface texture. (Notation: we will write dr1 and dLN to indicate the second deriva- tive quantities computed along 8omt image direction (dt, dy) - thie direction to be indicated implicitly by the context.) *The image-plane component of the surface normal, i.e., the direction the surface normal would face if projected onto the image plane. **The depth component of the surface normal. The fractal model taken together with previous results [15], implies that on average &N is parallel to N. Consequently, if WC divide Equation (2) by Equation (3) we will on average obtain the following *Indeed, it is only in theae case8 that meaeurement noiee can be reduced tThis measure include8 edge information, i.e., the frequency of Marr- IIildreth zero-crossings as we move in a given direction appears to be proportional to E(]dLI/I]) along that direction; consider that Marr- IIildreth zero-crossings are also zero-crossings of &I/I. (by averaging) to the level8 required by shape-from-shading techniques ttAt least not until self-occlusion effect8 have become dominant in the without simultaneously destroying evidence of surface shape. appearance of the surface. 271 A-+ +++++++++++++++++ ++++++++++++++++++ TI*+tTClllllf-t*C *-*+e---*e+++e*e*+ +++++++++++++++++ ++++++++++++++++++ +++*+*+++++f+**+~ +++++..+*++++-+*+c ++++*+**++++++*+* +++++++++++++++++ ++-t++++++++++++++ *+++++-+++++++++++ +++++++-I-+++++++++ +++++-t+++++++++++ -++++-+++++++*+++-+ B Figure 2. Variation in Local Texture (a) Compared with No Variation (b). Such robustness can be obtained by applying regional, rather than purely local, constraints. Natural texture8 are often uhomogeneou8n over substantial regions of the image, although there may be significant local variation within the texture, because the processes that act to create a texture typically a5ect region8 rather than point8 on a surface. This fact is the basis for interest in texture segmentation techniques. Current shape-from-texture technique8 do not make u8e of the regional nature of textures, relying instead on point-by-point estimates. By capitalizing on the regional nature of texture8 we can derive a substan- tial additional constraint on our shape estimation procedure. Let us assume that we are viewing a textured planar surface whose orientation is a 30” slant and a vertical tilt. Let us further suppose that the surface texture varies randomly from being isotropic to being anisotropic (stretched) up to an aspect ratio of 3:1, with the direction of this anisotropy also varying randomly. 
Such a surface, covered with small crosses, is shown in Figure 2(a); for comparison, the same surface, minus anisotropies, is shown in Figure 2(b). If we apply standard ahape errtimation technique8 - i.e., estimat- ing the amount of foreshortening (and thue aurfaco orientation) by the ratio of Some texture meaSure along the [apparently] unforshortened and [apparently] maximally foreshortened direction8 - our estimates of the foreshortening magnitude will vary widely, with a mean error of 65% and an rms error of 81%. If, however, WC eetimate the value a of the unforshortened texture mea8urc by examining the entire region, and then compare this regional estimate to the texture measure along the (apparently) maximally foreshortened direction then our mean er- ror is reduced to 40% and the rms error to 49%. By combining this notion of regional estimation with the texture measure developed above, i.e., E(]#I/I]), we can conlstruct the follow- ing shape-from-texture algorithm that ie able to deal with both smooth two-dimensionally textured surface8 and rough, three-dimensionally textured 8urface8, and that L robuet with respect to local variation8 in the surface texture. C. A Shape E&rmation Algorithm We may construct a rather elegant and efficient ehape estimation algorithm based on the notion of regional estimation and on the texture measure introduced above by employing the fact that (5) for any orthogonal u, II. This identity will allow ua to estimate the surface slant immediately rather than having to search all orientations for the directions along which we obtain the maximum and minimum values of E( Id2 I/I]). Let us assume that we have already determined (Y = mine E(l@I/Jl), which is the regional estimate of unforeshortened E(]dZN]). When the estimate of a is exact, Equation (5) gives ua the (b) L- Figure 3. Tuckerman’s Ravine. result that (6) as the directions of maximum and minimum E ( > ]q] are orthogonal. VZ’e may therefore estimate ZN, the z component of the surface normal, by where 0 = E(lV21/I]) an d a is the regional estimate of the unforeshor- tened value of E(]GI/I]). Th e constant (Y can be estimated either by the median of the local [apparently] unforeshortened texture-measure values, or by use of the constraint that 0 5 ZN 5 1 within the region. The direction of surface tilt can then be estimated by the gradient of the resulting slant field - e.g., the local gradient of the zN values - or (as in other methods) by examining each image direction to find the one with the largest-value of the texture frequency measure. In actual practice we have found that the gradient method is more stable. D. A Unifled Treatment of Shadfng and Texture The fractal surface model capture8 the intuitive notion that, if we examine a series of surface8 with successively less three-dimensional texture, eventually the surfaces will appear shaded rather than tex- tured. Because the shape-from-texture technique developed here was built on the fractal model, we might expect that it too would degrade gracefully into a shape-from-shading method. This is in fact the case: this shape-from-texture technique is identical to the local shape-from- shading technique previously developed by the author [15]. That ie, we have developed a shape-from-x technique that appliea equally to 2-D texture, 3-D texture and shading. As an example of the application of this shape-from-texture- and-shading technique,* Figure 3 8hOW8 (a) the digitized image of Tuckerman’s ravine (a skiing region on Mt. 
Washington in New Hampshire), and (b) a relief map giving a side view of the estimated surface shape, obtained by integrating the slant and tilt estimates.** *This example was originally reported in Pentland (151 a8 the output of a local shape-from-shading technique followed by averaging and in- tegration. This algorithm is identical to the shape-from-texture tech- nique described here; in fact, investigation of the shape-from-texture properties of this method was motivated by the coneternation caused by this successful application of a ebsding technique to a textured nur- face. 272 This relief map may be compared directly with a topographic map of the area; when we compare the estimated shape with the actual shape, WC find that the roll-off at the top of Figure 3(b) and the steepness of the estimated surface are correct for this surface; the slope of this area of the ravine averages 60’. [15] Pentland, A. P. (1984) “Local Shape Analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, March 1984, pp. 170-187 IV. Summary Shape-from-shading and texture methods have had the serious drawback that they are applicable only to smooth surfaces, while real surfaces are often rough and crumpled. We have extended these methods to real surfaces using the fractal surface model [6,7]. The fractal model’s ability to distinguish successfully between perceptually “smooth” and perceptually ?ough” surfaces allows reliable application of shape estimation techniques that assume smoothness. Furthermore, we have used the fractal surface model to construct a method of es- timating 3-D shape that treats shading and texture in a unified manner. REFERENCES [l] H.G. Barrow and J.M. Tenenbaum, “Recovering Intrinsic Scene Characteristics From Images,” in A. Hanson and E. Riseman, Eds., Computer Vision Systems, Academic Press, New York, New York 1978 [2] B. K. P. II. Horn, “Shape From Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View,* A.I. Technical Report 79, Project MAC, M.I.T. (1970). [3] B. K. P. H. H orn and K. Ikeuchi, “Numerical Shape from Shading and Occluding Boundaries,” ArtiEcial Intelligence, 15, Special Issue on Computer Vision, pp. 141-184 (1981). [4] J. R. Kender, “Shape From Texture: An Aggregation Transform that hlaps a Class of Textures Into Surface Orientation,” Proceedings of the Sixth International Joint Conference on Artificial Intelligence, Tokyo, Japan (1979). [s] A. P. Witkin, “Recovering Surface Shape and Orientation from Texture,” Artificial Intelligence, 17, pp. 17-47 (1981). (61 A. Pentland, “Fractal-Based Description,” Proceedings of International Joint Conference on Artificial Intelligence (IJCAI) ‘83, Karlsruhe, Germany, August 1983. [7] A. Pentland, “Fractal-Based Description Of Natural Scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, to appear September 1984. [S] G. Medioni and Y. Yasumoto, “A Note on using the Fractal Dimension for Segmentation,” IEEE Computer Vision Workshop, Annapolis, MD, April 30- May 3, 1984. [9] B. B. Mandelbrot, “Fractals: Form, Chance and Dimension,” W. H. Freeman and Co.,San Francisco, California, 1977. [lo] B. B. hlandelbrot, “The Fractal Geometry of Nature,” W. H. Freeman, San Francisco, 1982. [11] A. Fournier, D. Fussel and L. Carpenter, “Computer Rendering of Stochastic Models,” Communications of the ACM, vol. 25, 6, pp. 371-384, 1982. 1121 L. F. Richardson, “The Problem of Contiguity: an Appendix of Statistics of Deadly Quarrels,” General Systems Yearbook, vol. 6, pp. 
139-187, 1961. 1131 H. Tamura, S. Mori, and T. Yamawaki, “Textural Features Corresponding to Visual Perception,” IEEE Trans. on Sys., Man and Cyber., Vol. SMC-8, No. 6, pp.466-473, June 1978 1141 P. Brodatz, “Textures: A Photographic Album for Artists and Designers,” Dover, New York, New York, 1966. **The shape algorithm produces estimates of the surface orientation. For display purposes, these estimates were integrated to produce a relief map of the surface. 273
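Referring back to the regional shape-estimation algorithm of Section III.C above: Equations (5) through (7) are garbled in this copy, so the sketch below adopts one plausible reading, z_N = sqrt(alpha/beta), with beta the local value of E(|Laplacian(I)/I|) and alpha its regional (median) unforeshortened estimate; the original paper should be consulted for the exact form. All function names and window sizes are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def local_texture_measure(img, sigma=1.0, win=9, eps=1e-6):
    """Local beta = E(|Laplacian(I)/I|), averaged over a small window."""
    lap = gaussian_laplace(img.astype(float), sigma)
    return uniform_filter(np.abs(lap) / (img + eps), size=win)

def regional_slant_tilt(img, sigma=1.0, win=9):
    beta = local_texture_measure(img, sigma, win)
    # Regional estimate of the unforeshortened value: the median of the local
    # values, one of the two options suggested in the text.
    alpha = np.median(beta)
    # Assumed reading of the garbled Eq. (7); clipped so that 0 <= z_N <= 1.
    z_n = np.sqrt(np.clip(alpha / beta, 0.0, 1.0))
    slant = np.arccos(z_n)
    # Tilt from the gradient of the z_N field, the option reported as more
    # stable in practice.
    gy, gx = np.gradient(z_n)
    tilt = np.arctan2(gy, gx)
    return slant, tilt
```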
EFFICIENT MULTIRESOLUTION ALGORITHMS FOR COMPUTING LIGHTNESS, SHAPE-FROM-SHADING, AND OPTICAL FLOW Dcructri l’cr~opoulos M/T Artficial lrrrclligcnce Laboratory 545 Tccl1110logy Square Canlbrrdge, hlA 02139 Abstract 1. INTRODUCTION A ni~mbcr of ccmputAionn1 Lhks in lou -Ic\cl niacllinc dsion lid\ c hccn fimiit~l.ircd dc \ ,iri,ltion,ll principh (Inininll/,lti~lll problcn~s) or ‘13 (elliptic) p,11u,11 dlft’crcntul cqu,ttion~ (f’IX:s) (c.g., 11. 2, 8. 9. IO, 15. 171). C:ndcr ccmin (~clf ,IdJointncss) miditions, PIX formul,lrions can 1~ linked to \,ari.ltion,il principles. as ncccssary condirion~ for minimi. through the I-Alcr-l ‘igr‘lngc cqu,itions of Lhc calculus of 1 ariationc 141. .2n ,ittracti\ c fcatiirc of many variational PI Inclplc and assoc~~cd Pl)E formulations. once discrctircd, is that thclr solutions can he computed by itcrati\e algorithms requiring onl! local computations uhich CAM hc pcrfolmcd in pnrailcl by man> simple processors in locall! -conncctcd networks or grids. Such algorithmic structures arc appealing. both in L icw of the apparent structure of biolog]cA vision s>rtcms and the imminent proliferation of m,issi\cly parallel, locally conncctcd VI SI processors for vision. Visual rcprcscntations visually possess certain csscntial global propertics (consistcnc>. smoothncsb. minimal cncrgy. etc.) which the variational principle or IWE formulations aim to csprurc formally. Gi\,cn onl!, local processing capabililics. global propcrtics must be s:irisiicd indircctlt. tl pically by propagating \ isual information across grids through iteration. Substantial computational inefficiency can result since the computatlonal grids tend to bccomc cxtrcmcly large in mncl:inc vicion ,ipplications. Convcrgencc of the iterative process is often so slow as to nearly nullify the potential bcncfits of massive par,~llclism. A cast in point is the local. itcrativc computation of \ isiblc-surface rcprcscntations from scattered, local cstimatcs of surface shnpc [14-161. Multircsolution processing in hierarchical rcprescntations can hc effccti\c in counteracting the computational sluggishness of local, itcrativc solutions to vision problems posed as variational principles or WEs. Multigrid methods [7]. cfficicnt tcchniqucs for solving PI)& numerically. hale been adapted successfully in our previous work to the computation of visible-surface rcprcscnt;itions (14, 151. An objccti\c of this paper is to dcmonstratc that this methodology has -- This rcporl describes research done at the Artificial Intelligence Laboratory of !he Masslchuselts Institute of ‘Technology, Support for the laboratory’s Artificial Intelligence rc$carch is plo\idcd in p:lrt by the Adbanccd Re- search Projcclh Agency of the Depanmcnr of Dcfcnse under Oficc of Na\al Research contract N0OOl-b75-C-0633. and the System Development Found;lGon. The author gratcfilllq acknowledges the fin;tnciaI support of the >c3turaI Sciences and Engineenng Research Council of Canada and the Fends F.C.A.C. QuCbfc. Canada. I~I~~LIJ ,~l~I~l~c.~hilit~ in \ Gn (\cc ,~IuI [ 14, 01). After ;I brief o\cr\.icM of‘ mullisl Ed mcrhod~. ~c stud!. III turn. rhc itcrA\c comput,~tion ot’ Il~htfic\4. ~li,lllc-l’l.olii-sh,tdiii~. illld OlJtiC,ll 1lOH flom imdgcs. LVC l>rc~nt cmpiric.ll c\ idcncc th,lt our ti~iiltirc~olution ,rlgcJrithms can hc cjrdcr\ 01‘ m,l_rnitudc ~I~I~C Clkicnt ULIII con\ cntional single Ic\cl \ cr4ions. 2. MULTIGRID METHODS Progress h,l< rcccntl! hccn made in applied numerical analysis M Ith rcg,tldb to multigrid methods (4cc. c.g.. 13, 71). 
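A minimal two-grid sketch of the coordination just described may help fix ideas. The model problem is the discrete Poisson equation, which is also the form the lightness problem takes below, and the component choices (Jacobi smoothing, injection for restriction, bilinear interpolation for prolongation) follow those named in the next section; grid sizes, sweep counts, and boundary handling are illustrative.

```python
import numpy as np

def jacobi(u, f, h, sweeps):
    """Local Jacobi relaxation for the 5-point discrete Poisson equation
    (u_E + u_W + u_N + u_S - 4u)/h^2 = f, interior nodes only."""
    for _ in range(sweeps):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] - h * h * f[1:-1, 1:-1])
    return u

def restrict(r):
    """Fine-to-coarse by simple injection (assumes a (2^k + 1) grid)."""
    return r[::2, ::2].copy()

def prolong(e, shape):
    """Coarse-to-fine by separable (bilinear) interpolation."""
    ny, nx = shape
    yc = np.linspace(0, e.shape[0] - 1, ny)
    xc = np.linspace(0, e.shape[1] - 1, nx)
    tmp = np.array([np.interp(xc, np.arange(e.shape[1]), row) for row in e])
    return np.array([np.interp(yc, np.arange(e.shape[0]), col) for col in tmp.T]).T

def two_grid(u, f, h, pre=3, post=3, coarse=50):
    """One two-grid cycle: pre-smooth, coarse correction, prolong, post-smooth."""
    u = jacobi(u, f, h, pre)
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = f[1:-1, 1:-1] - (u[2:, 1:-1] + u[:-2, 1:-1] +
                                     u[1:-1, 2:] + u[1:-1, :-2]
                                     - 4 * u[1:-1, 1:-1]) / (h * h)
    rc = restrict(r)
    ec = jacobi(np.zeros_like(rc), rc, 2 * h, coarse)  # crude coarse solve by relaxation
    u = u + prolong(ec, u.shape)                       # coarse-grid correction
    return jacobi(u, f, h, post)
```

Recursing on the coarse solve, and adding adaptive scheduling in the spirit of [3], would turn this two-grid cycle into a full multilevel method.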
WC have dr,l\r 11 upotl thc~ tcchnlquc\ ,rnd the ‘1+4oci,ltcd thcorq in our work on multlrc~c)lutio~l colnI,ut,ltic,n,ll 1 ijion w g.rin colnput,ltion~tl and rcprc~crit~1tic,n~ll Ic\cr,lgc. Our .Id.ipr;rtion of‘ thcsc? methods provide ‘III cfticient mc,~nj of‘ compurinr conxi\icnt \,isu,il rcprcscntations at niultiplc sc~lcs. In multigrid mcrhod~. ;: hicr;lrcliy ol‘di~rctc problems ij formtll,~tcd and local. mulrilc\ cl r’clax,ltlon schcmcs arc applied to ilCCCl~I2W con\ cr_ecncc. Our ,ilgorithms ha\ c scvcral compcmcnts: (i) multiple \ i\ual rcprcscntAm\ o\ cr a range of spati;ll resolutions, (ii) loc,ll intralc\cl proccsscs that itcr;lti\ cl! propagate constraints within each rcprcscntatic,li;11 Ic\,cl. (iii) 10~11 coarhc-to-fine (prolongation) proccsscs that iillOW coarser rcprcscntations to constrdin finer ones, (i! ) fine-to-codrsc (restriction) processes that allow finer rcprcscntstions to improic the accuracy of coarser ones, and (i\ ) adaptive (recursive) coordination stratcyicj [3] that cnahlc the hicrarchg of rcprcscntations and component p~occsscs to coopcrate towards incrcnsing cficicncy (see [14, 151 for details). Gcncrally, the intralcvcl proccsscs arc familiar Gauss-Scidel or Jacobi rcl,lxation 151. the prolongation proccsscs are local 1,agrange (pol!,nominl) interpolations, and the restriction proccsscs are local avcr,lging opcrntions [3]. The precise form of these processes is problem-dcpcndcnt. The nlgorithm~ in this paper employ simple injection for the fine-to-coarse restrictions and bilinear interpolation fclr coarse-to-fine prolongation. Appropriate relaxation operations are derived by discrctiying the continuous vision problems. The finite dlfl‘crcncc method [5] can be cmploycd when a problem is posed as a PI)E, whcrcas the finite clcmcnt method [13], a more gcncral and po\vcrful discrctizition technique, can be applied directly to variational principle formulations [14-161. 3. THE LIGHTNESS PROBLEM ‘I’hc lightness of a surface is the perceptual correlate of its rcficctancc. Irrddiancc at a point in the image is proportional to the product of the illuminancc and rcflcctancc at the corresponding point on the surface. ‘I‘hc lightncbs prublcm is to compute lightness from image irradinncc, assuming no prccisc knowledge about either rcflcctnncc or illuminance. ‘I’hc rctincx theory of lightness and color proposed by Land and ,\lcCann 1121 is b;iscd on the observation t!at illuminance and rciicctancc patterns differ in their spatial propcrtics. llluminance changes arc usually gradual and, thcrcforc, typically give rise to smooth illumination gradjcnts, while reflccrnncc changes tend to bc sharp. since they often originijte from abrupt pigmentation changes and surfllcc occlusions. Horn 191 proposed a two-dimensional gcncralization of the l-and-hlcCann algorithm for computing lightnsss 314 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. in .IIuurlrih/t sccncs consisting of pl,tnar arc;ts di\ idcd into subregions of uniform matte color. I.ct /l(r, y) bc tl1c rcflcct;lncc Of 1llC \LII.f~ICC ilt i1 point corrc\ponding to the iln,igc point (.r,y) illld ICt S(.r,!/) IX! 1hC Ill~lllllnil~~CC &It lhilt S130int. ‘I IlC irr,idi,incc iit 1hC IlTlilgL2 point is gi\Cn b> /l(s, !I) = S(T, tj) x /{(I, !I). I)cnoting the Ic+lr-ithmc of rhc a!wvc function4 ;I\ lo~crc~~sc qimnlitic5. MC hale L(s, !I) -- .q(~, ~1) t 7(x,1/). \c\t. I lorn cmplo;~cd the I ,~pl;ici,ni opcr‘lt~~r A H hlch gibes rl(.r, 1~) = Ab(.r, ~1) -2 A.q(;r, !I) P Ar(r, !I). 
In the Xlondri,tn $ituiltion illuminancc ic ,Issumcd to ear’! smoothI! so th‘~t A.*(r, !I) bill be finitc cvcry\chcrc, N hilc AT(J. y) N ill cxhiblt pulp doublets ;rt intcnriry edges scp,u-sting nci$boring regions. 11 thrcsholding opcr,wr 7’ con bc itpplicd to disc& the finite pirt: ‘Tjd(~, y); -= A+, y) -T--Z j(x, y). Ilcncc, the rcllcctancc II is given b! the in\crsc logilrithm of the solution to Poisson’s equation A+, Y) = Jk, Y), in II, where I2 is the planar region covered by the image. Horn WI\ cd the abo\ c PI)E by con\ olution with the approprintc Green’s functmn. WC will instead pursue an itcrati\c solution \ 11~3 is also local and parallel, hcncc apparently biologically feasible. ~I’hc finite diffcrcncc method can be applied directly. Suppose that II is cnlcrcd by a uniform square grid with spacing h. We can approximate Ar = T,, + ryy using the order h? approximations r,, = (r,h,,,3 - 2rf,j + rIh_,,j)/hz and ryy = (rf;3+1 - 2rp,, + rF,j-,)/h2 to obtain a swdard discrctc version of Poisson’s equation (rt+,,, + rtP,,3 + r:,j+l -t rt,,,-l - 4rt3)/h2 = r,, . This dcnotcs a sybtcm of linear equations whose coefficient matrix is sparse and banded [5]. Rearranging, the Jacobi relaxation step is given by p ,(n+') 1 rh =- CT3 4 ( (71) l+I,j (-1 + r+ s-1 ,j + rF,j+l(n) + rFj-l(n) - /L*?~). Jacobi relaxation is suitable for parallel implcmcntation, whereas Gauss-Scidel relaxation is better suited to a serial computer and, morcovcr, rcquircs less storage. The synthcsixcd Mondrjan images shown in Figure 1 were input to a four lcvcl lightness algorithm (uith grid siLes 129 x 129, 65 x 65, 33 x 33, and 17 x Ii’). The grid function e,j was computed by maintaining only the local peaks in the Laplacian of r;“,j at each Icvcl. Zero boundary conditions wcrc provided around the cdgcs of the images. and the computation was started from the zero initial approximation ri,] h - 0. E‘igurc 2 shows the rcconstructcd Mondrian, which lacks much ofthe illumination gradient. Reconstruction required 33.97 work units, whcrc a work unit is the amount of computation required for an iteration on the finest grid. ‘l‘hc total number of itcrntions performed on each lcvcl f?om coarsest to finest respcctivcly is 142, 100, 62, and 10. In comparison, a single-lcvcl algorithm requircc! about 500 work units to obtain a solution of the same accuracy at the Jincst level in iso!ation. ‘l‘hc single-lcvcl algorithm requires at least as many iterations for convcrgcncc as thcrc are nodes across the surface. since information at a node propagntcs only to its ncartlst neighbors in one jtcration. ‘I‘hc multilcvcl algorithm is much more cficicnt because it propagates information more effccti\ely at the coarser scales. 4. THE SHAPE-FROM-SHADING PROBLEM In gcncral. image irradiancc dcpcnds on surface gcomctry. 5CCIlC illUl~lillilllCC. SUI’filCC rcflcctancc, iind im,lging gcomctry. ‘I‘hc slli~pc-frown-sll;lding prohlcm is to rwn’cr Lhc rJla~lc of 5urfXcs from image irradiancc. 13) ilssuming th;~t illuminancc. rcflcctancc, and imaging gcomctry arc constan t .md knw n, image irradiancc can IX rclafcd directly to surfjcc orientation. I.ct U(T, .v) bc a surface patch with constant albcdo defined over a hounded planar region (2. l‘hc rcLltionchip bctwccn the >urfacc orientation at a point (s! w) i\nd the image irrndi,rncc there 11(~, ?/) is dcnotcd by Q,g). uhcrc IJ = U, and (I = ‘1~~ arc Figure 1. Synthetic Mondrian images containing patches of uniform reflectance and an illumination gradient which increases quadratically from left to right. 
The three smaller images are increasingly co;1rscr sampled versions of the largest image which is 1~1 x 129 pixels, quahtircd to 256 irradjance levels. _____ Figure 2. The reconstructed hlondrinn computed nftcr 33.07 work units by t-he four-level multiresolution lightness alguritbm. Most of the illumination gradient in FIgtIre 1 has been eliminated. -_-- the fir.,t J>iil’tlilI dcri\iltl\ Cc of IhC \llifiiCC f’tlll~ti~~ll At (1, y). ‘!‘JlC sh,~pc-f~~)m-slli~ding problem can bc posed ‘14 ;I nonlinear. first-order PI>E in t\co unknon ns. cilllcd the Iln~rffc-irrHdiilll~~ cqwltion [I 11: Ij(.r, y) - 1+, q) =- 0. ClcCu4)~. surfhcc oricnt,ltion cClnnot bc computed strict]) IoC~Ill~ bCc~lll~c illlilgc iI riidiilllCC proi ides a sinflc mc,lsurcmcnt, while surfilcc oricnt;rtion has two indc\pcndcnt compo~~cncs. .l‘hc image irradiancc cquiltion pro\ ides one cxpllcit conwClint on surface orientation. I kcuchi and Ilor-n [ 11) cmploqcd an .idditional surface ~moothncss constraint. An ,IJipiVpriittC wt of boundary conditions is ncccwry to solve the problem. and the) suggested the USC of occluding boundaries of surfitccs. Since LI~C 1j-g pal.i~rnctcri/~ttio~~ of surface orientation bccomcs unbounded ,lt occluding boundaries however, t.hc!, rcparametcrilcd surfilcc orientation in terms of the stereographic mapping: j = 2prr. CJ = 24tr. where a = (&$Ti9--- l)/(p’+ 9.‘). The above considcr,ltions ucrc form;rJi/.ed in a variational principle in\ol\ ing the minimi/.ation of the functional CC/t d = JJ .(I:‘J:)+(~:+~t)d3.dy+S JJ ni+t~)- ~~(J,.r1)]2d4/ The first integral incorporates the surface smoothness constraint. 7%~ second is a least-squares term which attempts to cocrcc the solution into satisfying the image irradiancc equation. thus treating the image irradiancc equation as a penalty constraint wcightcd by a factor X. ‘l‘hc Euler-Lagrange equations are giicn by the following system of coupled Pl>Es AI - X[JJb, Y> - W, s)iQ = 0, Ag - X[U(z, y) - R( j, g)] 12, = 0. Discrctizing the above equations on a uniform grid with spacing h using the standard finite diffcrcnce approximations, we obtain the Jacobi relaxation scheme 315 $,j(n+l) = @[$Fjj(“) + X[Bi,J - Z”(ff:j’ “‘ ,g~,j’ “‘ )][z~~]~~‘ l g;, j(n+l) ZZZ @[gf,j](“) + X[Bi,j - R( Gj’ “‘ , g,“, j’ “‘ )][R,]!:,!, whcrc @[c,j] = [Q-l,j + c+l,j + Q,j-l + e,j+l]/4 and @[$,j] =I id- 1 ,3 + g;“,, , j + g:, j-l + g11,i+l]/4 are local averages of /‘l and gh at node (;, j) (a factor of l/4 has been absorbed into X), 1~1 = i3R/i3j, and Rg = ~II/&J. WC employ the Gauss-Scidcl form of the relaxation in our multilcvcl algorithm. Appropriate boundary conditions may be obtained from occluding boundaries in the image (see [ll] for a discussion). A four level shape-from-shading algorithm (with grid sizes 129 x 129, 65 x 65, 33 x 33. and 17 x 17) was tcstcd on he 4! ntltctic~~ll~ -gcncr,itcd I ,imbcrti.in \phcri‘ im,fgcs sho\r 11 in tCtgiirc 3. SLI~I;ICC oricnt&on M;I\ spccificd ;~round the occluding houndnry of' Illc‘ yhTc. M IIICII was m,~rkcd ,I\ ,I dixxmttnult), and the comput,ition \t,l\r \t,irlcd fron1 the /cro lnlti,ll ilp~llc~xi~~l~itiOl~. j -= !I o M ithIn the ~phcrc. I hc ~llut~on \j ,I\ obt,~~ncd .Ilicr 6.175 work un114. I hc tot,11 1111i~ilx2r of‘ itcr,itionj pcrfilrlllcd on c,ich Ic\cl from co~~r~~\t to firic\t rc\pccti\cl! I\ 3-. ) 10. 4. ;ind 1. In conip,iricon. ;I \I nslc-lc\ cl ,ilgorlrhni rcqiiircJ ~10~2 to 200 work tInit\; to obtain ‘1 v~Iu~I~u~ of‘ the \;unc’ accllrac! ,II the finest Ic\cl in i\ol,ition. Unlike the li$trlc\c ,llg~xithm. ~OHC\ cr. 
rhc sh‘lpc-from-sll,ldin_c algorithm cmplo! \ +,idm_r infiirm,ition ‘ind the imlfc irr,ldi.incc equation to con\tr‘lin the 4urf;Icc sh:~pc M irhln rhc surfxc boundaries. For this rc,l\on. con\crgcncc is cxpcctcd to bc faster. I‘hc sur-fxc norm,tl~ computed b! the sh;rl’c-frown-shading ~iIc~,rillim at rhc three ccxirsc\t rc5olution*l iII’C rcprcscntcd in Figure -1 ;I\ “necdlcs.” 1 hcsc nccdlcs ;Irc‘ sho\r II I! ing on ;I pcrspcctii e view or’ the surt’acc in depth. ‘I‘hc depth rcprcscnt:ition was computed b! ;I (four-lc\cl) multircsolutio~~ surf,lcc reconstruction algorithm (l4-IO] u+g the norm,rls ;IS sur~‘ICC oricntaticm constr,rints. Nodes on the occluding bo\lndar~ of the sphere uc’rc m,trkcd ;IS depth disctlntlnuitic\ and the comput,ltion was started from the zero depth initial ,Ipproximation. ‘I‘he surfacc rcconsIruction rcquircd 8.8 work units. 5. THE OPTICAL FLOW PROBLEM Optical flow is the distribution of apparent vclocitics of irradiance pattClXS in the dynamic imilgC. I hc optical flow field and its discontinuitics can bc an important source of infonnstion about the drrangcmcnt and the motions of I isiblc surfaces. ‘l‘hc optical flow problem is to compute optical flow from a discrctc scrics of images. Ilorn and Schunck [lo] suggcstcd a technique for dctcrmining optical flow in the rcstrictcd cast v,hcrc the obser\,cd velocity of image irradiancc patterns can be attributed directly to the movement of surfaces in the scent. Under thcsc circumstances, the relation belu ccn the change in image irradiancc at a point (I, g) in the image plane at time t and the motion of the irradiancc pattern is given by the flow cqaation &U + L&V + ZI, = 0, whcrc Zj(z,y,t) is the image irradiance, and u = dx/dt and v = dy/dt arc the optical flow components. An additional constraint is nccdcd to solve this linear equation for the two unknowns, ZL and 71. If opaque objects undergo rigid motion or dcfonnation. most points have a iclocity similar to that of their neighbors. csccpt \vhcrc surfaces occlude one nnothcr. Thus, tilt velocity field will \‘ary smoothly almost cvcrywhere. Horn and Schunck formulated the optical flow problem as finding the flow functions ~(2, y) and V(X, 21) which minimize the functional whcrc cx is a constant. The first term is the smoothness constraint, while the second term is a Ic;lst-squnrcs penalty ftlnctional Hhich cocrccs the flow field into s:r!isfying the flow equation as much as poss~l~lc. The E&r-l .a_rrangc equations for the above fimctional are given by [lo] Figure 3. Synthetic images of a Lambertian sphere illuminated by a distant point source perpendicular to the image plane. The three smaller images are increasingly coarser sampled versions of the largest image which is 129 x 1211 pixels. quantized to 256 irradiance levels. Figure 4. Surface normals which were computed after 0.125 work units by ~?e four-level mulCrcsolut.ion shape-from-shading algorithm are shown as “needles” for the three coarsest levels (the finest resolution surface is too dense to illustrate as a 3-D plot). The surfaces were computed from the normals by a multiresolution surface reconstruction algorithm after 8.8 work units. ---- -- I&, $ I~,I~,7~ ru’?Au I), 4 7 /I, I& IL + I$, = a AV I&, IIt. :255itiiiing ;i cubical network of no&\ M ith spicing 1,. where i. j. illld k index n(Jdcls iil(Jng the r. !/. 2nd t ,~xcs rc\pccti\cly. 
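For concreteness, one Jacobi sweep of the shape-from-shading scheme given above can be written as follows. The reflectance map R and its partials R_f, R_g are passed in as callables because their specific Lambertian, stereographic form is not reproduced here; boundary rows are simply held fixed (for example, at values supplied by occluding contours). This is a sketch, not the paper's implementation.

```python
import numpy as np

def sfs_jacobi_step(f, g, B, R, Rf, Rg, lam):
    """One Jacobi sweep of the coupled updates
       f <- mean4(f) + lam * (B - R(f, g)) * Rf(f, g)
       g <- mean4(g) + lam * (B - R(f, g)) * Rg(f, g)
    on interior nodes; B is the image irradiance array."""
    def mean4(a):
        return 0.25 * (a[2:, 1:-1] + a[:-2, 1:-1] + a[1:-1, 2:] + a[1:-1, :-2])
    err = B[1:-1, 1:-1] - R(f, g)[1:-1, 1:-1]
    f_new, g_new = f.copy(), g.copy()
    f_new[1:-1, 1:-1] = mean4(f) + lam * err * Rf(f, g)[1:-1, 1:-1]
    g_new[1:-1, 1:-1] = mean4(g) + lam * err * Rg(f, g)[1:-1, 1:-1]
    return f_new, g_new
```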
WC USC the lilllowlllg stand;trd tinitc dltt;‘rcncc formul,ls to dis- crcti/c the dill2rcntiJ O~~c’l‘irt~ll‘s: !/j,]j’,,.k :- 21,) (/I:+ I .j.k - I{:- ,.g.k); I~XJ.~ = $,(I~!:,+I.k - IC-I.k): iW3qk -- @l’.,.k tl - tt,.J: A” II. = i,l2 (+[u~,,,] - of.,.,): Ah7r - i;L(+;l,.k] - I:;~,~): where +i4t,.kl = :;(Cl.3.k + 4tJ +l.k + C I.3.k + $J ,.J ;d Wt,.A = f(L,,k + C’.,,,., 7 \I’, I.J.k + 9.,-L,). 0 1 1 PI t ICI I >ioxim,itions ;lrc po\siblc: for example. tho\c suggc$rcd hq t lorn ,md Schunck [IO] Mhich. ho\+ c\ cr. rcquirc o\ cr four time\ the cornput,ition per iteration. GI\CII d! n,lmic images o\cr at least Utrcc frxncs. ;t s!mmctric central dilrcrcncc formul:l [IIljff,,k = ,:, (llt,,,k+, - II!‘,.,- l) i3 prcfcr,iblc. Substituting the iih(lVC approxilnations into the t-Iulcr-I .agrangc equations and solving for II:,,, and \!:,*I, yields the following Jacobi rclaxJtion formula t%;j,k 316 where pf,j,k = ([II~]!‘~,~)~ + ([/Iyjp,j,k)2 + +n2 and h = [nz],hj,k@[l]lhj k ] + [r~,]fj k~[vthj k] + i&]l”.,,ke Al?propriate ;%idary conhitions ‘a& the &t&al b;&ndary conditions of zero normal derivative at the boundary of fl. They can bc enforced by copying ~;rlucs to boundary nodes from neighboring interior nodes. A four lcvcl optical flow algorithm (with grid sizes 129 x 129, 65 x 65. 33 x 33, and 17 x 17) was tested on a s]vnthctically-gencratcd image of a Lambcrtian sphere expanding uniformly over two frames (Figure 5). l’hc velocity field was specified around the occluding boundary of the sphere, and the compurarion was started from the zero initial approximation, u = 2) -_ 0 within the sphere. The occluding boundary itself was marked as a velocity field discontinuity. The solution computed on the three coarsest lcvcls after 4.938 work units is shown in Figure 6 as velocity vectors in the 2-y plane. ‘Ihe total number of iterations pcrfonncd on each level from coarsest to finest rcspcctivcly is 40, 5, 4, and 3. In comparison, a single-level algorithm rcquircd 37 work units to obtain a solution of the same accuracy at the finest level in isolation. The comrncnts about the convergence speed of the shrlpc-from-shading algorithm apply here also. Employing the Horn-Schunck relaxation formulas, Glarcr [6] also reports improvements in the convcrgcncc rate of a multilevel optical flow algorithm relative to a single lc\cl algorithm. 6. CONCLUSION Once discrctizcd, problems in machine vision posed as variational principlcls or partial diffcrcntial equations arc amcnablc to local support. parallel. and iterative solutions. lhc to the locality of the irerntivc process, howcvcr. these computations arc inhcrcntly inctficicnt at propagating constraints over the large rcprcscntations [J plc,rll! encountcrcd. Mtilt~r~\olurion procc4sing c‘111 o~c~~con~c this inclficicncl I~! exploiting CO‘II-scr ~cprcccnt,ltlolls which tr,idc off Ic~olilticu~ for direct inlcraction4 o\cr I,ircc’r di\tancc\,. ,Ac \v~I~ chown 111 0111’ prcl iou4 ~l[~~~liCiltlOlll* 10 lhc wrfdcc recon\lruclion prc)blcm [IA- IO] .~nd. in [Ill\ p~pcr. to lhc ligh[nc44. ~lli~pc-fi~)~~l-sh;ldin~. and opr~l tlow prol~lcms. dr,nn;ltlc incrc,lsch in c‘flicicnc> c,~n rrsult. I’4ing our ~Ip~~[.OilCll. it is clc,~rI! po~41blc to dc\clop niulti- rc\olution Ilcr;iti\c ,Ilgorithm\ for other \l\ion prohlcms, including imG1pc rcgisrration 111. interpolating the motion field ciLhcr along contoun 181 or o\ cr regions. computing 31~1~ I‘rom-contour [J]. itnd liar hoI\ iiig itcr,rti\cl!, the strucltlrc-1’ic,nl-ulotion problem [I?]. In f;ict. 
,111) iter,itilc (rclijx,ltton) l~roccsscs \rhich \ccks global ct)n\istcncy, burr M hose proccs\ors ,~rc‘ rc~ti.icUXI to 4implc. local intcr,ictions can bcncfir from the appro:ich. most c\ idcntlq when it is go\crncd by a ~iiriilti~~n~ll pl inciplc or p‘irtial difTcrcntia1 equation. References 1. 7 L. 3. 4. 5 6. Bajcsy. It. nnd Broit. C.. “Matching of dcl’ormsd images.” Proc. Sixfh lnr. J. Cor$ f’urrrm Kecognirion. Munich. 1982. 351-353. Briidy, J.Rl., and Yuille, A., “An extremum principle for shape from contour.” IEEE Trans. Pat. Anal. Much. Intel. I’ARll-6, 1984, 288-301. Brandt, A., “hlulti-level ad:tplive solutions lo boundary-value prob- lems.” ~larh. Comp.. 31, 1977, 333-390. Courant, II., and flilberl, Il., Methods of ,Ilathematicul Physics, Vol. 1. Interscience, London, 1953. Forsythe, G.E., and Wasow, W.R., Finite D@erence Methodsfor Partial D$krential Equations, Wiley, New York, 1960. Glazer, F., “Multilevel relaxation in low-level computer vision,” Multiresolution image Processing and Analysis, A. Rosenfeld (ed.), Springer-Verlag, New York, 1984, 312-330. Hachbuscb, W., and Troltenberg, U., (ed.), Multigrid Methods Lecture Notes in Mathematics, Vol. 960, Springer-Verlag, New York, 1982. Hildreth, EC., Computations underlying the measurement of visual motion, MIT AI. Lab., Cambridge, MA, AI Memo No. 761, 1984. llorn, B.K.P., “Determining lightness from an image,” Compufer Graphics and Image Processing, 3: 1974, 111-299. Figure 5. Synthetic images of a Lambertian sphere at four resolutions illuminated by a dist:mi point source perpendicular to the image plane (top). The three smaller images are increasingly coarser sampled versions of the largest image which is 12!1 x I’,‘!) pixels, quantized to 256 irradiance levels. The fr:lmcs for the second time inshint (bottom) show an expanded sphere. ,/,,I1 ,\,,, Figure 6. The velocity field computed by the multiresolution opticnl flow 3lgoritim after 4.938 work units is shown at the Three coarsest resolutions (the finest level solution is too dense to plot here). - -- 10. 11. 1’. 13. 14. 15. 16. 17. I lorn, !I K. t’.. and Schunc~, l&G., “Determining optical flow,” Artificial in~dl~~enc.e. 17. 1% 1, 185-203. Ikuchi. h.. and llorn, B.K.P.. “Numerical shape from shading and occludmg bound;lries.” Artificial Intelligence, 17, 1981, 141-184. l,and, ILlI.. and RlcCann. J.J., “Lightness and retinex theory,” J. Upz. Sot. Amer.. 61. 1971. l-11. Strilllg. G.. and Fia, C;.J., An Analysis of the Finite Element Merhod, Pr-entice-tlall, L nglewoods Cliffs, NJ, 1973. Tcrqmulos. I)., M~~l~ile~cl reconslruction of visual surfaces: Variational princlplcs ,Ind litme elcmen[ reprebcntalions. MIT A.I. Lab.. Cambridge, MA. 1082. Al Memo No. 671. reprinted in Multiresolution Image Prowwing and A~,I/JY~~. A. Roscnfeld (ed.). Springer-Verlag. New Yorh. 1984, 237-310. I‘er/opoulos, D.. “Multilevel computiLional processes for visual surface reconslrucuon.” Computer Vision. Graphics and image Processing, 24, 1983a. 52-96. Terlopoulos. D.. “The role of constraints and discontiiuities in vbiblz-surface reconstruction,” Proc. 8’j’ Int. J. Con/: AI, Karlsruhe, W. Gemmany. 1983b, 1073-1077. Ullmnn, S., The Interpretation of Visual Motion, MIT Press, Cambridge, MA, 1979a 317
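Returning to the optical-flow scheme of Section 5: the relaxation formula itself is garbled at the page break above, so the sketch below uses the standard Horn-Schunck update of [10], which is consistent with the Euler-Lagrange equations and the accompanying denominator clause. The derivative estimates and the periodic boundary treatment via np.roll are simplifications made here for brevity.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=10.0, n_iter=100):
    """Horn-Schunck optical flow between two frames.
    The four-neighbour average plays the role of the local mean in the
    relaxation formula; derivatives use simple finite differences."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    den = alpha ** 2 + Ix ** 2 + Iy ** 2
    for _ in range(n_iter):
        ub = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                     np.roll(u, 1, 1) + np.roll(u, -1, 1))
        vb = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                     np.roll(v, 1, 1) + np.roll(v, -1, 1))
        lam = Ix * ub + Iy * vb + It
        u = ub - Ix * lam / den
        v = vb - Iy * lam / den
    return u, v
```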
Jon A. Wcbbt and Edward Pervint +Dfy2rtmc~ut of Computer Scicuce Carnegie-Mellon University, Pittsburgh, PA 15217 tPerq Systems Corporation Pittsburgh, PA 15217 ABSTRACT We develop a theoretical framework for interpolating vi- sual contours and apply it to subjective contours. The theory is based on the idea of consistency: a curve fitting algorithm must give consistent answers when presented with more data consistent with its hypothesis, or the same data under dif- ferent conditions. Using this assumption, we prove that the subjective contour through two point-tangents is a parabola. iye extend the theory to include multiple point-tartgents and points. Sample output of programs implementing the theory is provided. I. INTRODUCTION Subjective contours are curves filled in by the visual sys- tem in the absence of an explicit curve. An example is shown in Figure 1. These curves are relevant to computer vision and graphics, because it is often necessary to fill in missing curves in these fields. If we understand how the human visual system does this, we should be able to program computers to do it. For this reason, subjective contours have received much atter,t,ion in computer vision. However, this research has not always been generally useful, because to apply a human visual a lgorithl[l to computer vision we must understand more than just the a.!gorithm: we must understand the assumptions on which the algorithm is based, and these assumptions must be pyecisc and referred to the external world, not to other parts of the visual system. Otherwise we would not be able to tell if the algorithm could be expected to work in a specific situation, or we would have to implement large parts of the human visual system to use a single algorithm. Figure 1. (a) The K anizsa triangle [l]. (b) Subjective contour created by our method. * This research was sponsored by the Defense Advnnced Research Projects Agency (DOD) under ARPA Order No. 3597, and monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and co~clu- sions in this document are those of the authors and should not be interprested as representing the oficial policies, ei- ther expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government. We will show how a theory of subjective contours can be derived from a few simple assumptions that are referred to the external world. To do this we will start with a few assumptions (principally, that subjective contours arc the projections of un- seen occluding contours) and show mathemalically how this leads to a unique shape for the subjective contour. Knowing the shape we can write programs to draw subjective contours. Because our shape is unique, we have shown that any visual system - human or computer - that makes the same as- sulnptions about contours as we have must produce contours with the same shape. II. FUNDAMENTAL ASSUMPTIONS. Consider a visual process that fills in missing contours by connecting scattered dat,a. We assume that this process assumes that the contours it fills in are unseen occluding contours, If we were designing the best possible such process, what might it try to do? The process cannot always fill in the correct contour because the correct contour is not known, <and cannot be pre- dicted. But we will show that the process can bc consistent: it can give the same answer when presented with more data consistent with what it has seen before, or data equivalent to what it has seen before. 
Consistency is important in perception, just as in reason- ing. A consistent process can be relied upon to accumulate fragments of data into a complete whole, just as in science we construct a theory by accumulating evidence. An incon- sistent process, on the other hand, might arbitrarily change its conclusion as the result of more evidence. We will now show how consistency leads to specific be- havior, which is describable mathematically, and which we can expect from a contour-fitting algorithm. III. CONSEQUENCEs OF CONSISTENCY The first step in deriving the consequences of consistency is to define it; we can do this as follows. There is a class of transformations on image contours which do not affect the contour seen, but rather represent a chauge in the position of the viewer: for exampl tl, the contour can be rotated, or more of the same contour can be seen. Suppose that the contour- fitting algorithm produces a contour’ given some scattered data, then we transform the scztteretl data b;/ one of the transformations above. and wz present the transformed data to the contour-fit?ing algorithm. !;\‘t: will say the algorithm is consistent if it protluces the same contour it did originally, but subjected to the transformnbion. We will now describe the class of transformations men- tioned above. There are two kinds of transformations: one where information is added or replaced, and another where 340 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. a set of information is subjected to some visual distortion. The first kind of transformation leads to a criterion called extensi6ility. A. Extensibility The principle of extensibility is that adding redv7Ldant information to the contour should not afiect its shape. &ten- sibility is desirable in vision because it makes it possible to deal with an image in which new information is being discov- ered (perhaps through the action of an on-line visual process). AS more information is added to an image, the contours fit through scattered data will change only if the new information is not redundant. IJespitc this, most algorithms previously proposed for interpolating subjective contours (3,1,5,6] xe not extensible. Extensibility has been rejected primarily because it appears to conllict kvith other dcsiral~le criteria: for esnmplc, fitting a snlootll contorlr throngll tlic data. I3ut wc will show that the contours produced by the extensible algorithm described here are quite smooth and look rrasonnble. In order to show this, we must precisely define what we mean by extensibility. To define cxtensbility precisely, we must introduce some notation. WC treat any contour-fitting algorithm as a function S(m) mapping from a tuple of contour data (Al, AZ,. . . , A,) to a contour S (Al, . . . , An). The contour S(A1,. . . , A,) passes through the A; in sequence. Each A, may indicate either that the contour passes through a point, or that the contour passes through a point with a signed direction. In the second case the Ai is referred to ZLS a point-tangent, which is a pair of vectors (P, P’). In this pair, P’ is a unit tangent vector to the contour at P, and is called the head of the point-tangent. The tail of the point-tangent is -P’. The contour passes through each A; in the same direction. A subjective contour may be closed or open; if it is closed, we have Al = A,. We writePE S(A1,...,A,)ifS(A1,...,A,)passesthroughthe point or point-tangent P. 
If Y precedes Q on the contour (in the sense of the contour direction as defined above) we write P < Q. 341 These definitions make it possible to state extensibility precisely, as an axiom that we assume true of our contour- fitting algorithm: 3.1. Axiom: Extensibility. IfX E S(A1,...,A,,) is a point on a subjective contour such that A, < X < Ai+l then S(Al,Az,...,A;,X,Ai+l,..., A,) = S(h,... 4,) A similur condition applies if X < Al or X > A,. Extensibility makes it possible to add data to a contour, but we would also like to be able to replace data on the contour, under certain conditions. We will call this criterion point replacement. B. Point replacement The ideal contour-Gtting algorithm would be able to tol- erate arbitrary replacement of data on the contour by other data, but this is too strong a condition to require; if we al- lowed this, we could move all the data to bc nearly adjacent to one point, and it would be unreasonable to expect that we would still get the same subject,ive contour. We can restrict this criterion while still making it meaningful by allowing only points at the end of the contour to be movable. We can state this restriction as follows: 3.2. Axiom: Point replacement. IfR E S(A1,...,A,) is a point on a subl’ective contour such that R > AIL-l then S(Al,..., An-.-l, R) = S(A,, . . . , A,); similarly if R < AS. This completes the development of criteria dealing with the addition and replacement of data. Next we consider cri- teria arising from viewing transformations. C. Viewpoint independence When we shift our point of view we see the same real objects. To bc consistent, subjective contours should behave in this way. WC can state this principle of viewpoint inde- pendence 7~s follows. Ch,anging the point of view, creating a subjective contour, th.cn changing the point oj view back should produce the same contour as would be produced from the orig- inal point of view. We cannot satisfy this condition for all contours; but there is a nat,tlral sub-class of contours for which it is satisfi- able. We make three different restrictions on the real-world contours that project onto the subjective contours: we re- strict the shape of the object along the real-world contour, we restrict the real-world contour’s shape, and we restrict its relation to the viewer. In order to guarantee that the same portion of real con- tour is seen as we shift our point of view, we require the real contour to be generated by <an abrupt bend in the bound- ary of the object, rather than a smoothly turning boundary. Because the contour-fit,ting algorithm has no information on the shape of the contour in depth, we require the real-world contour to be planar. I?inally, because contours close to the viewer must be modelled using central or point projection, we require the contours to be distant from the viewer, so that we can use parallel or orthographic project, which is n~atl~cn~nticnlly more tractable. All of these assumptions are quite common in computer vision [7j, and they are also common in the real world. For example, leaves have abruptly changing contours that are often planar, and leaves are small enough so that when they are viewed parallel projection is a good model to use. Planarity, abrupt contours, and pnrallrl projection to- gether illlply t,hnt subjective contour algorithnl:j must be com- mntativc with aDYne trnnsformutions and translations. 
An afflne transformntion and translation is ;I linear trsnsforma- tion A: (2, y) t+ (ax -t by -t 7~: cx + dy -i- v) for some constants a, b, c, cl, u, and ZI. Afine trarlsfo’orrllntiorls include such things as rotations, skew distortions, and scale changes. Affine com- mutativity can be st,ated as follows; note that any viewpoint- indcpendcnt contour-fitting algorithm must satisfy this con- dition: 3.3. Affine comrnutativity. S(Pl,~2,...,K), For any subjective contour and any afine transformation and trans- lation A, S(P,,... > Pn) = A-‘[S (A(C), - ’ . , A(K))] we Now that we have prcciscly defined all of 0111’ restrictions, can derive the first important result of the paper. IV. SUBJECTIVE CONTOURS THROUGH TWO POINT-TANGENTS We will now prove that the subjective contour through two linked point-tangent,s is either a parabola or two straight lines, where two point-tangents are linked if a ray extended from the head of one intersects a ray extended from the tail of the other. The proof works by first. showing that any pair of point- tangents can be transformed into a certain configuration. Then we show that this configuration can be mapped into itself in a way that produces an infinite number of new points on the curve. All of these points lie on a conic, so that the curve must be a conic. Now from the conditions we have stated it follows that the subjective contour through two point-tangents is a four parameter curve; and since the only four parameter subclasses of the tonics are the parabolae and pairs of straight lines, these are the only curves that can be subjective contours. First tions: we need a lemma dealing with afflne transforma- 4.1. Lemma. For any two pair of non-degenerate linked point-tangents, P, Q and R, S there is an afine transjorma- tion and tran&ation Aff (P, Q; R, S) mapping P onto R and Q onto S. A pair of linked point-tangents A : (U,U’) and B = (V, V’) is non-degenerate if the triangle formed by U, V, and the intersection point of the head of A with the tail of B is non-degenerate. The proof of this is fairly straightforward and is omitted. In the proof of t,he next theorem, we assume the subjective contour is continuous and connected. Continuity is easy to show for visual contours; since we always see only discretely sampled data, a continuous contour could account for what is seen as well as any discontinous contour. Connectivity can be proved using continuity and our assumptions, but we omit the proof here. 4.2. Theorem. The subjective contour through any degenerute linked two point-tangents is a conic section. non- Proof. Since we can map any pair of non-degenerate linked point tangents onto a given pair, it is sufficient to deter- mine the subjective contour through a speciEc pair of linked point-tangents, and then to determine the subjective con’tour through any pair using affine commutntivity. Consider two symmetric point- tangents passing through (1, 1) and (-1,l). We distort them by an affine transformation that stretches parallel to the x-axis so that the subiective contour between them passes through (0,O). ‘~1 lis configuration is invariant to the affine transformation that flips about the y-axis; hence by afline commutativity the tangent at (0,O) is parallel to the x-axis; let it be (I, 0). Call the three point-tangents pro- duced irl this way 1’ = ((-1, l), (p, q)), Q T ((1, l), (p, -q)), and R E (0, O), (0,l)). Let the tarlgent from I’ intersect the x-axis at 1 -k,O), so that the tail of Q intersects the x-axis at (k, 0). 
Consider the afline transformation A = Aff (P, R; R, Q). There is no difficulty in writing A down in lerms of k; in homogenous coordinates [8] it is (+ y i) where t = i - 1. We will now show that A maps S (P, Q) onto itself. Since S(P,Q) = AiS(P,Q)j, it follows that S(P,Q) = An[S(P,Q)], f or any n. Now we will consider the effect of A when repeatedly applied to any point, say R. Since A maps the subjective contour onto itself, A”(R rz. Now we shall show that all the A”\R We do this by showing that there is a matrix C such that AtCA = C. If this is so, all the A”(R) must lie on the conic vtCv = RtCR, since (A”(R))%A”(R) = Rt(h”)‘CAnR = Rt (A+)TAnR - Rt(A+)‘m-‘CA”-lR = . -. - = Rvx The matrix C, in terms of homogcnous coordinates, is The conic that C generates is (t - l)y2 + 2y - (t + 1)~~ = 0. The reader can verify that AtCA = C. Now if all the A”(R) arc different, we have generated an infinite number of point-tangents that lie on S (P, Q) and which also lie on a conic. If S (Y, Q) is not the conic C and passes through all these point-tangents, it must have some inflection points. But inflection points are preserved under affine transformations, so we can map two tangents on S(P, Q) and around these inflection points onto Y and Q, which in- creases the number of iuflcction points between P :LII~ Q; SO the contour is not afline commutative. IJCIKC, if the A”(R) are all dif~crcnt, the contour is Ihe conic C. ( If the A”“(R) are not all different then A is a root of unity i.e., there is an n such that A” = I, the identity matrix). In this case there will be only a finite nunlbcr of points. We can use a special argument to take care of this case: map any other two tangents on S(P, Q) onto P and Q, and the point crossing the y-axis onto (0,O) as above. The new contour will either have the same tangents at P and Q as the old contour, or it will not. If it has the same tangents, we keep choosing points until we get different tangents; if we never do, WC have generated an infinite number of points that can be mapped by an affine transformation into the configuration Y, Q, II?; such a contour must be a conic, by an argument similar to the one above. If the tangents are not the same, we use continuity to show that there exist points giving a value of k for which A is not a root of unity, since the class of matrices of the form of A which are roots of unity is countable, and the class of points on the contour is uncountable. Hence the new contour will be a conic, which means the original contour is a conic, This completes the proof. 1 We have as a corollary: 4.3. Corollary. The subjective contour through two WWL- degenerate linked point-tangents is a parabola or two straight lines. Proof. The class of subjective contours is closed with re- spect, to afflnc transformnlions and translations by affine CQM- mutativity. Now we will show that no two elements of tie class of subjective contours through linked point-tangents ca+a intersect in two point-tangents. Suppose, to the contrary, that S(P,Q) f S(U,V) are subjective contours and that they intersect in two point-tangents X and Y. We cannrtt have S(X,Y) = S(P,Q) and S(X,Y) = out loss of generality assnme that S (X, point replacement S(U, V) = S(U, Y) = contradiction. 342 Hence the class of subjective contours through two point- tangents is closed with respect to saline transformations and has the property that no two contours from the class intersect in two point-tangents. 
There are oniy two subsets of the tonics that have these properties: the parabolae and pairs of lines. I The subjective contours most commonly observed are curved, so that they must be parabolae. Under special condi- tions straight lines may bc observed, however [4]. Parabolae have been suggested by Bookstein [!J] for curve interpolation. Subjective contours can be formed by point-tangents, point,s, or combinations of the two. So far we have only con- sidered the case of two linked point-tangcts. In the remainder of the paper, we consider subjective contours through other kinds of data. V. SUBJECTIVE CONTOURS THROUGH OTHER DATA We will now complete our theory of the shape of subjec- tive contours by considering subjective contours through non- linked point-tangents, multiple point-tangents, and points. Because of space limitations we will consider each case only briefly. We will note statements for which we have proofs, and statements which are only conjectures. First, we consider non-linked point-tangents. If the point- tangents are parallel, the subjective contour is provably two parallel lines passing through the point-tangents. If the point- tangents are not linked but not parallel the subjective con- tour is provably a “double parabola,” which consists of two parabola with the same shape, rotated so that they are tan- gent to each other and to the two point-tangents. Next we consider multiple point-tangents. There seem to be two approaches, though we have not proved there are only two. The subjective contour can be constructed locally, by fitting subjective contours through successive pairs of point- tangents, or it can bc constructed globally by fitting tonics through sucessive pairs of point-tangents, and requiring cur- vature continuity across the point-tangents. The second ap- proach has the advantage of producing circles as subjective contours in some situations, a condition favored by Knuth [2]. However, it is more difficult to implement. Both of these approaches provably are consistent. Finally, we consider subjective contours through points. W’e conjecture that there is only one approach. Parnbolae can be passed through every pair of points, and the curva- ture of the parabolae can be required to be continuous across the points. At the ends of the contour, we fit the parabola through the last three points on the contour, and require cur- vature continuity where it joins the rest of the contour. This (4 (4 Figure 2. (a) 0 ‘g ri inal data. (b) Interpolated contour. (c) Contour with added points (black rectangles). approach is provably consistent only with the first approach to multiple point-tangents. It is provably affinc commutative and allows point replacement, and we conjecture that it is ex- tensible on the basis of demonstrations like the one in Figure 2. All of the above applies only to curved subjective con- tours. The theory for straight subjective contours (consisting of intersecting straight lines) is much easier to work out. We can move that all that is necessary is to extend point-tangents until they intersect, and to join points with straight lines. But this kind of contour-fitting is uninteresting. We conjecture from the above discussion that curved subjective contours always consist of parnbolae, with single or double parabolae fit between point-tangents, and curvature- continuous parabolae Et through points. and PI PI PI PI PI PI PI PI PI ACKNOWLEDGMENTS This work benefitted from discussions with TLakeo Kanade Geoff Hinton. 
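A practical way to draw the contour that Corollary 4.3 guarantees, not spelled out in the text but consistent with it, is the quadratic Bezier construction: the Bezier arc whose middle control point is the intersection of the head ray of the first point-tangent with the tail ray of the second is a parabolic arc tangent to the given directions at the given points, and since the parabolae form a four-parameter class matched by the four constraints, it is the contour in question. Points and tangents below are assumed to be length-2 NumPy arrays.

```python
import numpy as np

def subjective_parabola(P, Q, n=100):
    """Sample the parabolic contour through linked point-tangents P and Q,
    each a (point, unit_tangent) pair. The arc is the quadratic Bezier
    B(t) = (1-t)^2 p0 + 2t(1-t) C + t^2 p1, with C the intersection of the
    head ray of P and the tail ray of Q."""
    (p0, d0), (p1, d1) = P, Q
    # Solve p0 + s*d0 = p1 - r*d1 for s, r (both positive when the pair is linked).
    A = np.column_stack([d0, d1])
    s, r = np.linalg.solve(A, p1 - p0)
    C = p0 + s * d0
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * t * (1 - t) * C + t ** 2 * p1
```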
REFERENCES Kanizsa, G., “Subjective contours,” Scientific American, 234, (4), 48-52. 1976. Knuth, D. E., “Mathematical typography,” Bull. Amer. Math. Sot. (new series), 1, 337-372. 1979. Brady, M. and W. E. L. Grimson, “Shape encoding and subjective contours,” Proceedings of the First Annual Na- tional Conference on Artificial Intelligence, The Ameri- can Association for Artificial Intelligence, Stanford, CA, 15-17. 1979. Ullman, S., “Filling-in-the-gaps: the shape of subjective contours and a model for their completion,” Biological Cybernetics, 1, (6). 1976. Brady, M. and W. E. L. Grimson, “The perception of subjective surfaces ,” A. I. Memo 666, Artificial intelli- gence Laboratory, Mass. Inst. Tech., Cambridge, MA. 1981. Rutkowski, W. S., “Shape completion,” Computer Graph- ics and Image Processing, 9, 89-101. 1979. Stevens, K. A., “The visual interpretation of surface con- tours,” Artificial Intelligence, 17, 47-73. 1981. Newman, W. M., and R. F. Sproull, Principles of inter- active computer gru.phics, Second edition, McGraw-Hill. 1979. Bookstein, L., ‘Closing gaps, and gaps with a stepping stone, by means of parabolas,” Computer Graphics and Image Processing, 10, 372-374. 1979. 343
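Referring back to Lemma 4.1 above, its construction has a direct computational reading: the affine map is pinned down by sending the non-degenerate triangle formed by the two points and the intersection of the head ray of one point-tangent with the tail ray of the other onto the corresponding triangle of the target pair. The sketch below assumes point-tangents are (point, unit_tangent) pairs of NumPy arrays; the representation and helper names are choices made here.

```python
import numpy as np

def ray_intersection(p, dp, q, dq):
    """Intersection of the line through p with direction dp and the line
    through q with direction dq (the linked condition makes the rays meet)."""
    A = np.column_stack([dp, -dq])
    s, _ = np.linalg.solve(A, q - p)
    return p + s * dp

def aff_between(P, Q, R, S):
    """Affine map x -> M x + t taking point-tangent P onto R and Q onto S,
    fixed by the triangle (P.pt, Q.pt, X) -> (R.pt, S.pt, X'),
    X being the head-of-P / tail-of-Q intersection point."""
    (pu, du), (pv, dv) = P, Q
    (ru, eu), (rv, ev) = R, S
    x_src = ray_intersection(pu, du, pv, -dv)
    x_dst = ray_intersection(ru, eu, rv, -ev)
    src = np.array([pu, pv, x_src])
    dst = np.array([ru, rv, x_dst])
    A = np.hstack([src, np.ones((3, 1))])   # three point correspondences
    W = np.linalg.solve(A, dst)             # six affine parameters
    M = W[:2, :].T
    t = W[2, :]
    return M, t
```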
FINGERPRINTS THEOREMS A.L. Yuille and T. Poggio Artificial Intelligence Laboratory*Massachusetts institute of Technologymcambridge, Mass. Abstract. We prove that the scale map of the .zero-crossings of almost all signals filtered by a gaussian of variable size deter- mines the signal uniquely, up to a constant scaling. Exceptions are signals that are antisymmetric about all their zeros (for in- stance infinitely periodic gratings). Our proof provides a method for reconstructing almost all signals from knowledge of how the zero-crossing contours of the signal. filtered by a gaussian filter, change with the size of the filter. The proof assumes that the filtered signal can be represented as a polynomial of finite, albeit possibly very high, order. The result applies to zero- and level- crossings of signals filtered by gaussian filters. The theorem is extended to two dimensions, that is to images. These results imply that extrema (for instance of derivatives) at different scales are a complete representation of a signal. 1. introduction Images are often described in terms of “edges”, that are usually associated with the zeros -of some differential operator. For in- stance, zero-crossings in images convolved with the laplacian of a gaussian have been extensively used as the basis repre- sentation for later processes such as stereopsls and motion (Marr, 1982). In a similar way. sophisticated processing of 1-D signals requires that a symbolic descnption must first be ob- tained, in terms of changes in the signal. These descriptions must be concise and. at the same time, they must capture the meaningful information contained In the signal. It is clearly im- portant, therefore, to charactenze in which sense the information in an image or a signal is captured by extrema of derivatives. Ideally, one would like to establish a unique correspondence between the changes of intensity in the image and the physical surfaces and edges which generate them through the imaging process. This goal is extremely difficult to achieve in general, al- though it remains one of the pnmary objectives of a comprehen- sive theory of early visual processing. A more restricted class of results, that does not exploit the constraints dictated by the signal or image generation process, has been suggested by work on zero-crossings of images filtered with the laplacian of a gaussian. Logan (1977) had shown that the zero-crossings of a 1-D signal ideally bandpass with a bandwidth of less than an octave determine uniquely the filtered signal (up to scaling). The theorem has been extended-only in the special case of oriented bandpass filters-to 2-D images (Poggio, et al., 1982; Marr, et al., 1979) but it cannot be used for gaussian filtered signals or images, since they are not ideally ~-_ SuPPOrt for work done at the Artificial intelligence Laboratory of the Massachusetts lnstltute of Technology is provided In part by the Advanced Research Projects Agency of the Department of Defense un- der OffICe Of Naval Research contract N@0014-E?O-C-0505. Arjdi{ionally, T. P. was partially supported under Air Force Office of Sponsor& Research contract F49620-83-C and a grant from the Sloan Foundation. bandpass. Nevertheless. Marr et al. 
(1979) conjectured that the zero-crossing maps, obtained by filtering the image with the second derivative of gaussians of variable size, are very rich in information about the signal itself (see also Marr and Poggio, 1977; Grimson, 1981; Marr and Hildreth, 1980; Marr, 1982; for multiscale representations see also Crowley, 1982 and Rosenfeld, 1982, also for more references).

More recently, Witkin (1983) (see also Stansfield, 1980) introduced a scale-space description of zero-crossings, which gives the position of the zero-crossings across a continuum of scales, i.e., sizes of the gaussian filter (parameterized by the σ of the gaussian). The signal, or the result of applying a linear (differential) operator to the signal, is convolved with a gaussian filter over a continuum of sizes of the filter. Zero- or level-crossings of the (filtered) signal are contours on the x-σ plane (and surfaces in the x, y, σ space). The appearance of the scale map of the zero-crossings is suggestive of a fingerprint. Witkin has proposed that this concise map can be effectively used to obtain a rich and qualitative description of the signal. Furthermore, it has been proved in 1-D (Babaud et al., 1983; Yuille and Poggio, 1983a) and 2-D (Yuille and Poggio, 1983a) that the gaussian filter is the only filter with a "nice" scaling behavior, i.e., a simple behavior of zero-crossings across scales, with several attractive properties for further processing. (J. Koenderink, pers. comm., 1984, has now obtained similar results exploiting properties of the diffusion equation.)

In this paper, we prove a stronger completeness property: the map of the zero-crossings across scales determines the signal uniquely for almost all signals (in the absence of noise). The scale maps obtained by gaussian filters are true fingerprints of the signal. Our proof is constructive. It shows how the original signal can be reconstructed from information on the zero-crossing contours across scales. It is important to emphasize that our result applies to level-crossings of any arbitrary linear (differential) operator of the gaussian, since it applies to functions that obey the diffusion equation. These results were originally reported in Yuille and Poggio (1983b). The proof is constructive and applies in both 1-D and 2-D.

Reconstruction of the signal is of course not the goal of early signal processing. Symbolic primitives must be extracted from the signals and used for later processing. Our results imply that scale-space fingerprints are complete primitives that capture the whole information in the signal and characterize it uniquely. Subsequent processes can therefore work on this more compact representation instead of the original signal.

Our results have theoretical interest in that they answer the question of what information is conveyed by the zero- and level-crossings of multiscale gaussian-filtered signals. From the point of view of applications, the results in themselves do not justify the use of the fingerprint representation. Completeness of a representation (connected with Nishihara's sensitivity) is not sufficient (Nishihara, 1981). A good representation must, in addition, be robust (i.e., stable in Nishihara's terms) against photometric and geometric distortions (the general point of view argument). It should also possibly be compact for the given class of signals. Most importantly, it should make explicit the information that is required by later processes.
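To make the scale-space construction described above concrete, the following minimal Python sketch computes a discrete approximation of the x-σ fingerprint: the signal is convolved with second derivatives of gaussians over a range of σ, and the sign changes of each filtered signal are recorded. The toy signal, helper names, and parameter values are ours for illustration only; this is not code from the paper.

import numpy as np

def gaussian_xx(x, sigma):
    # second derivative (in x) of a unit-area gaussian of standard deviation sigma
    g = np.exp(-x * x / (2.0 * sigma * sigma)) / (np.sqrt(2.0 * np.pi) * sigma)
    return g * (x * x / sigma**4 - 1.0 / sigma**2)

def fingerprint(signal, dx, sigmas):
    """For each sigma, convolve the signal with G_xx(., sigma) and mark the sample
    intervals where the filtered signal changes sign (its zero-crossings)."""
    rows = []
    for s in sigmas:
        hw = int(4.0 * s / dx)                          # kernel half-width of about 4 sigma
        x = np.arange(-hw, hw + 1) * dx
        kernel = gaussian_xx(x, s) * dx
        e = np.convolve(signal, kernel, mode="same")    # E(x, sigma) = I * G_xx
        rows.append(e[:-1] * e[1:] < 0.0)
    return np.array(rows)

dx = 0.01
xs = np.arange(0.0, 20.0, dx)
I = np.exp(-(xs - 10.0) ** 2) + 0.3 * np.sin(3.0 * xs)  # invented toy signal
sigmas = np.linspace(0.05, 1.5, 80)
F = fingerprint(I, dx, sigmas)
print(F.shape)   # one boolean row per scale

Stacking the rows gives a discrete version of the zero-crossing contours in the x-σ plane; the theorems below state that the behavior of these contours determines the signal itself (up to scaling) for almost all signals.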
2. Assumptions and results

We consider the zero-crossings of a signal I(x), space-scale filtered with the second derivative of a gaussian, as a function of x and σ. Let E be defined by

E(x,σ) = I ∗ G_xx(x,σ) = I(x) ∗ ∂²/∂x² G(x,σ) = ∫ I(ξ) ∂²/∂x² [ (1/(√(2π) σ)) exp(−(x−ξ)²/2σ²) ] dξ.    [2.1]

Notice that E(x,σ) obeys the diffusion equation in x and σ:

∂²E/∂x² = (1/σ) ∂E/∂σ.    [2.2]

We restrict ourselves to images, or signals, of class P such that E can be expressed as a finite Taylor series of arbitrarily high order and such that E is not antisymmetric about all its zeros. Observe that any filtered image can be approximated arbitrarily well in this way, because of the classical Weierstrass approximation theorem, except for those functions antisymmetric about all their zeros. This class of functions is discussed in detail in a forthcoming paper (Yuille and Poggio, 1984a), where it is shown that additional information about the gradient of the function on the zero-crossings is sufficient to determine the signal. Note that, for a finite order polynomial, functions antisymmetric about all their zeros have only one zero-crossing contour. We will show that the local behavior of the zero-crossing curves (defined by E(x,σ) = 0) on the x-σ plane determines the image. Our reconstruction scheme provides the image I in terms of Hermite polynomials. The proof of this result can be generalized to 2-D and extended to zero- and level-crossings of linear (differential) operators. More precisely we have proven the following theorem:

Theorem 1: The derivatives (including the zero-order derivative) of the zero-crossing contours defined by E(x,σ) = 0, at two distinct points at the same scale, uniquely determine a signal of class P up to a constant scaling (except on a set of measure zero).

Note that the theorem does not apply to signals that do not have at least two distinct zero-crossing contours. Yuille and Poggio (1983b) have extended Theorem 1 to the two dimensional case:

Theorem 2: Derivatives of the zero-crossing contours, defined by E(x,y,σ) = 0, at two distinct points at the same scale, uniquely determine an image of class P up to a scaling factor (except on a set of measure zero).

These theorems break down when all the zero-crossing contours are independent of scale (i.e. the contours go straight up in the scale-space fingerprint). This is a rare, though interesting, special case and is discussed in detail in a future paper (Yuille and Poggio, 1984a). It can only occur for functions which are antisymmetric about all their zeros, such as sinusoidal functions, and for odd polynomials with only one real zero.

The theorems do not directly address the stability of this reconstruction scheme. The first question concerns stability of the reconstruction of the filtered function E(x,σ) at the σ where the derivatives are taken. Note that our result relies only on the coefficients of the expansion of I(x) = E(x,0) in the functions φ_n introduced below, obtained from derivatives at two points on the zero-crossing contours. Exploitation of the whole zero-crossing contours should make the reconstruction considerably more robust. The second question is about the stability of the recovery of the unfiltered signal I(x) from E(x,σ). This is equivalent to inverting the diffusion equation, which is numerically unstable since it is a classically ill-posed problem. Reconstruction is, however, possible with an error depending on the signal to noise behavior (see Yuille and Poggio, 1983b).

2.1. Outline of the 1-D Proof

We summarize here the 1-D proof from a slightly different point of view that clarifies its bare structure.

The proof starts by taking derivatives along the zero-crossing contours at a certain point. Such derivatives split into combinations of x and t derivatives (where t = σ²/2). Because the filter is assumed to be gaussian, however, t derivatives can be expressed in terms of x derivatives. This is a key point: since the filtered signal E(x,t) satisfies the diffusion equation, the t derivatives can be expressed in terms of the x derivatives simply by E_t = E_xx. The next stage is to find the x derivatives of E(x,t) up to an arbitrary degree n from such derivatives along the zero-crossing contours in the x-t plane. We show that this can be done by using 2 points on 2 contours. (It is possible that one point is sufficient, but we are as yet unable to prove this.) Since E(x,t) is entire analytic, because of the gaussian filtering, it can be represented by a Taylor series expansion in x. Since we know the values of the n derivatives of E(x,t) with respect to x, we know its Taylor series expansion and hence E(x,t).

The unfiltered signal I(x) (where E(x,t) = I(x) ∗ G(x,t)) can then be recovered in the ideal noiseless case by deblurring the gaussian. A particularly simple way of doing this is provided by a property of the functions φ_n in which we will expand the function E: the coefficients of an expansion of I(x) in terms of φ_n are equal to the coefficients of the Taylor series expansion of E(x,t). In the presence of noise, the recovery of I(x) from E(x,t) is obviously unstable, since it is a classically ill-posed problem. It is limited by the S/N ratio, since high spatial frequencies in the signal are masked by the noise for increasing t. (For instance, if I(x) = Σ aₙ e^(inx), the filtered signal is E(x,t) = Σ aₙ e^(inx) e^(−n²t).) Note that since the zero-crossing contours are available at all scales, a reconstruction scheme that exploits more than 2 points will be significantly more robust. As one would expect, the reconstruction of the unfiltered signal is therefore affected by noise. The reconstruction of the filtered signal E(x,t) is likely to be considerably more robust. We plan to study theoretically and with computer simulations the noise sensitivity of the reconstruction scheme.

3. Proof of the Theorem in 1-D

Our proof can be divided into three main steps. The first shows that derivatives at a point on a zero-crossing contour put strong constraints on the "moments" of the Fourier transform of E(x,σ) (see eq. 3.1.4). The second relates the "moments" to coefficients related to the Hermite polynomials. Finally, the "moments" can be uniquely determined by the derivatives at a second point on a different zero-crossing contour. We outline here only the first part of the proof, which is given in full in Yuille and Poggio (1983b).

3.1. The "moments" of the signal are constrained by the zero-crossing contours

Let the Fourier transform of the signal I(x) be Î(ω) and the gaussian filter be G(x,σ) = (1/(√(2π) σ)) e^(−x²/2σ²), with Fourier transform Ĝ(ω) = e^(−σ²ω²/2). The zero-crossings are given by solutions of E(x,t) = 0. Using the convolution theorem we can express E(x,t) as

E(x,t) = ∫ e^(−ω²t) e^(iωx) Î(ω) dω,    [3.1.1]

with t = σ²/2. The Implicit Function theorem gives curves x(t) which are C^∞ (this is a property of the gaussian filter and of the diffusion equation, see Yuille and Poggio, 1983a,b). Let ε be a parameter of the zero-crossing curve. Then

dE/dε = (∂E/∂x)(dx/dε) + (∂E/∂t)(dt/dε).    [3.1.2]

On the zero-crossing surface, E = 0 and dⁿE/dεⁿ = 0 for all integers n. Knowledge of the zero-crossing curve is equivalent to knowledge of all the derivatives of x and t with respect to ε.

We compute the derivatives of E with respect to ε at (x₀, t₀). The first derivative is

dE/dε = (dx/dε) ∫ e^(−ω²t) e^(iωx) (iω) Î(ω) dω + (dt/dε) ∫ e^(−ω²t) (−ω²) e^(iωx) Î(ω) dω    [3.1.3]

and is expressed in terms of the first and second moments of the function e^(−ω²t) e^(iωx) Î(ω). The moment of order n is defined by

M_n = ∫ (iω)ⁿ e^(−ω²t₀) e^(iωx₀) Î(ω) dω.    [3.1.4]

The second derivative is

d²E/dε² = (d²x/dε²) ∫ e^(−ω²t) e^(iωx) (iω) Î(ω) dω + (d²t/dε²) ∫ e^(−ω²t) (−ω²) e^(iωx) Î(ω) dω + (dx/dε)² ∫ e^(−ω²t) e^(iωx) (−ω²) Î(ω) dω + 2 (dx/dε)(dt/dε) ∫ e^(−ω²t) (−ω²) e^(iωx) (iω) Î(ω) dω + (dt/dε)² ∫ e^(−ω²t) (ω⁴) e^(iωx) Î(ω) dω.    [3.1.5]

Since the parametric derivatives along the zero-crossing curve are zero, equation [3.1.3] is a homogeneous linear equation in the first two moments. Similarly, [3.1.5] is a homogeneous linear equation in the first four moments. In general, the nth equation, dⁿE(x,t)/dεⁿ = 0, is a homogeneous equation in the first 2n moments. We choose our axes such that x₀ = 0. We can then show that the moments of e^(−ω²t) Î(ω) are the coefficients aₙ in the expression of the function I(x) in Hermite polynomials. So we have n equations in the first 2n coefficients aₙ. To determine the aₙ uniquely, we need n additional and independent equations, which can be provided by considering a neighboring zero-crossing curve at (x₁, t₀) (see Yuille and Poggio, 1983b).
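The argument above leans on the fact that gaussian-filtered signals obey the diffusion equation, so that t derivatives reduce to x derivatives. The short Python check below makes this concrete by verifying E_t ≈ E_xx numerically in the t = σ²/2 parameterization; the toy signal, step sizes, and helper names are ours, and the sketch is only an illustration, not part of the authors' reconstruction scheme.

import numpy as np

def gaussian(x, t):
    # heat kernel: gaussian written with the "diffusion time" t = sigma^2 / 2
    return np.exp(-x * x / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def filtered(signal, dx, t):
    hw = int(6.0 * np.sqrt(2.0 * t) / dx)
    x = np.arange(-hw, hw + 1) * dx
    return np.convolve(signal, gaussian(x, t) * dx, mode="same")

dx, dt, t0 = 0.01, 1e-4, 0.5
xs = np.arange(0.0, 20.0, dx)
I = np.exp(-(xs - 8.0) ** 2) + 0.5 * np.cos(2.0 * xs)     # invented toy signal

E0 = filtered(I, dx, t0)
E1 = filtered(I, dx, t0 + dt)
Et = (E1 - E0) / dt                                        # dE/dt by finite differences
Exx = np.gradient(np.gradient(E0, dx), dx)                 # d2E/dx2 by finite differences

interior = slice(200, -200)                                # ignore boundary effects
print(np.max(np.abs(Et[interior] - Exx[interior])))        # small, limited by step sizes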
4. Conclusions

We conclude with a brief discussion of a few issues that are raised by this paper and that will require further work.

a) Stability of the reconstruction. Although we have not yet rigorously addressed the question of numerical stability of the whole reconstruction scheme, there seem to be various ways of designing a robust reconstruction scheme. The first step to consider is the reconstruction of the filtered signal E(x,t). One could exploit the derivatives at n points, at the given t, and then solve the resulting highly constrained linear equations with least squares methods. Alternatively, it may be possible to fit a smooth curve through several points on one contour, and then obtain the derivatives there in terms of this interpolated curve. The same process must be performed on a second, separate zero-crossing contour. This scheme provides a rigorous way of proving that, instead of derivatives at two points, the location of the whole zero-crossing contour across scales can be used directly to reconstruct the signal. The second step involves the reconstruction of the unfiltered signal I(x). This reconstruction step is unstable if only one scale is used, but it can be regularized and effectively performed in most situations, especially by using information from zero-crossings at smaller scales.

b) Degenerate fingerprints. Our uniqueness result applies to almost all signals: a restricted but well known class of signals, with vertical zero-crossings in the scale-space diagram, corresponds to nonunique fingerprints. These signals, which will be discussed in a forthcoming paper (Yuille and Poggio, 1984a), and which correspond to functions antisymmetric about all their zeros, do not belong to the class P introduced in Theorems 1 and 2. Interestingly, elements of this class can be distinguished by level-crossings (with a level different from zero) or by knowledge of the gradient (Yuille and Poggio, 1984a).

c) Extensions. Our main results apply to zero- and level-crossings of a signal filtered by a gaussian filter of variable size. They also apply to transformations of a signal under a linear space-invariant operator; in particular they apply to the linear derivatives of a signal and to linear combinations of them. In both 1-D and 2-D, local information at just two points is sufficient. In practice, since many derivatives are needed at each point, information about the whole contour to which the point belongs is in fact exploited.

d) Are the fingerprints redundant? The proof of our theorem implies that two points on the fingerprint contours are sufficient. As we mentioned earlier, several points are probably required to make the reconstruction robust and to ensure the avoidance of a non-generic pair of points. We conjecture, however, that the fingerprints are redundant and that appropriate constraints derived from the process underlying signal generation (the imaging process in the case of images) should be used to characterize how to collapse the fingerprints into more compact representations. Witkin (1983) has already made this point and discussed various heuristic ways to achieve this goal.

e) Implications of the results. As we discussed in the introduction, our results imply that the fingerprint representation is a complete representation of a signal or an image: zero- and level-crossings across scales of a filtered signal capture full information about it. These results also suggest a central role for the gaussian in multiscale filtering to assure that zero- and level-crossings indeed contain full information. Note, however, that the fingerprint theorems do not constrain or characterize in any way the differential filter that has to be used. The filter may be just the identity operator, provided of course that enough zero-crossing contours exist. Independent arguments, based on the constraints of the signal formation process, must be exploited to characterize a suitable filter for each class of signals. For images, second derivative operators such as the Laplacian are suggested by work that takes into account the physical properties of objects and of the imaging process (Grimson, 1983; Torre and Poggio, 1984; Yuille, 1983). We plan to explore this approach in the near future.

f) Zero-crossings and slopes. A natural question to ask is whether gradient information across scales at the zero-crossings, in addition to their location, can be used to reconstruct the original signal. Hummel (1984, pers. comm.) has recently shown that this is the case, as one would of course expect in the light of our results (Yuille and Poggio, 1983b; Yuille and Poggio, 1984a). We have been able to simplify and extend the elegant proof by Hummel and obtain the following result (Yuille and Poggio, 1984b): knowledge of the zero-crossing surfaces and the magnitude of the x-t gradient over a finite, nonzero interval of the zero-crossing surface is sufficient to determine the image.

Acknowledgments: We are grateful to E. Grimson, M. Kass, C. Koch, K. Nishihara and D.
Terzopoulos for useful discussions and suggestions. C. Bonomo typed this manuscript more than once.

REFERENCES

Babaud, J., Witkin, A. and Duda, R., "Uniqueness of the gaussian kernel for scale-space filtering," Fairchild TR 645, Flair 22, 1983.
Crowley, J. L., "A representation for visual information," CMU-RI-TR-82-7, Robotics Institute, Carnegie-Mellon University, 1982.
Grimson, W. E. L., From Images to Surfaces, MIT Press, Cambridge, Mass., 1981.
Grimson, W. E. L., "Surface consistency constraints in vision," Computer Vision, Graphics, and Image Processing, 24, 28-51, 1983.
Logan, B. F., "Information in the Zero Crossings of Bandpass Signals," Bell Sys. Tech. J., 56, 4, 487-510, 1977.
Marr, D., Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, W. H. Freeman & Co., San Francisco, 1982.
Marr, D. and Hildreth, E., "Theory of Edge Detection," Proc. R. Soc. Lond. B, 207, 187-217, 1980.
Marr, D., Poggio, T. and Ullman, S., "Bandpass channels, zero-crossings and early visual information processing," J. Opt. Soc., 70, 868-870, 1979.
Marr, D. and Poggio, T., "A computational theory of human stereo vision," Proc. R. Soc. Lond. B, 204, 301-328, 1979. Also M.I.T. A.I. Memo, October, 1977.
Nishihara, H. K., "Intensity, visible-surface, and volumetric representations," Artificial Intelligence, 17, 265-284, 1981.
Poggio, T., Nishihara, H. K. and Nielsen, K. R. K., "Zero-crossings and spatiotemporal interpolation in vision: aliasing and electrical coupling between sensors," MIT A.I. Memo 675, May, 1982.
Rosenfeld, A., "Quadtrees and Pyramids: hierarchical representation of images," TR 1171, University of Maryland, 1982.
Stansfield, J. L., "Conclusions from the commodity expert project," MIT Artificial Intelligence Lab Memo No. 601, 1980.
Torre, V. and Poggio, T., "On Edge Detection," MIT A.I. Memo 768, 1984.
Witkin, A., "Scale-Space Filtering," Proceedings of IJCAI, 1019-1021, Karlsruhe, 1983.
Yuille, A. L., "Zero-crossings and lines of curvature," MIT A.I. Memo 718, 1983.
Yuille, A. L. and Poggio, T., "Scaling Theorems for Zero-Crossings," MIT A.I. Memo 722, June, 1983a.
Yuille, A. L. and Poggio, T., "Fingerprints Theorems for Zero-Crossings," MIT A.I. Memo 730, October, 1983b.
Yuille, A. L. and Poggio, T., "Fingerprints and the Psychophysics of Gratings," MIT A.I. Memo 751, in preparation, 1984a.
Yuille, A. L. and Poggio, T., "Fingerprints and their slope," MIT A.I. Memo 752, in preparation, 1984b.
PERSONAL CONSTRUCT THEORY AND THE TRANSFER OF HUMAN EXPERTISE John H. Boose Boeing Computer Services, 7A-03, PO Box 24346, Seattle, Washington, 98124 Abstract. The bottleneck in the process of building exnert svstems is retrieving the appropriate problem- solving knowledge from tKe hum-in expert. -Methods of knowledge elicitation and analysis from psychotherapy based on enhancements to George Kelly’s Personal Construct Theory are applied to this process. The Expertise Transfer System is described which interviews a human expert and then constructs and analyzes the knowledge that the expert uses to solve his particular problem. The first version of the system elicits the initial knowledge needed to solve analysis problems without the intervention of a knowledge entineering: team. Fast (two hour) initial prototypyng auf expercsystems which run on KS- 300*,* (an extended version of EMYCIN) and OPS5 is also performed. Conflicts in the problem-solving methods of the expert may also be enumerated and explored. Index Key Words: artificial intelligence, expert systems, factor elicitation, knowledge acquisition, knowledge engineering, Personal Construct Theory. Introduction. An expert system is a computer system that uses the experience of one or more experts in some problem domain and applies their problem- solving expertise to make useful inferences for the user of the system (Waterman and Hayes-Roth, 1982). This knowledge consists largely of rules of thumb, or heuristics. Heuristics enable a human expert to make educated guesses when necessary, to recognize promising approaches to problems, and to deal effectively with incomplete or inconsistent data. Eliciting problem-solving knowledge from an expert is one of the critical problems in building expert. systems. A long series of interview, build, and test cycles are necessary before a system achieves expert performance. The time required to build an expert-level prototype is typically six to twenty-four months. Knowledge engineering is the process of acquiring knowledge and building an expert system (Feigenbaum, 1977). The goal in building the Expertise Transfer System (ETS) is to provide a tool set to significantly shorten the knowledge acquisition process, and *KS-300t, is a trademark Alto, California. of Teknowledge, Inc., of Palo improve the quality of the elicited problem-solving knowledge. To do this, ETS automatically interviews the expert, and helps construct and analyze an initial set of heuristics and parameters for the problem. Experts need no special training to use ETS; an initial fifteen minute explanation of the basic idea and how to use the workstation is usually all that is necessary. No initial knowledge base is necessary. ETS is capable of automatically producing KS- 300t, and OPS5 (Forgy, 1981) knowledge bases. KS- 300t, is an extended version of EMYCIN developed by Teknowledge, Incorporated, of Palo Alto, California. EMYCIN, an expert system building tool, was extracted from MYCIN, developed at Stanford University (Shortliffe, 1976; van Melle et al., 1981). In-depth analyses of ETS’s knowledge base listings and reports by both the expert and knowledge engineers provide problem-solving information prior to any human interviewing. Most of the initial interviewing process is eliminated, which is typically the most painful and time-consuming part of the knowledge acquisition process. First, aspects of Personal Construct methodology will be discussed which are relevant to the Expertise Transfer System. 
Then, the system itself is described, along with its relation to the knowledge engineering process. Finally, results and limitations of the methodology are discussed.

Personal Construct Theory. ETS employs clinical psychotherapeutic interviewing methods originally developed by George Kelly, who was interested in helping people categorize experiences and classify their environment. A person can not only use this organization to predict events more accurately and act more effectively, but can also change the organization to fit specific perceived needs (Shaw, 1981). George Kelly's (Kelly, 1955) theory of a personal scientist was that each individual seeks to predict and control events by forming theories, testing hypotheses, and weighing experimental evidence. Certain techniques for use in psychotherapy were developed by Kelly based on this philosophy. In a Repertory Grid Test for eliciting role models, Kelly asked his clients to list, compare, and rate role models to derive and analyze character traits. Aspects of these role models were used to build a rating grid. A non-parametric factor analysis method was then used to analyze the grid (Kelly, 1963). The results helped Kelly and his client understand the degree of similarity between the traits. He named a trait and its opposite a construct, and hypothesized that each construct represented some internal concept for the client.

Following construction and analysis of the grid, the clinician entered an interviewing phase. Typically, in this phase, the interviewer would attempt to help the subject expand on and verify the relationships between concepts pointed out by the grid analysis. One interviewing technique was known as laddering. This was a method which helped connect the elicited concepts in their superordinate and subordinate relationships by asking the client "how" and "why" questions. Hinkle (Hinkle, 1965) developed a taxonomy of implication types. He suggests ambiguity may arise when a subject has an incomplete abstraction of the differences between the contexts in which the concept was used, or a subject uses one concept label for two independent traits. He also felt that the processes of psychological movement, conflict resolution, and insight depend on locating and resolving such points of ambiguous implication into parallel or orthogonal forms using techniques similar to laddering.

More recently, elicitation and analysis of repertory grids has been made available through interactive computer programs (Shaw, 1979). A variety of grid analysis techniques using distance-based measures between vectors (either rows or columns of the grid) have been used, in which both elements and constructs may be graphically compared by the subject to find similarities and differences. Some of these techniques include principal components analysis (INGRID, Slater, 1977), a Q-Analysis of the grid in a cluster-analyzed hierarchical format (QARMS, Atkin, 1974), and a linear cluster analysis (FOCUS, Shaw, 1980). A more formal description of implication relationships is presented by Gaines, based on logic (Gaines and Shaw, 1981). Instead of looking at grid rows and columns as vectors in space, Gaines views them as assignments of truth-values to logical predicates. In binary rating systems such as those used in Kelly's original grid methodology, an "X" would simply mean true, and a blank would mean false.
A grid, then, can be seen as a matrix of truth values. Gaines goes on to show a method of deriving implications from grids which use rating scales rather than binary scales. The method is based on multi-valued logics (Rescher, 1969) using fuzzy set theory (Zadeh, 1965). Using this method, the implication relation can be extended to include implication strength. The program ENTAIL achieves this and produces graphs which show entailment relations among constructs and elements (Gaines, 1981).

Human and Computer Interviewing. It is almost always difficult for the expert to articulate problem-solving knowledge in terms which can be utilized by an expert system. Human interviewing processes elicit knowledge which is incomplete, inconsistent, and imprecise. The knowledge is often subconscious, and the expert may not be reliable when introspecting about problem solving. The expert must come to trust the interviewer enough to overcome any fears or insecurities felt about the expert system building process. He may feel insecure about losing his job, or feel threatened by the encroachment of computers into his private domain, or he may not want to subject his problem-solving methods to the scrutiny of other human experts. Gaines points out that using a computer to interview subjects alleviates many of these difficulties (Gaines and Shaw, 1981).

Expertise Transfer System. In an effort to apply grid methodology techniques to knowledge acquisition, the Expertise Transfer System (ETS) has been developed. ETS runs in Interlisp-D on a Xerox 1100 Dolphin Lisp Machine, using the high-resolution bitmap windows and mouse interaction capabilities provided. In the following example knowledge session, an expert will attempt to build a knowledge base for a Database Management System Advisor. The completed expert system would be able to advise a software engineer as to which database management system to use for an application problem.

First, the Expertise Transfer System elicits conclusion items from the expert. Kelly referred to these items as elements. An expert system would be expected to recommend some subset of these items based on a given set of problem characteristics. In this case, the elements consist of all the databases which the expert believes the expert system should be knowledgeable about (see Figure 1).

[Figure 1. ETS Eliciting Elements from the Expert - List Mode: a transcript in which the expert confirms that he already has a list of things to classify and enters candidate database systems one per line, including SYSTEM-2000, TOTAL, RAMIS-II, DMS-170, ORACLE, INGRES, ADABAS, SIR, EASYTRIEVE, and CREATABASE, among others.]

If the expert can not verbalize the set of conclusion elements initially, ETS enters an incremental interview mode of operation based on a program called DYAD (Keen et al., 1981), where elements are elicited one at a time, based on differences between and similarities to other elements (see Figure 2).

[Figure 2. ETS Element and Construct Elicitation - Incremental Mode: a transcript in which the expert names databases one at a time (e.g., CREATABASE, EASYTRIEVE, SIR, ADABAS) and supplies an important attribute (e.g., INVERTED) that distinguishes one element from another.]

After the expert has listed the database management systems to be considered, ETS asks him to compare successive groups of three databases, and name an important trait and its opposite which distinguishes two members of this triad from the third one (see Figure 3). The result of this first phase of the interview process is a list of elements to be classified, and a list of classification parameters, all of which were derived from the expert.

[Figure 3. Problem Trait (Construct) Derivation: for the triad CREATABASE, EASYTRIEVE, and SIR the expert names TEXT RETRIEVAL and its opposite NOT TEXT RETRIEVAL; for EASYTRIEVE, SIR, and ADABAS, INVERTED versus NOT INVERTED; for SIR, ADABAS, and INGRES, RUN ON VAX versus DO NOT RUN ON VAX.]

As Kelly points out, an initial set of constructs will probably not be a sufficient window into an individual's construct system. Later in the interviewing process, Kelly's technique of laddering is used, as well as construct volunteering and further triad formation, to expand the construct network. So far, these techniques have been sufficient for building rapid prototype systems with reasonable behavior. Laddering may also be continued later on in the manual interviewing process as the expert and knowledge engineers work together to refine the knowledge base.

Next, the system asks the expert to rate each element against each construct (see Figure 4), thereby forming a rating grid (see Figure 5). In addition to allowing numerically scaled ratings, ETS accepts the ratings "N" (neither trait applies) and "?" (both traits apply), as described by Landfield (Landfield, 1976).

[Figure 4. Rating Constructs with Elements.]
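The kind of grid analysis described next can be illustrated with a small, self-contained sketch. The Python fragment below builds a toy rating grid (all element names, construct poles, ratings, and thresholds are invented for illustration, not taken from the session above) and computes a crude fuzzy-style implication strength between construct poles; it stands in for, and is much simpler than, the ENTAIL analysis (Gaines, 1981) referred to in the text.

import numpy as np

# rows are constructs (left pole, right pole); columns are elements
elements   = ["SYSTEM-2000", "TOTAL", "RAMIS-II", "ORACLE", "INGRES", "ADABAS"]
constructs = [("text retrieval", "not text retrieval"),
              ("inverted", "not inverted"),
              ("runs on VAX", "does not run on VAX")]
grid = np.array([[5, 5, 4, 2, 1, 2],      # invented ratings: 1 = left pole applies,
                 [4, 5, 3, 1, 2, 1],      # 5 = right pole applies
                 [5, 4, 5, 1, 1, 2]], dtype=float)

def implication_strength(a, b, scale_max=5.0):
    """Rough estimate of how strongly membership in the left pole of construct a
    implies membership in the left pole of construct b (a stand-in measure only)."""
    mu_a = (scale_max - a) / (scale_max - 1.0)      # fuzzy membership in left pole of a
    mu_b = (scale_max - b) / (scale_max - 1.0)
    # the implication fails to the extent mu_a exceeds mu_b on some element
    return 1.0 - np.max(np.clip(mu_a - mu_b, 0.0, None))

for i, (ci, _) in enumerate(constructs):
    for j, (cj, _) in enumerate(constructs):
        if i != j:
            s = implication_strength(grid[i], grid[j])
            if s > 0.7:                             # arbitrary reporting threshold
                print(f"{ci!r} -> {cj!r}  (strength {s:.2f})")

Pole pairs whose strength exceeds the threshold would become arcs of an entailment graph of the kind the expert then reviews and, where necessary, challenges.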
The expert may edit these ratings until he is satisfied with their values. If this does not correct the perceived problem shown by the entailment graph, ETS asks the expert if he can think of any elements which would be exceptions to the entailment. If the expert can think of an exception, then that element is added to the knowledge base, rated against all the constructs, and the entailment graph is regenerated. If this still does not correct the problem, ETS asks the expert if he would like to refine the constructs involved in the implication relation. This involves breaking up one of the constructs into two or more new constructs. To do this, ETS invokes a simple laddering method which asks “why?” and “how?” questions concerning the 29 constructs. New constructs are added to the knowledge base, and the rating grid and entailment graph are regenerated. If the entailment still exists on the graph, the expert may indeed agree that the entailment relationship is sound, and that he just never thought about the problem characteristics in that manner before. In this way, ETS helps the expert structure the problem-solving knowledge based on his own operative internal construct network. On the other hand, if the entailment relationship still exists, and the expert still disagrees with it, then this represents an inconsistency in the way the expert thinks about the problem. In effect, what has happened is that ETS has captured an important internal conflict in the expert’s construct hierarchy. Ambiguous construct relations also point out internal conflicts. When the expert is finished correcting entailment arcs, ETS searches for ambiguous relations, and a ain invokes the laddering method to try and re me these points of P conflict into parallel or orthogonal forms. Both ambiguous relations and relations with which the expert disagrees are important points of conflict in the expert’s problem-solving methods. These may be resolved with ETS or in later discussions between the knowledge engineers and the expert. The process of resolving them involves “psychological movement, conflict resolution, and insight” (Hinkle, 1965). These are points of interest in which further exploration is necessary both in producing the expert system, and in refining the expert’s own problem-solving processes. A set of conflict points is generated as part of the knowledge base report listings. Rule Generation. After the entailment graph has been constructed. ETS generates two types of heuristic production rule%: conclusion r&s and intermediate rules. Each production rule is generated with a belief strength or certainty factor. Certainty factors are used to represent a relative strength of belief which the expert would associate with the conclusion of the rule. Once generated, all rules may be reviewed and modified by the expert. Conclusion rules are created from individual ratings in the grid. Each rating has the potential for generating a rule. The expert is first asked to rate the relative importance of each construct in terms of its potential importance in solving the problem. Then, ETS employs an empirical algorithm to generate certainty factors for each rule. 
The algorithm takes into account grid ratings, relative construct importance, and the certainty factor combination algorithm in the target expert system building tool.

[Figure 5. Screen Snapshot of ETS Showing Rating Grid and Entailment Graph: the entailment display lists implications between construct poles, for example that network DBMSs run on an IBM mainframe and require a high level of experience, that non-network DBMSs do not run on an IBM mainframe and require a low level of experience, and that hierarchical systems are neither text-retrieval nor relational; some relations are flagged *PARALLEL* or *RECIPROCAL*.]

Intermediate rules are based on relations in the entailment graph. For each entailment, one rule is generated. The strength of the rule's certainty factor is based on the relative strength of the entailment. These rules generate intermediate pieces of evidence at a higher conceptual level than those of conclusion rules.

Multiple Knowledge Representations. Allowing the expert to work with multiple forms of his stated problem-solving knowledge is an important aspect of ETS. Lenat (Lenat, 1982) argues that knowledge representations should shift as different problem-solving needs arise. Each different representation method in ETS potentially helps the expert think about the problem in a new way, and tends to point out conflicts and inconsistencies over time. Rather than trying to force the expert to eradicate inconsistencies, this methodology takes advantage of the important psychological and problem-solving aspects of inconsistencies by helping the expert explore them.

Knowledge Expansion. In addition to exploring conflicts to add new elements or constructs, the knowledge base may be expanded in a number of ways. Information may be modified and volunteered. New element triads may be created for comparison. Incremental interviewing may be continued, and laddering may be invoked to expand construct hierarchies. Listings and reports generated by the system are useful in later manual interviewing phases of the knowledge engineering process. The knowledge engineering team does not need to begin from scratch when beginning discussions with the expert. They have basic vocabulary, important problem traits, an implication hierarchy of these traits, and conflict areas where discussions may begin. This has been an important aid in streamlining the knowledge acquisition process in building expert systems. Knowledge engineers may also use associated interviewing techniques from personal construct methodology such as laddering and the resolution of ambiguous construct relationships.

Testing - Rapid Prototyping of an Expert System. Once the rules have been generated, ETS has enough information to automatically generate a knowledge base for an expert system building tool based on production rules. Currently, ETS can generate knowledge bases for KS-300 and OPS5. Consultations are then run from these prototypes to test the knowledge base for necessity and sufficiency. An example consultation using the knowledge base generated for the Database Management Advisor is shown in Figure 6. Manual interviewing and incremental knowledge refinement are still necessary to produce a system that performs at an expert level.
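The conclusion-rule generation step can be sketched as follows. The Python fragment below turns extreme ratings from a toy grid into EMYCIN-style rules; the certainty-factor formula is an invented stand-in, since the paper only says that ETS uses an empirical algorithm weighing ratings, construct importance, and the target tool's CF model, and all names and numbers are again illustrative.

import numpy as np

elements   = ["ORACLE", "INGRES", "ADABAS"]
constructs = [("text retrieval", "not text retrieval"),
              ("runs on VAX", "does not run on VAX")]
grid = np.array([[5, 1, 2],               # invented ratings, 1 = left pole, 5 = right pole
                 [1, 1, 5]], dtype=float)
importance = [0.9, 0.5]                   # expert-supplied relative construct importance

def conclusion_rules(grid, elements, constructs, importance, scale_max=5.0):
    """Turn each extreme rating into a conclusion rule with a certainty factor.
    The CF formula here is a placeholder, not the ETS algorithm."""
    rules = []
    for i, (left_pole, right_pole) in enumerate(constructs):
        for j, elem in enumerate(elements):
            r = grid[i, j]
            if r in (1.0, scale_max):
                pole = left_pole if r == 1.0 else right_pole
                cf = round(0.2 + 0.75 * importance[i], 2)
                rules.append((pole, elem, cf))
    return rules

for pole, elem, cf in conclusion_rules(grid, elements, constructs, importance):
    print(f"IF the application is {pole} THEN consider {elem} (CF {cf})")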
[Figure 6. Rapid Prototyping: A KS-300 Knowledge Base Developed in Two Hours. A consultation transcript in which the generated system asks for the name of DATABASE-1 and for its REPORT-WRITER, NETWORK, and EXPERIENCE attributes (the expert answers GOOD-REPORT-WRITER, NON-NETWORK-DBMS, and REQUIRES-HIGH-LEVEL-OF-EXPERIENCE), and then reports the concluded values of DATABASE-1.]

The initially generated knowledge base may not be similar in structure to later ones. However, fast prototyping can be used to help analyze the sufficiency of the initial knowledge base.

Other Efforts in Developing Knowledge Acquisition Systems. ETS could be used in combination with any of the knowledge acquisition tool families described below as a front-end processor to elicit initial traits and heuristics. TEIRESIAS (Davis and Lenat, 1982) is a subsystem of EMYCIN which aids the expert and knowledge engineers when they attempt to refine an existing knowledge base. ETS can be used to supply the initial knowledge base, since TEIRESIAS is not capable of eliciting such information on its own. META-DENDRAL (Buchanan and Feigenbaum, 1978) and AQ11 (Michalski, 1980) both perform classification analyses of training examples from their respective knowledge bases in order to produce generalized rules using inductive inference strategies. META-DENDRAL learns rules that predict how classes of compounds fragment in a mass spectrometer, and AQ11 formulates rules from traits and test cases. Both of these systems need an initial set of problem traits before classification can begin; it is up to the expert and knowledge engineer to produce the initial list of applicable traits and relevant training examples. Again, ETS methodology could be used as a front-end for these systems to elicit the problem characteristics. NANOKLAUS (Hass and Hendrix, 1981) attempts to elicit a classification hierarchy from an expert through a natural language dialog. The information is then used for certain classes of deductive data retrieval. The expert uses NANOKLAUS to enter IS-A hierarchy relations and object descriptions. The expert needs to have such a hierarchy in mind before using NANOKLAUS. ETS's methods could be combined with this system to help elicit initial relevant concepts in terms of constructs derived from objects, and to produce heuristic rules.
After spending fifteen or twenty minutes trying to generate traits for these items, the expert realizes the nature of the expected responses, and starts over again. In these and similar processes, the expert is trained to think in terms that are useful for problem-solving using production systems. As a rule, experts are enthusiastic about using ETS. Typically, an expert will want to use the system again the following day after having had a chance to think about the problem in the system’s terms. This enthusiasm is important in starting projects quickly. ETS is best suited for analytic problems whose solutions may be based on production systems. The system can not readily handle synthesis class problems or problems which require a combination of analysis and synthesis. However, ETS can handle the analytic portions of these problems, and it should be noted that many planning and design problems involve synthesizing the results of several analytic components (eg., Rl, in McDermott, 1980a and 1980b). These components may be investigated with ETS. It is difficult to apply grid methodology to elicit deep causal knowledge, procedural knowledge, or strategic knowledge, although some alternate forms of interviewing techniques are being explored in this area for use with ETS. For instance, the expert may be asked for problem-solving strategies rather than conclusion items, and traits of these strategies could then be elicited. One assumption of grid methodology (Kelly, 1955) is that the elicited set of elements will be a sufficient representation of the problem conclusion set. It must be assumed that the expert knows what these conclusions are, or that the relevant set will be built with ETS knowledge expansion methods or subsequent manual interviewing. It is difficult to verify that a sufficient set of constructs have been elicited. Inappropriate constructs are relatively easy to weed out of the system, but errors of omission are harder to detect. Some important constructs which are missing may be elicited using ETS’s knowledge base expansion techniques, but there is no guarantee that a sufficient set will be found. This is a problem with knowledge acquisition in general. Expert-level performance of the final expert system is critically dependent on obtaining and effectively using a sufficient set of problem-solving knowledge. Many enhancements are being considered to improve ETS’s utility. These include expansion of the interview methods, inclusion of more analytic tools to identify the relative importance and validity of elicited constructs and elements (such as in Hinkle, 1965, and principle components analysis), and development of feedback paths between ETS and the target expert system building tool. Other psychological techniques such as multi-dimensional scaling are being analyzed. A knowledge engineering guide, illustrating the use of ETS methodology and its associated manual interviewing techniques, is also _ . being prepared. In conclusion. ETS and its related methods have been useful aids ~during knowledge engineering serving to streamline the knowledge acquisition process. Acknowledgements. Thanks to Roger Beeman, Keith Butler, Alistair Holden, Earl Hunt, Art Nagai, Steve Tanimoto, Lisle Tinglof, Rand Waltzman, and Bruce Wilson for their contributions and support. This work was performed at the Artificial Intelligence Center of Boeing Computer Services in Seattle, Washington. References Atkin, R. H., Mathematical Structure in Human Affairs, London: Heinemann, 19’14. 
Bannister, D., and Mair, J. M. M., The Evaluation of Personal Constructs, Academic Press, 1968.
Barstow, D. R., Aiello, N., Duda, R., Erman, L., Forgy, C., Greiner, R., Lenat, D. B., London, P., McDermott, J., Nii, P., and Weiss, S., "Languages and Tools for Building Expert Systems," in F. Hayes-Roth, D. A. Waterman, and D. B. Lenat (eds.), Building Expert Systems, Addison-Wesley, 1983.
Buchanan, B. G., and Feigenbaum, E. A., "DENDRAL and META-DENDRAL: Their Applications Dimension," Artificial Intelligence, 11, 1978.
Buchanan, B. G., Barstow, D., Bechtal, R., Bennet, J., Clancey, W., Kulikowski, C., Mitchell, T. M., and Waterman, D. A., "Constructing an Expert System," in F. Hayes-Roth, D. A. Waterman, and D. B. Lenat (eds.), Building Expert Systems, Addison-Wesley, 1983.
Davis, R., and Lenat, D. B., Knowledge-Based Systems in Artificial Intelligence, New York: McGraw-Hill, 1982.
Feigenbaum, E. A., "The Art of Artificial Intelligence I: Themes and Case Studies of Knowledge Engineering," Proceedings, Fifth International Joint Conference on Artificial Intelligence, Massachusetts Institute of Technology, 1977.
Forgy, C. L., OPS5 User's Manual, Department of Computer Science, Carnegie-Mellon University, 1981.
Gaines, B. R., and Shaw, M. L. G., "New Directions in the Analysis and Interactive Elicitation of Personal Construct Systems," in M. Shaw (ed.), Recent Advances in Personal Construct Technology, New York: Academic Press, 1981.
Hass, N., and Hendrix, G., "Learning by Being Told: Acquiring Knowledge for Information Management," in R. S. Michalski, J. Carbonell, and T. M. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach, Palo Alto, Calif.: Tioga Press, 1983.
Hinkle, D. N., The Change of Personal Constructs from the Viewpoint of a Theory of Implications, Ph.D. Dissertation, Ohio State University, 1965.
Keen, T. R., and Bell, R. C., "One Thing Leads to Another: A New Approach to Elicitation in the Repertory Grid Technique," in M. Shaw (ed.), Recent Advances in Personal Construct Technology, New York: Academic Press, 1981.
Kelly, G. A., The Psychology of Personal Constructs, New York: Norton, 1955.
Kelly, G. A., "Non-parametric Factor Analysis of Personality Theories," Journal of Individual Psychology, 19, 1963.
Landfield, A., "A Personal Construct Approach to Suicidal Behavior," in Slater, P. (ed.), Dimensions of Intrapersonal Space, Vol. 1, London: Wiley, 1976.
Lenat, D. B., "The Nature of Heuristics," Artificial Intelligence, 19, 1982.
McDermott, J., "R1: An Expert in the Computer Systems Domain," in AAAI 1, 1980a.
McDermott, J., "R1: An Expert Configurer," Rep. no. CMU-CS-80-119, Computer Science Department, Carnegie-Mellon University, Pittsburgh, Pa., 1980b.
Michalski, R. S., "Pattern Recognition as Rule-guided Inductive Inference," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2, no. 4, 1980.
Rescher, N., Many-Valued Logic, New York: McGraw-Hill, 1969.
Shaw, M. L. G., and Gaines, B. R., "Fuzzy Semantics for Personal Construing," in Systems Science and Science, Kentucky: Society for General Systems Research, 1980.
Shaw, M. L. G., and McKnight, C., "ARGUS: A Program to Explore Intra-Personal Personalities," in M. Shaw (ed.), Recent Advances in Personal Construct Technology, New York: Academic Press, 1981.
Shortliffe, E. H., Computer-Based Medical Consultations: MYCIN, New York: Elsevier, 1976.
Slater, P. (ed.), Dimensions of Intrapersonal Space, Vol. 2, London: Wiley, 1977.
van Melle, W., Shortliffe, E. H., and Buchanan, B. G., "EMYCIN: A Domain-Independent System that Aids in Constructing Knowledge-Based Consultation Programs," Machine Intelligence, Infotech State of the Art Report, Series 9, No. 3, 1981.
Waterman, D. A., and Hayes-Roth, F., An Investigation of Tools for Building Expert Systems, R-2818-NSF, Rand Corporation, 1982.
Zadeh, L. A., "Fuzzy Sets," Information and Control, 8, 1965.
CLASSIFICATION PROBLEM SOLVING William J. Clancey Heuristic Programming Project Computer Science Department Stanford University Stanford. CA 94305 ABSTRACT A broad range of heuristic programs-embracing forms of diagnosis, catalog selection, and skeletal planning-accomplish a kind of well- structured problem solving called classification. These programs have a characteristic inference structure that systematically relates data to a pre-enumerated set of solutions by abstraction. heuristic association, and refinement. This level of description specifies the knowledge needed to solve a problem. independent of its representation in a particular computer language. The classification problem-solving model provides a useful framework for recognizing and representing similar problems, for designing representation tools, and for understanding why non-classification problems require different problem-solving methods.* I INTRODUCTION Over the past decade a variety of heuristic programs have been written to solve problems in diverse areas of science, engineering, business, and medicine. Yet, presented with a given “knowledge engineering tool,” such as EMYCIN (van Melle, 1979), we are still hard-pressed to say what kinds of problems it can be used to solve well. Various studies have demonstrated advantages of using one representation language instead of another-for ease in specifying knowledge relationships, control of reasoning, and perspicuity for maintenance and explanation (Clancey, 1981. Swartout, 1981, Aiello, 1983, Aikins, 1983, Clancey, 1983a). Other studies have characterized in low-level terms why a given problem might be inappropriate for a given language, for example, because data are time-varying or subproblems interact (Hayes-Roth et al.. 1983). Rut dttempts to describe kinds of problems in terms of shared features have not been entirely satisfactory: Applications-oriented descriptions like “diagnosis” are too general (e.g., the program might not use a device model), and technological terms like “rule-based” don’t describe what problem is being solved (Hayes, 1977, Hayes, 1979). Logic has been suggested as a tool for a “knowledge level” analysis that would specify what a heuristic program does, independent of its implementation in a programming language (Nilsson, 1981. Newell, 1982). However, we have lacked a set of terms and relations for doing this. In an attempt to specify in some canonical terms what many heuristic programs known as “expert systems” do, an analysis was made of ten rule-based systems (including MYCIN, SACON. and The Drilling Advisor), a frame-based system (GRUNGY) and a program coded directly in LISP (SOPHIE III). There is a striking pattern: These programs proceed through easily identifiable phases of data abstraction, heuristic mapping onto a hierarchy of pre-enumerated solutions, and refinement within this hierarchy. In short, these programs do what is commonly called classification. *This research has been supported In part by O\R and AR1 Contract NOOO14-79C-0302. Computational resources have been provided by the SLMFX-AIM facdlty (NIH grant RRO0785). .Many of the ideas presented here were sumulated by discussIons wtth Denny Brown in our attempt to develop a framework for teachmg knowledge engineering. I am also grateful to Tom Dlettench. 
Steve Hardy, and Peter Szolovits for their suggestions and encouragement. The Drilling Advisor mentioned herein is a product of Teknowledge, Inc.

Focusing on content rather than representational technology, this paper proposes a set of terms and relations for describing the knowledge used to solve a problem by classification. Subsequent sections describe and illustrate the classification model in the analysis of MYCIN, SACON, GRUNDY, and SOPHIE III. Significantly, a knowledge level description of these programs corresponds very well to psychological models of expert problem solving. This suggests that the classification problem-solving model captures general principles of how experiential knowledge is organized and used, and thus generalizes some cognitive science results. There are several strong implications for the practice of building expert systems and continued research in this field.

II CLASSIFICATION PROBLEM SOLVING DEFINED

We develop the idea of classification problem solving by starting with the common sense notion and relating it to the reasoning that occurs in heuristic programs.

A. Simple classification

As the name suggests, the simplest kind of classification problem is to identify some unknown object or phenomenon as a member of a known class of objects or phenomena. Typically, these classes are stereotypes that are hierarchically organized, and the process of identification is one of matching observations of an unknown entity against features of known classes. A paradigmatic example is identification of a plant or animal, using a guidebook of features such as coloration, structure, and size.

Some terminology we will find helpful: The problem is the object or phenomenon to be identified; data are observations describing this problem; possible solutions are patterns (variously called schemas, frames, or units); each solution has a set of features (slots or facets) that in some sense describe the concept either categorically or probabilistically; solutions are grouped into a specialization hierarchy based on their features (in general, not a single hierarchy, but multiple, directed acyclic graphs); a hypothesis is a solution that is under consideration; evidence is data that partially matches some hypothesis; the output is some solution.

The essential characteristic of a classification problem is that the problem solver selects from a set of pre-enumerated solutions. This does not mean, of course, that the "right answer" is necessarily one of these solutions, just that the problem solver will only attempt to match the data against the known solutions, rather than construct a new one. Evidence can be uncertain and matches partial, so the output might be a ranked list of hypotheses. Besides matching, there are several rules of inference for making assertions about solutions. For example, evidence for a class is indirect evidence that one of its subtypes is present. Conversely, given a closed world assumption, evidence against all of the subtypes is evidence against a class. Search operators for finding a solution also capitalize on the hierarchical structure of the solution space. These operators include: refining a hypothesis to a more specific classification; categorizing the problem by considering superclasses of partially matched hypotheses; and discriminating among hypotheses by contrasting their superclasses (Patil, 1981; Pople, 1982; Clancey, 1984).
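As a concrete illustration of the matching and refinement just described, the following minimal Python sketch classifies a set of observations by matching them against a pre-enumerated solution hierarchy and pushing supported hypotheses down to their subtypes. The toy hierarchy and feature values are our own (loosely echoing the bacteriology example used later in the text) and are not code from any of the programs analyzed.

# pre-enumerated solution hierarchy and categorical features (illustrative only)
hierarchy = {
    "organism": ["gram-negative organism", "gram-positive organism"],
    "gram-negative organism": ["e.coli", "pseudomonas"],
    "gram-positive organism": ["streptococcus"],
}
features = {
    "gram-negative organism": {"gram stain": "negative"},
    "gram-positive organism": {"gram stain": "positive"},
    "e.coli": {"gram stain": "negative", "morphology": "rod"},
    "pseudomonas": {"gram stain": "negative", "morphology": "rod", "aerobic": "yes"},
    "streptococcus": {"gram stain": "positive", "morphology": "coccus"},
}

def matches(cls, data):
    """A hypothesis survives if none of its known features contradict the data."""
    return all(data.get(k, v) == v for k, v in features.get(cls, {}).items())

def refine(cls, data):
    """Refinement operator: push a supported hypothesis to its supported subtypes."""
    subs = [s for s in hierarchy.get(cls, []) if matches(s, data)]
    if not subs:
        return [cls]
    return [leaf for s in subs for leaf in refine(s, data)]

data = {"gram stain": "negative", "morphology": "rod"}
print(refine("organism", data))    # e.g. ['e.coli', 'pseudomonas']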
For simplicity, we will refer to the entire process of applying these rules of inference and operators as refinemenf. The specification of this process-a control strategy-is an orthogonal issue which we will consider later. 6. Data abstraction In the simplest problems, data are solution features, so the matching and refining process is direct. For example, an unknown organism in MYCIN can be classified directly given the supplied data of gram stain and morphology. For many problems, solution features are not supplied as data, but are inferred by data abstraction. There are three basic relations for abstracting data in heuristic programs: l quafitafive abstraction of quantitative data (“if the patient is an adult and white blood count is less than 2500, then the white blood count is low”): a definitional abstraction (“if the structure is one-dimensional of network construction, then its shape is a beam”); and l generalization in a subtype hierarchy judge, then he is an educated person”). (“if the client is a These interpretations are usually made by the program with certainty; thresholds and qualifying contexts are chosen so the conclusion is categorical. It is common to refer to this knowledge as “descriptive. ” “factual,” or “definitional.” C. Heuristic classification In simple classification, the data may directly match the solution features or may match after being abstracted. In heuristic classification, solution features may also be matched heuristically. For example, MYCIN does more than identify an unknown organism in terms of features of a laboratory culture: It heuristically relates an abstract characterization of the patient to a classification of diseases. We show this inference structure schematically, followed by an example (Figure 1). Basic observations about the patient are abstracted to patient categories, which are heuristically linked to diseases and disease categories. While only a subtype link with Ecoli infection is shown here, evidence may actually derive from a combination of inferences. Some data might directly match Ecoli by identification. Discrimination with competing subtypes of gram-negative infection might also provide evidence. As stated earlier, the order in which these inferences are made is a matter of control strategy. The important link we have added is a heuristic association between a characterization of the patient (“compromised host”) and categories of diseases (“gram-negative infection”). Unlike the factual and hierarchical evidence propagation we have considered to this point, this inference makes a great leap. A heuristic relation is based on some implicit, possibly incomplete, model of the world. This relation is often empirical, based just on experience: it corresponds most closely to the “rules of thumb” often associated with heuristic programs (Feigenbaum, 1977). Heuristics of this type reduce search by skipping over intermediate relations (this is why we don’t call abstraction relations “heuristics”). These associations are usually uncertain because the intermediate relations may not hold in the specific case. Intermediate relations may be omitted because they are unobservable or poorly understood. In a medical diagnosis program, heuristics typically skip over the causal relations between symptoms and diseases. 
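A small Prolog sketch makes the chain of relations concrete. The clauses below paraphrase the abstraction rules quoted above and the MYCIN inference path shown in Figure 1; the predicate names and the sample patient facts are invented for this illustration and are not MYCIN's actual representation, which also attaches certainty factors to each step.

    % Hypothetical patient data.
    age(34).
    white_blood_count(2100).

    % Qualitative abstraction of quantitative data.
    abstraction(low_wbc) :-
        age(A), A >= 18,                      % "the patient is an adult"
        white_blood_count(W), W < 2500.

    % Definitional abstraction.
    abstraction(leukopenia) :- abstraction(low_wbc).

    % Generalization in a subtype hierarchy.
    abstraction(immunosuppressed) :- abstraction(leukopenia).
    abstraction(compromised_host) :- abstraction(immunosuppressed).

    % Heuristic association: patient category linked to a disease class.
    evidence_for(gram_negative_infection) :- abstraction(compromised_host).

    % Refinement along a subtype link in the disease hierarchy.
    evidence_for(ecoli_infection) :- evidence_for(gram_negative_infection).

The query evidence_for(D) chains from the raw data up through the abstractions, across the heuristic association, and down the subtype link, returning the disease class and its refinement.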
HEURISTIC MATCH Patient Abstractions * Disease Classes DATA t REFINEMENT ABSTRACTION Patient Data Diseases HEURISTIC Compromised Host j Gram-Negative Infection GENERALIZATION t 1 SUBTYPE lmmunosuppressed E.coli Infection GENERALIZATION t Leukopenia DEFINITIONAL t Low WBC QUALITATIVE t WBC < 2.5 Figure 1: Inference structure of MYCIN To repeat, classification problem solving involves heuristic association of an abstracted problem statement onto features that characterize a solution. This can be shown schematically in simple terms (Figure 2). HEURISTIC MATCH Data Abstractions DATA t ABSTRACTION * Solution Abstractions REFINEMENT Solutions Figure 2: Classification problem solving inference structure This diagram summarizes how a distinguished set of terms (data. data abstractions, solution abstractions, and solutions) are related systematically by different kinds of relations and rules of inference. This is the structure of inference in classification problem solving. In a study of physics problem solving, Chi (Chi, et al., 1981) calls data abstractions “transformed” or “second order problem features.” In an important and apparently common variant of the simple model, data abstractions are themselves patterns that are heuristically matched. In essence, there is a sequence of classification problems. GRUNDY. analyzed below, illustrates this. 50 D. Search in classification problem solvinq The issue of search is orthogonal to the kinds of inference we have been considering. “Search” refers to how a network made up of abstraction, heuristic, and refinement relations is interpreted, how the flow of inference actually might proceed in solving a problem. Following Hayes (Hayes, 1977) we call this the process strucfure. There are three basic process structures in classification problem solving: 1. Data-direcfed search: The program works forwards from data to abstractions, matching solutions until all possible (or non-redundant) inferences have been made. 2. Solution- or Hypothesis-directed search: The program works backwards from solutions. collecting evidence to support them, working backwards through the heuristic relations to the data abstractions and required data to solve the problem. If solutions are hierarchically organized, then categories are considered before direct features of more specific solutions. 3. Opportunistic search: The program combines data and hypothesis-directed reasoning (Hayes-Roth and Hayes- Roth, 1979). Data abstraction rules tend to be applied immediately as data become available. Heuristic rules “trigger” hypotheses, followed by a focused, hypothesis- directed search. New data may cause refocusing. By reasoning about solution classes, search need not be exhaustive. Data- and hypothesis-directed search are not to be confused with the implementation terms “forward” or “backward chaining.” Rl provides a superb example of how different implementation and knowledge level descriptions can be. Its rules are interpreled by forward-chaining, but it does a form of hypothesis-directed search. systematically setting up subproblems by a fixed procedure that focuses reasoning on spatial subcomponents of a solution (McDermott, 1982). The degree to which search is focused depends on the level of indexing in the implementation and how it is exploited. For example, MYCIN’s “goals” are solution classes (e.g., types of bacterial meningitis), but selection of rules for specific solutions (e.g., Ecoli meningitis) is unordered. Thus. 
MYCIN’s search within each class is unfocused (Clancey, 1983b). The choice of process structure depends on the number of solutions, whether they can be categorically constrained, usefulness of data (the density of rows in a data/solution matrix), and the cost for acquiring data. ill EXAMPLES OF CLASSIFICATION PROBLEM SOLVING Here we schematically describe the architectures of SACON. GRUNDY, and SOPHIE III in terms ofclassification problem solving. These are necessarily very brief descriptions, but reveal the value of this kind of analysis by helping us to understand what the programs do. After a statement of the problem. the general inference structure and an example inference path are given, followed by a brief discussion. A. SACON Problem: SACON (Bennett, et al., 1978) selects classes of behavior that should be further investigated by a structural analysis simulation program (Figure 3). Analysis Program t DATA HEURISTIC MATCH ABSTRACTION Abstract Structure + Quantitative Prediction of Material Behavior DATA ABSTRACTION Structure Description Inelastic-Fatigue Program t DEFINITIONAL I Fatigue Deflection + Material + HEURISTIC I QUALITATIVE Size I Beam + Support * Stress and Deflection t Distribution Magnitude DEFINITIONAL One-dimensional and Network Figure 3: Inference structure of SACON Discussion: SACON solves two problems by classification- analyzing a structure and then selecting a program. It begins by heuristically selecting a simple numeric model for analyzing a structure (such as an airplane wing). The model produces stress and deflection estimates, which the program then abstracts in terms of features that hierarchically describe different configurations of the MARC simulation program. There is no refinement because the solutions to the first problem are just a simple set of possible models, and the second problem is only solved to the point of specifying program classes. ( In another software configuration system we and14 led, specific program input parameters are inferred in a refinement step.) B. GRUNDY Problem: GRUNDY (Rich, 1979) heuristically classifies a reader’s personality and selects books he might like to read (Figure 4). HEURISTIC MATCH Self-Description and Behavior HEURISTIC People Classes Book Classes REFINEMENT * Books HEURISTIC Watches No TV * Educated * Books with Intelligent Person Main Character Stereotype SUBTYPE t “Earth Angels” Figure 4: Inference structure of GRUNDY Discussion: GRUNDY solves two classification problems heuristically. Illustrating the power of a knowledge level analysis, we discover that the people and book classifications are not distinct in the implementation. For example, “fast plots” is a book characteristic, but in the implementation “likes fast plots” is associated with a person stereotype. The relation between a person stereotype and “fast plots” is heuristic and should be distinguished from abstractions of people and books. One objective of the program is to learn better people stereotypes (user models). The classification description of the user modeling problem shows that GRUNDY should also be learning better ways to characterize books, as well as improving its heuristics. If these are not treated separately, learning may be hindered. This example illustrates why a knowledge level analysis should precede representation. It is interesting to note that GRUNDY does not attempt to perfect the user model before recommending a book. Rather, refinement of the person stereotype occurs when the reader rejects book suggestions. 
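As a rough sketch of this two-stage structure, the following clauses mirror the single path shown in Figure 4; the predicate names are invented for the illustration and do not reflect GRUNDY's actual frame representation or its handling of multiple stereotypes and rejected suggestions.

    % Stage 1: classify the reader into a person stereotype
    % from self-description and behavior.
    reports(watches_no_tv).

    person_stereotype(educated_person) :- reports(watches_no_tv).

    % Stage 2: heuristically associate the stereotype with a book class,
    % then refine to a particular title by a subtype link.
    book_class(intelligent_main_character) :-
        person_stereotype(educated_person).

    recommend('Earth Angels') :- book_class(intelligent_main_character).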
Analysis of other programs indicates that this multiple-pass process structure is common. For example, the Drilling Advisor makes two passes on the causes of sticking, considering general. inexpensive data first, just as medical programs commonly consider the “history and physical” before laboratory data. C. SOPHIE Ill Problem: SOPHIE III (Brown. et al., 1982) classifies an electronic circuit in terms of the component that is causing faulty behavior (Figure 5). HEURISTIC MATCH Qualitative Values a of Ports DATA ABSTRACTION t Quantitative Circuit Behavior CAUSAL PROPAGATION t Local Circuit Measurements Behavior at Some Port of Some Module in Behavior Lattice I REFINEMENT Component Fault HEURISTIC (VOLTAGE Nil N14) j Variable Voltage is High Reference is High or OK QUALITATIVE t 1 CAUSE (VOLTAGE Nl 1 N14) > 31 V Q5 Collector Open Figure 5: Inference structure of SOPHIE Discussion: SOPHIE’s set of pre-enumerated solutions is a lattice of valid and faulty circuit behaviors. In contrast with MYCIN, solutions are device states and component flaws, not stereotypes of disorders, and they are related causally, not by features. Data are not just external device behaviors, but include internal component measurements propagated by the causal analysis of the LOCA13 program. Reasoning about assumptions plays a central role in matching hypotheses. In spite of these differences, the inference structure of abstractions, heuristic relations, and refinement fits the classification model, demonstrating its generality and usefulness for describing complex reasoning. IV CAUSAL PROCESS CLASSIFICATION To further illustrate the value of a knowledge level analysis, we describe a generic inference structure common to medical diagnosis programs, which we call causal process cfassificafion, and use it to contrast the goals of electronic circuit and medical diagnosis programs. In SOPHIE, valid and abnormal device states are exhaustively enumerated, can be directly confirmed, and are causally related to component failures. None of this is generally possible in medical diagnosis, nor is diagnosis in terms of component failures alone sufficient for selecting therapy. Medical programs that deal with multiple disease processes (unlike MYCIN) do reason about abnormal states (called pathophysiologic slates, e.g., “increased pressure in the brain”), directly analogous to the abnormal states in SOPHIE. But curing an illness generally involves determining the cause of the component failure. These “final causes” (called diseases, syndromes, etiologies) are processes that affect the normal functioning of the body (e.g., trauma, infection, toxic exposure, psychological disorder). Thus, medical diagnosis more closely resembles the task of computer system diagnosis in considering how the body relates to its environment (Lane, 1980). In short, there are two problems: First to explain symptoms in terms of abnormal internal states, and second to explain this behavior in terms of external influences (as well as congenital and degenerative component flaws). This is the inference structure of programs like CASNET (Weiss, et al., 1978) and NEOMYCIN (Clancey, 1981) (Figure 6). HEURISTIC HEURISTIC (CAUSED BY) (CAUSED BY) Patient a Abstractions DATA ABSTRACTION t Patient Data Pathophysiologic a Disease States and Classes Classes I REFINEMENT Diseases Figure 6: Inference structure of causal process classification A network of causally related pathophysiologic states causally relates data to diseases**. 
The causal relations are themselves heuristic because they assume certain physiologic structure and behavior, which is often poorly understood and not represented. In contrast with pathophysiologic states, diseases are abstractions of processes--causal stories with agents, locations, and sequences of events. Disease networks are organized by these process features (e.g., an organ system taxonomy organizes diseases by location). A more general term for disease is disorder stereotype. In process control problems, such as chemical manufacturing, the most general disorder stereotypes correspond to stages in a process (e.g., mixing, chemical reaction, filtering, packaging). Subtypes correspond to what can go wrong at each stage (Clancey, 1984).

**Programs differ in whether they treat pathophysiologic states as independent solutions (NEOMYCIN) or find the causal path that best accounts for the data (CASNET). Moreover, a causal explanation of the data requires finding a state network, including normal states, that is internally consistent on multiple levels of detail. Combinatorial problems, as well as elegance, argue against pre-enumerating solutions, so such a network must be constructed, as in ABEL (Patil, 1981). In SOPHIE, the LOCAL program deals with most of the state interactions at the component level; others are captured in the exhaustive hierarchy of module behaviors. A more general solution is to use a structure/function device model and general diagnostic operators, as in DART (Genesereth, 1982).

To summarize, a knowledge level analysis reveals that medical and electronic diagnosis programs are not all trying to solve the same kind of problem. Examining the nature of solutions, we see that in an electronic circuit diagnosis program like SOPHIE solutions are component flaws. Medical diagnosis programs like CASNET attempt a second step, causal process classification, which is to explain abnormal states and flaws in terms of processes external to the device or developmental processes affecting its structure. It is this experiential knowledge-what can affect the device in the world-that is captured in disease stereotypes. This knowledge can't simply be replaced by a model of device structure and function, which is concerned with a different level of analysis.

V WHAT IS NON-CLASSIFICATION PROBLEM SOLVING?

We first summarize the applications we have considered by observing that all classification problem solving involves selection of a solution. We can characterize kinds of problems by what is being selected:
• diagnosis: solutions are faulty components (SOPHIE) or processes affecting the device (MYCIN);
• user model: solutions are people stereotypes in terms of their goals and beliefs (first phase of GRUNDY);
• catalog selection: solutions are products, services, or activities, e.g., books, personal computers, careers, travel tours, wines, investments (second phase of GRUNDY);
• theoretical analysis: solutions are numeric models (first phase of SACON);
• skeletal planning: solutions are plans, such as packaged sequences of programs and parameters for running them (second phase of SACON, also first phase of experiment planning in MOLGEN (Friedland, 1979)).

A common misconception is that the description "classification problem" is an inherent property of a problem, opposing, for example, classification with design (Sowa, 1984). However, classification problem solving, as defined here, is a description of how a problem is solved. If the problem solver has a priori knowledge of solutions and can relate them to the problem description by data abstraction, heuristic association, and refinement, then the problem can be solved by classification. For example, if it were practical to enumerate all of the computer configurations R1 might select, or if the solutions were restricted to a predetermined set of designs, the program could be reconfigured to solve its problem by classification.

Furthermore, as illustrated by ABEL, it is incorrect to say that medical diagnosis is a "classification problem." Only routine medical diagnosis problems can be solved by classification (Pople, 1982). When there are multiple, interacting diseases, there are too many possible combinations for the problem solver to have considered them all before. Just as ABEL reasons about interacting states, the physician must construct a consistent network of interacting diseases to explain the symptoms. The problem solver formulates a solution; he doesn't just make yes-no decisions from a set of fixed alternatives. For this reason, Pople calls non-routine medical diagnosis an ill-structured problem (Simon, 1973) (though it may be more appropriate to reserve this term for the theory formation task of the physician-scientist who is defining new diseases).

In summary, a useful distinction is whether a solution is selected or constructed. To select a solution, the problem solver needs experiential ("expert") knowledge in the form of patterns of problems and solutions and heuristics relating them. To construct a solution, the problem solver applies models of structure and behavior, by which objects can be assembled, diagnosed, or employed in some plan.

Whether the solution is taken off the shelf or is pieced together has important computational implications for choosing a representation. In particular, construction problem-solving methods such as constraint propagation and dependency-directed backtracking have data structure requirements that may not be easily satisfied by a given representation language. For example-returning to a question posed in the introduction-applications of EMYCIN are generally restricted to problems that can be solved by classification.

VI KNOWLEDGE LEVEL ANALYSIS

As a set of terms and relations for describing knowledge (e.g., data, solutions, kinds of abstraction, refinement operators, the meaning of "heuristic"), the classification model provides a knowledge level analysis of programs, as defined by Newell (Newell, 1982). It "serves as a specification of what a reasoning system should be able to do." Like a specification of a conventional program, this description is distinct from the representational technology used to implement the reasoning system. Newell cites Schank's conceptual dependency structure as an example of a knowledge level analysis. It indicates "what knowledge is required to solve a problem... how to encode knowledge of the world in a representation."

After a decade of "explicitly" representing knowledge in AI languages, it is ironic that the pattern of classification problems should have been so difficult to see. In retrospect, certain views were emphasized at the expense of others:
• Procedureless languages. In an attempt to distinguish heuristic programming from traditional programming, procedural constructs are left out of representation languages (such as EMYCIN, OPS, KRL (Lehnert and Wilks, 1979)). Thus, inference relations cannot be stated separately from how they are to be used (Hayes, 1977, Hayes, 1979).
• Heuristic nature of problem solving. Heuristic association has been emphasized at the expense of the relations used in data abstraction and refinement. In fact, some expert systems do only simple classification: they have no heuristics or "rules of thumb," the key idea that is supposed to distinguish this class of computer programs.
• Implementation terminology. In emphasizing new implementation technology, terms such as "modular" and "goal directed" were more important to highlight than the content of the programs. In fact, "goal directed" characterizes any rational system and says very little about how knowledge is used to solve a problem. "Modularity" is a representational issue of indexing.

Nilsson has proposed that logic should be the lingua franca for knowledge level analysis (Nilsson, 1981). Our experience with the classification model suggests that the value of using logic is in adopting a set of terms and relations for describing knowledge (e.g., kinds of abstraction). Logic is valuable as a tool for knowledge level analysis because it emphasizes relations, not just implication.

While rule-based languages do not make important knowledge level distinctions, they have nevertheless provided an extremely successful programming framework for classification problem solving. Working backwards (backchaining) from a pre-enumerated set of solutions guarantees that only the relevant rules are tried and useful data considered. Moreover, the program designer is encouraged to use means-ends analysis, a clear framework for organizing rule writing.

VII RELATED ANALYSES

Several researchers have described portions of the classification problem solving model, influencing this analysis. For example, in CRYSALIS (Engelmore and Terry, 1979) data and hypothesis abstraction are clearly separated. The EXPERT rule language (Weiss, 1979) similarly distinguishes between "findings" and a taxonomy of hypotheses. In PROSPECTOR (Hart, 1977) rules are expressed in terms of relations in a semantic network. In CENTAUR (Aikins, 1983), a variant of MYCIN, solutions are explicitly prototypes of diseases. Chandrasekaran and his associates have been strong proponents of the classification model: "The normal problem-solving activity of the physician... (is) a process of classifying the case as an element of a disease taxonomy" (Chandrasekaran and Mittal, 1983). Recently, Chandrasekaran and Weiss and Kulikowski have generalized the classification schemes used by their programs (MDX and EXPERT) to characterize problems solved by other expert systems (Chandrasekaran, 1984, Weiss and Kulikowski, 1984). A series of knowledge representation languages beginning with KRL have identified structured abstraction and matching as a central part of problem solving (Bobrow and Winograd, 1979). Building on the idea that "frames" are not just a computational construct, but a theory about a kind of knowledge (Hayes, 1979), cognitive science studies have described problem solving in terms of classification. For example, routine physics problem solving is described by Chi (Chi, et al., 1981) as a process of data abstraction and heuristic mapping onto solution schemas ("experts cite the abstracted features as the relevant cues (of physics principles)").
The inference structure of SACON, heuristically relating structural abstractions to numeric models, is the same. Related to the physics problem solving analysis is a very large body of research on the nature of schemas and their role in understanding (Schank, 1975, Rumelhart and Norman, 1983). More generally, the study of classification, particularly of objects, also called categonzation, has been a basic topic in psychology for several decades (e.g., see the chapter on “conceptual thinking” in (Johnson-Laird and Wason, 1977)). However, in psychology the emphasis has been on the nature of categories and how they are formed (an issue of learning). The programs we have considered make an identification or selection from a pre-existing classification (an issue of memory retrieval). In recent work, Kolodner combines the retrieval and learning process in an expert system that learns from experience (Kolodner, 1982). Her program uses the MOPS representation, a classification model of memory that interleaves generalizations with specific facts (Kolodner, 1983). VIII CONCLUSIONS A wide variety of problems can be described in terms of heuristic mapping of data abstractions onto a fixed. hierarchical network of solutions. This problem solving model is supported by psychological studies of human memory and the role of classification in understanding. There are significant implications for expert systems research: a The model provides a high-level structure for decomposing problems, making it easier to recognize and represent similar problems. For example, problems can be characterized in terms of sequences of classification problems. Catalog selection programs might be improved by incorporating a more distinct phase of user modelling, in which needs or requirements are classified first. Diagnosis programs might profitably make a stronger separation between device- history stereotypes and disorder knowledge. A generic knowledge engineering tool can be designed specifically for classification problem solving. The advantages for knowledge acquisition carry over into explanation and teaching. 54 l The model provides a basis for choosing application problems. For example, problems can be selected that will teach us more about the nature of abstraction and how other forms of inference (e.g.. analogy, simulation. constraint posting) are combined with classification. #The model provides a foundation for describing representation languages in terms of epistemologic adequacy (McCarthy and Hayes, 1969), so that the leverage they provide can be better understood. For example, for classification it is advantageous for a language to provide constructs for representing problem solutions as a network of schemas. l The model provides a focus for cognitive studies of human categorization of knowledge and search strategies for retrieval and matching, suggesting principles that might be used in expert programs. Learning research might similarly focus on the inference and process structure of classification problem solving. Finally, it is important to remember that expert systems are programs. Basic computational ideas such as input, output, and sequence, are essential for describing what they do. The basic methodology of our study has been to ask, “What does the program conclude about? How does it get there from its input?” We characterize the flow of inference, identifying data abstractions, heuristics, implicit models and assumptions, and solution categories along the way. 
If heuristic programming is to be different from traditional programming, a knowledge level analysis should always be pursued to the deepest levels of our understanding, even if practical constraints prevent making explicit in the implemented program everything that we know. In this way, knowledge engineering can be based on sound principles that unite it with studies of cognition and representation. References Aiello, N. A comparative study of control strategies for expert systems: AGE implementation of three variations of PUFF, in Proceedings of the National Conference on AI. pages l-4. Washington, D.C., August, 1983. Aikins J. S. Prototypical knowledge for expert systems. Artt3ciaf Intelligence, 1983, 20(2), 163-210. Bennett, J., Creary, L., Englemore, R., and Melosh, R. SACUN: A knowledge-based consultant for structural analysis. STAN-CS-78-699 and HPP Memo 78-23. Stanford University, Sept 1978. Bobrow, D. and Winograd, T. KRL: Another perspective. Cognitive Science, 1979, 3,29-42. Brown, J. S., Burton, R. R., and de Kleer, J. Pedagogical, natural language, and knowledge engineering techniques in SOPHIE I, II, and III. In D. Sleeman and J. S. Brown (editors), intelligent Tutoring Systems, pages 227-282. Academic Press, 1982. Chandrasekaran, B. Expert systems: Matching techniques to tasks. In W. Reitman (editor), AI Applications for Business, pages 116-132. Ablex Publishing Corp., 1984. Chandrasekaran, B. and Mittal, S. Conceptual representation of medical knowledge. In M. Yovits (editor), Advances in Computers, pages 217-293. Academic Press, New York, 1983. Chi, M. T. H., Feltovich, P. J., Glaser, R. Categorization and representation of physics problems by experts and novices. Cognitive Science, 1981, 5, 121-152. Clancey, W. J. and Letsinger, R. NEOMYCIN: Reconfiguring a rule- based expert system for application to teuching, m Proceedings of the Seventh International Joint Conference on A rt rficral Intelligence, pages 829-836, August, 1981. (Revised version to appear in Clancey and Shortliffe (editors), Readmgs in Medical Artificial Intelligence: The First Decade, Addison-Wesley, 1983). Clancey, W. J. The advantages of abstract control knowledge in expert system design. in Proceedings of the National Conference on AI, pages 74-78, Washington, D.C., August, 1983. Clancey, W. J. The epistemology of a rule-based expert system: A framework for explanation. Artificial Intelligence, 1983, 20(3), 215-251. Clancey, W. J. Acquiring, representing, and evaluuting a competence model of diagnosis. HPP Memo 84-2, Stanford University. February 1984. (To appear in Chi, Glaser, and Farr (Eds.), The Nature of Expertise, in preparation.). Engelmore, R. and Terry, A. Structure and function of the CR YSAL.IS system, in Proceedings of the Sixth International Joint Conference on Artificial Intelligence, pages 250-256, August, 1979. Feigenbaum, E. A. The art of artificial intelligence: I. Themes and case studies of knowledge engineering, in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 1014- 1029, August, 1977. Friedland, P. E. Knowledge-based experiment design in molecular genetics. Technical Report STAN-CS-79-771, Stanford University, October 1979. Cienesereth, M. R. Diagnosis using hierarchical design models, in Proceedings of the National Conference on AI, pages 278-283, Pittsburgh, PA, August, 1982. Hart, P. E. 
Observations on the development of expert knowledge-based systems, in Proceedings of the Fifrh International Joint Conference on Artificial Intelligence, pages lOOl- 1003, August, 1977. Hayes, P.J. In defence of logic, in Proceedings of the Fifrh International Joint Conference on Art$cial Intelligence, pages 559-565, August, 1977. Hayes, P. The logic of frames. In D. Metzing (editor), Frame Conceptions and Text Understanding, pages 45-61. de Gruyter, 1979. Hayes-Roth, B. and Hayes-Roth, F. A cognitive model of planning. Cognitive Science, 1979, 3, 275-310. Hayes-Roth, F., Waterman, D., and Lenat, I). (eds.). Building expert systems. New York: Addison- Wesley 1983. Johnson-Laird, P. N. and Wason, P. C. Thinking: Readings in Cognitive Science. Cambridge: Cambridge University Press 1977. Kolodner, J. L. The role of experience in development of expertise, in Proceedings of the National Conference on AI, pages 273-277, Pittsburgh, PA, August, 1982. Kolodner, J. Maintaining organization in a dynamic long-term memory. Cognitive Science, 1983, 7, 243-280. Lane, W. G. Input/output processing. In Stone, H. S. (editor), Introduction to Computer Architecture, 2nd Edition, chapter 6. Science Research Associates, Inc., Chicago, 1980. Lehnert, W., and Wilks, Y. A critical perspective on KRL. Cognitive Science, 1979,s. l-28. McCarthy, J. and Hayes, P. Some philosophical problems from the standpoint of Artificial Intelligence. In B. Meltzer and D. Michie (editors), Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969. McDermott, J. Rl: A rule-based configurer of computer systems. Artificial Intelligence, 1982, 19(f), 39-88. Newell, A. The knowledge level. Artificial Intelligence, 1982, 18(l), 87-127. Nilsson, N. J. The interplay between theoretical and experimental methods in Artificial Intelligence. Cognition and Brain Theory, 1981, 4(l), 69-74. Patil, R. S., Szolovits. P., and Schwartz, W. B. Causal understanding of patient illness in medical diagnosis, in Proceedings of the Seventh international Joint Conference on Artificial Intelligence. pages 893-899, August, 1981. Pople, H. Heuristic methods for imposing structure on ill-structured problems: the structuring of medical diagnostics. In P. Szolovits (editor), Artificial Intelligence rn Medicine, pages 119- 190. Westview Press, 1982. Rich, E. User modeling via stereotypes. Cognitive Science, 1979, 3, 355-366. Rumelhart, D. E. and Norman, D. A. Representation in memory. Technical Report CHIP-116, Center for Human Information Processing, University of California, June 1983. Schank, R. C., and Abelson, R. P. Scripts, Plans, Goals, and Understanding. Hillsdale. NJ: Lawrence Erlbaum Associates 1975. Simon, H. A. The structure of ill structured problems. Artificial tntelligence, 1973, 4, 18 l-20 1. Sowa, J. F. Conceptual Structures. Reading, MA: Addison-Wesley 1984. Swartout W. R. Explaining and just&ins in expert consulting programs, in Proceedings of the Seventh International Joint Conference on Artificial /ntelligence, pages 815-823, August. 1981. van Melle. W. A domain-independent production rule system for consultation programs. in Proceedings of the Sixth International Joint Conference on Artificial intelligence, pages 923-925. August, 1979. Weiss, S. M. and Kulikowski, C. A. EXPERT: A sysiem for developing consultation models, in Proceedings of the Sixth fnternational Joint Conference on Artificial Intelligence, pages 942-947, August. 1979. Weiss, S. M. and Kulikowski, C. A. A Practical Guide IO Designing Expert Systems. 
Totowa, NJ: Rowman and Allanheld 1984. Weiss, S. M., Kulikowski, C. A., Amarel. S.. and Safir, A. A model- based method for computer-aided medical decision making. Artificial Intelligence, 1978, Il. 145-172. 55
Knowledge Inversion

Yoav Shoham and Drew V. McDermott
Department of Computer Science, Yale University
Box 2158 Yale Station, New Haven, CT 06520

Abstract

We define the direction of knowledge, and what it means to extend that direction. A special case is function inversion, and we give three algorithms for function inversion. Their performance on non-trivial problems and their shortcomings are demonstrated. All algorithms are implemented in Prolog.

1 Introduction.

Given a manual describing how to assemble a machine, we can usually use that manual to disassemble the same machine; given our knowledge of differentiation of algebraic functions, we can integrate a variety of functions. On the other hand, while it is trivial to disarrange Rubik's Cube it is less trivial to arrange it, as many have discovered to their frustration. We can then ask ourselves two questions:
• Can we characterize the instances of "easily invertible" knowledge?
• Can we automate the inversion of procedural knowledge in those easy cases?
In this paper we mainly ignore the first question, but give a partial positive answer to the second one. We present essentially three different algorithms for function inversion and demonstrate their power and weaknesses. Our algorithms are implemented in Prolog ([Clocksin & Mellish 81]), which may seem at first a bit strange since the popular view of Prolog is as a "declarative" language. In section 2 we dispel this optical illusion, which oddly enough is sometimes encouraged by the logic programming community itself. Our algorithms could be written in any applicative language that employs backtracking; Prolog happens to be particularly convenient because of the explicit representation of the output variables (or perhaps this is a post-hoc rationalization by the first author of his enthusiasm for the language - the reader may be the judge of that). We do not rely on the formalism of logic programming, but the reader is expected to have a basic understanding of deductive systems like Prolog or DUCK ([McDermott 82]) and of the syntax of Prolog.

¹This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored under the Office of Naval Research under contract N00014-83-K-0281.

2 Directed relations.

Consider the familiar Quicksort, defined by, say:

    qsort([H|T],S) :- split(H,T,A,B), !,
        qsort(A,A1), qsort(B,B1),
        append(A1,[H|B1],S).
    qsort([],[]).

    split(H,[A|X],[A|Y],Z) :- order(A,H), split(H,X,Y,Z).
    split(H,[A|X],Y,[A|Z]) :- order(H,A), split(H,X,Y,Z).
    split(_,[],[],[]).

    order(A,B) :- A < B.

One would expect invocation of the goal qsort(X,[1,2,3]) to bind X successively to all six permutations of [1,2,3]. What in fact will happen is that the interpreter will return two error messages and fail. Other cases are still worse - replacing Quicksort by Insertionsort will cause the interpreter to go into an infinite recursion, and similar disasters will happen with Bubblesort. The problem is obviously that goals are invoked with the "wrong" arguments instantiated. In this case we might say that sortname(X,Y) is a function² from X to Y rather than a relation on X and Y.
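For contrast, some predicates can already be run in more than one direction as written. The fragment below shows ordinary list concatenation (named app/3 here only so that it cannot clash with a system-supplied append/3); the behavior shown in the comments is standard Prolog and is included purely to make the coming definitions concrete.

    app([], L, L).
    app([H|T], L, [H|R]) :- app(T, L, R).

    % Used in the "forward" direction:
    %   | ?- app([1], [2,3], Z).        Z = [1,2,3]
    %
    % Used in the "backward" direction, enumerating every split of the output:
    %   | ?- app(X, Y, [1,2,3]).
    %        X = [],      Y = [1,2,3] ;
    %        X = [1],     Y = [2,3]   ;
    %        X = [1,2],   Y = [3]     ;
    %        X = [1,2,3], Y = []      ;
    %        no

Unlike qsort/2 above, which behaves only as a function from its first argument to its second, app/3 is usable with either the first two arguments or the last argument instantiated.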
More generally one can make the following definitions: Definition: A Prolog predicate R with a given intended extension is said to be a function from Sl to S2 if <Sl,S2> is a partition of the set of all variables appearing in R, and for all invocations of R with all the variables in Sl instantiated, all the tuples in the intended extension of R matching the instantiation of variables in Sl will be fairly generated. For our purposes a partition of a set S is a tuple <Sl,S2> of disjoint sets whose union is S. A fair generation of a sequence is one in which any given element is generated after a finite amount of time. Deffnition: A Prolog predicate R with a given intended extension is said to be D-directed relation if D is a set of From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. tuples {<Sli,SL?i>} such that R is a function from Sli to S2i for all i. Note that a function from Sl to S2 is a special case of a directed relation, one that is {<Sl,S2>}-directed. Definition: A Prolog predicate R is called complete if it is D-directed for D the set of all partitions of the set of variables in R. It is not immediately clear what the direction a given predicate in a given program is - the traditional view encourages regarding it as complete, while typically it is written as a function. However once a predicate is identified as a function a question that arises naturally is whether its directionality can be extended, perhaps even so as to make it complete (in the latter case we will say that the predicate had been completed). A special case is where the directed relation is a function from Sl to S2, and we want to extend it to be { <Sl,S2>,62,Sl>}-d irected, that is we want to invert the function. In another paper ( [Shoham & McDermott 841) we describe a general procedure for exploring a directionality of a given predicate in a given program. Here we restrict the discussion to function inversion, which is the subject of the next section. 3 Function inversion The general problem of function inversion is hard and suggests some immediate caveats. For example a solution to the general problem would yield a factoring algorithm and a statement on Fermat’s last theorem. Remember however that we are not trying to invert all functions, but rather are investigating which ones are easily invertible. Thus the algorithms we present are really heuristics for function inversion. In this section we are concerned with a detailed description of the algorithms and their performance; we return to more global considerations in section 4. 296 We first present a simple inversion algorithm which stated roughly reverse says “Given a conjunctive goal order. Given a single goal solve the conjuncts in reduce it if possible, otherwise execute it”. The precise Prolog implementation is given in Figure 1. When we apply the above algorithm to the sorting program from section 2 we observe the following behavior:3 Example 1: inverting Quickeort 1. I ?- invgoal( qsortK[1,2,31) X = [1,2,31 ; *Since our formalization ser ves mainly to provide intuition for the remainder of the paper, we allow ourselves some freedom in using the terminology. As we will define the term /unction it will always denote a nondeterminstic function. X = [1,3,2] ; X = [2,1,3] ; X = [2,3,11 ; X = [3,1,2] ; X = [3,2,1] ; no I ?- which is indeed what is required. However this inversion procedure is too simplistic as it does not take into account some of Prolog’s idiosyncracies. 
In Figure 2 we present a procedure that adopts the same basic algorithm, but pays more respect to special Prolog features. Armed with this slightly more meaty some more inversions. The next example algorithm we can do brings us back to our original motivation, that of inverting the solution of counting problems in combinatorics. Since the example is not trivial, and because we think automating the solution of problems in combinatorics is of interest in itself, this example will be a bit long and the reader’s indulgence is requested. In [Shoham 841 we describe a program (FAME I) for proving combinatorial equalities by combinatorial arguments. The general structure of proving two expressions equal by a combinatorial argument is showing that both are a correct solution to the same counting problem. An example of an equality is N*c(N-l,R-l)=R*c(N,R), where c(X,Y) stands for “X choose Y”. An example of a combinatorial proof of this equality is that both describe the number of ways to choose a team of R players from N candidates and appoint a captain from among them. The first expression describes the process of first choosing the captain and then the rest of the team, and the second expression describes the process of first choosing the whole team and then the captain. In that paper we pointed out the shortcomings of our program, namely that the knowledge of counting was only implicit in it and there was no obvious way to gracefully extend the program to handle other problems in combinatorics. The “correct” way to go about it, we said, was to write a program (FAME II) that solved counting problems. Then another program could be written that used the knowledge of FAME II to synthesize a program similar to FAME I, by inverting the knowledge of counting. Figure 3 is an example of a counting problem solved by FAME II (translated into English it reads “In how many ways can you choose a set set2 of size r from a set set1 of size n, and choose a set set3 of size 1 from set2?“). We now ask the converse question - “What counting problem is the expression c(n,r)*c(r,l) a solution to” by inverting count. The result is shown in figure 4. 3All the examples version 3.47. in this paper were done on a running Prolog- 10 invgoal((A,B)) :- !, invgoal(B),invgoal(A). invgoal(A) :- !,clause(A,B),invgoal(B). invgoal(A) :- call(A). Figure 1: Algorithm 1: A simple inversion invgoal(invgoal(X)) :- call(X). invgoal(assert(X)) :- retract(X). invgoal(asserta(X)) :- retract(X). invgoal(retract(X)) :- assert(X). invgoal(A is B+C) :- var(B),B is A-C. % and any other invgoal(A is B+C) :- var(C),C is A-B. % mathematical inversions invgoal(A is B-C) :- var(B),B is A+C. % needed; see below. invgoal(A is B-C) :- var(C),C is B-A. % invgoal(A is -B) :- B is -A. % invgoal((A,B)) :- ! ,invgoal(B),invgoal(A). invgoal(A) :- !,clause(A,B),invgoal(B). invgoal(A) :- call(A). Figure 2: . Algorithm 2: A less simple inversion, 1 ?- count([(setl,n),(set2,r),(set3,1)], [subset(set3,set2),subset(set2,setl)], Solution). Solution = c(r,l)*c(n,r) Figure 3: Solving a counting problem 1 ?- inv(gensgm(X,input7)). 1 ?- invgoal(count(X,Y,c(n,r)*c(r,l))). ** Error: evaluate( -246) X = ((_241,r),(_368,1),(_242,n)l_832], Y = jsubset(-241,- 242),subset(-368,-241)] ; X = 1(_369,r),(_368,1),(_242,n),(_241,1)1_948], Y = (subset(-241,- 242),subset(-368,-369)) Figure 4: Example 2: inverting Count The next algorithm, Algorithm 3, may seem at first sight like an elaborate version of Algorithm 2. 
It has two phases - in the first interactive phase the system inverts functions, asserts their inverse to the database and writes them to a file - all according to the user's specification. In the second independent phase the inverted code is simply run. As it is presented here, the inverse of a function F is called inv(F). The algorithm traverses the computation tree and whenever a goal is unifiable with a head of a clause A :- B, the user is given the choice of continuing along that branch of the tree or quitting it. Continuing means asserting the clause inv(A) :- inv(B), and recursing on B.⁴ This is in contrast to the previous algorithm where if a goal is unifiable with a head of a clause the algorithm will definitely recurse on the body of that clause. The advantage of Algorithm 3 is that the user can detect infinite recursion during the inversion phase, and prevent it from occurring during runtime. The disadvantage is that when the user decides to quit pursuing a branch of the tree he may lose information. The example we choose is the inversion of a function with side effects. The predicate gensym is defined in [Clocksin & Mellish 81] (p. 150) and since our definition is very similar we will not repeat it here.

    | ?- findinv(gensym(X,Y)).
    Do you want the resulting code asserted in the database? (y/n)
    |: y.
    (Where) do you want to save the resulting code? (filename/no)
    |: no.
    Do you want to invert the goal gensym(_31,_52)? (y/n)
    |: y.
    X = input
    yes
    | ?-

    Figure 5: Example 5: inverting gensym

Finally, we demonstrate that the above algorithms will not suffice to invert all functions. Consider the following program:

    f([a|X]) :- g(X).
    f([b|X]) :- g(X).
    g([c,_]).
    g(X) :- f(X).

⁴The limitations imposed on the length of this presentation prohibit a more detailed description of the algorithm or a complete I/O log.

Considered as a function from [X] to [], f(X) acts as a recognizer for the regular language (a+b)*.c.Σ. Inverting f would cause it to act as a "fair" generator of the same language (in the sense defined in section 2). The reader should convince himself that none of the above algorithms will invert f.

At this point we should mention an obvious non-solution to all inversion problems (and predicate redirection in general) - conduct a breadth-first search of the computation tree. Both aspects of its "non-solutioness" (namely, its theoretical completeness and impracticality) can be demonstrated on the above program. We have implemented a breadth-first theorem-prover in Prolog; invoking the goal bf(G) will initiate such a proof.

    Example 6: Generating the language (a+b)*.c.Σ

    | ?- bf(f(X)).
    X = [a,c,_312] ;
    X = [b,c,_516] ;
    X = [a,a,c,_1342] ;
    X = [a,b,c,_1618] ;
    X = [b,b,a,a,c,_14865] ;
    X = [b,b,a,b,c,_15398] ;
    X = [b,b,b,a,c,_15941] ;
    ! more core needed
    [ Execution aborted ]

4 Related work, Summary, Further Research.

4.1 Discussion of related work.

In 1950 McCarthy addressed the problem of inverting recursive functions [McCarthy 56], pointing out the difficulty of the problem.⁵ The one method he discussed explicitly is the enumeration procedure, which is the analog of proving a theorem by systematically generating English text and testing to see if the text is a correct proof of the theorem. He speculated on what would be needed to improve upon this procedure, and one can consider the work described here a continuation of those speculations. More recently Dijkstra has also considered the problem of program inversion.
In [Dijkstra 831 he gives a (manual) inversion of the vector inversion problem. As he himself says, 5We do not agree with his claim there that solving any “well specified” problem amounted to the inversion of some Turing Machine. In our notation a specification procedure is a {<S,[]>}-directed relation R (i.e. a function) for some S and R, while the algorithm solving it is not the { <[],S>}-directed R but rather the {<Sl,S2>}-directed R for some partition <Sl,S2> of S. This however does not affect the relevance 01 his subsequent discussion of inverting functions defined by Turing Machines. that inversion is straightforward because “the algorithm is deterministic and no information is lost”, while the general inversion problem remains open. In an interesting paper Toffoli ( [Toffoli 80)) suggests a way of transforming any computational circuit to an equivalent invertible one with a worst case additional cost of doubling the number of channels. While the scope of this paper does not permit a detailed discussion of his work, there are two basic ideas - add “redundant” information to insure function inversion, and try to reduce entropy by making the redundant information to one function be essential information for another function. The motivation behind that work is different from ours, but we feel that the two basic ideas may carry over (see last subsect ion). Other references to theoretical work on reversible computations are [Bennett 731, [Burks 711, [Toffoli 771. 4.2 Summary. l We suggest viewing Prolog predicates as denoting directed relations. For a predicate denoting a relation with a certain direction, we asked whether its direction can be extended. A major part of the paper has been concerned with the special case of function inversion. l We have presented two effective algorithms for inverting functions - Algorithm 2 and Algorithm 3. Both involve reversing the bodies of encountered clauses, but the latter is more selective in which clauses are inverted. Both allow for extra-logical features of Prolog, namely inverting assert/retract and arithmetic operations. The treatment of the latter is very cursory and ad-hoc, and if any non- trivial inversion of mathematical functions is desired the question of the representation of mathematical objects requires closer attention. l It has been demonstrated that these algorithms are effective in some non-trivial cases, and that there exist functions not invertible by either. The exact characterization of functions invertible by each algorit,hm has not been given (see discussion below). l A complete yet impractical algorithm for predicate redirection has been presented (namely a breadth- first search of the computation tree) and its performance has been demonstrated. 4.3 Further research. We repeat the two questions posed in the introduction: 0 Can we characterize the instances of “easily invertible” knowledge? l Can we automate the inversion of procedural knowledge in those easy cases? As we said there, we only gave a partial answer to question. The task remains, then, to complete that the second answer and provide one for the first question. Several ways of approaching the first half of the task suggest themselves. First, we have not explored the power of combining techniques - for example perform Algorithm 2 and add the clause i nvgoa I (bf (G) > : - bf (G). A related issue that needs exploring is how to make ail implicit knowledge explicit. 
For example, if we write the definite clause f (X, X, Z) with the intention that f(X,Y,Z) be used as a function from [X,Y] to [Z], we sometimes ignore additional (overdetermining) information, for example that if X=Y then Z=[]; th is sort of information may be crucial for the inversion of f. While we have mentioned and discredited BFS as a sole strategy for searching the computation tree, we have not mentioned other possible strategies. One obvious candidate is a probablistic one - the interpreter could flip a coin to decide on which clause to resolve against the current goal, and even to decide on the ordering of a clause body. Another approach could be more in the spirit of mainstream AI, that the choice of ordering itself be a knowledge-intensive problem solving task. A recent paper ( [Smith & Genesereth 831) has concerned itself with part of the problem, that of deciding on the optimal ordering of conjuncts in the simple case where those conjuncts resolve only against unqualified assertions in the data base (that is, no further inference is necessary). Finally, Toffoli’s work suggests both an approach for answering the first question as well a technique answering the second one. It implies that one should look for an appropriate measure of entropy in the computation, and try to minimize it. In the case where the entropy is zero the computation is invertible. Where the we cannot eliminate the loss of information, we should try to supply excess information at the start, so that we could reconstruct just the “right” subset of it later. This also suggests application of the automatic programming paradigm, whereby the process of adding redundant information to an existing piece of code is automated. We have allowed ourselves some free speculation in this last subsection, which reflects our excitement with the possibilities. It is not clear why the problem has been largely neglected - for example in the survey of machine learning ( [Michalski et al 831) there is no ment)ion of knowledge inversion as part of skill acquisition. Whether knowledge inversion is classified as part of a learning process or not it see& a fundamental capability of people, and AI will benefit much frqm a better understanding of it. ACKNOWLEDGMENTS Thanks go to Tom Dean, Stan Letovsky, Dave Miller and Jim Spohrer for helpful comments on previous drafts. We &o thank one referee for correctly pointing out related work by S. Sickel of which we had not been aware. References [Bennett 731 Bennett, C.H. Logical Reversibility of Computation. IBM J. Res. Dev. 6, 1973. (Burks 711 Burks, A.W. On Backwards-Deterministic, Erasable, and Garden-of-Eden Automata. Technical Report 012520-4-T, Comp. Comm. Sci. Dept., University of Michigan, 1971. [Clocksin & Mellish 811 [Dijkstra 83) Clocksin, W.F. and Mellish, C.S. Programming in Prolog. Springer-Verlag, 1981. Dijkstra, E.W. [McCarthy 561 [McDermott 821 In Shannon, C.E. and McCarthy, J. (editor), Automata Studies, . Princeton University, Press, 1956. McDermott, D.V. DUCK: A Lisp-Based Deductive System. Yale University, Department of Computer . Science , 1982. [Michalski et al 83) Michalski, R.S., Carbone!!, J.G. and Mitchell, T.M. Machine Learning: An Artificial Intelligence approach. ,!iJ’WD671: Program Inversion. Springer-Verlag, 1983, . McCarthy, J. The Inversion of Functions Defined by Turing Machines. [Shoham 841 Tioga, 1983. Shoham, Y. FAME: A Prolog Program That Solves Problems in Combinatorics. In Proc. 2nd Intl. Logic Programming Conf.. Uppsala, Sweden, 1984. to appear. 
[Shoham & McDermott 841 Shoham,Y. and McDermott, D.V. Prolog Predicates as Denoting Directed Relations. submitted , 1984. [Smith & Genesereth 831 Smith, D.E. and Genesereth, M.R. Ordering Conjuncts in Problem Solving. Computer Science Department, Stan ford Uniuersity , 1983. unpublished at this time. [Toffoli 771 Toffoli, T. Computation and Construction Universality of Reversible Cellular Automata. J. Comp. Sys. Sci. 15, 1977. [Toffoli 801 Toffoli, T. Reversible Computing, Technical Report MIT/LCS/TM-151, Laboratory for Computer Science, MIT, February, 1980. 299
YEWMVS: A Continuous Real Time Expert System J.H. Griesmer, S.J. Hong, M. Karnaugh, J.K. Kastner, M.I. Schor Expert Systems Group, Mathematical Science Department R.L. Ennis, D.A. Klein, K.R. Milliken, H.M. VanWoerkom Installation Management Group, Computer Science Department IBM T. J. Watson Research Center Yorktown Heights, NY 10598 ABSTRACT: YES/MVS (Yorktown Expert System for MVS opera- tors) is a continuous, real time expert system that exerts interactive control over an operating system as an aid to computer operators. This paper discusses the YES/MVS system, its domain of application, and issues that arise in the design and development of an expert system that runs continuously in real time. I INTRODUCTION Expert systems techniques are beginning to be success- fully applied to real problems in industry, although only a handful are reportedly in use so far. Most of the applica- tions are consultation oriented, run in a session or in a batch mode, and deal with a static world. The nuclear reactor monitoring expert system, REACTOR [ 11, and the patient monitoring expert system for intensive care units, VM [2], are among the few attempts at continuous on-line operation and real time processing. However, neither of these sys- tems exercise any real time interactive control over the subject being monitored. The Yorktown Expert System for MVS operators (YES/MVS) is a real time interactive con- trol system that operates continuously. The idea of on-line monitoring or controlling of one computer by another is not new. Watch-dog processors [3] and maintenance processors [4] have been designed to assist in the recovery from software errors and hardware errors while the subject computer is in operation. What is new is the application of an expert system approach to the control of computer operations. A. Importance of the Domain Computer operations is a monitoring and problem solv- ing activity that must be conducted in real time. It is be- coming increasingly complex as data processing installations grow. Large data processing installations often involve multiple CPU’s and a large number of peripherals networked together, representing a multi-million dollar in- vestment. Many of the installations run real time applica- tions (e.g., banking, reservations systems). The control of a typical large system rests largely in the hands of just a few operators. Besides carrying on such routine activities as mounting tapes, loading and changing forms in printers, and answering phones, an operator continuously monitors the condition of the subject operating system and initiates queries and/or commands to diagnose and solve problems as they arise. A long training period is required to produce a skilled operator; trained operators, in turn, are often pro- moted to systems programmers. The resulting shortage of skilled operators and the increasing complexity of the op- erator’s job calls for more powerful installation manage- ment tools. We have chosen the management of a Multiple Virtual Storage (MVS) operating system, the most widely used operating system on large IBM mainframe computers, as an example of the application of expert systems to problems in computer installation management. B. Use of Expert System Techniques Each installation has a different configuration and dif- ferent local policies for computer operations, both of which change over time. The software running in a large com- puter installation represents hundreds of man-years of de- velopment and is comprised of many interacting subsystems. 
To deal with such complexify, operators and system programmers often rely on many rules of thumb gained through experience. The development of installa- tion management tools which can be easily tailored and modified, and which can incorporate such “rules of thumb” are highly desirable. An expert systems approach was a natural choice because of its flexibility and maintainability. C. Special Challenges There are many new requirements in building a real time expert system to assist a computer operator. The environ- ment is too complex and dynamic to allow for obtaining information by querying the human operator. Unlike many of the consultation style expert systems (e.g., MYCIN [5], CASNET [6]), this means that conclusions are based on primitive facts obtained directly from the system being monitored and not from human interpreted inputs. In ad- dition, the dynamic nature of the subject system introduces potential inconsistencies in the expert system’s model of the subject MVS system. By the time conclusions are to be put into effect (recommendations made, actions taken), many of the facts from which those conclusions were derived may have changed. This complexity and dynamic character make it very difficult to simulate or model the subject sys- tem. Developing and debugging such an expert system presents yet another interesting challenge. D. The OPS5 Base To be able to handle real time on-line data, the inference engine needs to be mainly data driven (also recognized by REACTOR [l] and VM [2]). The OPS5 production system developed by C. L. Forgy [7] was chosen as our tool pri- marily for this reason. Also, significant applications based 130 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. on the OPS family of production systems have been re- ported (Rl/XSEL/PTRANS [8-lo] and ACE [ 111). Of importance was our ability to convert OPS5 to run in our computing environment. While OPS5 was not directly us- able for our application, it possessed the important proper- ties of flexibility (stemming from being a low level language) and easy modifiability. We have made a number of important extensions to OPS5 so that our continuous real time requirements could be met. II THE DOMAIN The MVS operating system running with a Job Entry System (JES), puts out various system messages to the op- erator. While there are literally hundreds of different types of messages, the number that are relevant to an operator is much smaller. The majority of purely informational mes- sages may usually be statically filtered and diverted to a log. Even then, the peak message rate from MVS to the opera- tor sometimes exceeds 100 a minute (e.g., a response to an operator request for a list of jobs in a particular category). When a (potential) problem is detected, the operator may query MVS for additional information and send one or more corrective commands. The operator must often anticipate informational needs and dynamically keep track of a number of relevant status variables (out of thousands). There are many different subdomains in the domain of operator activities. The six subdomains described below were selected for early implementation, since they touch a majority of those operator activities which involve no physical intervention. A. JES Oueue Snace Management All batch jobs processed under MVS are staged from a central spool file, called the Job Entry System (JES) queue space, before, during, and after execution. 
The operator is concerned with the remaining available queue space, be- cause the job staging subsystem, JES, cannot recover if queue space is exhausted. When the level of remaining queue space becomes critically low, many actions are initi- ated to free additional space, such as forcing the printing of jobs which have finished execution and dumping large print jobs to tape. In extreme cases, the system can be made to refuse new jobs, and stop data being transmitted from other systems. To initiate such actions the operator makes use of the available facilities connected to the sub- ject MVS system. This means the operator has to perform some anticipatory actions (e.g., mounting a tape to dump jobs) as queue space decreases, and before it becomes crit- ical. B. Problems in Channel-to-Channel Links The networking of computers at the same site is often implemented by means of I/O channel-to-channel trans- mission links. Failure to maintain these links in an active status not only delays data traffic but also contributes to the exhaustion of JES queue space. Monitoring and corrective actions include: periodic querying of the states of these links, using heuristics to infer line degradation, attempting to restart the links, freeing links from troublesome jobs, and rerouting the data through other computers. C. Scheduling Large Batch Jobs Off Prime Shift Large batch jobs must be scheduled to balance consid- erations of system throughput and user satisfaction. These considerations may vary in detail from one installation to another. These include: ensuring that no jobs are indefi- nitely delayed, employing round-robin scheduling among users submitting multiple jobs, giving priority to users who are waiting on site, or require some other special consider- ation, running longer jobs early in the shift and running only those jobs that can finish before a scheduled shut- down. Since new jobs may arrive or be withdrawn during the shift, initial scheduling may have to be changed among the jobs that are still in the queue. A separate paper [ 121 describes a truth maintenance approach using OPS5 for keeping a dynamically correct priority assignment of the jobs. D. MVS Detected Hardware Errors When MVS fails to recover from a detected hardware error, the system notifies the operator so that he or she may attempt to solve the problem. Due to the time criticality of possible remedies (such as speedy reconfiguration), recov- erable situations may result in a system crash since a human operator cannot respond in time. Responses to the most frequent hardware problems have been implemented in rules. These rules are not tied to a particular hardware configuration but rather make use of hardware configura- tion data placed in the OPS5 working memory. The hard- ware configuration data is initially loaded used in the MVS system generation process. E. Monitoring Software Subsystems from the files The main activity in this area is to generate informative incident reports for the systems programmers who are re- sponsible for specific software subsystems. When an inci- dent occurs, such as an abnormal end of execution, relevant information is captured and an appropriate incident report is prepared. In limited cases, reallocation of resources may allow recovery from software failures. F. Performance Monitoring This task goes beyond the usual scope of an operator’s activities. 
A short term goal is to interpret the data from existing performance monitoring software, and automatically detect and classify performance problems in real time, generating summary reports in hard copy as well as in computer graphics. An eventual goal is to diagnose the cause of performance problems and to take corrective actions.

III THE YES/MVS SYSTEM

A. System Organization

YES/MVS runs under the VM/370 operating system on an IBM 3081 computer. Because YES/MVS and the subject MVS system are resident in different computers, problems in MVS do not interfere with the operation of YES/MVS, and MVS can continue to operate under manual control should the YES/MVS host experience difficulties.

Figure 1. YES/MVS System (block diagram showing the subject MVS machine and, on the host VM/370 machine, the CCOP virtual machine, the expert virtual machine, and the interface virtual machine)

YES/MVS is presently partitioned into three virtual machines for speed as well as functional separation. One of these contains the MVS operator expert, the second one contains the MVS Communications Control Facility (MCCF), and the third one is used to control the YES/MVS operator's display console. MCCF communicates with the MVS system through a separately developed facility, called CCOP [13]. CCOP provides centralized control and filtering of messages between the various computers in an installation and their operators.

Intelligence in the form of OPS5 rules is distributed between the expert virtual machine and the display controller. These machines communicate with each other via the REMOTE-MAKE mechanism, which is described in the next section. MCCF is implemented in several thousand lines of the system exec language, REXX [14].

MCCF acts as a message filter and also translates the messages between the expert system and MVS. For example, a JES command generated by the MVS operator expert,

  (reroute-print-job-from-3211-to-3800-printer)

together with three parameters: job name, 3211 address, and 3800 address, is translated and sent to JES as

  8f u j=SAMPLE, d=6C0, nd=00E

B. Operator's Console

The YES/MVS operator console displays one-line messages on the top level input screen describing events relating to the various tasks YES/MVS is concerned with. The operator may select one of the displayed messages and request further detail. The detail level screen contains the recommended action or information along with an explanation. If an action is called for, the operator is given the choice of automatically issuing the command (U-DO), showing that he or she will manually type the recommended command at another terminal (I-DID), or rejecting the command being proposed (NO-DO). If the command is rejected (NO-DO), the operator is prompted to enter the reasons for the rejection, which is fed back to the YES/MVS knowledge engineers. The action screen is intended only for use during a pre-certification phase. Once a particular command is certified, the operator display machine will send the commands to MCCF without asking the operator. For certified commands, the detail screen only displays information on the action taken along with a justification. Another second level screen allows operators to enter unsolicited information or requests.
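The certification-gated flow just described (automatic issue with U-DO, manual issue with I-DID, rejection with feedback via NO-DO, and direct submission to MCCF once a command is certified) can be summarized in a small sketch. The Python below is only an illustration of that control flow; it is not YES/MVS code, and the function names and callback shapes are assumptions.

  def route_recommendation(command, certified, send_to_mccf, ask_operator, log_rejection):
      """Dispatch one recommended command from the display machine.

      certified     -- True once this command type has passed pre-certification
      send_to_mccf  -- forwards the command text to the MCCF virtual machine
      ask_operator  -- shows the detail screen; returns (choice, reason), where
                       choice is "U-DO", "I-DID", or "NO-DO"
      log_rejection -- records a NO-DO reason for the knowledge engineers
      """
      if certified:
          send_to_mccf(command)              # certified commands bypass the action screen
          return "issued automatically"
      choice, reason = ask_operator(command)
      if choice == "U-DO":
          send_to_mccf(command)              # operator approved automatic issue
          return "issued on approval"
      if choice == "I-DID":
          return "operator issued the command manually"
      if choice == "NO-DO":
          log_rejection(command, reason)     # feedback for rule refinement
          return "rejected"
      raise ValueError("unexpected operator choice: " + str(choice))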
YES/MVS TOP LEVEL 16:14 Pending: 0 ===> 15:57 BATCH SCHEDULER STATUS UPDATED: 16:09 15:57 SMF: CHECK STATUS OF SMF DATASETS 16:09 BATCH SCHED: MODIFY JOB-ID 5003 TO PRIORITY 14 16:lO SMF: ENTER DUMP FOR SYSl.MANA 16:ll CTCFIX: RESTART COMMUNICATION TO YKTVMZ PFOl PFOZ PF03 PF04 PF05 PF06 PF07 PF08 PFO9 PFlO PFll PF12 U.I. EXIT WRKNG SELCT DONE BACK FWD RFRSH ERRST HOME Figure 2. YES/MVS Operator Console Top Level Screen IV CONTINUOUS, REAL TIME, INTERACTIVE CONTROL ISSUES The MVS system being monitored and controlled by YES/MVS is a dynamic world. Problem states may be en- tered spontaneously. Also, a problem may disappear in the middle of the solution process. In this sense, the MVS world is highly non-monotonic. It is impossible to maintain an accurate model of MVS that is complete in all detail. Instead we maintain a model that provides a reasonably good description of the status of MVS, from the viewpoint 132 of operations. The model is updated whenever MVS vol- unteers pertinent status information, based upon responses to queries and upon acknowledgement messages to control commands. Queries of status information are submitted at regular intervals or may be triggered by events and the need for information in the resulting analysis. The frequency of different queries varies enormously based on the volatility of the status data involved and on the requirement for cur- rent information. Extensive use of timestamps and validity flags provides additional information on the “currentness ’ of MVS status. The status model of MVS is updated only on the receipt of information from MVS. Attempts to compute status from history and the anticipated response to stimuli are avoided. This is because of the many pitfalls that exist for a stimulus not to have the anticipated effect. These include delays in command submission or processing, conflicting commands from operators, and non-response or errors in response by operators to advice. It is especially the case that, when YES/MVS is providing advice as opposed to submitting control commands directly, there is a potential race condition between the existence of a problem state and the submission of a corrective command. It should be noted that this is an inherent problem and the use of an automatic control system such as YES/MVS improves rather than exacerbates such situations. We now identify specific requirements of an inference system which is to perform continuous, real time, interac- tive control, and describe solutions in terms of various ex- tensions to OPS5. Some of these extensions take the form of new primitives; others are LISP functions and macros added to the OPS5 environment. A. Speed Considerations The ability of an inference engine to process in real time is a basic concern. We have improved the speed of exe- cution of OPS5 by compiling the right hand side (RHS) or consequent part of a rule. (Such a compilation process has been independently introduced in YAPS [ 151, and in OPS83 [ 161.) The matching process has been tuned with several LISP macros. The modified version of OPS5 runs significantly faster than other LISP implementations of OPS5. Also, we distribute the rules among multiple OPS5 systems using concurrent processes in the form of separate virtual machines supported by a host computer. B. Timed Productions Being able to initiate an action at a given time is one of the fundamental requirements of a real time control prob- lem. 
With a data-driven inference engine, this includes the production of working memory elements (WMEs) at some future time. We accomplish this by defining a new RHS action primitive for delayed production, TIMED-MAKE, which takes the normal OPS5 MAKE arguments followed by a time specification. (The OPS5 MAKE action creates new elements and adds them to working memory.) For example, execution of an RHS action,

  (TIMED-MAKE AAAI ^due-date past (AT TIME: 1700 DATE: 84 4 2))

would cause the production of a WME, named AAAI, at 5 p.m. on April 2nd, 1984, with the value "past" assigned to the attribute "due-date". A timer function and timer queue were added as necessary support functions for the TIMED-MAKE action. To support debugging, functions were provided to manipulate the timer clock and pop the timer queue as needed.

C. Communications

Another requirement of real time processing is the ability to have distributed processes interact in a timely fashion. Fast communication is achieved by introducing a new communication phase in the normal OPS5 inference cycle (recognize, conflict resolution, act). During the communication phase, external messages are picked up and outbound messages are sent. Conflict resolution then takes place based on changes to working memory as the result of both RHS actions and incoming messages.

All messages are sent out by a communication primitive, REMOTE-MAKE, which takes the same arguments as the regular OPS5 MAKE action, with an additional attribute ^Rm-to: whose value is the user-id of the intended receiver virtual machine. The message is actually sent by the host system's program level message sending mechanism. The ^Rm-to: attribute-value pair is changed, en route, to another attribute ^Rm-from: with the sender's machine user-id as its value.

The REMOTE-MAKE action can use any of the OPS5 functions to create result elements. Thus, one can write a meta level REMOTE-MAKE rule, if desired, to dynamically create messages from templates, defaults, and substituted values of bound variables. For debugging purposes, a global variable can be set to block the actual transmission of result elements. Then the messages are displayed along with requests for replies. When a reply is entered or selected from a pre-existing file using a multi-window interactive editor, it is employed just as if it came from another virtual machine.

D. Need for Explicit Control

There are critical problems that require a command sequence to be issued to MVS without other queries or commands being interspersed, which can happen when different kinds of problem episodes overlap in real time. Hardware error message handling is one such case. Such a real time requirement necessitates explicit control over the rule firing in the inference engine. For this purpose, the two modes of OPS5 conflict resolution, LEX (lexical) and MEA (means-ends-analysis) [7], have been extended by a Priority Mode which is orthogonal to these.

To implement the priority mechanism, each rule has an additional left-hand-side (LHS) condition element, (TASK ^task-id XXX), where XXX is a unique task name or a list (expressed as an OPS5 disjunction) of task names to which the rule is relevant. Each such task-id XXX has an associated priority. The conflict resolution phase of OPS5 is modified so that the active conflict set is temporarily reduced by excluding all active rules that do not have the highest priority task among the set. Then, the normal OPS5 conflict resolution process acts on this reduced set.
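The reduction step can be pictured with a short sketch. The Python below is only an illustration of the Priority Mode idea described above, not YES/MVS code; the class and field names are assumptions, and a simple recency number crudely stands in for the normal LEX/MEA ordering criteria.

  from dataclasses import dataclass

  @dataclass
  class Instantiation:
      rule_name: str
      task_id: str        # value matched by the rule's (TASK ^task-id ...) condition
      recency: int        # stand-in for the usual LEX/MEA ordering criteria

  def resolve(conflict_set, task_priority):
      """task_priority maps task-id -> numeric priority (larger = more urgent)."""
      if not conflict_set:
          return None
      top = max(task_priority.get(i.task_id, 0) for i in conflict_set)
      eligible = [i for i in conflict_set if task_priority.get(i.task_id, 0) == top]
      # Normal OPS5 conflict resolution (approximated here by recency alone)
      # is then applied only to the reduced set.
      return max(eligible, key=lambda i: i.recency)

  if __name__ == "__main__":
      cs = [Instantiation("Start-Clean-Up", "CLEAN-UP", 3),
            Instantiation("stop-reception", "jes-q-space", 1)]
      winner = resolve(cs, {"CLEAN-UP": 50, "jes-q-space": 100})
      print(winner.rule_name)   # jes-q-space has the higher priority -> stop-reception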
The task working memory elements as well as associated priorities are defined either by a top level MAKE or by an RHS action. Tasks can thus be dynamically created or destroyed. The priority can also be dynamically computed as an RHS action of a rule.

The priority control mechanism effectively satisfies our real time control needs. It also allows a powerful control over rule interaction between different subdomain areas. The priority mechanism emulates the control aspects of meta rules and eliminates the need for an additional level of indirection caused by their use. (Benjamin and Harrison [17] use meta rules for a different purpose: reasoning about the contents of the conflict set.) Furthermore, it allows rule grouping similar to the use of contexts in EMYCIN [5] and rule-groups in EXPERT [18].

E. Requirements for Continuous Operation

There are at least three basic requirements to operate in a continuous mode. They are:

a) The inference engine should not terminate when no rule is eligible to fire. We implemented a LISP wait function which puts the system into a suspended waiting state. Any external message (including a timer event) causes the system to resume, with the new data added to working memory.

b) The system should ideally run on a special purpose high availability computer, different from the subject machine. If the host computer itself or the virtual machines comprising the system go down, the system must be restarted. We issue an automatic restart instruction during the host computer initial-program-load and also when a down machine is detected during a periodic mutual polling among virtual machines of the system.

c) Working memory elements that have served their purposes must be removed. The accumulation of old useless data in the working memory not only creates a memory space problem in continuous operation, but of more importance, instantiates the wrong productions in a data driven inference engine, such as OPS5. We have made use of many different 'garbage collection' techniques (RHS actions) to remove old data, including the one illustrated next.

Removal of multiple working memory elements must be done carefully so as not to unintentionally trigger rules which might be satisfied when only a partial set of working memory elements has been removed. For example, the ability of a rule to fire may depend not only on the presence of some elements, but also on the absence of others. The priority mechanism can be used to cause an atomic procedure as shown in the following three rule example. (This also illustrates the dynamic creation of tasks.) Suppose the normal operating priority is 100. The priority for the CLEAN-UP task would be set low, say, at 50. Define another task name, IN-CLEAN-UP, with a priority, say, 150, which is higher than the priority of other tasks.

  (p Start-Clean-Up
     (Task ^task-id CLEAN-UP)                        ; Low priority rule that fires
     -->                                             ; when no other normal action
     (Make Task ^task-id IN-CLEAN-UP))               ; rules fire.

  (p Doing-Clean-Up
     (Task ^task-id IN-CLEAN-UP)                     ; This rule repeatedly fires and
     {<garbage> [List of WME names to be removed]}   ; removes all garbage as an atomic
     -->                                             ; procedure, at high priority.
     (Remove <garbage>))

  (p Clean-up-done
     {<done-task> (Task ^task-id IN-CLEAN-UP)}       ; This rule removes the IN-CLEAN-UP
     -->                                             ; task, which is now garbage, and the
     (Remove <done-task>))                           ; system reverts back to the low
                                                     ; priority CLEAN-UP mode.

  Figure 3. Three Rules Illustrating the Collection of Unneeded WME's as an Atomic Action
The CLEAN-UP task is created in the system as a permanent WME during initialization. The final rule in Figure 3 is less specific than the rule above it and so does not fire until all garbage has been re- moved, due to the conflict resolution mechanism of OPS5. V BUILDING THE KNOWLEDGE BASE Most of the expertise is encoded in over 500 OPS5 rules distributed between the expert virtual machine and the dis- play control virtual machine. (The rule coding process was facilitated by a programming environment in which a locally developed LISP system [ 193, on which OPS5 was built, and the system editor XEDIT [20] exist as co-routines.) The expertise was gathered mostly from the operations staff at Yorktown. In addition, systems programmers, manuals and even the designers of the MVS operating system were con- sulted. Some of the expertise was encoded in relational tables. (An OPS5 WME is equivalent to a row of a relational data base table. Disjoint cases can be represented as table en- tries used by driver rules rather than listing separate rules for each case. Use of WMEs as part of the permanent knowledge base has been found to provide a cleaner and more understandable representation in certain cases [21].) Some expertise was implemented in the MCCF translation tables for more direct execution. Also, there are a few pa- rameters hidden from the inference process, that have to do with the MVS interface. Therefore, the knowledge base is not restricted to the rule base alone. (p stop-reception (Task ttask-id jes-q-space) (JES-Q tmode panic) ; If the task of (<the-Link>(Link tid <L-id> ; maintaining JES-q-space . , is active, the space tstatus <<active i/o-active>> treceive yes)) ; is critically low, and ; there is an active + (Call remote-make ; receiving Link, Link-command tid <L-id> ; then cut the Link treceive no ; and mark the Link trm-to: MCCF) ; reception status as (Modify <the-Link> treceive to-be-no)) ; about to be no. (p start-reception (Task ttask-id jes-q-space) ; If the task of maintaining (JES-Q tmode <> lpanic) ; JES-q-space is active, (<the-Link>(Link tid <L-id> ; the space is not tstatus <<active i/o-active>> ; critically low, and treceive no)) ; there is an active Link ; not receiving, * (Call remote-make ; then Link-command tid <L-id> ; reopen the Link trece i ve yes trm-to: MCCF) ; and mark the Link status (Modify <the-Link> ; as about to be yes treceive to-be-yes)) Figure 4. Two Rules from the JES Queue Space Subdomain The number of rules generally increased along with the coverage. However, increased understanding of the domain sometimes permitted significant reductions by the use of tables and improved knowledge representation in general. Figure 4 is an example of a pair of rules that stop re- ception on an incoming link when JES queue space is crit- ically low and restart the reception when it improves. Notice that the Link status value is modified to an antic- ipated value, awaiting further confirmation from MVS that action has been taken. A real system must be verified by actual on-line testing. We have found many important pieces of knowledge during on-line testing that the experts did not mention to us. There are other problems as well. Some of the error handling rules can only be exercised during on-line testing if someone sabotages MVS to cause the error. This was done to some extent off prime shift hours. 
The situation stemming from this real system is that one cannot use a record of test cases (due to dynamic interaction), or use a simulator of MVS (too complex and too large). In contrast, REACTOR [l] was exercised against a simulator and VM [2] used a mag- netic tape recording of real time data for a relatively small fixed set of variables. We have used rule walk-throughs, rules to partially simulate some aspects of MVS, and hand interaction in lieu of MVS to aid testing. Thus the vali- dation and certification process is not formal, and needs long experience and a certain amount of confidence derived from seeing the general integrity of system actions. VI PROJECT STATUS AND FUTURE PLANS The YES/MVS prototype development took little over one year from inception to on-line testing. We expect the system will be in continuous use at the Yorktown Comput- ing Center by May 1984. YES/MVS now routinely schedules the queue of large batch jobs. It has alerted MVS operators to network link problems. When jobs which nearly exhaust JES queue space are submitted to MVS, YES/MVS responds with appropriate corrective recommendations to the operator. Other task areas are in the final stages of testing. Our future plans include broadening the coverage of YES/MVS by the addition of other subdomains, such as facilities to assist the operator during initial-program- loading of MVS, and both planned and emergency shut- down. A learning component is planned for the scheduling of large batch jobs so as to take account of the behavior of previous jobs submitted by a user in scheduling his or her next job. Our success with the computer operations domain causes us to look for the application of expert systems to other areas of computer installation management: capacity planning, configuration and installation. VII CONCLUSIONS YES/MVS extends the use of expert systems techniques to continuous, real time, interactive control applications. The extensions we made to OPS5 include facilities essential to such applications and generally applicable to other real time interactive problems such as process control. We found building the system for actual use to be a challenge involving much more than the usual expert system issues. Integration of the core expert system with a com- plex real time environment required not only extensions to the OPS5 language, but also some new concerns including how to distribute processing between the expert system and the conventional programming environment. The total system not only interacts intimately with the subject ma- chine, it also interacts with the host system during process- ing. The difficulty and importance of integration have been observed and emphasized by others [22, 231 but there are still no easy solutions. While we did learn that OPS5 was an excellent base language, we found that its trigger happy rule firing was 135 awkward to live with. But it was only through an actual application experience that we uncovered suitable ways to improve this forward-chaining production system language. The techniques we developed are both relevant and effec- tive for real time processing issues. We have also gained many valuable ideas for future improvements in inference engines intended for real time applications. VII ACKNOWLEDGMENTS We wish to acknowledge the substantial contribution of Barry Trager, in making the conversion of OPS5 to run under the YKTLISP system on VM/370. 
We thank the Yorktown Computing Systems management for encouragement during the course of this work and express our deep appreciation to the computer operators who exhibited considerable patience and good cheer during the knowledge acquisition and testing phases of our project.

REFERENCES

[1] William R. Nelson, "REACTOR: An Expert System for Diagnosis and Treatment of Nuclear Reactor Accidents", Proceedings of AAAI-82, pp. 296-301.
[2] Lawrence M. Fagan, "VM: Representing Time-Dependent Relations in A Medical Setting", PhD Thesis, Stanford University, June 1980.
[3] D. J. Lu, "Watch-Dog Processors and Structural Integrity Checking", IEEE Trans. on Comput., July 1982.
[4] R. Reilly, A. Sutton, R. Nassar, R. Griscom, "Processor Controller for IBM 3081", IBM Journal of Research and Development, Vol. 26, No. 1, January 1982, pp. 22-29.
[5] E. H. Shortliffe, Computer-Based Medical Consultations: MYCIN, (Elsevier, N.Y.), 1976.
[6] S. M. Weiss, C. A. Kulikowski, S. Amarel, A. Safir, "A Model-Based Method for Computer-Aided Medical Decision-Making", Artificial Intelligence, 11:1,2, 1978, pp. 145-172.
[7] C. L. Forgy, "OPS5 User's Manual", CMU-CS-81-135, Dept. of Computer Science, Carnegie-Mellon University, July 1981.
[8] John McDermott, "R1: A Rule Based Configurer of Computer Systems", Artificial Intelligence, 1982, Vol. 19, pp. 39-88.
[9] John McDermott, "XSEL: A Computer Sales Person's Assistant", Machine Intelligence 10, J. E. Hayes, D. Michie, and Y-H Pao, eds., J. Wiley and Sons, New York, 1982, pp. 325-337.
[10] John McDermott, "Building Expert Systems", presented at the 1983 NYU Symposium on Artificial Intelligence Applications for Business, May 1983.
[11] G. T. Versonder, S. J. Stolfo, J. E. Zielinski, F. D. Miller, and D. H. Copp, "ACE: An Expert System for Telephone Cable Maintenance", Proceedings of IJCAI-83, pp. 116-121.
[12] M. Schor, "Using Declarative Knowledge Representation Techniques: Implementing Truth Maintenance in OPS5", IBM Research Report RC 10455, Yorktown Heights, NY, April 4, 1984.
[13] A. A. Guido, "Unattended Automated DP Center Operation: Is It Achievable?", European GUIDE Proceedings, June 7-10, 1983, Lyon, France, pp. 440-446.
[14] The VM/SP System Product Interpreter Reference (Release 3), SC24-5239, IBM Corporation, September 1983.
[15] Elizabeth Allen, University of Maryland, "YAPS: A Production Rule System Meets Objects", Proceedings of AAAI-83, pp. 5-7.
[16] C. L. Forgy, "OPS-83 User's Manual", in preparation, as a Dept. of Comp. Sci. Report, Carnegie-Mellon University.
[17] D. P. Benjamin and Malcolm C. Harrison, "A Production System for Learning Plans From Expert", Proceedings of AAAI-83, pp. 22-26.
[18] S. M. Weiss and C. A. Kulikowski, "EXPERT: A System for Developing Consultation Models", Proceedings of IJCAI-79, pp. 942-947.
[19] C. N. Alberga et al., "A Program Development Environment", IBM Journal of Research and Development, Vol. 28, No. 1, January 1984, pp. 60-73.
[20] The VM/SP System Product Editor Command and Macro Reference (Release 3), SC24-5221-2, IBM Corporation, September 1983.
[21] A. Pasik and M. Schor, "Table Driven Rules in Expert Systems", SIGART Newsletter, No. 87, January 1984.
[22] R. Davis, H. Austin, I. Carlbom, B. Frawley, P. Pruchnik, R. Sneiderman, J. A. Gilreath, "The DIPMETER ADVISOR: Interpretation of Geological Signals", Proceedings of IJCAI-81, pp. 846-849.
[23] S. J. Hong, "Knowledge Engineering in Industry", IBM Research Report RC 10330, Yorktown Heights, NY, January 12, 1984; also in Proceedings of Japan Systems Science Symposium, January 1984.
SELF-EXPLANATORY FINANCIAL PLANNING MODELS Donald W. Kosy and Ben P. Wise The Robotics Institute, Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 The purpose of computing is insight, not numbers. -- R. W. Hamming ABSTRACT A financial model is a representation of the activities of a busi- ness in terms of quantitative relationships among variables that can help an analyst understand the financial consequences of past activities or assumed future activities. The equations com- prising such models form a kind of knowledge base which can be used to generate explanations. In this paper we give some back- ground on financial models, discuss two sorts of explanations in this domain, and present a procedure for explaining model results. Int reduction “It is February 1974 and as President of the Battery Company you are a little concerned at the results for 1973 that you have just received. Despite a 20% increase in sales over 1972, profits have decreased by 1%. You feel that the decrease in profit could be due to a combina- tion of three causes: increase in overhead expenses, decrease in contribution (or profit) margins (difference between selling price and direct manufacturing cost) or a change in product mix toward less profitable units. Alternatively, you would like to know how the additional revenues from increased sales were spent. You would like to investigate the cause of the decreased profit using The Information System.” Thus began the statement of a problem that Malhotra gave to a number of managers and management students as part of his investigation into the utility and feasibility of an English language question-answering system to support management [6,7]. In or- der to determine the design specifications for such a system, e.g., the vocabulary, grammar, and types of questions it would have to deal with, an “ideal” system was simulated that was capable of “perfect” interpretation and response to naturally occurring questions and commands. Users could ask about what the sys- tem could do, what kinds of data it had, how computed values were derived, and what the data values were, either for a par- ticular plant, product, customer and year, or aggregated over subsets of these, as the user’s question required. The simulation was conducted by sending user inputs to another terminal where a human experimenter would interpret it and create responses on the user’s terminal. The responses provided were those that Mal- hotra felt could be reasonably produced by a computer system, either because a simplified prototype he had developed could produce them or because they seemed to require only straightfor- ward extensions to that prototype. Malhotra’s prototype embodied an early version of what have come to be called “financial modeling languages” [ll] or “decision support system generators” [8]. Spreadsheet cal- culators, such as VisiCalc [2], are simpler systems that also fall into this class. Although they lack a natural language interface, these systems allow users to interactively display data, aggregate it, compute functions of it (e.g. averages, percentages, ratios, etc.) and to define algebraic models that assist in business decision-making, Given historical data, the results they produce are similar to the figures that appear on financial reports. An example of a report generated for the Battery Company is shown in Table 1.. These systems are not, of course, limited to only historical data. 
They can also generate hypothetical data, or projections, based on assumed data and expectations about the future. The first two columns in Table 1, for example, show historical data on Battery Company operations and the last three show projections. However, neither Malhotra's natural language prototype nor more recent systems allow our president's question to be asked directly, to wit:

  Why did profit go down in 73 even though gross sales went up?

A little reflection on Table 1 may suggest other similar questions, such as:

  Why do gross sales go up in 75? in 76?
  Why does gross margin go up so little in 76?
  Why is there a peak in profit in 75?
  Why does unit cost go down in 74?

These questions call for an explanation of results, not just a presentation of them, and the task of explaining results has traditionally been left to human analysts. The purpose of this paper is to show that, with suitable underlying models, generating such explanations by machine is not difficult and can be quite useful. The technique to be presented has been developed for use in the ROME system, a Reason-Oriented Modeling Environment for business planning managers [5].

The Explanation Problem for Financial Models

Financial Models

A financial model is a representation of the activities of a business in terms of quantitative relationships among financial variables. Financial variables are variables that have some economic or accounting significance and the relationships among them can generally be expressed by formulas and conditional statements.

*Due to space limitations, the data and model presented in this paper represent only a one-plant, one-product, one-customer version of the original Battery Company.

                      1972     1973     1974     1975     1976
  Volume            100.00   120.00   132.00   145.20   145.20
  Selling price      35.00    35.00    36.40    37.86    39.37
  Gross sales      3500.00  4200.00  4804.80  5496.69  5716.56
  Labor/unit          9.00     9.00     9.36     9.73    10.12
  Matl. price/unit    8.00     8.00     8.64     9.33    10.08
  Material/unit       8.00     8.00     7.34     7.93     8.57
  Shipping/unit       2.00     2.00     2.08     2.16     2.25
  Unit cost          19.00    19.00    18.78    19.83    20.94
  Variable cost    1900.00  2280.00  2479.49  2879.19  3040.42
  Indirect cost     285.00   342.00   371.92   431.88   456.06
  Production cost  2185.00  2622.00  2851.41  3311.07  3496.49
  Gross margin     1315.00  1578.00  1953.39  2185.62  2220.07
  Operating exp.    415.00   630.00   720.72   824.50   857.48
  Interest exp.       0.00     0.00     0.00     0.00     0.00
  Depreciation       35.00    35.00    35.00    29.00    29.00
  Mgmt. salary      182.00   236.60   246.06   255.91   266.14
  Overhead cost     632.00   901.60  1001.78  1109.41  1152.63
  Profit            683.00   676.40   951.60  1076.21  1067.45
  Profit margin      16.00    16.00    17.62    18.03    18.43

  Table 1: Financial Model Results for the Battery Company

The time span encompassed by the model is normally divided into time periods and output is generated by computing values for each variable for each period and displaying the values of selected variables on a report.

There are three categories of formulas in a typical model. Exact formulas correspond to definitions and equivalences, e.g. "sales = volume * selling price" and "beginning inventory(period) = ending inventory(period - 1)". Approximations are essentially estimating relationships for endogenous variables, i.e. variables taken to be "internal" to the system of activities being modeled. These formulas are intended to yield the aggregate effect of (very) complex causal processes without actually simulating or even defining those processes.
Examples include the use of historically-derived ratios to estimate one value from another and the use of cross-sectional regression equations. Finally, predictions are formulas used to estimate values for exogenous (external) variables, such as the price a firm must pay for its raw materials. All the numerous forecasting methods, such as growth rate factors, trend extrapolation, exponential smoothing, and the like, fall into this category. Table 2 shows the formulas used to generate the numbers in Table 1, grouped into the three categories.

Similarly, there are three kinds of input data to a financial model: actual data, approximation parameters, and prediction parameters. Actual data are historical, factual, non-negotiable numbers while the parameters are negotiable numbers, estimates, and assumptions. Approximation parameters appear in the approximation formulas and prediction parameters appear in the prediction formulas. Parameters for the Battery Company model are the constants that appear in Table 2.

If we think of a financial model as a kind of "knowledge base" from which we can "infer" (numerical) properties of business activities, we can make an analogy here with backward-chaining rule-based systems like Mycin. The formulas in a model correspond to rules and evaluating formulas corresponds to drawing conclusions. Rules change the degree of belief in propositions while formulas change the values of variables. The derivation of a value spawns a directed acyclic graph of subderivations much like the goal tree generated by backward chaining. The amount by which belief in a proposition changes can depend on judgmental factors and the amount of change in a variable can depend on judgmental parameters. Not to push the analogy too far, a rule-based system is much more complicated since it depends on pattern matching and allows for more than one rule to contribute to the degree of belief in a conclusion. Nevertheless, the analogy suggests that the same explanation techniques that are used in Mycin [4] might also work for financial models. The next section shows why these techniques are inadequate for our problem.

Explanations

The purpose of an explanation is to make clear what is not understood. Depending on their initial level of understanding, users of financial models can benefit from two sorts of explanations. The first sort deals with the model itself and involves showing how it corresponds to reality and why that correspondence is justified. Such an explanation might include, for example, a description of what financial entity some variable represents and a justification for why some approximation was chosen to assign a value to it. The second sort deals with the results of the model and involves showing how those results were derived and why the derivation produces the results observed. In this paper, we focus on explaining results rather than explaining the model.

There are several kinds of results that we might want explained. First of all, there are the results that are explicit in the output report and are produced directly by formulas. It seems to us that explaining these is simple. To answer a question like Why is operating expense equal to 724.84 in 74?, for example, we can imagine nothing better than a display of the associated formula and the values it was used with. In other words, we interpret a why question about the value of a variable as a how question about its derivation and show the derivation.
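This why-as-how reading lends itself to a very small sketch. The Python below is purely illustrative (it is not ROME, and the function and dictionary names are assumptions); it answers a why-question about a reported value by printing the defining formula together with the values it was evaluated with, using a couple of the Table 2 definitions and the 1973 column of Table 1.

  formulas = {
      "gross sales":  (lambda v: v["volume"] * v["selling price"], ["volume", "selling price"]),
      "gross margin": (lambda v: v["gross sales"] - v["production cost"], ["gross sales", "production cost"]),
      "profit":       (lambda v: v["gross margin"] - v["overhead cost"], ["gross margin", "overhead cost"]),
  }

  def show_derivation(var, period_values):
      # Display the formula for `var` and the values it was used with.
      fn, args = formulas[var]
      value = fn(period_values)
      used = ", ".join("%s = %.2f" % (a, period_values[a]) for a in args)
      return "%s = %.2f, computed from: %s" % (var, value, used)

  values_73 = {"volume": 120.00, "selling price": 35.00, "gross sales": 4200.00,
               "production cost": 2622.00, "gross margin": 1578.00, "overhead cost": 901.60}

  print(show_derivation("profit", values_73))
  # profit = 676.40, computed from: gross margin = 1578.00, overhead cost = 901.60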
A more difficult problem arises, however, if the user questions the formula, e.g. But why does operating expense = .15 * gross sales? Clearly, such questions should be answerable by giving the justification for the formula, or for its parameter values. But notice that this really calls for an explanation of the model: why did the model builder choose this formula/this parameter value to compute that variable? How to do that goes beyond our present focus.**

  Definitions
    gross sales = volume * selling price
    production cost = variable cost + indirect cost
    gross margin = gross sales - production cost
    profit = gross margin - overhead cost
    variable cost = unit cost * volume
    profit margin = selling price - unit cost
    unit cost = labor/unit + material/unit + shipping/unit
    material/unit = matl. price/unit * (1 - volume discount)
    overhead cost = operating exp. + interest exp. + depreciation + mgmt. salaries

  Approximations
    operating expense = .15 * gross sales
    indirect cost = .15 * variable cost

  Predictions
    inflation = .04
    interest expense = 0
    depreciation(74) = 35    depreciation(75) = 29    depreciation(76) = 29
    volume(74) = volume(73) * 1.1    volume(75) = volume(74) * 1.1    volume(76) = volume(75)
    selling price(y) = selling price(y-1) * (1 + inflation)
    mgmt. salaries(y) = mgmt. salaries(y-1) * (1 + inflation)
    labor/unit(y) = labor/unit(y-1) * (1 + inflation)
    matl. price/unit(y) = matl. price/unit(y-1) * 1.08
    shipping/unit(y) = shipping/unit(y-1) * (1 + inflation)
    volume discount = 0 if volume <= 130, .15 if volume > 130

  Table 2: Battery Company Model

The other kinds of results are all implicit in the report and hinge on comparisons the user makes between values. The questions posed in the introduction ask about results of this kind and answering them involves explaining the difference. We can classify these kinds of results and their associated explanations along several dimensions.

1. Referent of comparison. All questions focus on a particular variable, which is the subject of the question sentence, but the referent it is compared to depends on the question. In a question about change, e.g. Why did gross sales go up in 73?, the referent is the value the subject variable had in a previous period. In questions of relative magnitude, e.g., Why is depreciation so small?, the referent is the user's expectation for the value of the focus variable. Otherwise, the referent is explicit, e.g. why is sales of product A > sales of product B? In any case, the result to be explained is the difference between the focus value and the referent.

2. Implicit referents. There are two sources for a user's expectations about values which we will call "local" and "external". Local expectations come from the set of values observed on the report and are essentially local averages. So, for example, we interpret a question like Why did gross margin go up so little in 76 as Why is the change in gross margin small in 76 compared to the average of the changes in other periods? External expectations come from a user's pre-existing knowledge of either analogous or prescriptive values. Analogous values include historical norms, industry averages, values observed for competing firms, and the like, while prescriptive values are goals (target values) the user knows to have been set.

**but see [9] for a technique that ought to apply if a financial modeler's knowledge could be suitably represented.

3. Level of specificity.
A user may phrase his question in terms of mere difference (Why did x change?), direction of difference (Why did x go up?), or magnitude of difference (Why did x go up so much?). An explanation should take these different levels of specificity into account by referring to directions or magnitudes when the user implies he desires it.

4. Interval to be covered. A question may ask about a single difference (Why does x go up in 74), several differences (Why does x go up in 74-76), or all the differences (Why does x go up?). We interpret the latter questions as calling for a summary explanation that attempts to account for all the differences in the interval using the same factors. If that is not possible, we would like an answer to at least group similar explanations of individual differences into subinterval explanations and to indicate the contrast among the members of the set. Along the same lines, questions about peaks and dips seem to demand an explanation which covers the interval of inflection (at least two time periods) and accounts for the inflection by a single set of factors, or by a contrasting set.

5. Violated presuppositions. In general, a user may ask for an explanation of a result either because he simply wants to obtain the reason or because he can think of a reason to believe the contrary and wants to resolve the conflict. He can highlight the second case, however, by asking a why not question or using a contrastive subordinate clause, e.g. Why did profit go down in 73 even though sales went up? It is then necessary to infer the presupposed relationship and to show in the answer why it does not hold for the situation at hand.

It may be seen that the major problem in explaining a difference does not lie in determining the difference of interest. Although a small amount of inference may be required to choose an implicit referent, and perhaps somewhat more to determine a presupposition, if these were problematic, one could simply ask the user to select among the possible interpretations. Nor is there a problem in showing the mathematical derivation of a difference. Rather, the problem lies in clarifying that derivation, which is the topic of the next section.

An Explanation Procedure

While it would be truthful to explain a model's results by exhibiting the formulas, the input data, and exclaiming "The math works out that way", it would not be clear. When we asked human analysts to explain model results they tended to cite only the most important factors involved. What they did in answering specific questions gave us a set of goals for artificial explanations:

  - distinguish the relevant parts of the model from the irrelevant
  - distinguish the significant effects from the insignificant
  - translate quantitative information into a qualitative characterization
  - summarize if the same reason accounts for more than one result

General Strategy

To explain a difference, Δy, our general strategy is to first find a set of variables, A, which "account" for it and then to express that information to the user. Suppose, for the sake of simplicity, that we have a direction question -- Why did y go up? -- so that Δy is the change in variable y. The relevant part of the model is then the formula that computes y, say f,

  Δy = f(a2, b2, c2, ...) - f(a1, b1, c1, ...)

where the subscripts on the arguments denote the two different time periods, and A ⊆ S = {a, b, c, ...}. We first delete from S all variables that didn't change, since they clearly have no effect on Δy.
Call the reduced set S*. To determine A, we need to determine the "significance" of each variable in S* and collect the smallest subset whose joint significance is sufficient to account for Δy. Our initial approach (the obvious one) was to loop through all possible sums of partial derivatives until nearly all of the difference had been accounted for. For example, we would stop with the single variable a if (∂f/∂a)Δa ≈ Δy. This method turned out not to work because of two fundamental flaws. First, it assumes that the value of ∂f/∂a is nearly the same at both time points and this was not always true. When ∂f/∂a changes markedly from period 1 to period 2, there is no clear way of deciding whether it should be evaluated at a1, or a2, or perhaps some value in between. Second, it assumes that all the other variables in S remain constant, and this was rarely true. The result was that the above test would often fail on a variable that was significant and succeed on one that wasn't.

So we defined a new measure, called ε(X,y), to indicate the effect of the set of variables in X on y in one context, such as one time period, relative to another. The general definition is

  ε(X,y) = y2 - f(Z)

where the vector Z contains the values of variables in X evaluated in context 1 and values for the other variables in S evaluated in context 2. If X contains just the variable a, for example, ε(X,y) = y2 - f(a1, b2, c2, ...). Thus, f(Z) gives the value y2 would have had if all other variables had changed except those in X, and ε(X,y) gives the amount of y2 contributed by the change in the X variables. Restating this in words, we measure the effect of a variable by what the result would have been without the influence of that variable, leaving all other influences intact.

If the total effect is large enough for some X, we conclude that X = A and Δy is accounted for. The test we use is

  1/θ > ε(X,y)/Δy > θ

where θ is the fraction of the difference considered large enough to be sufficient. The bound on the high side is needed when variables not in X counteract the effects of those in X. If the former effects are large enough, they should be included in X and so the test should fail. The value of θ was set empirically to be .75. We also associate with each variable xi in A its relative effect on y, αi(y), where αi(y) = ε(xi,y) / Σj |ε(xj,y)|.

When A is found, we can answer the original question with an explanation. In general, the answer given includes (1) the differences that account for Δy, (2) the formula f, (3) the primary explanatory variable, and a qualification, which expresses counteracting or reinforcing effects. What we do then depends on the specific form of the question and the contents of A, so before discussing that, it will be helpful to look at a specific example.

Details

Let us consider the first example from the introduction, Why did profit go down in 73 even though gross sales went up? The following describes the processing steps.

1. Interpreting the question. As outlined above, it is necessary to determine the focus of the question, referent of comparison, level of specificity, interval to be covered, and presuppositions. The ROME system uses a pattern-matching parser [3] to extract and label the parts of the input sentence, and a straightforward set of linguistic tests to make the determinations. For example, the verb or complement of the main clause establishes the type of comparison, and use of a time modifier indicator sets the interval to be covered.
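As a concrete aside before continuing with the example, the ε measure and α weights just defined can be sketched in a few lines. The Python below is purely illustrative (it is not ROME code, and all names are assumptions); it applies ε to the profit formula of Table 2 using the rounded 1972 and 1973 figures from Table 1, so its output differs slightly from the exact values quoted in step 2 of the worked example below. The α weights here are normalized ε/Δy ratios, a sign convention (negative means the variable counteracted the observed change) inferred from the worked numbers.

  def profit(v):
      return v["margin"] - v["overhead"]   # profit = gross margin - overhead cost

  def epsilon(f, ctx1, ctx2, X):
      # Hold the variables in X at their context-1 values; everything else keeps its
      # context-2 value. The result is the part of f(ctx2) contributed by X's change.
      z = dict(ctx2, **{name: ctx1[name] for name in X})
      return f(ctx2) - f(z)

  ctx72 = {"margin": 1315.00, "overhead": 632.00}   # 1972 column of Table 1
  ctx73 = {"margin": 1578.00, "overhead": 901.60}   # 1973 column of Table 1
  d_profit = profit(ctx73) - profit(ctx72)          # -6.6

  ratios = {x: epsilon(profit, ctx72, ctx73, {x}) / d_profit for x in ("margin", "overhead")}
  total = sum(abs(r) for r in ratios.values())
  alphas = {x: r / total for x, r in ratios.items()}
  print(ratios)   # approximately {'margin': -39.8, 'overhead': 40.8}
  print(alphas)   # approximately {'margin': -0.49, 'overhead': 0.51}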
If the referent is implicit, we assume the expectation is local unless it is not satisfied by the data displayed, in which case we look for a global expectation. ROME allows the specification of external expectations for values, and their sources, and we use the first expectation found (if there is one) that has the right relation to the focus. In the question at hand, the focus is profit for the period 73, the referent is profit for period 72, and the level of specificity is direction.

To apply the explanation procedure, the focus and the referent must be comparable. In the present system, this means they must be computed by the same formula so that the difference in value arises from different contexts of evaluation. The contexts allowed are set by internal indices on the variables (e.g. time, plant, product, etc.) which range over different instances of entities of the same semantic type. The types are represented as elements in a semantic network using the frame-style language SRL [10]. If the variables are for some reason not comparable, a message is produced giving the reason.

Our treatment of presuppositions has not gone beyond the ad hoc stage. Currently, we just save the variables involved for later use in deciding when to stop the explanation, as described below.

2. Identifying significant effects. Since both gross margin and overhead cost change in the formula for profit, S* = {margin, overhead}. Working out the calculations gives

  ε({margin},profit)/Δprofit = 257.27/-6.6 = -38.98
  ε({overhead},profit)/Δprofit = -264.6/-6.6 = 40.09
  α(margin) = -.493, α(overhead) = .507

Since neither value of ε/Δ passes the significance test, both are needed to explain the difference (which the procedure discovers when it considers X = {margin, overhead}).***

3. Characterizing effects qualitatively. All the differences for variables in A are translated into direction and magnitude descriptors. The magnitude descriptors are normally percents since these are more familiar to users than our α values.

4. Expressing the answer. The answer generator is template-driven, where the templates are just those needed to express formulas, simple comparisons between variables, change, relative significance, reference to change, conjoined noun phrases, and contrast between propositions. The first sentence states either the most significant cause of the difference, based on the α values, or all the causes if they are positively correlated with the difference. For the question at hand, the first sentence is:

  Profit went down in 73 primarily because overhead cost went up and profit = gross margin - overhead cost.

The next sentence expresses the qualification, if any, such as a contrast among counteracting factors, a statement of primary cause, or a statement of additional cause. In this case, the qualification is:

  Although gross margin went up by 79%, overhead cost went up by 41% and the latter outweighed the former.

5. Continuing the explanation. Without the presupposition, the explanation would normally stop at this point with the message Would you like me to continue? However, since the answer has not yet mentioned gross sales, it has not yet been related to the presupposition, so we continue down the derivation path that leads to gross sales. The four previous steps yield the following continuation:

  Gross margin went up in 73 primarily because gross sales went up and gross margin = gross sales - production cost. However, the increase in gross sales was not enough to affect the change in profit.
  Would you like me to continue?

Notice that the qualification is one relevant to the violated presupposition, not the contrast between sales and production cost that would otherwise be generated. It is known that the increase is not enough because gross sales has a positive influence on gross margin and hence on profit but the change in profit was negative.

***In the case of a no-change question (Why did y remain constant?) a specialist procedure is invoked which looks for cancelling effects or the complete absence of change in the terms of the formula.

Continuing one step further will illustrate two final points. Since the presupposition variable has been mentioned, the system returns to the primary path:

  Overhead cost went up in 73 because operating exp and mgmt salaries went up and overhead cost = operating exp. + interest exp. + depreciation + mgmt. salaries. The increase in overhead cost was due primarily to the increase in operating exp. However, the effect of mgmt. salaries was also significant to the change in profit.

The first thing to notice is that the last sentence mentions an effect on the initial difference to be explained. It can happen that,
The QBKG program [1], for example, uses a similar sort of procedure to explain the reasons for the backgammon moves it selects via a quantitative evaluation function. It may be seen in [1] that the form of this function and its use in the selection task approximately satisfy the above criteria. However, the test of significance used is quite different and the details of expressing the explanation are very specific to QBKG's particular function. Our procedure is more general but it does not incorporate any knowledge of what the value of a formula will be used for. The difficulty we see in using our technique as it stands to explain a heuristic selection lies in making all the terms and coefficients in the evaluation function meaningful to the user and in generating a meaningful characterization of the degree of difference in worth of different alternatives.

Financial models are intended to provide their users with insight into the consequences of financial activities. It appears that automated explanation of the results can enhance that insight by focusing the user's attention on the major reasons for those consequences.

References

[1] Ackley, D.H. and Berliner, H.J., "The QBKG System: Knowledge Representation for Producing and Explaining Judgements," Computer Science Department, Carnegie-Mellon University, March 1983 (abridged in Berliner and Ackley, "The QBKG System: Generating Explanations from a Non-Discrete Knowledge Representation," Proceedings of AAAI-82, August 1982, pp 213-216).
[2] Beil, D.H., The Visicalc Book, Reston, Va.: The Reston Publishing Co., 1983.
[3] Boggs, W.M., Carbonell, J.G., and Monarch, I., "The Dypar-I Tutorial and Reference Manual," Computer Science Department, Carnegie-Mellon University, 1984.
[4] Davis, R., Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases, PhD Thesis, Computer Science Department, Stanford University, July 1976.
[5] Kosy, D.W., and Dhar, V., "Knowledge-Based Support Systems for Long Range Planning," The Robotics Institute, Carnegie-Mellon University, 1983.
[6] Malhotra, A., Design Criteria for a Knowledge-Based English Language System for Management: An Experimental Analysis, PhD Thesis, Sloan School of Management, MIT, 1975.
[7] Malhotra, A., "Knowledge-Based English Language Systems for Management Support: An Analysis of Requirements," Advance Papers of IJCAI-4, September 1975, pp 842-847.
[8] Sprague, R.H., Jr., "A Framework for the Development of Decision Support Systems," MIS Quarterly 4, December 1980, pp 1-26.
[9] Swartout, W.R., "XPLAIN: A System for Creating and Explaining Expert Consulting Programs," Artificial Intelligence 21, 1983, pp 285-325.
[10] Wright, M., and Fox, M.S., "SRL/1.5 User Manual," The Robotics Institute, Carnegie-Mellon University, 1982.
[11] An Introduction to Computer-Assisted Planning Using the Interactive Financial Planning System, EXECUCOM Systems Corp., Austin, TX, 1980.
SELECTIVE ABSTRACTION OF AI SYSTEM ACTIVITY Jasmina Pavlin and Daniel D. Corkill Department of Computer and Information Science University of Massachusetts Amherst, Massachusetts 01003 ABSTRACT The need for presenting useful descriptions of problem solving activities has grown with the size and complexity of contemporary AI systems. Simply tracing and explaining the activities that led to a solution is no longer satisfactory. We describe a domain-independent approach for selectively abstracting the chronological history of problem solving activity (a system trace) based upon user- supplied abstraction goals. An important characteristic of our approach is that, given different abstraction goals, abstracted traces with significantly different emphases can be generated from the same original trace. Although we are not concerned here with the generation of an explanation from the abstracted trace, this approach is a useful step towards such an explanation facility. I. Introduction: The Problem with Traces Understanding the problem solving activities of a large knowledge-based AI system is often difficult. Simply tracing the activities quickly inundates an observer’s ability to assimilate the many inferences and their relationships. Despite their unsatisfactory nature, activity traces remain a popular means of recording system activity because they are easily generated. A truce is a chronological execution history of the system. It records the many primitive events that comprise the problem solving process. An example of a small part of a trace, containing the events arising from executing one knowledge source in the Distributed Vehicle Monitoring Testbed [2] is shown in Figure 1. Traces generated by the Testbed typically contain thousands of primitive events. When investigating a particular system behavior, many events in the trace are unimportant or are meaningful only when considered with respect to other events. In addition, conceptually adjacent processing activities (for example, an activity that creates data used by another activity) can be quite distant in the trace. Understanding the system’s behavior directly from its trace requires that the user weed out those events that are irrelevant to the question at hand and group salient events into a meaningful description of system activity. Even for the designer of the system, this is a tedious and time consuming task. This research was sponsored, in part, by the National Science Foundation under Grant MCS-8306327 and by the Defense Advanced Research Projects Agency (DOD), monitored by the Office of Naval Research under Contract NRO4!%041. 
tssts~stssssstsssssssssssssssssss~sssssssssssssssnssssss~ Executing Node 2 --- Inv Kaia 4 -- Time Frame 4 -- Node Time 51 BLACKBOARD EVENT --z quieaence external receive BLACKBOARD EVENT -> quieaence external send INVOKED KS1 -------> kai:02:0604 46 a:gl:vl 51 (g:02:0837 g:02:0047 g:02:0052 g:02:0079) (h:02:0016 h:02:0017 h:02:0018) (13171 CREATED HYP ------> h:82:0019 v I ((3 (16 16) 1) 1 11200) SUPPORT I NG HYP ---> h:82:0016 gl ((3 (16 16) 1 I 1 (6001 SUPPORTING HYP ----> h:02:0019 gl ((3 (16 16) 1) 2 11200) SUPPORTING HYP ---> h:82:0019 gl ((3 (16 16))) 3 11200) BLACKBOARO EVENT --> hyp-creation VI (h:82:0019) INSTANTIATED KS1 --> kai:82:0017 goal-aend:vt (g:02:0093) (h:82: 0019) <1200 -10000> (2194) INSTANTIATED KS1 --> kai:02:0018 jf:vl:vt (g:02:0099) (h:02:0019) <1200 2432, (14521 UNSUCCESSFUL KS1 -> jb:vl:vt g:02:0105 (h:82:0019) (nil nil) -10000 INSTANTIATED KS1 --> kai:02:0019 ff:vl:vt (g:02:0105) (h: 02: 0019) <1200 2432> (358) RERATED KS1 ------ > kai:02:0018 jf:vl:vt (g:02:0026 g:82:0859 g:02:0072 g:02:0089 g:02:0B99) (h:62:0019) cl0000 2432~ 11452 to 59851 RERATEO KS1 -------> kai:02:0012 a:gl:vl (g:02:0065 g:02:0100) (h:02:0019 h:02:0020 h:02:0021) cl320 1032> (1012 to 1072) sstsstssssssstsssstsssssssssssssssssssssssssssss%ssssssssssssss Figure 1: A Portion of A Trace. What is needed is to automate the recognition, grouping, and potential deletion of traced activities in a manner that appropriately summarizes the behaviors under investigation. In this paper, we present an approach for selectively abstracting a trace based upon user-specified abstraction goals. An important characteristic of our approach is that abstracted traces with significantly different emphases can be generated from the same original trace. For example, an abstracted trace that emphasizes redundant processing activity might be quite different from one that emphasizes unsuccessful solution paths. The selective abstraction process is described, followed by an example and a presentation of additional issues related to the abstraction process. Relations with other work on presenting system activity is discussed in the last section. II. Trace Nets and Abstraction Actions We begin the selective abstraction process by transforming the sequence of primitive events in the trace into a trace net. A truce net is a data structure that records the input/output relationships among problem solving activities as well as their execution ordering. We represent a trace net as a Petri net (41 on which the execution history is imposed. Figure 2 shows a trace net for a small run of the Distributed Vehicle Monitoring Testbed. The portion corresponding to the trace fragment of Figure 1 has been indicated. Not all events in the original trace have been 264 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. Figure 2: An Example Trace Net. represented in Figure 2. In the examples presented In this paper, we have chosen to represent only knowledge;source executions and hypothesis creation. Trace events that are unrelated to this modeling level have been eliminated from the trace net. Formally, a truce net 2’ is a pair (N, E) where N is a Petri net structure N = (0, A, I, 0) with: A” set of data units set of activities I:AHD’ an input function connecting input data units to activities O:A- D’ an output function connecting activities to output data units and where E is a partial order’ over the set of activities called the ezecution order. 
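In this formulation, then, a trace net pairs a Petri-net-like structure N = (D, A, I, O), with data units D, activities A, and functions I and O mapping each activity to its input and output data units, with an execution order E over the activities. A minimal sketch of such a structure is given below; the class and field names are illustrative, not the Testbed's actual implementation.

```python
# Minimal sketch of a trace net (D, A, I, O) plus execution order E.
# Class and field names are illustrative; the Testbed's own structures differ.
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    name: str                      # e.g., a knowledge-source execution

@dataclass(frozen=True)
class DataUnit:
    name: str                      # e.g., a hypothesis

@dataclass
class TraceNet:
    data: set                      # D: set of data units
    activities: set                # A: set of activities
    inputs: dict                   # I: activity -> set of input data units
    outputs: dict                  # O: activity -> set of output data units
    order: list                    # E: execution order (total on one processor)

    def producers(self, d):
        """Activities whose output includes data unit d (used, for example,
        to spot redundant processing: several producers for the same d)."""
        return {a for a in self.activities if d in self.outputs[a]}

# A toy net: two activities both deriving the same hypothesis h2 from h1.
h1, h2 = DataUnit("h1"), DataUnit("h2")
a1, a2 = Activity("ks-run-1"), Activity("ks-run-2")
net = TraceNet(
    data={h1, h2},
    activities={a1, a2},
    inputs={a1: {h1}, a2: {h1}},
    outputs={a1: {h2}, a2: {h2}},
    order=[a1, a2],
)
print(net.producers(h2))           # both activities -> a redundant derivation
```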
A data unit might represent a single hypothesis or fact and an activity a single knowledge source execution or inference rule application. An abstracted trace net 9 is then generated by appropriately collapsing and deleting portions of the original trace net T. The abstracted trace net contains a (generally) smaller set of data units and a (generally) smaller set of activities than 2’ as well as a reduced (and possibly empty) execution order. The execution order of the abstracted trace net can be empty if the order in which activities execute is considered irrelevant. In an abstracted trace net a data unit might represent a group of hypotheses or facts and an activity a group of knowledge l In a single-processor system, E is a total order. Figure 3: An Abstracted Trace Net. source executions or inference rule applications. The following is a view of the overall abstraction process: ABSTRACTION GOALS u ABSTRACTION PATTERNS u ABSTRACTION PREDICATES u ABSTRACTION ACTIONS u TRACE NET + ABSTRACTED TRACE NET An ABSTRACTION ACTION either deletes an object’ from the trace or lumps a group of objects into a single object. The ABSTRACTION GOAL determines which objects can be deleted and/or lumped. An example of an abstraction goal is “show redundant processing and the solution path.” An object is deleted if it is considered unimportant with respect to the specified abstraction goal. A group of objects is lumped if it can be considered a single object with respect to the abstraction goal. In order to transform the abstraction goal into abstraction actions, the system needs to know what ABSTRACTION PATTERNS in the trace are relevant to the abstraction goal. For the redundant processing abstraction goal, important patterns are multiple activities creating the same output data units. Each ABSTRACTION ACTION is controlled by ABSTRACTION PREDICATES which are logical functions over the trace. An example trace net and one of its abstractions are shown in Figures 2 and 3, respectively. A circle represents ’ An object is either a data unit or au activity. 265 a data unit, a bar represents an activity, incoming arrows connect the activity to its input data units, and outgoing arrows connect the activity to its output data units. We have found the following to be a relatively general and sufficient set of abstraction predicates. Corresponding actions are illustrated in Figure 4. 1. 2. 3. 4. 5. 6. 7. 8. unused-data-deleted? If true, data which are not input to any activity (except for the solution) are deleted. no-output-activities-deleted? If true, activities with no output data are deleted. shared-i/o-activities-lumped? If true, two activities in which the first provides inputs to the second are replaced by a single activity. shared-input-activities-lumped? If irue, activities which share input data are replaced by a single activity. shared-output-activities-lumped? If true, activities which share output data are replaced by a single activity. input-context-data-lumped ? If true, a group of data which are input to a single activity are replaced by a single data unit. output-context-data-lumped? If true, a group of data which are output of a single activity are replaced by a single data unit. execution-order-ignored? If true, activities can be lumped even if they are not successive in terms of execution order. III. 
An Example: %how redundant processing” The following are the predicate values for the abstraction goal ‘show redundant processing and the solution path.” The condition condition1 is derived from the pattern for the redundant processing abstraction goal and is true when data are created by multiple activities. Since this pattern represents a relation between a data unit and its creating activities, condition1 parameterizes predicates 1, 3, 5 and 7: 1. 2. 3. 4. 5. 6. 7. 8. unused-data-deleted? : true, except for the solution and . . condstaonl; no-output-activities-deleted? : true; shared-i/o-activities-lumped? : true, except for . . condstaoq; &are&input-activities-lumped? : true; shared-output-activities-lumped? : false;* input-con text-data-lumped? : true; output-con text-dataJumped? : true, except for condition1 ; execution-order-ignored? : true. Figure 3 shows the abstracted trace net obtained from the trace net in Figure 2 by performing the abstraction actions applicable with the above predicate values. The actions performed were (listed by type): l Since predicate 5 is defined to cause the lumping only if this pattern is true, its value can be set to false, to eliminate testing. w5 Figure 4: The Actions for Abstraction Predicate8 l-7. l delete-data: deleted data 1, 24, 25 and 26 because predicate 1 is true; l delete-activity: deleted activity 9 because predicate 2 is true; l lump-data: lumped data 2 and 3 into i’ because predicate 6 is true; lumped data 4, 5, 6 and 7 into 2’ because predicate 6 is true; 0 lump-activities: lumped activities 1 and 2 into 1’ because predicate 3 is true; lumped activities 3 and 4 into 2’ because predicate 3 is true; lumped activities 5 and 9 into 3’ because predicate 4 is true; In this example, the abstracted trace has 12 data units and 10 activities-a reduction of 45% in the number of objects from the original trace. More importantly, only irrelevant information is abstracted out. All the relevant information is preserved; i.e., all of the activities generating redundant data can still be seen. When considering an object for an action application, the predicates are evaluated in the listed order, except that predicate 8 is used as a constraint for all activity lumping predicates (3, 4 and 5). That is, if the value of predicate 8 is false, and the activity considered for lumping with the current activity is not the next in the execution order, then the action can not be taken. Predicates 1 and 2 are evaluated first because they cause deletion of 266 objects and thus result in less work for the remaining abstraction actions. Activity lumping predicates (3, 4, and 5) are considered before data lumping predicates (6 and 7) because they can either make a data lumping action unnecessary (see Figure 4 Case 3b) or make a new data lumping action possible. Such is the situation in Figure 4 Case 4 where the output data can be lumped after, but not before, the activity lumping action. Predicates 4 and 5 are related since both predicates must be true for a lumping action to occur if the candidate activity shares both inputs and outputs with other activities. Consequently, the false value of either predicate excludes the lumping action, and both predicates must be evaluated before the action can be taken. A similar relation (and the same evaluation order) holds for predicates 6 and 7. 
The abstraction process traverses the trace net from the top down (from the solution data to the system input data), iteratively evaluating all predicates and performing all applicable actions until quiescence. There are two reasons for our use of the top-down ordering. First, some abstraction goals are tied to the solution of the system (such as %how only the solution path”). Second, in interpretation systems (such as our Distributed Vehicle Monitoring Testbed), an activity typically has more input data than output data, and a top down traversal causes earlier lumping. Abstraction predicates are general means of specifying the context of the abstraction action application, based on the structural properties of the trace net. Although they can be generated from abstraction goals, it is important for the user to have the ability to access the predicates directly. We view the transformation of abstraction goals into abstraction predicates as merely an aid to generating a suitable set of default predicate values-not as the sole means of specifying parameter values. If the predicate values are allowed to be arbitrary logical functions, the result of two actions can depend on their ordering. Consider an example with three activities, the first two sharing inputs, and the last two sharing outputs. Predicates 4 and 5 have the following values: 4. shared-input-activities-lumped? : true, if all inputs are shared; 5. shared-output-activities-lumped? : true, if all outputs are shared. The result of first considering lumping activities 1 and 2 is different from the result of first considering lumping activities 2 and 3 (see Figure 5). Before activities 1 and 2 are lumped, two lumping actions are applicable (activities 1 and 2; activities 2 and 3). After lumping activity 1, the conditions for lumping activities 2 and 3 are no longer true (not all the inputs are shared). Similarly, lumping activities 2 and 3 eliminates the possibility of lumping activities 1 and 2. If non-monotonic actions (where applying one action precludes applying another action) are specified through predicate values, additional action ordering should also be specified. Left: Lumping activities 1 and 2. R.ightr Lumping activities 2 and 3. Figure 5: Order Dependent Lumping Actions. IV. Adding Domain Dependent Information to the Abstraction Procese The abstraction process that we have outlined so far is completely domain independent. It has used as inputs only the information about the structure of the trace net (input and output connections and the processing order). However, there are cases where some domain dependent information can be used to improve the abstraction process. Domain-specific information may affect the following: 1. Abstraction goals. Some abstraction goals can be 2. 3. - satisfied only if certain attributes of objects are known. For example, if the abstraction goal is to show whether the system was distracted during processing, a corresponding pattern is a sequence of activities in which there is a shift from processing “good” input data, to processing ‘bad” input data. In order to recognize this pattern, the abstraction process needs to know what constitutes ‘good” and “bad” data. Also note that this is one type of abstraction goal for which the execution order is important, and the predicate execution-order-ignored? must have the value true. Abstraction predicates. Domain specific predicates can further reduce the amount of information in the trace. 
Consider a predicate that deletes a data unit of a smaller scope when an equivalent data unit of a larger scope are both inputs to an activity. The assumption here is that the smaller scope data contain redundant information. The notion of scope is domain dependent. For example, in our vehicle monitoring domain, the scope can be represented by the length of the track of a vehicle. Lumping mechanism. The result of lumping may be sensitive to the type of objects that are being lumped. Consider, for example, the lumping of two activities where the output of the first is the input for the second in the vehicle monitoring domain. There are two types of activities: merging and synthesis [3). A merging activity combines input tracks to produce a longer track. A synthesis activity combines lower level input data to obtain higher level output data. In the same vehicle monitoring example, an activity can combine acoustic signals into harmonic groups (signal- to-group synthesis), or it can combine harmonic groups corresponding to different acoustic sources associated with a vehicle type to identify the vehicle (group-to- vehicle synthesis). The lumping action for two merging activities is shown in Figure 4, case 3a. The lumping of two synthesis activities is shown in Figure 4, case 3b. 267 V. Capabilitiee of Our Approach Our approach of reducing the trace net into an abstracted trace net results in several important capabilities: 0 On/off-line abstractions. The abstraction process can be performed either during processing, with only a partial trace available, or after a solution has been found. The difference is that before processing is complete the full implications of activities and the relations among all their created data are not known. Thus, goals which depend on these implications can not be satisfied. At the end of processing, solution data can be marked as special type of data, which can be related to the abstraction goals. l Zooming. The result of lumping is linked to the lumped objects. If the user decides that portions of the trace are “overabstracted,” the abstraction of any lumped object can be restored to see more details. Similarly, more constraining predicates can be applied to portions of an “underabstracted” trace. l Hjghligh ting. When a class of abstraction patterns is defined in the system, it can be used not only to determine the predicate values but also to tag the instances of the patterns found in the trace net. These tags serve to highlight the patterns to the user (for example, by blinking on an output graphic device). l Iterative abstractions/feedback. A uniform representa- tion of the trace net and the abstracted trace net fa- cilitates an iterative approach to goal satisfaction. If a particular abstraction goal does not sufficiently reduce the trace net, further abstraction goal refinement can be obtained from the user. In particular, at the begin- ning of an investigation the user may not know what abstraction goals appropriately abstract the “interest- ing” activities that occurred in the system. By using an initial abstraction goal to reduce the trace net, the user may be able to improve his understanding of the sys- tem’s behavior to the point of selecting a more suitable abstraction goal. This iterative process of selecting an abstraction goal and viewing the resulting abstraction is a powerful investigative technique. VI. 
Discussion Perhaps the system that comes closest to our work is the GIST behavior explainer, which generates an explanation from the trace of a symbol evaluator [5]. The GIST behavior explainer has a single abstraction goal: the selection of interesting and surprising events, where the notion of interesting and surprising is domain dependent. The main focus of the behavior explainer is the generation of a natural English explanation, and the issues in generating natural language are different from issues in generating symbolic descriptions. For example, an important strategy in reducing the complexity of natural language explanations is to restructure the explanation so that the relationships being described are more easily comprehended [6]. Such presentation strategies are not an issue in our work. Work on recognizing patterns of events as a tool for debugging distributed processing systems also has much in common with our approach [l]. However, we have taken the approach of displaying the whole abstracted trace net, rather than isolating patterns of activity and presenting them to the user. We feel patterns are best understood in the context of other events or in the context of the overall solution path. The trace net to abstracted trace net transformation has been implemented and is being used in conjunction with the Distributed Vehicle Monitoring Testbed. Presently, the generation of abstraction predicates from an abstraction goal is not implemented, and the user must set their values directly. However, even its current state, the implementation is significantly improving our abilities to investigate problem solving activities in the Testbed. As we increase our experience with the selective abstraction process, we hope to automate the transformation of abstraction goals into abstraction predicates. PI PI PI PI 151 PI REFERENCES Peter Bates and Jack C. Wileden. Event definition language: An aid to monitoring and debugging of complex software systems. Proceedings of the Fifteenth Hawaii International Conference on System Sciences, pages 86-93, January 1982. Victor R. Lesser and Daniel D. Corkill. The Distributed Vehicle Monitoring Testbed: A tool for investigating distributed problem solving networks. AI Magazine 4(3):15-33, Fall 1983. Jasmina Pavlin. Predicting the performance of distributed knowledge- based systems: A modeling approach. Proceedings of the Third National Conference on Artificial Intelligence, pages 314-319, August 1983. James L. Peterson. Petri Net Theory and Modeling of Systems, Prentice- Hall, 1981. Bill Sw artout. The GIST behavior explainer. Proceedings of the Third Nationat Conference on Artificial Intelligence, pages 402-407, August 1983. J. L. Weiner. BLAH: A system which explains its reasoning. Art&U Intelligence 15( 1):19-48, September 1980.
CONTINUOUS BELIEF FUNCTIONS FOR EVIDENTIAL REASONING Thomas M. Strat SRI International 333 Ravenswood Ave. Menlo Park, CA 94025 ABSTRACT Some recently developed expert systems have used the Shafer- Dempster theory for reasoning from multiple bodies of evidence. Many expert-system applications require belief to be specified over arbitrary ranges of scalar variables, such as time, distance or sensor measurements. The utility of the existing Shafer- Dempster theory is limited by the lack of an effective approach for dealing with beliefs about continuous variables. This paper introduces a uew representation of belief for continuous variables that provides both a conceptual framework and a computation- ally tractable implementation within the Shafer-Dempster the- ory. 1. Introduction The lack of a formal semantics for the representation and manipulation of degrees of belief has been a difficulty for expert systems. The frequent need to reason from evidence that can be inaccurate, incomplete, and incorrect has led to the recognition of evidential reasoning as an important component of expert sys- tems [2] [3]. Evidential reasoning, based on a relatively new body of mathematics commonly called the Shafer-Dempster theory, is an extension of the more common Bayesian probability analysis. In the theory, the fundamental measure of belief is represented as an interval bounding the probability of a proposition, thus allowing the representation of ignorance as well as uncertainty. A procedure to pool multiple bodies of evidence expressed in this manner to form a consensus opinion is also provided by the theory. Expert systems are often applied to situations involving con- tinuous variables such as time, distance, and sensor measure- ments. Because the Shafer-Dempster theory is defined over dis- crete propositional spaces, dealing with continuous variables has been approached by partitioning the variable’s range into discrete subsets of possible values. In practice however, this approach has two difficulties: conclusions are sensitive to the selected parti- tioning, and there is no means for specifying belief in a smoothly varying manner. Belief as well as ignorance about a continuous variable should vary smoothly through the range of possible values. By making an appropriate restriction in the class of propositions, smoothly varying beliefs can be expressed. This restriction motivates a ~~_ This research was supported by the Defense Advanced Research Projects Agency under Contract No. N00039-83-K-0656 with the Naval Electronics Systems Command. new, continuous representation for belief over continuous vari- ables that is computationally practical and conceptually appeal- ing. The paper begins with a brief overview of the Shafer- Dempster theory. Section 3 presents a formalism for representing and manipulating evidence about a discretized scalar variable. The representation is generalized to the truly continuous case in Section 4, enabling discourse about any interval of values at any level of detail and permitting the representation of smoothly varying beliefs over those intervals. This is followed by an ex- ample which illustrates the new representation and its use. The paper concludes with a discussion of the theory’s relevance and extensions. 2. Review of Shafer-Dempster Theory Suppose that there is a fixed set of mutually exclusive envi- ronmental possibilities e= {61,62 )“‘, 6,). 
Any proposition of interest can be represented by the sub- set of 8 containing exactly those environmental possibilities for which the proposition is true. The collection of all propositions (i.e., the power set of 9) constitutes the frame of discernment. Figure 1 shows the power set of 8 (for n = 4) arranged as a tree. AuBuC AuBuD AuCuD BuCuD I..\ yy7 A B C D Figure 1: The Frame of Discernment: 8 = {A, B, C, D} 308 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. The nodes of the tree are the propositions each node logically implies its ancestors. arranged such that Bodies of evidence (i.e., sets of partial beliefs) are repre- sented by mass distribt~tions that distribute a unit of belief (i.e., mass) across the propositions in 0. In other words, the mass distribution assigns a value of belief in the range [0, l] to each subset of 8, such that c M(F;) = 1 F,C@ M(4) = 0 where nf (F;) is the mass attributed to proposition F;. Viewed intuitively. mass is a body of evidence attributed to the most precise propositions supports. If a portion of mass is attributed to a proposition, it represents a minimal commitment to that proposition as well as all the propositions it implies (i.e., nodes higher in the tree). At the same time, that portion of mass re- mains noncommital with regard to those propositions that imply it (i.e., descendant nodes in the tree). This representation allows one to specify his belief at exactly the level of detail he desires while remaining noncommital toward those propositions about which he is ignorant. Mass attributed directly to the disjunction of all propositions (i.e., 9) is neu- tral with respect to all propositions and represents the degree to which the ci’idcnce fails to support anything. The support for an arbitrary proposition Q, Spt(Q), is the total bclic i attributed by the mass distribution to propositions that imply Q (r’.e., the sum of the mass attributed to Q and all its descendants in the tree). Spt(Q) = c M(K) FiCC? The plausibility, P/s(Q), is the total belief attributed to propo- sitions that do not imply YQ. WQ) = c M(Fi) FinQfO =I- C M(Fi) F, E-6 = 1 - Spt(-Q) For each proposition Q, a mass function defines an interval [Spf(Q), Pls(Q)] that bounds the probability of Q. The differ- ence Pfs(Q) - Spt(Q) p re resents the degree of ignorance; the probability of Q is known exactly if Spt(Q) = P/s(Q). Dempster’s Rule of Combination pools multiple bodies of evidence represented by mass distributions. It takes arbitrarily complex mass distributions Ml and M,, and, as long as they are not completely contradictory, produces a third mass distribution that represents the consensus of those two disparate opinions. The rule moves belief toward propositions that are supported by both bodies of evidence and away from all others. For all F;, Fi, Q C 8 MS(Q) = & C Ml(Fi) * M2(F') F,nF,=Q /\ LO, 3) Cl, 4) /\/\ LO, 2) [I, 3) [2,4) /\/\/\ LO,1 1 [La R3) [3,4) Figure 2: The Frame of Discernment of a Discretized Variable If ic = 1, the bodies of evidence represent,cd by hiI and h!z are contradictory, and their combination is not defined. It is interesting to observe that Dempster’s Rule is both commutative and associative, allowing bodies of evidence to be combined in any order and grouping. A thorough treatment of the Shafer- Dempster theory can be found in Dempster [1] and Shafer [4]. 3. 
Discrete Analysis of a Random Variable The standard approach to reasoning with continuous vari- ables under the Shafer-Dempster theory has been to associate propositions with portions of the number line. Mass can then be attributed to individual propositions that correspond to ar- bitrary sets of points on the number line, and mass assignments from disparate sources can be combined using Dempster‘s Rule by computing the intersections of these sets. This approach has several undesirable properties. Because mass must be assigned to specific propositions, com- putations based on such a mass function can be critically sensitive to slight variations in the proposition of interest. For example, Spt([O,2))’ may differ greatly from Spt([O, 1.99)) if there happens to be mass assigned to a proposition such as Spt([l,2)). This type of discontinuity is an artifact of the way the propositional space is discretized and may not be indicative of the underlying beliefs. Secondly, the traditional approach provides no means for specifying a smoothly varying set of beliefs about the vallle of a continuous variable. Intuitively, one would prefer a hrlief func- tion that varies gradually with both the magnitude of the propo- sition of interest and the level of detail of the proposition. The following observation provides the key to overcome these difficulties: when reasoning about the value or a cant imlous vari- able, expert systems are most often interested in whether or not the value lies within some contiguous range of values. For rsam- pie, a proposition of interest might be that today’s temperature is between 65” and 75”. Rarely does a situation arise in which a disjoint subset would be a proposition of interest (such as “the temperat.ure is either between 45” and 50” or between 70” and 80”“). This observation allows the frame of discernment (0 be ‘Here Spt([O,Z)) d enotes the proposition that the value of the variable is in the interval [0,2). W e use open-ended intervals for simplicity. 309 4 3 END POINT STARTING POINT 0 1 2 3 STARTING POINT 0 a b [O, 4) [l, 4) [2,4) [3,4) I [0,3) [I,31 [2,3) 2 [O, 2) [I, 2) 1 t? LO, 1) Figure 3: The hlass Function of a Discretized Variable restricted ability to to contain represent a only contiguous intervals, wide range of interesting yet to retain propositions. b END POINT (4 Spt( b a 0 a1 w Figure 4: Computation of Support and Plausibility - Discrete Case The restriction provides several powerful simplifications. Imagine dividing the number line from 0 to N into N inter- vals of unit length. The number of propositions in this frame of discernment is reduced from 2N (the size of the power set) to approximately i Sz. Figure 2 depicts the simplified tree. The computation of the intersection of pairs of propositions in Demp- ster’s Rule is reduced to a simple intersection of contiguous inter- vals. Furthermore, the restricted frame of discernment is a class of subsets which is closed under the application of Dempster’s Rule so that pooled evidence can always be represented in the same propositional space (i.e., contiguous intervals). The structure of the tree suggests the representation of the frame of discernment as a triangular matrix as shown in Figure 3. IIere the abscissa specifies the beginning of an interval and the ordinate specifies the endpoint. The set 8, which represents all the environmental possibilities, is the interval [0, N) and is rep- resented by the upper left-hand entry. 
The atomic propositions, the intervals of minimum length, are located along the diagonal. Intervals with a common starting point are located in the same column while those with a common endpoint are in the same row. It is easy to see that the matrix of Figure 3 bears a strong resemblance to the tree of Figure 2. A mass function of a discretized variable can now be rep- resented as a triangular matrix. To assign a mass of .l to the interval [2,4) f or example, we enter .l at the corresponding lo- cation in the matrix. Additional beliefs fill out the remainder of the matrix. As with any mass function, Shafer-Dempster theory requires that the entries in the matrix sum to one. The computation of Spt(Q) and PIs(Q) can be easily un- derstood graphically. Spt(Q) is the sum of the masses of those intervals wholly contained in Q (the shaded area of Figure 4(a)), and P/s(Q) is the sum of the masses of the intervals whose in- tersection with Q is not empty (the shaded area of Figure 4(b)). The sum of the masses in the difference of those two regions is the ignorance remaining about proposition Q. Mathematically, (using the obvious notation)2 STARTING POINT 0 a b the b-1 N Spt([a, b)) = ‘2 f: WZ,Y) z=a y=z+1 ‘Here we use M(z,y) to represent the mass associated with the interval [z,y). z=O y=l+max(a,x) Given two mass functions represented by triangular matrices, one can obtain a third mass function that represents the pooled evidence using Dempster’s Rule. The mathematics of intersecting sets is straightforward with this representation, and Dempster’s Rule can be rewritten as follows: - hfl(a, b) . &(a, 6) ) N-2 N-t N-I N p=Og=pSl r=9 cr+l 4. Generalization to Continuous Random Variables The generalization from a finite number of discrete intervals to an infinite number of infinitesimal intervals is made using the standard ploys of calculus. In the limit’ as the width of the inter- vals shrinks to zero, the triangular matrix becomes a triangular region where any interval is represented by its location in Carte- sian coordinates. Let’s examine some properties of the region more closely (Figure 5). Th e universal set 8 (the interval [0, N]‘) is located at the upper left-hand corner. Points along the hypotenuse refer to individual points along the number line. As before, points in the same vertical or horizontal line refer to intervals with identical aWe switch to closed intervals for the continuous case to simplify the mathe- matics. We are no longer concerned with an atomic set of mutually exclusive propositions. 310 Intervals which contain [a, bl \ STARTING POINT yo a b N STARTING POINT STARTING POINT END POINT Intervals of constant Intervals of 0 width, i.e., exact points along the number line Point corresponding \ to the interval [a, b] Successively larger intervals \ centered around [a, bl Region of intervals wholly contained in [a, bl Figure 5: The Continuous Frame of Discernment start or end points. Points along a northwesterly ray from some point [a, b] correspond to successively larger intervals centered around [a, 61. Points along a northeasterly line refer to intervals of identical width, thus representing propositions with a common level of detail. The triangular region is, in a sense, the continuous analog of the tree structure of Figure 2. A continuous mass funct,ion with all the desirable properties mentioned earlier is represented by a surface over this region. 
The extent to which the volume under the surface is pushed to- ward the northwest corner (Q) indicates the overall degree of ignorance. Concentrating all the volume along the hypotenuse corresponds to knowin, p the probability density function of the variable exactly. ~11alogo11sly with the discrete case, Spt([a, 61) is the volume under the surface within the region shaded in Figure 6(a). Figure G(b) shows the region containing Pf~([a,b]). In mathematical terms, The extension of Dempster’s Rule to the continuous case yields the following result: Ms(a, 6) = & 1’ lgN[M1(2, 6) . &(a, Y) + M2(z, b) - M&4 d +~l(~,~).~~~(~,Y)+~z(~,~)~~l(~,Y)l dYdZ b END POINT a Figure 6: ous Case Computation of Support and Plausibilit,y -- Continu- This can be construed as a form of convolution of the two mass functions being pooled. As in the discrete case, the resultming mass function can be represented in the same formalism. In theory, if we desire to assign mass to a precise interval [u, 61, we must use impulse functions of finite volume at t.he cor- responding point. The degree to which we cannot be so precise about the interval represents the degree to which the impulse is spread out to neighboring points. If impulse funct,ions are present, the rule of combination becomes slightly more complex since we must take care not to count certain combinations dou- bly. Impulse functions need only be considered when merging discrete with continuous mass functions. 5. Example We now present a simple example to tation and the combination of evidence: illustrate the reprcscn- The state highway patrol is attempting to identify speed- ers on Interstate 80. A patrolman on a motorcycle ob serves that his speedometer reads 60 mph when matching speed with a suspected speeder. Meanwhile, a parked pa- trolman obtains a reading on his radar gun of 57 mph for the same vehicle. Is this sufTicient evidence to issue a traffic citation for speeding? The first thing to do is to construct mass functions for both bodies of evidence. Here we will simply present intuitively rca- sonable functions; a formal theory for deriving mass functions from sensor measurements is the subject of a future paper. Fig- ure 7(a) depicts the mass function for the motorcycle spetdome- ter reading. The frame of discernment has been restricted to the range from 50 to 65 mph (i.e., 9 = (50,651) in order to focus on these values. Values outside that range are considered impossible in this example. The most precise interval that mass has beeri committed to is [58,62], indicating that the precision of the pa- trolman reading his speedometer is no better than f2 mph. The remainder of the mass function attributes mass to successively larger intervals centered around 60 mph (until the upper limit of 65 is reached at the bend in the ridge). This represents the un- k = J,“/,“/,“/r” [M (P, q)%(r, ~+WP, d-W (r, 41 dadrddp biased ignorance associated with inaccuracy in the speedometer or with the patrolman not matching speeds properly. Note how [55,651 5 65 50 65 50 65 (a) Mass function (b) Support (c) Plausibility Figure 7: Representation of Evidence from the Speedometer Reading 50 65 50 65 50 65 (a) Mass function (b) Support (c) Plausibilit} Figure 8: Representation of Evidence from the Radar Gun this differs from an ordinary probabilit,y distribution. The sup- port and plausibility for each interval have been computed from the mass function and plotted in Figures 7(b) and (c). 
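For the discretized form of this example, the computations of Section 3 reduce to sums and products over interval-keyed mass assignments. The sketch below uses made-up masses that only roughly mimic the speedometer and radar evidence of Figures 7 and 8, so the numbers it prints differ from those quoted in the text; it is meant only to show how Spt, Pls, and Dempster's Rule operate on contiguous intervals.

```python
# Sketch of interval-based belief functions on the frame [50, 65).
# The mass assignments below are illustrative stand-ins for the (continuous)
# functions of Figures 7 and 8, not the paper's actual values.

FRAME = (50, 65)

def spt(m, q):
    """Support: total mass on intervals wholly contained in q."""
    qa, qb = q
    return sum(v for (a, b), v in m.items() if qa <= a and b <= qb)

def pls(m, q):
    """Plausibility: total mass on intervals that intersect q."""
    qa, qb = q
    return sum(v for (a, b), v in m.items() if a < qb and qa < b)

def combine(m1, m2):
    """Dempster's Rule: intersect intervals, renormalize by 1 - k."""
    raw, conflict = {}, 0.0
    for (a1, b1), v1 in m1.items():
        for (a2, b2), v2 in m2.items():
            lo, hi = max(a1, a2), min(b1, b2)
            if lo < hi:
                raw[(lo, hi)] = raw.get((lo, hi), 0.0) + v1 * v2
            else:
                conflict += v1 * v2        # mass assigned to empty intersections
    return {iv: v / (1.0 - conflict) for iv, v in raw.items()}

# Speedometer: reading of 60, roughly +/-2 precision, plus broader ignorance.
speedo = {(58, 62): 0.6, (55, 65): 0.3, FRAME: 0.1}
# Radar gun: reading of 57, tighter, with a small chance of gross error.
radar = {(56, 58): 0.7, (54, 60): 0.2, FRAME: 0.1}

pooled = combine(speedo, radar)
speeding = (55, 65)
print(spt(speedo, speeding), pls(speedo, speeding))
print(spt(pooled, speeding), pls(pooled, speeding))
```

With these made-up inputs, pooling concentrates mass on narrower intervals in the high 50s, the same qualitative narrowing the paper reports for the combined evidence of Figure 9.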
These plots clearly show how the beliefs vary smoothly as the proposi- tion of interest is varied. Support and plausibility both increase monotonically toward one as the interval is widened. The dif- ference between these surfaces at any point represents the igno- rance remaining about the probability that the true value lies in the interval corresponding to that point. The support for the proposition “the suspect is speeding” is Spt([55,65]) = -28 and Pls([55,65]) = 1.0, indicating the probability the car was travel- ing greater than 55 mph is between .28 and 1.0. Figure 8(a) shows the mass function for the evidence ob- tained with the radar gun. Some insight can be gained by com- paring it with the speedometer mass function. The ridge, which is centered at 57 mph, is further to the left indicating a lower mea- sured speed. There is more mass near the hypotenuse reflecting a more accurate instrument. There is a peak at 8 indicating the possibility of a gross error that provides no information about the true speed. Based on the evidence from the radar gun, this mass function provides Sp1([55,65]) = -23 and Hs([55,65]) = 1.0. The support and plausibility surfaces are plotted in Figures 8(b) and (c). The values of plausibilit,y along the hypotrnusc constitute a curve showing the plausibility of any individual speed. Notice how the curve along the hypotenuse is more peaked in Figure 8(c) than in Figure 7(c), reflecting greater conviction. Given these two mass functions, Dempster’s Rule is u>cd to compute a third mass function representing the combination of the two bodies of evidence (Figure 9). Herr, the two ridges arc still visible with some mass having been “sprtad” bttncen tllc ridges. This shows support for the intermediate values; that are common to both bodies of evidence. Additionally, some rna~s has shifted away from 0 toward the hypotenuse indicating an incrr- mental narrowing of belief. The support and p!allsibilit y sr~rfnces show the bounds on the probabilities of all intervals of speed. The support surface has generally risen and the plausibility surface along the hypotenuse has grown more peaked, showing that the combination of evidence has strengthened and refined out be- liefs. This combination of evidence yields Spt([55,65]) = .4-l and P19([55,65]) = 1.0, meaning that there is at least a 44% chance that the car was speeding and that there is no evidence to the contrary. This may still be insuf%cient evidence to prove the car was speeding. The important point is that the mass function captures exactly those beliefs that are warranted by the evidence, 312 65 50 50 65 50 65 50 65 (a) h4ass function (b) Support (c) Plausibility Figure 9: Representation of Combined Evidence without overcommitting or understating what is known. Addi- tional evidence can be combined in the same fashion to yield mass functions that may or may not change our belief in propositions about the speed of the car. 6. Discussion Restricting the frame of discernment to include only con- tiguous intervals along the number line provides the key to the computational and conceptual simplicity of the framework. In particular, it reduces the space of propositions from 0(2n) to O(n’) where n is the number of atomic possibilities. In most cases, the restriction is a natural one because we would rarely expect to encounter disjoint intervals. Representing the mass function as a two-dimensional surface permits the specification of smoothly varyin, p brliefs. 
A gradual shift in an interval of in- terest incurs a gradual change in the associated support for that interval. Similarly, a gradual widening of an interval incurs a gradual increase in support. ‘4s an extension, one may expand the frame of discem- ment to include intervals that “wrap around” the endpoint N. This enlarged class of subsets would allow the representation of M(l[a, b]) and is also closed under the application of Dempster’s Rule. In this case the triangular mass function becomes a full square (with a discontinuity along the diagonal) and formulas for S@(e), P/s(.) and Dempster’s Rule can be derived in an analo- gous fashion. Another extension features the ability to reason over multi- dimensional regions. This formulation would allow for bounded ateas and volumes in t,he frame of discernment. In the two- dimensional case, propositions of interest are restricted to be rectangles of fixed orientation. This frame of discernment is closed under Dempster’s Rule and requires a four-dimensional mass function. Regions of higher dimensionality can be repre- sented but the computational burden becomes large. The specification of continuous mass functions is a matter for further investigation. One may envision special sensors that provide not a single value, nor a probability density function as output, but a continuous mass function by which they explicitly express their imprecision as well as their uncertainty about the measurement . Evidential reasoning, as based on the Shafer- Dcnlpstcr thca- ory, allows belief to be represented at any level of detail and allows multiple opinions to be pooled into a conscnc;u\ opinion. The ability to reason evidentially over continuous variables is cru- cial for expert systems that must reach decisions based on unct’r- tain, incomplete, and inaccurate evidence about such quant it it.5 as time, distance. and sensor measurements. This paper provides a novel representation that permits a conceptually appealing im- plementation of Shafer-Dempster theory applied to continuous variables. It provides the means for expressing belief as a contin- uous function over cont,iguous intervals of contin1lous!:; \-iLrj in:: widths. REFERENCES [l] Dempstcr, Arthur P., “A Generalization of Bayesian In- ference’, Journal of the Royal Statistical Society SO(Series B), 1968, pp. 205-2-17. [2] Lowrance, John D., and Garvey, Thomas D., ‘Evidential Reasoning: A Developing Concept”, Proceedings of the IEEE International Conference on Cybernetics and Soci- ety, October 1982, pp. 6-9. [3] Lowrance, John D., and Garvey, Thomas D., “Eviden- tial Reasoning: An Implement!ation for hlultisensor Inte- gration”, Technical Report TN 307, Artificial Intelligence Center, SRI International, hlenlo Park, California, Dccem- ber 1983. [4] Shafer, Glenn A., A Mathematical Theory of Evidence. Princeton University Press, New Jersey, 1976. 313
The Tractability of Subsumption in Frame-Based Description Languages Ronald J. Brachman Hector J. Levesque Fairchild Laboratory for Artificial Intelligence Research 4001 Miranda Avenue Palo Alto, California 94304 ABSTRACT A knowledge representation system provides an important ser- vice to the rest of a knowledge-based system: it computes au- tomatically a set of inferences over the beliefs encoded within it. Given that the knowledge-based system relies on these infer- ences in the midst of its operation (i.e., its diagnosis, planning, or whatever), their computational tractability is an important concern. Here we present evidence as to how the cost of comput- ing one kind of inference is directly related to the expressiveness of the representation language. As it turns out, this cost is per- ilously sensitive to small changes in the representation language. Even a seemingly simple frame-based description language can pose intractable computational obstacles. 1. Introduction There are many different styles of knowledge representation system in use in Artificial Intelligence programs, but they all have at least this in common: the representation system is supposed to provide both a repository for the beliefs of the knowledge-based system in which it is embedded, as well as automatic inferences over those beliefs. Typ ical inferences automatically computed by AI representation systems include inheritance of properties, set membership and set inclusion, part/subpart inferences, type subsumption, and resolution. Here we address a fundamental problem in the nature of the service to be provided by knowledge representation systems: the greater the expressiveness of the language for representing knowledge, the harder it becomes to compute the needed inferences (see [7] for an overview of this tradeoff). In this brief paper, we present a formal analysis of the computational cost of expressiveness in a simple frame-based description language. We illustrate how great care needs to be taken in the design of a representational facility, even when our intuitions about the language tell us that it is a simple one. As it turns out, even an apparently modest representation language can prove intractable. 2. Subsumption in Frame Languages Among the more popular representation languages in use today are those based on the notion of frames (see, for example, [l], (31, and 191). Frames give us the ability to define structured types; typically a frame comprises a set of more general frames (its super/rames) as well as a set of descriptions of the attributes (slots) of instances of the frame. The most common type of slot description specifies a restriction on the value of the filler of the slot for all instances of the frame. The restriction can be as specific as a particular value that all instances of the frame must exhibit (alternatively, the value may be just a de/auf& in which case an individual inherits the value provided he does not override it). or it mav be a more neneral constraint on attribute values, in which case this value restriction is usually a pointer to another frame. Less commonly, the number of required fillers is also specified in a slot restriction (often in terms of a minimum and a maximum number of attribute values). 1 The generalization relation between frame and superframe, or between two frames where one is simply a more restricted version of another, implicitly forms a tozonomy, or inheritance Aierorchy. 
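In data-structure terms, a frame of this kind carries a list of superframes together with a set of slot descriptions, each of which may give a value restriction (or a default) and a minimum and maximum number of fillers. A minimal sketch follows; the class and field names are illustrative and belong to no particular frame system.

```python
# Minimal sketch of a frame with superframes and slot restrictions.
# Names are illustrative, not those of any particular frame language.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SlotRestriction:
    slot: str                            # attribute name, e.g. "child"
    value_restriction: Optional[str] = None   # frame all fillers must satisfy
    default: Optional[str] = None        # default value, overridable per instance
    min_fillers: int = 0                 # number restriction: at least this many
    max_fillers: Optional[int] = None    # ... and at most this many, if given

@dataclass
class Frame:
    name: str
    superframes: List[str]
    slots: List[SlotRestriction]

# A hypothetical frame: a person with at least one pet, all of them dogs.
dog_owner = Frame(
    name="DOG-OWNER",
    superframes=["PERSON"],
    slots=[SlotRestriction(slot="pet", value_restriction="DOG", min_fillers=1)],
)
```

A structured type of this shape is what the subsumption computation discussed below operates over.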
Notationally, a frame might be defined by a list of superframes (with either an explicit or implicit “isa” relation [Z]), followed by a set of slot restrictions expressed by attribute/value-description pairs (with attribute and value-description usually separated by a colon). For example, the simple frame [PERSON child (2 1): son : LAYYER daughter: DOCTOR] is intended to be a struct,ured type representing the concept of a person that has at least one child, and all of whose sons (i.e., male children) are lawyers and all of whose daughters are doctors. Similarly, the more complicated frame, [STUDENT, FEMALE department: COMPUTER-SCIENCE enrolled-course (2 3) : [GRADUATE-COURSE department: ENGINEERING-DEPARTMENT]] is intended to be a structured type that describes female Computer Science students taking at least three graduate courses in a department within a school of Engineering.’ There is a natural correspondence between this frame form of de- scription and noun phrases in natural language. For example, the above frame might just as well have been written as “student and a female whose department is computer-science, and who has at least 3 enrolled-courses, each of which is a graduate-course whose department is an engineering-department.” A simple set of trans- lation rules would allow us to move easily from frame form to (almost) readable English.’ ‘ While the use of number restrictions is not widespread, they have been used extensively in KL-ONE (S] and Ianguages Iike it 141. They seem to be a useful generalization of the existential reading of slots (see below), so we include them here. 2Typically, frames are given nomu as well; for example, we might have labeled our first example frame, “PROUD-PARENT”. We have explicitIy chosen to avoid these here so as to eliminate confusion about their meanings. For this paper, we are interested in relations among frames implied by their rlrcrctwe only (see below), and will assume that atoms are all independent. aFor example, the list of superframes would translate into a conjunction of nouns (“otudent and a female”). A slot that had only one filler might translate into a simple “whose” clause (‘whose deDutmmt is computer rciencr”). And a slot From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. One interesting property of these structured types is that we do a male, and a person (i.e., a man). In general, z is an (AHD cl c2 . . . not have to state explicitly when one of them is below another in the taxonomy. The descriptions themselves implicitly define a taxonomy of subsumption, where type A subsumes type B if, by virtue of the form of A and B, every instance of B must be an instance of A. In other words, it can be determined that being an A is implicit in being a B, based only on the structure of the two terms (no “user” needs to make an explicit statement of this relationship). For example, without any world knowledge, we can determine that the type “person” subsumes the type Yperson each of whose male friends is a doctor”, which in turn subsumes is a doctor Uperson each of whose friends specialty is surgery.” Similarly, “person who has at least 2 children” subsumes “person who has at least 3 male children”. The computation of analytic relations like subsumption (and others, such as difijointnesesee 141) is arguably the most important service to be provided by a frame description system (see [4] for evidence of this). 
If this service is to be provided in a reasonable fashion to the rest of a knowledge-based system, then these relations must be determined in a timely way. Thus, while expressive power is typically the most immediate concern of representation language designers, it cannot be addressed without simultaneous consideration of its computational implications.

Computational cost concomitant with expressive power has been treated in depth in the arena of formal languages like that of first-order logic. However, while frames have been used extensively in AI systems, and have been found expressively adequate for some tasks, their intrinsic computational properties have not been accounted for. We have explored the complexity of determining subsumption in a family of frame-based description languages, and have found that it is in fact remarkably sensitive to what seem to be small changes in the representational vocabulary. In order to illustrate this surprisingly touchy tradeoff, we here examine in detail a representative frame language and a simple variant.

3. A Formal Frame Description Language

Let us consider a simple description language, FL, with two major syntactic types: concepts and roles. These will correspond to the typically less well-defined notions of "frame" and "slot". Intuitively, we think of concepts as representing individuals, and roles as representing relations between individuals. FL has the following grammar:

<concept> ::= <atom> | (AND <concept1> ... <conceptn>) | (ALL <role> <concept>) | (SOME <role>)
<role> ::= <atom> | (RESTR <role> <concept>)

While the linear syntax is a bit unorthodox, FL is actually a distillation of the operators in typical frame languages. Atoms are the names of primitive (undefined) concepts. AND constructions represent conjoined concepts, so for example, (AND adult male person) would represent the concept of something that was at the same time an adult, a male, and a person (i.e., a man). In general, x is an (AND c1 c2 ... cn) iff x is a c1 and a c2 and ... and a cn. This allows us to put several properties (i.e., superconcepts or slot restrictions) together in the definition of a concept. The ALL construct provides a value- or type restriction on the fillers of a role (x is an (ALL r c) iff each r of x is a c). Thus (ALL child doctor) corresponds to the concept of something all of whose children are doctors. It is a way to restrict the value of a slot at a frame. The SOME operator guarantees that there will be at least one filler of the role named (x is a (SOME r) iff x has at least one r). For instance, (AND person (SOME child)) would represent the concept of a parent. This is a way to introduce a slot at a frame. Note that in the more common frame languages, the ALL and SOME are not broken out as separate operators, but instead, either every slot restriction is considered to have both universal and existential import, or exclusively one or the other (or it may even be left unspecified).4 Our language allows for arbitrary numbers of role fillers, and allows the SOME and ALL restrictions to be specified independently. Finally, the RESTR construct accounts for roles constrained by the types of their fillers, e.g., (RESTR child male) for a child who is a male, that is, a son (in general, y is a (RESTR r c) of x iff y is an r of x and y is a c).

It is simple to map more standard notations into our frame language. One reading of the frame used as the first example in this paper is "person with at least one child, and each of whose sons is a lawyer and each of whose daughters is a doctor". In FL, that reading would be represented this way:

(AND person
     (SOME child)
     (ALL (RESTR child male) lawyer)
     (ALL (RESTR child female) doctor))

4 See [6] for some further discussion of the import of languages like KRL. As it turns out, the universal/existential distinction is most often moot, because most frame languages allow only single-valued slots. Thus the slot's meaning is reduced to a simple predication on a single-valued function (e.g., a slot/value pair like age:integer means integer(age(x))).

4. Formal Semantics

We now briefly define a straightforward extensional semantics for FL, the intent of which is to provide a precise definition of subsumption. This will be done as follows: imagine that associated with each description is the set of individuals (individuals for concepts, pairs of individuals for roles) it describes. Call that set the extension of the description. Notice that by virtue of the structure of descriptions, their extensions are not independent (for example, the extension of (AND c1 c2) should be the intersection of those of c1 and c2). In general, the structures of two descriptions can imply that the extension of one is always a superset of the extension of the other. In that case, we will say that the first subsumes the second (so, in the case just mentioned, c1 would be said to subsume (AND c1 c2)).

Let D be any set and E be any function from concepts to subsets of D and roles to subsets of the Cartesian product, D x D. So E[c] ⊆ D for any concept c, and E[r] ⊆ D x D for any role r. We will say that E is an extension function over D if and only if

1. E[(AND c1 ... cn)] = ∩i E[ci]
2. E[(ALL r c)] = { x ∈ D | if (x, y) ∈ E[r] then y ∈ E[c] }
3. E[(SOME r)] = { x ∈ D | ∃y [ (x, y) ∈ E[r] ] }
4. E[(RESTR r c)] = { (x, y) ∈ D x D | (x, y) ∈ E[r] and y ∈ E[c] }.

Finally, for any two concepts c1 and c2, we can say that c1 is subsumed by c2 if and only if for any set D and any extension function E over D, E[c1] ⊆ E[c2]. That is, one concept is subsumed by a second concept when all instances of the first, in all extensions, are also instances of the second. From a semantic point of view, subsumption dictates a kind of necessary set inclusion.
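To make the extension-function conditions concrete, the following is a minimal Python sketch that represents FL descriptions as nested tuples and computes their extensions over a small, explicitly enumerated domain. The tuple encoding, the function names, and the toy domain are assumptions made for illustration; they are not notation from the paper.

```python
# Concepts: 'person' (atom), ('AND', c1, ..., cn), ('ALL', r, c), ('SOME', r)
# Roles:    'child' (atom), ('RESTR', r, c)

def ext_concept(c, D, base_c, base_r):
    """Extension E[c] as a subset of D, following clauses 1-4 above."""
    if isinstance(c, str):                      # primitive concept
        return base_c.get(c, set())
    op = c[0]
    if op == 'AND':                             # clause 1: intersection
        result = set(D)
        for ci in c[1:]:
            result &= ext_concept(ci, D, base_c, base_r)
        return result
    if op == 'ALL':                             # clause 2
        R = ext_role(c[1], D, base_c, base_r)
        C = ext_concept(c[2], D, base_c, base_r)
        return {x for x in D if all(y in C for (xx, y) in R if xx == x)}
    if op == 'SOME':                            # clause 3
        return {x for (x, y) in ext_role(c[1], D, base_c, base_r)}
    raise ValueError(c)

def ext_role(r, D, base_c, base_r):
    """Extension E[r] as a set of pairs over D."""
    if isinstance(r, str):                      # primitive role
        return base_r.get(r, set())
    if r[0] == 'RESTR':                         # clause 4
        R = ext_role(r[1], D, base_c, base_r)
        C = ext_concept(r[2], D, base_c, base_r)
        return {(x, y) for (x, y) in R if y in C}
    raise ValueError(r)

# Tiny example domain for the "proud parent" frame of Section 2.
D = {'pat', 'kim', 'lee'}
base_c = {'person': {'pat', 'kim', 'lee'}, 'male': {'kim'}, 'female': {'lee'},
          'lawyer': {'kim'}, 'doctor': {'lee'}}
base_r = {'child': {('pat', 'kim'), ('pat', 'lee')}}
proud_parent = ('AND', 'person', ('SOME', 'child'),
                ('ALL', ('RESTR', 'child', 'male'), 'lawyer'),
                ('ALL', ('RESTR', 'child', 'female'), 'doctor'))
print(ext_concept(proud_parent, D, base_c, base_r))   # -> {'pat'}
```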
For an illustration of how this is an appropriate view of subsumption, let us consider two descriptions in FL, d1 and d2, where d1 subsumes d2:

d1 = (AND person (ALL child doctor))
d2 = (AND (AND person (ALL child rich))
          (AND male
               (ALL (RESTR child rich)
                    (AND doctor (SOME (RESTR specialty surgery))))))

d1 corresponds to "person each of whose children is a doctor," and d2 corresponds to "person each of whose children is rich, and a male each of whose rich children is a doctor who has a surgery specialty." A proof that d1 subsumes d2, based on our formal definition of subsumption, might go as follows. Let D be any set, E any extension function over D, and x any element of E[d2]. By applying (1) above to d2 twice, we know that x ∈ E[person] and that by (2), if (x, y) ∈ E[child], then y ∈ E[rich], and so by (4), (x, y) ∈ E[(RESTR child rich)]. Also, by (2), if (x, y) ∈ E[(RESTR child rich)], then, by (1) and the definition of d2, y ∈ E[doctor]. Putting these two together, we have that if (x, y) ∈ E[child], then y ∈ E[doctor]. Since x ∈ E[person], then by (2) and (1), x ∈ E[d1]. To summarize, because all of the children of a d2 are rich, and each of a d2's rich children is a certain kind of doctor, then all of d2's children are doctors. Because any d2 is also a person, the description d2 is subsumed by the description d1.

5. Determining Subsumption

Given a precise definition of subsumption, we can now consider algorithms for calculating subsumption between descriptions. Intuitively, this seems to present no real problems. To determine if a subsumes b, what we have to do is make sure that each component of a is "implied" by some component (or components) of b, exactly the way we just determined that d1 subsumed d2. Moreover, the type of "implication" we need should be fairly simple since FL has neither a negation nor a disjunction operator.

Unfortunately, such intuitions can be nastily out of line. In particular, let us consider a slight variant of FL; call it FL-. FL- includes all of FL except for the RESTR operator. On the surface, the difference between FL- and FL seems expressively minor. But it turns out that it is computationally very significant. In particular, we have found an O(n^2) algorithm for determining subsumption in FL-, but have proven that the same problem for FL is intractable. In the rest of this section, we sketch the form of our algorithm for FL- and the proof that subsumption for FL is as hard as testing for propositional tautologies, and therefore most likely unsolvable in polynomial time.

Subsumption Algorithm for FL-: SUBS?[a,b]
1. Flatten both a and b by removing all nested AND operators. So, for example, (AND x (AND y z) w) becomes (AND x y z w).
2. Collect all arguments to an ALL for a given role. For example, (AND (ALL r (AND a b c)) (ALL r (AND ... x))) becomes (AND (ALL r (AND a b c ... x))).
3. Assuming a is now (AND a1 ... am) and b is (AND b1 ... bn), then return true iff for each ai,
   (a) if ai is an atom or a SOME, then one of the bj is ai.
   (b) if ai is (ALL r x), then one of the bj is (ALL r y), where SUBS?[x,y].

The property of SUBS? that we are interested in is the following:

Theorem 1: SUBS? calculates subsumption for FL- in O(n^2) time.

Before considering a proof of the correctness of this algorithm, notice that it operates in O(n^2) time (where n is the length of the longest argument, say). Step 1 can be done in linear time. Step 2 might require a traversal of the expression for each of its elements, and step 3 might require a traversal of b for each element of a, but both of these can be done in O(n^2) time.
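As a concrete illustration of the three steps of SUBS?, here is a minimal Python sketch using the same tuple encoding as the earlier sketch (atoms as strings, compound terms as tuples). It is an illustrative reconstruction of the algorithm as stated, not the authors' code.

```python
def flatten(desc):
    """Steps 1 and 2: splice nested ANDs and merge ALLs on the same role."""
    if isinstance(desc, str) or desc[0] != 'AND':
        desc = ('AND', desc)                       # treat a lone term as a one-element AND
    parts = []
    for d in desc[1:]:
        if not isinstance(d, str) and d[0] == 'AND':
            parts.extend(flatten(d)[1:])           # step 1
        else:
            parts.append(d)
    alls, rest = {}, []                            # step 2: role -> collected restrictions
    for d in parts:
        if not isinstance(d, str) and d[0] == 'ALL':
            alls.setdefault(d[1], []).append(d[2])
        else:
            rest.append(d)
    for role, cs in alls.items():
        rest.append(('ALL', role, ('AND',) + tuple(cs)))
    return ('AND',) + tuple(rest)

def subs(a, b):
    """Step 3: does a subsume b?  (FL-minus only: atoms, AND, ALL, SOME.)"""
    a, b = flatten(a), flatten(b)
    for ai in a[1:]:
        if isinstance(ai, str) or ai[0] == 'SOME':
            if ai not in b[1:]:                    # (a) atom or SOME must appear in b
                return False
        else:                                      # (b) ai is (ALL r x)
            _, r, x = ai
            if not any(not isinstance(bj, str) and bj[0] == 'ALL' and bj[1] == r
                       and subs(x, bj[2]) for bj in b[1:]):
                return False
    return True

# "person" subsumes "person with a child, all of whose children are doctors":
print(subs('person', ('AND', 'person', ('ALL', 'child', 'doctor'), ('SOME', 'child'))))  # True
print(subs(('AND', 'person', ('SOME', 'child')), 'person'))                              # False
```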
Now, on to the proof that this algorithm indeed calculates subsumption: first we must show that if SUBS?[a,b] is true then a indeed subsumes b (soundness); then we must show the converse (completeness). Before beginning, note that the first two steps of the algorithm do not change the extensions of a and b for any extension function, and so do not affect the correctness of the algorithm.

To see why the algorithm is sound, suppose that SUBS?[a,b] is true and consider one of the conjuncts of a; call it ai. Either ai is among the bj or it is of the form (ALL r x). In the latter case, there is a (ALL r y) among the bj, where SUBS?[x,y]. Then, by induction, any extension of y must be a subset of x's and so any extension of bj must be a subset of ai's. So no matter what ai is, the extension of b (which is the conjunction of all the bj's) must be a subset of the extension of ai. Since this is true for every ai, the extension of b must also be a subset of the extension of a. So, whenever SUBS?[a,b] is true, a subsumes b.

The proof of the completeness of the algorithm is a bit trickier. Here we have to be able to show that anytime SUBS?[a,b] is false, there is an extension function that does not assign a to a superset of what it assigns b (i.e., in some possible situation, a b is not an a). There are three cases to consider, and for each of them we will define an extension function E over the set {0, 1} that has the property that 1 is in the extension of every description, but 0 is in the extension of b but not that of a.

1. Assume that some atomic ai does not appear among the bj. Let E assign the ordered pairs {(0,1), (1,1)} to every role and {0,1} to every atom except ai, to which it assigns {1}.
2. Assume that ai is (SOME r), which does not appear among the bj. Let E assign {0,1} to every atom and {(0,1), (1,1)} to every role except r, to which it assigns only {(1,1)}.
3. Assume that ai is (ALL r x), where if (ALL r y) appears among the bj, then, by induction, x does not subsume y. Let E* be an extension function not using 0 or 1 but such that some object * is in the extension of y but not of x. Then, let E contain E* and assign {0,1} to every atom and {(0,1), (1,1)} to every role except r, to which it assigns {(1,1), (0,*)}.

In all three cases it can be shown that E[a] is not a superset of E[b], and so, that a does not subsume b when SUBS?[a,b] is false. So, in the end, SUBS? is correct, and calculates subsumption in O(n^2) time.

We now turn our attention to the subsumption problem for full FL. The proof that subsumption of descriptions in FL is intractable is based on a correspondence between this problem and the problem of deciding whether a sentence of propositional logic is implied by another. Specifically, we define a mapping π from propositional sentences in conjunctive normal form to descriptions in FL that has the property that for any two sentences α and β, α logically implies β iff π[α] is subsumed by π[β].

Suppose p1, p2, ..., pm are propositional letters distinct from A, B, R, and S.

π[p1 ∨ p2 ∨ ... ∨ pn ∨ ¬pn+1 ∨ ¬pn+2 ∨ ... ∨ ¬pm] =
  (AND (ALL (RESTR R p1) A) ... (ALL (RESTR R pn) A)
       (SOME (RESTR R pn+1)) ... (SOME (RESTR R pm)))

Assume that α1, α2, ..., αk are disjunctions of literals not using A, B, R, and S.

π[α1 ∧ α2 ∧ ... ∧ αk] =
  (AND (ALL (RESTR S (SOME (RESTR R A))) B)
       (ALL (RESTR S π[α1]) B) ... (ALL (RESTR S π[αk]) B))

A proof that this mapping has the desired property is given in [8]. What this means is that an algorithm for subsumption can be used to answer questions of implication by first mapping the two sentences into descriptions in FL and then seeing if one is subsumed by the other. Moreover, because π can be calculated efficiently, any good algorithm for subsumption becomes a good one for implication. The key observation here, however, is that there can be no good algorithm for implication. To see this, note that a sentence implies (p ∧ ¬p) just in case it is not satisfiable. But determining the satisfiability of a sentence in this form is NP-complete [5]. Therefore, a special case of the implication problem (where the second argument is (p ∧ ¬p)) is the complement of an NP-complete one and so is a co-NP-complete problem.
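The mapping π is mechanical, and the following short Python sketch makes it concrete, again using the tuple encoding assumed in the earlier sketches. The signed-integer encoding of literals is an assumption for illustration only.

```python
# Literals are ints: +i stands for p_i, -i for "not p_i"; a CNF sentence is a list of clauses.

def pi_clause(clause):
    """Map one disjunction of literals to an FL description: positive literals
    become ALL/RESTR conjuncts, negative literals become SOME/RESTR conjuncts."""
    parts = []
    for lit in clause:
        p = f'p{abs(lit)}'
        if lit > 0:
            parts.append(('ALL', ('RESTR', 'R', p), 'A'))
        else:
            parts.append(('SOME', ('RESTR', 'R', p)))
    return ('AND',) + tuple(parts)

def pi_cnf(clauses):
    """Map a CNF sentence (a conjunction of clauses) to an FL description."""
    parts = [('ALL', ('RESTR', 'S', ('SOME', ('RESTR', 'R', 'A'))), 'B')]
    parts += [('ALL', ('RESTR', 'S', pi_clause(c)), 'B') for c in clauses]
    return ('AND',) + tuple(parts)

# (p1 or not p2) and (p2):
print(pi_cnf([[1, -2], [2]]))
```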
The correspondence between implication and subsumption, then, leads to the following:

Theorem 2: Subsumption for FL is co-NP-complete.

In other words, since a good algorithm for subsumption would lead to a good one for implication, subsumption over descriptions in FL is intractable.5

6. Conclusion

The lesson here is clear: there seems to be a sudden and unexpected "computational cliff" encountered when even a slight change of a certain sort is made to a representation language.6 We are actively engaged in examining other dimensions of representation languages, in an effort to understand exactly what aspect of the representation is responsible for the computational precipice.

Besides warning us to be careful in selecting operators for a knowledge representation language, the tradeoff between expressiveness and computational tractability serves as admonition against blind trust of our intuitions. The change from FL- to FL seemed simple enough, yet caused subsumption to become intractable. Other generalizations to FL- that we have considered appear at least as dangerous, and yet in the end prove no problem at all. For example, we have examined a variant of FL- that generalizes the SOME operator to be an AT-LEAST operator, whereby we can require any number of fillers for a certain role. Further, we might add an operator called ROLE-CHAIN, that allows us to string roles together. Given these two new operators, we could form interesting concepts like "a person with at least two grandchildren": (AND person (AT-LEAST (ROLE-CHAIN child child) 2)). Remarkably enough, even the simultaneous addition of both of these operators to FL- does not cause subsumption to fall off of the computational cliff [8].

5 More precisely, the co-NP-complete problems are strongly believed to be unsolvable in polynomial time.
6 It should be emphasized that the question of tractability is a matter of expressiveness, and not of the particular description language. Here we have used a simple language to illustrate our point, but the tradeoff affects any language that allows the same distinctions to be made.

This line of research probably has a long way to go before the definitive story is told on the complexity of computing with AI description languages. However, we seem to have made a significant start in formally analyzing an essential frame language and its variants. Further, the methodology itself is an important factor. Crucially, the notion of subsumption in this account is driven from the semantics, so that there is always a measure of correctness for the algorithms we design to compute it. Thus, we will not fall prey to one problem that has plagued work in this area since its inception: the excuse that what subsumption (or any other important relation) means is "what the code does to compute it". In fact, our approach so well defines the problem that we can find cases where it is provable that no algorithm of a certain sort can be designed.

Acknowledgements
This research was done in the context of the KRYPTON project, and as a result, benefited greatly from discussions with Richard Fikes, Peter Patel-Schneider, and Victoria Pigman.

REFERENCES
[1] Bobrow, D. G., and T. Winograd, "An Overview of KRL, a Knowledge Representation Language." Cognitive Science, Vol. 1, No. 1, January, 1977, pp. 3-46.
[2] Brachman, R. J., "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks." IEEE Computer, Special Issue on Knowledge Representation, October, 1983, pp. 30-36.
[3] Brachman, R. J., and J. G. Schmolze, "An Overview of the KL-ONE Knowledge Representation System." Cognitive Science, forthcoming.
[4] Brachman, R. J., R. E. Fikes, and H. J. Levesque, "Krypton: A Functional Approach to Knowledge Representation." IEEE Computer, Special Issue on Knowledge Representation, October, 1983, pp. 67-73.
[5] Cook, S. A., "The Complexity of Theorem-Proving Procedures." Proc. 3rd Ann. ACM Symposium on Theory of Computing. New York: Association for Computing Machinery, 1971, pp. 151-158.
[6] Hayes, P. J., "The Logic of Frames." In Frame Conceptions and Text Understanding, D. Metzing (ed.), Berlin: Walter de Gruyter & Co., 1979, pp. 46-61.
[7] Levesque, H. J., "A Fundamental Tradeoff in Knowledge Representation and Reasoning." Proc. CSCSI-84, London, Ontario, May, 1984, pp. 141-152.
[8] Levesque, H. J., and R. J. Brachman, "Some Results on the Complexity of Subsumption in Frame-Based Description Languages." In preparation.
[9] Minsky, M., "A Framework for Representing Knowledge." In Mind Design, J. Haugeland (ed.). Cambridge, MA: MIT Press, 1981, pp. 95-128.
LIKELIHOOD, PROBABILITY, AND KNOWLEDGE Joseph Y. Halpern IBM Research Laboratory San Jose, California 95193 David A. McAllester MIT Artificial Intelligence Laboratory Cambridge, Massachusetts 02 139 Abstract: The modal logic LL was introduced by Halpern and Rabin [HR] as a means of doing qualitative reasoning about likelihood. Here the relationship between LL and probability theory is examined. It is shown that there is a way of translating probability assertions into LL in a sound manner, so that LL in some sense can capture the probabilistic interpretation of likelihood. However, the translation is subtle; several more obvious attempts are shown to lead to inconsistencies. We also extend LL by adding modal operators for knowledge. The propositional version of the resulting logic LLK is shown to have a complete axiomatization and to be decidable in exponential time, provably the best possible. 1. Introduction Reasoning in the presence of incomplete knowledge plays an important role in many AI expert systems. One way of representing partially constrained situations is with sentences of first-order logic (cf. [MH,Li,Re]). Any set of first-order sentences specifies a set of possible worlds (first-order models). While such assertions can deal with partial knowledge, they cannot adequately represent knowledge about relative likelihood. This problem was noted by McCarthy and Hayes ([MH]), who made the following comments: We agree that the formalism will eventually have to allow statements about the probabilities of events, but attaching probabilities to all statements has the following objections: 1. It is not clear how to attach probabilities to statements containing quantifiers in such a way that corresponds to the amount of conviction that people have. 2. The information necessary to assign numerical probabilities is not ordinarily available. Therefore, a formalism that required numerical probabilities would be epistemologically inadequate. There have been proposals for representing likelihood where a numerical estimate, or certainty factor, is assigned to each bit of information and to each conclusion drawn from that information (see [DBS,Sh,Zal] for some examples). But none of these proposals have been able to adequately satisfy the objections raised by McCarthy and Hayes. It is never quite clear where the numerical estimates are coming from; nor do these proposals seem to capture how people approach such reasoning. While people seem quite prepared to give qualitative estimates of likelihood, they are often notoriously unwilling to give precise numerical estimates to outcomes (cf. iSPI). In [HR], Halpern and Rabin introduce a logic LL for reasoning about likelihood. LL uses a modal operator L to help capture the notion of “likely”, and is designed to allow qualitative reasoning about likelihood without the requirement of assigning precise numerical probabilities to outcomes. Indeed, numerical estimates and probability do not enter anywhere in the syntax or semantics of LL. Despite the fact that no use is made of numbers, LL is able to capture many properties of likelihood in an intuitively appealing way. For example, consider the following chain of reasoning: if P, holds, then it is reasonably likely that P, holds, and if P, holds, it is reasonably likely that P, holds. Hence, if P1 holds, it is somewhat likely that P, holds. (Clearly, the longer the chain, the less confidence we have in the likelihood of the conclusion.) 
In LL, this essentially becomes "from P1 → LP2 and P2 → LP3, conclude P1 → L²P3". Note that the powers of L denote dilution of likelihood.

One way of understanding likelihood is via probability theory. To quote [HR], "we can think of likely [the modal operator L] as meaning 'with probability greater than α' (for some user-defined α)". The exact relationship between LL and probability theory is not studied in [HR]. However, a close examination shows that it is not completely straightforward. Indeed, as we show below, if we simply translate "P holds with probability greater than α" by LP, we quickly run into inconsistencies. Nevertheless, we confirm the sentiment in the quote above by showing that there is a way of translating numerical probability statements into LL in such a way that inferences made in LL are sound with respect to this interpretation of likelihood. Roughly speaking, this means that if we have a set of probability assertions about a certain domain, translate them (using the suggested translation) into LL, and then reason in LL, any conclusions we draw will be true when interpreted as probability assertions about the domain. However, our translation is somewhat subtle, as is the proof of its soundness; several more obvious attempts fail. These subtleties also shed some light on nonmonotonic reasoning.

We enrich LL by adding modal operators for knowledge, giving us a logic LLK which allows simultaneous reasoning about both knowledge and likelihood. This extends the logics used in [MSHI, Mo], where knowledge has been treated in an all or nothing way: either a person knows a fact or he doesn't. However, there are many cases in which knowledge is heuristic or probabilistic. For example, suppose I know that Mary is a woman, but I have never met her and therefore do not know how tall she is. Under such circumstances, I consider it unlikely that she is over six feet tall. However, suppose that I am told that she is on the Stanford women's basketball team. My knowledge about her height has now changed, although I still don't know how tall she is. I now consider it reasonably likely that she is over six feet tall. LLK gives us a convenient formal language for reasoning about such situations. LLK can be shown to have a complete axiomatization, which is essentially obtained by combining the complete axiomatization of LL with that of the modal logic of knowledge. In addition, we can show that there is a procedure for deciding validity of LLK formulas which runs in deterministic exponential time, the same as that for LL. This is provably the best possible.

In the next section, we review the syntax and semantics of LL. In Section 3, we discuss the translation of English sentences into LL and show that there is a translation which is sound with respect to the probabilistic interpretation of L. In Section 4, we add knowledge to the system to get the logic LLK. Detailed proofs of theorems and further discussion of the points raised here can be found in [HMc], an expanded version of this paper.

2. Syntax and semantics

We briefly review the syntax and semantics of LL. We follow [HR] with one minor modification: for ease of exposition, we omit the "conceivable" relation in the semantics (and thus identify the operator L* of [HR] with the dual of G). We leave it to the reader to check that all our results will also hold if we reinstate the conceivable relation.
The reader should consult [HR] for motivation and more details.

Syntax: Starting with a set Φ0 = {P, Q, R, ...} of primitive propositions, we build more complicated LL formulas using the propositional connectives ¬ and ∧ and the modal operators G and L. Thus, if p and q are formulas, then so are ¬p, (p ∧ q), Gp ("necessarily p"), and Lp. We omit parentheses if they are clear from context. We also use the abbreviations p ∨ q for ¬(¬p ∧ ¬q), p ⊃ q for ¬p ∨ q, p ≡ q for (p ⊃ q) ∧ (q ⊃ p), Fp ("possibly p") for ¬G¬p, and Lⁱp for L...Lp (i L's).

Semantics: We give semantics to LL formulas by means of Kripke structures. An LL model is a triple M = (S, 𝓡, π), where S is a set of states, 𝓡 is a reflexive binary relation on S (i.e., for all s ∈ S, we have (s,s) ∈ 𝓡), and π: Φ0 × S → {true, false}. (Intuitively, π assigns a truth value to each proposition at all the states.) We can think of (S, 𝓡) as a graph with vertices S and edges 𝓡. If (s,t) ∈ 𝓡, then we say that t is an 𝓡-successor of s. Informally, a state s consists of a set of hypotheses that we take to be "true for now". An 𝓡-successor of s describes a set of hypotheses that is reasonably likely given our current hypotheses. We will say t is reachable (in k steps) from s if, for some finite sequence s0,...,sk, we have s0 = s, sk = t, and (si, si+1) ∈ 𝓡 for i < k. We define M,s ⊨ p, read p is satisfied in state s of model M, by induction on the structure of p:

M,s ⊨ P, for P ∈ Φ0, iff π(P,s) = true,
M,s ⊨ ¬p iff not(M,s ⊨ p),
M,s ⊨ p ∧ q iff M,s ⊨ p and M,s ⊨ q,
M,s ⊨ Gp iff M,t ⊨ p for all t reachable from s,
M,s ⊨ Lp iff M,t ⊨ p for some t with (s,t) ∈ 𝓡.

Definitions: A formula p is satisfiable iff for some M = (S, 𝓡, π) and some s ∈ S we have M,s ⊨ p; p is valid iff for all M = (S, 𝓡, π) and all s ∈ S we have M,s ⊨ p. It is easy to check that p is valid iff ¬p is not satisfiable. If Σ is a set of LL formulas, we write M,s ⊨ Σ iff M,s ⊨ p for every formula p ∈ Σ. Σ semantically implies a formula p, written Σ ⊨ p, if, for every model M and state s in M, we have M,s ⊨ Σ implies M,s ⊨ p.
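The satisfaction clauses above are directly executable on a finite model. The following is a minimal Python sketch of such a model checker; the formula encoding, the two-state example, and all names are illustrative assumptions, not part of the logic's definition.

```python
# An LL model: a set of states S, a reflexive relation R (set of pairs), a valuation pi.
# Formulas: a primitive proposition is a string; compound formulas are tuples
# ('not', p), ('and', p, q), ('G', p), ('L', p).

def reachable(R, s):
    """All states reachable from s by zero or more R-steps."""
    seen, frontier = {s}, [s]
    while frontier:
        t = frontier.pop()
        for (u, v) in R:
            if u == t and v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def sat(M, s, p):
    """M, s |= p, following the clauses given above."""
    S, R, pi = M
    if isinstance(p, str):                      # primitive proposition
        return pi[(p, s)]
    op = p[0]
    if op == 'not':
        return not sat(M, s, p[1])
    if op == 'and':
        return sat(M, s, p[1]) and sat(M, s, p[2])
    if op == 'G':
        return all(sat(M, t, p[1]) for t in reachable(R, s))
    if op == 'L':
        return any(sat(M, t, p[1]) for (u, t) in R if u == s)
    raise ValueError(op)

# Two-state example: at s0 the hypothesis q is not yet held, but an R-successor s1 holds it.
S = {'s0', 's1'}
R = {('s0', 's0'), ('s1', 's1'), ('s0', 's1')}
pi = {('p', 's0'): True, ('p', 's1'): True, ('q', 's0'): False, ('q', 's1'): True}
M = (S, R, pi)
print(sat(M, 's0', ('L', 'q')))   # True: q holds at some R-successor of s0
print(sat(M, 's0', ('G', 'p')))   # True: p holds at every state reachable from s0
```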
Similarly, “the coin is likely to land tails twice in a row” is LGQ, while “it is likely that the coin lands either heads or tails” is LG(PVQ). With these translations, we do not run into the problem described above, for LG(PVQ) is not equivalent to LGPVLGQ. These observations suggest that the only LL formulas which describe real world situations are (Boolean combinations of) formulas of the form L’GC, where C is a Boolean combination of primitive propositions. We will return to this point later. Having successfully dealt with that problem, we next turn our attention to translating statements of conditional probability: “if P, then it is reasonably likely that Q” or “Q is reasonably likely given P”. The obvious translation of “if P then likely Q” would be P+LQ. The argments of the previous paragraph suggest that we should instead use GP+LGQ, but even this translation runs into some problems. Consider a doctor making a medical diagnosis. His view of the world can be described by primitive propositions which stand for diseases, symptoms, and test results. The relationship between these formulas can be represented by a joint probability distribution, or a Venn diagram where the area of each region indicates its probability, and the basic regions correspond to the primitive propositions. For example, the following Venn diagram might represent part of the doctor’s view, where PI and P2 represent diseases, and P, and P, represent symptoms: 138 The diagram shows (among other things) that (a) disease P, is reasonably likely given symptom P,, (b) P, is always a symptom of Pz, (c) if a patient has P2, then it is not reasonably likely that he also has P,, (d) P, and P4 never occur simultaneously. The second statement is clearly G(P,+P,) from which we can deduce GP,+GP,. Now suppose that we represented the first and third statements, as suggested above, by GP,+LGP 1 and GP,+ - LGP 1, respectively. Then simply using propositional reasoning, we could deduce that GP,+LGP, A - LGP t, surely a contradiction. The problem is that when we make such English statements as “P, is reasonably likely given P,” or “the conditional probability of P3 given P, is greater than one half”, we are implicitly saying “given P, and all else being equal” or “given P, and no other information”, P, is likely. We cannot quite say “given P, and no other information” in LL. Indeed, it is not quite clear precisely what this statement means (cf. [HM]). However, we can say “in the absence of any information about the formulas P,,...,Pk which would cause us to conclude otherwise”, and this suffices for our applications. In our present example, P, is reasonably likely given P, as long as we are not given -P, or P2 or P,. Thus, a better translation of “P, is reasonably likely given P,” is: ,G-PIA-GP,A-GP, AGP3+LGP1. Similarly, “if a patient has P2, then it is unlikely that he has P,” can be expressed by: -GP,AGP,+-LGP,. In general, we must put all the necessary caveats into the precondition to avoid contradictions. This translation seems to avoid the problem mentioned above, but how can we be sure that there are no further problems lurking in the bushes? We now show that, in a precise sense, there are not. Fix a finite set of primitive propositions V = {Pl,,..,Pn]. An atom of V is any conjunction QfA...AQ,, where each Qi is either Pi or ,Pi. Note that there are 2” such atoms. Let AT(V) be the set of atoms of V, and let LIT(V)={P,-P]PcV] be the set of literals of V. 
We say a function Pr:AT(V) - [O,l] is a probability assignment on V if %EAT(“)P’(C) = 1. A propositional probability space is a pair W=(V,Pr), where V is a finite set of primitive propositions and Pr is a probability assignment on V. Let BC(V) consist of all the Boolean combinations of the propositions in V. If C,DcBC(V), we write CID if C+D is a propositional validity. We extend Pr to BC(V) via Pr(D) = ‘{CAT / V) IC<DJ Pr(C). If Pr(D)#O, we define the conditional probabi ify of ?Z given D, Pr(C ] D)=Pr(CAD)/Pr(D). We now consider a restricted class of probability statements about the domain W. Fix a with O<a<l. A probability assertion about W is a formula in the least set of formulas closed under disjunction and conjunction, and containing conditional probability statements of the form Pr(C]D)la’ and Pr(C]D)<a’, where iI0, C,DeBC(V) and Pr(D)>O. (Closure under negation is built into these formulas since, for example, ,Pr(C ] D)>ai iff Pr(C ] D)<ai-) Note that by taking D= true, we get Pr(C) > ai or Pr( C)<J, and by taking i=O in the former term, we can assert that a certain statement holds with probability one). Since we are dealing with a discrete probability space, this amounts to saying that the statement is true. Corresponding to these probability assertions about W, we will consider the standard LL formulas over V. These are formed by taking formulas of the form LiGC and -L’GC, i 2 0, where C eBC(V), and closing off under conjunction and disjunction. By the observations above, these are, in some sense, exactly those LL formulas that describe a “real world” situation involving the primitive propositions of V. We want to translate probability assertions about W into standard LL formulas over V. As discussed above, a conditional probability assertion of the form Pr(C ] D) I ai will be translated into a formula of the form -GQ,A...A-GQkAGD + L’GC, where Q1 ,..., Qk are the “necessary caveats”. We now make the notion of a “necessary caveat” precise. Given C,D eBC(V), and QcLIT(V), we say Q has negative (resp. positive) impact on C given D in W if Pr(DAQ)>O and Pr(C 1 DAQ)<Pr(C 1 D) (resp. Pr(DAQ)>O and Pr(C 1 DAQ)>Pr(C 1 D)). Thus Q has negative (resp. positive) impact on C given D in W if discovering Q lowers (resp. increases) the probability of C given D. We say Q has potential negative (resp. positive) impact on C given D in W if for some D’ID, Q has negative (resp. positive) impact on C given D’ in W. Note that if Q does not have potential negative impact on C given D in W, then once we know D, no matter what extra information we get, finding out Q will not lower the probability that C is true, Similar remarks hold for potential positive impact. We define PNI( C,D) = (Q E LIT(V) ] Q has potential negative impact on C given D], PPI(C,D)=[QeLIT(V) ] Q has potential positive impact on C given D]. Now using the idea of potential positive and negative impact, we give a translation q+qt from probability assertions about W to standard formulas over V. We first define [WC I D)I silt = (AQ~ ~NI(c,D)-GQ)AGD + L’GC, [Pr(C I D)<a’l’ = (AQ~~~~(c,D)-GQ)AGD + -L’GC, and then translate conjunctions and disjunctions in the obvious way; i.e., if p, q are probability assertions about W, then (pVq)’ = ptVqt and (pAq)’ = ptAqt. Again we note that the term AQ~ PNI(C,D)Q (rev. A\Q~PPI(c,D)Q) in the translation of Pr(C ] D)ha’ (resp. 
Pr(C ] D)<a’) is intended to capture the idea of “putting in all the necessary caveats in order to avoid contradictions”, We now consider a family of translations Tr,, DeBC(V), from standard LL formulas over V to probability assertions about W. Roughly speaking, we want L’GC to be translated to Pr(C)>a’. This will be the effect of Trtrue. Using Tr, relativizes everything to D; we require this greater generality for technical reasons. Let Tr,(L’Gc) = Pr(C I D)>a’, i?O, TrD( -I-kc) = Pr(C ] D)<cr’, i?O. Again, conjunctions and disjunctions are translated in the obvious way, so that if p, q are standard LL formulas: Tr,(PVq) = TrD(P)VTrD(q) and TrD(PAq) = TrD(P)ATrD(q). Finally, let CON(W)=ICcBC(V) I C is a conjunction of formulas in LIT(V) and Pr(C)>O]. (We take the empty conjunction to be true; of course, Pr(true)=l.) With these definitions in hand, we can now state the theorem which asserts that there is a translation from probability assertions about W into LL which is sound. Theorem 1: Let Z be a set of probability assertions true of W, and Z’ the result of translating these formulas into LL (via p&p’). If q is a standard LL formula which is semantically implied by Z’ (i.e., Z’ kq), then for all DcCQN(W), TrD(q) is a probability assertion true about W. The theorem follows from two lemmas, which are proved in [HMc]. The first shows the relationship between the translations described above. Lemma 1: If q is a probability assertion true of W, then TrD(qt) is true of W for all DeCON(W). (We remark that neither Lemma 1 nor Theorem 1 holds for arbitrary DcBC(V) ( a counterexample is given in [HMc]). Since we are mainly interested in Trtrue, this point will not greatly concern us here, but it is interesting to note that we could have modified the translation p+pt so that Theorem 1 did hold for all DeBC(V) with Pr(D)>O. The idea would be to allow PNI and PPI to include arbitrary elements of BC(V), rather than just literals, The cost of doing this is that the translation could be doubly exponential in the size of V, rather than just linear. If for some reason we are interested in TrD for DcBC(V), another (less expensive) solution to the problem is to add a new primitive proposition Q to V, extend Pr so that Pr(QGD)= 1, and consider TrQ instead.) We next construct an LL model Mw=(S,g.v) corresponding to the propositional probability space W. The set of states S consists of countably many copies of each c EBC(V) with Pr(C)>O. Succesive copies are connected by 2, as well as a state you are likely to move to as your knowledge increases. More formally, S = ((Ci I i10, CcBC(V), Pr(C)>O), e, = ItcitCi), (Ci*Ci+l) Ii201 U I(Ci,Du) I D<C, Pr(D I C)za’+l]. The definition of v is somewhat arbitrary. All we require is that M,Ci /= C, for all Ci ES. For definiteness, we define 7~ is follows. For each CcBC(V) such that Pr(C)>O, choose some atom DcCON(W)nAT(V) such that D<C (such a D must exist since Pr( C) >O). Call this atom AT(C). Then n(P,Ci)=true iff AT(C)sP. We leave it to the reader to check that with this definition, M,C, b C. The following lemma relates truth in Mw to truth in W. Lemma 2: If q is a standard LL formula, then Mw,Co tq iff Trc(q) is true of W. Proof of Theorem 1: Suppose Z is a set of probability assertions true of W, Mw is the canonical model for W constructed above, q is a standard LL formula over V such that X’/=q, and DeCON(W). By Lemma 1, for each formula pcx, we know that Tr,(p’) is true of W. By Lemma 2, it now follows that Mw,Do I= pt. Thus Mw,Do b Et. 
Proof of Theorem 1: Suppose Σ is a set of probability assertions true of W, MW is the canonical model for W constructed above, q is a standard LL formula over V such that Σᵗ ⊨ q, and D ∈ CON(W). By Lemma 1, for each formula p ∈ Σ, we know that TrD(pᵗ) is true of W. By Lemma 2, it now follows that MW, D0 ⊨ pᵗ. Thus MW, D0 ⊨ Σᵗ. Since Σᵗ ⊨ q, we also have MW, D0 ⊨ q. By another application of Lemma 2, it follows that TrD(q) is true of W. □

Discussion of the theorem: Theorem 1 shows that by putting in all the "necessary caveats", we do indeed get a sound translation. But in a real world situation, it is not always possible to compute PNI(C,C') or PPI(C,C'), either because we may not know whether a given literal Q should be in one of these sets, or because the set of primitive propositions V may be so large that the computation is impractical. Indeed, in the examples discussed in [McC], V is viewed as being essentially infinite. If we take P to be "Tweety is a bird" and Q to be "Tweety can fly", then Q is likely given P as long as Tweety is not an ostrich, Tweety is not a penguin, Tweety is not dead, Tweety's wings are not clipped, .... The list of possible disclaimers is endless. Our assumption of having only finitely many primitive propositions does seem to be both epistemologically and practically reasonable in many natural applications. For example, in medical diagnosis we could take V to consist of relevant symptoms, diseases, and possible treatments, where the symptoms are qualitative (his temperature is very high) rather than quantitative (his temperature is 104° F).

In any case, if we cannot compute PNI or PPI, and instead use a subset in the translation, then our reasoning may be unsound (in the sense of Theorem 1). This may help to explain where the nonmonotonicity comes from in certain natural language situations. People often use a type of informal default reasoning, saying "P is likely given Q", without specifying the situations where the default Q may not obtain. Of course, this means that the conclusion Q may occasionally have to be withdrawn in light of further evidence. If, on the other hand, we "play it safe", by replacing PNI(C,C') (resp. PPI(C,C')) wherever it occurs in the translation by a superset, it is straightforward to modify the proof of Theorem 1 to show that the resulting translation is still sound.

We have viewed Theorem 1 as a soundness result. It is natural to ask if there is also a complementary completeness result. For example, suppose q is a standard LL formula over V, and for all propositional probability spaces W = (Pr, V), and all D ∈ CON(W), we have TrD(q) true of W for all choices of α in the translation. Is it then the case that q is a valid LL formula? Unfortunately, the answer is no. To see this, first note that TrD(LGP ∨ LG¬P) = Pr(P | D) ≥ α ∨ Pr(¬P | D) ≥ α is true for all probability models W as long as the threshold likelihood α is chosen ≤ 1/2. Similarly, TrD(¬LGQ ∨ ¬LG¬Q) = Pr(Q | D) < α ∨ Pr(¬Q | D) < α is true for all probability models as long as α > 1/2. Thus TrD(LGP ∨ LG¬P ∨ ¬LGQ ∨ ¬LG¬Q) will be true for all choices of α. But it is easy to see that (LGP ∨ LG¬P ∨ ¬LGQ ∨ ¬LG¬Q) is not a valid LL sentence. The intuitive reason behind this phenomenon is that LL can deal with situations where likelihood is interpreted as being something other than just probability. Thus, while a given LL formula may be true of any situation where L is interpreted as meaning "with probability ≥ α", it may not be true for some other interpretation of L. We could, for example, take LGp to mean "I have some definite information which leads me to believe that p holds with probability ≥ α". With this interpretation, the sentence above would not be valid.

4.
Reasoning about knowledge and likelihood We can augment LL in a straightforward way by adding modal operators for knowledge, much the same way as in [Mo,MSHI]. The syntax of the resulting language, which we call LLK, is the same as that of LL except that we add unary modal operators K l,...,Kn, one for each of the “players” or “agents” 1 ,...,n, and allow formulas of the form Kip (which is intended to mean “player i knows p”). Thus, a typical formula of LLK might be Ki(GQALGP): player i knows that Q is actually the case and it is likely that P is the case. We give semantics to LLK by extending the semantics for LL so that to each knowledge operator Ki there corresponds a binary relation pi which is reflexive, symmetric, and transitive (we remark that the assumption of symmetry gives us the axiom ,KP+K-KP, and can be dropped without affecting any of the results stated below). We can think of a state and all the states reachable from it 140 via the 9 relation as describing a “likelihood distribution”, Two states are joined via the 3yi relation iff player i views them as possible likelihood distributions (rather than just possible worlds, as in [Hi,Mo,MSHI]) given his/her current knowledge. Further details, as well as proofs of the technical results stated for LLK stated in the introduction, can be found in [HMc]. 5. Conclusions We have examined the relationship between the logic LL and probability theory. We have shown that there is a precise sense in which a restricted class of probabilistic assertions about a domain can be captured by LL formulas. However, in order to correctly deal with statements of conditional probability, we must specifically list all the situations in which the conclusion may not hold. The failure to do so in informal human reasoning is frequently the cause of the nonmonotonicity so often observed in such reasoning. (However, we note here in passing a number of the problems which [McD] suggests can be dealt with by nonmonotonic logic can also be dealt with by LL, in a completely monotonic fashion. See [HR] for further discussion on this point.) Even the restricted class of probabilistic assertions which can be dealt with by LL should be enough for many practical applications. Indeed, we view the translation from probability assertions into LL described in Section 3 as a practical tool: a discipline which forces a practitioner to list explicitly all the exceptions to his rules. Of course, this method does not guarantee correctness. If an exception is omitted, then any conclusion made using that rule may be invalid. But, whenever a conclusion is retracted, it should be possible to find the missing exception and correct the rule appropriately. As the discussion after Theorem 1 suggests, LL seems to be able to express some notions of likelihood which probability theory cannot. This may make it applicable in contexts where probability theory is not, It would be interesting to know whether LL is able to capture other notions of reasoning about uncertainty, such as possibility theory ([Zal]) or belief functions ([Sh]). (See the survey paper by Prade [Pr] for a thorough discussion of various approaches to modelling reasoning about uncertainty). A number of other interesting open questions regarding Theorem 1 remain. Is there a semantics for LL, or an interpretation for L, for which a soundness and completeness result in the spirit of Theorem 1 is provable? Can we give nonstandard LL formulas a reasonable interpretation? 
Is there a reasonable syntax for LL in which, in some sense, all formulas are standard? An alternative approach to reasoning about likelihood is fuzzy logic. Indeed, fuzzy logic has attempted to provide a framework for reasoning about notions such as “most”, “few”, “likely”, and “several”, which are common occurrences in natural language (cf. [Za2 I). However, although the syntax of the examples in [Za2] uses these natural language notions, the semantics is still quantitative. It would be interesting to see if LL or LLK could be extended in a reasonable way to deal with the type of examples considered by Zadeh in [Za2]. Another rich area for further work is simultaneous reasoning about knowledge and likelihood. LLK provides a first step, but does not allow, for example, statements of the form “p is more likely than q”. Gardenfors ([Gal) presents a modal logic QP where we can say “p is more likely than q”, but not “p is likely”. The axioms of QP seem more complicated than those of LL, and although QP is decidable, it seems that the decision procedure would be quite complex. More research needs to be done to find an appropriate logic that is both formally and epistemologically adequate. References DBSI IGal MM1 [HMcl [HRI WI D-4 [McCl IMHI [ MSHI] IMcDl [MO] WI IReI tSh1 [SPI R. Davis, B. Buchanan, and E. Shortliffe, Production rules as a representation for a knowledge-based consultation system, Artificial Intelligence 8, 1977, pp. 15-45. P. Gardenfors, Qualitative probability as an intensional logic, Journal of Philosophical Logic 4, 1975, pp. 171-185. J. Y. Halpern and Y. 0. Moses, Towards a theory of knowledge and ignorance, manuscript in preparation, 1984. J. Y. Halpern and D. A. McAllester, Likelihood, probability, and knowledge, IBM RJ4313, 1984. J. Y. Halpern and M. 0. Rabin, A logic to reason about likelihood, in “Proceedings of the 15th Annual Symposium on the Theory of Computing”, 1983, pp. 310-319. J. Hintikka, Knowledge and Belief, Cornell University Press, 1962. W. Lipski, On the logic of incomplete information, in “Proceedings of the 6th International Symposium on Mathematical Foundations of Computer Science”, Lecture Notes in Computer Science 53, Springer-Verlag, 1977. J. McCarthy, Circumscription - a form on non-monotonic reasoning, Artificial Intelligence, 13, 1,2, 1980. J. McCarthy and P. Hayes, Some philoshophical problems from the standpoint of artificial intelligence, in Machine Intelligence 4, (ed. D. Michie), American Elsevier, 1969, pp. 463-502. J. McCarthy, M. Sato, T. Hayashi, and S. Igarishi, On the model theory of knowledge, Stanford AI Laboratory, Memo AIM-312, 1978. D. V. McDermott, Nonmonotonic logic II: nonmonotonic modal theories, JACM, 29:1, 1982, pp. 33-57. R. Moore, Reasoning about knowledge and action, SRI AI Center Technical Note 19 1, 1983. H. Prade, Quantitative methods in approximate and plausible reasoning: the state of the art, Technical Report, Univ. P. Sabatier, Toulouse, 1984. R. Reiter, Towards a logical reconstruction of relational database theory, in Conceptual Modelling: Perspectives from Artificial Intelligence, Databases, and Programming Languages, (M. L Brodie, J. Mylopoulos, and J. Schmidt, eds.), Springer-Verlag, 1984, pp. 191-233. G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, 1976. P. Szolovits and S. G. Pauker, Categorical and probabilistic reasoning in medical diagnosis, Artificial Intelligence 11, 1978, pp. 115-144. L. A. 
Zadeh, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and Systems 1, pp. 3-28, 1978.
L. A. Zadeh, Possibility theory and soft data analysis, in Mathematical Frontiers of the Social and Policy Sciences (L. M. Cobb and R. M. Thrall, eds.), A.A.A.S. Selected Symposium, Vol. 54, Westview Press, Boulder, Co., 1981, pp. 69-129.
A Logic of Implicit and Explicit Belief Hector J. Levesque Fairchild Laboratory for Artificial Intelligence Research 4001 Miranda Avenue Palo Alto, California 94304 ABSTRACT As part of an on-going project to understand the found* tions of Knowledge Representation, we are attempting to characterize a kind of belief that forms a more appropriate basis for Knowledge Representation systems than that cap tured by the usual possible-world formalizations begun by Hintikka. In this paper, we point out deficiencies in current semantic treatments of knowledge and belief (including re- cent syntactic approaches) and suggest a new analysis in the form of a logic that avoids these shortcomings and is also more viable computationally. The kind of belief that underlies terms in AI such as ‘Know!- edge Representation” or “knowledge base” has never been ade- quately characterized. r As we discuss below, the major existing formal model of belief (originated by Hintikka in [l]) requires the beliefs of an agent to be closed under logical consequence, and thus can place unrealistic computational demands on his reason- ing abilitites. Here we describe and formalize a weaker sense of belief that is much more attractive computationally and forms a more plausible foundation for the service to be provided by a Knowledge Representation utility. This formalization is done in the context of a logic of belief that has a truth-based semantic theory (like the possible-world approach but unlike its recent syntactic competitors). This logic is also shown to have con- nections to relevance logic and, in a certain sense, to subsume it. 1. Logical Omniscience & Possible Worlds A recurring problem in the modelling of belief or knowledge is what has been called in [z] logical omniscience. In a nutshell, all formalizations of belief based on a possible-world semantics suffer from the fact that at any given point, the set of sentences considered to be believed is closed under logical consequence. It is simply built into the logic that if a is believed and a logically implies ,8, then B is believed as well. Apart from the fact that this does not allow for a resource-limited agent who might fail to draw any connection between a and fi, this has at least three other serious drawbacks from a modelling point of view: 1. Every valid sentence must be believed. 2. If two sentences are logically equivalent, then one must be believed if the other is. ‘Because what is represented in a knowledge base is typically not required to be true, to be consistent with most philosophers and computer scientists, we are calling the attitude involved here ‘belief” rather than “knowledRe”. 3. If a sentence and its negation are both believed, then so must every sentence. Any one of these might cause one to reject a possible-world for- malization as unintuitive at best and completely unrealistic at worst. There is, however, a much more reasonable way of interpret- ing the possible-world characterization of belief. As discussed in [3], instead of taking logical omniscience as an idealization (or heuristic) in the modelling of the beliefs of an agent, we can understand it to be dealing realistically with a different though related concept, namely, what is implicit in what an agent be- lieves. For example, if an agent imagines the world to be one where a is true and if o logically implies B, then (whether or not he realizes it) he imagines the world to be one where B also hap pens to be true. 
In other words, if the world the agent believes in satisfies cy, then it must also satisfy ,8. Under this interpreta- tion, we examine not what an agent believes directly, but what the world would be like if what he believed were true. There are often very good reasons for examining the consequences of what an agent believes even if the agent himself has not yet appreci- ated those consequences. If the proper understanding of a possible-world semantics is that it deals not with what is believed, but what is true given what is believed, what then is an appropriate semantics for deal- ing with the actual beliefs of an agent? Obviously, we need a concept other than the one formalized by possible worlds. If we use the terminology that a sentence is ezplicitly believed when it is actively held to be true by an agent and implicitly believed when it follows from what is believed, then what we want is a formal logical language that includes two operators, B and L: Ba will be true when a is explicitly believed while La will be true when Q is implicit in what is believed. While a possible- world semantics (like that of [l] or [4]) is appropriate for dealing with the latter concept, the goal of this paper is to present one for the former. 2. The Syntactic Approach When talking about what an agent actually believes, we want to be able to distinguish between believing only a and (a > 8) on the one hand, and believing a, (CY > a) and @, on the other. While the picture of the world is the same in both cases, only the second involves realizing that /3 is true. This is somewhat of a problem semantically, since the two sets of beliefs are true in nreciselv the same possible worlds and so, in some sense, seman- From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. tically indistinguishable. This might suggest that any realistic the syntactic and the possible-world approaches so that different semantics for belief will have to include (something isomorphic sets of sentences can represent the same beliefs without requir- to) a set of sentences to distinguish between the two belief sets ing that all logically equivalent sets do so. We now show that above. The usual way to interpret a sentence like La in a stan- there is a reasonably intuitive semantics for belief that has these dard Kripke framework is to have a model structure that con- properties. tains a set of possible worlds, an accessibility relation and other things. It appears that to interpret a sentence like Ba, a model structure will have to contain an explicit set of sentences. This is 3. Situations indeed what happens in the formalizations of belief of [S] and (61 that share our goal of avoiding logical omniscience. A slightly On closer examination, the reason the possible-world ap more sophisticated approach is that of [7] where the semantic preach to belief or knowledge leads to logical omniscience is that structure contains only an initial set of sentences (representing beliefs are characterized completely by a set of possible worlds a base set of beliefs) and a set of logically sound deductive rules (namely, those that are accessible from a given possible world). for obtaining new derived beliefs. Logical omniscience is avoided Intuitively, these possible worlds are to be thought of as the full there by allowing the deductive rules to be logically incomplete. range of what the agent thinks the world might be like. 
If he With or without deductive rules, I will refer to this approach to only believes that p is true, the set of worlds will be all those modelling belief as the .yntactic approach since syntactic entities where p is true: some, for example, where q is true, others, where have to be included within the semantic structures. q is false. However, because sentences which are tautologies will Apart from this perhaps ill-advised mixture of syntax and also be true in all these possible worlds, the agent is thought of semantics, the syntactic approach suffers from a serious defect as believing them just as if they were among his active beliefs. that is the opposite of the problem with possible worlds. A In terms of the possible worlds, there is no way to distinguish p possible-world semantics is, in some sense, too coarse-grained to from these tautologies. model belief in that it cannot distinguish belief sets that logically One way to avoid all these tautologies is to to make this no- imply the same set of sentences. The syntactic approach, on tion of what an agent thinks the world is like be more relevant the ocher hand, is too fine-grained in that it considers any two to what he actually believes. This can be done by replacing the sets of sentences as distinct semantic entities and, consequently, possible worlds by a different kind of semantic entity that does different belief sets. not necessarily deal with the truth of all sentences. In partic- To see why this a problem, consider, for example, the disjunc- ular, sentences not relevant to what an agent actually believes tion of LY and 8. There is no reason to suppose that (including some tautologies) need not get a truth value at all. B(a v ,9) E B(/3V a) Following [8] (but not too closely), we will call this sort of partial possible world a Gtuation. bughly speaking, a situation may would be valid given a syntactic understanding of B since (@VP) support the truth of some sentences and the falsity of others, may be in the belief set while (/? V a) may not.2 The trouble but may fail to deal with other sentences at all. with this is that if we consider intuitively what For example, consider the situation of me sitting at my ter- “It is believed that either o or /I is true.’ minal at work. We might say that this situation supports the is saying, the order seems to be completely irrelevant. It is fact that I’m at work, that somebody is at my terminal, that almost an accident of lexical notation that we had to choose one there is either a terminal or a book at my desk, and so on. On of the disjuncts to go first. Yet, the syntactic approach makes the other hand, it does not support the contention that my wife the left to right order of disjuncts semanticallysignificant in that is at home, that she is not out shopping, or even that she is at we can believe one ordering but fail to believe the other. home or not at home. Although the latter is certainly true, me sitting at my terminal does not deal with it one way or another. The obvious counter to this is that the logic of the syntactic approach has to be embellished to avoid these spurious syntactic One way of thinking about situations is as generalizations of distinctions. For example, we might insist as part of the seman- possible worlds where not every sentence in a language is re- tics that to be well-formed, any belief set containing (ckVb) must quired to have a truth value. Conversely, we can think of pos- also contain (/? 
V cr) (or, for Konolige, the obvious deduction sible worlds as those limiting cases of situations where every rule must be present). The trouble with this kind of constraint sentence does have a truth value. Indeed, the concept of a pos- is that it is semantically unmotivated. For example, should we sible world being compatible with a situation is intuitively clear: also insist that any set containing 11~ must also contain cr? every sentence whose truth is supported by the situation should Should every belief set containing a and b also contain (a ha)? come out true in that possible world and every sentence whose Should every belief set contain the ‘Lobviousn tautologies such falsity is supported should come out false. Again drawing from as (a > a)? Where do we stop ? Clearly, it would be preferable (81, we will also allow for incoherent situations with which no to have a semantics where restrictions such as these follow from possible world is compatible. These are situations that (at least the meaning of Ba and not the other way around. In other seem to) support both the truth and falsity of some sentence. words, we want a semantics (like that of possible worlds) that From the point of view of modelling belief, these are very useful is based on some concept of truth rather than on a collection since they will allow an agent to have an incoherent picture of the world. of ad hoc restrictions to sets of sentences. Ideally, moreover, the granularity of the semantics should lie somewhere between The “trick”, then, that underlies the logic of belief to follow is to identify explicit belief with a Bet o{aituationa rather than 21n Konolige’s #y&em, one disjunction may be deducible while the other possible worlds. Before examining the formal details, there is mav not. one point to make. Traditional lonics of knowledge and belief 199 have dealt not only with world knowledge but also with meta- knowledge, that is, knowledge about knowledge. To be able to deal with this in our case is somewhat of a problem since we would have to deal with a whole raft of questions about what is believed about what is explicitly or implicitly believed. For example, even without assuming that everything believed is true, it is not clear whether or not B(La > CX) should be valid. For reasons given in [3], L(La > CY) should be valid even if belief does not, in general, imply truth. Instead of trying to settle all of these questions here and now, we will ignore them completely. The language below will simply not contain any sentences where a B or a L appears within the scope of another. This will simplify the semantics immensely while still illustrating how the two concepts can co-exist naturally. 4. A Formal Semantics The language we are considering (call it L) is formed in the obvious way from a set of atomic sentences P using the stan- dard connectives V, A, and 1 for disjunction, conjunction, and negation respectively, and two uuary connectives B and L. Only regular propositional sentences (without a B or a L) can occur within the scope of these last two connectives. We assume that other connectives such as > and E can be understood in t(erms of the original ones.s Sentences of L are interpreted semantically in terms of a model atructute (S,B,T,3) h w ere .S is a set, B is a subset of S, and both t and 3 are functions from P (the atomic sentences) to subsets of S. Intuitively, S is the set of all situations with B being those situations that could be the actual one according to what is believed. 
For any atomic sentence p, T(p) are the situations that support the truth of p and 3(p) are those that support the falsity of p. To deal with the possible worlds compatible with a situation in a model structure, we define W by the following: W(3) = { 3’ E S 1 for every p E P, a) a’ is a member of exactly one of 7(p) and 3(p), b) if 3 is a member of 7(p), then so is Q’, and c) if a is a member of 3(p), then so is s’.} The first condition aboves guarantees that s’ will be a possible world, while the last two guarantee compatibility. Also, for any subset S * of S, we will let W (S’) mean the union of all W (8) for every s in S’. ,* Given a semantic structure (S, 8, T,3 ), we can define the support relations /==T and +p holding between situations and sen- tences of L. Intuitively, 8 kTa when 8 supports the truth of CX, and .9 kp Q when s supports the falsity of Q. More formally, we have the following: kT and k=F E S x L and are defined by 1. skTpiff8ET(p). u k=p p iff d E 3(p). aWe may eventually want a special implication operator, especially for sen- tences that are obiects of belief. 5. J kT Ba iff for every 3’ in 8, 8’ bTa. a kFBcr iff 3 IfTBa. 6. J /== La iff for every 3’ in W(B), 3’ k=a. 3 +FLa iff 8 kTLa. If 9 is an element of W(S) ( i.e. 8 is a possible world), then if B +=a, we say that a is true at a and otherwise that a ia joke at 8. Thus, as to be expected, a sentence is true iff it is not false iff its negation is false. Finally, we say that a is valid and write /= a provided that for any model structure (S, B, T ,3 ) and any J in W(S), (Y is true at s. The satisfiablitity of a sentence (or of a set of sentences) can be defined analogously. This completes the semantics of L. While space precludes a lengthy examination of the properties of L, here are the major highlights. First of all, L handles its standard propositional subset correctly in that all instances of propositional tautologies are valid and, moreover, any sentence not containing a B or L is valid iff it is a standard tautology. As for implicit belief, it is easy to see that all tautologies are implicitly believed and that it is closed under implication. In other words, we have If + Q (where Q is propositional), then b La and k (La A L(cK 3 /4)) 3 L/9. Equally important, the sentence (Ba > La) is valid, meaning that everything that is explicitly believed is an implicit belief. In fact, if a sentence is a logical consequence’ of what is believed, then it is implicitly believed. Unfortunately, the converse does not hold since in some interpretations, there may be sentences that are true in the right set of possible worlds without be- ing implied by what is believed. For example, if a sentence is necessarily true then it will be an implicit belief-even if it is not logically valid-a generic problem with the possible-world semantics for knowledge and belief that seems to have gone un- noticed in the literature. We should not be too concerned about this, however, since it does not affect either the valid or the sat- isfiable sentences of L, but only whether or not certain infinite sets of sentences are satisfiable.5 Of course, the major issue here is how the B operator behaves. Before examining the valid sentences containing B, it is worth copsidering some satisfiable sets of sentences that show that be- lief does not suffer from logical omniscience. The following sets are all satisfiable: . 1. {Bp,B(p~q),-Bq} Th is s ows that beliefs are not closed h ’ under implication. 
‘A sentence Q is a logical consequence of a set L’ of sentences iff L’ U {TX} is unsatisfiable. 6There is, moreover, a fairly simple way to eliminate the problem of non- logical necessary truths always being implicitly believed. Call a model structure ezpunriue if for any set of atomic sentences, there is a possible world in the structure such that the atomic sentences it supports is precisely that set. Now while there are certainly model st.ructures that are not expansive, it can be shown that the validity or satisfiability of a sentence would not change if these were defined in terms of expansive structures only. With this definition, moreover, a sentence would indeed be implicitly believed if and only ilit was lonicallv implied bv what was believed. 200 2. (1B(pv -p)} A Id va i sentence need not be believed. 3. { Bp, -B(p A (q V -q))} A logical equivalent to a belief need not be believed. 4. {Bp, B-p, -Bq} B 1 f e ie s can be inconsistent without every sentence being believed. The above sets show what freedom the logic allows in terms of beliel; to demonstrate that the logic does impose reasonable constraints on belief, we must look at the valid sentences of L. We will present these in terms of a proof theory for L that is both sound and complete with respect to the above semantics. The important point, however, is that unlike the syntactic approach, these constraints follow from the semantics. The only reason to consider a proof theory here is that it does provide an elegant and vivid way to examine the valid sentences of L (especially those using B).’ 5. A Proof Theory The proof theory of L must begin with a propositional basis of some sort to guarantee that all tautologies are present. The simplest way to do this is to have a single rule of inference, Modus Ponens, and the usual three axioms that can be found in any elementary logic textbook. To this basic system we will adjoin a collection of new axioms for implicit and explicit belief but no new rules of inference. The appropriate axioms for implicit belief should make sure that it contains all tautologies and all beliefs and is closed under implication. This can be achieved with three axiom schemata: 1. Lo, where a is a tautology. 2. (Ba 3 La). 3. kr A .+ 3 a) 3 Lg. For explicit belief, on the other hand, we have to dream up a set of axioms stating what has to be believed when something else is. In other words, we need a set of axioms of the form (Ba > BB), for various 0 and /?. Remarkably enough, this work has already been done for us in what is called relevance logic [9]. This logic deals with a relationship between pairs of sentences called entailment that is a proper subset of logical implication. Entailment is based on the intuition that the antecedent of an implication should be relevant to the consequent. As it turns out, entailment and belief are very closely related, as the follow- ing key result attains: Theorem 1: /= (Ba > B/?) if7 a entails /?. The proof of this theorem’ is based on a correspondence between our semantics of situations and a semantics of four truth-values described in 1111. What this tells us is that L contains relevance logic as a subpart: questions of entailment can be reduced t.o questions of belief in L. Moreover, we get this relevance logic without having to give up classical logic and the normal inter- pretation of > and the other connectives. ‘We could imagine constructing a decision procedure for L directly from the above without even passing through a proof theory at all. 
Such A decision procedure, after all, is what counts when building a system that reasons with L. ‘Proofs of this and the two other quoted theorems can be found in [lo], a slinhtlv revised version of this Daoer. So all that is needed to characterize the constraints satisfied by belief is to apply a set of axioms for entailment in relevance logic 4. 5. 6. 7. 8. 9. to belief. One such set given in [9] is the following: B(o A B) E B(/‘? A a). B(a v a) E B(/!3 V a). B(a A (B A 7)) = B((a A 8) A r)- B(a v (B v 7)) = B(b V b’j V -Y). B(a A (B V +I)) = B((o A B) V (a A r))- B(a v (B A 7)) - B((a V 8 A b V -/j)- B-+rVj3) GE B(yaA+). B+A\) E B(lcrV 18). B-VTK G Ba. This Ba A B/3 z B(a A a). Bav B/9 > B(LIVB). particular axiomatization states that belief must respect properties of the logical operators such as commutativity, asso- ciativity, distibutivity, De Morgan’s laws and double negation. Nothing in these axioms forces all the logical consequences of what is believed to be believed (as in axioms 1 and 3, above, for implicit belief), although each one forces Some consequences to be believed (e.g., by axiom 8, a double negation of a sentence must be believed if the sentence itself is). Another way to understand these axioms (except for the very last one) is as constraints on the individuation of beliefs. For example, (cr V 8) is believed iff (/l V a) is because these are two lexical notations for the same belief. In this sense, it is not that there is an automatic inference from one belief to another, but rather two ways of describing a single belief. This, in itself, does not juSti&/ the axioms, however. It is easy to imagine logics of belief that are different from this one, omitting certain of the above constraints or perhaps adding ad- ditional ones. Indeed, there is not much to designing a proof theory with any collection of constraints on belief. The interest- ing fact about this particular set of a,xioms, however, is that it corresponds so nicely to an independently motivated semantic theory. Specifically, we have the following result: Theorem 2: (Soundness and Completeness) A sentence of L is a theorem of the above logic iff it is valid. Furthermore, and perhaps most importantly, the logic of L has very attractive computational properties as well, which we now turn to. 0. The Payoff What does this new logic of belief buy us? One thing is a lan- guage that can be used to formally reason about the beliefs of other agents without assuming logical omniscience, If we imag- ine a system planning speech acts as in [12], we can represent what it knows about the beliefs of another as a theory in L. It could then plan to remind someone of something he only believes implicitly. Similarly, it could take someone through certain steps of an argument or proof, at each stage pointing out implications of the other agent’s beliefs. There are any number of ways to mechanize the necessary rea- soning in L. One currently fashionable method involves trans- lating evervthine: into first-order Ionic and running a resolution 201 theorem-prover over the results. This would involve the usual encoding of sentences of L as terms and characterizing either its validity or provability (or both) using a first-order theory. Just doing this, however, would miss a very important feature of L, namely that calculating propositional beliefs is much easier than doing general propositional reasoning. 
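The belief axiom schemata 4-9 listed above are badly garbled in this copy. The display below is a cleaned-up reading of those fragments, guided by the surrounding remark that belief must respect commutativity, associativity, distributivity, De Morgan's laws and double negation; the formulas themselves are recovered from the garble, but the grouping into six numbered schemata is my guess (it is at least consistent with the later reference to axiom 8 as the double-negation axiom).

```latex
% Conjectural clean reading of belief axiom schemata 4--9 (grouping assumed).
\begin{align*}
&4.\;\; B(\alpha\land\beta)\equiv B(\beta\land\alpha)
   \qquad B(\alpha\lor\beta)\equiv B(\beta\lor\alpha)\\
&5.\;\; B(\alpha\land(\beta\land\gamma))\equiv B((\alpha\land\beta)\land\gamma)
   \qquad B(\alpha\lor(\beta\lor\gamma))\equiv B((\alpha\lor\beta)\lor\gamma)\\
&6.\;\; B(\alpha\land(\beta\lor\gamma))\equiv B((\alpha\land\beta)\lor(\alpha\land\gamma))
   \qquad B(\alpha\lor(\beta\land\gamma))\equiv B((\alpha\lor\beta)\land(\alpha\lor\gamma))\\
&7.\;\; B\lnot(\alpha\lor\beta)\equiv B(\lnot\alpha\land\lnot\beta)
   \qquad B\lnot(\alpha\land\beta)\equiv B(\lnot\alpha\lor\lnot\beta)\\
&8.\;\; B\lnot\lnot\alpha\equiv B\alpha\\
&9.\;\; B\alpha\land B\beta\equiv B(\alpha\land\beta)
   \qquad B\alpha\lor B\beta\supset B(\alpha\lor\beta)
\end{align*}
```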
Consider, in particular, the role of a logical Knowledge Rep- resentation system (such as KRYPTON [13]) that is given as a knowledge base (or KB) a finite set of sentences in some lan- guage. What a knowledge-based system using this KB (such as a robot) will be interested in is whether or not some proposition is true of the application domain (e.g. “Is it raining outside?“). The ideal way of answering this kind of questions is yes if the question follows from what is in the KB, no if its negation does and unknown otherwise. The sad fact of the matter, however, is that for all but extremely simple languages (including some without quantifiers) this question-answering is computationally intractable. This might be tolerable if the kind of question you ask is an open problem in mathematics where you are willing to stop arid redirect the theorem-prover with problem-specific heuristics if it seems to be thrashing. If, on the other hand, a robot is trying to decide whether or not to use an umbrella, and calls a Knowledge Representation system utility as a subroutine, this kind of behaviour is unacceptable. A possible solution to the problem is for the Knowledge Rep- resentation system to manage what is explicitly believed rather than its implications. In those cases where a question cannot be answered directly on the basis of what is believed, the robot can decide to try to figure out the answer by determining the implications of what it believes. Moreover, new facts can be sought and the question can even be abandoned it it becomes too expensive to pursue (e.g. the robot can decide to bring its umbrella just to be safe). The point is that this more general form of reasoning can be controlled very carefully depending on the situation since it is no longer just a subroutine call to a Knowledge Representation system. The robot can, in fact, plan to figure something out just as it would plan any other activity. This is all very speculative, of course. How do we know, for example, that it is any easier to calculate what is believed rather than its implications? There is, fortunately, fairly strong evidence for this, at least in the propositional cme: Theorem 3: Suppose KB and Q are propositional sentences in conjunctive norm al form. Determining if KB fogically implies a is co-J/P-complete but determining if KB entails a has an O(mn) ajgorjthm, where m = ]KBl and n = lal. Corollary 4: Assume KB and Q are as above. Then, in the worst case, deciding if a) /= (BKB 3 La) is very dJ%cult. b) + ( BKES > Ba) is relatively easy. What this amounts to is that if we consider answering questions of a given fixed size, the time it takes to calculate what the KB believes will grow linearly at worst with the size of the KB, but the time it takes to calculate the implications of what the KB believes will grow ezponenfiallys at worst with the size of the KB. sMore precisely, it will grow faster than any polynomial function, unless P eauals NP. Returning now to the formal modelling of the beliefs of other agents, the reason we would not want to simply run an untuned resolution theorem-prover over encodings of sentences of L is that we would lose the opportunity to exploit the computational tractability of belief. Again, it is not so much that our logic is the only one to capture a semantically and computationally respectable notion of belief. 
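Theorem 3 above asserts an O(mn) entailment test for CNF sentences but does not spell the algorithm out here. The sketch below is one plausible reading, assuming the standard clause-containment characterization of tautological (relevance-logic) entailment: each clause of the query must contain every literal of some KB clause. The function name entails_cnf and the literal encoding are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the entailment check suggested by Theorem 3.
# Assumption (not stated explicitly above): for KB and alpha both in CNF,
# KB entails alpha iff every clause of alpha contains some clause of KB
# as a subset of its literals.  A literal is a string such as "p" or "-p";
# a clause is a set of literals; a CNF formula is a list of clauses.

def entails_cnf(kb, alpha):
    """True if the CNF formula `kb` entails the CNF formula `alpha`
    in the assumed relevance-logic (clause-containment) sense."""
    kb_clauses = [frozenset(c) for c in kb]
    for clause in alpha:
        clause = frozenset(clause)
        # The clause is supported only if some KB clause sits inside it.
        if not any(kb_clause <= clause for kb_clause in kb_clauses):
            return False
    return True

if __name__ == "__main__":
    kb = [{"p"}, {"-p", "q"}]              # p and (p > q) in clause form
    print(entails_cnf(kb, [{"p", "q"}]))   # True: p yields (p v q)
    print(entails_cnf(kb, [{"q"}]))        # False: q is only an *implicit* belief
    print(entails_cnf(kb, [{"r", "-r"}]))  # False: valid sentences need not be believed
```

On this reading, the three calls mirror the behaviour noted earlier: explicit belief is not closed under implication, and tautologies need not be believed.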
What it demonstrates, however, is first, that it is possible to move away from closure under classical implication without espousing the syntactic approach and giving up semau- tics altogether, and second, that there is hope for a non-trivial domain-independent Knowledge Representation deductive ser- vice. Of course, it remains to be seen whether these advantages can be preserved for a language that includes meta-knowledge and quantifiers. Discovering appropriate semantics and decision procedures in these cases remains a difficult open problem. ACKNOWLEDGEMENTS This work wa done as part of the KRYPTON project at Fairchild and I am indebted to its other members, Ron Brachman, Richard Fikes, Peter Pat&Schneider, and Victoria Pigman, as well as to David Israel of BBN, Joe Halpern and the other participants of the Knowledge Seminar at IBM San Jose, and to the Best Western family of hotels. REFERENCES Hintikka, J., Knowledge and Belief: An Inlroduction to the Logic o/ the Two Notions, Cornell University Press, 1962. Hintikka, J., Impossible Possible Worlds Vindicated, Journal of f’hiiosophicnl Logic, 4, 1975, 475-484. Levesque, H. J., Foundations of a Functional Approach to Knowledge Representation, Artijiciaf Intelligence, forthcoming. Moore, R. C., Reasoning about Knowledge and Action, Techni- cal Note 181, SRI International, Menlo Park, 1980. Moore, R. C. and Head&, G., Computational Models of Beliefs and the Semantics of Belief-Sentences, Technical Note 187, SRI International, Menlo Park, 1979. Eberle, R. A., A Logic of Believing, Knowing and Inferring, Sun- these 26, 1974, 356-382. Konolige, K., A Deduction Model of Belief, Ph. D. Thesis, Com- puter Science Department, Stanford University, in preparation. Barwise, J. and Perry, J., Situations and Attitudes, Bradford Books, Cambridge, MA, 1983. Anderson, A. R. and Belnap, N. D., Entailment, The Logic of Releoance and Necesaitg, Princeton University Press, 1975. Levesque, H. J., A Logic of Implicit and Explicit Belief, Fairchild Laboratory for Artificial Intelligence Research, Technical Re- port, in preparation. Belnap, N. D., A Useful Four-Valued Logic, in G. Epstein and J. M. Dunn (eds.), Modern User of Multiple-Valued Logic, Reidel, 1977. Perrault, C. R. and Cohen, P. R., Elements of a Plan-Based Theory of Speech Acts, Cognitive Science 3, 1979, 177-212. Brachman, R. J., Fikes, R. E., and Levesque, H. J., KRYP- TON: A Functional Approach to Knowledge Representation, IEEE Computer, 16 (lo), 1983, 67-73. 202
A SELF-ORGANZING RETRIEVAL SYSTEM FOR GRAPHS Robert Levinson Department of Computer Sciences University of Texas at Austin Austin 9 TX 7871“ Y ABSTRACT* The design of a general knowledge base for labeled graphs is presented. The design involves a partial ordering of graphs represented as subsets of nodes of a universal graph. The knowledge base’s capabilities of fast retrieval and self-organization are a result of its ability to recognize common patterns among its data items. The system is being used to support a knowledge base in Organic Chemistry. 1. Introduction When asked to develop a retrieval system for known chemical reactions and molecules, we chose to undertake the more fundamental task of designing a general knowledge base for labeled graphs. In particular, we wished to have a system that would efficiently handle the query: Given a labeled-undirected graph Q and a data base of labeled undirected graphs answer the following: 1. Is Q a member of the data base? (exact match) 2. Which members of the data base contain Q as a subgraph? (supergraphs) 3. Which members of the data base contain Q as a supergraph? (subgraphs) 4. Which members of the data base have large subgraphs in the data base in common with Q? (close matches) In this paper we discuss a system that meets the design objective mentioned above and also supports other features that are highly desirable in intelligent knowledge bases but are usually difficult to achieve. Most important of these features is the ability of the system to structure its own knowledge base through the recognition of common patterns (subgraphs) in its data items (graphs). In fact, the critical idea that our system demonstrates is that the common patterns can be exploited for multiple purposes. We call these common patterns concepts. They can be used to enhance retrieval efficiency, to increase the knowledge of the system (concept discovery), to characterize the relationships between its individual data items, and to provide criteria to select among partial and relaxed matches. The system is currently being used successfully to support a knowledge base for Organic Chemistry. In further research we hope to demonstrate its utility in a variety of domains where individual data items can be represented as iabeled graphs. Some of the domains being *This research is sponsored in part by the Robert A. Welch Foundation, a Cottrell-Research Grzkt from Research Corporation, NSF grant MCS-81?2039, and an NSF Graduate Fellowship. considered are ITSI designs, program trees, computer networks, structural diagrams, and semantic nets. 2. The data base design The key to the data base design is the recognition that (1) All of the graphs of the system can be viewed as subgraphs of a single Muniversala graph. and that (2) these graphs can be represented by subsets of nodes of the universal graph. The universal graph is constructed as new graphs are added to the system and it is used when old graphs are to be retrieved. For an example see Figure 1. The four graphs: a b d a a b c a b c n A - 7-I c d 0 f d e can be represen ted as subsets of nodes in a universal graph : (node labele in parentheses) 7(a) 1 (a> 4(c) 5(e) 6(f) The euboets are Cl 2 3 4). (3 5 61, (1 2 7 81, and (1 2 3 8 9). Figure 1: An example of a universal graph The rest of the design involves making explicit the ordering achieved by the partial ordering relation subgraph-of ( see Figure 2.) 
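As a rough illustration of the representation just described (a sketch, not code from the paper): the universal graph stores labels and adjacency once, and every data-base graph is simply a set of node ids, as in Figure 1. The four node subsets below are the ones quoted for Figure 1; the labels and edges are placeholders, since the figure itself is not legible in this copy, and the class and method names are mine.

```python
# Illustrative sketch of the universal-graph representation.

class UniversalGraph:
    def __init__(self):
        self.labels = {}     # node id -> label
        self.adj = {}        # node id -> set of neighbouring node ids
        self.members = []    # stored graphs, each a frozenset of node ids

    def add_node(self, node, label):
        self.labels[node] = label
        self.adj.setdefault(node, set())

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def store_graph(self, nodes):
        """Register a data-base graph as a subset of universal-graph nodes."""
        self.members.append(frozenset(nodes))

    def common_subsets(self):
        """Node overlaps between stored graphs -- candidate 'concepts'."""
        overlaps = set()
        for i, g1 in enumerate(self.members):
            for g2 in self.members[i + 1:]:
                if g1 & g2:
                    overlaps.add(g1 & g2)
        return overlaps

ug = UniversalGraph()
for n in range(1, 10):
    ug.add_node(n, "?")                      # placeholder labels
for nodes in [{1, 2, 3, 4}, {3, 5, 6}, {1, 2, 7, 8}, {1, 2, 3, 8, 9}]:
    ug.store_graph(nodes)
print(frozenset({1, 2, 8}) in ug.common_subsets())   # True -- the {1,2,8} overlap noted later
```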
This is achieved by storing with each graph pointers to its immediate predecessors and to it,s immediate successors in the partial ordering. Each gr:;ph in the system is called a concept and describes a structure that is determined to be of interest. Initially the system has only the graphs (concepts) that represent complete facts in the problem and primitives. Primitives are the labels t,hat appear on the nohes of the graphs. As concepts are added to t!he system they are inserted in their proper position in the partial ordering. Some of these new concepts may represent new complete objects and some may represent new primitives. But others may represent common substructures that are useful in 203 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. analyzing complete objects. These intermediate concepts provide structure to the partia,l ordering, and thus aid the response rate and flexibility of the system. Another way to view the universal graph is that it is a graph that is pointed to by all graphs at the top of the partial ordering. universal graph D D data items D D C concepts C P primitives Figure 2: The Partial Ordering of Concepts (A typical chain is shown) 3. The retrieval algorithm In this section we describe an algorithm for answering parts l-4 of the basic query given in-the Introduction. The algorithm operates on the universal graph and partial ordering described in Section 1. In fact, the algorithm works on any set of data for which a partial ordering has been established. It is desirable to have an algorithm that minimizes the number of comparison operations that are necessary to answer the &ery for exact match, superjqaphs, subgraphs and close matches. This minimization of comparison operations is particularly important in a system that uses complex objects like graphs since the complexity of these operations is usually exponential. The main way that our algorithm attempts to- minimize the number of comparisons is by using the partial orderin, r to segment the data base so that only a small part of it need actually be considered in detail. The algorithm has the feature that it is easy to implement and that it searches nodes in a logical bottom-up order that may be useful in domains in which additional computation is desired during the retrieval process. (For example, we may wish to apply general concepts to a situation before more specific ones are found to be applicable.) On a data base of 200 concepts an average of about ?+5 node-by-node searches are required to answer a typical query. We are looking for other algorithms that might require an even smaller number of comparisons. First we discuss the general algorithm for all partial orderings, and then we- show how the universal gra,p h represent ation can be used to further the efficiency of the aliorithm for a data base of graphs. The query can be a[iswered by finding where the query structure should fit in the partial ordering, whether it is already in the ordering or not, Then Yart 1 can be answered. Then parts Z-4 of the query can be answered by simply following pointers (chaining) in the partial ordering. 1Ve will see that. most of the pointer chasing is already accomplished in the process of finding the immediate predecessors and successors of the query object in the partial ordering. We accomplish this in two phases: Let 1r(y) denote the set of immediate predecessors of the data element y. 
In Phase 1 we determine IF’(Q) where Q is the query object: S := c3 While there is an unmarked element J of the data base such that each member of IP(y) is marked T or IP(y) = 0 do If y 5 Cl (* comparieon needed *) then mark y as T S := [s - IP(y)I u fy3 Else mark y as F. This process terminates with S = II’(Q). Note that when Phase I begins, all objects at the bottom of the partial ordering are compared to Q since they have no immediate predecessors. This process can be accomplished quickly if we require t#hat the bottom of the partial orderin, q contain real primitives (such as single nodes) for Fvhich the comparison operation is trivial. An informal description of Phase 2 shows what takes place: The goal of Phase 2 is to calculate IS(&) - the immediate successors of Q: Chain up from each member of IP(Q) in breadth first or depth first fashion (the chaining from the last member of IP(Q) must be breadth first) When these upward chains meet (i.e. there is a data item y on each of these chains) check if Q -( y. If so, y is in IS(Q), else continue to chain up from y. Now let’s go over how Phase I and Phase 3 help to answer parts l-4 of the query: 1. Exact match: Q already exists in the data base, if IF’(Q) = IS(Q). If so, then Q is the single element contained in these sets. 2. Subgraphs: The subgraphs are simply all nodes that were marked T in Phase 1. 3. Supergraphs: (This is the only place where additional chaining is required). The supergraphs are t,he union of the upward chains from each member of IS(Q) 4. Close ma.tches: The close matches are the union of the upward chains from each member of E’(Q). In the most obvious implementation of Phase 2, a hash table is used to manage t,he breadth first search. It contains information about which nodes have been visited and which upward chains they are on. The desired union can be found simply by collecting elements of the hash table. (In our inlplcmentation of the graph system we use these nodes a.s candidates for comparison to Q, and we use an heuristic-based maximal common subgraph algorithm to extract larger close matches.) How can the universal graph improve the efficiency of the algorithm when applied to a data base of graphs? Since we try to construct the universal graph t,o be as small as possible, many of its nodes will be shared by many graphs. This overlap and the fact that the graphs of the system are represented as sets means that some exponential graph operations may be improved or they may be replaced by linear set operations. Where in the algorithm do these savings take place? 1. If we find a supergraph in Phase 2 we can use the location of Q in this supergraph to find a proper occurrence of graph Q in the universal graph itself. Now that Q has been reduced to a set, we can infer that all graphs that are represented by sets that are supersets of this set are supergraphs, wit,hout doing a node-by-node search. In practice, the universal graph helps to eliminate about 20? of the node-by-node searches required in phase 3. 2. If we know the placement of Q in the universal graph and we \vish to dc~tcrminc common subgrnphs often we can do this 1)~ taking intcrscctions of Q’s set with the sets representing other graphs. 4. The system applied to organic chemistry In this section we show how the system is being used as a knoislcdgc ~JsSe for organic chemistry. The chemical data base represents chemical reactions reported in the chemical literature as labeled graphs. 
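As a concrete reading of the Phase 1 marking procedure given in Section 3 above, here is a short Python sketch (illustrative, not the authors' implementation): preds[y] plays the role of IP(y), and is_subgraph_of stands in for the expensive node-by-node comparison that the partial ordering is meant to minimize.

```python
def phase1(items, preds, is_subgraph_of, query):
    """Compute IP(query), the immediate predecessors of the query in the
    subgraph-of partial ordering, following the Phase 1 loop above."""
    marks = {}          # y -> True (subgraph of query) or False; others stay unmarked
    S = set()
    progressed = True
    while progressed:
        progressed = False
        for y in items:
            if y in marks:
                continue
            # y becomes eligible only when every member of IP(y) is marked T;
            # all() is vacuously True for an empty IP(y), i.e. the primitives.
            if all(marks.get(p) is True for p in preds[y]):
                if is_subgraph_of(y, query):          # (* comparison needed *)
                    marks[y] = True
                    S.difference_update(preds[y])
                    S.add(y)
                else:
                    marks[y] = False
                progressed = True
    return S, marks     # S == IP(query); the keys marked True answer the subgraph query
```

The items marked True on exit are exactly the subgraph answers for part 2 of the query, as described above.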
Primitives (see Section 2) are written in the form “X-Ylr” meaning that atoms X and Y arc connected with bond t,ype 1 on the left-hand-side of the reaction and bond r on the right- hand-side of the reaction. (The atom names represent’ed bv X and J’ are in lexicographic order.) If a molecule is b;ing represented, 1 will equal r. For example, C-C21 represents a double-bond between two carbon(C) atoms that is changed to a single bond. C-002 likewise represents a newly created double bond between carbon(C) and oxygen(O). Finally, these labels are given concept numbers, since all primitives must also be concepts. Complete molecules and reactions also become concepts. In addition, intermediate concepts such as the functional groups arene and ester are added (either by hand or by the system) to provide additional structure t,o the part ial ordering. The data base currently has nbout GO0 concepts for complete structures, 50 intermediate concepts, and 100 primitives. Soon 500 reactions with the associated molecules will be added. Preliminary experinlcnts confirm that the use of well-chosen intermediate concepts to structure the partial ordering does in fact significantly limit the number of graphs that must be examined to answer a query. To be useful, the chemistry data base (or most any other data base, for that matter) must contain more than just graph structures. Other knowledge is associated with each structure. For example, each reaction concept has associated with it, pointers to the two graphs representing the left-hand-side and right-hand-side of the reaction. This association allows us to view reactions as graph-to- graph production rules. Further information about. the chemical reactions such as yields, reaction conditions and literature references are stored in auxiliary files that are associated with the standard graph system. This will help the system to serve as an aid to the organic chemist who is trying to synthesize an organic moltlculc. The major difference between our syst,em and other chemical substructure search systems is our ability to organize multi-fezjels of search screens tiynamirally. See also (Adamson, 1973), (Bnwden, 1983), (Dittmar, 1983), (Feldman and IIod(>s, 1975), (Fugmann, 1979), (O’Korn, 1977) and (!Villett, 1980). The major difference between our systrm and other systems designed to do organic synthehis is our ability to organize n.nd employ a large body of real ujorld daft. Anot her important difference is the explanatory power that can be gleaned from the grneralization arcs in the partial ordering. See also (Gelertner, 1973), (Sridharan, 1973), (Willett, 1980) and (Wipke, 1977) 5. The system applied to organic chemistry In this section we discuss features that in addition to the retrieval capacity make the graph system a useful design for an Artificial Intelligence knowledge base. Examples are taken from the Orgaiic Chemistry application. l The universal graph is an efficient way to store a large number of graphs since adjacency lists need only be stored once as part of the universal graph. l The universal graph and the partial ordering are excellent aids to concept discovery. We have seen that in the Organic Chemistry domain useful concepts to the chemist can be found by finding common subgraphs among the elements in the data base. These common subgraphs may be recognized as overlaps of sets of nodes in the universal graph. For instance the set {1,2,8} in Figure 1. 
By finding places in the partial ordering where further differentiation among concepts is required we can see where additional graphs should 1,~ added. See Figure 3. The added concepts make the ordering more balanced. These local techniques are important on large data bases where global statistical techniques like cluster analysis are co~llputationally infeasible. Examples of common chemical st rue t ures discovered by our system include the functional groups arene, ether, - phenol, and carbosylic acid as well as some useful generalizations of real-world react ions. Before After ------ ----- v (points to 10 graphs) Figure 3: Adding concepts (pointo to 4 graphs) o The partial ordering represents a useful characterization of the data in the data base. As we move down the partial ordering we move to conrepts of greatclr and grca.tcr grnerality. Likewise, as we move up, we move to more and more specific concepts. A theory of such generaliznfion hierarchies is given in (Sowa, 1983). An important feature of our system is its ability to derive generalizations from its reaction data bilqc. These generalizations become import ant M.~IPII they are applied to suggesting precursors to a molecule not. yet known by the data base. ldnother unique feature of the Organic Chemistry domain is that gineralizations can be written down simply as substructures of larger graphs. See Figure 4. 205 The Reaction: Br The Generalization: ( : + II 0 I Figure 4: A Reaction and Its Generalization l Retrieval is fastest for graphs that already exist or are quite similar to stored graphs. This allows the system. by storing query structures and finding common patterns jvith them, to adapt its retrieval capabilities to the needs of an individual user who may often ask queries that are similar or identical to previous queries. l The system has the capability for relaxed- matching. This is made possible by allowing an individual label in the query structure to match any of a set of labels. The retrieval algorithm no longer works the same as before since the partial ordering does not contain the pointers associa.ted with the “relaxed” structures. Subgraphs are discovered as before but often we must wait until the close match stage to determine the supergraphs. This is because IP( Q) for a “relaxed’ Q usually contains more elements than IP(Q) otherwise. An example of relaxed matching in the chemistry domain is allowing the primitives C-CL11 and C-BRll and C-F11 to be equivalent since halogens (Cl, Br, and F) often function similarly. These equivalence classes currently must be defined by the user who has the option of invoking one or more of them at query initiation time. We are exploring whether the system can discover some of these equivalence classes on its own. 0. Conclusion The capabilities of our system mny seem at first glance to be surprising when taken with the result, that weak, syntactic method s are usually not enough to support intelligent, behavior. However, there seems to be a more powerful principle at work here: An ideal representation is one that hus a form analogous to what it represents. 1Ve exploit this principle twice: 1. We use chemical structural diagrams. These are known to be useful analogies of the real world. ?. The universal graph and partial ordering make explicit the relationships between individual data items. Graphs that have much in common are physically and logically close together. This principle is not new. 
For example, Doug Lenat cites this principle as the major reason for success of his ,Wl program and C:elertncr’s geometry theorem prover (Lcnat and BroAvn, 108.3) and (Gelertner, 1063). The Handbook of Artificial Intelligence (Barr, 1081) calls such ideal representations direct or analogical representations. Recently, (Pentland and E’ischler, 1083) called these rc>prescntations isomorphic reyreserztntions . i+.l’ithout a good deal of commonality between the individual data items. the power acquired from the second application of the principle would be lost. However, a data base of more or less unrelated data items probably would not be useful for complex reasoning. ACKNOWLEDGE-MENTS I would like to thank my advisor Dr. Elaine Rich and my Bollahorator Dr. Craig \Vilcos (Dept. of Chemistry) for their support and many contributions to this research. I also would like to thank James 1Vells for the graphics programs, and Mohan Ahuja for his encouragement. REFERENCES 1. Adamson, G. W. , Cowell, J. , Lynch, M. F. , McLure, H. W. , Town, W. G. and Yapp, hl. A . ‘Strategic Considerations in the Design of a Screening System for Suhstracture Searches of Chemical Structure I:ilerj.’ Journal of Chemical Documentation 13 (1973), 1:&157. 2. Barr, A. and Feigcnbaum, E. A. The Ha7tdbooIi of Artificial Intflligt~ce. Kaufman, Los Altos, Calif. , 1981. 3. Bawden, D. . “C’omputerized Chemical Structure-Handling Techniques in Structure-Activity Studies and >lolecu!ar Property Prediciion.’ Journal of Chemical Information and Computer Sciences 2.9 (Feb 19X3), 14-32. 4. Dittmar, P. G. , Farmer, N. A. , Fisanick, \V. , Haines, R. C. , Mockus, J . “The CXS UNLIKE Search System 1. General System Design and Selection, Generation, and Use of Search Screens.” Journal of Cl;cmical Informntion and Computer Sricncrs 23 (Aug 19&q, X3-102. 5. Feldman, A. and Hodcs, L . “An Efficient Design for Chemical Structure Searching I, The Screens.” Journal of Chexicai Information and Co77iputtr Science.3 15 (1975). 147-151. 0. Fugmann, R. , Iius~~rrrann, G. , and \Vinter, ,J. II . “The Supply of Information on Chemical Reactions in the IDC System.” Information Proceaaing a/rd Munagc77atnt 15 (1979), 303-333. 7. Gelertner, 11 Iic:tlization of Geometry Thrarem Proving h4nchine. In Conzputrr,q a,Ld Thought, Feigenhaum and Feldmna, Eds.. hlcgraw-Ijill, 1963, pp. 134-152. 8. Gelcrtner, II ‘The Discovery of Organic Synthetic Routes by Computer.’ Tupic,s in Currcnf Chemistry 42 (lO”3). 9. Haye+I:otb, F. , W’ntrrman, D. , and Lenat, D. B . Building Ezytrt Syatetna. Addison-Wesley, 1983. 10. Lenat, D. B. and l!rown, J. S . Why AM and Eurisko Appear to Work. Proc. A,4AI-83. 1963. 11. O’Korn, L. J . Algorithms in Computer Handliug of Chemical 1nfo;rnation. In Algorithrn~ for Chemical Computationa, Christofferscn, El. E. , Ed.,American Chemical Society, 1977, pp. 122-148. 12. Pentland, ,4. P. , E‘ischler, M. A .A hlore Rational Vitw of Logic.” AZ Map,-ir2e 4. 4 (1953). 13. Sowa, J. F. C’onctptunl Structurta: Information Proces.cing in Afind and M,~chin F. Addison-\Z’cslpy, 1983. 14. Sridharan, N. S. . Search Strategies for the TX!< of Chemical Organic Synthesis. Proc. IJCAI-3, 1973. 15. \Villett, P . ‘The Evaluation of an Aut.omaticaIly Indexed, hfachine-Readable Chemical Reactions File.” Journal of Chemical Information n7ld Computer Sciezrea 80 (1980), 93-96. 18. \Vipke, W. T. and IIowe, W. J.(editors) . Computw-Assisted Organic Synthesis. American Chemical Society, 1977.
A~-T~ORETICFRCWlEWORKFOR~IET'~OCESSINGOF UNCEHTAINKNOWLEDGE S. Y Lu and H. E. St ephmmu Long Range Research Division Exxon Production Research Co. P. 0. Box 2189 Houston, Texas 77001 ABSTRACT In this paper, a knowledge base is represented by an input space, an outpUt space, and a set of mappings that associate subsets of the two spaces. Under this represen- tation, knowledge propocessing has three major parts: (i) The user enters observations of evidence in the input space and assigns a degree of certainty to each observa- tion (2) A piece of evidence that receives a non-zero cer- tainty activates a mapping. This certainty IS multiplied by the certainty associated with the mapping, and is thus propagated to a proposition in the output space. (3) The consensus among all the propositions that have non-zero certainties is computed, and a final set of conclusions is drawn. A degree of support is associated with each con- clusion The underlying model of certainty in this processing scheme is based on the Dempster-Shafer mathematical theory of evidence. The computation of the consensus among the propositions uses Dempster’s rule of combina- tion The inverse of the rule of combination, which we call the rule of decomposition, is derived in this paper. Given an’ expected consensus, the inverse rule can generate the certainty required for each proposition. Thus, the certainties in the mappings can be inferred iteratively through alternating use of the rule of combination and the rule of decomposition. 1. INTRODUCTION In this paper, we propose a new representation of knowledge based on set theory. A knowledge base con- sists of three parts: an input space from which evidence is drawn, an output space that consists of propositions to be proved, and a set of mappings that associate subsets of the inout space with subsets of the output space. In this representation, two types of certainties are defined. The certainty assigned to a piece of evidence expresses the degree of confidence that a user has in his observa- tion of the evidence. The certainty assigned to a mapping expresses the degree of confidence that an expert has in his definition of the mapping. These two sources of cer- tainty are compounded in proving a proposition. The theoretical foundation for handling partial cer- tainty under this representation is based on the Dempster-Shafer “theory of evidence” [l]. Shafer defines certainty to be a function that maps subsets in a space on a scale from zero to one, where the total certainty over the space is one. The definition also allows one to assign a non-zero certainty to the entire space as an indication of ignorance. This provision for expressing ignorance is one way in which Shafer’s theory differs from conventional probability theory, and is a significant advantage, since in most applications the available knowledge is incomplete and mvolves a large degree of uncertainty A mapping is activated when the input part of the mapping, the user’s observation of evidence, receives a non-zero certainty. The product of this certainty with the certainty in the mapping is the certainty in the pro- position. Dempster’s rule of combination provides a mechanism to combine the certainty of several proposi- tions, which can be concordant or contradictory. When this mechanism is used, reasoning becomes a process of seeking consensus among all the propositions that are supported by the user’s observations. 
This approach is attractive, since such problems as conflicting observa- tions from multiple experts, knowledge updating, and ruling-out are resolved automatically by the rule of com- bination. The conventional approaches to knowledge pro- cessing, which use tightly coupled chains or nets such as deductive rules or semantic nets, do not have this advan- tage [2,3]. The use of the Dempster-Shafer theory of evidence to handle uncertainty in knowledge processing was first discussed at the Seventh International Conference on Artificial Intelligence, 1981. Two papers related to the subject were presented [4,5]. Barnett discussed the com- putational aspects of applying the theory to knowledge processing [4]. Garvey et al. applied the method to model the response from a collection of disparate sen- sors [5]. The signals from an emitter are parameterized, and the likelihood of a range of parameter values is expressed by Shafer’s definition of certainty. The integration of parameters is computed by using Dempster’s rule of combination, In this paper, we use the theory of evidence as an underlying model for partial certainty in a general knowledge-processing scheme. 2. A SEF-‘EEORJDIC REPRESENTATION A knowledge base can be represented by two spaces and a set of mappings between the two spaces. Let the two spaces be called an input space, labeled I, and an output space, labeled 0. A proposition in 1 is represented by a subset of elements in I. Its relation to a subset of 0, wbch represents a proposition in 0, defines a mapping. Let us denote the collection of map- pings defined in this way by R. Then R : I -+ 0. The input space consists of evidence that can be observed by users. The ouput space consists of conclusions that can be deduced from the observations. From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. In this representation, we consider two types of cer- tainty. One is associated with the user’s observation of Definition 2 evidence in the input space. The second type is the cer- tainty that an expert assigns to the mappings. By com- bining these two certainties, a knowledge processing scheme deduces the most likely hypotheses in the out- put space. Among the problems that can be represented in this way are those of classification, translation, and diagnosis. CLASSIFICATION. A typical pattern recognition prob- lem can be represented by an input space that is a space of independent features, an output space that is a group of dlsjolnt classes, and a set of mappings that are described by a classifier. TRANSLATION. Language translation 1s a typical example of a translation process. In a translation scheme, the two spaces are the source language and the target language. Within each space, elements are genetl- tally related and well structured. These relations are characterized by syntax, and the mappings can be represented by a transformational grammar. DlAGKOSIS Medical diagnosis can involve more than two spaces. First, there is a symptom space, which is composed of features of visible symptoms or laboratory measurements. The second space may consist of possible diseases, and the third space of treatments to be admm- lstered. The ruling-out capabihty is important in this case, smce some treatments can be fatal to a patient with certain symptoms or diseases. 3. THEDEXPSIXF-SHAFER THEORY OF EVIDENCE Shafer defines certainty to be a function that maps subsets in a space on a scale of 0 to 1, where the total certainty over the space 1s 1. 
If a certainty function assigns 0.4 to a subset, it means that there is 0.4 cer- tainty that the truth is somewhere in this subset. The definition also allows one to assIgn a non-zero certainty to the entlre space. This is called the degree of “ignorance.’ It means that any given subset is no closer to contammg the truth than any other subset in the space. Some definitions that are used throughout the paper are given in ths section Definition 1 Let 0 be a space; then a function m:2e -) called a certainty function whenever (1) m (PC) = 0, where p is an empty set, (2) 0 < m(A) < 1, and (3) [OJ 1 1s Cm(A) = 1. AC6 The space “O”, and the certainty function “m”, are called the “frame of discernment”, and the “basic proba- bility assignment”, respectively, in [ 11. A subset A of 0 is called a focal element if m(A) > 0. The simplest certainty f-unction is one that has only one focal element. A certainty function is called a simple certainty function when (1) m:A) > 0, (2) m(O) = 1 - m(A), and (3) m(B) = 0, for all other B CO The focus of the simple certainty function is A. Here, a simpl e certi support func tion” in [l]. anty function is called a ’ ‘simple The quantity m(A) measures the certainty that one commits specifically to A as a whole, i.e. to no smaller subset of A However, this quantity is not the total belief that one commits to A. Shafer defines the total belief committed to A to be the sum of certainties that are committed to A and all the subsets of A. Ikmtion 3 A function Bel:p -) [0, l] is called a belief function over 0 if it is given by &1(A) = c m(B). (1) BcA Dempster defines an operation on certainty func- tlor!s that is called “orthogonal sum,” and is denoted by .d3. Demtion 4 Let ml and m2 be two certainty functions over the same space 0, with focal elements 81,. . . , @, respectively. Suppose that Tnen the function m: 2’ + [0, l] 1s defined by m(p) = 0, v-v&A m(A)= 1- C mlL4pdBj) 4 nB, =v for all non-empty subsets A C 0, 1s a certainty function, and m = ml@ m2 Equation (2) is called Dempster’s rule of combmatlon (2) 4. KNOWMDGE PROCESSING UNDER PARTIAL CEXTAINTY Given this defirutlon of certainty, we can quantify our belief m a mapping. We assume that a mapping dehnes a simple certainty function over the output space. This certainty indicates the degree of assoclatlon between elements in I and elements in 0. Therefore, a mappLng in R 1s expressed as e -+h,v, (3) where e c I, h c 0, and 0 g v < 1. This mapping defines a simple certainty function, where the focus in 0 is h, and v is the degree of association, in the expert’s opin- ion. between e and h. That is, v = 1 means complete confidence, and 1 - v is the degree to which the expert chooses to be noncommittal, or the degree of ignorance. Furthermore, a mapping 1s assumed to be an indenpen- dent piece of knowledge. 217 The user of a knowledge processor gives his observa- tion of evidence in the input space, and also the cer- tainty associated with that observation Each observation defines a certainty function over the mput space. Let the certainty function be denoted by q, g : Z1-+ [O,l]. The user is allowed to make multiple observations. Assuming that each observation is independent, we can derive a combined observation by using the orthogonal sum of these observations. That is, q = g,@qz@ r * * (B qn, where Ql# . 0 qn are rc independent observations. Then the belief function defined by the combination of the obser- vations is denoted by Bel,. 
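The sketch below (illustrative, not the authors' code) implements Definitions 1-4 directly: a certainty function is a dictionary from frozensets of the frame to masses, belief follows Eq. (1), and combine is the orthogonal sum of Eq. (2). The concrete elements chosen for A, B and the frame I are hypothetical stand-ins for Figure 1; with them, the demo reproduces the combined observation computed in Example 1 below.

```python
from itertools import product

def belief(m, a):
    """Eq. (1): Bel(A) = sum of m(B) over focal elements B contained in A."""
    return sum(mass for b, mass in m.items() if b <= a)

def combine(m1, m2):
    """Eq. (2): Dempster's rule of combination (orthogonal sum) of m1 and m2."""
    raw = {}
    conflict = 0.0
    for (b1, w1), (b2, w2) in product(m1.items(), m2.items()):
        inter = b1 & b2
        if inter:
            raw[inter] = raw.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2           # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting certainty functions")
    return {a: w / (1.0 - conflict) for a, w in raw.items()}

# Demo with the two observations of Example 1 (below); the element ids are
# hypothetical stand-ins, only A, B and the frame I matter.
I = frozenset(range(1, 7))
A = frozenset({1, 2, 3})
B = frozenset({2, 3, 4})
q1 = {A: 0.8, I: 0.2}
q2 = {B: 0.4, I: 0.6}
q = combine(q1, q2)
print(q[A], q[B], q[A & B], q[I])   # 0.48 0.08 0.32 0.12 (up to rounding), as in Example 1
print(belief(q, A), belief(q, B))   # 0.8 0.4 -- the beliefs that activate the mappings
```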
We say that a mapping is activated when the evl- dence for that mapping is assigned a non-zero belief in a combined observation. That is, the mapping e --) h is activated if Be+(e) > 0. When a mapping 1s activated, the certainty in the evidence is propagated by the mapping to a decision in the output space. As a result, an activated mapping defines a certainty function < : 2e+ [O,I] over the output space, where t(h) = v x p, and C(O) = 1 - v x p. In the case where there is more than one mapping activated in a run, several certainty functions will be defined over the output space. The final certainty func- tion is the combined certainty obtained by taking the orthogonal sum of all the activated mappings. Finally, the total belief for the output space is computed by using Eq. (1) in Definition 2. In summary, this processing procedure has five steps: (1) (2) (3) (4) (5) Query for Observations An observation of evidence, and the certainty associated w-lth the observation are entered by the user. They define a certanty function over the input space. Normalize the Certainty in the Input Space. The user is allowed to make multiple independent observa- tions. The certainty functions defined by these observations are combined by using the rule of com- bination. Activate the Mappings. A mapping is activated when the evidence for the mapping receives a non-zero belief in the combined observation. Propagate the Certainty to the Output Space. The certainty of the evidence in an activated mapping is multiplied by the certainty in the mappmg. The result is a certainty function defined over the output space for each activated mapping. Normalize the Certainty in the Output Space. By means of the rule of combination, all the activated certainty functions in the output space are com- bined into a single certainty function. From this cer- tainty function the total belief for the output space is computed. This process is illustrated by the following example. Example 1 A knowledge base is schematized in Figure 1: an input space that contains subsets A, B, and C, and an output space that contains subsets X, Y, and 2. Three mappings are given by A -+x, 0.8 B -4 Y, 0.7 c-4 2, 1. Suppose that the user makes two independent observations defined by the certamty functions q,(A) = 0.8, 91u> = 0.2, and g?.(B) = 0.4, q2(/) = 0.6. Then the combined observation is given by the cer- tainty function g(A) = 0.48, q(B) = 0.08, g (A nB)= 0.32, q(I) = 0.12. From the combined observation, we find that the belief in A, B, and C of the input space, I, is given by the belief function Be14(A) = q (A) + q (A nB> = 0.8, Be&(B) = q (B) + q (A nB) = 0.4, and BeZq(C) = q(AnB) = 0.32. Therefore, for the given observations all three map- pings are activated. The three activated certamty func- tions m the output space, 0, are cl, {s, and (3: cl(X) = 0.6 x 0.8 = 0.64, and cl(O) = 0.36; &z(Y) = 0.4 x 0.7 = 0.28, and <e(O) = 0.72; r&(Z) = 0.32 x 1 = 0.32, and {s(O) = 0.68. Finally, by using the rule of combmation we can compute the certamty function <, < = (-+B(-&,: C(X) = 0.3133, t(r) = 0.0675, < (2) = 0.0829, ((XnY) = 0.1218, <(XnZ) = 0.1474, {(YnZ) = 0.0322, {(XnYnZ) = 0.0573, and c(O) = 0.1662. 
The final belief function over the output space is Belt(X) = 0.64, BeZ&Y) = 0.28, Belt(Z) = 0.32, BeZ&Xn Y) = 0.1792, BeZ[(XnZ) = 0.2048, BeZ& YnZ) = 0 0896, BeZ&XnYnZ) = 0.0573 As has been said earlier, the set-theoretic represen- tation and the rule of combmatlon are attractive for knowledge processing because such problems as multiple experts, knowledge updating, and rulmg-out can be automatically resolved in the processmg scheme. 218 A mapping represents an opinion. When the rule of combination is applied to combine several mappings, the result can be interpreted as the consensus among several opinions. When some opinions are contradictory, they erode each other. On the other hand, concurring opinions reinforce each other. The problem of multiple experts can be handled by treating them as a knowledge base with several sets of mappings, each contributed by a different expert. If all experts’ opinions are weighted equally, then it makes no difference whom the mappings come from. The problem of knowledge updating can be handled by simply addmg a new set of mappings to an existing knowledge base. The following examples illus- trate the handling of conflicting opinions and “ruling-out” in this scheme. Example 2 The input and output spaces are the same as in Example 1, but the knowledge base now contains the fol- lowing mappings: A -+ X, 0.8, B -+ x, 0.7, c ----) z, 1. That is, B supports the opposite of what A supports. Assume that the user’s observations are the same as in Example 1. Then the final belief function in the output space is BeZ&X) = 0.5614, BeZc(X) = 0 1228, BeZ((Z) = 0.32, BeZ&XnZ) = 0.1796, BeZt(XnZ) = 0.0393. In comparison with the results in Example 1, the belief in X 1s eroded to some extent, but the belief in Z remains the same. Ruling-out means that if evidence x is observed, then proposition y LS false with total certainty (i.e. it is ruled out). This is represented as x ---) a, 1. E&ample 3 The second mapping in Example 2 is changed to be a “ruling-out” mapping for proposition X if B is observed; that is, are B -+x, 1. If the same observations as in the previous examples ised, the final belief function is Belt(X) = 0.4161, BeZ&X) = 0.1935, BeZ&Z) = 0.32, BeZt(XnZ) = O.l65i, BeZ((XqZ) = 0.0619. Because the belief in B is not fully supported by the observations, proposition X by the ruling-out mapping. comple tely suppressed Example 4 Assume that the user makes the following observa- tions : &A) = 0.8, qm = 0.2, and 9m = 1. If the knowledge base in Example 3 is used, the final belief function is BeZ((X) = 1, and the belief in all the other propositions is 0. 5. THEi DECOMFOSITlON OF CERTAINTY One dificulty in this knowledge processing scheme is the assignment of certainty to a mapping Even the domain expert can provide only a crude approximation, since the degree of belief is a relative matter. It is difficult for one to be consistent in assigning certainty to a mapping on a scale of 0 to 1 when a large number of such mappings are involved. Motivated by this difficulty, we have derived an inverse to the rule of combination. We call it “the rule of decomposition.” With initial certainty assignments to the mappings, the expert can use the knowledge base by entering evidence and observing the final deduced belief function over the output space. If the final belief function is inconsistent with the expert’s expectation, he can use the rule of decomposition to modify the certainty assignment for individual mappings. 
If the expert is consistent, the knowledge base will approach consistency after a number of iterations.

The rule of decomposition decomposes a certainty function into a number of simple certainty functions. However, not all certainty functions can be decomposed into simple certainty functions. Shafer defines the class of certainty functions that can be decomposed as "separable" certainty functions. Shafer also proves that the decomposition of a separable certainty function is unique.

Before deriving the general rule of decomposition, we consider four special cases of combining n simple certainty functions. The rule of decomposition is then derived for each of the four cases. Finally, we give a procedure for decomposing any separable certainty function.

Lemma 1. n Simple Certainty Functions with Identical Focus
Let m1, m2, ..., mn be n simple certainty functions, where A ⊆ Θ is the only focus, and mi(A) = αi, for 1 ≤ i ≤ n, and 0 < αi ≤ 1. Then the combined certainty function, m = m1 ⊕ m2 ⊕ ... ⊕ mn, is
  m(A) = 1 − ∏_{i=1}^{n} (1 − αi).    (4)

Lemma 2. n Simple Certainty Functions with n Disjoint Focuses
Let m1, m2, ..., mn be n simple certainty functions, and A1, A2, ..., An be their n focuses, respectively, with Ai ∩ Aj = ∅ for all i, j, i ≠ j. Assume that mi(Ai) = αi, and 0 ≤ αi < 1, for all i. Then the combined certainty function is
  m(Ai) = αi ∏_{j≠i} (1 − αj) / [ Σ_{k=1}^{n} αk ∏_{j≠k} (1 − αj) + ∏_{j=1}^{n} (1 − αj) ],    (5)
for i = 1, ..., n.

Lemma 3. n Simple Certainty Functions with n Focuses Where the Intersection of Any Number of Focuses Is Non-Empty
Let m1, m2, ..., mn be n simple certainty functions, and A1, A2, ..., An be their n focuses, respectively, with mi(Ai) = αi. Also, let κ be a subset of the index set {1, 2, ..., n}, and ∩_{i∈κ} Ai be the intersection of the subsets whose indexes are in κ. Assume that ∩_{i∈κ} Ai ≠ ∅ for all possible κ, and ∩_{i∈κ} Ai ≠ ∩_{i∈λ} Ai for all κ and λ, where both κ and λ are subsets of the index set {1, 2, ..., n} and κ ≠ λ. Then the combined certainty function is
  m(∩_{i∈κ} Ai) = ∏_{i∈κ} αi ∏_{j∈κ̄} (1 − αj),  where κ̄ = {1, 2, ..., n} − κ.    (6)

Lemma 4. n Simple Certainty Functions with Nested Focuses
Let A1, A2, ..., An be the n focuses of n simple certainty functions, m1, m2, ..., mn, respectively. Assume that A1 ⊂ A2 ⊂ ... ⊂ An. Then the combined certainty function is
  m(Ai) = αi ∏_{j=1}^{i−1} (1 − αj).    (7)

Equations (4) to (7) can be proved by induction. From the definition of a certainty function, the masses that m assigns to its focal elements and to Θ sum to one; using this fact, the inverses of the three special cases described in Lemmas 2 to 4 are given by equations (8) to (10), respectively:
  αi = m(Ai) / (1 − Σ_{j≠i} m(Aj)),    (8)
  αi = m(Ai) / (m(Ai) + m(Θ)),    (9)
  αi = m(Ai) / (1 − Σ_{j=1}^{i−1} m(Aj)).    (10)

For the case where the focuses of n simple certainty functions are identical, the combined certainty function is also a simple certainty function, as shown in Eq. (4). For this case, although a simple certainty function can be decomposed into several simple certainty functions on the same focus, the decomposition is not unique.

In Eqs. (8) to (10) we have derived the decomposition of three special types of separable certainty functions into simple certainty functions. We now show a procedure for decomposing any separable certainty function into two certainty functions: one simple certainty function and one separable certainty function. By repeatedly applying this procedure, one can decompose a given separable certainty function into a number of simple certainty functions.
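The disjoint-focus and nested-focus cases can be checked numerically. The sketch below is ours, for illustration only: it combines n simple certainty functions according to Eqs. (5) and (7) and then recovers the original weights with Eqs. (8) and (10); the weights 0.8, 0.7, 0.32 are arbitrary example values.

from math import prod, isclose

def combine_disjoint(alphas):
    # Eq. (5): masses of A_1..A_n (and Theta) for disjoint focuses.
    n = len(alphas)
    terms = [alphas[i] * prod(1 - a for j, a in enumerate(alphas) if j != i)
             for i in range(n)]
    theta = prod(1 - a for a in alphas)
    d = sum(terms) + theta
    return [t / d for t in terms], theta / d

def recover_disjoint(masses):
    # Eq. (8): alpha_i = m(A_i) / (1 - sum over j != i of m(A_j)).
    return [m_i / (1 - (sum(masses) - m_i)) for m_i in masses]

def combine_nested(alphas):
    # Eq. (7): masses for nested focuses A_1 c A_2 c ... c A_n.
    return [alphas[i] * prod(1 - a for a in alphas[:i]) for i in range(len(alphas))]

def recover_nested(masses):
    # Eq. (10): alpha_i = m(A_i) / (1 - sum over j < i of m(A_j)).
    return [m_i / (1 - sum(masses[:i])) for i, m_i in enumerate(masses)]

alphas = [0.8, 0.7, 0.32]
m_disjoint, _ = combine_disjoint(alphas)
assert all(isclose(a, b) for a, b in zip(recover_disjoint(m_disjoint), alphas))

m_nested = combine_nested(alphas)
assert all(isclose(a, b) for a, b in zip(recover_nested(m_nested), alphas))
print("round-trip decomposition recovers the original weights")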
Lemma 5. The Decomposition of a Separable Certainty Function
Let m be a separable certainty function with focal elements A1, A2, ..., An. Then m can be decomposed into m1 and m2; that is, m = m1 ⊕ m2, where m1 is a simple certainty function focused on Ai, and m2 is a separable certainty function. Choose Ai such that Ai ⊂ Ak ∩ Al is not true for all 1 ≤ k, l ≤ n, and let Ai be the focus of m1. Let κ be a subset of the index set {1, 2, ..., i−1, i+1, ..., n} such that j is in κ if and only if Aj ≠ Ai ∩ Al for some l, l = 1, ..., n. Assume that the Aj, for all j ∈ κ, are focal elements of m2. Then, using the rule of combination, and writing the conflicting mass as k = Σ_{j: Ai∩Aj=∅} m1(Ai) m2(Aj), we have
  m(Ai) = m1(Ai) m2(Θ) / (1 − k),    (11)
  m(Aj) = m1(Θ) m2(Aj) / (1 − k), for all j ∈ κ,    (12)
  m1(Ai) + m1(Θ) = 1,    (13)
and
  Σ_{j∈κ} m2(Aj) + m2(Θ) = 1.    (14)
From Eqs. (11) through (14), we can derive
  m2(Θ) / m1(Θ) = [m(Ai) + m(Θ)] / [Σ_{j∈κ} m(Aj) + m(Θ)].    (15)
Now, from Eqs. (11), (12), and (15) we have
  m1(Ai) / m2(Aj) = [m1(Θ) / m2(Θ)] × [m(Ai) / m(Aj)], for j ∈ κ.    (16)
Therefore, substituting Eq. (16) for m2 in Eq. (14), we have
  m1(Ai) = [m1(Θ) / m2(Θ)] × m(Ai) / [Σ_{j∈κ} m(Aj) + m(Θ)]
and
  m1(Θ) = [m1(Θ) / m2(Θ)] × m(Θ) / [Σ_{j∈κ} m(Aj) + m(Θ)].    (17)
Similarly, substituting Eq. (16) for m1 in Eq. (13), we have
  m2(Aj) = [m2(Θ) / m1(Θ)] × m(Aj) / [m(Ai) + m(Θ)]
and
  m2(Θ) = [m2(Θ) / m1(Θ)] × m(Θ) / [m(Ai) + m(Θ)].    (18)
(The ratio of m1(Θ) to m2(Θ) appearing on the right-hand sides of Eqs. (17) and (18) is given by Eq. (15).)

In the case where a certainty function is not separable, the two certainty functions can still be derived with Eqs. (17) and (18). However, their orthogonal sum will not be equal to the original certainty function.

6. DISCUSSION
In this report, we propose a new knowledge representation and processing scheme. A knowledge base is represented by an input space, an output space, and a set of mappings. The input space and the output space define the domain of the knowledge. The mappings link subsets in the input space to subsets in the output space. In addition to being able to handle partial certainty, the new scheme also has the following advantages:
(1) The representation can handle incomplete or conflicting observations. Conflicting knowledge sources erode the certainty in the processing scheme and yield less meaningful results, but do not disrupt the reasoning process. Thus, in this representation the usual difficulties associated with multiple experts and conflicting opinions do not exist.
(2) It is easy to implement knowledge acquisition and updating. The conventional rule-based approach organizes a knowledge base as tightly coupled and consistent chains of events, so that the reasoning mechanism can be implemented easily [1]. However, adding new knowledge or modifying the existing knowledge base requires the restructuring of the chaining. The complexity of updating a knowledge base increases as the knowledge base grows larger. In the set-theoretic knowledge representation, updating can be done by expanding the input and output spaces and adding or removing mappings between the two spaces. In this case, the complexity is not related to the size of the knowledge base.
(3) The representation can be extended to multiple stages. Throughout the report, the partial certainties are presented in two stages: first the certainty in the observations is given, and then the certainty in the mappings. The idea of combining the certainty in two stages can easily be extended to multiple stages. The spaces in between the input and the output spaces correspond to intermediate hypotheses, or decisions. The normalization that takes place at each stage eliminates the problem of rapidly diminishing probabilities during propagation in a Bayesian model.

The proposition that a mapping in a knowledge base defines a simple certainty function is made to keep the processing scheme tractable. However, the normalization that is based on Dempster's rule of combination assumes independence among the mappings. To satisfy both requirements, the applicability of the knowledge processing scheme is limited to a small class of knowledge. Our future work is to expand the representation to larger classes of belief functions, namely, separable support functions and support functions in Shafer's definition. In the expanded scheme, dependent pieces of knowledge will be represented by one belief function.

ACKNOWLEDGMENT
The authors are indebted to Professor K. S. Fu of Purdue University for first bringing the Dempster-Shafer theory of evidence to their attention.

REFERENCES
[1] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, New Jersey, 1976.
[2] R. O. Duda, P. E. Hart, and N. J. Nilsson, "Subjective Bayesian Methods for Rule-Based Inference Systems," Proceedings of the National Computer Conference, AFIPS, Vol. 45, pp. 1075-1082, 1976.
[3] E. Shortliffe, Computer-Based Medical Consultations: MYCIN, American Elsevier, New York, 1976.
[4] J. A. Barnett, "Computational Methods for a Mathematical Theory of Evidence," Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, Aug. 1981, pp. 868-875.
[5] T. D. Garvey, J. D. Lowrance, and M. A. Fischler, "An Inference Technique for Integrating Knowledge from Disparate Sources," Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, Aug. 1981, pp. 319-325.

FIGURE 1. THE INPUT SPACE AND THE OUTPUT SPACE OF EXAMPLES 1 TO 3 (the mappings A → X, B → Y, and C → Z link the two spaces).
1984
48
334
Expressiveness of Languages1 Jock Mackinlay Michael R. Genesereth Computer Science Department St anford University Stanford, California 94305 Abstract Specialized languages are often a good choice for expressing a set of facts. However, many specialized languages are limited in their expressive power. This paper presents methods for determining when a set of facts is expressible in a language. Some specialized languages have the property that when some collec- tions of facts are stated explicitly, additional facts are stated implicitly. A set of facts should not be stated in such a language unless these implicit facts are cor- rect. This paper presents an algorithm for identifying implicit facts so that they can be checked for correct- ness. Criteria are also presented for choosing between languages that are sufficiently expressible for a set of facts. This research is being used to build a system that automatically determines when a specialized lan- guage is appropriate. It is also relevant to system de- signers who wish to use specialized languages. 1. Introduction Specialized languages are used in everyday life as well as in the development of computer software. Com- mon examples include maps, geometry diagrams, and organization charts. General languages, such as predi- cate calculus, can express a broader range of facts than more specialized languages. However, specialized lan- guages have distinct advantages in efficiency, clarity, or parsimony for certain information. In an information presentation system [Zdybel 811, it is desirable to use specialized languages for clear, succinct presentation of information to the user. When an information presentation system acts as the user in- terface for a representation system or database system, it is often expected to present arbitrary collections of information. In such circumstances, taking advantage ‘This work was supported in part by grant NOOO14-K-0004 from the Office of Naval Research. of specialized languages requires that the presentation system be able to automatically determine when a spe- cialized language is appropriate. Many languages have the property that when some collections of facts are stated explicitly, addi- tional facts are stated implicitly. We call such lan- guages implicit languages. For example, in the fol- lowing diagram, the placement of the engine rectangle inside the car rectangle states that an engine is part of a car. Similarly, the placement of the piston rectangle inside the engine rectangle states that a piston is part of an engine. Car I “@%iil The diagram also states implicitly that a piston is part of a car because the piston rectangle is contained (in- directly) in the car rectangle. When choosing an implicit language to express facts, one must make sure that the implicit facts are correct. If the nesting of rectangles represents the re- lation “next to” instead of the relation “part of”, the following diagram states that Canada is next to the U.S.A. and the U.S.A. is next to Mexico: It also states implicitly that Canada is next to Mex- ico. Although the two explicit facts are correct, this implicit fact is not. Therefore, this rectangle language is inappropriate for expressing facts about the adja- cency of countries. This paper examines one component involved in the selection of a language: expressiveness. Section 2 describes how messages and facts are related by the conventions of a language and when a fact is stated by a message. 
Section 3 specifies when a set of facts is 226 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. expressible in a language. Section 4 describes how to check implicit facts for correctness and presents some criteria for choosing between languages that are suffi- ciently expressive for a set of facts. 2. Messages and Facts A message is an arrangement of the world in- tended to convey meaning. Stacks of children’s blocks on a table, puffs of smoke in the sky, and spots of ink on a page can all be messages. A language is a set of conventions that a speaker and hearer have for con- structing and interpreting messages.2 The process of understanding messages involves identifying them in the world and determining their meaning. Intuitively, the first step is the syntactic interpretation of the mes- sage; the second step is the semantic interpretation. We describe the world and the messages it con- tains with predicate calculus formulas. For exam- ple, the relation Inside can be used to describe the nesting of the rectangles in the first diagram in this paper. The formula Inside ([m, IEnffinel) de- scribes the nesting of two of the rectangles” We use predicate calculus because it is sufficiently expressive to describe interesting languages. Any for- malism with similar characteristics could have been used. The results in this paper do not depend in any direct way on this choice. Variables in predicate calculus formulas are written in lower case. All free variables are universally quantified. Quotes are used around predicate calculus formulas to represent them in propositions. 2.1. Stating Facts in Messages A language relates facts and messages. For exam- ple, the fact PartOf (Piston,Engine) is paired with the message Inside (m, -1). Thus, a fact f is stated in a language L if the corresponding message m is satisfied by the world: Definition 1: Stated(f ,L) u Satisfied(m) .4 2 This definition of L‘languageO is similar to Winograd’s: “a system intended to communicate ideas from a speaker to a hearer” [W inograd 711. 3A rectangle around a symbol is used to denote the rectangle in a diagram that corresponds (given a language) to what that symbol represents. We use this specialized notation for clarity only; it can be replaced with a functional notation in accor- dance with predicate calculus syntax. 4This definition is related in spirit to Pylyshyn’s [Pylyshyn 751 Semantic Interpretation Function (SIF). He correctly observed Prereq(FundMTC,AdvDB) Qtr(FundCS,Fall) Prereq(FundCS,DB) Concur (DB, PL) Prereq(FundCS, PL) Qtr(PL,Winter) Prereq(DB,AdvDB) Concur(AIProg,AdvDB) Prereq(PL, OS) Concur (AdvDB, OS> Prereq(PL,Compiler) Concur (OS, Compiler) Concur(FundAI,FundMTC) Qtr(Compiler,Spring) Figure 1: Facts about Classes and Quarters When a language is defined so that a fact can be stated by more than one message, the formula m is a dis- junction of clauses describing the various possible mes- sages. Example: Stacks of blocks. A stack of children’s blocks can be used to express facts. Suppose that a speaker and hearer agree that the placement of block q above block q represents the fact NextTo (x, y> .s Call this language STACK. When block q represents Canada and block q represents the U.S. A., the fol- lowing stack states the fact NextTo (Canada, U. S . A. > : cl C cl U In predicate calculus, this message is described by Above (m, q ) . In most languages the relationship between mes- sages and facts is fairly stylized; STACK is no exception. 
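Conventions this stylized are easy to operationalize. The following sketch is ours, not part of the paper: it treats a STACK message as a set of Above(x, y) tuples, reads off the NextTo facts the message states, and checks the physical constraints on Above that are formalized in Section 2.2; all function names and the three-block example are illustrative assumptions.

def transitive_closure(pairs):
    # The physical Above relation is closed under transitivity.
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def satisfies_above_axioms(message):
    above = transitive_closure(message)
    if any((x, x) in above for (x, _) in above):        # anti-reflexivity
        return False
    if any((y, x) in above for (x, y) in above):        # anti-symmetry
        return False
    # Two distinct blocks above (or below) the same block must be mutually comparable.
    for (x, z1) in above:
        for (y, z2) in above:
            if z1 == z2 and x != y and (x, y) not in above and (y, x) not in above:
                return False
    for (z1, x) in above:
        for (z2, y) in above:
            if z1 == z2 and x != y and (x, y) not in above and (y, x) not in above:
                return False
    return True

def facts_stated(message):
    # STACK convention: NextTo(x, y) is stated exactly when Above(x, y) holds.
    return {('NextTo', x, y) for (x, y) in transitive_closure(message)}

stack = {('Canada', 'USA'), ('USA', 'Mexico')}
print(satisfies_above_axioms(stack))     # True: this stack can be built
print(facts_stated(stack))
# Includes ('NextTo', 'Canada', 'Mexico') -- an implicit, and here incorrect, fact.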
Schema (1) d escribes when facts are stated in STACK: Stated(“NextTo(x,y) I1 ,STACK) w Satisfied(“Above(m,B)“) 0) Example: Layered tree. A diagram consisting of nodes and arcs can be used to express facts. Figure 1 lists a set of facts that describe constraints on class scheduling and prerequisite relationships among sev- eral computer science classes. Prereq means that one class is a prerequisite for another, Concur means that two classes may be taken concurrently, and Qtr means that a class is given in a particular quarter. The dia- gram in Figure 2 is a message that states these facts in a layered tree language called LAYERTREE. that there are many possible interpretations for a collection of objects in the world. The particular interpretation depends on the SIF that is being used. However, our approach can be used in a computer system to reason about a language. 5The placement of a square around a symbol is used to denote a children’s block. This notation is similar to the rectangle notion introduced earlier. 227 The messages for this language can be de- scribed with three predicates: Connected (x, y) , Same- Layer(x,y) and HorzLabel(x,y). Connected(x,y) means that node x is connected to node y, Same- Layer (x, y) means that x and y are on the same layer of the diagram, and HorzLabel (x, y) means that y la- bels the layer that contains x. The following schema describes how LAYERTREE facts are stated: Stated(“Prereq(x,y)“,LAYERTREE) e Satisfied(“Connected([x,ml)“) Stated(“Concur(x, y) ” ,LAYERTREE) _ Satisfied(“SameLayer(~,~I)“) /3\ \&J Stated(“Qtr(x,y) “, LAYERTREE) c Satisfied(“HorzLabel(m,Name(y))“) Example: Predicate calculus. Strings displayed on a terminal can be used to express facts. The language PC (for Predicate Calculus) is an example. If the func- tion Wf f maps well-formed formulas to strings that represent them, the following schema describes when sentences in PC are stated: Stated(f ,PC) a Satisf ied(“OnTermina1 (Wf f (f )) “) Example: The world. The world can be used as a language. If WORLD denotes this language, Stated(f ,WORLD) - Satisfied(f) describes when facts are stated in this language. 2.2. Constraints on Messages The physical properties of the world constrain the messages of a given language. For example, it is not possible for two blocks to be mutually above each other. The predicates that are used to describe mes- sages can also be used to construct formal descriptions of these constraints. Example: Stacks of blocks. The axioms in (3) de- scribe the relation Above among blocks. The first three axioms are anti-reflexivity, anti-symmetry, and transi- tivity. The last two axioms state that a block is not above another unless it is directly overhead. Thus, if two blocks are in the same stack because they are above (or below) a block, one must be above (or below) the other. 1 Above(m, q ) Above(m,m) + lAbove(a,l) CAbove(lliJ,IYl)Above([,~)] + Above(m, q ) [Above <m, q ) A Above (lifJ, [Above <m, q ) V Above [Above(m, q >A Above(m, [Above (m, q ) V Above (3) =5 Example: Layered tree diagrams. The axioms in (4) describe the predicates SameLayer and Connected. SameLayer is symmetric and transitive. Connected is transitive. HorzLabel is unconstrained. SameLayer (PI, IyI) * SameLayer (lyI, PI) CSameLayer(m,(YI)A SameLayer(m,],I)] + SameLayer([x, lzl> [Connected(l~,~~)AConnected([~I,~])] =+ Connected(m,El) (4 3. Expressiveness A fact f is expressible in a language L if it is con- sistent with the world for f to be stated in L. 
Two complications arise when extending this definition to stating sets of facts. First, it might be impossible to state two facts simultaneously in L. Second, every mes- sage that states all the facts might also state additional incorrect facts. Therefore, we say that a set of facts F is expressible in L if exactly those facts (and no more) can be stated simultaneously in L: Fall Winter Spring Figure 2: Prerequisite and Class Schedule in LAYERTREE 228 Definition 2: Expressible (F, L) e Consistent([VfEF Stated(f ,L)lA [Vf@F lStated(f ,L)]) The first clause in Definition 2 might not be sat- isfied for three reasons: there is no message in the lan- guage that corresponds to one of the facts, the message that corresponds to one of the facts cannot be stated in the world, and two of the messages cannot be stated simultaneously because they conflict with each other. Example: No message. A fact is not expressible in a language when there is no message to represent that fact. For example, {p V q} is not expressible in STACK because there is no convention in STACK for represent- ing disjunctions with a stack of blocks. This means that Stated (“p V q” , STACK) is false. Example: Message not possible. Sometimes the message that represents a fact cannot be achieved. For example, Stated(“NextTo(Canada, Canada) II, STACK) is equivalent to Above (m, q ) , and the Above relation is anti-reflexive. Example: Messages conflict. Sometimes two or more facts cannot be stated simultaneously be- cause their messages conflict. For example, it is im- possible to state both NextTo (Canada, U. S . A. ) and NextTo (U. S . A. , Canada) in STACK because the Above relation is anti-symmetric. For some languages, the only messages that state certain sets of facts also state additional facts implic- itly. These additional facts are the implicit facts in the message. The second clause in Definition 2 excludes implicit facts because they might be incorrect. How- ever, in some cases these implicit facts are correct. In Section 4 we present an algorithm that can be used to identify these implicit facts so that they can be checked for correctness. Example: Implicit facts-incorrect. When block q represents Mexico, the following stack states the set {NextTo(Canada,U. S.A. ) , NextTo(U. S.A. ,Mexico)}. The incorrect fact NextTo (Canada, Mexico) is also stated implicitly because block q is above block q . Cl C cl U 0 M Example: Implicit facts-correct. The facts about classes and quarters listed in Figure 1 are not express- ible in LAYERTREE because the diagram in Figure 2 in- cludes many implicit facts. These implicit facts, listed in Figure 3, are correct. Furthermore, these additional facts would be useful to someone being presented the original facts. Definition 2 is the basis of an algorithm that de- termines whether a given collection of facts is express- ible in a language. Given a set of facts, assume that these facts are stated and all other facts are not stated. The facts will be expressible if these assumptions are consistent with a description of the world. An auto- matic deduction technique such as resolution is used to determine if these assumptions and the axioms de- scribing the world are consistent. Although there is no guarantee that the deduction will terminate, transitive axioms and other recursive axioms can be handled us- ing techniques described in [Smith 841. In general, a depth limit can be used to force termination. Example: Expressibility algorithm. 
The proof in Figure 4 shows that the set {Prereq(FundCS ,DB) , Prereq(DB,AdvDB)} is not expressible in LAYERTREE. The transitive axiom for the relation Connected in (4) is combined with the two positive assumptions to con- clude that [FundCSl is connected to -1. However, the negative assumption that PreReq(FundCS ,AdvDB) is not stated leads to a contradiction. 4. Choosing a Language Expressibility (Definition 2) can be used as a cri- terion for choosing a language in which to state a given collection of facts: a language should not be used if the facts are not expressible in that language. Prereq(FundCS,AdvDB) Qtr (FundAI , Fall) Prereq(FundCS,OS) Qtr (FundMTC, Fall) Prereq(FundCS,Compiler) Qtr(DB,Winter) Concur(FundAI,FundCS) Qtr (AIProg , Spring) Concur (AIProg , OS) Qtr (AdvDB, Spring) Concur(AIProg,Compiler) Qtr(OS,Spring) Concur(AdvDB,Compiler) Concur(FundMTC,FundAI) Concur(FundCS,FundAI) Concur(FundCS,FundMTC) Concur (OS, AIProg) Concur(AdvDB,AIProg) Concur(Compiler,AdvDB) Concur(PL,DB) Concur(Compiler,AdvDB) Concur(OS,AdvDB) Concur (Compiler, OS) Figure 3: Implicit Facts of Figure 2 229 Assumptions a. Stated(“PreReq(FundCS ,DB) I1 ,LAYERTREE) b. Stated(llPreReq(DB,AdvDB) ‘I ,LAYERTREE) c. lStated(llPreReq(FundCS,AdvDB)“,LAYERTREE) Proof d. Co~ected(lEiiZEj,~lI) a, (2) e. Connected(~,I,IAdvDB b? (2) f. [Connected(~,~I)~Connected(~,~I)I + Connected(m, El) (4) g. Connected(lFundCS, -1) d,e,f h. lComected([FundCS,~Jl) c, (2) i. Contradiction f, h Figure 4: Proof that a Set Is Inexpressible In this section we address two problems with this criterion. First, Definition 2 excludes languages in which messages state additional facts. As the class scheduling example suggests, this restriction can be relaxed when the additional facts are correct. This is particularly relevant for implicit languages, in which the additional facts are stated without any additional cost. Second, this criterion does not indicate how to choose between two languages that are sufficiently ex- pressible for a set of facts. 4. I. Using Implicit Languages Due to the implicit properties of a language, it is often necessary to state more facts than are desired. An implicit closure F* for a set of facts F is a minimal expressible set of facts that contains F. The set differ- ence F*-F describes the implicit facts that are stated when F* is used to state F. If all the implicit facts are correct, the implicit language can be used to state F. Definition 3 shows the relation ImpCl between a set of facts F and an implicit closure P . If F is ex- pressible, it is its own implicit closure. Definition 3: VF,F*ImpCl(F,F* ,L) w FCF*A Expressible (F* , L) A 1 [3X FcXCF*A Expressible(X, L)] Note that ImpCl may not be a function. For exam- ple, there are two implicit closures in STACK for the set {NextTo(Canada,U.S.A.),NextTo(Canada,Mexico)}. The following stacks describe these two messages: cl C Cl C cl U cl M cl M cl U The first states the implicit fact NextTo (U. S. A. , Mexico), while the second states NextTo (Mexico, U.S.A.). An algorithm for generating the implicit closures is produced by modifying the algorithm used in the last section to determine if a set of facts is expressible. In that algorithm, we assumed that the facts in the set were stated and all the other facts were not stated. However, the negative assumption does not hold for implicit facts. 
When a contradiction is derived while trying to prove that a set of facts is expressible, we can reverse any negative assumption that was used in the derivation by making the corresponding fact an implicit fact. This will invalidate that particular derivation. When every contradiction is invalidated by placing a fact in the implicit closure, the implicit closure is guaranteed to be expressible because it is consistent with the world. If there is more than one negative assumption that can be reversed to invalidate a contradiction, the alternatives generate different im- plicit closures. If there are no negative assumptions to be reversed, the set of facts is not expressible. Example: Generating an implicit closure. The proof in Figure 4 can be used to generate the implicit closure of {Prereq(FundCS, DB) , Prereq(DB, AdvDB)}. Since we used ~Stated(l~Prereq(FundCS,AdvDB)ll, LAYERTREE) to derive the contradiction, the implicit closure is the set {Prereq(FundCS, DB) , Prereq(DB, AdvDB) , Prereq(FundCS, AdvDB)}. 4.2. Choosing Between Languages There are many criteria for choosing between lan- guages that are sufficiently expressive for a set of facts. For example, one presentation might be more desirable than another because it is: 0 smaller l easier to draw l in the expected style 0 more pleasing l more dramatic Developing a precise criterion from each of these ex- amples is beyond the scope of this paper. However, the concepts developed in this paper can be used to suggest how an information presentation system might choose among languages. We first consider a criterion based on the cost of constructing messages, and then we consider one based on the cost of perceiving messages. Note that the first 230 two examples in the previous list focus on the con- struction cost, while the rest focus on the perception cost. The cost of constructing a message is equivalent to the cost of stating the corresponding facts. Under the criterion of construction cost, implicit languages are preferred over other languages because the implicit facts are stated without additional cost. The implicit kernel for a set of facts is the smallest subset that can be stated so that its implicit closure contains all of the facts. The cost of stating a set of facts is the cost of stating the facts in its implicit kernel. Definition 4 shows the relation ImpKer between a set of facts F and its implicit kernel K. Definition 4: VF,K; ImpKer(F,K,L) c KCFcK*A 1 [3X XCK A FsX*] Example: Comparing layered trees and trees with labeled arcs. The language ARCTREE, which is based on labeled arcs, is an alternative to the LAYERTREE lan- guage for expressing the facts listed in Figures 1 and 3. The diagram in Figure 5 shows how these facts are stated in ARCTREE. . The following schema describes when facts are stated in ARCTREE. The predicate LabArc (n, m, 1) means that node n is connected to node m by a se- quence of arcs that have label 1. Stated(llPrereq(x,y)ll ,ARCTREE) w Satisfied(llLabArc([x,~],PreReq)ll) Stated ( I1 Concur (x, y) I’ , ARCTREE) u Satisfied(‘lLabArc((x(,(Y],Concur)ll) (5) Stated(“Qtr(x,y)” ,ARCTREE) c Satisfied(llLabArc(~,IY],Qtr)ll) LabArc satisfies the following transitivity axiom: [LabArc(~,~,l)ALabArc(~,~~,l)] + LabArc (1x1, p]. 1) Recall that Figure 3 lists the implicit facts in the LAYERTREE diagram (Figure 2) of the facts listed in Figure 1. The facts on the left side of Figure 3 are the implicit facts in the ARCTREE diagram. 
The facts listed on the right side are stated explicitly in Figure 5 by the arcs drawn between class nodes and quarter nodes, and by the arrowheads on the left side of the concurrent arcs. Therefore, the ARCTREE implicit ker- nel is larger than the LAYERTREE kernel. Since both languages are tree languages, it is reasonable to as- sume that the cost of stating facts in them is identical. Thus the LAYERTREE language is more economical. The cost of perceiving messages can also be used as a criterion for choosing between languages. The cost of perceiving messages in the world depends on the nature of the messages. The LAYERTREE and ARCTREE diagrams are described with different predicates. Be- cause a person looking at these diagrams must ascer- tain that these predicates are true, the cost of perceiv- ing facts in these diagrams is directly proportional to the cost of determining the truth value of these pred- icates. By inspection, it is clear that the predicate LabArc is more difficult to perceive than Connected, SameLayer, or HorzLabel because the label must be read. This means that the cost of perceiving facts in the LAYERTREE diagram is lower than the cost of per- ceiving the same facts in the ARCTREE diagram. 5. Related Work Genesereth has proposed a representation system that allows and even encourages the use of multi- ple specialized representation languages [Genesereth 801. Any criterion for choosing presentation languages can also be used to evaluate specialized representation languages. Implicit languages, in particular, are de- sirable representation languages because the implicit facts need not be stated explicitly. Implicit languages are related to the intuitive con- Figure 5 : Prerequisite and Class Schedule in ARCTREE 231 cept of direct or analogical representations [Barr 811. An analogical representation, such as a map, has a structure that directly reflects the world it represents. Sloman has argued for the importance of analogical representations, which he contrasts with ‘LFregean” represent at ions like predicate calculus [Sloman 7 I]. His definition of analogical representation consists of an informal collection of examples and a philosophi- cal discussion. We believe that Sloman is incorrect in asserting that analogical representations are dra- matically different from the more formal representa- tions used in artificial intelligence. Critiquing Sloman, Hayes has argued for the unity of analogical represen- tations and formal logic languages [Hayes 741. This paper is a step toward this unity. Implicit languages have been used in the design of many software systems. One of the earliest uses of an implicit language was Gelernter’s Geometry-Theorem Proving Machine [Gelernter 631. It used a diagram of the problem to help control the search for a proof. The diagram implicitly stated many common facts about geometry. Of course, Gelernter had to be careful that the diagram did not state incorrect facts: “If a calculated effort is made to avoid spurious coincidences in the figure, one is usually safe in gener- alizing any statement in the formal system that correctly describes the diagram.” 6. Conclusion This paper has presented a collection of axioms for describing the expressiveness of languages. These axioms can be used to compute whether a given set of facts is expressible in a language. The paper has also extended these results to implicit languages, in which additional facts may be stated implicitly, in- cluding an algorithm for generating implicit closures. 
Finally, the paper has discussed ways to use these ax- ioms to choose a language in which to express some facts. This research is currently being used to con- struct an information presentation system that can au- tomatically choose specialized languages for presenting information [Mackinlay 831. Acknowledgements We wish to thank David Smith and Polle Zellweger for their incisive comments on drafts of this paper. References Barr, A. and E. Feigenbaum, editors. The Handbook of Artificial Intelligence, Volume 1. William Kaufmann Inc., 1981, 200-206. Gelernter, H. “Realization of a Geometry-Theorem Prov- ing Machine.” In E. Feigenbaum and J. Feldman, editors. Computers and Thought. McGraw-Hill, 1963, 134-152. Genesereth, M. R. “Metaphors and Models.” Proc. AAAI 80. Stanford University, August 1980, 208-211. Hayes, P. J. “Some Problems and Non-Problems in Rep- resention Theory.” Proc. AISB Summer Conference, 1974, 63-79. Mackinlay, J. “Intelligent Presentation: The Generation Problem for User Interfaces.” Report HPP-83-34, Com- puter SC&-ice Department, Stanford University, 1983. Pylyshyn, Z. W. “Representation of Knowledge: Non- linguistic Forms. Do We Need Images and Analogues?” Proc. TINLAP 75. Massachusetts, June 1975, 174-177. Sloman, A. “Interactions Between Philosophy and Artifi- cial Intelligence: The Role of Intuition and Non-Logical Reasoning in Intelligence.” Artificial Intelligence 2 (1971) 209-225. Smith, D. E. and M. R. Genesereth, “Controlling Recursive Inference.” Report HPP-84-6, Computer Science Depart- ment, Stanford University, 1984. Winograd, T. “Procedures as a Representation for Data in a Computer Program for Understanding Natural Language.” PhD Thesis, MIT, 1971. Zdybel, F., N. Greenfeld, M. Yonke, and J. Gibbons. “An Information Presentation System.” Proc. IJCAI 81. Van- couver, August 198 1, 978-984. 232
1984
49
335
QUALITATIVEHODELIMG IN THE TURBChJETENGIHEDOFlAIN Raman Rajagopalan" Coordinated Science Laboratory University of Illinois at Urbana-Champaign 1101 W. Springfield Ave. Urbana, Illinois 61801 ABSTRACT This paper addresses some of the issues involved in modeling the domain of turbojet engine operation. A causal model based on the relation- ships between engine parameters has been developed and used to implement an engine simulation. The implementation includes a facility for explaining the results of the simulation. I INTRODUCTION AND MOTIVATION Several theories of mechanism modeling such as Common Sense Reasoning [2], Incremental Quali- tative Analysis (IQ) [3], and Qualitative Process (QP) Theory [4] have been proposed in recent years. The application of these theories has been limited to a narrow set of domains, such as the electronic circuit analysis aomain, the operation of a steam plant, and domains with simple processes involving motion. Electronic circuit analysis nas been by far the most popular domain, with applications including simulation, circuit recognition, and troubleshooting. Like the electronics domain, the turbojet engine has been studied extensively, and presents a rich domain for mechanism modeling. However, unlike tne electronics domain, where the number of individual parameters is small (current, voltages, etc.), and where the number and function of com- ponents can vary (a transistor can function in one of several different ways), the aircraft engine is a fixed device, and is described by hundreds of parameters. Furthermore, both the relationships between engine parameters and the operational lim- its of the engine are often described by complex non-monotonic functions. Proper qualitative models of the engine will be useful for several reasons. (1) Numerical simu- lators, whiie providing a large amount of quanti- tative information, have only limited capabilities for explaining the results that are obtained. A qualitative model can be used to explain the results in an efficient manner. (2) As in other *Author's current address: IBM Corporation, Federal Systems Division, 1322 Space Park Drive, Mail Code 1210, Houston, TX 77058. @*A detailed description of the ideas presented in this paper may be founa in Cll. This work has been supported by the Air Force Office of scien- tific Research under contract number F49620-82-K- 0009. 283 domains, qualitative reasoning may be useful for constraining the number of equations which need to be solved by a quantitative model. (3) Prediction and troubleshooting may be possible, and will be useful in aiding mechanics and pilots. Qualita- tive models would be especially useful if warnings of potential failures could be given to the pilot, along with suggestions for avoiding these failures. (4) Finally, a qualitative model will be faster ana less expensive to use than a quanti- tative model. Can a qualitative model be designed which is capable of achieving the above goals? What information should be included in such a model? What are the limitations of such a model? These are the kinds of questions we have addressed in our research, which is an attempt to demon- strate the feasibility of such a qualitative simu- lator. II TEXTBOOK DESCRIPTIONS OF THE ENGINE The feasibility of a qualitative engine model 1s strongly supported by the fact that a substan- tial portion of basic textbook descriptions of engine operation is qualitative in nature [5,6]. 
Basic textbook descriptions of the engine concentrate on operational parameters (e.g., tem- peratures, pressures, air flows, fuel flows, etc.). These descriptions include (1) specifica- tions of causal connections between parameters (e.g., "As the ambient (environment) temperature increases, compression ratio tends to decrease).", (2) descriptions of limits (e.g.,"If the angle of attack is too high, stall will result."), and (3) effects of variable structures such as bleed air ports (similar to a valve). In addition, descrip- tions of the underlying processes in the engine provide the framework to which all other descrip- tions are tied. In order to fully "understand" the operation of the engine, it is important to include as much of the available information as possible. The current model may be looked upon as a causal model (our focus is on the relationships which exist between engine parameters), and includes informa- tion belonging to categories (1) and (2) above. III THE CAUSAL MODEL A causal model of turbojet engine operation requires that the relationships which exist between engine parameters be represented. It has been observed that in basic engine texts, complex multi-parameter relationships are broken down and described through the relationships which exist From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. between pairs of component parameters. This greatly simplifies the complexity of parameter relationships; while multi-parameter relationships are often non-monotonic in nature, a large percen- tage of the relationships which exist between com- ponent parameter pairs are described by monotonic functions. Monotonic reiationships between parameters can be reasoned about in the following manner, as has been described in [3,4]: A monotonicallv yronortional relationshiD (m) between two parameters, A and B, implies that a qualitative change in one parameter induces a like change in the other (e.g., if A increases, then B also increases). In the current model, an MPR is denoted by the symbol "I+". A monotonically inverse proportional relationshiD (MIPR) between two parameters implies that a qual- itative change in one parameter induces an oppo- site change in the other (e.g., if A increases, then B decreases) and is represented by the symbol "I-". A. Non-monotonic RelationshiDs In addition to the simple monotonic relation- ships described above, relationships between parameters can also be described by non-monotonic curves. Such cases are denoted by the symbol "1". As in previous work, non-monotonic curves are reasoned about by breaking such curves into mono- tonic components at inflection points. However, unlike past work, where each non-monotonic rela- tionship in the domain has been represented indi- vidually, a more general approach of representing non-monotonic relationships is employed in our model. Generic models of possible non-monotonic curves, such as concave-down and certain piece- wise linear curves, have been developed and applied to individual cases. I- IXLECTION POINT \ PARAMETER 1 between fuel-air ratio and combustion efficiency, which is described by a concave-down curve, and which has an inflection point at a value of fuel air ratio of .0625. In the current model, this relationship is represented by: ( (combustor fuel-air-ratio) (combustor efficiency) concave-down .0625 > Figure 1. 
A Generic Concave-Down Curve As an example, consider the relationship In order to reason about efficiency effects, this particular instance is applied to a model of a generic concave-down curve. Consider the generic concave-down curve given in Figure 1. Let us suppose that we know the ini- tial values of the variables. This curve may be broken at the inflection point and reasoned about as follows: Let the parameter on the X axis be known as "An (fuel-air ratio) and the one on the Y axis be known as "Bn (efficiency). The inflection point of the curve is known (.0625), as well as the initial value of A and its aerivative change (i.e., increase or decrease). (1) If the initial value of A is less than the inflection point (.0625), and if A decreases, then B decreases. The same holds if the initial value of A is greater than the inflection point, and A increases. (2) Otherwise, multiple paths are possible, and the final value of A is required for further rea- soning. If both the initial and final values of A lie on the same side of the inflection point, i.e., both values are less than or greater than the inflection point, then B increases. (3) In the final case, the absolute values of the difference between (i) the inflection point and the initial value and (ii) the difference in the inflection point and the final value are compared. If the initial value difference (i) is smaller, then B decreases. If the final value difference (ii) is smaller, then B increases. B. Time The current model also includes a crude representation of the time taken for the change in one parameter to propagate to the other. This time is not an exact Vealtf time, but a comparison with other relationships. For example, the rela- tionship between altitude and air density is a concurrent change, while a finite delay is encoun- tered as the effects of a change in density pro- pagate and cause a change in compressor parameters. Even this crude representation of time has its utility. One of the possible uses of a quali- tative model is in aiding the pilot. Assume that the turbine inlet temperature was approaching its limit. Although both a change in the throttle setting and in the airflow into the engine could eventually lower turbine temperature, the primary suggestion will be to change the throttle setting, because of the relatively shorter aelay between its change and its effect. model Relationships in the lows: are represented ( parameter1 time-delay) type-of-relationship parameter2 An example of the same is: as fol- 284 ( (environment altitude) I- (environment density) 0) The rule above indicates that altitude and density share an I- relationship, and that there is no delay between changes in altitude and density. Note that the same rule can be used for both simulation and diagnosis. Given a change in alti- tude, the change in density is found by indexing along the left-hand side of the rule. The causes of a change in density can be found by indexing along the rightrhand side. IV WRATIONAL LIMITS A representation of the operational limits of a domain is mandatory for a useful qualitative model. The operational limits of the engine may be dependent upon the value of a single parameter (e.g., turbine inlet temperature should not exceed 1650 degrees R.) or be described by a multi- parameter curves of varying complexity (monotonic curves, parabolas, or even closed geometric fig- ures such as ellipses). 
The current model includes only single parameter limits; however, a technique for reasoning about multi-parameter operational limits is discussed later in this sec- tion. Single parameter limits are represented by IF-THEN rules with built-in quantifying condi- tions. The quantifying conditions employ the results of a simulation to determine if any limit at all could have been exceeded. The limit rule for the turbine inlet-temperature limit is given below: (If (is-increasing '(turbine inlet-temperature)) then (if (greaterp (get-newval '(turbine inlet-temperature)) 1650) then (printout T "approaching turbine inlet-temperature limit,,)] This statement indicates that, if turbine inlet temperature is increasing, exceeding the limit is possible, and a check to determine if the value has exceeded 1650 degrees R. is carried out. Multi-parameter limits are not easily modeled by IF-THEN rules since such limits are described by complex curves rather than a single value. One methoa of handling such curves is to model the curves themselves. ” Mass atr flow Figure 2. Compressor Stall Curve 2) Consi der the compressor stall curve (Figure where, if the state of the engine falls in region 1, stall is likely. To model such a curve, the individual regions have to be defined, and connections between them specified (e.g., region 1 is the stall region and is connected to all other regions). In addition, heuristics have to be specified which connect parameter changes to regions which may be entered. For the stall curve, such heuristics will include "both regions 1 and 2 may be entered from region 3 if compres- sion ratio increases" and "(in the event of a decrease in massflow) the only new region which may be entered from region 2 is region 1." Now assume that the initial operating point is known. From this initial point, the regions which may be entered can be determined using the results of a qualitative simulation, which pro- vides all necessary information regarding the qualitative change in any parameter. Quantitative knowledge will be necessary to determine in which of the possible regions the final steady state condition will reside. V SIMULATION The qualitative simulation essentially con- sists of a propagation of constraints. For the current model, the constraints are increases and decreases in parameters. As determined in the earlier work [33, any case where a parameter does not change is unimportant. Figure 3 demonstrates the results of increases in airspeed (EA) and throttle setting (CKTS) for our simulation. In the figure, changes in parameters are given by + (increase) and - (decrease). Non-monotonic rela- tionships which depend upon the state of the engine are represented by the symbol ,,I,,. The figure is labelled using the following scheme: Normally, a link is labelled with a parameter and the qualitative change in that parameter. The case of an "interesting point,, is represented by a circled node, the node itself is associated with a parameter, with links leading to that node being marked with the nirlfluencen of preceding changes on the node. Any point where branches merge have been marked as "interesting points". At these points a coincidence (the merging of like changes) or a conflict (the merging of opposite changes) occurs. In the figure, a coincidence occurs when changes in the compressor inlet-pressure (CIP) and compressor angle-of-attack (CAA) both cause a decrease in the compression-ratio (CCR). 
A con- flict occurs when the combustor inlet-massflow (CBIM) and the fuel-flow-rate (CBFFR) have oppos- ing effects on the fuel-air-ratio (CBFAR). Note that the "1" relationship between fuel- air-ratio and the efficiency (CBE) cannot be resolved until the conflict regarding fuel-air- ratio is likewise resolved. Conflicts are the bottlenecks of such a qualitative simulation. A. Resolution of Conflicts In the past, conflicts have been resolved in two ways: 285 / ---CONFLICT - - - -COINCIDEHCE i (CDTP'JT) EA - (Ervlroasent Airsped) CAV - (Compressor Airflou-Velocity) CIP - (Compressor Inlet-Pressure) cm - (Comwessor Inlet-Massfiou) CAA - (Compressor AqLsOf-Attack) CCR - (Congressor Canpression-Ratio) CBM - (Cabustor Exit-Massflow) CBFAR - (Cabustor Fuel-Air-Ratio) c3M - (Caubustor Inletd4sssflou) CBIT - (Ccmbustw Inlet-Temcerature) CDP - (Compressor Dlscbarge-Pressure) CBIP - icombustor InletPrekre) C3FFR - (Combustor Fuel-Flow-Rate) C3E CBEAV - (Corbustor Exit-AirflowVelocity) TIM - (Coobustor Effioieacy) - (Turbme Inlet-Massflow) T;AV - (Turbine Iclet-Airflow-Velocity) TSS - (Turbme Shaft-Speed) TINT - (Turbine Inlet-Temperature) EM - (Exhaust Massflow) TEAV - (Turbine ExibAirflow-Velocity) ET - (Exhaust Thrust) EIXV - (Exhaust Inlet-Airflow-Velocity) errs - (Cockpit ThrottlbSett1r.g) EEAV - (Exhaust Exit-Airflow-Velocity) l -> IN CREA SE - -> DECREASE I --> NON-MDNOTUNIC NNCTIONAL RELATIONSRIP NO SiGN --> DEPENIX UPON PREVIOUS RESULT --> INTERESTING WINTS Figure 3. Effects of an Increases in Airspeed and Throttle Setting (1) In cases where both initial and final (desired state) values were either available or calculable, a simple subtraction resolved the conflict. This technique can be employed when diagnosis (analysis) is the intended application, since both initial and final values are known. In addition, numerical values may also be obtained when a qual- itative simulation is used to constrain a quanti- tative one. An example of this possibility is provided by de Kleer in his model of the roller coaster domain [7]. numerical simulator's results has been imple- mented. The cause of any change in an engine parameter can be provided. When requested, coin- cidences and conflicts can also be noted and explained. Further capabilities can be added as the model expands to include more information and relationships of the engine. (2) The most popular technique of conflict resolu- tion has been the simulation of all possible paths of change by propagation of both an increase and a decrease after a conflict. External information is then used to select one among the different possibilities. This technique is also applicable to the engine domain. If both an increase and a decrease were propagated at the conflict in fuel-air-ratio 288 As in the roller coaster domain [73, a qualitative-quantitative engine simulator can con- ceivably work in the following manner: wherever possible, linearized equations can be solved except for certains regions of interest. These regions of interest can be determined by the qual- itative portion of the simulation, i.e., through the detection of unresolvable conflicts and non- monotonic relationships. A limitation of this technique is that many of the equations which describe the engine cannot be linearized. (CBFAR) in Figure 3, individual paths will exist which lead to either an increase or a decrease in thrust (ET). EIAV 1 c (-) ( 1 - PATR 1 + C-1 EEAV v t-7 I cr (an-?'JT) +1- - PATH 2 Figure 4. 
Possible Paths Leading to a Change in Thrust Figure 4 shows these possible paths (all unmarked links in Figure 4 are described by an I+ relation- ship). If it were desired that thrust should decrease, then path 1 can be chosen as the only applicable path. Since in path 1 efficiency (CBE) decreases, the effect of fuel-air ratio (CBFAR) on efficiency can be either an increase or decrease. VI POSSIBLE APPJJCATIONS OF A QUALITATIVE GINE MODEL Other than simulation, there are several pos- sible applications of proper qualitative engine models. These include (I) the explanation and analysis of the results of a simulation, (2) con- straining the equations which need be solved by a numerical simulator, and (3) as a pilot's aide. A prototype facility for the explanation of a The fact that a qualitative simulation is capable of identifying many possible paths will be useful in warning of different parameter limits that are likely to be exceeded and for providing suggestions for avoiding the same. If the engine is operating near a limit, the identification of connections to the input parameters, or of con- flicts along a path leading to the limit will pro- vide insight in determining which input parameter should be changed. Complications are possible since for a given input change, multiple paths of change are possible. Consider path 1 as shown in Figure 4. Under the right circumstances, it is possible that an increase in throttle-setting causes either an increase or a decrease in thrust (e.g., a decrease is caused by path 1, and an increase is caused by the right-hand link, as shown in Figure 3). If an increase in thrust were desired, it is not clear what the change in throttle setting should be since such a change can lead to either an increase or a decrease in thrust. VII uMITAT_IONS One of the major limitations of qualitative models of the engine is that transient analysis is not possible. Thus, the effects of the feedback path between the turbine and the compressor cannot be fully appreciated. Due to the effects of feed- back, certain operational parameters often increase and decrease a number of times before reaching a steady state condition. With current mechanism modeling techniques, it is only possible to determine whether a system is exhibiting posi- tive or negative feedback. While this recognition capability is not sufficient during a simulation, such a capability can enhance the capabilities of an explanation facility. In order to really "understand" a device, it is important to know its purpose. In the engine domain, an understanding of the purpose is OdY partially achieved by a representation of the underlying processes which cause parameter changes. In representing processes, the notion of a structural hierarchy is lost. The process of compression in the compressor is a result of several individual processes: the flow of air through the compressor, the rotation of the compressor, and the action of the compressor blades and vanes on the air passing through them. While all these processes are important, it is not an easy task to represent the complex aerodynamic relationships of the air flow passing through the engine. Finally, a complete model of the engine will need to include the effects of parts whose state can vary. These include such parts as variable inlet guide vanes, variable exhaust nozzles, and bleed airports. ACKNOWLEDGMENTS The author would like to thank the people who have helped the progress of this research in many countless ways: Prof. 
David Waltz, my thesis advisor, Prof. Gerald DeJong, Cathy Cassells, Shahid Siddiqi, and all the members of the CSL AI Group at the University of Illinois. Cl1 L-21 c31 c41 c51 C61 t-71 REFERENCES Rajagopalan, R. "Qualitative Modeling in the Turbojet Engine Domain," M.S. Thesis, CSL Tech. Rept. T-139, Dept. of Electrical Engineering, Univ. of Illinois, Urbana, IL, March 1984. Rieger, C. "The Commonsense Algorithm as a Basis for Computer Models of Human Memory, Inference, Belief and Contextual Language Comprehension." In R. Schank and B. Nash- Webber (eds.), Theoretical Issues in Natural Language Processing. Arlington, VA: ACL, 1975, 180-195. de Kleer, J. "Causal and Teleological Reason- ing in Circuit Recognition," Ph.D. Thesis, AI Tech. Rept. 529, MIT AI Lab, Cambridge, MA, September 1979. Forbus, K. "Qualitative Process Theory," AI Memo 644, MIT AI Lab, Cambridge, MA, February 1982. General Electric Aircraft Engine Group. Air- Guide. craft m Turbine Cincinnati, OH: General Electric Company, October 1980. Treager, I.E. Aircraft Gas Turbine Engine Technolonv Second McGraw-Hil;, 1979. Edition. New York: de Kleer, J. "Qualitative and Quantitative Reasoning in Classical Mechanics," AI Tech. Rept. 352, MIT AI Lab, Cambridge, MA, December 1975. 287
Processing Entailments and Accessing Facts in a Uniform Frame System*

Anthony S. Maida
Institute of Cognitive Studies, University of California, Berkeley, Berkeley, California 94720
Computer Science Dept., Penn State University, University Park, PA 16802

Abstract

This paper: 1) describes the structure of a "uniform" frame system; 2) shows how entailments can be computed within the system; and, 3) shows how contingent facts that are related to a concept can become accessible as a function of how deeply the meaning of that concept is processed. The system is called UniFrame and differs from slot-filler frame systems primarily in its commitment to uniformly representing all concepts and to maintaining a representation which is semantically well knit.

I. INTRODUCTION

This paper describes the structure of a "uniform" frame system, showing how entailments can be computed within the system, and how contingent facts that are related to a concept can become accessible as a function of how deeply the meaning of that concept is processed. The system is called UniFrame and differs from slot-filler frame systems primarily in that it does not use slots. Instead it places emphasis on four desirable characteristics in a knowledge representation. They are: 1) The uniformity of the formalism; 2) The semantic coherence of the conceptual structures represented in the formalism; 3) Whether new concepts are defined by differentiating them from existing concepts; and, 4) Whether the representation allows for a variable depth of processing of concepts depending on the contextual situation. We will describe each of these dimensions in turn.

Uniformity: This dimension embodies the intuition that all concepts are made of the same "stuff" (cf., [8]). For instance, humans invariably explain one concept in terms of other concepts and, although the concepts used in the explanation are different concepts, they are nonetheless the same kind of object. Below are two examples of common representational situations we wish to avoid because we wish to maintain a uniform representation. They can be viewed as "bugs" in hierarchical slot-filler frame systems [e.g., FRAIL [3]; NETL [5]; PEARL [4]].

*Acknowledgements: Many of these views emerged from discussions in a knowledge representation seminar given by Robert Wilensky. Other participants of that seminar were: M. Butler, D. Chin, C. Cox, B. D'Ambrosio, P. Jacobs, M. Luria, J. Martin, J. Mayfield, P. Norvig, L. Rau, J. Sokolov, and N. Ward. Another language which emerged from that seminar was Wilensky's KODIAK. Richard Alterman and Nigel Ward provided helpful comments on this paper. This research was supported by the A. P. Sloan Foundation. Author's present address: Computer Science Dept., Whitmore Lab, Penn State University, University Park, PA 16802.

Frames vs Slots. A frame for the concept PHYSICAL OBJECT is likely to have the slot COLOR-OF. If, in addition, there is a frame for the concept COLOR, there is likely to be no indication that the COLOR-OF slot in the PHYSICAL OBJECT frame and the COLOR frame are semantically related. Somehow, the COLOR-OF slot should be in part derived from the concept COLOR (Wilensky, [11], [12])**.

Uniformity. In a frame system, the concept PERSON would be represented as a frame but a concept such as MURDERER would be the slot, MURDERER-OF, in the MURDER-EVENT frame. This violates the intuition of uniformity because MURDERER has as much reason to be represented as a full-fledged concept as PERSON does.
This lack of uniformity propagates itself in representing semantically similar assertions. Con- sider the sentences: All persons belong in jail. All murderers belong in jail. Despite the semantic similarity of these sentences, their representations would be dissimilar. The former sentence references a frame whereas the latter references a slot. Semantic Coherence: Concepts can be related to one another in various ways (cf., [l]). We would like con- cepts in a data base to be well-knit, while remaining semantically precise. We have identified three forms of relatedness. Similarity. The most obvious form of relatedness has to do with the sharing of common components of meaning. This can be seen in families of verb senses which have similar meaning. For instance, the verbs: kill, murder, strangle, and suicide all share similar components. Definitional Relationships. There are at least two kinds of definitional relationships, namely derived concepts and component concepts. For instance, the concept MURDERER is derived from the concept MURDER-EVENT. A MURDERER is one who murders somebodv. However, the con- cept DIE bears a different relationship to the con- cept MURDER-EVENT. Namely, DIE is used as a component in the definition of MURDER-EVENT. **-The idea of patching frame systems in this manner is due to 233 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. Contingent Relationships. There is a form of concept relatedness that stems from contingencies about how the world happens to be organized. The relatedness of the concepts MURDERER and JAIL can be explained by the highly salient empirical fact that murderers are supposed to be put in jail. As Brachman, Fikes & Levesque [2] have pointed out, frame systems often confuse contingent and definitional relationships. For instance, one might see in a frame system the factual assertion that if person B is the victim of a murder event, then person B is dead. This assertion however does not represent a contingent fact about the world, but rather reflects an entailment. Progressive Differentiation of Concepts: Hierarchi- cal frame systems define new concepts by saying they are kinds of old concepts, but which have more slots. This is analogous to saying that an X is a Y that has such-and-such features. We shall opt for a different method of progressive differentiation. We say instead that an X is a Y which obeys such-and-such constraint, where the constraint relation can be more general than a feature. Instead of saying that a murder event has a murderer slot, a method slot, and a victim slot, we can define a murder event as the event of a volitional force causing a normally living thing to die by some action. This is our primary point of divergence from KRYP- TON [2]. Advantages are: a) The method offers increased expressive power over simply adding slots. b) The constraint, which is used to differentiate a concept from its superordinate( can be used to compute semantic ent,ailments. c) Progressive differentiation is compatible with a highly plausible form of concept acquisition; namely, we acquire new concepts by differentiating concepts we already have [6]. Depth of Processing: As will be seen, UniFrame’s constraint relation allows for a variable depth of pro- cessing of the meaning of a concept. As the meaning of a concept is processed more deeply, an increasing number of contingent facts related to that concept become accessable. II. DETAILS OF UNIFRAME The Concept Frame. 
In accordance with the princi- ple of uniformity, UniFrame uses only one device, called the concept frame, to define concepts, A concept frame describes a concept by saying how it is a different concept from one of its superordinates. This differentiation process results in an ISA hierarchy. The concept frame depicted in Table 1 represents the con- cept MURDER-EVENT. Wilensky, although the virtue of uniformity in a conceptual hierar- chy can be traced back at least to Quillian [9j, and the idea of having only nodes (and not links) representing concepts in a se- mantic network is due to Shapiro [lo]. [concept: MURDER-EVENT isa: (KILL-EVENT INTENTIONAL-DO) with: ($agent: VOLITIONAL-FORCE $instrument: ACTION $victim: LIVING-THING) constraint: (INTENTIONAL-DO $agent (NAIVE-CAUSE $instrument (DIE $victim))) derived concepts: (MURDERER $agent) (MURDER-VICTIM $victim) slot-functions: (MURDERER-OF $agent) ] Table 1 A concept frame, for the moment, consists of four parts: 1) a concept name, 2) an isa-specification, 3) a typed variable list, and, 4) an optional constraint relation template which is used to specify how the concept that is currently being defined is defined in terms of constraints on less differentiated concepts. The isa-specification indicates the category of entity that results when MURDER-EVENT is instantiated. The variable list declares the obligatory “slots” of the frame and indicates the category of object that can fill the slot. The constraint template, by making reference to the variable names in the variable list, serves to embed the slots in a relationship of previously defined concepts. The template in Table 1 indicates that MURDER- EVENT is an intentional action in which a volitional force causes a living thing to die by some action. The concept frame defines concepts by making references to other concepts. These concepts are refer- enced in three places: 1) The ISA specification; 2) The typings of the variable list; and, 3) All atomic expressiocs in the constraint template which are not variables. Thus, to define the concept MURDER- EVENT, we Ek; ;e;Te to the following other con- cepts: INTENTIONAL-DO, VOLITIONAL-FORCE, ACTION, LIVING-THING, NAIVE-CAUSE, and DIE. The major effect of specifying the concept frame in Table 1 is to define a new three-place relation to the system, MURDER-EVENT. For instance, after defining MURDER-EVENT, we can use expression (1) to assert that John murders Bill by suffocation (provided that SUFFOCATE has been defined as a concept). (1) (MURDER-EVENT John (SUFFOCATE John Bill) Bill) The constraint template allows concepts to be expli- cated as constraints on relations between more general and simpler concepts. UniFrame can optionally expli- cate expression (1) into expression (2) below. (2) (INTENTIONAL-DO John (NmCAUSE (SUFFOCATE John Bill) (DIE Bill))) That is, MURDER-EVENT can be treated at an unanalyzed level, or it can be explicated as “intentional cause to die.” The component concepts: INTENTIONAL-DO, SUFFOCATE and DIE, are them- selves represented as concept frames which may have template fields that can in turn be explicated. Derived Concepts. The concept MURDERER is derived from the concept MURDER-EVENT. That is, a necessary and suffkient condition to be a murderer is to be the agent of a murder event. MURDERER is defined in Table 2. 
[concept: MURDERER isa: VOLITIONAL-FORCE with: ($murderer: VOLITIONAL-FORCE) constraint: ((lambda (x) (MURDER-EVENT x $murderer) Table 2 A murderer is a volitional force with the property that it murders a normally living thing. This definition can be generated automatically by specifying the expression !r MURDERER $agent) in the derived concepts field in able 1. How can this be done?*** Note that the variable, $agent, is typed as a volitional force in Table 1. Thus MURDERER is a volitional force. The constraint rela- tion is constructed by doing lambda abstraction on $agent in Table 1 and substituting typed existential quantifiers for any slot variables encountered. What good is it? MURDERER is now a concept about which facts can be uniformly asserted (e.g., Murd- erers are dangerous. Murderers may be violent., etc). There is one remaining field in Table 1. This is the slot-functions field. It enables us to make reference to the murderer of a particular murder-event rather than to the concept of the generic murderer. III. ENTAILMENTS Two types of inference operations are used to com- pute entailments. One is inheritance down the ISA hierarchy. For instance, if John strangles Bill, then it follows that John kills Bill by virtue of a STRANGLE EVENT being a KILL-EVENT. The other operation concerns explication of the definition of a concept. Since inheritance is a well known tool, we will only dis- cuss explication. Explication of a concept involves two things: 1) expansion of the relational template to make the component concepts explicit; and, 2) instantiation of the derived concepts to make the derived concepts expli- cit. Template Expansion. Template expansion involves substituting a concept’s constraint relation template, as was done in expression (1) to generate expression (2). Complete expansion would involve recursively expand- ing all of the concepts referenced in the template until primitives were reached. Expansion to two levels of the sentence n John murdered Bill” (ignoring tense and aspect) generates the following. ***-This has also been done by the use of the QUA link in Brachman’s SI-NETS 171. (3) (MURDER-EVENT John (some ACTION) Bill) --> (4) (INTENTIONAL-DO John (NAIVE-CAUSE I some ACTION) --> DIE Bill))) (5) (INTENTIONAL-DO John (NAIVECAUSE I some ACTION) STATECHANGE I LIVING Bill) not (LIVING Bill))))) Deciding how far to expand is a control problem depen- dent on the inference requirements of the task. Instantiating Derived Concepts. Other entailments of expression (3) are expressions (6) and (7) below. 6 I II MURDERER John) 7 MURDER-VICTIM Bill) These instantiations are obtained by applying the appropriate argument to the derived concepts of MURDER-EVENT, namely, MURDERER and MURDER-VICTIM. Concept Explication. Explicating expression (3) one level involves: 1) instantiating the derived concepts, MURDERER and MURDER-VICTIM, with appropriate arguments; and, 2) expanding the template field of MURDER-EVENT to one level. If this is done accord- ing to Table 1, expressions (4), (6), and (7) result. The concepts directly mentioned in these expressions are considered to have been made explicit. IV. ACCESSING FACTS: AN EXAMPLE Contingent facts associated with a concept become accessable when that concept is made explicit, either by being used directly, or by the process of explication. Consider the task of referent identification in either of the situations below. John murdered Bill. The funeral was held on Monday. John murdered Bill. The trial was held on Monday. 
“The funeral” refers to Bill’s funeral but “the trial” refers to John’s trial. Whatever processes are involved in identifying these referents, relevant facts must be accessed. In this case, the contingent fact that people who die have funerals (associated with DIE) determines that “the funeral” refers to Bill’s funeral, and the infor- mation that murderers have trials (associated with MURDERER) determines that “the trial” refers to John’s trial. When will these facts become available? Explicating expression (3) one level makes both of these facts accessable. Since contingent facts become accessable as the concepts they are associated with become explicit then, expression (4) allows facts stored with DIE to become accessable, and expression (6) allows facts stored with MURDERER to become access- able. As more templates composing the meaning of a concept are expanded, more facts related to that con- cept become available. However, search for contingent facts is highly constrained and can proceed only as the concept’s definition is explicated. Assuming that search terminates when the relevant facts are found, we have a situation where the meanings of concepts are processed 235 to a variable depth and this is controlled by the infer- ence requirements of the task domain. V. SUMMING UP How do UniFrame’s features match up to the desired characteristics of a representation mentioned in the introduction? We discuss each in turn. Uniformity: Concepts are defined only by concept frames and concept frames themselves make reference only to other concepts. Concepts which are typically slots in other systems are full-fledged concepts in this system. Assertions about murderers can be made in the same way as assertions about persons. Coherence of Conceptual Structures: In terms of similarity of related verbs, UniFrame is like most hierarchical frame systems. The concepts underlying these verbs would appear at proximal places in the con- cept hierarchy. With respect to the relatedness that derives from definitional relations, UniFrame has better facilities than most frame systems. UniFrame can represent derived concepts, such as MURDERER. It also cap- tures the way DIE is a component concept of MURDER-EVENT. It also distinguishes between the slot-function MURDERER-OF in the MURDER- EVENT frame and the concept MURDERER. With respect to contingent relationships, factual knowledge can be added to UniFrame in the same way that it can be added to any frame system. However, UniFrame discriminates between the two kinds of knowledge. Death of a murder victim follows from the definition of the murder event, rather than as an asserted fact about the murder event. Progressive Differentiation of Concepts: UniFrame progressively differentiates concepts by the use of a hierarchy, just as most frame systems do, but instead of adding slots in order to specialize concepts, the con- straint relation uses other concepts to impose relation- ships between slots. Depth of Processing: UniFrame makes it easy to pro- cess the meaning of a concept to a varying amount of depth, simply by deciding whether to expand a tem- plate, or instantiate derived concepts. As the meaning is processed more deeply (explicated), more contingent facts become accessable. REFERENCES [I) Alterman, R. “Event concept coherence in narra- tive text” In Proc. Fifth Annual Conference of the Cognitive Science Society, Rochester, New York, May, 1983. [2] Brachman, R., Fikes, R., and Levesque, II. “KRYPTON: Integrating terminology and asser- tion” In Proc. 
AAAI-83, Washington, D.C., August, 1983, 31-35.

[3] Charniak, E. "A common representation for problem solving and language comprehension information." Artificial Intelligence, 16, 1981, 225-255.

[4] Deering, M., Faletti, J., and Wilensky, R. "PEARL: An efficient language for artificial intelligence programming" In Proc. IJCAI-81, Vancouver, Canada, August, 1981.

[5] Fahlman, S.E. NETL: A system for representing and using real-world knowledge. Cambridge, Mass.: MIT Press, 1979.

[6] Kolodner, J. "Maintaining organization in a dynamic long-term memory." Cognitive Science, 7:4, 1983, 243-280.

[7] Leitner, H.H. & Freeman, M.W. "Structured inheritance networks and natural language understanding" In Proc. IJCAI-79, Tokyo, Japan, August, 1979, 525-530.

[8] Maida, A.S. & Shapiro, S.C. "Intensional concepts in propositional semantic networks." Cognitive Science, 6:4, 1982, 291-330.

[9] Quillian, M.R. "Semantic Memory." In M. Minsky (Ed.) Semantic Information Processing, Cambridge, Mass.: MIT Press, 1968.

[10] Shapiro, S.C. "A net structure for semantic information storage, deduction and retrieval" In Proc. IJCAI-71, vol. 2, 512-523, 1971.

[11] Wilensky, R. "Knowledge Representation - A Critique and a Proposal" In Proc. First Annual Workshop on Theoretical Issues in Conceptual Information Processing, Atlanta, Georgia, March, 1984.

[12] Wilensky, R. "KODIAK: A Knowledge Representation Language" In Proc. 6th Annual Conference of the Cognitive Science Society, Boulder, Colorado, June, 1984.
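The paper gives no implementation, but the concept-frame machinery of Sections II and III -- one-level template expansion and the instantiation of derived concepts -- can be approximated in a few lines. The sketch below is only a rough illustration, assuming a simple s-expression-style encoding of frames in Python; the dictionary layout, depth parameter, and function names are ours, not UniFrame's.

# Minimal sketch of UniFrame-style concept frames (illustrative, not the author's code).
# A constraint template is an s-expression whose $-atoms are the frame's typed variables.

FRAMES = {
    "MURDER-EVENT": {
        "vars": ["$agent", "$instrument", "$victim"],
        "template": ("INTENTIONAL-DO", "$agent",
                     ("NAIVE-CAUSE", "$instrument", ("DIE", "$victim"))),
        "derived": {"MURDERER": "$agent", "MURDER-VICTIM": "$victim"},
    },
}

def expand(expr, depth=1):
    """Template expansion: replace a defined relation by its constraint template."""
    if depth == 0 or not isinstance(expr, tuple):
        return expr
    head, *args = expr
    frame = FRAMES.get(head)
    if frame is None:                        # primitive concept: recurse into arguments
        return (head, *[expand(a, depth) for a in args])
    bindings = dict(zip(frame["vars"], args))
    def subst(t):
        if isinstance(t, tuple):
            return tuple(subst(x) for x in t)
        return bindings.get(t, t)
    return expand(subst(frame["template"]), depth - 1)

def derived_instances(expr):
    """Instantiate derived concepts, e.g. (MURDERER John) from a MURDER-EVENT."""
    head, *args = expr
    frame = FRAMES.get(head, {})
    bindings = dict(zip(frame.get("vars", []), args))
    return [(name, bindings[var]) for name, var in frame.get("derived", {}).items()]

fact = ("MURDER-EVENT", "John", ("SUFFOCATE", "John", "Bill"), "Bill")
print(expand(fact, depth=1))    # ('INTENTIONAL-DO', 'John', ('NAIVE-CAUSE', ..., ('DIE', 'Bill')))
print(derived_instances(fact))  # [('MURDERER', 'John'), ('MURDER-VICTIM', 'Bill')]

Expanding to deeper levels would simply mean giving expand the frames for INTENTIONAL-DO, NAIVE-CAUSE, and DIE together with a larger depth, mirroring the control decision discussed in Section III.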
Constraint Equations: A Concise Compilable Representation for Quantified Constraints in Semantic Networks Matthew Morgenstern Information Sciences Institute’ University of Southern California 4676 Admiralty Way, Marina del Rey, CA 90292 Abstract Constraint Equations provide a concise declarative language for expressing semantic constraints that require consistency among several relations. The constraints provide a natural addition to semantic networks, as shown by an extension to the KL-ONE/NIKL representation language. The Equations have a more natural and perspicuous structure than the predicate calculus formulas into which they may be translated, and they also have an executable interpretation. Both universal and existential quantifiers are expressible conveniently in Constraint Equations, as are cardinality quantifiers and transitive closure. For a subclass of these constraints, a prototype compiler automatically generates programs which will enforce these constraints and perform the actions needed to reestablish consistency. 1. INTRODUCTION Constraini Equalions (CEs) provide a concise declarative language for expressing a class of invariant constraints which must hold among chains or sequences of relationships. The declarative Constraint Equations have an executable interpretation, and can be compiled directly into routines for automatic maintenance of the Constraints. This is preferable to writing procedural code to express and enforce these constraints. The prototype implementation has demonstrated such automated generation of programs from CE specifications. The declarative nature of Constraint Equations and their executabie interpretation have an analogy with algebraic equations, For example, the equation X q Y + 2 is a declarative statement of an equivalence between the expressions on either side. If this is to be treated as a constraint which is to be maintained by the system, then there is an executable interpretation which may be thought of as two condition-action rules: (1) if Y and/or Z change, then revise the value of X accordingly, and (2) if X changes, select between the alternatives of disallowing the change, revising Y, or Z, or both. The following example of a Constraint Equation specifies that the Projects which a Manager has a responsibility for are to be the same as the set of Projects which his/her Employees work on. MANAGER.PROJECT == MANAGER.EMPLOYEE.PROJECT ‘This research was supported by the Defense Advanced Research Proiects Agency (DARPA) contract MDA-903-81 C-0335. Views and conclusions contained in this paper are those of the author and should not be interpreted as representing the official opinion or policy of DARPA or the U.S. Government. Here the dot ‘I. ” may be thought of as standing in for the relationship between the objects or entities appearing on either side of it. In general, the dot allows a form of ellipsis in which the relation name or object type may be omitted. Each side of the CE describes a sequence of relations from the Anchor object (here MANAGER) on the left to the Target object on the right of the path. There may be a set of one or more Target instances associated with one Anchor instance by these relationships. This CE says that the set of Projects that arise from both sides must be equal, and that this must be true for each Manager. The concept of Path Quantifiers is defined below to provide existential, universal, and cardinality based quantifiers. 
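To make the reading of such an Equation concrete, the following is a minimal sketch (ours, not the prototype implementation) of how both sides of the Manager/Project Equation can be evaluated as derived relations over small example relations; the relation names follow the entity definitions given in the next section, and the data are made up.

# Illustrative sketch: checking a Constraint Equation by evaluating both
# Connection Paths as derived relations over sets of pairs.

OVERSEES = {("Alice", "P1"), ("Alice", "P2")}          # MANAGER.PROJECT
MANAGES  = {("Alice", "Bob"), ("Alice", "Carol")}      # MANAGER.EMPLOYEE
WORKSON  = {("Bob", "P1"), ("Carol", "P2")}            # EMPLOYEE.PROJECT

def compose(r, s):
    """Join two binary relations pairwise on their shared middle domain."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def targets(rel, anchor):
    """Target set reached from one Anchor instance along a Connection Path."""
    return {y for (x, y) in rel if x == anchor}

lhs = OVERSEES                       # [ (MANAGER) OVERSEES (PROJECT) ]
rhs = compose(MANAGES, WORKSON)      # [ (MANAGER) MANAGES (EMPLOYEE) WORKSON (PROJECT) ]

for manager in {m for (m, _) in MANAGES} | {m for (m, _) in OVERSEES}:
    ok = targets(lhs, manager) == targets(rhs, manager)
    print(manager, "satisfies CE" if ok else "violates CE")

A Manager instance satisfies the Equation exactly when the two Target sets coincide.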
A Constraint Equation represents a structured shorthand for a set of condition-action rules -- a fact which is exploited below when extending the range of behaviors describable using these Equations. The operational semantics of these declarative Equations helps address the need for improved facilities to declaratively represent knowledge of the data’s semantics. Fikes has noted that “such declarative facilities would reduce the (knowledge representation) designer’s dependence on frame actions, and therefore make the resulting implementation more perspicuous and accessible . ...” [Fikes81]. Other studies of constraint-based systems include [Bornrng79], [Goldstein80], and [SussmanSO]. The Information Management system [Balzer83] has provided a testbed for prototype implementation of the Constraint Equation facility. Non-trivial hand written code for constraint maintenance has been replaced by routines that were automatically generated from the CEs. Extensions beyond these operational facilities also are presented below. 2. CONSTRAINT SPECIFICATION and CONNECTION PATHS Each side of a Constraint Equation is a Path Expression, which is an abbreviated representation of a sequence of associations from the semantic network model of the application. The nodes of the network are typed and represent objects -- also sometimes referred to as entities or domains, The attributes of the object are treated here as binary relationships to other objects or to literal values. The abbreviated path expression is compared with the semantic network to determine each elided component, which may be either an object or a relation name. The CE is considered ill-formed if there is ambiguity in the translation. The fully expanded sequence of associations is called a Connection Path. For example, consider the following entities: (where “-->>‘I denotes a multi-valued attribute) 255 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. MANAGER Entity OVERSEES -->> PROJECT MANAG ES -->> EMPLOYEE EMPLOYEE Entity WORKSON -->> PROJECT The abbreviated Path Expressions of a CE are translated into complete Connection Paths (second equation) as follows: MANAGER.PROJECT == MANAGER.EMPLOYEE.PROJECT [ (MANAGER) OVERSEES (PROJECT) ] == [(MANAGER) MANAGES (EMPLOYEE) WORKSON (PROJECT)] In general, a simple Connection Path is a sequence of the form: [ (EO) Rl (El) R2 . . . RN (EN) J , where Ei denotes an object (entity) type, and Ri denotes a (binary) relation from Eiml to Ei. (Relations are shown in parentheses when there may be ambiguity between the names of objects and relations.) EO is the Source and En is the Target of the Connection Path. In a CE, EO also is referred to as the Anchor, since it anchors the CE with a common binding for both paths. When an instance is provided for domain EO (or En), the Connection Path defines a mapping from the Source (if EO was given) to the Target set. A Connection Path defines a relation Rcp(E0 En) derived from the sequence of component relations Ri by joining them paitwise on their common domains, A composition of Connection Paths a!so is a derived relation, since each subpath can be treated as a relation in the overall path. Thus a Connection Path, or composition of subpaths, can be used wherever a relation appears in a Constraint Equation. 3, FORMAL INTERPRETATION of CONSTRAINT EQUATIONS Constraint Equations can be viewed as a compact shorthand for a class of predicate calculus formulae that are useful for knowledge representation paradigms. 
Consider the following Constraint Equation and its expansion into Connection Paths: EO.El == EO;E2.E3 [ (EO) Rl (El) ] == [ (EO) R2 (E2) R3 (E3) ] Each relation may be viewed as a binary predicate, Ri(Ej, Ek). Since each side of the CE is a derived relation, we obtain the following expression in predicate calculus with set notation: { (EO El) j Rl(E0 El) } = { (EO E3) j 3 E2 ( R2(EO E2) A R3( E2 E3) ) } An alternative formulation emphasizes the fact that a Constraint Equation may be thought of as being implicitly iterated over the instances of the Anchor EO. This viewpoint is valuable for understanding CEs, and is utilized later when expressing the Path Quantifiers2 2 An algebra for symbolic manipulation of CEs is under development -- it is used to analyze the consequences of constraints and to derive new related Equations from existing ones. VEO { El j Rl(E0 El) } = { E3 1 3 E2 ( R2( EO E2) A R3(E2 E3) ) } Here, each EO instance serves as a common binding for the Anchor on both sides. Each Connection Path defines a mapping to a set of Target instances -- the Target sets for the left and right sides being {El} and (E3). This CE constrains these two Target sets to be equal for any such Anchor instance. (In lieu of equality, there may be a subset or superset comparator, or a common elements (intersection) constraint, denoted =m: n=, requiring that the Target sets have from m to n members in common -- with =o= denoting no common members, ie. disjointness.) The equality based CE also may be expressed without set notation as: VEO,El ( Rl(E0 El) <==> 3 E2 ( R2( EO E2) A R3(E2 El) ) ) 4. UPDATE SEMANTICS and AUTOMATIC CONSTRAINT ENFORCEMENT When changes occur to the data, one or more Constraint Equations may be affected. A compiler-like facility accepts the Constraint Equation specifications and automatically generates maintenance programs which enforce the constraints (currently for existentially quantified constraints). If there is no way of reestablishing the constraint, then the initial change will not be accepted. Usually however, the maintenance routine can execute the consequential changes needed to satisfy the constraint(s). The system implementation provides triggers or demons which are activated when changes occur to specified relationships [Goldman82]. The enforcement routines which are generated by the CE Compiler are attached to the demons for each of the named relations that are involved in the Constraint Equation. Thus when an insertion, deletion, or update occurs to any instance of these relations, this enforcement routine is automatically invoked to take the appropriate action, When an object instance is created, deleted, or updated, changes occur to relationships which involve that object. Deletion of an object causes all its attributes and relationships to be deleted also. Updating an object actually involves updating the relationships of the object. CEs are activated by these changes to relationships. When a change occurs to a relationship on one side of a CE, a compensating change may be made to a relationship on the other side in order to reestablish satisfaction of the constraint. Since there may be more than one relation on a side, the one to change is indicated by the ” ! ” symbol to the left of or in place of a relation name (the ” ! ” is used in lieu of the dot “. “). The designated relation can be thought of as a weak bond, since it is more readily modified in response to an initial change to the other side of the CE. 
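As a rough illustration of this enforcement style (an assumed sketch, not the code produced by the CE Compiler), the routine below re-derives the weak-bond relation whenever a demon fires on one of the right-hand relations, for a variant of the earlier Manager/Project Equation with the weak bond placed on the left. The trigger mechanism and names are ours; the compiled routines in the actual system make incremental compensating changes rather than recomputing the whole relation.

# Sketch of demon-driven maintenance for MANAGER !PROJECT == MANAGER.EMPLOYEE.PROJECT
MANAGES, WORKSON, OVERSEES = set(), set(), set()
DEMONS = {}                                  # relation name -> list of maintenance routines

def insert(name, rel, pair):
    rel.add(pair)
    for demon in DEMONS.get(name, []):       # fire the demons attached to this relation
        demon(pair)

def maintain_oversees(_changed_pair):
    """Recompute MANAGER.PROJECT from MANAGER.EMPLOYEE.PROJECT (weak bond on the left)."""
    derived = {(m, p) for (m, e) in MANAGES for (e2, p) in WORKSON if e == e2}
    OVERSEES.clear()
    OVERSEES.update(derived)

DEMONS["MANAGES"] = [maintain_oversees]
DEMONS["WORKSON"] = [maintain_oversees]

insert("MANAGES", MANAGES, ("Alice", "Bob"))
insert("WORKSON", WORKSON, ("Bob", "P1"))
print(OVERSEES)    # {('Alice', 'P1')} -- kept consistent with the right-hand path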
As an example, consider the constraint that an Employee’s Phone’s Backup (the extension which takes messages when the phone is busy or does not answer) is the same as the Employee’s Project’s Secretary’s Phone. This may be expressed as a CE: EMPLOYEE. PHONE ! BACKUP == EMPLOYEE.PROJECT.SECRETARY.PHONE 256 The designation of a weak bond on the left indicates that if any of the associations on the right changes (eg. a Project’s Secretary) then the Backup extension for the Employee’s Phone is changed. The absence of a weak bond on the right indicates that a change directly to the relations on the left is not allowed if it would cause a violation of the constraint. For example, the Employee’s Phone could be changed to any other Phone having the same Backup without violating the constraint. Alternative update semantics are specifiable as discussed below. The update semantics are intuitive when relationships are single valued. If an Employee changes to a different Project, and all the relationships! except the changed relation and the weak bond relation, are single valued, then the Secretary’s Phone is clearly defined, and the change of Backup extension for the Employee’s Phone is simple. The potentially multi-valued relationship between Employees and Projects can give rise to a set of changes in other cases. If a Secretary’s Phone is changed, then the Backup extension must be changed for the Phones of the (potentially) several Employees on the associated Project(s) (ie. on Projects served by that Secretary, and limited to those Employee Phones having the old Backup number). Since the activation of a CE can result in additional changes to relationships, a chain of activations of several CEs may arise. Each such activation serves to propagate the consequences of the initial change [Morgenstern88]. Similar issues regarding constraint propagation arise in truth maintenance systems [ Doyle781. As another example, consider the CE presented earlier where a Manager oversees those Projects his/her Employees work on: MANAGER ! PROJECT == MANAGER ! EMPLOYEE. PROJECT The weak bond on each side here indicates that Projects stay with the Employee if there are any other changes. Thus if a Manager adds a Project, then he adds those Employee(s) who work on that Project. 4.1. Specialized Update Semantics The algorithms stated above assume that a change to one side of a CE may be responded to by a change to the designated weak bond relation on the other side. There are cases when a change warrants different responses. We provide this by additional annotations which take the form of condition-action rules or production rules [HayesRoth83] A consistency constraint expressed as a condition-action rule would state the change or combination of changes to the data which serve as the condition for activating the rule. And it would indicate the action to be taken -- typically an expression of how to reinstate consistency, Other forms of action might be to disallow the change, provide information to the user, or invoke a more general procedure to execute an arbitrary action. In fact, the Constraint Equation is directly expressible as a set of such condition-action rules -- one for each relation that may change in the Equation. Here we use such rules to express exceptions to the primary update algorithms. The condition part indicates the relation change which would activate this exception rule, and optionally, the type(s) of change (insertion, deletion, update). 
The action or response may be of arbitrary complexity, but primarily is intended to indicate a relation of the CE to which the compensating change should be made -- thus allowing the weak bond relation to be conditional on which change occurred. The following CE is similar to the one presented earlier, except that here the semantics are that a change of Manager for an Employee changes the Projects the Employee works on. The additional rule overrides the base semantics of the weak bond on the left of the CE. The rule is invoked when the relationship implied by MANAGER. EMPLOYEE is changed, and the response is to treat the relation EMPLOYEE. PROJECT as the weak bond for this case. MANAGER ! PROJECT == MANAGER ! EMPLOYEE .PROJECT except MANAGER.EMPLOYEE = EMPLOYEE. PROJECT Another example is repeated below with a new response. Here a change to a Project’s Secretary would cause the compensating change to be made to the Phone of the old and new Secretaries __ in order that the Backup number (and the phone associated with the Project) stays the same: EMPLOYEE.PHONE !BACKUP == EMPLOYEE.PROJECT.SECRETARY.PHONE except PROJECT.SECRETARY = SECRETARY.PHONE 5. ENHANCED EXPRESSIVE POWER 5.1. Path Quantifiers The set oriented semantics of Constraint Equations can naturally express a spectrum of quantifiers, including existential and universal quantifiers. Existential quantifiers are implicit in CEs, as shown earlier. All intermediate objects along the Connection Path (other than the Anchor and Target) have been existentially quantified for the CEs discussed above. This corresponds to the fact that the path expression on each side of these CEs produces the union of the Target instances for an Anchor. The union operation gives rise to the existential quantification over the different sequences (paths) of intermediate objects and relationships connecting the Anchor with the Target. The ability to express the Universal quantifier is needed for a constraint such as: the Projects of a Department are those Projects on which &I the Employees of that Department work. In other words, the Projects of a Department are those which are common to every Employee of that Department. This notion of commonness to all sets of instances arising from a (possibly derived) association is represented as a Path Intersection Quantifier ” n/ ” -- which replaces the implicit union for a path with an explicit inteisection over the Target sets. This example may be represented as: DEPARTMENT.PROJECT == [DEPARTMENT. EMPLOYEE n/ PROJECT] The intersection here is over the sets of Projects associated with the Employees of that Department (since each Employee works on a set of Projects). The CE requires that the resulting set of common Projects is to be equal to the set of Projects which the Department directs. We expand this CE into a full Connection Path using the previous object definitions together with the following Department object: 257 DEPARTMENT entity DIRECTS -->> PROJECT EMPLOYS -->> EMPLOYEE [ (DEPARTMENT) DIRECTS (PROJECT) ] == [ (DEPARTMENT) EMPLOYS (EMPLOYEE) n/(EMPLOYEE) WORKSON (PROJECT) ] Expressing this constraint in terms of sets, we have: V DEPARTMENT { PROJECT 1 DIRECTS(DEPARTMENT PROJECT) } ; PROJECT 1 3EMPLOYEE ( EMPLOYS(DEPARTMENT EMPLOYEE) ) A VEMPLOYEE ( EMPLOYS(DEPARTMENT EMPLOYEE) * WORKSON(EMPLOYEE PROJECT) ) } In the second set expression above, a Project is included in the resulting set if a// Employees of the Department work on that Project. 
Note that the existence of least one Employee in the Department is required here to ensure that the predicate calculus Universal Quantifier does not become satisfied for each and every Project just because there are no Employees in that Department ! Such concerns are taken care of by the semantics of the Path intersection quantifier. More generally, a Path Intersection expression such as [ El . E2 fl/ E3 . E4 -J expands to a Connection subpath of the form [ (El) R2 (E2) r-T/ (E2) R3 (E3) R4 (E4) ] . For an El instance, this path yields those E4 instances which are common to every E2 -- ie. an E4 instance is related to an El by this path if this E4 is related to every E2 associated with this El. We may formally express this derived relation Rcp(E1, E4) by the following set of pairs. The universal quantifier applies to the entity E2 which immediately precedes the Path Intersection symbol ( fl/ ) in the expressions above. The scope of the universal quantifier is the immediately containing bracketed path expression. The other intermediate objects along the path (here E3) are existentially quantified as usual. { (El E4) 1 3 E2 ( R2(El E2) ) A t/E2 ( R2(El E2) * 3 E3 ( R3( E2 E3) A R4(E3 E4) )) } Since this represents a derived relation Rcp(E1, E4), the above Path Intersection (the expression from El to E4) can be used as part of a larger Path. Thus quantified expressions can be nested within each other. 5.1 .l. Spectrum of Quantifiers The Path Quantifier concept may be extended to provide a spectrum of quantification capabilities ranging from existential to universal quantifiers. In particular, universal quantification required above that E4 be related to every E2, whereas existential quantification requires that E4 be related to at least one E2 for an Anchor instance. We define ,.,,fl/,, to be a Limifed Path Quantifier. If it is used in place of the unconditional intersection quantifier fl/ above, it means that an E4 instance is included if it is related (for a given El) to at least m E2 instances and not more than n E2 instances, We let 1 E2 1 denote the size of the set of E2’s which are related to the given El. The upper bound n defaults to this set size 1 E2 I, and may be different for each Anchor instance. The lower bound m defaults to the smaller of the upper bound and /E21 -- so these defaults are consistent with the unsubscripted path intersection symbol fl/ . For example, the constraint that a Department is responsible for helping to direct a Project if at least three employees of that Department are working on the Project, may be written as: DEPARTMENT.PROJECT == [DEPARTMENT ,fV EMPLOYEE.PROJECT] It can be seen that for the previous path from El, I E21 fl/ is equivalent to the unconditional Path lntersecfion (universal) quantifier f-v , since this explicit lower bound requires that for an E4 to be included in the result, it must be related to all E2s of an El. Furthermore ,n/ is equivalent to the existential quantifier, since for an E4 to qualify, it must be related to just one or more E2s. Thus we have a spectrum of quantifiers. 5.2. Path Operators and Transitive Closure The Connection Path on either side of the Constraint Equation may be extended to include Set Union, Set Intersection, and/or Set Difference of a pair of Connection subpaths. These set operators are subject to the restriction that the Source object type for each subpath is the same, and the Target object type for each subpath is the same. 
When there is a type hierarchy for objects, this restriction is loosened to require just compatibility of object types. We may view the set operator in either of two ways: as the union (or other set operator) of the Target sets arising from each subpath for a given Anchor instance, or as the union (or set operator) of the relation tuples from each subpath. Since these are equivalent, the compound Connection Paths also define a derived (binary) relation, just as for simple Connection Paths. Constraint Equations now can represent the transitive closure. For example, consider a programming environment where the system keeps track of potential calling relationships between programs (as provided by the Masterscope package of Interlisp [Teitelman]). The CALLS relationship exists between function Fl and those functions Fj for which a calling form appears in the body of Fl, Then the relation REACHABLE for function Fl is the transitive closure of the CALLS relation -- ie. all functions called directly by Fl or indirectly Reachable from such called functions. The Constraint Equation representation is: FUNCTION ! REACHABLE == FUNCTION.CALLS U FUNCTION.CALLS.REACHABLE This recursive definition takes on the expected meaning due to the executable interpretation of Constraint Equations. In particular, when a new [Fl CALLS F2] relationship is entered, the following responses occur: the right side of the CE causes F2 and all functions Reachable from F2 to be included as Reachable from Fl. The resulting change to the REACHABLE relation for Fl causes other activations of this CE for those functions which call Fl . In turn this may modify REACHABLE again. The cycle terminates since the union is over a finite number of elements. 5.3. Application to the KL-ONE/NIKL Semantic Network NIKL is a recently developed knowledge representation system [Bobrow83, Moser831, which is a successor to KL-ONE [Schmolze83], and incorporates ideas from the KRYPTON system [Brachman83]. NIKL also has similarities to the KRL representation language, except that KRL also provides for operational semantics which are specified by collections of attached procedures [Fikes82]. The NIKL semantic network is a taxonomy of concepts, (intentional objects) which are related by specialization ._ indicated by a superconcept (is-a) link. The attributes of a concept are referred to as roles, and may include restrictions such as the number and type of values that may fill the role. Role Constraints (role value maps) are intended as a way of mutually restricting the values that may fill two or more roles. As an example, a Role Constraint for a locally employed person (LE- PERSON) is that his/her home is in the same city as the company which employs the person. The following NIKL diagram from [Moser831 shows this requirement. A Constraint Equation which represents this constraint is shown in both its abbreviated and complete path forms: LE-PERSON.HOME.TOWN == LE-PERSON.JOB.COMPANY.LOCATION [ (LE-PERSON) HOME (RESIDENCE) TOWN (CITY) ] == [(LE-PERSON) JOB (EMPLOYMENT) COMPANY (BUSINESS) LOCATION (CITY)] Thus far, universal quantifiers have not been expressible in NIKL. Some consideration had been given to the use of a separate predicate to filter the cross product of values from the several roles, and thereby select those combinations which mutually satisfy the Role Constraint [Bobrow83]. The universal quantifier is captured by Path Intersection in a Constraint Equation. 
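The Path Intersection quantifier of Section 5.1 also has a direct set-based reading, sketched below (an illustrative approximation with made-up data, not the prototype); the limited m..n form of Section 5.1.1 is handled by the optional bounds.

# Sketch of [ DEPARTMENT.EMPLOYEE n/ PROJECT ]: a Project qualifies for a Department
# iff enough of that Department's Employees work on it (all of them, by default).

EMPLOYS = {("D1", "Bob"), ("D1", "Carol")}
WORKSON = {("Bob", "P1"), ("Bob", "P2"), ("Carol", "P1")}

def image(rel, x):
    return {b for (a, b) in rel if a == x}

def path_intersection(r1, r2, anchor, lo=None, hi=None):
    """[ anchor r1 (E2) n/ (E2) r2 (Target) ] with optional m..n limits."""
    mids = image(r1, anchor)
    if not mids:
        return set()                          # no E2 instances: nothing qualifies
    lo = len(mids) if lo is None else lo      # default lower bound: related to all E2s
    hi = len(mids) if hi is None else hi
    counts = {}
    for m in mids:
        for t in image(r2, m):
            counts[t] = counts.get(t, 0) + 1
    return {t for t, c in counts.items() if lo <= c <= hi}

print(path_intersection(EMPLOYS, WORKSON, "D1"))        # {'P1'}        (universal reading)
print(path_intersection(EMPLOYS, WORKSON, "D1", lo=1))  # {'P1', 'P2'}  (existential reading)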
So for example, to express the fact that a person’s friends are those people who are friends of all his brothers, we write the CE: PERSON.FRIEND == [PERSON.BROTHER n/FRIEND] If the CE did not include the Path Intersection ( fI/ ), then any friend of any brother would be one of the person’s friends, rather than requiring friendship with all the brothers in order to qualify. Thus Constraint Equations overlap with other knowledge representation schemes, and they provide a natural extension to the already rich KL-ONE semantic network. 6. CONCLUSION Constraint Equations (CEs) provide a concise declarative representation for a commonly occurring class of constraints in which two differently derived sets of instances, and two different chains of relationships, are to be consistent. CEs have a more natural and perspicuous structure than the predicate CaMuS formulas into which they may be translated. Yet both universal and existential quantifiers are expressible conveniently in Cl% as are cardinality quantifiers, transitive closure, and disjointness. Automatic constraint enforcement is provided in the prototype implementation by compilation of a basic CE specification into a program which will perform the actions needed to reestablish consistency. ACKNOWLEDGEMENTS I would like to thank Don Cohen, Neil Goldman, Tom Lipkis, and Jack Mostow for their useful comments: the predicate calculus formulation benefited from discussions with Don, Jack suggested the transitive closure example, and Tom offered insight into the current NIKL/KL-ONE system. REFERENCES [Balzer83] Robert Balzer, David Dyer, Matthew Morgenstern, Robert Neches, Specification-Based Computing Environments, Proc. National Conf. on Artificial Intelligence (AAAI-83), Washington, D.C., August 1983, pp.12-16. [Bobrow83] Rusty Bobrow, NlKL - A New implementation of KL-ONE, Bolt Beranek and Newman, Cambridge, Mass., January 1983, draft. [Borning79] Alan Borning, Thinglab - A Constraint-Oriented Simulation Laboratory, Stanford Univ. report STAN- CS-79-746, July 1979, Ph.D. thesis. [Brachman83] R.J. Brachman, R.E. Fikes, and H.J. Levesque, KRYPTON: A Functional Approach to Knowledge Representation, IEEE Computer, Oct. 1983, pp.67.73. [Doyle781 Jon Doyle, Truth Maintenance Systems for Problem Solving, Masters Thesis, M.I.T., January 1978, A.I. TR-419, 97PP. [Fikes81] Richard E. Fikes, Odyssey: A Know/edge-Based Assistant, Artificial Intelligence Jour., v.16, 1981, pp.331 -361. [Fikes82] Richard E. Fikes, Highlights from K/one-Talk, Proc. of the 1981 KL-ONE Workshop, Fairchild Camera Technical Report No.618, May 1982, pp.88-103. [Goldman821 Neil M. Goldman, AP3 Reference Manual, June 1982, USC Information Sciences Institute, Marina del Rey, CA. [Goldstein801 I.P. Goldstein & D.G. Bobrow, Descriptions for a Programming Environment, Proc. First Annual Conf. Nat’1 Assn for A.I. (AAAI-80), Stanford, CA, August 1980. [Hayes-Roth831 Fredrick Hayes-Roth, Donald Waterman, & Douglas Lenat, eds., Building Expert Systems, Addison- Wesley Pubs., 1983. [Morgenstern83] Matthew Morgenstern, Active Databases As A Paradigm For Enhanced Computing Environments, Ninth Int’l Conf. on Very Large Data Bases (VLDB-83), Florence, Italy, Ott 1983, ~~34-42. [Moser831 M.G. Moser, An Overview of NIKL, the New implementation of KL-ONE, pp.7-26, in Research in Knowledge Representation for Natural Language Representation, October 1983, Bolt Beranek & Newman, Report No.5421. [Schmolze83] James G. Schmolze and Thomas A. 
Lipkis, Classification in the KL-ONE Knowledge Representation System, Proc. 8th Int'l Joint Conf. on A.I., August 1983, Germany, pp.330-332.

[Sussman80] Gerald Jay Sussman and Guy Lewis Steele, Jr., CONSTRAINTS -- A Language for Expressing Almost-Hierarchical Descriptions, Artificial Intelligence Journal, v.14, 1980, pp.1-39.

[Teitelman] Warren Teitelman, Interlisp Reference Manual, Xerox Palo Alto Research Center, 1978.
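The recursive REACHABLE Equation of Section 5.2 likewise has a simple operational reading. The following sketch (ours, not the prototype) propagates each new CALLS tuple to a fixpoint, terminating because the union ranges over finitely many pairs.

# Sketch of maintaining FUNCTION !REACHABLE == FUNCTION.CALLS U FUNCTION.CALLS.REACHABLE
CALLS, REACHABLE = set(), set()

def add_calls(f, g):
    CALLS.add((f, g))
    changed = True
    while changed:                      # terminates: finitely many (f, g) pairs
        changed = False
        derived = CALLS | {(f1, h) for (f1, g1) in CALLS
                                   for (g2, h) in REACHABLE if g1 == g2}
        if derived - REACHABLE:
            REACHABLE.update(derived)
            changed = True

add_calls("A", "B")
add_calls("B", "C")
print(REACHABLE)    # {('A', 'B'), ('B', 'C'), ('A', 'C')}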
Implicit Ordering of Defaults in Inheritance Systems David S. Touretzky Computer Science Department Carnegie-Mellon University Pittsburgh, PA 15213 Abstract There is a natural partial ordering of defaults in inheritance systems that resolves ambiguities in an intuitive way. This is not the shortest-path ordering used by most existing inheritance reasoners. The flaws of the shortest-path ordering become apparent when we consider multiple inheritance. We define the correct partial ordering to use in inheritance and show how it applies to semantic network systems. Use of this ordering also simplifies the representation of inheritance in default logic. 1. Introduction There is a natural partial ordering of defaults in inheritance systems that resolves ambiguities in an intuitive way, This ordering is defined implicitly by the hierarchical structure of the inheritance graph. Surprisingly, it is not the shortest-path ordering used by most existing inheritance systems, such as FRL [I] or NETL [2]. We define the correct ordering, called inferential distance, and show how its use results in more reasonable inheritance behavior than that of either FRL or NETL. We go on to represent inheritance systems in default logic, following the example of Etherington and Reiter [3]. Although exceptions must normally be treated explicitly in default logic, use of inferential distance allows us to handle them implicitly, which has several advantages. 2. The Inferential Distance Ordering The intuition underlying all inheritance systems is that subclasses should override superclasses. Where inferential distance differs from the shortest-path ordering is in determining subclass/superclass relationships. The inferential distance ordering says that A is a subclass of B iff there is an inheritance path from A to B. In single (as opposed to multiple) inheritance systems, the shortest inference path always contains the inference from the most specific subclass. But under multiple inheritance, there are two cases where the shortest-path ordering disagrees with inferential distance. One involves the presence of true but redundant statements; the other involves ambiguous networks. 3. Handling True But Redundant Statements Figure 1 illustrates a problem caused by the presence of redundant links in an inheritance graph. Let us start with the following set of assertions: “elephants are typically gray; royal elephants are elephants but are typically not gray; circus elephants are royal elephants; Clyde is a circus elephant.” If This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory Under Contract F3361581 K-1539. The author was partially supported by a fellowship from the Fannie and John Hertz Foundation. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government. subclasses override superclasses, then Clyde is not gray. But what happens when we add the explicit statement that Clyde is an elephant, as shown in figure 1 ? This is a redundant statement because Clyde is indisputably an elephant; he was one before we added this statement; in fact, this is one of the inferences we expect an inheritance reasoner to generate. Yet when we make the fact explicit it causes problems. 
In FRL Clyde will inherit properties through both Circus.Elephant and Elephant, so FRL will conclude that he both is and is not gray. In NETL, the redundant statement that Clyde is an elephant contributes an inference path to gray that is shorter than either of the two paths (one to gray, one to not-gray) which go through Circus.Elephant. NETL will therefore conclude that Clyde is gray, which contradicts the (correct) conclusion it would reach without the redundant link present.

Figure 1. Inferential distance is unaffected by redundant links.

Clyde could either inherit grayness, a property of Elephant, or non-grayness, a property of Royal.Elephant. Since the network contains an inheritance path from Royal.Elephant to Elephant, according to the inferential distance ordering Royal.Elephant is a subclass of Elephant; the direct link from Clyde to Elephant does not alter this relationship. Therefore we conclude that Clyde should inherit non-grayness from Royal.Elephant rather than grayness from Elephant.

4. Ambiguous Inheritance Networks

Consider the following set of assertions, shown in NETL notation in figure 2. "Quakers are typically pacifists; pro-defense people are typically not pacifists; Republicans are typically pro-defense; Nixon is both a Quaker and a Republican." This network is ambiguous: it has two valid extensions. (An extension is the nonmonotonic or default logic equivalent of a theory [4].) In one extension Nixon is a pacifist; in the other he is not.

Most existing inheritance reasoners would not recognize this ambiguity. If pacifism were a slot that could be filled with either "yes" or "no," FRL would simply return both values, with no notice of the inconsistency. NETL would conclude that Nixon was a pacifist simply because the inference path to that conclusion is shorter than the path to the opposite conclusion. Yet the fact that one path is shorter than the other is irrelevant.

Nixon can inherit either pacifism, a property of Quaker, or non-pacifism, a property of Pro.Defense. Since there is no inheritance path from Quaker to Pro.Defense, nor vice versa, the inferential distance ordering provides no justification for viewing either class as a subclass of the other. Thus an inheritance reasoner based on inferential distance would be forced to recognize the ambiguity with regard to Nixon's pacifism.

Figure 2.

5. Inferential Distance in Semantic Networks

TINA (for Topological Inheritance Architecture) is a recently implemented inheritance reasoner based on inferential distance [5]. TINA constructs the extensions of unambiguous inheritance networks by incrementally generating inheritance paths and weeding out those that violate the inferential distance ordering. This method also allows TINA to detect and report ambiguities in networks with multiple extensions. Since TINA does not use the shortest-path approach to inheritance, it is not misled by redundant links in the inheritance graph.

Another part of TINA, called the conditioner, can be used to correct certain problems with inheritance in NETL reported in [6]. These problems are due to NETL's implementation as a set of parallel marker propagation algorithms based on shortest-path reasoning.
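To make the ordering concrete, the following small sketch (not TINA itself; the graph encoding and the handling of competing conclusions are our assumptions) keeps a candidate conclusion only if no competing conclusion originates from one of its subclasses, where "subclass" means reachable by an inheritance path, and reports ambiguity when neither candidate dominates.

# Illustrative sketch of the inferential distance ordering (not TINA).
ISA = {"Clyde": ["Circus.Elephant", "Elephant"],        # includes the redundant link
       "Circus.Elephant": ["Royal.Elephant"],
       "Royal.Elephant": ["Elephant"],
       "Elephant": []}
PROPS = {"Elephant": [("Gray", True)], "Royal.Elephant": [("Gray", False)]}

def superclasses(c, seen=None):
    """All classes reachable from c via IS-A paths."""
    seen = set() if seen is None else seen
    for s in ISA.get(c, []):
        if s not in seen:
            seen.add(s)
            superclasses(s, seen)
    return seen

def conclude(individual, prop):
    ancestors = superclasses(individual)
    candidates = [(cls, sign) for cls in ancestors
                  for (p, sign) in PROPS.get(cls, []) if p == prop]
    # A candidate survives only if no competing candidate comes from one of its subclasses.
    surviving = [(cls, sign) for (cls, sign) in candidates
                 if not any(cls in superclasses(other)
                            for (other, _) in candidates if other != cls)]
    signs = {sign for (_, sign) in surviving}
    if len(signs) == 1:
        return signs.pop()
    return "ambiguous" if signs else "unknown"

print(conclude("Clyde", "Gray"))   # False: Royal.Elephant overrides Elephant, redundant link ignored
# On the Nixon network of Figure 2, neither Quaker nor Pro.Defense is reachable from
# the other, so both candidates survive and conclude() would report "ambiguous".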
TINA’s conditioner modifies the topology of a NETL network (after the extension has been computed) so as to force marker propagation scans to produce results in agreement with the correct extension, as defined by inferential distance. This technique can also be applied to other semantic network systems (incuding parallel systems) to speed up their inheritance search, since once a network has been conditioned we can search it using a shortest-path inheritance algorithm. Shortest-path algorithms are simpler and more effcient than inferential distance algorithms. One drawback is that any changes to the network will require at least a portion of it to be reconditioned. 6. Representing Inheritance in Default Logic In default logic, a default inference rule is written in the form 323 where a(x), p(x), and y(x) are well-formed formulae called the prerequisite, the justification, and the consequent of the default, respectively [7]. The interpretation of this rule is: if (r(x) is known, and /3(x) is consistent with what is known, then y(x) may be concluded. A default is said to be normal if the consequent is the entire justification, i.e. p(x) and y(x) are identical. A default is said to be semi-normal if it is of form Etherington and Reiter use semi-normal defaults to represent inheritance systems in default logic [3]. To see why semi-normal defaults are necessary, consider the example in figure 3. Can -Fly Bird Ostrich b Henry Figure 3. This figure could be represented as the set of normal defaults Dl-D3 below, plus the assertion Ostrich(Henry). We represent “ostriches are birds” (rule D2) in this example as a default rather than as a strict implication mainly for uniformity; this decision is not critical to the example. Another reason, though, is that in NETL, which we are trying to model, all statements are defeasible. 01) Bird(x) : Can.Flv(xl Can.Fly(x) (W Ostrich(x) : Birciu Bird(x) (D3) Ostrich(x) : 4Zan.Flv(xl %an.Fly(x) Ostrich(Henry) Using Dl -D3, the assertion Ostrich(Henry) generates not one but two extensions. In one extension, Henry can’t fly because he is an ostrich. But in the other, Henry can fly because he is a bird. This problem of “interacting defaults” was noted by Reiter and Criscuolo [8]. To solve it, they would replace the normal default Dl with the semi-normal version Dl’: (Dl 7 Bird(x) : TOstrich(x) A Can.Flv(xl Can.Fly(x) In Dl’, the restriction that ostriches should not be inferred to fly is incorporated into the default rule that birds fly. If we add two more types of non-flying birds, say penguins and dodos, then 01’ would have to be replaced by another default that mentions all three exceptions. There are three problems with handling exceptions explicitly using semi-normal defaults. First, as information is added to a knowledge base, existing default rules must continually be replaced with new ones that take the new exceptions into Second, the complexity of each individual default account. increases as the knowledge base grows, because more exceptions must be mentioned. Third, in any given inheritance network, the translation of one link cannot be determined independently of that of the others. For example, an IS-A link between Bird and Can.Fly might be represented as the normal default Dl, yet in some networks the exact same link must be represented by Dl’. Syntactically, every link in an inheritance network is a normal default, since the network formalism makes n0 explicit reference to exceptions. 
The problem with representing inheritance assertions as semi-normal defaults can be summarized by saying that it lacks what Woods calls "notational efficacy," a term that encompasses such properties as conciseness of representation and ease of modification [9]. Etherington and Reiter suggest that NETL treats some types of exceptions explicitly (i.e. its rules are semi-normal) because two types of exception link were proposed in [6]. These links were to be added to the network automatically, in a preprocessing step, to force NETL's marker propagation algorithms to produce the desired results. In order to add these exception links one must have a specification for the correct interpretation of the network. When one creates a NETL network, then, the meaning must already be determined, whether or not the network is subsequently annotated with exception links. The NETL formalism itself does not require that exceptions be treated explicitly. Exception links were later abandoned as a marker propagation device. In FRL, an explicit mechanism for noting exceptions has never even been proposed. If we wish to translate inheritance networks into default logic using semi-normal rules, how are we to derive these rules from the syntactically normal ones the inheritance system contains? This question was left unanswered by earlier work on nonmonotonic inheritance. The inferential distance ordering provides an answer.

7. A Formal Analysis of Inferential Distance

By representing inheritance in default logic, Etherington and Reiter were able to give a formal semantics to inheritance systems along with a provably correct inference procedure. However, since their representation does not include the notion that subclasses should override superclasses, it does not fully express the meaning of inheritance. In [5] I present a formal analysis of inheritance under the inferential distance ordering. Some of the major theorems are:

• Every acyclic inheritance network has a constructible extension. (A similar result was proved in [7].)
• Every extension of an acyclic inheritance network is finite.
• An extension is inconsistent iff the network itself is inconsistent. (We use an expanded notion of inconsistency in which the rules "typically birds can fly" and "typically birds cannot fly" are mutually inconsistent. They would not be in default logic.)
• The union of any two distinct extensions is inconsistent.
• A network is ambiguous (has multiple extensions) iff it has an unstable extension. Instability is a property defined in [5]. A necessary condition for instability is that the network contain a subgraph of the form shown in figure 4.
• Every extension of an ambiguous network is unstable. Corollary: we can determine whether a network is ambiguous by constructing one of its extensions and checking it for stability.

Figure 4. [Figure: the subgraph configuration required for instability.]

• Every inheritance network is conditionable. That is, given a network and one of its extensions, we can always adjust the topology of the network so that a shortest path reasoner will produce results in agreement with the chosen extension.
• Additive conditioning (i.e. adding but never subtracting links) is sufficient.

8. Implementing Inferential Distance in Default Logic

Consider a subset of default logic corresponding to a family of acyclic inheritance graphs. We can represent an IS-A or IS-NOT-A link between a class P and a class Q as a normal default in the obvious way, viz.

    P(x) : Q(x) / Q(x)    or    P(x) : ¬Q(x) / ¬Q(x)

Let P(x) be the prerequisite of a rule Di and Q(x) the prerequisite of a rule Dj.
We define Di < Dj to mean that either there exists a default with prerequisite P(x) and conclusion Q(x), or there exists a default Dk such that Di < Dk and Dk < Dj. Returning to the ostrich example, note that D3 < D1 and D2 < D1 by this definition. D2 and D3 are unordered with respect to each other since their prerequisites are the same. The < relation is clearly a partial ordering. The equivalent of an inheritance path in default logic is a proof sequence. The example involving Henry the ostrich, when represented by the normal defaults D1-D3, generates a pair of conflicting proof sequences S1 and S2. The arrows in these sequences indicate the defaults that justify each inference.

(S1) Ostrich(Henry) --D3--> ¬Can.Fly(Henry)
(S2) Ostrich(Henry) --D2--> Bird(Henry) --D1--> Can.Fly(Henry)

Note that if Di precedes Dj in some proof sequence, then Di < Dj. If we order proof sequences by comparing the ordering of the maximal rules used in each proof, we see that S1 < S2 because D3 < D1. To apply inferential distance to default logic, we use the ordering on proof sequences as a filter over the set of possible extensions. (This idea was suggested by David Etherington.) Basically, we reject as invalid any extension in which a conclusion depends on a proof sequence Si such that there is a contradictory sequence Sj < Si. Thus, the extension in which Henry can fly would be rejected, since that conclusion depends on proof sequence S2 but there is a contradictory proof sequence S1 < S2.

Now let us try expressing figure 2 in default logic:

(D4) Quaker(x) : Pacifist(x) / Pacifist(x)
(D5) Republican(x) : Pro.Defense(x) / Pro.Defense(x)
(D6) Pro.Defense(x) : ¬Pacifist(x) / ¬Pacifist(x)

Quaker(Nixon) ∧ Republican(Nixon)

The inference paths we generate about Nixon are:

(S3) Quaker(Nixon) --D4--> Pacifist(Nixon)
(S4) Republican(Nixon) --D5--> Pro.Defense(Nixon) --D6--> ¬Pacifist(Nixon)

The only ordering relation among these defaults is D5 < D6. Since D4 and D6 are unordered, the proof sequences S3 and S4 are unordered, so of the two extensions we obtain, one relying on S3 and one on S4, neither is to be preferred over the other. At this point the reader should have no trouble translating figure 1 into a set of normal defaults and verifying that under the inferential distance ordering, only the desired extension is produced.

9. The Significance of Hierarchy

Brachman, in his discussion of what IS-A is and isn't, suggests that "to the extent inheritance is a useful property, it is strictly implementational and bears no weight in any discussion of the expressive or communicative superiority of semantic nets" [10]. When an inheritance system is devoid of exceptions he is clearly right. But in nonmonotonic inheritance systems, which provide for a simple form of default reasoning, the basic assumption that classes are structured hierarchically makes implicit handling of exceptions possible. In contrast, exceptions must normally be handled explicitly in default logic, since default logic contains no notion of hierarchy. Implicit handling of exceptions is possible when we are restricted to hierarchical domains with simple forms of defaults, but default logic admits more intricate sorts of theories. Unrestricted semi-normal theories cannot be represented by normal ones using the ordering defined here. Default logic is clearly a more powerful formalism than inheritance for representing knowledge, but the latter remains important due to its conceptual simplicity and efficient inference algorithms.
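The ordering and filter of Section 8 can also be seen procedurally. The sketch below is illustrative Python only: the encoding of defaults and proof sequences is our own simplification (in particular, it takes the last default applied in a sequence as its maximal rule, which is correct for S1-S4), and it is not anything in TINA or in Etherington and Reiter's work.

```python
# Each default maps to (prerequisite, conclusion); this is enough to compute
# the ordering Di < Dj of Section 8 for the examples above.
RULES = {
    "D1": ("Bird",        "Can.Fly"),
    "D2": ("Ostrich",     "Bird"),
    "D3": ("Ostrich",     "-Can.Fly"),
    "D4": ("Quaker",      "Pacifist"),
    "D5": ("Republican",  "Pro.Defense"),
    "D6": ("Pro.Defense", "-Pacifist"),
}

def direct(di, dj):
    """Some default leads from Di's prerequisite to Dj's prerequisite."""
    pi, pj = RULES[di][0], RULES[dj][0]
    return any(pre == pi and con == pj for pre, con in RULES.values())

def precedes(di, dj):
    """Di < Dj: the direct relation, or a chain of it (the network is acyclic)."""
    return direct(di, dj) or any(direct(di, dk) and precedes(dk, dj) for dk in RULES)

# Proof sequences: (defaults applied in order, final conclusion).
SEQS = {
    "S1": (["D3"],       "-Can.Fly(Henry)"),
    "S2": (["D2", "D1"], "Can.Fly(Henry)"),
    "S3": (["D4"],       "Pacifist(Nixon)"),
    "S4": (["D5", "D6"], "-Pacifist(Nixon)"),
}

def seq_precedes(si, sj):
    # Compare the maximal (here: last-applied) rules of the two sequences.
    return precedes(SEQS[si][0][-1], SEQS[sj][0][-1])

def contradictory(si, sj):
    ci, cj = SEQS[si][1], SEQS[sj][1]
    return ci == "-" + cj or cj == "-" + ci

def admissible(si):
    """Inferential distance filter: reject Si if a contradictory Sj < Si exists."""
    return not any(contradictory(si, sj) and seq_precedes(sj, si)
                   for sj in SEQS if sj != si)

for s in SEQS:
    print(s, "admissible" if admissible(s) else "rejected")
# S1 admissible, S2 rejected (S1 < S2 because D3 < D1);
# S3 and S4 both admissible, so the Nixon network remains ambiguous.
```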
10. Conclusions

The intuition underlying all inheritance systems is that subclasses should override superclasses. Inferential distance is a partial ordering on defaults that implements this intuition. The inferential distance ordering differs from the shortest-path ordering used by most inheritance reasoners in cases where the network is ambiguous or contains true but redundant statements. In these cases, the shortest-path ordering fails to ensure that subclasses (and only subclasses) override superclasses. Applying inferential distance to the default logic representation of inheritance systems allows us to faithfully represent these systems with no loss of notational efficacy. Under inferential distance, default rules need not be discarded as more information is added to the knowledge base; individual rules do not become more complex as exceptions accumulate; and the translation of any one link in an inheritance network into a default is independent of that of any other.

Acknowledgements

I am grateful to David Etherington, Jon Doyle, and Scott Fahlman for many insightful discussions, and to the referees for suggestions which led to the restructuring of this paper.

References

[1] Roberts, R. B., and I. P. Goldstein. The FRL Manual. MIT AI Memo 409, MIT, Cambridge, MA, 1977.
[2] Fahlman, S. E. NETL: A System for Representing and Using Real-World Knowledge. MIT Press, Cambridge, MA, 1979.
[3] Etherington, D. W., and R. Reiter. "On Inheritance Hierarchies With Exceptions," Proc. AAAI-83, August, 1983, pp. 104-108.
[4] Reiter, R. "A Logic for Default Reasoning," Artificial Intelligence, Vol. 13, No. 1-2, April 1980, pp. 81-132.
[5] Touretzky, D. S. The Mathematics of Inheritance Systems. Doctoral dissertation, Computer Science Dept., Carnegie-Mellon University, Pittsburgh, PA, 1984.
[6] Fahlman, S. E., D. S. Touretzky, and W. van Roggen. "Cancellation in a Parallel Semantic Network," Proc. IJCAI-81, August, 1981, pp. 257-263.
[7] Etherington, D. W. Formalizing Non-Monotonic Reasoning Systems. Technical Report 83-1, Dept. of Computer Science, University of British Columbia, Vancouver, BC, Canada, 1983.
[8] Reiter, R., and G. Criscuolo. "On Interacting Defaults," Proc. IJCAI-81, August, 1981, pp. 270-276.
[9] Woods, W. A. "What's Important About Knowledge Representation?" Computer, Vol. 16, No. 10, October 1983, pp. 22-27.
[10] Brachman, R. J. "What IS-A Is and Isn't," Computer, Vol. 16, No. 10, October 1983, pp. 30-36.
VERY-HIGH-LEVEL PROGRAM-MING OF KNOWLEDGE REPRESENTATION SCHEMES Stephen J. Westfold Stanford University and Kestrel Institute, Palo Alto, CA 94304 ABSTRACT This paper proposes building knowledge-based systems using a programming system based on a very-high-level language. It gives an overview of such a programming system, BC, and shows how BC can be used to implement knowledge representation features, providing as examples, automatic maintenance of inverse links and property in- heritance. The specification language of BC can be ex- tended to include a knowledge representation language by describing its knowledge representation features. This permits a knowledge-based program and its knowledge base to be written in the same very-high-level language which allows the knowledge to be more efficiently incor- porated into the program as well as making the system as a whole easier to understand and extend. fj 1 Introduction A knowledge-based system typically consists of a pro- gram and a knowledge base that the program uses. The knowledge base is expressed in a special knowledge repre- sentation language that is essentially a very-high-level lan- guage that the program interprets. This paper describes a very-high-level language programming system, BC, and shows how BC can be used to define knowledge repre- sentation languages so that they can be efficiently com- piled. Furthermore, the knowledge-based program itself can be specified in BC using the same techniques with the same advantages of ease of comprehension and main- tainability that are associated with the knowledge base. This allows the knowledge base to be viewed as part of the specification of the program, which is the key to its efficient incorporation into the program. In this way BC may be viewed as a knowledge compiler, pre-processing knowledge so that it is used efficiently in the knowledge- based system. This research is supported in part by the Defense Advanced Re- search Projects Agency Contract NOOOld-81-C-0582, monitored by the Office of Naval Research. The views and conclusions contained in this paper are those of the author and should not be interpreted as representing the official policies, either expressed or implied of KESTREL, DARPA, ONR or the US Government. BC allows programs to be factored into a descrip- tion of the problem to be solved and a description of the implementation of the solution. The implementation description can include schemes for representing entities of the problem description or solving particular types of sub-problem. BC can be used to define implementation schemes for knowledge representation features such as property inheritance, inverse link maintenance, and proce- dural attachment. The definitions of the. first two of these features are given later in this paper. BC is described fully in [Westfold, 19841. The specification language for EC is basically a mathe- matical language including logic, sets, relations, and func- tions. This very-high-level language is convenient for defining new language constructs in terms of existing con- structs, and t.here is a mechanism for defining syntax for the new constructs. Thus the system designer can define a language that is convenient for system users; the parser converts this language into relations that are defined in terms of mathematical objects that have properties that facilitate their manipulation (compilation) by BC. 
By use of manipulation such as equivalence transformation BC can produce an implemented program whose structure is quite different from that of the problem specification. In other words, convenient, uniform interfaces can be defined for the user and to facilitate the description of the different components of the system, but the implementation can be non-uniform, crossing interfaces and taking advantage of different views of the problem domain in order to produce an efficient program. The ideas in this paper are being tested by using BC in building the CHI knowledge-based programming sys- tem [Green et al., 19811. CHI includes the following com- ponents, all of which make use of BC in their specification and implementation: data structure selection, algorithm design, parallel algorithm derivation, and project manage- ment, the database manager, program analysis, finite differencing, and BC itself. Many of these components are useful in building knowledge-based systems, so CHI as a whole is better than just RC for building knowledge-based systems. 344 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. $2 Overview of BC BC is essentially a compiler that produces Lisp code from a specification in the form of logic assertions. The specification consists of three parts: the basic definition of the problem domain; the definition of auxiliary ob- jects that are needed in an efficient implementation of the problem domain; and information about how the defining assertions are to be used procedurally. It is convenient to identify and use an intermediate rule language in going from the logic assertion language to procedural Lisp. A rule specifies an action (procedure) in terms of its precon- dition (applicability condition) and postcondition (what is true after its application). A rule consists of two logical formulas, written as P+Q where P is the precondition and Q is the postcondition. (Note that ’ -+ ’ is a procedural construct and ‘j’ is the symbol for implication.) The part of BC that compiles the logic assertion specification into rules is called the Logic Assertion Compiler (LAC). F rom an assertion, which could be used to make many different inferences, and instructions stat- ing which particular inference and in what context, LAC produces a rule that is a specification of that particular inference. The part of BC that compiles rules into Lisp is the Rule Compiler (RC). It works by a process of step- wise refinement similar to other transformational systems such as PECOS [B arstow, 19791, TI [Balzer, 19811, and [Burstall and D 1 gt ar in on, 19771. At intermediate stages of refinement the program contains a mixture of constructs from very-high-level to low-level, so a wide-spectrum lan- guage must be used that includes all these constructs in a unified framework. LAC 1 Logic Assertion Figure 1. Structure of BC Compiler The language used by BC is called V and it is the language used throughout CHI. V was initially defined by Phillips [Phillips, 19821 and has since been refined and extended by the CHI group. It contains a number of integrated sub-languages: a first-order predicate logic language, VLogic, which is the basic specification language used by BC; a rule language, VRL; a procedural language, VP;,and the target language Lisp. 2.1 Procedural Use of Assertions LAC compiles a specification written in VLogic asser- tions by converting each assertion into an inference pro- cedure specialized to that assertion. 
The user specifies which particular inference procedure should be used. BC provides three dimensions of choice for the type of in- ference procedure. The first corresponds to the general form of the assertion that is used: either an implication or of P=+Q an equality (with equivalence considered a special case equality) p=9- Each of the general forms may have a precondition which is written as the antecedent of an implication with the form as the consequent. For example: r ==+ p=q can be considered an equality with precondition r. It may also be treated as an implication. The second dimension corresponds to the direction of use of the general form: from left, to right or right to left. For implication, the former corresponds to forward or data-driven inference and the latter to backward or, goal-directed inference. Considering the assertion as a constraint, the former corresponds to enforcing the con- straint and the latter to using or taking advantage of the constraint. An equality is commutative, but typically there is a directionality associated with each one. For ex- ample, a function j can be defined using an equality of the form j(z)=def. The third dimension is choice of compile-time versus run-time use of an assertion. Use of an assertion at com- pile time provides the possibility for circumventing the clean specification-level interfaces and producing efficient, tangled code. The result of compiling an assertion for compile-time use is a procedure that affects the compila- tion of other code. An important use of assertions at compile time is to maintain and use them as constraints. Constraint incor- poration is done at the stage of compilation where a proce- dure is expressed as a rule. Rule compilation involves us- ing the rule to form a statement in logic of the relationship between the computation states before and after the rule application, and then producing a procedure that, given an initial state, will produce a new state that satisfies the relationship. The intermediate statement in logic is a convenient form for performing inference to incorporate constraints stated in logic assertions. Use of an assertion at run time requires converting it to the run-time constructs available in the target en- vironment. Therefore we need to consider two models of computation: the model of computation as inference at the specification level and the Lisp model which is basically a recursive function model. This means that any run-time inferences have to be put into a functional form. Goal-directed, run-time inference can be imple- mented efficiently using Lisp functions. This may involve adding an extra definition so that the goal is in the form of a function call. In order to implement forward-inference procedures we need some extra machinery in the target environment. The procedures need to be attached somewhere so that they are triggered at the appropriate time, and they need to be able to store the values that they compute so that the values are found when wanted. This can be done with a database of [function, argument, value] triples that are indexed by the function and argument. BC uses a database that stores objects (the things that may be function arguments) as mappings from functions to values. Functions that are treated in this way are called properties. Storing the value of a property in the database may trigger attached forward inference procedures which may store values for other properties. 
When the value of a property is needed, the database is examined to see if there is a stored value, otherwise a Lisp function for computing the value is called, if there is one. 2.2 Specifying How to Use an Assertion The ways an assertion is used are specified by attach- ing simple meta-assertions to the assertion. This section describes the basic options provided by BC. Run-time use is encapsulated as a function. For for- ward use it is necessary to specify the triggering form that causes the function to be called. For backward use it is necessary to specify the name of the function whose value is to be computed: triggered-by formr, forma, . . . (the formi are the triggering forms) computes fni, fna, . . . (the fn; are the functions (Closed functions) to be computed) Other options are: memo (Save computes values in database) check (Give an error if assertion violated) For compile-time use it is necessary to specify whether the assertion is to be used as a constraint for optimiza- tion or as a constraint to be maintained (or both), or for transforming some forms into equivalent ones. compile-optimize form (Backward) (Use the assertion to remove redundant tests) compile-in-line form (Forward) (Add in-line code to maintain the constraint) compile-transform form (Transform form to an equivalent form) For convenience the forms may be referred to by their primary function if this is an unambiguous referent. These are the basic meta-level annotations. Internally, they are simply meta-level properties of assertions. New annotations can be defined in terms of these basic ones using logic assertions at the meta-level from which BC can produce demons that, given the new annotation, generate the equivalent basic annotations. 2.3 The Implementation of BC BC is written primarily in its own languages-VLogic and VRL. ,4 basic version of RC was written in Lisp and then the VRL specification of RC was compiled and this version replaced the Lisp version. The implementation of LAC is at the stage where it can compile assertions given in the exact form needed for the particular use of it. The part of LAC that preprocesses assertions to get them into the correct form has been designed and is in the process of being implemented. BC has been developed in Interlisp [Teitelman and Masinter, 19811 on a DEC 2060 machine and then in Zetalisp [Weinreb and Moon, 19811 using the Interlisp Compatibility Package on Symbolics 3600 machines. 53 Example Implementations of Knowledge Representation Features The examples begin with a simple database that only provides storage and retrieval of binary-relation triples, This is used as the basis for defining knowledge repre- sentation features. The examples presented are for main- tenance of inverse links and property inheritance. Other features that have been specified are specialized treatment of transitivity, attached procedures, and memoing of com- puted properties. 3.1 Maintaining Inverse Links The first example is the task of maintaining inverse links in a database. This requires that whenever f(z)=y is stored in the database, f-‘(y)=z is also stored. The language used is introduced informally as necessary. The basic assertion is: inverse (f)= g A one-to- one (f) =+ f(z)=y - g(y)=z By convention, unbound variables are universally quantified, so f, g, z and y are universally quantified over this assertion. 
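As a concrete (if much simplified) picture of the run-time machinery such a specification compiles into, the sketch below is illustrative Python, not BC's generated Clisp; the class and function names are invented for the example. A toy property database stores [object, property, value] triples, storing a value fires attached demons, and one demon maintains the inverse-link constraint just stated: whenever f(x)=y is stored and f is one-to-one with inverse g, g(y)=x is stored as well.

```python
class PropertyDB:
    """A toy [object, property, value] store whose db_put fires attached demons."""
    def __init__(self):
        self.triples = {}   # (object, property) -> value
        self.demons = []    # each demon is called as demon(db, obj, prop, value)

    def db_get(self, obj, prop):
        return self.triples.get((obj, prop))

    def db_put(self, obj, prop, value):
        if self.db_get(obj, prop) == value:
            return                      # nothing new: avoids ineffectual forward chaining
        self.triples[(obj, prop)] = value
        for demon in self.demons:
            demon(self, obj, prop, value)

def maintain_inverse(db, obj, prop, value):
    """If inverse(prop) = g and prop is one-to-one, also store g(value) = obj."""
    if db.db_get(prop, "one-to-one"):
        g = db.db_get(prop, "inverse")
        if g is not None and db.db_get(value, g) != obj:
            db.db_put(value, g, obj)

db = PropertyDB()
db.demons.append(maintain_inverse)
db.db_put("lhs", "inverse", "lhs-of")
db.db_put("lhs", "one-to-one", True)
db.db_put("m", "lhs", "n")        # storing lhs(m)=n ...
print(db.db_get("n", "lhs-of"))   # ... prints 'm': lhs-of(n)=m was stored by the demon
```

The guard that skips redundant stores plays the same role as the ¬DB(...) applicability condition in the rules BC generates, preventing infinite and ineffectual forward chaining.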
346 3.1.1 Maintaining ’ the Constraint with a Run-time Procedure One way of maintaining the constraint is to attach a demon function that is executed to add the inverse whenever a property is stored. This can be specified as follows: inverse (f>=g A one-to-one(f) =+ fb>=y = g(y)=2 triggered-by f(z)=y where “triggered-by p” is a meta-level annotation that means whenever p is asserted (stored) the assertion should be made true. LAC produces the following demon from this specification: trigger f( z)=y inverse (f)= g A one-to-one(f) A 1 DB(g(y)=z) + ~B(g(Y)=4 This uses a generalized demon construct which con- sists of a triggering event-in this case the assertion that f(z)=y, and a procedure body-in this case a rule whose applicability condition (left-hand side) is inverse (f)=g A one- to-one(f) A 1 DB( g(y)=z) and whose action is to make its right-hand side DB( g( y)=z) true in the new state. DB( z) is true if and only if z is stored in the database. The DB predicate is used to distinguish some- thing that is true because it is explicitly stored in the database from something being true because it is im- plied by the database. Thus the condition 1 DB( g( y)=z) prevents the rule from applying if its action would be redundant. This prevents the possibility of infinite and ineffectual forward chaining. RC compiles the rule into the following Clisp code: (if (db-get f ‘one-to-one) then (let ((g (db-get f ‘inverse))) (if (NEQ (db-get y g) x) then (db-put y g x)))) which is executed whenever a property is stored in the database. (db-get x y) and (db-put x y z) are functions for retrieving from and storing into the database, respec- tively. Basically what RC does in this simple example is decide the order in which conjuncts are used and how each conjunct is to be used--either tested or used to bind a variable. 3.1.2 Maintaining the Constraint with In-line Code An alternative way to maintain the constraint is to add in-line code, specified as follows: inverse (f)=g A one-to-one(f) =$ f(z)=y - g(y)=2 compile-in-line j(z)=y where “compile-in-line p” means that whenever code 347 that makes p true is being compiled, add extra code to make the assertion true. procedure The compile-time rule code is: for adding this in-line a=‘Sutisfy (f(z)=y)’ A inverse (f)=g A one-to-one (f) + u=‘!htisfy(f(z)=y A g(y)=z)’ which, for example, transforms Sutisfy(Ihs(m)=n) into Satisfy(lhs(m)=n A lhs-of(n)=m) where inverse (lhs)= lhs-of. Th e f orms in single bold quotes act as patterns that on the left-hand side match expressions and on the right-hand side cause new expressions to be constructed. Satisfy(p) means change the state to make p be true. It is used as an intermediate form in compil- ing rules, that is later transformed into code to make the desired change of state. The constraint may also be used to optimize a test of f(r)=y A f-‘(y)=2 t 0 a test of just f(z)=y. It may also be used to replace f (z)=y by f -‘(y)=z, which is useful, for example, when another rule is looking for z as a function of y. 3.2 Property Inheritance This section shows how an implementation scheme can be described by stating a single invariant and the ways that it is to be maintained and used. BC derives code for each of the procedures that maintain or use the invariant from the single specification of the invariant, so all the procedures are consistent. The type of property inheritance in this example is all members of a set having the same value for a property. 
For example, if all elephants are the color grey, and Clyde is an elephant, then we can deduce that Clyde is the color grey. Using VLogic these statements are: if z E elephants =+ color (z)=grey and Clyde E elephants then the database system should deduce that color (CIyde)=grey when asked for color(Clyde). A scheme for doing this is for each property that has this inheritance behavior (e.g. color), to introduce a cor- responding property that applies to the set as a whole (e.g. color-of-all) and connect these two properties by the property all-prop (so all-prop ( coloT )= color-of-all). This scheme can be described by the invariant: (z E s =+ p(z)=p-of-all(S)) z all-prop(p)=p-of-all In the following, I refer to this as the “scheme invariant.” We want to use this invariant to compute p(z) when applicable. For example, the value of color (Clyde) is color-of-ull(elephants) because al l-prop (cola? )= color- of-all. To maintain the invariant we need to update all-prop and the instances of p-of-all. For example, when x E S=+ color(x)=color-of-all(S) is as- serted, we need to make all-prop(colo7)=color-of-all, and later, when z E elephants =+ color(z)=grey is asserted, we need to make color-of-ull(elephants)=grey. These uses of the assertion are expressed by saying that it is used to compute p and used to maintain all-prop and p-of-all. The complete specification of this in BC is: (x E s =$ p(x)=p-of-all(S)) - all-prop(p)=p-of-all computes p triggered-by z E S * p(z)=~, x E s =N p(x)=p-of-all(S) Before looking at how each of the three procedures is derived from this specification, we mention an alter- native, similar scheme to emphasize that this constraint could be used in different ways: instead of computing p when needed it could be maintained. In this case, when Clyde E elephants is stored then color (Clyde)=grey is also stored. 3.2.1 Computing an Inherited Property The first case is deriving a partial procedure for com- puting p(z) f rom the scheme invariant, for example com- puting color (Clyde) as color-of-all (elephants). First LAC converts the scheme invariant to the form r + p(z)=d by treating the equivalence as a right-to-left implication and merging the nested implications into a single implication with a conjunction as antecedent: (x E s =+ p(x)=p-of-all(S)) - all-prop(p)=p-of-all becomes all-pTop (p)=p-of-all A z E S =+ p(x)=p-of-all(S). From this, LAC produces the partial function: function p(x) all-prop (p)=p-of-all A x E S + value (p-of-all(S)) where value (x) means value of the function. that z should be returned as the 3.2.2 Maintaining Inheritance Links The second procedure is necessary to ensure that all-prop is stored whenever a relevant univer- sal statement is made. For example, when x E S + color(z)=color-of-all(S) is asserted, it makes all-prop (color)= color-of-all. This involves using the equivalence of the scheme in- variant as a left-to-right implication, and using the left- hand side as a triggering condition for the procedure. The resulting demon is stated: trigger 2 E S 3 p(x)=p-of-all(S) true + all-prop (p)=p-of-all. 3.2.3 Maintaining Inheritable Properties The third procedure is necessary to store p-of-all when suitable universal statements are made. For example, when x E elephants + color( x)=grey is as- serted, it adds color-of-ull(elephants)=grey (assuming that all-prop (color )=color-of-all). 
LAC converts the scheme assertion into the form q =+ p-of-ull(S)=d b y introducing a new variable v whose value is equal to p(x) and p-of-all(S) in order to split the equality p(x)=p-of-all(S). This converts the scheme invariant: 2 E S =$ p(x)=p-of-all(S) E all-prop(p)=p-of-all into all-prop(p)=p-of-all A (2: E S =+ p(x)=v) * p-of-all(S)=v. Choosing the second conjunct as the trigger gives the following demon procedure: trigger x f S =+ p(x)=v all-prop (p)=p-of-all + p-of-all(S)=v. 3.3 Default Inheritance In many AI systems a variation of the above scheme is implemented in which a specific value of property for an individual may be given which conflicts with the value for the property given by the sets the individual is a member of. In other words, the property value stored on the set is a default value to be used only if a specific value for a particular individual is not known. We can express the default scheme in our logic using the DB predicate. The default inheritance scheme is basically the same as the direct scheme with an extra condition: (DB(p(x)=l) A xE S =k p(x)=p-of-most(S)) = most-prop(p)=p-of-most where I_ means undefined. In fact, typically a stronger condition is used so that if there are two sets with a most-prop value with one set a subset of the other, then the smaller set is used. This can be expressed by adding the further condition 13 Si [Si C S A x E 5’1 A p-of-most (Sl) # _L]. The procedures neces- sary to carry out this scheme are all derived similarly to the ones above. 348 $4 Related Work The specification language for BC is logic, which can be used to express knowledge. However, the main utility of BC with respect to knowledge representation, is the facility with which it allows knowledge representation schemes to be described and implemented. Knowledge representation schemes may be defined that have no relation to logic. However, the ability of BC to use logic encourages the specifier to relate knowledge repre- sentation schemes to logic. For example, the formulation of property inheritance given in section 3.2 is in terms of sets, quantification, and relations between properties. A similar scheme inheriting properties from prototypical elements is a little more difficult to express because the relation to logic is less direct. Hayes and Nilsson, amongst others, have argued that knowledge representation lan- guages should be analyzed using logic in order that they may be better understood and the different languages compared more easily [Hayes, 19791, [Nilsson, 19801. BC allows logic to be used as a tool for synthesis. Other systems for building knowledge-based systems are EMYCIN [van Melle, 19801, AGE [Nii and Aiello, 19793, LOOPS [Stefik et al., 19831 and MRS [Genesereth et al., 19831. These systems supply a set of facilities that are useful for building knowledge-based systems. BC t,akes a more programming-oriented view in that it al- lows useful facilities to be programmed easily. It may be useful for a system builder to draw on a library of knowledge representation features specified in BC, but these may be combined flexibly and modified as needed for the particular system and tightly integrated because of their specification in BC. MRS, like BC, aims to decouple the specification language of the user from the implemen- tation of the system. This goal is in contrast to knowledge representations such as semantic networks and frame systems where the specification language used is more closely linked to the actual implemented representations. 
MRS provides the user with a few implementation choices whereas BC provides tools for the user to speci.fy how to compile knowledge. References [Balzer, 19811 Robert Balzer “Transformational Implementation: An Example,” IEEE Transactions on Software Engineering, January, 1981, pp. 3-14. [Barstow, 19791 David Barstow. Knowledge-Based Program Construction. The Computer Science Library, Programming Language Series. Elsevier-North Holland Inc. New York. 1979. [Burstall and Darlington, 19771 Rod M. Burstall and John Darlington. “A Transformation System for Develo- ping Recursive Programs J ” in Journal of the ACM. Vol. 24 No. 1. January, 1977. pp. 44-67. [Genesereth et al., 19831 Michael Genesereth, Russell Greiner, and Dave Smith. “A Me tu-level Representation System, IJ Memo HPP-83-28, Computer Science Department, Stanford University, December 1980. [Green et al., 19811 Cordell Green, Jorge Phillips, Stephen Westfold, Tom Pressburger, Susan Angebranndt, Beverly Kedzierski, Bernard Mont-Reynaud, and Daniel Chapiro, “Towards a Knowledge-Based Programming System J ” Kestrel Institute Technical Report KES.U.81.1 March, 1981. [Green and Westfold, 19821 Cordell Green, Stephen Westfold. “Knowledge-Based Programming Self Applied,” in Machine Intelligence 10. Ellis Forward and Halsted Press (John Wiley). 1982. [Hayes, 19791 P. J. Hayes. “The Logic of Frames, ” in B. L. Webber and N. J. Nilsson (eds) Readings in Artificial Intelligence. Tioga Publishing Company, Palo Alto, Ca., 1979. [Nii and Aiello, 19791 1-I. Penny Nii and Nelleke Aiello. “AGE (Attempt to Generalize): A Knowledge- Based Program for Building Knowledge-Bused Programs J ” in Proceedings of the Sixth International Joint Conference on Artificial Intelligence. Tokyo, Japan, 1979, pp. 645-655. [Nilsson, 19801 Nils J. Nilsson, Principles of Artificial Intelligence. Tioga Publishing Company, Palo Alto, Ca., 1980. [Phillips, 1982) Jorge Phillips, Self-Described Programming Environments: -4n Application of a Theory of Design to Programming Systems. Ph.D Thesis, Electrical Engineering and Computer Science Departments, Stanford University, 1983. [Stefik et al., 19831 Mark J. Stefik, Daniel G. Bobrow, Sanjay Mittal and Lynn Conway. “Knowledge Programming in Loops J ” in The AI Magazine Vol. 4 No. 3, 1983, pp. 3-13. [Teitelman and Masinter, 19811 Warren Teitelman and Larry Masinter, “The Interlisp Programming Environment, ” Computer, Vol. 14, 4, April 1981. [van Melle, 19801 William van Melle, A Domain- independent system that Aids in Constructing Knowledge-based Consultation Programs. Ph.D. Thesis, Computer Science Department, Stanford University, 1980. [Weinreb and Moon, 19811 Daniel Weinreb and David Moon. Lisp Machine Manual. Symbolics, Chatsworth, Ca., 1981. [Westfold, 19811 Stephen Westfold “Documentation for TINTEX,” Internal Report. Kestrel Institute. Palo Alto, Ca., 1981. [Westfold, 19841 Steph en Westfold, Logic Specifi- cations for Compiling. Ph.D. Thesis, Computer Science Department, Stanford University, 1984.
A MODEL OF LEXICAL ACCESS OF AMBIGUOUS WORDS Garrison W. Cottrell Dept. of Computer Science University of Rochester Rochester, N.Y. 14627 ABSTRACT Recent psycholinguistic work in the study of lexical access has supported a modular view of the process. That is, lexical access proceeds indepedently of the sentential context. Herein we describe a connectionist model of the process which retains modularity, explains apparent anomalies in the results, and makes empirically verifiable predictions. INTRODUCTION Within the domain of Artificial Intelligence there is considerable interest in parallel architectures for both machines and computational models (cf. Lesser dz Erman 1977, Fahlman, 1980; Hillis, 1981; Fahlman, Hinton & Sejnowski, 1983) in part because of their promise as avenues for solving the fundamental AI problem of search. One such computational paradigm which has met with considerable enthusiasm and skepticism is the connectionist approach developed by Feldman and Ballard (1982; Feldman 1982). The theory was developed to reflect the current understanding of the information processing capabilities of neurons, and consequently the type of processing it supports is of a spreading activation/mutual inhibition character. While it at an early stage, the paradigm has been successfully applied in models of visual recognition of noisy inputs (Sabbah, 1982), motor control (Addanki, 1983), limited inference in semantic networks (Shastri & Feldman, 1984) and word sense disambiguation (Cottrell & Small, 1983). We believe it can be a useful cognitive modelling tool as well. Our intention here is to demonstrate how a simple connectionist model of a low level process in sentence comprehension can be effective in explaining psychological results. We have two goals in building a cognitive model: To explain the existing data and to make empirically verifiable predictions. On the first point, in order to explain the data, it must be possible to form a clear correspondence between the elements of the model and the elements of the world that the theory attempts to explain. While many would argue, and have (Feigenbaum & Feldman 1963), that the neuronal level is the wrong place to start on such an enterprise, we claim that in order to explain the wealth of psycholinguistic data on low level language processing*, the correspondence must be at a level below the functional: that the mechanisms involved in carrying out these functions must be considered if we are ever to have real explanatory power. Functional level models are effective in demonstrating what functions must be carried out; mechanism level models are better at explaining data from processing tasks. Second, we share the goal of all cognitive modelers to make predictions. Without this, there is no way to falsify the claims of the model. The model we present of lexical access makes several predictions which may be falsified or substantiated by subsequent research. Within the domain of sentence processing various levels of analysis have been identified and generally agreed upon (eg. phonological, lexical, syntactic, semantic and pragmatic). However, the question of the characteristics of the interaction between these levels has been the focus of much study and debate (Forster, 1979; Marslen-Wilson & Tyler, 1980). The question is whether these systems can be regarded as independent modules or whether processing at one level can influence processing at another. The process of lexical access (defined below) presents a unique opportunity for modeling. 
The lexical level of processing has been intensively studied in recent years by psycholinguists who have focused precisely on the modularity question. What has emerged is a fairly well understood set of results which appears to resolve the question in favor of the modular view. Apart from the obvious consequences of the modularity issue for their field, researchers in AI should also be interested in this because obtaining the correct meaning of a word from the lexicon represents a search problem, due to large number of meanings of the most frequently used words. We discuss this research and then present a simple model of the lexical access process which explains apparent anomalies in the psycholinguistic results and makes empirically verifiable predictions. *By “low level” we do not mean to tmplq the functional level/mechanism level distinction. We simply mean early stages of processing, such as.phonological and lexical. From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. LEXICAL ACCESS The process of accessing all of the information about a word, phonological codes, orthographic codes, meaning and syntactic features is called lexical access. We will mainly be concerned here with the access of meaning and syntactic class, and will use the term “lexical access” to refer to this process. It is useful to distinguish three stages the processing of lexical items, of which access is the second: (1) decoding the input and matching it with a lexical item, (2) accessing the information about that item, and (3) integrating that information with the preceding context. These are termed prelexical, lexical and postlexical processing, respectively. An important research question is discovering whether, to what degree, and through what channels these levels interact. Does each level only receive the completed output of the previous level (the “modular” view), or can processing at one level affect processing at adjacent or even more distant levels (the “interactive” view), or is the answer somewhere between these extremes? Recent studies in lexical access have borne directly on the question of whether preceding context only has influence at the integration (postlexical) level or whether it can affect the lexical access level. A common tool in this research is to use ambiguous words and to study the effects of context on the processing of such words. The empirical question is whether the context of a sentence constrains the search in the lexicon for the contextually appropriate meaning of a word or not. The interactive view holds that context affects the lexical access level, so that only a single meaning is accessed (the Prior Decision Hypothesis). The modular view holds that all meanings of the word are initially accessed, since the lexical access mechanism can’t “know” what the context requires, and all meanings are then passed to the integration level, where context selects the proper one (the Post Decision Hypothesis). Early research produced mixed results, some studies supporting one hypothesis, some the other (Conrad, 1974; Foss and Jenkins, 1973; Holmes, 1977; Lackner and Garret, 1972: Swinney and Hakes, 1976). However, most of these studies only looked at one time point of the process, which as later results show, explains the discrepancy. 
Recent work by Swinney (1979) and others (Tanenhaus, Leiman, and Seidenberg, 1979; Seidenberg, Tanenhaus, Leiman, and Bienkowski, 1982) using the semantic priming measure has shown that the time course of lexical access is important. (We will only discuss the latter experiments here, referred to hereafter as STLB. This study is the most comprehensive to date.) They us the semantic priming effect to measure the activation of a meaning of a word. People are faster and more accurate on various word tasks if the word is preceded by an associatively or semantically related word (see Meyer & Schvaneveldt, 1971). Semantic priming has also been shown to work cross-modally, i.e., a spoken word can 62 prime a visually presented word (Swinney, et. al., 1979). In the STLB study, the subject listened to a sentence containing an ambiguous priming word while being required to say a word (the larger) flashed on a screen. By using targets semantically related to one meaning of the priming word (with appropriate controls) they were able to test the activation of the different meanings of the ambiguous word at different time points in the sentence. When the target is immediately following the prime, STLB found priming from both meanings, (with one exception, discussed below) but when the target is 200 milliseconds later, only priming from the appropriate meaning for the sentence is found. This is interpreted as evidence that people initially access all meanings of a word followed by rapid decision (for sentence final words). In addition finding a narrow decision window, STLB discussed two types of context which differ in their effects on lexical access. They contrasted pragmatic context, resulting from world knowledge with semantic (or priming) context, resulting from associative and semantic relationships between word meanings, as in the following sentences. (1) The man walked on the deck. (2) The man inspected the ship’s deck. (3) The man walked on the ship’s deck. The first sentence contains a pragmatic bias towards the “ship” related meaning of deck: one is more likely to walk on that kind. The second sentence contains a word highly semantically related to one meaning (ship -> deck). The third contains both types of information. They did experiments which contained a completely neutral context, a pragmatic context, or a semantic context. The results were that multiple access was obtained for neutral and pragmatic context, but selective access (only one reading active at the end of the word) for the semantic context. This result held for noun-noun ambiguities, but not noun-verb ambiguities, where multiple access occurred in all conditions (including syntactic context). These results are summarized in Table 1. Table 1. Summary of Results of STLB’s Experiments Context Type Ambiguity Type Outcome Neutral Syntactic Pragmatic Priming Priming Noun-Noun Multiple Access Noun-Verb Multiple Access Noun-Noun Multiple Access Noun- Verb Multiple Access Noun-Noun Selective Access The apparent anomaly lies in the selective access result in this one condition. STLB attribute the result to intralexical priming by the strong associate preceding the ambiguous word (and the organization of the lexicon: see below). It should be noted that the only meaning of “intralexical” in this context that makes sense is actually “intrasemantic”: A single meaning of the word, and not the lexical representation of the word itself, is primed. 
So, they assume the appropriate meaning of the ambiguous word is primed by the associated word’s meaning and blocks or inhibits the alternate reading. STLB conclude from this that the results support a modular, autonomous account of the lexical access process. The only contextual effect, selective access of noun-noun ambiguities, was due to intralexical priming, which is local to the lexicon in their view. Second, the results indicate that there are at least two classes of context which interact with word recognition in different ways. Third, the difference in the results for noun-noun and noun-verb ambiguities suggest that syntactic information is encoded in the mental lexicon. This point is obvious, but the question is how syntactic information is encoded. It is possible that a word’s syntactic class is encoded with the lexical representation or with the meaning representation. The distinction will become clear when we see their model, which chooses the former, and ours, which chooses an intermediary position. Finally, the results suggest that studies which illuminate the time course of comprehension processes are essential to decoding the structure of the processor(s). STLB’s MODEL STLB’s model is a combination of Morton’s (1969) logogen model and Collins and Loftus’ (1975) spreading activation model. A lexical logogen governs recognition, and is connected to semantic memory where it activates its meaning(s) via spreading activation. The meaning nodes are accessed along pathways from the lexical nodes in the order of relative activation levels. The meaning nodes may be primed by the access of words highly related to one meaning, which is the only exception to the automaticity and autonomy of lexical access. They posit that if there are large differences in activation due to frequency or priming, then selective access obtains. In order to account for the difference in noun-noun vs. noun-verb results for semantic context, they posit that nouns and verbs have different nodes with identical recognition procedures in the lexical network (See Figure 1). Now, the story goes, for noun-verb ambiguities with one meaning primed, both nodes get recognized because they share all the same features, and both meanings are accessed. In the noun-noun case, if one meaning is primed, that pathway is followed first. Note that this explanation implies serial evaluation of the possibilities in the noun-noun priming case. Figure 1: STLB’s model of lexical access. A CONNECTIONIST MODEL OF LEXICAL ACCESS First, a short introduction to connectionism. Connectionist models consist of simple processing units connected by links, The units have a small set of slates (not used in the following model), a bounded potential (we use the range 0 to l), an output, which for our purposes is just a thresholded potential*, a vector of inputs, and functions for computing a new state, potential and output from the old ones and the inputs. There are no constraints on the functions that can be used, though they should be kept simple. (It is an important research topic at the moment to discover what constraints on the functions can be reasonably assumed without losing computational ability.) The basic idea is that a unit stands for a value (the infamous “grandmother cell”), and collects inputs from other units which represent evidence for that value, positive or negative. The links between the units are weighted at the input sites, reflecting the importance to the receiving unit of the evidence from that link. 
Thus much of the information is contained in the connections between units (hence the name “connectionism”). A unit’s output represents its confidence in the hypothesis that its value is represented by its input. Thus the typical way to go about building connectionist models is to first decide on what the elements of the domain are that we want to model, choose a way to encode those as units, and then to wire the units together in such a way as to encode constrainls between the elements, Finally, we must choose an appropriate function for combining the evidence. Our model for the lexical access process is shown in Figure 2. We show the network for the word “deck”, since it is at least four ways ambiguous, with two noun meanings and two verb meanings. The network for a noun-noun ambiguous word would just consist of the left half of this network, (right half for verb-verb), and a noun-verb ambiguous word would just have the outer “V” of seven nodes. The lowest node represents the lexical item and is assumed to be activated by a phoneme or letter recognition network (such as the one described in McClelland & Rumelhart, 1981). The top row of nodes represent the various meanings of the lexical item and are assumed to be connected into a sentence *Official versions of the theory require that there be only integer outputs from 0 to 10. in order to model the small number of bits that can be encoded in neuronal firing frequency. We are not purists m this respect. processing and/or an active semantic network. The lexical node activates its meaning nodes through a discrimination network, starting with the grossest distinctions possible, then progressively finer ones. Note that the most efficient way to do this is to make two-way splits between large classes of alternatives (divide and conquer), if possible (but we don’t assume all splits are two-way), since the inhibitory connections are minimized this way*. We assume that syntactic information is more discriminatory than semantic information, i.e., that the distinction into “noun” and “verb” divide the possibilities up more than divisions based on meaning. The alternatives at any discrimination inhibit one another, so that one path through the network eventually “wins” and the meaning nodes that the other paths support fade away. This is the decision process. We assume that this process is driven by feedback to the meaning nodes from higher levels in the network. In the case of a biasing sentence, this would be from higher level nodes representing the role that meaning could play in the sentence, as in, for example, the Cottrell & Small (1983) model of sentence processing. (We also assume there is not a direct link to such role nodes.) In the case of semantic priming, we assume the meaning node is directly primed by a node representing the relation of the priming meaning to this meaning, as in the Collins & Loftus (1975) model of semantic priming. The unfortunate meaning node that does not get top down feedback (or does not get as much) will not be able to provide as much feedback to the pathway nodes which activated it, and its pathway will be inhibited by the pathway nodes that do get more feedback. In order to account for the modular nature of lexical access, we had to make two simple assumptions about the units. We assume that the units are thresholded (i.e., they can collect activation but they will not fire until they cross threshold, as in Morton’s (1969) “logogen” model) and that top down links have lower weights than bottom up links. 
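A minimal sketch of the kind of unit just described may help fix these two assumptions. The Python below is illustrative only, not the ISCON simulator; the update rule is a simplification (in particular, the inhibitory site is a plain weighted sum rather than the parameterized arctangent used in the actual runs), and the specific numbers are merely plausible settings.

```python
class Unit:
    """A toy connectionist unit: bottom-up, top-down, and inhibitory input sites,
    a potential bounded in [0, 1], and a thresholded output."""
    def __init__(self, name, threshold=0.3, rate=0.5):
        self.name, self.threshold, self.rate = name, threshold, rate
        self.potential = 0.0
        self.output = 0.0
        self.bottom_up = []   # (source_unit, weight) pairs, weight about 1.0
        self.top_down = []    # (source_unit, weight) pairs, lower weight, e.g. 0.5
        self.inhibitory = []  # (source_unit, weight) pairs, negative weights

    def _site(self, links):
        # Each excitatory site takes the maximum of its inputs.
        return max((u.output * w for u, w in links), default=0.0)

    def step(self):
        bu, td = self._site(self.bottom_up), self._site(self.top_down)
        inh = sum(u.output * w for u, w in self.inhibitory)   # simplified inhibition
        net = (bu + td + inh) / 3.0                           # average the three sites
        self.potential += self.rate * (net - self.potential)
        self.potential = min(1.0, max(0.0, self.potential))
        self.output = self.potential if self.potential >= self.threshold else 0.0

# Thresholded output plus weaker top-down links: priming alone never fires the
# meaning node, but it lets bottom-up evidence push the node over threshold sooner.
prime, word, meaning = Unit("m1"), Unit("deck"), Unit("SHIP-FLOOR")
meaning.top_down.append((prime, 0.5))
meaning.bottom_up.append((word, 1.0))
prime.output = 1.0                 # context prime active, no lexical input yet
for _ in range(20):
    meaning.step()
print(meaning.output)              # 0.0: primed, but still below threshold
word.output = 1.0                  # now the lexical item fires
meaning.step()
print(meaning.output > 0.0)        # True: bottom-up evidence crosses the barrier
```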
A unit may thus be activated above threshold by bottom up evidence, but not by top down evidence. This combination of threshold and weighting acts as a barrier to top down information affecting lower level processes by itself, such as recognition. It may come into play, however, after recognition of the lexical item has begun, in the decision process. This assumption is independently motivated at all levels of our networks by the need to prevent top down activation from hallucinating inputs. An interesting feature of this network is that the meanings themselves are not mutually inhibitory. When one considers constraints between units, there is no functional reason to assume that a particular meaning in isolation from its source (a particular lexical item) is not compatible with another meaning. However, it is reasonable to assume that the assignments of different meanings to the same use of a word are inconsistent. Indeed, if the meanings themselves were mutually inhibitory, we would expect that a word with the same meaning as an inappropriate reading of a previous word in the sentence (assuming the meaning node is shared) would be harder to process than a control word. For example, this would imply that it should be hard to understand "I had a ball at the formal dance". Our model would predict, however, that people would be slower at processing sentences such as "I had a ball at the ball".

AN EXAMPLE RUN

We present the result of running the model using the ISCON simulator (Small et al., 1982) in Figure 3. It will be helpful to refer to Figure 2 to understand the trace. We include a driver node, m1 (not shown), that provides constant feedback to SHIP-FLOOR throughout the simulation. (In a complete model this would be a node representing one of the types of SHIP-FLOOR. For example, m1 could be PART-OF-SHIP, activated by the context prime "ship's".) The units average their input from three sites, bottom up, top down, and inhibitory. The first two sites take the maximum of their inputs, and the inhibitory site uses a parameterized arctangent function to enhance the difference in inhibition between two units that are close to each other in activation level. This helps avoid the problem of two units getting into equilibrium without one suppressing the other below threshold. Bottom up weights are 1.0, top down are 0.5, and inhibitory weights are -0.5. The threshold is set at 0.3. The potential function is similar to the one used by McClelland & Rumelhart (1981). At step 5, SHIP-FLOOR has been primed by the context prime m1. Now we activate "deck", and continue feeding it for 30 steps. We skip along to step 13, where the semantic discrimination nodes (the "as Xmeaning" nodes) have just fired (not visible at Figure 3's resolution), but their activation has not spread to the meaning nodes yet. Notice that SHIP-FLOOR has been primed now to near threshold. Thus the bottom up activation from "as Nmeaning1" causes it to fire in step 14, while the other meaning nodes have to accumulate more activation for several steps before they will fire. This gives SHIP-FLOOR a chance to increase the relative activation of nodes that are on its feedback path, before the other meaning nodes fire. This allows the nodes on that path to begin to win over their competition so that by step 24, "as Nmeaning2" has been suppressed. This results in CARD-DECK fading from lack of support.
Also, "as Nmeaning1" is no longer inhibited by "as Nmeaning2", so it rises, giving more support to "asNOUN", which then suppresses "asVERB". Later, KNOCK-DOWN and DECORATE fade due to lack of support from "asVERB".

[Figure 2: Our model of lexical access, with nodes deck, asNOUN, asVERB, Nmeaning1, Nmeaning2, Vmeaning1, Vmeaning2, SHIP-FLOOR, CARD-DECK, DECORATE, KNOCK-DOWN.]

[Figure 3: Trace of the simulation of the network in Figure 2 (X means firing).]

DISCUSSION AND CONCLUSION

This model makes several claims about lexical access. First, decisions within a syntactic class happen "nearer" the meaning nodes than decisions between classes, so the incorrect meaning nodes fade faster when within the same class as their competitors than when their competitors are in different classes. Thus noun-noun decisions are faster than noun-verb decisions, as was seen in the sample run. Thus it predicts that verb-verb ambiguities, which have not been tested (to our knowledge) in the psycholinguistic literature, will act like noun-noun ambiguities. However, the STLB study used homonyms (words with unrelated meanings), and verbs tend to polysemy (related meanings). Because this may affect the results, we restrict our claim to verb-verb homonyms.

In order to explain different context effects we have to mention some claims about context. We saw how in our model feedback does not flow freely downward from the priming node (ml) through the meaning node (SHIP-FLOOR) because it is blocked by SHIP-FLOOR's threshold. However, when activation comes up from "deck" through the other nodes, the barrier is broken, and feedback flows down. If we assume that higher levels of processing act the same way, then in the case of pragmatic context, no feedback to meaning nodes would occur before the meaning node actually fired, because it is too far away in the network. By this time, multiple access has occurred, and a target word to be named (say, "spade") can take advantage of the priming from all of "deck"'s meanings.

The case illustrated in the sample run was one of priming context with a noun-noun ambiguity (ship's -> deck). Here, the contextual priming word is so closely related to one of the ambiguous word's meanings that they are not far away in the semantic network, and direct priming of the meaning occurs (e.g., "ship's" -> SHIP-PART -> SHIP-FLOOR). A decision will be reached much more quickly than in the case of pragmatic context, where the feedback has to come from "farther away" (semantically) in the network. Therefore, the model claims that there will be faster decisions in strongly priming contexts.

Yet, contrary to STLB, multiple access did occur in our version of a semantic context. We rely on our prediction of the relative speed of ambiguity resolution in different contexts to resolve this. Naming presumably requires at least two stages, recognition and production. The word to be named is presented at the end of the contextually primed ambiguous word. If the decision for the ambiguous word is over before the recognition stage of naming completes, the naming process could not make any use of priming from the alternate meaning of the ambiguous word.* Thus we claim multiple access always occurs, and if the word to be named were presented slightly before the end of the ambiguous word, we would see multiple access.

* This claim can be relaxed if we assume our barrier (threshold) is "leaky"; that is, with enough top down activation, the meaning node might actually cross threshold before it got bottom up activation.
It would then be able to prime the semantic decision node below it to the point where the alternate meaning never gets active. This can be made to happen by using more priming from ml. Our model is therefore in the "chameleon" class with respect to this particular issue.

Finally, in the case of four-way ambiguous words such as "deck", we claim that we would see the pattern of results seen in our sample run: in a semantic context, the alternate meaning within the same class would be deactivated first, then the meanings in the other class.

In conclusion, we have designed and built a model of lexical access within the connectionist framework that accounts for the data and makes empirically verifiable claims. This model has several advantages over STLB's in that (1) we don't have to posit nodes with identical recognition procedures, (2) the decision process is motivated by the discrimination network and the difference between nouns and verbs "falls out" of that representation, and (3) it is a computational model. With respect to Artificial Intelligence, we have a parallel model which tackles the major problem of the decision process between the possibly many meanings of a word. An interesting goal now is to design the levels above this which drive the decision process. We think this is a strong case for continued research in the area of connectionist models.

ACKNOWLEDGEMENTS

I would like to thank Michael Tanenhaus and James Allen for helpful comments on this paper. Any errors that remain are mine.

REFERENCES

Addanki, Sanjaya. A connectionist approach to motor control. Ph.D. thesis, Computer Science Dept., U. of Rochester, 1983.
Collins, A. M., and Loftus, E. F. A spreading activation theory of semantic processing. Psychological Review, 1975, 82, 407-428.
Conrad, C. Context effects in sentence comprehension: A study of the subjective lexicon. Memory and Cognition, 1974, 2, 130-138.
Cottrell, G. W. and Small, S. A connectionist scheme for modelling word sense disambiguation. Cognition and Brain Theory, 1983, 6, 89-120.
Fahlman, S. A. The Hashnet Interconnection Scheme. Technical Report, Computer Science Department, Carnegie-Mellon University, June 1980.
Fahlman, S. A., G. E. Hinton, and T. J. Sejnowski. Massively parallel architectures for AI: NETL, Thistle, and Boltzmann machines. In Proceedings of the National Conference on Artificial Intelligence, Washington, D.C., August 22-26, 1983.
Feigenbaum, E. A. and Julian Feldman, eds. Computers and Thought. New York: McGraw-Hill, 1963.
Feldman, Jerome A. Dynamic connections in neural networks. Biological Cybernetics, 1982, 46, 27-39.
Feldman, Jerome A. and Dana Ballard. Connectionist models and their properties. Cognitive Science, 1982, 6, 205-254.
Forster, K. I. Levels of processing and the structure of the language processor. In W. E. Cooper & E. C. T. Walker (Eds.), Sentence Processing. Hillsdale, NJ: Erlbaum, 1979.
Foss, D. and Jenkins, C. Some effects of context on the comprehension of ambiguous sentences. Journal of Verbal Learning and Verbal Behavior, 1973, 12, 517-589.
Holmes, V. M. Prior context and the perception of lexically ambiguous sentences. Memory and Cognition, 1977, 5, 103-110.
Lackner and Garrett. Resolving ambiguity: Effects of biasing context in the unattended ear. Cognition, 1972, 1, 359-372.
Lesser, V. R., and Erman, L. D. A retrospective view of the Hearsay-II architecture. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977.
Marslen-Wilson, W. D. and Tyler, L. K. The temporal structure of spoken language understanding. Cognition, 1980, 8, 1-71.
McClelland, James L. and David E. Rumelhart. An interactive activation model of the effect of context in perception: Part I, An account of basic findings. Psychological Review, 1981, 88.
Meyer, D. E., and Schvaneveldt, R. W. Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 1971, 90.
Morton, J. Interaction of information in word recognition. Psychological Review, 1969, 76.
Sabbah, Daniel. A connectionist approach to visual recognition. TR 107 and Ph.D. thesis, Computer Science Dept., U. of Rochester, April 1982.
Seidenberg, M. S., Tanenhaus, M., Leiman, J., and Bienkowski, M. Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing. Cognitive Psychology.
Shastri, L. and Feldman, J. A. Semantic Networks and Neural Nets. T.R. 131, Dept. of Computer Science, University of Rochester, May 1984.
Small, S. L., Shastri, L., Brucks, M., Kaufman, S., Cottrell, G., and Addanki, S. ISCON: An Interactive Simulator For Connectionist Networks. Technical Report 109, Department of Computer Science, University of Rochester, Dec. 1982.
Swinney, David A. Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior, 1979, 18, 445-660.
Swinney, D. A., and Hakes, D. T. Effects of prior context upon lexical access during sentence comprehension. Journal of Verbal Learning and Verbal Behavior, 1976, 15, 681-689.
Swinney, David A., William Onifer, Penny Prather, and Max Hirshkowitz. Semantic facilitation across sensory modalities in the processing of individual words and sentences. Memory and Cognition, 1979, 7, 159-165.
Tanenhaus, M., Leiman, J., and Seidenberg, M. S. Evidence for multiple stages in the processing of ambiguous words in syntactic contexts. Journal of Verbal Learning and Verbal Behavior, 1979, 18, 427-440.
AUTOMATED COGNITIVE MODELING* Pat LangIcy Stcllan Ohlsson ‘I’hc Robotics lnstitutc Carncgic-Mellon University Pittsburgh, Pennsylvania 15213 USA Abstract In this paper WC dcscribc an approach to automating the construction of cognitive process models. WC make two psychological assumptions: that cognition can bc modclcd as a production system, and that cognitive behavior involves starch through some problem space. Within this framework, WC employ a problem reduclion approach to constructing cognitive mod&, in which one begins with a set of indcpcndcnt, overly gcncral condition-action rules, adds appropriate conditions to each of thcsc rules, and then rccombincs the more specific rules into a final model. Conditions arc dctcrmincd using a discrimination learning method. which rcquircs a set of po’;itivc and ncgativc instances for each rule. Thcsc instances are based on infcrrcd solution paths that lead to the same lnswcrs as those obscrvcd in a human subject. We have implcmcntcd ACM, a cognitive modeling sjstcm that incorporates thcsc methods and applied the system to error data from the domain of multi-column subtraction problems. 1, Int reduction The goal of cognitive simulation is to construct some process explanation of human behavior. Towards this end, researchers have developed a number of methods for collecting data (such as recording verbal protocols, observing cyc movements, and measuring reaction times), analyzing thcsc data (such as protocol analysis and linear regression) and describing cognitive processes (such as production systems and nco-Piagctian structures). Unfortunately, there are inherent reasons why the task of cognitive simulation is more difficult than other approaches to explaining behavior. Cognitive simulators must infer complex process descriptions from the observed behavior, and this task is quite different from searching for a simple set of equations or even a structural description. Given the complexity involved in formulating cognitive process models, it is natural to look to Artificial Intelligence for tools that might aid in this process. Along these lines, some researchers have constructed AI systems that generate process models to explain errorfir behavior in mathematics. For instance, Burton [l] has described DEBUGGY, a system that diagnoses a student’s behavior in the domain of multi-column subtraction problems, and creates a procedural network model of this behavior. In addition, Sleeman and Smith [2] have developed LMS, a sysrcm that diagnoses errortil algebra behavior, and formulates process models to explain that behavior. The task of constructing cognitive models makes contact with two other areas of current interest within Artificial Intelligence. The first of these is concerned with formulating mental models. This research has focused on process models of physical phenomena, and though this work faces problems similar to those cncountercd in cognitive modcling, WC will not pursue the connections here. The second area of *This rcsc;h was suppoltcd by Contract NOOOld-83-K-0074, NK 154-5(X. from the Office of Naval Rcscarch. contact is the rapidly growing field of machine learning, and it is the relation bctwccn cognitive simulation and machine. learning that we will discuss in the following pages. Let us begin by propdsing some constraints on the ropnitivc modcling task that will enable the application of machine learning methods in automating this process. 2. 
A Framework for Cognitive Modeling Bcforc a rcscarchcr can begin to construct a cognitive model of human bchacior. hc must dccidc on some rcprcscntation for mental proccsscs. Similarly, if WC cvcr hope to aulo/rjale the formulation of cognitive models. WC must sclcct some rcprcscntation and work within the resulting framework. ‘1‘0 constrain the task of cognitive modeling, WC will draw 011 the following hypothesis, first proposed by Newell [3]: a The /‘roduc/iutl S’J)s/em H~porhesis. All human cognitive behavior can bc modclcd as a production system. A production system is a program stated as a set of condition-action rules. Combined with a production system archifecfure, such programs can be used to simulate human behavior. WC will not argue here for the psychological validity of the production system approach, except to mention that it has been succcss~lly used in modeling behavior across a wide variety of domains. For our purposes, we are more intcrcstcd in another feature of production system programs: they provide a well-defined framework for learrzing procedural knowledge. WC will discuss this fcaturc in more detail later. Although the production system hypothesis considerably limits the class of models that must be considered by an automated system, additional constraints arc required. Based on years of experience in constructing such models, Newell [4] has proposed a second general principle of human behavior: l The Problem Space Hypothesis. All human cognition involves search through some problem space. This proposal carries with it an important implication. This is that if we plan to model behavior in some domain, we must dcfinc one or more problem spaces for that domain. Such .a definition will consist of a number of components: l A reprcsentafion for the initial states, goal states, and intermediate states in the space; l A set of operators for generating new states from existing states; l A set of rules that state the legal conditions under which operators may bc applied; we will refer to these move-suggesting rules as proposers. For any given task domain, there may be a number of possible spaces, and the cognitive modclcr must be willing to entertain each of these in his attempt to explain the observed behavior. However, by requiring that thcsc components be specified, the problem space approach tirther constrains the task of formulating cognitive process models, The problem space hypothesis also carries with it a second interesting implication: algorifhmic behavior should be viewed as “frozen” search through a problem space in which the proposers suggest only one move at each point in the search process. From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. In addition to being psychologically plausible, the combination of the problem spdcc hypothesis and a production system rcprcscntation has an additional advantage. In this framework. rclativcly indcpendcnt condition-action rules arc rcsponsiblc for suggesting which operators to apply. Assuming one’s set of operators includes those operators actually used by the subject being modclcd, then the task of cognitive modcling can bc reduced to the problem of: (1) determining which operators arc useful; and (2) dctcrmining the conditions under which each operator should bc applied. Since the operators arc independent of one another, one can divide the cognitive modcling task into a number of simpler problems, each concerning ol?e of the operators. 
We may formulate this as a basic approach to cognitive modcling: l The Problem Reduciion Approach to Cognitive Modeling. Taken together, the production system and problem space hypotheses allow one to replace search through the space of cognitive models with several independent scarchcs through much simpler rule spaces. To reiterate, the problem reduction approach lets one factor the cognitive modeling task into a number of manageable subproblems. Each of these subproblcms involves determining whether a given operator was used by the subject, and if so, determining the conditions under which it was used. Once each of these subtasks has been completed, the results are combined into a complete model of the subject’s behavior. This approach is closely related to recent work in the field of machine learning. A number of the researchers in this area - illchiding Anzai [5], Langley [6], and Ohlsson [7] - have applied the problem reduction approach to the task of learning starch heuristics. However, this work has focused on acquiring a correct search strategy for some domain of expertise. Our main contribution has been to realize that the same basic approach can also be applied to automating the construction of cognitive models, and to explore the details of this application. Now that we have laid out our basic framework for stating process models of cognition, let us turn to one method for implementing the approach. 3. The Automated Cognitive Modeler As we seen, our approach to cognitive modeling requires two basic inputs: the definition for a problem space (consisting of state descriptions, operators, and proposers) and some information about the behavior of the person to be modclcd. This information may take the form of problem behavior graphs, error data, or reaction time measurements. Given this information, the goal is to discover a set of additional conditions (beyond the original legal ones) for each of the proposers that will account for the observed behavior. Fortunately, some of the earliest work in machine learning focused on a closely related problem; this task goes by the name of “learning from examples”, and can be easily stated: l Lear/zing from Examples. Given a set of positive and negative instances for some rule or concept, dctcrminc the conditions under which that rule or concept should bc applied. A number of methods for learning from cxamplcs have been cxplorcd, and WC do not have the space to cvaluatc the advantages and disadvantages of them hcrc. However, all of the methods require a set of positive and ncgativc instances of the conccpt/rulc to bc lcarncd, so let us consider how such a set can bc gathered in the context of automated cognitive modcling (or learning search heuristics). Recall that WC have available a problem space within which the behavior to be modclcd is assumed to have occurred. Since the proposers are more gcncral than WC would like them to be, their unconstrained application .will lead to breadth-first search through the problem space. If the obscrvcd behavior actually occurred within this space, then one or more of the resulting paths will successfully “explain” this behavior. For example, if partial or complete problem behavior graphs are available, then one or more paths will have the observed sequence of operator applications. If only error data arc available, then OIK or more paths will lead to the observed response. Since we have been working primarily with error data, we shall focus on this latter cast in our discussion. 
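Because the proposers are initially too general, the inferred solution paths come from plain breadth-first search through the problem space. The following fragment is a minimal sketch of that path-finding step under our own illustrative interfaces (the propose callables, the answer test, and the optional dead-end test are assumptions, not ACM's actual representation): it applies every proposer at every state and keeps each path whose final state reproduces the answer the subject actually gave.

```python
from collections import deque

def find_solution_paths(initial_state, proposers, matches_observed_answer,
                        is_dead_end=None, max_depth=20):
    """proposers: list of (rule_name, propose) pairs, where propose(state)
    yields every state reachable by one legal application of that operator.
    matches_observed_answer(state): True when the state carries the answer
    the subject actually gave (possibly an erroneous one)."""
    solutions, frontier = [], deque([(initial_state, [])])
    while frontier:
        state, path = frontier.popleft()
        if matches_observed_answer(state):
            solutions.append(path)                 # one path "explaining" the behavior
            continue
        if len(path) >= max_depth or (is_dead_end and is_dead_end(state)):
            continue                               # e.g. a partial answer already disagrees
        for rule_name, propose in proposers:
            for successor in propose(state):
                frontier.append((successor, path + [(rule_name, successor)]))
    return solutions
```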
Presumably, the subject has been obscrvcd working on a number of different problems, so that WC will obtain one or more “solution paths” for each of these problems. For now, let us assume that only one such path is found for each problem; WC will return to this assumption later. Given a solution path for some problem, one can employ a quite simple method for generating positive and negative instances for each of the rules used in starching the problem space. We may summarize this method as follows: l Learning from Solufion Paths. Given a solution path, label moves lying along the solution path as positive instances of the rules that proposed them, and label moves leading one step off the path as negative instances of the rules that proposed them. This method allows one to transform a solution path into the set of positive and negative instances required for learning from examples. Note that not all moves are classified as desirable or undesirable; those lying more than one step off the soiution path arc ignored, since these states should never have been reached in the first place. Sleeman, Langley, and Mitchell [8] have discussed the advantages and limitations of this approach in the context of learning scnrch heuristics. The most notable limitation is that one must bc able to exhaustively search the problem space, or be willing to chance the possibility of misclassifications, thus leading to effective “noise”. Fortunately, in many of the domains to which cognitive simulation has been applied, the problem spaces allow exhaustive search. . . . fJ.2/ solution Figure I. Search tree for the problem 93 - 25 = 68. Given a set of positive and negative instances for each of the proposers, one can employ some method for learning from examples to dctcrminc additional conditions for thcsc rules. ‘I’hc resulting set of more specific rules arc guaranteed to regencratc the inferred solution path for each problem. and thus constitute a second lcvcl explanation of the obscrvcd behavior. l’akcn together, thcsc rules constitute a cognitive process model stated as a production system. WC have implcmcntcd the Automated Cognitive Modclcr (ACM), an AI system that instantiates the approach outlined above. Given a set of positive and ncgativc instances for each proposer, the system constructs a discrimination network for each rule, using an approach similar to that dcscribcd by Quinlan [9]. Once a network has been found for a proposer, it is transformed into a set of conditions which are then added to the original rule. These additional conditions let the proposer match against positive instances, but not against negative ones, and in this sense explain the observed behavior. The details of this process are best understood in the context of an example, to which WC now turn. Table 1. Production system model for the correct subtraction strategy. find-diffcrcnce If you are processing columnl, and numberi is in colwnnl and rowl, and number2 is in columrzl and row2, [and row1 is above row2], [and number1 is greater than number21, then find the difference between numbed and number2, and write this difference as the result for columnl. decrement If you are processing columnl, and numbed is in columni and rowI, and number2 is in column1 and row2, and rowl is above row2, and column2 is left of columnl, and number3 is in column2 and rowl, [and number2 is greater than numberl], then decrement number3 by one. 
add-ten If you arc processing columnl, and number1 is in colum& and rowl, and number2 is in columni and row2, and robvl is above row2, [and number2 is greater than numbed], then add ten to numbed. shift-column If you arc processing cohmnl, and you have a result for columnl, and colurm2 is left of columnI, then process coIumn2. 4. Modeling Subtraction Errors Our initial tests of ACM have focused on modcling errors in the domain of multi-column subtraction problems. WC sclcctcd this domain as a tcstbcd bccausc substantial empirical analyses of subtraction errors wcrc available, and bccausc other efforts had been made to model subtraction behavior, to which WC could compare our approach. In particular, Vanlelm and his collcagucs have compiled descriptions of over 100 systcrnatic subtraction errors, and have used this analysis to construct DERUGGY, a system capable of diagnosing students’ subtraction strategies. Although our work relics heavily on this group’s analysis of subtraction errors, our approach to automating the process of cognitive modcling differs considerably from their 195 scheme. The most obvious difference is that DEBUGGY made significant use of a “bug library” containing errors that students were likely to make, while ACM constructs explanations of errorful behavior from the same components used to model correct behavior. As a result, ACM carries out no more search in modeling behavior involving multiple bugs than it does in modeling errors due to single bugs: we believe this is a very desirable feature of our approach to cognitive modeling. In order to model subtraction behavior, ACM must be provided with a problem space for subtraction. This may seem countcrintuitivc, since WC tend to think of subtraction strategies as algorithms, but recall that the problem space hypothesis implies that even algorithmic behavior can be dcscribcd in terms of “frozen” search. In addition, different students clearly use different subtraction procedures, so one may view this space as the result of generalizing across a set of quite distinct algorithms. In order to define a problem space, we must specify some reprcscntation for states, a set of operators for generating these states, and a set of proposers. WC will not go into the details of our representation here, and for the sake of clarity, we will focus on only the four most basic operators - finding a difference between two numbers in a column, adding ten to a number, dccrcmenting a number by one, and shifting attention from one column to another.* The initial rules for proposing thcsc operators can be cxtractcd from Table 1 by ignoring the conditions enclosed in brackets. We will see the origin of the bracketed conditions shortly. Although WC have applied ACM to modeling crrorful subtraction proccdurcs, the system can best be explained by examining its rcsponsc to correct subtraction behavior. As we have seen, the overly gcncral initial conditions on its proposers leads ACM to starch when it is given a set of subtraction problems. Figure 1 shows the system’s search on the borrowing problem 93 - 25, when the correct answer 68 is given by the student that ACM is attempting to model. Stales along the solution path arc shown as squares, while other states are rcprcscntcd by circles. Dead ends occur when the program generates a partial answer that dots not match the student’s result. The system is also given other problems and the student’s answers to those problems, and ACM also scarchcs on thcsc until it find acccptablc solution paths. 
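Before turning to how those paths are used, here is a minimal sketch (our own illustrative code, not ACM's) of the labeling step described earlier: a move lying on the inferred solution path becomes a positive instance of every rule that proposed it, a move one step off the path becomes a negative instance, and states further off the path contribute nothing.

```python
def label_instances(solution_path, proposed_moves_at):
    """solution_path: the states s0, s1, ..., sn along one inferred path.
    proposed_moves_at(state): list of (rule_name, successor_state) pairs, one
    for every move the overly general proposers suggest in that state."""
    positives, negatives = {}, {}
    for on_path_state, next_on_path in zip(solution_path, solution_path[1:]):
        for rule_name, successor in proposed_moves_at(on_path_state):
            bucket = positives if successor == next_on_path else negatives
            bucket.setdefault(rule_name, []).append((on_path_state, successor))
    # States more than one step off the path are never visited here, so they
    # contribute no instances at all, as in the paper.
    return positives, negatives
```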
After finding the solution paths for a set of problems, ACM uses the instances it has generated to formulate more conservative proposers that will let it regenerate those paths without search. Let us examine the search tree in Figure 1, and some of the good and bad instances that result. Since most of the interesting learning occurs with respect to the find-difference operator, we shall focus on it here. Upon examining the search tree, we find two good instances of finding a difference, 13 - 5 and 8 - 2 (which lie on the solution path), and six bad instances, two cases of 5 - 3, two cases of 3 - 5, and one case each of 5 - 13 and 2 - 8 (which lie one step off the solution path).

Given these instances and others based on different problems, ACM proceeds to construct a discrimination network that will let it distinguish the desirable cases of the find-difference rule from the undesirable ones. The system iterates through a list of tests, determining which tests are satisfied for each instance. For the subtraction domain, we provided ACM with ten potentially relevant tests, such as whether one number was greater than another, whether one row was above another, whether ten had been added to a number, and whether a number had already been decremented. For example, the negative instance 5 - 3 satisfies the greater test, since 5 is larger than 3, but fails the above test, since the 5's row is below the 3's row. Given this information, ACM determines which of its tests has the best ability to discriminate positive from negative instances. In determining the most discriminating test, ACM computes the number of positive instances matching a given test (M+), the number of negative instances failing to match that test (U-), the total number of positive instances (T+), and the total number of negative instances (T-). Using these quantities, ACM calculates the sum S = M+/T+ + U-/T-, and computes E = maximum(S, 2 - S). The test with the highest value for E is selected (a minimal sketch of this computation is given below).

In a 20 problem run involving the correct subtraction strategy, the greater test achieved the highest score on the function E, although the above test scored nearly as well. As a result, ACM used the former test in the top branch of its discrimination tree. Since all of the positive instances and some of the negative instances satisfied the greater test, the system looked for another condition to further distinguish between the two groups. Again the most discriminating test was found, with the above relation emerging as the best. Since these two tests completely distinguished between the positive and negative instances, ACM halted its discrimination process for the find-difference rule, and moved on to the next proposer. Once it has generated a discrimination network for each of its proposers, ACM translates these networks into condition-action rules. To do this for a given network, it first eliminates all branches leading to terminal nodes containing negative instances.

* Actually, these operators are not even capable of correctly solving all subtraction problems (additional operators are required for borrowing from zero, as in the problem 401 - 283), and they are certainly not capable of modeling all buggy subtraction strategies. However, limiting attention to this set will considerably simplify the examples, so we ask the reader to take on faith the system's ability to handle additional operators.
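The test-selection step just described is simple enough to state directly. The sketch below is ours; the encoding of an instance as a (number1, number2, number1-row-is-above) triple is an illustrative assumption. It computes S = M+/T+ + U-/T- and E = max(S, 2 - S) for each candidate test over the find-difference instances from the 93 - 25 = 68 search tree and returns the winner.

```python
def most_discriminating_test(tests, positives, negatives):
    """tests: dict mapping a test name to a predicate over an instance."""
    def score(predicate):
        m_plus = sum(1 for inst in positives if predicate(inst))        # positives matched
        u_minus = sum(1 for inst in negatives if not predicate(inst))   # negatives excluded
        s = m_plus / len(positives) + u_minus / len(negatives)
        return max(s, 2 - s)    # a test that discriminates "in reverse" is equally useful
    return max(tests, key=lambda name: score(tests[name]))

# Instances encoded (illustratively) as (number1, number2, number1_row_is_above):
positives = [(13, 5, True), (8, 2, True)]
negatives = [(5, 3, False), (5, 3, False), (3, 5, True), (3, 5, True),
             (5, 13, False), (2, 8, False)]
tests = {"greater": lambda i: i[0] > i[1],   # number1 greater than number2
         "above":   lambda i: i[2]}          # number1's row above number2's
print(most_discriminating_test(tests, positives, negatives))
# -> 'greater' (on this tiny sample it ties with 'above'; the 20-problem run breaks the tie)
```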
For each of the remaining terminal nodes, ACM constructs a different variant of the proposer by adding each test as an additional condition. Thus, if more than one terminal node contains positive instances, the system will produce a disjunctive set of condition-action rules to represent the different situations in which an operator should be applied. Once it has generated the variants for each proposer, ACM combines them into a single production system model. This program will regenerate the student's inferred solution paths without search, and can thus be viewed as a cognitive simulation of his subtraction strategy. Table 1 presents the rules that are generated when correct subtraction behavior is observed; the conditions enclosed in brackets are those added during the discrimination process.

Now that we have considered ACM's discovery methods applied to modeling the correct subtraction algorithm, let us examine the same methods when used to model a buggy strategy. Many subtraction bugs involve some form of failing to borrow. In one common version, students subtract the smaller of two digits from the larger, regardless of which is above the other. In modeling this errorful algorithm, ACM begins with the same proposers as before (i.e., the rules shown in Table 1, minus the bracketed conditions). If we present the same subtraction problems as in the previous example, we find that the buggy student produces the incorrect answer 93 - 25 = 72, along with similar errors for other borrowing problems. As a result, the solution path for the borrowing problem shown in Figure 2 differs from that for the same problem when done correctly, shown in Figure 1. In contrast, the student generates the correct answers for non-borrowing problems, such as 54 - 23 = 31. As before, ACM's task is to discover a set of variants on the original proposers that will predict these answers.

[Figure 2. Search tree for the problem 93 - 25 = 72.]

Table 2. Model for the "smaller from larger" subtraction bug.

find-difference
If you are processing column1,
and number1 is in column1 and row1,
and number2 is in column1 and row2,
and number1 is greater than number2,
then find the difference between number1 and number2,
and write this difference as the result for column1.

shift-column
If you are processing column1,
and you have a result for column1,
and column2 is left of column1,
then process column2.

In the correct subtraction strategy, the decrement and add-ten operators are used in problems that require borrowing. However, the solution path for the borrowing problem shown in Figure 2 includes only the find-difference and shift-column operators. Apparently, the student is treating borrowing problems as if they were non-borrowing problems, and the student model ACM develops should reflect this relationship. As before, the system uses the solution paths it has inferred to produce positive and negative instances. As in the previous run, only positive instances of the shift-column operator were found, indicating that its conditions need not be altered. And since both positive and negative instances of the find-difference rule were noted, ACM called on its discrimination process to determine additional conditions for when to apply this operator. The major difference from the earlier run was that only negative instances of the add-ten and decrement operators are found. This informed ACM that these rules should not be included in the final model, since apparently the student never used these operators.

For this idealized student, ACM found the greater test to have the best discriminating power. However, the above test, which was so useful in modeling the correct strategy, does not appear in the final model. In fact, the greater test completely discriminated between the positive and negative instances, leading ACM to a very simple variant
For this idcali;rcd student, ACM found the grcatcr test to hale the best discriminating power. Howcvcr, the ahovc test, which was so useful in modcling the correct strategy. does not appear in the final model. In fact, the grcatcr test complctcly discriminated bctwccn the positive and ncgativc instances, leading ACM to a very simple variant Figure 2. Search tree for the problem 93 - 25 = 72. 196 of find-diffcrcncc rule. This was because the idcalilcd student was always subtracting the smaller number from the larger, rcgardlcss of the position, and this is exactly what the resulting student model does as well. Table 2 prcscnts the variant rules that ACM gcncratcd for this buggy strategy. This model is very similar to that for the correct strategy, cxccpt for the missing condition in the find-differcncc rule, and the notable absence of the rules for dccrcmenting and adding ten, since these are not needed. ACM has been implcmcntcd on a Vax 750, and succcssfi~lly run on a number of the more common subtraction bugs. Table 3 presents clcvcn common bugs reported by VanLchn [lo], along with their observed frequencies. ACM has success~lly modclcd each of these bugs, given idcalizcd behavior on a set of 20 representative test problems. A number of these bugs involve borrowing from zero, and so rcquircd some additional operators beyond those dcscribcd in the earlier examples. ‘l’hcsc operators shift the focus of attention to the left or to the right, in search of an appropriate column from which to borrow. Introducing these operators considerably expanded the search tree for each problem, though ACM was still capable of finding a solution path using cxhaustivc search. Table 3. Subtraction bugs succcss~lly modclcd by ACM. BUG EXAMPLE FREQUENCY CORRECT STRATEGY 81 - 38 = 43 ShlALLER FROS4 LARGER 81 - 38 = 57 124 S I-01’S BORROW AT 0 404 - 187 = 227 67 BORROW ACROSS 0 904 - 237 = 577 51 0-N=N SO - 23 = 33 40 BORROW NO DECREMENT 62-44=28 22 BORROW ACROSS 0 OVER 0 802 - 304 = 408 19 0 - S = N EXCEPT AFI-ER BORROW 906 - 484 = 582 17 BORROW FROM 0 306 - 187 = 219 15 BORROW ONCE THEN SMALLER 7127 - 2389 = 5278 14 FRO,M LARGER BORROW ACROSS 0 OVER BLANK 402 - 6 = 306 13 O-N=0 50 - 23 = 30 12 5. Discussion In c\ alunting the prnblcm reduction approach to cognitive modcling and its implcmcntation in ACM, WC must cxaminc three characteristics of the approach - generality, potential difficulties, and practicality. On the first of thcsc dimensions, ACM fares very well. One can readily see the system being used to model behavior dcscrihcd in terms of a problem behavior graph; in fact, this task should be considerably easier than working only with error data, since the process of inferring solution paths will be much more constrained. The approach might ckcn bc adapted to reaction time data, though this would certainly be a more challenging application. However, there are some difficulties with our approach to automating the construction of cognitive models, relating to the three levels at which explanation occurs in the system. First, it is possible that a subject’s behavior can bc explained in terms of search through more than one problem space. We have avoided this issue in the current system by providing ACM with a single problem space. However, we have described elsewhere [II] our progress in extending the system to handle multiple spaces, and we plan to continue our work in this direction. Second, it is possible that more than one solution path can account for the observed behavior. 
The current version simply selects the shortest path, but more plausible heuristics are desirable. However, this problem is greatest when only error data are available; providing ACM with additional information about the order of operator application (a partial problem behavior graph) eliminates this ambiguity. Finally, for some sets of positive and ngative instances, two or more tests may appear to be equally discriminating. The current system selects one of these at random, but future versions should be able to generate diagnostically useful problems to resolve the conflict. In terms of practicality, the existing version of ACM does not operate quickly enough to be us&l1 in diagnosing student behavior in the classroom. For a set of 20 subtraction problems, the system takes some 2 CPU hours to gcncrate a complete cognitive model. However, most of the effort occurs during the search for solution paths, which can be as long as 20 steps for a five-column subtraction problem. There arc many domains which involve substantially smaller spaces, and for these ACM’s run times should be much more acceptable. In addition to continuing to test the system on subtraction, our future work will explore the ACM’s application to other domains, showing the approach’s generality and its practicality in automating the process of modcling cognitive behavior. References 1. Burton, R. R. Diagnosing bugs in a simple procedural skill. In Ir~telliga~r Tutoring S~SIEI~~S. I). Slccman and J. S. Isrown, Eds., Academic Press, London, 1982. 2. Slccman, II). H. and Smith, M. J. “Modcling students’ problem solving.” ArriJicinl It~rclligerzce 16 (1981), 171-187. 3. Ncwcll, A. and Simon, H. A.. Hujnnrl f’rubfenz Solving, Prcntice- Hall, Inc., Englewood Cliffs, N.J., 1972. 4. Ncwcll, A. Reasoning, problem solving, and decision processes: The problem space hypothesis. In Aftenfion and Pcrfimance, R. Nickcrson, Ed.,Lawrcncc Erlbaum Associates, Hillsdale, N. J., 1980. 5. Anzai, Y. L<carning strategies by computer. Proceedings of the Canadian Society for Computational Studies of lntclligcnce, 1978, pp. 181-190. 6. Langley, P. Learning effective search heuristics. Proceedings of the Eighth International Joint Confcrcnce on Artificial Intclligcnce, 1983, pp. 419-421. 7. Ohlsson, S. A constrained mechanism for procedural learning. Proceedings of the Eighth International Joint Confcrcncc on Artificial Intclligcnce, 1983, pp. 426-428. 8. Slceman, D., Langley, P., and Mitchell, T. Learning from solution paths: An approach to the credit assignment problem. AI Magazine, Spring, 1982, pp. 48-52. 9. Quinlan, R. Learning efficient classification procedures and their application to chess end games. In Machine Learning: An Arlificiul Infelligence Approach. R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Eds., Tioga Press, Palo Alto, CA, 1983. 10. VanLchn, K. “Bugs are not enough: Empirical studies of bugs, impasses, and repairs in procedural skills.” Journal of Mafhematical Behavior 3 (1982), 3-72. 11. Ohlsson, S. and Langley, P. Towards automatic discovery of simulation models. Proceedings of the European Conference on Artificial lntelligcncc, 1984.
EXPLAINING AND ARGUING WlTH EXAMPLES ’ Edwina L. Rissland Eduardo M. Valcarce Kevin D. Ashley Department of Computer and Information Science University of Massachusetts Amherst, MA 01003 AlShCt In this paper, we discuss two tasks - on-line help and legal argument - that involve use of examples. In the case of on-line HELP, we discuss how to make it more intelligent by embedding custom-tailored examples in the explanations it gives its user. In the case of legal argumentation, we discuss how hypotheticals serve a central role in analyzing the strengths and weakness of a tax and describe the generation of hypotheticals, stronger or weaker for one of the parties with respect to a doctrinal aspect, through modification of already existing cases or hypotheticals. 1. Introduction Explaining and arguing are two tasks which both often involve examplebased reasoning. In explaining, one tries to elucidate certain knowledge, educe and correct misconceptions, answer questions and otherwise satisfy the question asker. Argumentation involves all that and more, but in a much more adversarial context; the emphasis is on convincing another that one’s position is correct or showing that the other’s is not. Admittedly there are major differences in explaining and arguing - for instance, the goals of the explainer and arguer - but nonetheless, there are striking commonalities. In particular, both rely heavily on the use of ‘for instances” to accomplish their tasks. In this paper, we shall focus on this shared theme. Examples are critical to learning and to the structure of knowledge and memory [Dietterich & Michalski, 1983; Kolodner, 1983; Rissland, 19781. Recently, Schank has suggested that explaining is perhaps even more critical than “reminding” in the structure of dynamic memory [Schank, 19&4]. Examples play a central role in explaining since it is with examples that one fiids the limits of generalizations and explanations. Anomalies and counter-examples, in particular, help bound concepts and rules. In legal argument, one is constantly trying to test the limits of “rulalike” propositions and show why certain precedents should or should not control the decision of ‘This work supported in part by Grant ET-8212238 of the National Science Foundation. another case. In the law, it is cases, both “real” and hjrpothetical (i.e., cases which have not actually been litigated), which seme as examples. Hypotheticals serve many roles; they create, remake, refocus, and organize experience and are used to explore concepts and rules and to tease out hidden assumptions wand, l!W]. Thae observations apply to other domains as well, Iike mathematics and computer programming. In mathematics, where concepts and truth are more clearly defined than in the law, one is constantly engaged in the “dialectic of proofs and refutations” [LalEatos, 1976J? In programming, there is an “inevitable intertwining” [Swartout & Balzer, 19g2) of the examples with the evolution of programs and specifications. This intertwining of examples and experience with proposing, refiig and refuting can be seen “almost everywhere”; it is inherent to the basic life cycle of science [Kuhn, 19701. In this paper, we will discuss the use of examples in two types of explanation and argumentation: the fii is the case of on-line HELP and the second is legal argument with hypotheticals. 2. 
Explanation: On-Line HELP By on-line help, we mean command assistance and assistance about certain concepts and standard tasks and, although we have not included it in our work, error and prompting assistance as well. One important component of knowledge that is missing in most on-line (and off-line) explanation, especially help and manuals, concerns examples. Examples offer a concrete illustration of what is being explained and a memorable hook into more general information. They are especially important for the beginner. 2 CertainIy in mathematical research. One could argue that learning mathematics should also foI.Iow along this line. Regardless, exampIes are an important component of mathematical knowledge and are vital to teaching, learning and understanding bland, 19781. 288 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. Examples can provide easily understood and remembered usages. For instance, PRIM VITAMEY is clearly more perspicuous than “PRINT [[Cqf’I z tvrmne[.ext~[~[lCJ[/P]...~ (from PM 19831) A novice uses simple cases: to figure out how to instantiate the general syntactic description, to use as “recipes~ for standard tasks, as a basis for generalizing, and as a basis for a “retrieval+modification” wand, 19811 approach to generating other examples. For the expert, examples can serve as a reminder of syntax and things previously done, much like an icon. In most current help facilities, like that of VAXIVMS, the user asks for information about a particular command, like “HELP PRINT”, and is then presented with information on PRINT, including relevant parameter options, but almost never including examples of standard, potentially dangerous, or clever uses. The explanations often include system jargon, like “queue”, “world” or “filespec”, which the user ought to be able to ask about but usually can’t. Research on intelligent on-line explanation is still not very far advanced (see [Houghton, 19841) although some interesting starts have been made. For instance, Wilensky , in his UC system, allows the user to ask for assistance in natural language; his work has concentrated on request understanding wilensky, 1982a, 1982b]. Finin, in his WIZARD system, has focussed on the problem of recognizing when the user needs help, particularly, because he is using inefficient means to accomplish a task (e.g., using repeated DELETES instead of PURGE) and then volunteering advice [Shrager & Finin, 1982; Finin, X%3]. Our approach to improving on-line HELP is two-fold: (1) include more ingredients of expert knowledge like examples of various categories, heuristics, and pragmatic knowledge in the information provided to the user; (2) embed user-tailored examples in the explanations. In the rest of this discussion, we shall assume that the on-line help facility has already been invoked (by the user or the system) and that the facility has already ‘parsed* the use&s request (i.e., knows with what the user requires assistance). Our emphasis is on the generation of the response, in particular, from a model of the user’s expertise, contextual information, and exemplar knowledge. We don’t use any text generation, although using a language generation program like McDonald’s MUMBLE NcDonald, 1982] is an obvious thing to do. Clearly, this work should eventually be tied in with work on invocation and parsing like that of Wilensky or Finin. 
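As a rough illustration of the response-generation step assumed here, the sketch below is entirely our own (none of these names come from the actual system): it shows how a parsed request, a crude user model, a running context, and a base of stored examples might be combined into an explanation with an embedded, user-tailored example.

```python
def respond(topic, canned_text, user_model, context, examples_kb):
    # Beginners get easy "start-up" examples; experienced users get terser "references".
    category = "start-up" if user_model.get("expertise") == "novice" else "reference"
    candidates = [ex for ex in examples_kb
                  if ex["topic"] == topic and ex["category"] == category]
    example = candidates[0] if candidates else None
    context.append(topic)       # remembered, so later answers can "ripple" this context
    return {"text": canned_text, "example": example, "context": list(context)}

# respond("PRINT", "Queues one or more files for printing ...",
#         {"expertise": "novice"}, [],
#         [{"topic": "PRINT", "category": "start-up", "body": "print login.com"}])
```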
2.1 (More) IntelLlgerlt on-uue HELP In our HELP facility, we too offer explanations on commands like PRINT but we embed relevant, user-tailored examples in the explanation and allow the uSer to ask about system jargon as well as certain tasks. Further, we ‘ripple” contextual information in our explanations as well as use some (very) crude user modelling. For instance, if the user has just asked about PRINT and then asks what a “queuew is, the system would give as examples of queues those used for print jobs. We also allow the user to ask for assistance in a more task-oriented way, through keywords and phraset? like “clean-up” and “unlock”. (The first leads into assistance about PURGE and saving files; the second, about setting the protection - which in most help systems is only accessible by explicitly asking about a seemingly unrelated command like SET - and possibly related system jargon like “world”.) Task-oriented requests usually feed into explanations about specific commands, which are then focussed by the context defined by this type of access. 2.2 Embedding Cnstomized lhamplts We use taxonomic knowledge of examples’ to help select and order the presentation of examples. For instance, we provide the absolute neophyte user with ‘%ta.lt-up” examples and the more experienced user with ‘keferences”. Where a sequence of examples is called for, the taxonomic knowledge can be used to order the examples, for instance, with references presented before models which are presented before counter-examples and anomalies. Taxonomic knowledge can also enable the explanation facility to allow the user to ask specifically for examples in a certain class (e.g., “easy”, “anomalous”, %lever”). One way to customize examples is to modify them to reflect specifics of the user and his context, for instance, using information about the user% own directory in explanations about directory commands. We use techniques of retricval+nwdif ication and instantiation in our on-line HELP by linking the explanation program with an example generator, which uses an Examples-Knowledge-Base (ERR) of already existing examples together with procedures for modification and instantiation. The EKB consists of examples, represented as frames, harvested and organized by an expert. Procedurally attached to examples are instantiation and modification procedures like those to generate extreme variations or to personalize an example. The ability to generate examples on the fly allows the explanation facility to respond dynamically to the user, his tasks, goals, context, domain, etc. User-specific “constraints” on the examples are provided by user-modelling capabilities, for instance, keeping track of how many examples have already been presented in the current invocation of HELP, and roughly gauging the user’s expertise on the basis of certain system data like the block size of the directory, the number of subdirectories, the types of files owned, etc. 3 Already known to the system and available for perusal by the user, this list easily can be augmented on the basis of users’ needs. ’ bland, 1!978] described a taxonomy of examples: %tart-up” (easy, perspicuous cases); “reference” (standard, textbook cases); “model” (paradigmatic, template-like cases); %ountertxamples” (Iimiting, bad cases); “anomalous” (ill-understood, strange cases). 289 The point is to work examples into the explanation given the user, and better still to make the examples meaningful in the sense of addressing the particular needs and knowledge of the user. 
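To make the retrieval-plus-modification idea concrete, here is a small sketch of an EKB whose example frames carry a taxonomy class and attached modification procedures. The slot names, the two modifiers, and the VMS-style command text are illustrative assumptions rather than the system's actual representation.

```python
def personalize(frame, user):
    # substitute a file the user actually owns into the command template
    return dict(frame, body=frame["body"].replace("<file>", user.get("recent_file", "login.com")))

def make_extreme(frame, user):
    # an "extreme" or counter-example variation, e.g. deleting all versions of everything
    return dict(frame, body=frame["body"].replace("<file>", "*.*;*"), category="counter-example")

EKB = [
    {"topic": "PRINT",  "category": "start-up",  "body": "print <file>",
     "modifiers": [personalize]},
    {"topic": "DELETE", "category": "reference", "body": "delete <file>",
     "modifiers": [personalize, make_extreme]},
]

def generate_example(topic, category, user, shown_so_far):
    for frame in EKB:
        if frame["topic"] == topic and frame["category"] == category:
            for modify in frame["modifiers"]:
                candidate = modify(frame, user)
                if candidate["body"] not in shown_so_far:   # don't repeat an example
                    return candidate
    return None

# generate_example("PRINT", "start-up", {"recent_file": "report.tex"}, shown_so_far=set())
```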
23 TEXPLATER !kpnrating Control from Couteut In our HELP system, we separate the control of the help session from its content. We use a script-like control structure of text and examples organized in a template, a ‘TEXPLATE”, which is then used to generate HELP’s respon=. The explanations are assembled by retrieving the needed text and examples which are pointed to in the relevant texplate (texplates are indexed by commands, jargon and task keywords). A TEXPLATE is a set of related nodes, each of which points to chunks of text or examples, and contains the selection criteria for that node. To maintain consistency with other system documentation, wherever possible we use text used elsewhere, for instance, in the manual. Calls to examples are either requests for explicitly named examples in the EKB or constraints describing modiiications to be made to an example. For instance, example calls could be for: a named, known extreme case (e.g., DEL *.*.*); an example generated to fulfill prescribed constraints (like using the name of the user’s most recently edited file); using a previously used example perhaps with an embellishment to m*e it extreme, anomalous, or clever. A Texplate-Interpreter controls the flow through the texplates, including which user-options to present and what to do in response (e.g., MORE to go on with the explanation, EXAMPLE for an example, QUIT, etc.) as well as directing the generation of sequences of examples, should the user repeatedly ask to see an example. Each TEXPLATE defines a context within which the possible meanings of a particular request are narrowed. For instance, asking about the PRINT command will constrain certain specifications in subsequently selected examples such as those for related jargon (e.g., “queue”, “job”), commands (e.g., SHOW/QUEUE), and tasks like those that wzdo the command just explained, (e.g., deleting a job from a print queue). Thus, if the user then asked about the meaning of ‘queue”, the examples would be tuned to reflect the fact that the system just explained PRINT. Certain control options are always available for a texplate: these include asking for repeated examples and asking about commands that verifv or undo the command just explained. 2.4 Two Examples: IA-LADYBUG & VM!S In our on-going work, we are working within two systems. The first is IA-LADYBUG, a system designed specifically for novice programming students with the aim of introducing them to notions useful in the Pascal programming language (like subprocedures) by having them work with a graphics icon, the LADYBUG, which can be commanded by LOGOlike commands such as CRAWL, RIGHT-TURN, etc. [LRvine & Woolf, 19&t]. The second is a subset of VAYUVMS command language [DEC, 19781 dealing with directory and protection commands like PURGE, DELETE, and SET PROTECTION. Figure I. Example scenario of user interactions with HELP S help Enter a subject or command you need help with, or type a ? for a list of choices. Enter Q to quit. HELP >prhlt PRINT Queues one or more files for printing, either on a default system printer or on a specified device. For example: print !ogin.com will cause your file loginwm to be printed on the default system printer. HELP (Type M for More, E for ExampIe, Q for Quit, or enter another request)> queue A print queue is a list of files to be printed on a specific device. For example: print Iogincom /queue=narrow requests that file logincom be printed on the printer corresponding to the queue named “narrow”. 
The /QUEUE qualifier is synonymous with the /DEVICE qualifier. For instance, print login.com /device=narrow will do the same thing. HELP (Type M for More, E for exampIe, Q to quit, or enter another request)>E Another example: print IogincomIqueue =sys$Iaser wilI cause the file logincom to be printed on the laser printer HELP (M, E, Q,...)> m PRINT expects the file type to be “LIS” unkss otherwise specified. For example: print 1ogi.n will not print your file logincom and it wilI cause an error because you do not have a fiIe named loginlis. HELP@& E, Q,...)-ndo To remove a file from a print queue, use DELETE/ENTRY. For example: deletelentry=l234 narrow wilI remove the job with entry number 1234 from the queue named narrow. HELP&l, E, Q,...)>M To see what job number to -9 typt show queue/ah narrow Another way to stop a file being printed is to rename or delete it; this avoids having to know the job entry number. =LPIM, Es Q,...)>Q 290 For the IA-LADYBUG system, over which we have total control, the manual, an on-line tutorial, and on-line HELP deliberately share material, specifically examples and text. Often, the tutorial and HELP present dynamic examples only summarized in the manual, for instance, one involving the ladybug crawling off the screen, or examples too complicated or whose effect (like color) would be lost in the manual. The simpler “start-up” and “reference” examples presented in the manual are the fii examples presented in the tutorial and HELP. Both, but especially HELP, go on to present more complex or difficult examples, like counterexamples to show the limits of commands (e.g., RIGHT 362 exceeds the parameter range for degrees of turning). HELP also tunes its examples based on user-information like the user’s directory (e.g., in DIR examples) or procedures already completed (e.g., in SEQUENCE examples). Figure I gives an example scenario of user interactions with HELP in our second domain of application, VAYWMS command language. (What the uSer types is indicated in bold face.) A few things should be noted: (1) the first sentence explaining PRINT is that used in the existing system documentation which doesn’t contain examples; (2) the explanation given for uqueuen not only reflects what has just been explained (PRINT), but also offers some information on synonomous qualifiers; (3) HELP relates the “undo” explanation with what has gone before and also provides an alternative way of accomplishing the same task. (4) HELP provides pragmatic knowledge; (5) HELP provides counter-examples, i.e., instances of “bad” usage. 3. Argumentation: Dynamic Hypothetical8 In our second line of research on examples, we have built a program that will generate hypothetical cases (“hypes”). One area of our current work concerns cases involving protection of property interests in software under trade secret law. Using prior decided cases as examples and guides, the program will modify the hypos to make them stronger or weaker cases in favor of the plaintiff or defendant. Hypes and cases are contained in an EKB and are both represented using similar frames. The frames have three or four levels of subframes presenting increasingly detailed factual information. A trade secret case frequently involves two corporations, a plaintiff and defendant, who produce competing products. The plaintiff usually alleges that the defendant gained an unfair competitive advantage in developing and marketing its product by misappropriating trade secret information developed by the plaintiff. 
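Section 2.3's separation of control from content can be sketched as follows. The texplate contents below paraphrase the PRINT explanation of Figure I; the interpreter loop, the slot names, and the option handling are our own simplified assumptions, not the implemented Texplate-Interpreter.

```python
TEXPLATES = {
    "PRINT": {
        "text": ["Queues one or more files for printing, either on a default "
                 "system printer or on a specified device.",
                 "PRINT expects the file type to be LIS unless otherwise specified."],
        "examples": ["print login.com", "print login.com /queue=narrow"],
        "undo": "To remove a file from a print queue, use DELETE/ENTRY.",
    },
}

def interpret(topic, ask, emit):
    t = TEXPLATES.get(topic)
    if t is None:
        emit("Sorry, no help available on " + topic)
        return
    text_i, ex_i = 1, 0
    emit(t["text"][0])
    while True:
        choice = ask("HELP (M for More, E for Example, Q to Quit, or another request)> ")
        if choice.upper() == "Q":
            return
        elif choice.upper() == "M" and text_i < len(t["text"]):
            emit(t["text"][text_i])
            text_i += 1
        elif choice.upper() == "E" and ex_i < len(t["examples"]):
            emit("For example: " + t["examples"][ex_i])
            ex_i += 1
        elif choice.lower() == "undo":
            emit(t["undo"])
        else:
            interpret(choice, ask, emit)   # a new request, answered in the current context

# interpret("PRINT", ask=input, emit=print)
```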
There are at least three stereotypical scenarios by which the defendant gains access to the plaintiff’s trade secrets: (1) A former employee of the plaintiff with knowledge of the trade secrets enters into the defendant’s employ and brings with him trade secret information which he learned while working for the plaintiff; (2) The plaintiff may disclose the “secret” information to the defendant perhaps in connection with an attempt to enter into a sales or other agreement with the defendant; (3) The trade secret information may be stolen from the plaintiff and passed to the defendant. 291 Frames and subframes have been defined to represent these typical trade secret fact patterns. Figure 2 illustrates excerpts of frame structures representing the following hypothetical trade secret case, involving the fii, “employee”, scenario, named RCAVICTIM v. SWIPEINC aml Leroy Sold. In the hypo, plaintiff RCAVICTIM sues defendants SWIPEIIUC and Leroy Soleil for misappropriation of trade secrets in connection with software developed by the plaintiff over a period of two years, from 1980 to 198L, with an expenditure of $2 million. Plaintiff markets the software, known as AUTOTELL, a program to operate a system of automated teller machines, to the banking industry. In 1982, computer whiz Leroy Soleil, one of plaintiff’s key personnel on the AUTQI’ELL project, left RCAVICTIM and began working for SWIPEJNC on a competing product, TELLERMATIC, also an automated teller program, which the defendant had just begun to develop. SWIPEINC managed to perfect TELLERMATIC also in about two years after spending about $2 million. RCAVICTIM claimed that SWIPEINC used trade secret information about AUTQTELL which Soleil brought with him. 3.1 DImensional Analysts In actual trade secret cases, the courts have decided a number of legal issues. For each issue decided, the court frequently identifies certain facts that it deems significant in making its “holding” in favor of a party [Levi, 19491. The holdings of prior cases may be grouped into general categories that represent dimensions along which a hypo can be modified in ways that have legal significance for one or the other party. The dimensions factor a legal domain into basic modifications that affect the relative strengths of the parties’ arguments and organize the prior cases in terms of how they can be used to guide modifying a hypo or to support a hypothetical party% argument. Dimensions that have been identified in the trade secret case law [Gilbume & Johnston, 19821 and implemented in the program include the following: 1. Unfair Competitive Advantage: Plaintiff’s argument is strengthened if the alleged trade secret information allowed defendant to gain a competitive advantage over plaintiff. 2. Generally Known: Plaintiff’s argument is weakened if the alleged trade secret information is generally known within the industry. 3. Learnable Hsewherc: If the information was learned by an employee in his work for the plaintiff and he could have learned the information working for some other employer, plaintiff’s argument is weakened. 4. Vertical Knowledge: Plaintiff’s argument is weakened if the alleged trade secret information was about a vertical market. For example, cases imply that knowledge about a vertical market, such as knowledge of the structure of the banking industry, that an employee might learn in the course of developing computer programs for that market is not protectible as trade secret information. 5. 
Telltale Signs of Misappropriation: Plaintiff's argument is strengthened if there are certain telltale signs that the defendants sought to misappropriate the plaintiff's alleged trade secret information, e.g., that the corporate defendant paid a very high bonus to get the employee to bring with him a copy of the code he worked on for the plaintiff. 6. Noncompetition Agreement: Plaintiff's argument is strengthened if the employee entered into an agreement not to work for plaintiff's competitors. 7. Accessible by Others: Plaintiff's argument is weakened to the extent that plaintiff did not keep secret its alleged trade secret information by allowing an increasing number of other persons to have access to the information. 8. Confidentiality Agreements Constraining Access: Plaintiff's argument is strengthened to the extent that the persons with access to the trade secret information entered into agreements not to disclose the information to others.
3.2 Dimension- and Example-Directed Modification
Our HYPO program can modify a hypothetical case in favor of either party along any of the above dimensions. For instance, one simple way to modify the hypo in favor of plaintiff is to introduce the fact that SWIPEINC developed the competing software after 1982, the date when Soleil joined the company, at a considerable saving in time and money compared to plaintiff's expenditures. Such a modification is done so as to reflect the fact situation of an actual case in the knowledge base, thus allowing one to argue analogically for or against a party's position. Suppose that there is a trade secret misappropriation case in the EKB, JCN Corp. v. TEREX, where the court held for plaintiff JCN and TEREX took two years and $1,000,000 to develop a product that JCN took four years and $2,000,000 to develop. The modification procedure simply computes the relative savings in development time and expenditures in the case from the EKB and modifies the appropriate slots in the hypo so that SWIPEINC also saved relatively the same amounts in developing the program. See Figure 2. Under the new facts of the hypo, RCAVICTIM's attorney could cite JCN Corp. v. TEREX in favor of his client's position. If, on the other hand, the hypo were to be modified in favor of the defendant, the procedure would decrease the relative savings in development expenditures so that SWIPEINC's attorney could distinguish JCN Corp. v. TEREX on the basis that his client did not save as much in development costs as TEREX did.
[Figure 2: Hypo, RCAVICTIM v. SWIPEINC & Leroy Soleil, and Modifications. Excerpts of the plaintiff, defendant, product, and secret-knowledge frames, with slot values before and after modification. Key to modifications: (1) modify for plaintiff along Dimension 1 using JCN Corp. v. TEREX as a guide (see text); (2) modify for plaintiff along Dimensions 2-4; (3) modify for defendant along Dimension 7 and for plaintiff along Dimension 8.]
Figure 2 illustrates other modifications that strengthen plaintiff's argument in the hypothetical. The subject matter of the claimed trade secret can be characterized as technical information, e.g., about software engineering issues relevant to real-time applications, as opposed to vertical knowledge about the banking business (Dimension 4). The technical knowledge can be characterized as applied in a novel way, newly discovered by plaintiff's personnel, or combined in a unique way with other technical knowledge, as opposed to being generally known within the industry (Dimension 2). It may be taken that Leroy Soleil would not have been able to learn such knowledge while working with anyone other than the plaintiff (Dimension 3), or that SWIPEINC paid him a bribe to enter its employ (Dimension 5), or that Soleil brought with him a copy of the source code of plaintiff's program when he switched to SWIPEINC's employ (Dimension 5). Modifications along each of the dimensions affect the values in some subset of the frame slots representing the hypo. The dimensions provide access to those cases in the database that could be cited or distinguished by virtue of the modification.
3.3 Limitations and Applications
The effects of the modifications on the relative strengths of the parties are not necessarily independent. For example, the hypo could be modified in favor of the plaintiff along Dimension 6 so that Leroy Soleil and plaintiff had entered into an enforceable noncompetition agreement. Now the effect of modifications along Dimensions 1 through 5 that otherwise would favor the defendant is rendered moot. That is, even though the plaintiff has a weak argument (along Dimensions 1-5) that the claimed secret knowledge is protectible as a trade secret, he may still be able to enforce the noncompetition agreement. Another example of a collision in the effects of modifications along dimensions can be illustrated by modifying the hypo in favor of the defendant along Dimension 2 and for the plaintiff along Dimension 3. As a result, the claimed secret knowledge is both generally known within the industry and not learnable by employees working anywhere but with the plaintiff, a contradiction. The modification procedures can be used to generate a "slippery slope" type sequence of hypos, a common feature of legal argument. Suppose the hypo is modified along Dimension 7 in favor of the defendant so that one other person, let us say a customer, has access to the information that plaintiff claims is a trade secret. This weakens plaintiff's argument because it implies that plaintiff did not treat the information as secret. If the hypo is modified along Dimension 8 in favor of plaintiff, that customer is made subject to a contractual obligation not to divulge the information it received from the plaintiff; plaintiff's argument that it treated the information as secret is restored. Suppose this sequence of modifications were repeated so that instead of one customer's having access to the information, twenty did. Plaintiff could still prevail since the corresponding modifications along Dimension 8 impose confidentiality agreements on all twenty customers.
Now suppose that the sequence were repeated so that the number of customers with access were twenty thousand, two hundred thousand, two million. The gambit of imposing confidentiality agreements on all of the customers may not continue to satisfy plaintiff’s burden of showing that it had kept secret the information. If 200,ooo customers have access, even if they have entered into confidentiality agreements, from whom is the secret being kept? To the distributors of software to the mass market this hypothetical fact situation is of more than academic interest. Modifications along one dimension may make other dimensions applicable or inapplicable. For example, as has already been mentioned, the hypo can be modified along Dimension 6 to introduce a noncompetition agreement. As a result of this modification, another dimension becomes applicable to the case: 9. Duration of Noncompetition Prof3Mtfon: Plaintiff’s argument is weakened if the noncompetition agreement purports to prohibit the employee from working for competitors for too long a period. The hypo can be modified to increase the time period for which the agreement purports to prevent the employee from competing, eventually to the point where the agreement is no longer enforceable by the plaintiff. How long a prohibition against competition by the employee is too long to be enforced? The legal rule which purports to answer that question is that the covenant not to compete will be enforced as long as its terms are not unreasonable. Obviously such a rule provides a program little guidance in the modification of the hypothetical. Legal cases in the EKB which are relevant to Dimension 9, however, constitute specific examples of the application of this rule, complete with actual time periods that courts have deemed reasonable and others deemed unreasonably long. The modification procedure will use the actual time periods as guides in strengthening or weakening the plaintiff% argument. The cases indexed under Dimension 9 can be cited to justify the interpretation of the effect of the modification and to fashion an explanation of the argument by reference to the general rule enunciated in the cited cases. A case from the EKB may participate in more than one dimension and be applied to modify a hypo along a dimension eventhough the case differs substantially from the hypo in other respects. Where a dimension involves slots whose values are not quantitative, more complex methods of modifying the slot values are necessary. The modifications must be made consistently within the context of the hypo’s other facts, particularly the time ordering of significant events in the hypo. 293 4. snmmaly In this paper we have examined two lines of research sharing the theme of examples and example generation. In the first, on-line explanation systems, there is no distinction made between real and hypothetical examples as there is in the second, legal argumentation. Both research programs rely heavily on a preexisting corpus of examples, structured and represented in an mples-Knowledge-Base (KKB) and the use of domain-specific procedures to modify existing examples to create new ones. In each program, there are constraints on the selection and generation of new examples. In the case of on-line HELP, the constraints come from knowledge of the user, his task and context as well as the subject matter being explained. 
In the case of legal argumentation, the constraints come from internal consistency (e.g., of time) within the example, dimensional analysis, domain-specific doctrinal aspects, and the desired direction of the modification (i.e., stronger or weaker for plaintiff or defendant) with respect to the controlling case from the EKB. Particularly, in the argumentation examples there is the need to mediate between potentially conflicting constraints. In our future work on argumentation and explanation, we plan to explore contextual knowledge, which relates to the constraints to be placed on the examples to be generated and on goal knowledge, of the user and arguer. Such deeper analyses of the arguments, as well as involved. research -directions will involve structure of explanations and of the knowledge and parties 5. References DEC (1978). VAXIVMS Command Lunguage User Guide. Digital Equipment Corporation. Order No. AADO23ETE. Dietterich, T., and Michalski, R. S. (1983). “A Comparative Review of Selected Methods for Learning from Examples”. In Michalski, Carbonell & Mitchell (Eds.) Machine Learning: An Artificial Intelligence Approach, Tioga Publishing, CA. Finin, T. W. (1983). ‘Providing Help and Advice in Task Oriented Systems”. In Prmcedings IJCAI83. Karlsruhe, W. Germany. Gilbume, M. R., and Johnston, R. L. (1982). ‘Trade Secret Protection for Software Generally and in the Mass Market”. ComputerLuw Journal. Vol III, No. 3 (Spring). Houghton, R. C. (1984). “Gnline Help Systems: A Conspectus”. CACM, Vol. 27, No. 2, February. IBM (1983). Disk Operating System by Microsqft, Inc.. IBM Personal Computer Language Series, IBM Corp. Kolodner, J. L. (1983). ‘Reconstructive Memory: A Computer Model”. Cognitive Science. Vol. 7, NO. 4. Kuhn, T. S. (1970). The Structure of Scientific Revolutions. Second Edition. University of Chicago Press. Lakatos, I. (1976). Proofs and Refutations. Cambridge University Press. Levi, E. (1949). An Introduction to Legal Reasoning. University of Chicago Press. Levine, L., and Woolf, B. (1984). “Do I Press Return?” In Proceedings ACM-SXGCSE Symposium on Computer Science and Education, Philadelphia, February. McDonald, D. D. (1982). ‘Natural Language Generation as a Computational Problem: An Introduction”. In Brady (Ed.) Computational Theories of Discourse, MIT Press. Rissland, E. L. (1981). Constrained Example Generation. COINS TR 81-24, Department of Computer and Information Science, University of Massachusetts, Amherst. Rissland, E. L. (1983). “Examples in Legal Reasoning: Legal HypotheticaIs”. In Proceedings IJCAf-&f. Karbruhe, W. Germany. Rissland, E. L. (1984). “Lear&g to Argue: Using Hypothetical!?. Proceedings First Annual Work&q on Theoretical Issues in Conceptual Id- Processing. Atlanta, GA. Rissland, E. L. (1978). “understanding Understanding Mathematics- Cognitive Science, Vol. 2, NO. 4. Schank, R. S. (1984) “Explaining”. Keynote talk at First AMU~ Conference on Theoretical Issues in Conceptual Information Processing, Atlanta, GA. Shrager, J., and Finin, T. W. (1982). “An Expert System that Volunteers Advice”. In Pruceedings -82, Pittsburgh, PA, August. Swartout, W., and Balxer, R. (1982). “Gn the Inevitable Intertwining of Specifications and Programs”. CACM, Vol. 25, No. 7, July. Wile&y, R. (1982a). “Talking to UNIX in English: An Overview of UC”. In Pruceedings m-82, Pittsburgh, PA, August. Wilensky, R. (1982b). Talking to UNIX in English: An Overview of an On-fine consuftant. Report No. 
UCB/CSD 82/104, Computer Science Division, University of California, Berkeley, September.
PHENOMENOLOGICALLY PLAUSIBLE PARSIBJG David L. Waltz and Jordan B. Pollack Coordinated Science Laboratory University of Illinois at Urbana-Champaign 1101 W. Springfield Ave. Urbana, Illinois 61801 ABSTRACT This is a description of research in develop- ing a natural language processing system with modular knowledge sources but strongly interactive processing. The system offers insights into a variety of linguistic phenomena and allows easy testing of a variety of hypotheses. Language interpretation takes place on a activation network which is dynamically created from input, recent context, and long-term knowledge. Initially ambi- guous and unstable, the network settles on a sin- gle interpretation, using a parallel, analog relaxation process. We also describe a parallel model for the representation of context and prim- ing of concepts. Examples illustrating contextual influence on meaning interpretation and "semantic garden path" sentence processing are included. I INTRODUCTION The interpretation of natural language requires the cooperative application of many kinds of knowledge, both language specific knowledge about word use, word order and phrase structure, and "real-world" knowledge about stereotypical situations, events, roles, contexts, and so on. And even though these knowledge systems are nearly decomposable, enabling the circumscription of individual knowledge areas for scrutiny, this decomposability does not easily extend into the realm of computation; that is, one cannot con- struct a psychologically realistic natural language processor by merely conjoining various knowledge-specific processing modules serially or hierarchically. We offer instead a model based on the integration of independent syntactic, semantic, lexical, contextual, and pragmatic knowledge sources via SDreadina activation and lateral inhi- bition links. Figure 1 shows part of the network that is activated when the sentence (Sl) John shot some bucks. is encountered. Links with arrows are activating, while those with circles are inhibiting. Mutual inhibition links between two nodes allow only one of the nodes to remain active for any duration. (However, both nodes may be simultaneously inactive.) Mutual inhibition links are generally placed between nodes that represent mutually *This work is supported by the Office Of Naval Research under contract NO001 4-75-C-0612. incompatible interpretations, while mutual activa- tion links join compatible ones. If the context in which this sentence occurs has included refer- ence to "gambling," only the shaded nodes of Fig- ure 1 (a) remain active after relaxation of the network. If, on the other hand, "hunting" has been primed, only the shaded nodes shown in Figure 1 (b) will remain active. Notice that the "decision" made by the system integrates syntactic, semantic, and contextual knowledge: the fact that nsome bucks" is a legal noun phrase is a factor in killing the readings of "bucks" as a verb; the fact that "hunting" is associated with both the "fire" meaning of "shot" and the "deer" meaning of nbucksn leads to the activation of the coalition of nodes shown in Fig- ure 1 (b); and so on. At the same time, the knowledge is discrete, and easy to add or modify. In this model of processing, decisions are spread out over time, allowing various knowledge sources to be brought to bear on the elements of the interpretation process. 
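A toy relaxation of the sort just described can be sketched as follows. The network fragment, update rule, and constants here are illustrative stand-ins; the paper's own link weights and proportional update function are summarized in the following paragraphs, so these numbers should not be read as those of the implemented system.

# Toy relaxation over a hand-built piece of the "John shot some bucks" network.
# The update rule is a generic interactive-activation-style rule; all constants
# are illustrative, not the implemented system's.

nodes = {n: 0.0 for n in
         ["HUNT", "GAMBLE", "shot=FIRE-AT", "shot=WASTE", "bucks=DEER", "bucks=DOLLARS"]}
nodes["HUNT"] = 0.05          # slight initial advantage for the hunting context

links = [   # (a, b, weight): positive = mutual activation, negative = mutual inhibition
    ("HUNT", "shot=FIRE-AT", 0.2), ("HUNT", "bucks=DEER", 0.2),
    ("GAMBLE", "shot=WASTE", 0.2), ("GAMBLE", "bucks=DOLLARS", 0.2),
    ("shot=FIRE-AT", "shot=WASTE", -0.45), ("bucks=DEER", "bucks=DOLLARS", -0.45),
    ("HUNT", "GAMBLE", -0.45),
]

def step(nodes, links, decay=0.1):
    net = {n: 0.0 for n in nodes}
    for a, b, w in links:                       # links are bidirectional
        net[a] += w * max(nodes[b], 0.0)
        net[b] += w * max(nodes[a], 0.0)
    new = {}
    for n, a in nodes.items():
        # grow toward 1 on positive net input, shrink toward 0 otherwise
        delta = net[n] * (1.0 - a) if net[n] > 0 else net[n] * a
        new[n] = min(1.0, max(0.0, a + delta - decay * a))
    return new

for _ in range(25):
    nodes = step(nodes, links)
print({n: round(a, 2) for n, a in nodes.items()})
# the HUNT coalition (FIRE-AT, DEER) grows and dominates; the GAMBLE coalition stays suppressed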
This is a radical departure from cognitive models based on the decision procedures that happen to be convenient in conventional programming languages. Of course, we are using a conventional language for simulating the system, but we plan to implement a connection machine [1]. Our program operates by (1) constructing a graph with weighted nodes and links from a sentence, and (2) running an iterative operation which recomputes each node's activation level (i.e., its weight) based on a function of its current value and the inner product of its links and the activation levels of its neighbors. For these examples, we are primarily interested in the behavior of the network, and not in the program that dynamically constructs the network. The syntactic portions of the networks in this paper are constructed by a chart parser [2], while the semantic and contextual portions are permanently resident in memory. Initially all nodes are given zero weight, except for the nodes used to model context (e.g. "hunting" and "gambling"). Each activation link has a weight of .2 and each inhibition link has a weight of -.45. The iterative operation uses a proportional function to compute new weighting for nodes, similar to the one used by McClelland and Rumelhart [3] in their interactive activation model. The net effect of the program is that, over several iterations, a coalition of well-connected nodes will dominate, while the less fortunate nodes (those which are negatively connected to winners) will be suppressed.
[Figure 1: Two interpretations of "John shot some bucks." (A) shows the result in the context of gambling, i.e. John wasted some money; (B) shows the result in the context of hunting, i.e. John fired a gun at a deer. Both examples required about 25 cycles to settle; in each case only a slight initial advantage was given to HUNT or GAMBLE. The numbered nodes control the arrival times of the words.]
We exploit this behavior several ways in our system: by putting inhibitory links between nodes which represent well-formed phrases with shared constituents (which are, thus, mutually exclusive), we ensure that only one will survive. Similarly, there are inhibitory links between nodes representing different lexical categories (i.e. noun or verb) for the same word; between concept nodes representing different senses of the same word (i.e. submarine as a boat or as a sandwich); and between nodes representing conflicting case role interpretations. There are activation links between phrases and their constituents, between words and their different meanings, between roles and their fillers, and between corresponding syntactic and semantic interpretations.
II MODELING PHENOMENOLOGY
Because our system operates in time, we are able to model effects that depend on context, and effects that depend on the arrival times of words. Consider the network shown in Figure 2, which shows three snapshots taken during the processing of the sentence (due to Charniak [4]): (S2) The astronomer married a star. Figure 2 includes three possible meanings for "star", namely (1) the featured player in dramatic acting, or (2) a celestial body, or (3) a pentagram.
We presume that "astronomer" primes CELESTIAL-BODY by the path of strong links: astronomer -> ASTRONOMER -> ASTRONOMY -> CELESTIAL-BODY, but that MOVIE-STAR would be primed very little, if at all, because any activation of HUMAN via "astronomer" and "married" is spread fairly evenly among a vast number of other concepts (PHYSICIAN, PROFESSOR, etc.). When the word "star" is encountered, the meaning CELESTIAL-BODY is initially highly preferred, but eventually, since CELESTIAL-BODY is inanimate, whereas the object of MARRY should be human and animate, the MOVIE-STAR meaning of "star" wins out. In Figure 2(d) we show the activation levels for CELESTIAL-BODY and MOVIE-STAR as functions of time. The activation of CELESTIAL-BODY is initially very high; only later does MOVIE-STAR catch up to and eventually dominate it. We argue that, if activation level is taken as a prime determinant of the contents of consciousness, then this model captures a common experience of people when hearing this sentence. This phenomenon is often reported as being humorous, and could be considered a kind of "semantic garden path". It should be emphasized that this behavior falls out of this model, and is not the result of juggling the weights until it works. In fact, the examples shown in this paper work in an essentially similar way over a broad range of link weightings.
[Figure 2: The cognitive "doubletake" when processing "The astronomer married the star." (A) shows CELESTIAL-BODY dominant at cycle 27; (B) shows a balance of power at cycle 42; (C) shows MOVIE-STAR finally winning the battle by cycle 85; and (D) shows a plot of their activation values over time.]
III CONTEXT: INTRODUCTION
Earlier (Figure 1) we used "context-setting" nodes such as "hunting" and "gambling" to prime particular word and phrase senses, in order to force appropriate interpretations of a noun phrase. There are, however, major problems that preclude the use of such context-setting nodes as a solution to the problem of context-directed interpretation of language. A particular context-setting word, e.g. "hunting", may never have been explicitly mentioned earlier in the text or discourse, but may nonetheless be easily inferred by a reader or hearer. For example, preceding (S1) with: (S3) John spent his weekend in the woods. should suffice to induce the "hunting" context. Mention of such words or items as "outdoors", "hike", "campfire", "duck blind", "marksman", etc. ought to also prime a hearer appropriately, even though some of these words (e.g. "outdoors" and "hike") are more closely related to many other concepts than to "hunting." We are thus apparently faced with either (a) the need to infer the special context-setting concept "hunting", given any of the words or items above; or (b) the need to provide connections between each of the words or items and all the various word senses they prime. There is, however, a better alternative. We propose that each concept should be represented not merely as a unitary node, but should in addition be associated with a set of "microfeatures" that serve both (a) to define the concepts, at least partially, and (b) to associate the concept with others that share its microfeatures. We propose a large set of microfeatures (on the order of thousands), each of which is potentially connected to every concept node in the system (potentially on the order of hundreds of thousands).
Each concept is in fact connected to only some subset of the total set, via either bidirectional activation or bidirectional inhibition links. Closely related concepts have many microfeatures in common. We suggest that microfeatures should be chosen on the basis of first principles to correspond to the major distinctions humans make about situations in the world, that is, distinctions we must make to survive and thrive. For example, some important microfeatures correspond to distinctions such as threatening/safe, animate/inanimate, edible/inedible, indoors/outdoors, good outcome/neutral outcome/bad outcome, moving/still, intentional/unintentional, or characteristic lengths of events (e.g., whether events require milliseconds, hours, or years). As in Hinton's [5] model, hierarchies arise naturally, based on subsets of shared microfeatures, but are not the fundamental basis for organizing concepts in a semantic network, as in most AI models.
A. Microfeatures as a Priming Context - An Example
Let us see how microfeatures could help solve the problems presented by the example in Figure 1. Figure 3 shows a partial set of microfeatures, corresponding to temporal event length or location (setting) running horizontally. A small set of concepts relevant to our example is listed across the top. Solid circles denote strong connection of concepts to microfeatures; open circles, a weak connection; and crosses, a negative connection. A simple scoring scheme allows "weekend" and "outdoors" to appropriately prime concepts related to "fire at" and "deer" relative to "waste money" and "dollar".
[Figure 3a: Matrix of concepts against microfeatures for characteristic event length (second, hour, decade) and setting: inside (house, store, office, school, factory, casino, bar, restaurant, theatre), outside (racetrack, city street, city park), and rural (forest, lake, desert, mountain, seashore, canyon). Weightings: concept and microfeature characteristically associated = 1; mild association = .5; could be associated, but characteristically unrelated = 0; negatively associated, tending to be mutually exclusive = -.5.]
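The kind of per-concept scoring this implies, and the fraction-of-maximum figures reported in Figure 3b below, can be roughed out as follows. The feature names, weights, priming rule, and normalization are illustrative assumptions for the sketch, not the paper's actual matrix or propagation scheme.

# One plausible reading of the microfeature scoring scheme of Figure 3
# (weights 1, .5, 0, -.5); feature lists and values here are illustrative.

features = ["hour-scale", "weekend-scale", "inside", "casino", "outdoors", "forest"]

weights = {   # concept -> weight on each microfeature
    "FIRE-AT":     {"outdoors": 1.0, "forest": 1.0, "inside": -0.5},
    "WASTE-MONEY": {"casino": 1.0, "inside": 0.5, "outdoors": -0.5},
    "DEER":        {"forest": 1.0, "outdoors": 1.0, "casino": -0.5},
    "DOLLAR":      {"casino": 1.0, "inside": 0.5},
    "WEEKEND":     {"weekend-scale": 1.0, "outdoors": 0.5, "forest": 0.5},
    "CASINO":      {"casino": 1.0, "inside": 1.0, "hour-scale": 0.5},
}

def prime(concept, activations=None):
    """Priming a concept pushes its microfeature pattern into the activations."""
    activations = dict.fromkeys(features, 0.0) if activations is None else activations
    for f, w in weights[concept].items():
        activations[f] = max(activations[f], w)      # simple, non-relaxing update
    return activations

def induced(concept, activations):
    """Induced activation as a fraction of the concept's maximum possible score."""
    score = sum(w * activations[f] for f, w in weights[concept].items())
    best = sum(w for w in weights[concept].values() if w > 0)
    return max(0.0, score / best)

acts = prime("WEEKEND")
acts = prime("CASINO", acts)          # e.g. a weekend spent at the casino
for c in ["FIRE-AT", "WASTE-MONEY", "DEER", "DOLLAR"]:
    print(c, round(induced(c, acts), 2))
# gambling-related senses score higher than hunting-related ones in this context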
Priming concept       Fire-at   Waste   Deer   Dollar
Weekend                 .41      .55     0      .46
Outdoors                .41      0       .44    .08
Casino                  .05      .59     0      .42
Video Games             .18      .36     0      .19
Weekend + Outdoors      .41      .07     .25    .12
Hunting                 .36      0       .50    0
Gambling                .09      .59     0      .38
(Fraction of maximum possible score.)
Figure 3b: Instantaneous priming effects on concepts; microfeatures start at 0, and undergo a single priming cycle.
Figure 3: This figure illustrates the use of microfeatures to provide contextual priming. At any given time, microfeatures will display some pattern of activation. Each concept has an induced activation level as a result of the microfeature activation values. The microfeature activations are modified whenever a concept is primed. For our example, assume "weekend" is primed, with all microfeatures initially at 0. The top line of Figure 3b shows the activation levels of concepts, where the number represents a fraction of the maximum possible activation for that concept. These values prime various word sense nodes differently.
Small [7] was another worker in AI to question the traditional serial integration of language processing. He suggested that rather than having separate modules for syntax and semantics, each word was an expert in interpreting its own meaning and role in context. Following on that work, Cottrell is recasting word-sense selection into a connectionist framework, and his work is very closely related to our own [8]. Jones [9] is also working on parsing with spreading activation, but of the digital kind. Other work has set integrated parsing into the production system framework. BORIS [10] uses a lexically-based demon-driven production system to read stories and answer questions about them. The READER system [11] is a multi-level parallel production system which models chronometric data, i.e. data on how long humans visually fixate on each word while reading. Another interesting approach to language integration is taken by Hendler and Phillips [12], who are using a message-passing ACTOR [13] system to model the interactions between syntax, semantics and pragmatics. Other work that has influenced our research includes the spreading activation work by Ortony and Radin [14], based on a network of free associations to English words.
V CONCLUSION
Using spreading activation and lateral inhibition enables a good framework for embedding comprehension phenomena which cannot even be approached with binary serial models. While we have not discussed them here, we have explored ties to psychological and linguistic results and theories; these are reported in Waltz and Pollack [15]. There we show that structural preferences such as Minimal Attachment [16] can be understood as side-effects of, rather than as strategies for, a syntactic processor; current hypotheses about lexical disambiguation in context [17,18] can nicely fit into a model with lateral inhibition, but cannot be accounted for by activation alone. Garden-paths at different levels of processing can be explained by the breakdown of a common approximate consistent labeling algorithm -- Lateral Inhibition -- the "Universal Will to Disambiguate."
REFERENCES
[1] Hillis, W.D., "The Connection Machine (Computer Architecture for the New Wave)." AI Memo 646, MIT AI Lab, 1981.
[2] Kay, M., "The MIND System." In Rustin (Ed.), Natural Language Processing. New York: Algorithmics Press, 1973.
[3] McClelland, J.L. and D.E.
Rumelhart, "An Interactive Activation Model of the Effect of Context in Perception," TR 91, Center for Human Information Processing, UCSD, 1980. Charniak, E., "Passing Markers: A Theory of Contextual Influence in Language Comprehen- sion." Cognitive Science 7:3 (1983) 171-190. Hinton G.E., "Implementing Semantic Networks in Parallel Hardware." In G.E. Hinton and J.A Ass~,,~~~~~so~em~~so)' Parallel Models Or . . . Hillsdale, NJ: Lawrence Erlbaum Associates, 1981. Schank, R.C., N. Goldman, C. Rieger and C. Riesbeck, "MARGIE: Memory, Analysis, Response Generation and Inference in English." In ~~62UWr Stanford University, 1973, pp. - . Small, s. "Word Expert Parsing: A Theory of Distributed Word-Based Natural Language Understanding," TR-954, Department of Com- puter Science, University of Maryland, 1980. [81 c91 II101 Cl11 Cl21 [131 Cl41 Cl51 Cl61 [171 Cl81 Cottrell, G.W. and S.L. Small, "A Connection- ist Scheme for Modelling Word Sense Disambi- guation." Cognition and Brain Theorv 6:l (1983) 89-120. Jones, M.A., "Activation Based Parsing." In JJCAI, Proc. Karlsruhe, West Germany, 1983, pp. 678-682. Dyer, M., "In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension," Yale Computer Science Research Report 219, May 1982. Thibadeau, R., M.A. Just and P.A. Carpenter, "A Model of the Time Course and Content of Reading." Cognitive Science 6:2 (1982) 157- 203. Hendler, J. and B. Phillips, "A Flexible Con- trol Structure for the Conceptual Analysis of Natural Language Using Message Passing," TR- 08-81-03, Texas Instruments, Dallas, TX, 1981. Hewitt, C., "Viewing Control Structures as Patterns of Passing Messages," AI Memo 410, MIT AI Lab, 1976. Ortony, A. and D. Radin, "SAPIENS: Spreading Activation Processor for Information Encoded in Network Structures," Tech. Rept. 296, Center for the Study of Reading, University of Illinois, Urbana, October 1983. Waltz, D.L. and J.B. Pollack, "Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation." Cognitive Science (1984) to appear. Frazier, L., "On Comprehending Sentences: Syntactic Parsing Strategies," Indiana University Linguistics Club, 1979. Swinney, D.A., "Lexical Access During Sen- tence Comprehension: (Re)consideration of Context Effects." Journal of Verbal Learning and Verbal Behavior 18 (1979) 645-659. Seidenberg, M.S., M.K. Tanenhaus and J.M. Leiman, "The Time Course of Lexical Ambiguity Resolution in Context," TR 164, Center for the Study of Reading, University of Illinois, Urbana, March 1980. 339
Hardware and Software Architectures for Efficient AI Michael F. Doming Fairchild Laboratory for Artificial Intelligence Research Fairchild Camera and Instrument Corporation 4001 Miranda Avenue Palo Alto, California 94304 Abstract With recent advances in AI technology, there has been in- creased interest in improving AI computational throughput and reducing cost, as evidenced by a number of current pro- jects. To obtain maximum benefit from these efforts, it is necessary to scrutinize possible efficiency improvements at every level, both hardware and software. Custom AI machines, better AI language compilers, and massively paral- lel machines can all contribute to efficient AI computations. However, little information is available concerning how to achieve these efficiences. A systematic study was undertaken to fill this gap. This paper describes the main results of that study, and points out specific improvements that can be made. The areas covered include: AI language semantics, AI language compilers, machine instruction set design, parallel- ism, and important functional candidates for VLSI implemen- tation such as matching, associative memories, and signal to symbol processing for vision and speech. 1 Introduction As AI software grows in complexity, and as AI applica- tions move from laboratories to the real world, computational throughput and cost are increasingly important concerns. In general, there are two motives for increasing the efficiency of computations. One is the need to obtain faster computation, regardless of cost. This may be due to explicit real-time constraints. It may also be due to current methods being taxed well beyond the limit of complexity or timely response. The other is when increases in computational efficiency are part of an overall effort to obtain a better cost/performance ratio. Both these motives arise within AI, and causes for each will be examined. Behind both, however, is usually the imperative of real world market pressures. Opportunities for increased efficiencies in AI computa- tions exist at every level. Improved instruction set designs combined with improved AI language semantics allow more powerful compiler optimixations to be performed. Con- current machines allow parallel execution of Lisp and declarative constructs, raising issues of Md, or and szreum parallelism. Custom VLSI implementations for current AI performance bottlenecks are also possible, via devices such as hardware unifiers, associative memory, and communication hardware for coordinating parallel search. Many of these speed-ups are orthogonal and can potentially lead to multipli- cative performance enhancements of several orders of magni- tude. However, this is not always the case, as the optimiza- tions can sometimes interfere. For example, some language optimixations may tend to serialize the computation, negating parallelism gains. As part of an effort to design a massively concurrent architecture for AI computation (the Fairchild FAIM-1 prc+ ject), a comprehensive study was done to determine potential throughput increases at various levels and their interactions. This paper will examine several results of this study. 2 Misconception There are several misconceptions of what needs to be done to improve computational throughput for AI. Since most AI is done in Lisp, many believe the key is simply to make Lisp a few orders of magnitude faster. However this approach ignores potential speed-ups that may be easier to obtain elsewhere. 
Others see no reason to concentrate upon anything other than the fundamental problem of parallelism. This approach presumes routine solution of a very difficult problem: decomposing arbitrary AI computations to effectively use thousands of parallel processors. A problem with this is that most programs, even ones with a high degree of inherent parallelism, almost always have several serial bottlenecks. As an example, most parallel programs need to gather the result of one batch of parallel computations for reflection before generating the next batch. In many cases these serial sections will dominate the running time of the entire program. So one cannot ignore the issue of how to extract as much serial speed as possible from languages and machines. Otherwise it might be the case that, having built an expensive parallel machine hundreds of times faster than existing machines, a new compiler and/or microcode may make some existing serial machine even faster1 The machine coded unifier in the Crystal AI language, for instance, is two orders of magnitude faster than the Lisp coded unifier in the predecessor PEARL AI language [Deering 81a]. 3 Software: What can be done to help AI language implementations 3.1 Compile the language directly to machine code Most “AI languages” per se are not complete computer languages, but packages of routines on top of an existing language (usually Lisp.) While this is a great way of rapidly prototyping a language, and results in an order of magnitude savings in development costs over a traditional full compiler, it does not lead to very efficient implementations. If to increase the speed of AI applications the extreme of building custom parallel processors is being considered, it is silly not to compile AI languages directly onto these processors. There is a large body of computer science knowledge on compilation that can be brought to bear, and great potential for 73 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. performance increase. (Consider the 100x plus speed difference between most Lisp based Prolog interpreters and Warren’s DEE-20 Prolog compiler [Warren 771.) 3.2 Make sure that the language is compilable Because most AI language implementations have been interpreters, issues of compilability generally have not been thought through. Language features that seemed efficient in an interpreted environment may be very slow when compiled, if they are compilable at all. A proper choice of features in light of a compiled environment will lead to more efficient program execution. 3.3 Add extensive libraries of useful routines Another problem with many AI languages is the lack of general tools to support common applications. While it is argued that this allows the user to write his own customized tools (that may be very efficient), most users will do a much worse job than the language implementor could. For exam- ple, PEARL did not directly support any particular theorem proving or search system (such as forward and backward chaining), leaving the user to his own devices. But the MRS system [Genesereth 831, while it provides a convenient meta- level control for users to write their own search systems in, also provides a range of built-in search strategies, from back- ward chaining to full resolution theorem proving. The point is that an extensive library of well written routines of general use will speed the operation of typical user programs (not to mention their development). 
4 Hardware: What can be done to help conventional computer instruction sets
It is often said that conventional computer instruction sets are not well suited for AI software, but there have been few attempts to quantify the reasons why. For older generation machines, severe address space limitations and lack of flexible pointer manipulation facilities are easy to point to [Fateman 78]. But what of the new, more modern machines, such as the VAX, 68000, 16000 and RISC machines, and how do they compare with the custom Lisp machines? (Such as [Knight 81] and [Lampson 80].) To obtain insights into instruction set design, several Lisp systems and the fine details of their implementation were examined [Deering 84]. Several things were learned. It is very important to identify how rich an environment one wishes to support. For example, contrary to many people's expectations, on a large application program, Franz Lisp [Foderaro 83] on a VAX-11/780 was not significantly slower than Zetalisp on a Symbolics 3600. The difference was that most all type checking and generic function capabilities were either turned off (by the programmer) or missing in Franz, and the overall environment was much poorer. Assuming that such things are not frills, the expense of providing them on different architectures was examined. Flexible Lisp processing depends upon dynamic type checking and generic operations. Associating the data type directly with the data object means that the data type will always be at hand during processing, and this is the reason that tagged memory architectures are well suited to Lisp processing. Because of this, the speed of various processors upon the generic Lisp task was dependent upon how fast they could effectively emulate a tagged memory architecture. A number of experiments were performed to compare Lisp systems and processor instruction sets. As a representative sample, the timing results for a simple aggregate function incorporating some of the most common Lisp primitives (car, cdr, plus, function call/return) is shown in the table below:
[Table: Lisps vs. processors on foo.]
More extensive benchmarks have borne out (very) roughly the same speed ratios. The variance exceeded 50%, but this was not unexpected. Slight modifications of the compilers or instruction sets produced similarly large changes in the speeds. Existing Franz and PSL [Griss 82] compilers for the VAX and 68000's were used to compile foo. Type checking was turned off to obtain the fastest speeds. (Both PSL and Franz were told not to verify that the arguments of + were small integers; Franz did and PSL did not check for numeric overflow.) The timing figures were generated by examination of the assembly code produced and some actual machine timings. The timings of Zetalisp for the 3600 and CADR were taken by running existing systems. Zetalisp-like operations for the VAX and 68000's were hand coded, and the timings produced in the same way as those for PSL and Franz. The 68000 and 68010 were 10MHz no wait-state machines. The 68000 used 24 bit addresses, leaving the upper 8 data bits free for tag values. The 68010 used 32 bit addresses, and required the tags to be anded off before addresses could be used. The 68020 timings are estimates based upon the best available (but sketchy) preliminary performance data for a full 32 bit 16MHz machine with a small instruction cache.
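To make concrete what "emulating a tagged memory architecture" costs in software, the sketch below shows the tag-extract-and-dispatch pattern in a high-level form. The 6-bit tag field at the top of a 32-bit word and the operation names are illustrative choices for this sketch, not the layout of any of the systems measured.

# High-level sketch of the tag-extract-and-dispatch pattern that dominates
# generic Lisp operations on a conventional instruction set.  The 6-bit tag in
# the top of a 32-bit word is an illustrative layout, not any particular system's.

TAG_SHIFT, TAG_MASK = 26, 0x3F
DTP_FIXNUM, DTP_CONS, DTP_SYMBOL = 1, 2, 3

heap = {}                      # address -> (car-word, cdr-word), standing in for memory
def make_pointer(tag, value):  return (tag << TAG_SHIFT) | (value & ((1 << TAG_SHIFT) - 1))
def tag_of(word):              return (word >> TAG_SHIFT) & TAG_MASK
def address_of(word):          return word & ((1 << TAG_SHIFT) - 1)

def car_of_cons(word):         return heap[address_of(word)][0]
def car_error(word):           raise TypeError("car of a non-cons object")

# the dispatch table plays the role of the "extract bit field and dispatch" jump table
CAR_DISPATCH = {DTP_CONS: car_of_cons, DTP_FIXNUM: car_error, DTP_SYMBOL: car_error}

def car(word):
    return CAR_DISPATCH[tag_of(word)](word)

# usage: build the list (10), i.e. a cons of fixnum 10 and an illustrative NIL symbol
NIL = make_pointer(DTP_SYMBOL, 0)
heap[100] = (make_pointer(DTP_FIXNUM, 10), NIL)
cell = make_pointer(DTP_CONS, 100)
print(tag_of(car(cell)) == DTP_FIXNUM)   # True: the car is a fixnum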
Other experiments examined the architectural requirements for fast computation of some AI operations not directly supported by Lisp, in particular unification and associative search. When AI languages are fully compiled, these two functions many times become the computational bottlenecks. For traditional microprocessor instruction sets, the requirements of these operations turned out to be the same as for Lisp primitives: fast simulation of tagged architectures. More specifically, the instructions and capabilities that would make a conventional microprocessor better suited for Lisp (and Prolog, Krypton, MRS, PEARL, etc.) are:
- "Extract bit field and dispatch", an instruction to extract a sequence of bits from an operand, then add these bits to a dispatch table address, and jump indirect. This is necessary for rapid handling of tag values in generic operations, type checking, and for helping with unification.
- "Extract two bit fields, concat, and dispatch", an instruction for dispatching upon the contents of two operands (needed for the same reasons as the single argument version).
- The memory address system of the processor should ignore the upper address bits of data addresses that are not otherwise in use. This allows the wasted space in 32 bit pointers to be used as a tag field.
In the Zetalisp-like code, more than 30% of the time on the 68000's was spent in emulating the bit field dispatch instructions. Stripping off the tag bits accounted for approximately another 10%. It is therefore estimated that if the existing microprocessors had hardware support for these features, full type checking Lisps (like Zetalisp) could run almost twice as fast. These percentages come from hand implementing several Zetalisp primitives on current microprocessors. As an example, below the 68010 assembler code is shown for CAR. The number of processor clock cycles per instruction is shown in the left hand column. The boxed code will later be replaced by a single instruction.

Zetalisp CAR for the 68010
; To take the car we do a few lines of in-line code and
; then index jump to a subroutine. (Space for time.)
; The cons cell to take the car of is assumed in a0.
; dispatch to CAR subr based upon the tag in upper bits of a0
 4   movel  a0,d2         ; put a copy of the arg into d2
24   lsll   #8,d2         ; first 8 of: shift copy over by 9 bits
10   lsll   #1,d2         ; last 1 of: shift copy over by 9 bits
14   andl   #0x1F0,d2     ; and off non-tag (shifted over)
18   jsr    CAR(d2)       ; branch to car table indexed by type
; At return, the car of the object is in a2.

; The CAR subroutine.
CAR + DTP-CONS:           ; CAR procedure entry point for normal cons cell.
; We will arrive here if the argument passed to car was of type
; "pointer to cons cell".  Other objects passed to car => error
; dispatch to TRANSPORT subr based upon the tag
; in the upper bits of a2
 4   movel  a2,d2         ; put a copy into d2
24   lsll   #8,d2         ; first 8 of: shift copy over by 9 bits
10   lsll   #1,d2         ; last 1 of: shift copy over by 9 bits
14   andl   #0x1F0,d2     ; and off non-tag (shifted over)
10   jmp    TRANSPORT(d2) ; branch to car table indexed by type.
                          ; The reason for this jump is to check
                          ; for possible invisible pointers, unbound, etc.
TRANSPORT + NORMAL:       ; jump entry point for normal cons cell contents
 8   rts                  ; We're all done, return

174 clocks, @10MHz = 17.4μs

Now CAR for the 68010 will be recoded assuming two architectural refinements. First assume that the upper 7 bits of all addresses are ignored by the (virtual) memory system. Second, assume one additional instruction, "extract bit field and dispatch".
This instruction takes the bit field out of the second argument, as specified by the first argument (format: <#starting-bit, field-width>), adds it to the third argument (the jump table base address), and jumps indirect through this address.

; Now the car routine is recoded using the new instructions:
; index jump to a subroutine.
; dispatch to CAR subr based upon the tag in upper bits of a0
22   extractdispatch <#26,#6>,a0,CAR

; The CAR subroutine.
CAR + DTP-CONS:           ; CAR procedure entry point for normal cons cell.
; follow the pointer to the car
12   moveal (a0),a2       ; the upper 6 bits of a0 are ignored.
; dispatch to TRANSPORT subr based upon the tag
; in the upper bits of a2
22   extractdispatch <#26,#6>,a2,DISPATCH
TRANSPORT + NORMAL:       ; jump entry point for normal cons cell contents
 8   rts                  ; We're all done, return

64 clocks, @10MHz = 6.4μs, 2.7 times faster

For new, fully custom machine designs which are tailored specifically for AI, such features can all be built in. With a tagged architecture, many generic operations, such as "add", do not need to be dispatch subroutine calls. Rather the processor can examine the tags of the arguments to an add instruction, and if they are simple integers, directly perform the add. If the arguments are of a more exotic numeric type, the processor can generate a software interrupt to an appropriate routine. Further, for such designs it is very helpful to have a "smart" memory subsystem capable of rapidly chasing down indirect pointers (as on the PDP-10 and the custom Lisp machines). Additional customizations of a special AI instruction set design generally fall into the category of complete attached processors rather than just another instruction. This tactic has already been taken by many microprocessors whose floating point instructions are handled by what could be viewed as attached processors. The specific categories of important attached processors include: pipelined unifiers, associative memory sub-systems, multiprocessor communication packet switchers, and special signal processing chips for vision and speech. Studies of a custom instruction set for the FAIM-1 machine indicate that not only can a single processor be designed that is memory bound by DRAM access delays, but that this is the case even when a large cache is employed. This is an important fact. It means that parallel machines sharing a single large common memory are a bad idea; there is not enough memory bandwidth to go around.
5 Parallelism: The great hope
Traditionally, concurrency has been viewed as a great method of obtaining increased computational power. In practice, however, designers continue to concentrate upon making single processor machines faster and faster. However, now that hard technological limits have been hit for serial processors, parallelism has become recognized as perhaps the only hope for further orders of magnitude performance increases. Unfortunately concurrency is not free, as it brings new systems organizational problems to the fore. The first conceptual problem with parallelism is the confusion between multi-processing and multi-processors. There are algorithms that are very elegantly expressed in terms of a set of cooperating processes (e.g. writers and readers), but these same algorithms have little or no inherent parallelism that can be exploited by parallel computers. Just because an algorithm can be expressed in concurrent terms is no guarantee that, when run on many parallel processors, it will run significantly faster than as separate processes on a single sequential machine.
Just because an algorithm can be expressed in concurrent terms is no guaran- tee that, when run on many parallel processors, it will run significantly faster than as separate processes on a single sequential machine. The true measure of parallelism is how much faster a given program will run on n simple parallel processors com- pared to how fast it would run on a single simple processor, and for what ranges of n this is valid. The best one can hope for in principle is a factor of n speedup, but in practice this is rarely reached (due to overheads and communication conten- tion). The maximum amount of speedup attained for a given program upon any number of parallel processors indicates the inherent parallelism of that program. Unfortunately, for most existing programs written in traditional computer 75 languages, the maximum parallelism seems to be about 4 (Gajski 821. This surprisingly low number is due to the style of programming enforced by the traditional languages. There are special purpose exceptions to this rule, and the hope is that non-traditional parallel languages will encourage more concurrent algorithms. Compilers for parallel machines can take advantage of techniques such as Md, or, and stream parallelism if AI languages support concurrent control struc- tures that will gives rise to them. But the jury is still out as to the amount of speed up such techniques can deliver. Another problem in parallelism is failure to take the entire systems context into account. Before building a paral- lel machine one must not only simulate the machine but determine how to write large programs for it. This will reveal potential flaws in the machine before commencing with time consuming hardware development. If however the simulation does not properly take scheduling and technologically realistic hardware communication overhead into account, the timings produced will have little or no connection to reality. Good examples of software systems that have not taken realistic hardware considerations into account are some of the parallel Lisps that have been proposed, such as [Gabriel 841. These proposals point out places in Lisp-like processing where multiple processors could be exploited, but they do not analyze the overheads incurred. They usually assume that multiple processors are sharing a single large main memory where cons cells and other lisp objects are being stored. This is equivalent to assuming that memory is infinitely fast, which is just as un-realistic as assuming that processors are infinitely fast. The problem is that with current technology a single well designed Lisp processor could run faster than current mass memory technology could service it. Adding additional processors would thus not result in any throughput increase. There are several reasons why designers of parallel Lisps may have missed this fact. Perhaps one is that current 68000 Lisps are not memory bound. Another is the potential use of caches to reduce the required memory bandwidth to each processor. However even with caching, the number of pro- cessors that can be added is not unlimited; a 90% hit rate cache would allow only ten processors. What about the thousand processor architectures desired? Finally, experimen- tal data shows that a single processor can run signi6cantly fas- ter than memory can service it: one must employ a cache just to keep a single processor running full tilt! 
The lesson is that processors are still much faster than (bulk) memories, and any sharing of data between multiple processors (beyond a few) must be done with special communication channels. In other words, MIMD machines with a single shared memory are a bad parallel architecture. This has important implica- tions for some AI paradigms, such as Blackboard systems and Production systems that (in their current forms) rely upon memory for communication between tasks. This is not to say that there are not opportunities for spreading Lisp like processing across hundreds of processors. There are many techniques other than a single shared memory system for connecting processors. More realistic areas of research are the spreading of parallel inference com- putation via techniques of und, or, and streum parallelhun. The point is that all of these techniques incur some overhead, and one cannot simply solve the parallel computation prob lem by saying that arguments to functions should be evaluated in parallel. One must first study hardware teclmol- ogy to determine what at what grain sixes parallelism is fess- able, and then figure out how to make AI language compilers decompose programs into the appropriate size pieces. 6 Generic AI problems for custom VLSI One of the main hopes for more efficient computation in the future is the use of custom VLSI to accelerate particu- lar functions. The ideal functions for silicon implementation should currently be bottlenecks in AI systems, and generic to many AI tasks. Four classes of operations were identified that fit this description: symbolic matching of abstract objects, semantic associative memory, parallel processor communica- tion, and signal to symbol processing. Each will now be examined in detail. 6.1 Matching and Fctdbg The concept of matching two objects is a general and pervasive operation. Most AI languages define one or more match functions on their structured data types (such as frames.) Some of these match functions are very ad-hoc (thus supposedly flexible), but others are sub- or super-sets of unification. If significant support for matching is to be pro- vided in hardware, the match function must have well defined semantics. When a match function is applied to a data base of objects, the operation is called fetching. In this case matching becomes the inner loop operation, and this is a context in which matching should be optimized. An ideal solution would integrate matching circuitry in with memory circuitry, so that fetching would become a memory access of a content addressable memory (CAM). The choice of match function is critical. To obtain reasonable memory densities, the rela- tive silicon area of match circuitry cannot overwhelm that of the memory circuitry. Unfortunately, full unification and more complex match functions require too much circuitry to be built into memory cells. But if a formal subset of unification could be built in, then the CAM could act as a pre-filter function for unification. The primary source of unification complexity is the maintenance of the binding environment. The match func- tion of muck un+xtbn resembles full unification, except that all variables are treated as “don’t cares”, and no binding list is formed. It is the most powerful subset of unification that is state-free. Because of this, mock unification is a suitable can- didate for integration into VLSI memory. We name associa- tive memory systems that utilixe mock unification as their match function CxAM’s: Context Addressable Memories. 
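To make mock unification concrete, here is a small sketch (Python; the nested-tuple representation and the "?"-prefix convention for variables are assumptions for illustration, not the CxAM hardware interface). Variables match anything and no binding list is kept, so the test is state-free and can report false positives; that is precisely why it serves as a pre-filter in front of a full unifier.

# Mock unification: structural match with variables as don't-cares.
# S-expressions are modelled as nested Python tuples of atoms (strings).

def is_var(x):
    return isinstance(x, str) and x.startswith("?")

def mock_unify(a, b):
    """True if a and b could unify, ignoring variable bindings."""
    if is_var(a) or is_var(b):            # a variable matches anything
        return True
    if isinstance(a, tuple) and isinstance(b, tuple):
        return (len(a) == len(b) and
                all(mock_unify(x, y) for x, y in zip(a, b)))
    return a == b                          # atoms must be identical

query = ("father", "?x", "john")
memory = [("father", "harry", "john"),
          ("mother", "ann", "john"),
          ("father", "?y", "?z")]

candidates = [fact for fact in memory if mock_unify(query, fact)]
print(candidates)   # pre-filtered candidates to hand to a full unifier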
From a hardware point of view, designing associative memory architectures involves a resource tradeoff between processing and memory: the more hardware devoted to "matching", the more data that can be examined in parallel, leading to faster search time per bit of storage. But conversely, the more matching hardware there is, the smaller the amount of hardware that can be devoted to data memory, and the lower the density of the associative memory. The data path widths of the match hardware are also a factor in making these tradeoffs. Therefore associative memories can be rated by their storage density (bits stored per unit silicon area) and search throughput (bits searched per unit time per unit silicon area). We examined two classes of associative memory in which the match function is mock unification. One integrated the matching circuitry in with the memory circuitry; the other was hash based. Hashing was considered because in many applications in the past software hashing has dominated CAM technology [Feldman 69]. In more detail the two classes are:

Brute force search. The contents of a memory is exhaustively searched by some number of parallel match units. For this class of search a custom VLSI mock unification memory architecture was designed.

Hashing. Objects to be fetched are hashed, and then the collision list is serially searched by a match unit. A proposed VLSI implementation of PEARL's hashing scheme (called the HCP: Hash Co-Processor) served as an embodiment of hash based searching. In this system the bit storage is conventional DRAM.

[Figure 1: search throughput (bits matched per nanosecond) versus storage density for the search based CxAM designs and the hash based HCP, at roughly 15K bits/mm2. Figure 2: minimum system configuration in bits, roughly 100M for the hash based CxAM versus as little as 1K for the search based CxAM.]

Figures 1 and 2 display graphs of CxAM design space trade-offs. In Figure 1 the range of bit and search power densities is displayed. The hash based CxAM has a single operating point because the fetch time is essentially independent of memory size, as is the density. The search based CxAM has a variable range because one can vary the relative proportions of storage and processing in such architectures. The two lines represent two different search based architectures. One has inherently better bit density, but over most of the design space this advantage is negated by an inherently worse search throughput. However neither design completely dominates the other; a choice between the two will depend upon the relative storage density versus match throughput balance desired. In Figure 2 the defect of the hashing CxAM is displayed: the minimum usable size system is too large for some applications. Thus the trade-offs between these two schemes turn out to be in density and minimum usable size. As a representative data point, both techniques could perform a mock-unification of their entire local memory contents for an average query (an S-expression of length 16) in 5 µs. The density of the search based CxAM was about eight times worse than that of conventional single transistor DRAM. The hashing scheme utilized conventional DRAMs, and so had high density. But the minimum configuration of a hash based CxAM memory system utilizing standard 256K DRAMs is 10 megabits, whereas the search based CxAM can be configured for much smaller system storage sizes. This extremely high speed of 5 µs portends very efficient systems for those bottlenecked by data base fetch time. But which technique should be used is very dependent upon grain size.
If one were constructing a large non-parallel machine, a bank of HCP’s and conventional DRAMS would work well. But for an array of small grain processors with onchip memories, the search based CxAM approach is more tract- able. By combining a CxAM with software based routines, a range of tailored matching services can be provided, with slid- ing power-price/throughput trade-offs. The design of the FAIM-1 machine provides an example of this. For each of thousands of processors, there is parallel CxAM hardware for mock unification, a single (pipelined) serial hard-wired full unifier, and software support for post-unification matching features (attached predicates and demons). With such a hardware/software hierarchy, simple matches (like Lisp’s equal) will run fast, whereas more complex matching services (such as KRL’s [Bobrow 77D would cost more in time due to the software component. In summary, matching is a common operation ripe for VLSI implementation, but the complexity of match functions varies by orders of magnitude. Below a simple list of match operations and data types are arranged in order of complex- ity. Successful high performance AI machines will have to carefully decompose these function into hardware and cloftware components. I Match Hierarchy I Match Operation Object Type CornDare Instructioa 32 bit data object Lisp EQ Function Atomic Lisp Objecti Lisa EOUAL Function s-E!xDressioIls Mock Unification S-Exprcsaion with don’t cares Unification S-Exprtion with Matching Variables Unification 4% Prcdicatcn S-Exprcsaion with VariablcafF’rcdicatcs Arbitrary User Code 1 Arbitrary Uacr Representation Objects 63 PoralleJ Proassa r Communicatfons As mentioned several times previously, when utilizing a number of processors in parallel, they cannot communicate objects and messages by sharing a large common memory. Some sort of special message passing (and forwarding) hardware is absolutely essential for efficient handling of the traffic. In many general purpose parallel processors, interpro- cessor communication is rhe computational bottleneck. 6.3 Signal to Symbol processing Despite all attention given to speeding up high level sym- bolic computation, within some AI applications the main pro- cessing bottleneck has been in the very low level processing of raw sensory data. Within many vision systems 90% or more of the run time may be incurred in the initial segmenta- tion of the visual scene from pixels to low level symbolic con- structs [Perkins 781. Moreover limitations of the higher level vision processing usually are traceable to an inadequate initial segmentation peering 81bJ Similar problems arise in many speech systems. In such cases one should look to special pur- 77 pose VLSI processors to directly attack the problem. Exam- ples include special image processing chips, such as (Kurokawa 831, and speech chips, such as [Burleson 831. As array processors have shown us, for these special processors to be usable by programmers, they need to be very well integrated with the other hardware and software components of the system, and as transparent as possible to the program- mer. As most AI programmers are not good microcode hackers, one is in trouble if this is the only interface with a special device. 7 Conclusion Feldman 691 J. Feldman and P. Rovner, “An Algol Based Associative Language,” Commun. ACM, Vol. X2, No. 8, Aug. 1969. [Foderaro 831 J. Foderaro, “The Franz Lisp System,’ unpublished memo in Be&fey 42 UNIX Distribution, Sept. 1983. [Gabriel 841 R. Gabriel and J. 
McCarthy, “Queue-based Multi-processing Lisp”, preprint, 1984. [Gajski 821 D. Gajski, D. Pradua, D. Kuck and R. Kuhn, “A Second Opinion on Data Flow Machines and Languages,” IEEE Computer, Vol. IS, No. 2, Feb. 1982, pp. 5869. [Genesereth 831 M. Genesereth, “An overview of Meta- Level Architecture,” in Proc. AAAI83, Washington, DC., 1983. Opportunities for increased efficiency are present at all levels of AI systems if we only look, but to obtain the orders of magnitude throughput increases desired all these potential improvements must be made. We must make hard trade offs between traditional AI programming practices and the discip- line necessary to construct algorithms than can make effective use of large multiprocessors. We must compile our AI languages, and these compilers must influence instruction set design. Key computational bottlenecks in AI processing must be attacked with custom silicon. There is a real need to use concurrency at all levels where it makes sense, but the over- head must be analyzed realistically. Acknowledgments The author would like to acknowledge the contributions of members of the FAIM-1 project: Ken Olum for his colla- boration on the instruction set benchmarks, Ian Robinson and Erik Brunvand for their VLSI CxAM designs, and Al Davis for overall architectural discussions. [Bobrow 771 [Burleson 831 [Deering ala] [Deering 81b] [De-h w [Fateman 781 REFERENCES D. Bobrow and T. Winograd, “An overview of KRL-O, a knowledge representation language,” Cognitive Science, Vol. 1, No. 1, 1977. “A Programmable Bit-Serial Signal Process- ing Chip,” SM Thesis, MIT Dept. of Electri- cal Engineering and Computer Science, 1983. M. Deering, J. Faletti and R. Wilensky, “PEARL - A Package for Efficient Access for Representations in LISP,” in Proc. NCM81, Vancouver, B.C., Canada, Aug. 1981, pp. 930-932. M. Deering and C. Collins, “Real-Time Natural Scene Analysis for a Blind Prosthesis,” in Proc. IJCAI81, Vancouver, B.C., Canada, Aug. 1981, pp. 704-709. M. Deering and K. Olum, “Lisp and Proces sor Benchmarks,’ unpublished FLAIR Technical Report, March 1984. R. Fateman, “Is a Lisp Machine different from a Fortran Machine?,” SIGSAM Vol. 12, No. 3, Aug. 1978, pp. 8-11. [Griss 821 [Knight 811 [Kurokawa 831 [Lampson 801 [Perkins 781 [Warren 771 M. Griss and E. Benson, “Current Status of a Portable Lisp Compiler,” SIGPLAN, Vol. 17, No. 6, in Proc. SIGPLAN ‘82 Symposium on Compiler Construction, Boston, Mass., June. 1982, pp. 276283. T. Knight, Jr., D. Moon, J. Holloway and G. Steele, Jr., “CADR”, MIT AI Memo 528, March 1981. H. Kurokawa, K. Matsumoto, M Iwashita and T. Nukiyama, ‘The architecture and performance of Image Pipeline Processor,” in Proc. VLSI ‘83, Trondheim, Norway, Aug. 1983, pp. 275284. B. Lampson and K. Pier, “A Processor for a High-Performance Personal Computer,” Proc. 7th Symposium on Computer Architec- ture, SigArcMEEE, La Baule, May 1980, pp. 146160. W. Perkins, “A model based vision system for industrial parts,” IEEE Trans. Comput., Vol. C-27,1978, pp. 126-143. D. H. Warren, “Applied Logic - Its Uses and Implementation as a Programming Tool,’ PhD. Dissertation, University of Edinburgh, 1977, Available as Technical Note 290, Artificial Intelligence Center, SRI International. 78
1984
58
345
SYNTAX PROGRAMMING Stefan Feyock Department of Computer Science College of William and Mary Williamsburg, Virginia 23185 ABSTRACT This paper describes a new programming tech- nology that is to syntax analysis as formal logic is to logic programming, and which we have accord- ingly named syntax programming. The table-driven nature of bottom-up parsers provides this approach with a number of attractive features, among which are compactness, portability, and introspective capability. Syntax programming has been success- fully used for a number of applications, including expert system construction and robot control as well as non-AI problems. 1. - Introduction This paper describes a new programming tech- nology that is to syntax analysis and parser con- struction as formal logic is to logic programming (LP), and which we have accordingly named syntax programming (SP). This approach is made plausible by the strong formal similarity of BNF (Backus- Naur Form) productions to Horn clauses, and attractive by the power and elegance of present- day parser construction technology. Investigation of the syntax programming/logic programming analogy has led to results which we have found intriguing as well as encouraging. Like logic programming, SP provides a production- oriented programming framework similar to LP's Horn clauses, with the attendant inducements to orderly task composition and hierarchical program structure. More interesting, however, are the aspects of SP that differ from LP. SP programs are inherently table-driven, handle information propa- gation differently, and do not backtrack automati- cally. It thus appears that LP and SP are not direct competitors, but rather represent tailored approaches to specific types of problems. We will describe some of the strengths of SP, particu- larly its efficiency and capability for self- reference, and discuss work in progress toward a synthesis of SP and LP. 2. Overview of parser technology - -- Our development of SP has been concerned with bottom-up parsing techniques using some version of LR parsing. Top-down techniques were considered in the early phases of this project, and quickly rejected: most grammars occurring "naturally" are not parsable by the top-down approach, and must undergo various transformations to make them acceptable to such a parser. These transforma- tions, which are usually acceptable in programming language parsing, are not feasible in many AI applications. Expert systems, for example, are frequently required to display their rules to the user in order to explain their reasoning. If these rules have been distorted for the benefit of the parser, they may no longer be comprehensible to the user. We have found no instances, on the other hand, where such distortions were necessary when using the more powerful LR (specifically LALR) parsing technique as a basis for SP. We begin with a brief overview of the parser construction technology underlying our work. We will assume that the reader is familiar with BNF notation, and that this overview constitutes a review rather than an introduction to the basic concepts of compiler construction for him; if not, [l] is an appropriate source. 2.1. The MYSTRO Parser Generator -- --- Our research has been performed using the MYSTRO parser generation system developed at the College of William and Mary [6]. This system is written in Pascal, and was therefore easily modi- fied as required for this project. 
Since MYSTRO's basic organization is typical of the operation of parser generators, our exposition will consist of a description of this system. As shown in Fig. 1, MYSTRO is a program development tool that takes as input the BNF specification of the language to be processed, along with code specifying the semantics, i.e. the operations to be performed when a particular syn- tactic construct is encountered. The input to MYSTRO consists of a series of BNF productions and their associated semantics, as shown in Fig. 2. From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. MYSTRO analyzes this grammar and, if no errors are detected, produces a set of parse tables which are subsequently utilized by the parser to process its input. This parser is con- structed by MYSTRO by inserting declarations, scanner routines, and the semantics for each pro- duction into a preexisting skeleton parser pro- gram. An LR parser reads and stacks input text until the parse stack contains the right-hand side (rhs) of the appropriate production, as determined by the parse tables. At that time the semantics associated with that production are executed. The semantics stack referred to above is a parallel stack to the parse stack. The semantics stack ele- ments are typically records containing a field for +------- ---- -- ---- -+ input grammar --> MYSTRO parser generator parser skeleton--> 1 +------------------+ I parse tables +---a-----+ user interaction <--> 1 parser 1 +---------+ The operation of MYSTRO Fig. 1 * Lines with * in column 1 are comments. * A < in column 1 signifies the beginning * of the left-hand side (lhs) * of a BNF production. * <declaration> ::= DECLARE <id> INTEGER; * Here come the semantics, * denoted by a blank in column 1. <declaration>.location := get-free-word; * The notation <symbol>.attribute * occurring in the semantics is translated * by MYSTRO into a semantics stack reference: * sem stack[stack ptr].location := get free word; <declaration>.type := integer; - - <declaration>.value := <id>.value; Typical BNF Production and associated semantics Fig. 2 each item of interest, such as location, type9 value, etc. Upon completion of the semantics code the rhs on the parse stack is replaced by the lhs of the production. 2.2. Ambiguity Resolution -- It frequently happens that the grammar that is given to the parser generator is not acceptable to the parsing scheme used by the parser (MYSTRO is a LALR narser generator). MYSTRO allows the use of disambiguating predicates to deal with such contingencies. When the parser reaches a parse table entry corresponding to a shift/reduce con- flict the decision is always to shift; this has worked well in practice. Upon encountering a reduce/reduce conflict, the disambiguating predi- cates associated with the conflicting productions are evaluated in order, and the first one whose predicate evaluates to true is used. (If none of the predicates evaluates to true, the last produc- tion in sequence, which must not have a predicate associated with it, is used.) The robot controller given in the Appendix makes extensive use of such disambiguating predicates, which are denoted by a / in column 1. It should be noted that the disambiguating predicates play the role of metarules: when more than one rule "fires", these predicates are used to establish priority. 
They have access to the entire parsing environment, including the parse stack, present parse state, and the parse tables themselves, and thus can be used to do extensive introspection if desired. 3. A Syntax Programming Example - -- 3.1. Optimizer Expert System -- -- It is important to note that syntax program- ming, like logic programming, is a general-purpose software construction methodology rather than an AI-specific tool. Like LP, however, SP appears particularly I suited to the requirements of a variety of AI problems. Our first example is accordingly the SP version of an expert system currently under development by J. Rogers at NASA/Langley Research Center. It represents a con- sultation system to be sent to prospective users of the ADS-l General Purpose Optimization Program [Ill, a large package of FORTRAN-based optimizer programs for structural optimization. To use this package the user must make a number of decisions that depend on the nature of his optimization problem. In particular, he must decide on the strategy, optimizer, and type of one-dimensional search to be used. These decisions can require considerable expertise; the SP of Fig. 3 is an excerpt from an SP expert system that provides consultation to aid the user in making this deci- sion. This example serves to illustrate a number of points regarding syntax programming. Perhaps the most obvious is the extreme simplicity of this program. While many of the productions that con- stitute the actual system have been omitted, that system differs from our excerpt only in the number of productions. It should be emphasized that this system is an actual application that is to be sent out to ADS-l users, not a contrived toy problem. 111 It is interesting to note that much of the simplicity results from the fact that an LR parser maintains a large amount of information relevant to the problem automatically in its states and parse tables. Consider, for example, the item set corresponding to state 3, depicted in Fig. 4. (Readers not familiar with item sets: back to [l]!) As can be seen, this state automatically records the fact that "problem is gradient" is known, as well as indicating clearly the facts yet to be established. This automatically engendered knowledge maintainance facility explains the scar- city of explicit semantics associated with most of the productions. The s econd import ant point of this example is that it is typical of applic ations that require two of the fundamental advantages of syntax pro- grams: compactness and portability. If an expert consultant system is to be sent to a user commun- ity accustomed to FORTRAN packages, it is not usu- ally feasible to write the expert system in, say, INTERLISP or one of the expert system generators based on it, and then send the resulting system, which is apt to be quite large, to the user along with the blythe directive "Just put this system up on your machine and you'll be all set." On the other hand, the arguments against writing an expert system ad hoc in FORTRAN or other algo- rithmic languages are well known. Syntax programs bypass both sets of difficul- ties in an elegant manner. As depicted in Fig. 1, the output of the parser generator is a set of ? AMBIGUOUS+ XREF- MAXIMA- SCAN- ECHO+ <answer> ::= <optimizer choice> writeln(' END OF SESSTON. '); <optimizer choice> ::= <strat:2,opt:2,ld_search:4> * MYSTRO attaches no special significance to most special characters * such as underscore, comma, or colon. writeln(' STRATEGY IS 2, OPTIMIZER IS 2, AND lD-SEARCH IS 4. 
') * <optimizer-choice> ::= <strat:2,opt:4,ld search:4> writeln(' STRATEGY IS 2, OPTIMIZER IS 4, AND lD-SEARCH IS 4. ') * * (*** etc. --- other <optimizer choice> alternatives not shown ***) - * <strat:2,opt:2,ld_search:4> ::= <strat:2> <opt:2> <search:4> <strat:2> ::= <?ask,problem,is,lst-order> <strat:2 or 4> - - <strat:2 or 4> ::= + <?ask';prcblem,is,unconstrained> + <?ask,problem,has,more,than,5O,design,variables> + <?ask,problem,is,gradient> + <?ask,no,feasible,starting,points,can,be,found> * + in column 1 denotes continuation of production . , * (*** etc. --- remaining productions not shown ***) Typical Productions from SP Optimizer Expert System Fig. 3 State 3 <?ask,problem,is,gradient> shift [ 71 <opt:3> ::= <?ask,problem,is,gradient> . <?ask,problem,is,large,or,very,large> shift [ 91 <opt:5> **= <?ask,problem,is,g;adient> . <?ask,problem,is,medium,or,small> . . . . . . . Portion of Typical Item Set Fig. 4 tables, which are plugged into a parser skeleton to produce a running parser that embodies the expert system. This parser skeleton can be in whatever language is desired; we currently have a parser skeleton in Pascal, one in LISP, and are working on a FORTRAN version. Moreover, these parser skeletons are quite compact: the Pascal skeleton has fewer than 800 lines, while the LISP skeleton is less than half as large. 3.2. Turning a Parser into an SP Processor -- -- ---- We now turn to an important technical aspect of this example. Consider the production <strat:2> ::= <?ask,problem,is,lst-order> <strat:2 or 4> - - Recall that entities enclosed in < -- > are con- sidered to be single (nonterminal or pseudotermi- nal) symbols, regardless of their length. The symbol <strat:2 or 4> is a nonterminal defined on the right-hand sTdeof a separate production. The symbol <?ask,problem,is,lst-order>, on the other hand, is a pseudoterminal of a kind unique to SP. The scanner that is part of the parser skeleton has been modified to give pseudoterminals begin- ning with the character sequence "<?" special treatment: such pseudoterminals are deemed to be procedure calls to the (boolean) procedure named after the "?". The pseudoterminal <?ask,problem,is,lst-order> is thus handled by the scanner as if it were the procedure call ask('problem is lst-order'); The procedure ask simply queries the user regard- ing its inpuKtring; in this case it would gen- erate IS IT TRUE THAT problem is lst-order ? If the user's response is "yes", the effect is as if the pseudoterminal had indeed been encountered as head-of-input, and the parser proceeds accord- ingly; if not, the scanner seeks to establish the presence (truth) of alternate pseudoterminals, as directed by its tables. The pseudoterminals are read in as part of these tables. The parser/scanner modification we have just described is critical to SP. It effectively transforms the parsing environment from input text --> scanner --> tokens --> parser to data base <--> scanner --> tokens --> parser It is this generalization of the scanning mechan- ism that transforms a parser from a language pro- cessing device to a powerful general-purpose pro- gramming tool. 4. Argument passing and control flow - ~-~- We now turn to a comparison of two important aspects in which SP and LP differ: argument pass- ing and flow of control. Information in LP is propagated up and down the "parse tree", i.e. the tree of subgoals at a given point in the computation, by means of instantiation of argument variables as forced by unification. 
We assume the reader is familiar with this mechanism; if not, [3] and [7] are the stan- dard references. We have experimented with two approaches to argument manipulation and information propagation in connection with SP. One is the methodology we have described in our overview of parsing technol- %Y 9 which involves associating with each produc- tion certain semantic actions which consist of essentially unrestricted code written in the language of the parser (Pascal or LISP in our case) performing environment and arbitrary manipulat ion of the database at the time a reduction is pending. This information propagation scheme has several disadvantages. One of these is the aforementioned lack of discipline: the programming environment is that of the semantics language. For example, if the semantics language is Pascal, the semantics consist of essentially arbitrary Pascal code. A second disadvantage lies in the fact that information residing in the semantics of symbols on the parse stack below the left-hand side of the current production is not accessible in an orderly fashion; in other words, all attributes are syn- thesized attributes. 4.1. Affix Grammars -- - These disadvantages have led us to investi- gate the feasibilty of using affix grammars [9], which promised to provide a highly disciplined information propagation method that has strong formal similarity to the arguments used for infor- mation propagation in logic programs. This approach has proven to be highly productive, and has been used in a number of SP programs. The Appendix contains an SP program with affix argu- ments that implements a robot controller similar to the one presented in [lo]. A similar program has been used (after preprocessing as described in [9]> to control one of the robot arms in NASA/Langley Research Center's Intelligent Systems Research Laboratory. The theory underlying LR parsing of affix grammars is far too extensive to present in this space; we must confine ourselves to a very brief overview. Consider the production <puton>!object,support ::= <getspace>!object,support -place <putat>!object,place is entered, and that the database consists of mother(ann,john), father(harry,john), father(harry,jane) 4.2. Control Flow and Backtracking in SP -- ~-- -- Since, like most logic programmers, we had become accustomed to thinking of backtracking as a way of life, it was with some surprise that we noted that SP's lack of backtracking caused no difficulties in the problems we attacked. This was true even though these problems had not been chosen for their lack of backtracking requirements; rather, they were research problems that were "in the air" at NASA/Langley Research Center. Nonetheless there are many problems for which a backtracking solu- tion is natural, and it is desirable for SP pro- grams to be able cope with them. 4.3. -- Semantic Backtracking One obvious solution lies in the fact that the semantics of an SP program can contain calls to arbitrary procedures written in or callable from the language in which the parser is written. As indicated, we have implemented a parser skeleton in LISP, as well as one in Pascal that can call Lispkit LISP code [5]. Thus arbitrary LISP, Lispkit, or Pascal functions can be invoked, in particular functions that implement Prolog-like capabilities. PiL [8] provides such a function in (full) LISP, while [2] describes a purely applica- tive version suitable for a Lispkit LISP implemen- tation. 
By this means backtracking can be con- fined to situations where it is necessary for searching the solution space, and need not be used in roles which are more aptly filled by other con- trol structures. 4.4. Transforming Backtracking into Database -- Queries There is a further class of situations which are implemented by means of backtracking in LP, but which turn out to be easily implementable as straightforward database searches. Consider this example: sibling(x,y) :- parent(z,x),parent(z,y). /* sibling here actually refers to sibling or half-sibling */ parent(z,x) :- mother(z,x). parent(z,x) :- father(z,x). Suppose the query sibling(john,jane)? (in that order). Then the subgoal parent(ann,john) will be tried first, but will have to be retracted, since parent(ann,jane) cannot be esta- blished. An LR parser-driver SP program cannot perform such a backup. A query such as this would be handled by expressing it in terms of a database query. In relational terms, the given query is equivalent to the retrieval (in an idealized rela- tional query language) parent john intersect parent jane where-parent john = {z father(z,john) } union {z mother(z,john) } and parent jane = (2 father(z,jane> } - union {z 1 mother(z,jane) } A retrieval such as this (or its equivalent in the semantics language) would then appear as part of the semantics of the SP program. We have found that it is frequently possible to eliminate backup by means of such a transformation. 5. - Discussion Having presented the concepts underlying syntax programming, we now examine some of the implica- tions of this method of programming. These derive largely from the fact that the behavior of an LR- parser based syntax program is driven by the parse tables it reads in. We have already discussed the fact that SP programs thus inherit the compactness and high speed exhibited by LR parsers in general. We have not yet emphasized, however, one of the most important implications of this fact: since an SP program's behavior is determined by its parse tables, and since these parse tables can form part of the data base accesssed by the program, any SP program has the potential for extensive introspec- tion into its own operation. In particular it appears straightforward to provide the user with the capability to to ask questions such as " what is your present state?" and "what are the presently legal inputs?", and have the responses generated automatically on the basis of the parse tables and parse stack. ([4] discusses the imple- mentation of a similar capability for transition diagrams.) We consider this capability to be one of the most exciting consequences of our method, and are actively pursuing this aspect of SP. 5.1. Explanation of reasoning process -- The ability to explain its reasoning to the user is an indispensable feature of expert systems. SP-based expert systems achieve this effect very neatly: since their mode of operation is based on parsing, it is trivial for them to display their parse tree, which is a representation of their "reasoning process" so far. 114 6. Conclusion 4. Feyock, S., Transition Diagram-based CAI/HELP - Systems, International Journal of Man-Machine Stu- Syntax programming has been successfully applied dies 9, pp. 399-413, 1977. to a number of problems in addition to those presented in this paper. These problems include 5. Henderson, P, Functional Programming, the Tower-of-Hanoi problem, a graph manipulator, Prentice-Hall, 1980. 
an expert system to diagnose robot end effector malfunctions, as well as a NASA-funded project to 6. Noonan, R., and R. Collins, The MYSTRO Parser apply SP to the construction of an in-flight pilot Generator PARGEN User's Manual, Internal Report, aid system to provide malfunction consultation. Dept. of Computer Science, College of William and This last project is currently in progress and Mary, Willimamsburg, VA. typifies the sort of problem for which SP is well suited: the construction of rule-based expert sys- 7. Kowalski, R., Logic for Problem Solving, P-P terns that feature the compact size and high execu- North-Holland, 1979. tion speed inherent in table-driven LR parsing technology. 8. Wallace, R., An Easy Implementation of PiL (Prolog in LISP), SIGART Newsletter, No. 85, pp. 29-32, July 1983. REFERENCES 9. Watt, D. The Parsing Problem for Affix Gram- mars, Acta Informatica, v. 8, pp. l-20 (1977). 1. Aho, A., and J. Ullman, Principles of Compiler Construction, Addison-Wesley, 1977. - 10. Winston, P., and B. Horn, LISP, Addison- Wesley, 1981. 2. Carlsson, M., On Implementing Prolog in Func- tional Programming, Proc. of the 1984 Interna- tional Symposium on Logic Programming, pp. 154-159 Atlantic City, NJ, February 1984. 11. Vanderplaats, G., et al., ADS-l: A New General-Purpose Optimization Program, Proceedings of the AIAA/ASME/ASCE/AHS 24th Structures, Struc- tural Dynamics, and Materials Conference, pp. 3. Clocksin, W., and C. Mellish, Programming in 117-123, Lake Tahoe, Nevada, May 1983. - Prolog, Springer-Verlag, 1981. Appendix <goal> ::= <readconds>^object,support (apop 2) ; clear the affix stack * This syntax program uses the LISP s * <readconds>^object,support ::= (terpri) (print "object to move?,") (terpri) (print "support?,") (apush * <puton>!object,support ::= <getspace>!object,support "place (apop 3) ; * <puton> b!obje ct , support keleton. (apush ( read) (read) > (terp <putat>! objet t, place (terpri) ri <eof> <putat>!object,place ::= <grasp>!object <moveobject>!object,place <ungrasp>!object (apop 3) ; * ***** Several productions have been omitted here for brevity * <notsupported> ::= * epsilon productions often cause ambiguity (usually intentional) ! (! notsupported) * If (! notsupported) returns true, this production fires. ***** Remaining productions have been omitted ***** Complete program avai lable upon request for brevity. 115
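A small sketch may help make the database-retrieval transformation of Section 4.4 concrete. The following fragment (Python, illustrative only; the actual semantics would be written in the parser's language, Pascal or LISP) answers the sibling query from the example by set union and intersection over the fact base rather than by backtracking over the parent clauses.

# Facts from the Section 4.4 example.
mother = {("ann", "john")}
father = {("harry", "john"), ("harry", "jane")}

def parents_of(child):
    """{z | mother(z, child)} union {z | father(z, child)}"""
    return ({z for (z, c) in mother if c == child} |
            {z for (z, c) in father if c == child})

def sibling(x, y):
    """Sibling (or half-sibling): the two parent sets intersect."""
    return bool(parents_of(x) & parents_of(y))

print(sibling("john", "jane"))   # True: harry is a common parent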
1984
59
346
D-NODE RETARGETING IN BIDIRECTIONAL HEURlSTIC SEARCH George Politowski and Ire Fohl Computer and Information Sciences Uaivoruity of California, Santa Crux, CA 95064 ABSTRACT AIthough it is generally agreed that bidirectional heuristic search is potentially more efficient than unidirectional heuristic search, so far there have been no algorithms which realize this potential. The basic difficulty is that the two search trees (one rooted at the start, the other at the goal) do not meet in the mid- dle. This results in essentiofly two unidirectional starches and poorer performance. In this paper we present an efficient olge rithm for bidirectional heuristic search which overcomes this difficulty. We also compare this algorithm with de Champeaux’s BHFFA (2331 on the basis of search efficiency, solution quality, and computational cost. I. INTRODUCTION ,Searching for paths in very large graphs has been an impor- tant problem in AI research. Barr and Feigenbaum (11 gives an excellent overview of this area. In this paper we present a new algorithm for efficient bidirectionaf heuristic search. We demon- strate empirically that it is more efficient than other search methods, including previous bidirectional techniques [2,3,7,8]. The Heuristic Path Algorithm (HPA) (81 is a modified vet- sion of Dijkstra’s algorithm [4]. The specification of the evaluation function used to order the nodes is f =(1-w )*g +w*h , where g is the 1ength of the known path from the candidate node to the root of the search tree, h is the (heuristic) estimate of the distance (shortest path length) between the node and the goal, and w is a constant which adjusts the relative weights of the two terms. If w is zero, then HPA is equivalent to Dijkstra’s illgorithm. If w is less than or equal to one-half, and the heuristic estimate never exceeds the actual distance, and the edge costs in the graph are bounded Mow by some positive number, then HPA is still adminsibie (it. guaranteed to hnd the shortest path if any path exists) and consid- erably more efficient than breadth-first search. If w equals one. thin the search is calIed pure heuristic starch. Frequently heuristics which satisfy the admissibi1ity criterion are too weak to be of practical use. Also it is often the case that the length of the solution path is not of primary importance and finding any reasonable path is sufficient. In such caSeS it is gen- erally desirable to set w greater than one-half in HPA, or to use a more accurate (but non-admissible) heuristic, or both. These choices trade off path quality for search efficiency. It has been shown [9] that the efficiency of heuristic search may be improved if the search proceeds bidirectionally, i.e. if the search expands outwards from both the start and goal nodes until the searched areas overlap somewhere in between. Although this technique is guaranteed to improve non-heuristic breadth-first search in graphs of uniform density, it has so far not worked well with heuristic search because the expanded areas frequently do not meet ‘in the middle.’ In the worst case, bidirectional heuristic search performs worse than unidirectional heuristic search. Pohl [9] demonstrates this result for various models of error in tree spaces. Pohl [7] gives some data on bidirectional heuristic search using the l5puuie. The data was collected using a bidirectional version of the predecessor of HPA. This algorithm was first called the Very General Heuristic Algorithm (VGHA). 
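The node ordering that HPA uses is easy to state as code. The following sketch (Python; a generic weighted best-first search written for illustration, not Pohl's program) pops the open node with the smallest f = (1-w)*g + w*h; setting w = 0 gives Dijkstra-like behaviour and w = 1 gives pure heuristic search. The toy graph and the table of distance estimates are made-up illustrative data.

import heapq

def hpa_search(start, goal, neighbors, h, w=0.5):
    """Best-first search ordered by f = (1-w)*g + w*h; returns a path or None."""
    open_heap = [(w * h(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                f2 = (1 - w) * g2 + w * h(succ, goal)
                heapq.heappush(open_heap, (f2, g2, succ, path + [succ]))
    return None

# A toy graph; the "heuristic" is a table of estimated distances to D.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
estimate = {"A": 3, "B": 2, "C": 1, "D": 0}
print(hpa_search("A", "D", lambda v: graph[v],
                 lambda v, goal: estimate[v], w=0.5))   # ['A', 'B', 'C', 'D']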
Lawler [6] sum- marizes the potential efficiency of unidirectional and bidirectional search for both the heuristic and non-heuristic cases. De Champeaux [2,3] describes a Bidirectional Heuristic Front-to-Front Algorithm (BHFFA) which is intended to remedy the ‘meet in the middle’ problem. Data is included from a set of sample problems corresponding to those of PohI (71. The data shows that BHFFA found shorter paths and expanded less nudes than Pohl’s bidirectional algorithm. However, there are several problems with the data. One is that most of the problems are too easy to constitute a representative sample of the 15puzzle state space, and this may bias the results. Another is that the overall computational coat of the BHFFA is not adequately measured, although it is of critical importance in evaluating or selecting a search algorithm. A third problem concerns admissibility. Although the algorithm as formally presented is admissible, the heuristics, weightings, termination condition, and pruning involved in the implemented version all violate admissibility. This makes it difficult to determine whether the results which were obtained arc a product of the algorithm itself or of the particular implementa- tion. It is also difficult to be sure that the results would hold in the context of admissible search. One additional problem is that no data is presented to support the claim that the search trees did in fact meet in the middle, although our own tests of BHFFA indi- cate this result. In our current research we have attempted to avoid the pit- falls mentioned above. We have explicitly espoused non-admissible search and postponed all concerns about admissibiiity. We have conducted our tests on randomly generated (hard) problems. We have included data on how well the search trees met in the middle and on how costly the searches were. These precautions allow our data to be more easily interpreted and evaluated by other researchers. II. ANALYSIS AND DESCRIPTlON OF ALGORITHM As stated above, the main problem in bidirectional heuristic search is to make the two partial paths meet in the middle. The problem with Pohl’s bidirectional algorithm is that each search tree is ‘aimed’ at the root of the opposite tree. Pohl recognized this and compared the situation to two missiles ‘independently aimed at each others base in the hope that they would collide.’ [7, p. lO8] What is needed is some way of aiming at the front (i.e. the leaves) of the opposite tree rather than at its root. There are two advan- tages to this. First, there is a better chance of meeting the opv site front if you are aiming at it. Second, for most heuristics the aim is better when the target is closer. However, aiming at a front 274 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. rather than a single node is somewhat troublesome since the heuristic function is only designed to estimate the distance between two nodes. One way to overcome this difficulty is to choose from each front a representative node which will be used as a target for nodes in the opposite tree. We call such nodes d- nodes, and in the following we discuss a simple scheme for choos- ing these nodes. Figure 1 Consider a partially developed search tree, such as the one shown in Figure 1. The growth of the tree is guided by the heuris- tic function used in the search, and thus the whole tree is inclined, at least to some degree, towards the goal. 
This means that one can expect that on the average those nodes furthest from the root will also be closest to the goal. Based on the reasoning above, these nodes are the best candidates for the target to be aimed at from the opposite tree (not shown in figure). In particular, the very farthest node out from the root should be the one chosen. D-node selection based on this criterion costs only one comparison per node generated. We incorporated this idea into a bidirectional version of HPA in the following fashion:

1. Let the root node be the initial d-node in each tree.
2. Advance the search n moves in either the forward or backward direction, aiming at the d-node in the opposite tree. At the same time, keep track of the furthest node out, i.e. the one with the highest g value.
3. After n moves, if the g value of the furthest node out is greater than the g value of the last d-node in this tree, then the furthest node out becomes the new d-node. Each time this occurs, all of the nodes in the opposite front should be re-aimed at the new d-node.
4. Repeat steps 2 and 3 in the opposite direction.

The above algorithm does not specify a value for n. Sufficient analysis may enable one to choose a good value based on other search parameters such as branching rate, quality of heuristic, etc. Otherwise, an empirical choice can be made on the basis of some sample problems. In our work good results were obtained with values of n ranging from 25 to 125. A value of 75 was eventually chosen for generating the data included in this paper. It is instructive to consider what happens when n is too large or too small, because it provides insight into the behavior of the d-node algorithm. A value of n which is too large will lead to performance similar to unidirectional search. This is not surprising since for a sufficiently large n a path will be found unidirectionally before any reversal occurs. A value of n which is too small will lead to poor performance in two respects. First, the runtime will be high because the overhead to re-aim the opposite tree is incurred too often. Second, the path quality will be lower (i.e. the paths will be longer). To understand the reason for this it is necessary to consider Figure 1 again. Note that if there are several major branches in the search tree and a new target is being chosen after each move, then it is possible that successive targets are not in the same branch. When this happens, it causes the opposite tree to be re-aimed at a node which is not near the previous target. If this happens very often, the result is a long 'zig-zag' path.

III. THE TESTS

The evaluation function used by the d-node search algorithm is the same as that used by HPA, namely f = (1-w)*g + w*h, except that h is now the heuristic estimate of the distance from a particular node to the d-node of the opposite tree. This is in contrast to Pohl's algorithm, where h estimates the distance to the root of the opposite tree, and to unidirectional heuristic search, where h estimates the distance to the goal. Our aim was to develop an algorithm which would perform well for a variety of heuristics and over a range of w values. With this in mind we decided to test our algorithm on a set of 50 problems with four different heuristics at three different w values. The 15-puzzle was selected as a convenient and tractable problem domain. Appendix I shows the initial tile configurations for all 50 sample puzzles.
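The d-node scheme can be sketched compactly as two interleaved best-first searches whose targets are reset as the trees grow. The code below (Python) is a deliberately simplified rendering of steps 1-4 and the evaluation function just given, not the authors' implementation: the graph is assumed undirected, re-aiming an entire front is modelled by recomputing f against the current opposite d-node whenever a node is selected, and the d-node is updated as soon as a deeper node is expanded rather than only after each batch of n moves.

def d_node_search(start, goal, neighbors, h, w=0.75, n=5, limit=10000):
    """Bidirectional best-first search with d-node retargeting (simplified)."""
    sides = {"fwd": {"open": [(0, start, [start])], "closed": {}, "dnode": start, "gmax": 0},
             "bwd": {"open": [(0, goal,  [goal])],  "closed": {}, "dnode": goal,  "gmax": 0}}
    other = {"fwd": "bwd", "bwd": "fwd"}
    direction = "fwd"
    for _ in range(limit):
        me, opp = sides[direction], sides[other[direction]]
        for _ in range(n):                          # advance n moves in this direction
            if not me["open"]:
                return None
            target = opp["dnode"]                   # aim at the opposite tree's d-node
            entry = min(me["open"], key=lambda e: (1 - w) * e[0] + w * h(e[1], target))
            me["open"].remove(entry)
            g, node, path = entry
            if node in me["closed"]:
                continue
            me["closed"][node] = path
            if node in opp["closed"]:               # the two search trees have met
                joined = path + opp["closed"][node][::-1][1:]
                return joined if direction == "fwd" else joined[::-1]
            if g > me["gmax"]:                      # deepest node so far becomes the d-node
                me["gmax"], me["dnode"] = g, node
            for succ, cost in neighbors(node):
                if succ not in me["closed"]:
                    me["open"].append((g + cost, succ, path + [succ]))
        direction = other[direction]                # now search from the other end
    return None

# Toy undirected graph and a trivial heuristic (always 0), just to
# exercise the control structure:
edges = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}
print(d_node_search("A", "E", lambda v: [(u, 1) for u in edges[v]],
                    lambda a, b: 0, w=0.5, n=2))    # ['A', 'B', 'D', 'E']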
Problems 1 through 10 are the same puz- zles used by Pohl [7] and de Champzaux (21 for their tests. Prob- lems 11 through 25 were generated by hand; included here are some systematic attempts at generating hard puzzles. Problems 26 through 50 were randomly generated by a program. The exponen- tial nature of the problem space makes it highIy probable that ran- domly generated puzzles will be relalively hard, iz. their shortest solution paths will be relatively long with respect to the diameter of the state space. The four functions used to compute h are listed below. These functions were originally developed by Doran and hfichle [!?J, and they are the same functions as those used by Pohl and de Champeaux. 1. h=P 2. h =P +20&R 3. h=S 4. h =S +2O”R The three basic terms P , S, and R have the following definitions. 1. P(o &)=xipl where pI is the Manhattan distance between the position of tile i in a and in b. 2. s (0 b >=c jPlw5 where pf is as above and dl is the distance in a from tile i to the empty square. 3. R (u a) is the number of reversals in u with respect to b . A reversal means that for adjacent positions i and j, o(i)=b(j) and u(j)=b(i). Finally, the w values which we used were 05, 0.75, and 1.0. This covers the entire ‘interesting’ range from w = 05, which will result in admissible search with a suitable heuristic, to w = 1.0, which is pure heuristic search. XV. THE RESULTS The results of our test of the d-node algorithm are shown in Table 1. For the purpose of comparison, we conducted identical tests on several other algorithms. These results are shown in Tables 2, 3, and 4 for unidirectional HPA, bidirectional HPA (Pohl’s algorithm), and de Champeaux’s BHFFA, respectively. For each algorithm, the set of 50 sample problems was run 12 times (once for each weighting of each heuristic); data was collected separately for each batch of problems. Listed below are the mean- ings of the code letters used in the tables. All of the averages were computed on the basis of solved puzzles only. The search was terminated after 3OCXl moves if no solution was found. TABLE 3 h = 1 h-2 h-3 h=4 S 4 S 6 S 18 S 44 I p 245 w=OJ ; &;0 N 1052.5 P 30.7 P 693 P 80.1 D 5.7 D 643 D 73.9 M 12582 h4 1031.4 M 948.4 N 24783 N 23532 N 22263 S number of problems solved. P average path length. D average difference between the length of the partial path in the forward tree and the partial path in the backward tree. This is a measure of how well the paths met in the middle. (bidirectional onlyj M average number of moves, N average number of nodes generated. T average CPU time in seconds. 
T 143 T 42.0 T 43.9 T 43.6 s 19 s 34 s 22 s 50 P D M N T 60.1 P 66.6 P 97.0 P 51.0 D 59.4 D 91.0 D 1507.7 M 12778 M 15695 M 30815 N 26148 N 36076 N 37.6 T 35.4 T 745 T 99.4 94.6 6595 15698 262 50 120.6 1163 756.6 17952 29.9 S 34 P 1755 D 1695 M 17033 N 3518.7 T 31.9 Table - Bidirectional I-PA w = 0.75 TABLE11 h=l 1 h=2 1 h=3 S 6 S 12 S 42 P 323 P 42.0 P 1043 w=OJ D 9.7 D 7.7 D 20.6 M 784.0 M 11605 M 9955 N 1600.0 N 23473 N 2277.8 T 233 T 43-6 T 71.0 S 41 s 50 s 47 P 96.1 P 863 P 1812 w = 0.75 D 203 D 179 D 28.0 M 1170.0 M 701.6 M 1079.9 N 24383 N 1466.9 N 2470.6 T 375 T 253 T 936 s 50 s 50 s 48 P 280.4 P 1495 P 298.6 w=l-o D 29.0 D 253 D 29.0 M 909.6 M 4156 M 11442 N 1917.4 N 8768 N 26501) T 315 T 143 T 1126 Tabie 1 - D-node Algorithm h=4 s 50 P 95.0 D 20.7 M 469.7 N 1103.4 T 295 -- s 50 P 120.1 D 19.1 M 3715 N 878.1 T 235 s 50 P 255.6 D 25.4 M 3863 N 9185 T 238 w = I.0 TABLE 4 h=l s 26 P 816 D 48 M 10453 N 21981) T 4268 s 243 P 135.4 D 122 M 985-U N 2086.9 T 4059 h=2 s 44 P 755 D 4.4 M 861.0 N 1795.1 T 5145 S 47 P 1253 D 11.0 M 825.5 N 1750.6 T 500.9 S 50 P 185.8 D 20.0 M g668.4 N 18495 , T 522.9 h=3 h=4 S 21 s 50 P 98.6 P 78.4 D 9.0 D 7.0 M 9968 M 346.7 N 2213.1 N 773.1 T 10203 T 411.6 w =OJ s 26 s 50 P 1725 P 913 D 105 D 10.4 M 14t2.7 M 324.5 N 3217.7 N 728.8 T lXU3 T 384.2 w = 0.75 S 33 s 50 P 2005 P 111.2 D 132 D 13.7 xi 1167.7 M 362.9 N 2561.9 N 815.6 T lL32.6 T 428.9 S 43 P 2532 D 263 M 1311.6 N 27813 LT 5433 fi h=3 1 h=4 I S 24 IS 46 w = 1.0 P 693 P 813 M 1373.8 M 827.4 N 3087.0 N 1917.1 I- T 933 1 T 47.7 Table 4 - BHFFA saving is not as dramatic because the overhead rql S P M N T 50 1085 6128 1434 .o 31.! ---_ 50 136.7 544.4 1277.1 23.1 662 l231.4 2517.0 425 --.- 49 1588 7203 1506.0 555 P 1101.4 M 2247.9 N 34.9 T -!-- -76 I s 2201 1 P 15215 M 31672 N lix node method is somewhat higher than it is in Pohl’s algorithm. The results also show that the performance of unidirectional HPA is comparable to Pohl’s bidircctioaa1 algorithm. S E’ M N T When comparing corresponding blocks in the tables it should be noted that S is the dominating statistic, i.e. if S differs greatly in two corresponding blocks, then the other data in the block are no longer directIy comparable. This is because the block with the smaller S represents the soIutioa of easier problems (ix. those with shorter paths) which means that P is sure to be smaIIer in that block, and more than likely M, N and T as wtI1. if two corresponding blocks have comparable values of S, then it ir rea- sonable to compare the other statistics. 1 T 34.0 1 T 192 Tabfc 2 - Unidirectional HPA The most significant result is that the d-node method dom- inates both previously published bidirectional techniques, regard- less of heuristic or weighting. In comparison to de Champeaux’s BHFFA, the d-node method is typicalIy 10 to 20 times faster. This is chiefly because the front-to-front calculations required by BHFFA are computationally expensive, even though the number of nodes expanded is roughly comparable for both methods. In comparison to PohI’s bidirectional algorithm, the d-node method typically solves far more probiems, and when solving the same pr&Iems it expands approximately half as many nodes. The time Another consideration concerning the data from BHF’FA is that the algorithm requires pruning to restrict the size of the fronts. This has various effects on the search results, depending on which pruning technique is used. We restricted the front size to 50 nodes by pruning off those nodes with the lowest g values. 
Previous tests which we conducted indicate that this technique accounts for the high number of solutions in the blocks in Table 4 corresponding to heuristics 1 and 2 with w = 05. 276 V. FURTHER RESEARCH Further investigation of the d-node Plgorithm is planned. Preliminary work by Politowski aad Chapman oa higher dimen- sional sliding block puzzIes supports the current results. In the near future other combinatorial problems, such as the Rubik’s cube will bc similarly tested. Other areas to be worked on are path quaIity considerations (in&ding adrnissibiIity) and a better formal mode! for understanding the performance of this algorithm. ACKNOWLEDGEMENTS Some of the ideas in this paper are based on discussiun with and work of Brian Chapman, Dan Chenet, and Phil Levy. APPENDIX I GOAL: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 - 1: 12 3 4 5 6 7 81315141110 912 - 2: 9 5 1 3 13 7 2 8 14 6 4 11 10 15 12 - 3: 6 2 4 7 5 15 11 8 10 1 3 12 13 9 14 - 4: 1 3 7 4 9 5 8 11 13 6 2 12 10 14 15 - 5: 2 5 6 4 9 1 15 7 14 13 3 8 10 12 11 - 6: 1 2 3 4 5 6 7 8 10 12 11 13 15 14 9 - 7: 1 4 2 3 6 5 8 11 14 9 12 15 10 13 7 - 8: 7l3111 - 4 14 6 8 5 2 12 10 15 9 3 9: 15 1114 12 7 10 13 9 8 4 6 5 3 2 1 - 10: 9 21310 512 7 414 l-1511 6 3 8 11: 15 5 7 2l31011 412 9 - 8 114 3 6 12: 13 I5 5 2 4 10 1 7 14 3 9 8 I.2 - II 6 l3: 1214 4 611 513 215 3 9 110 8 7 - 14: 11 5 12 14 10 - 74l339615812 l5: 14 4 16 9lOl311 2 -12 8 315 7 5 16: 912lOl3 4 7 114 211 5 - 815 6 3 17: I5 512 811 414 19 213 6 3 7 -10 18: I3 7 5 814 110 - 4 9l511 2 3 6I2 19: -151413121110 9 8 7 6 5 4 3 2 1 20: 2 14 3 6 5 8 710 912111415l3 - 21: l514l3121110 9 8 - 7 6 5 J 3 2 1 22: - 4 9 3 6 214 715 5 1810111312 23: 613 9 1311 7 -1215 51410 4 2 8 24: - 10 1 6 2 13 14 12 11 8 5 9 3 15 4 7 25: 5 6 7 g 9101112131415 - 12 4 3 26: 10 15 -1513 2 8 9 712 6 3 41114 27: 4 113 71011 5 6 - 8 3 91415 212 28: 7 1 13 15 6 9 11 8 4 5 - 10 12 14 3 2 29: 4 - 1 15 13 3 9 11 7 10 12 8 5 14 6 2 30: 14 19 5 713 411 -1Ol2l5 3 8 2 6 31: 7 - 1 10 12 11 9 8 5 6 3 14 2 13 15 4 32: 1 2 10 15 6 8 14 7 - 9 4 13 5 11 12 3 33: 10 12 4l3 815 - 314 7 6 91112 5 34: 4 11012 91413 211 - 6 8l5 3 7 5 35: l3 12l5 3 81411 412 - 7 910 6 5 36: 5 114 4llU 8l5 912 6 7 - 310 2 37: - 3 7 10 5 11 13 12 2 15 1 6 8 14 4 9 38: 14 3111213 4 2 7 9 6 -10 5 115 8 39: 7 2 315 -14 8l311 1910 412 6 5 40: 1 4 12 6 10 13 3 5 11 7 9 15 2 14 8 - 41: l533- 5 14 6 13 7 10 8 1 11 4 9 12 2 42: 8 3 710 9 511 ll5 -1312 214 4 6 43: 12 4 8 5 9 113 71011 - 61514 3 2 44: 4 215 9 - 3 6 10 S 11 I2 7 13 8 1 14 45: I5 4 8 14 10 - 2 91312 111 3 7 5 6 46: - 5131015 2 19 314 6 4 7 81ll2 47: 10 5 6 - 9 3 12 14 13 1 4 7 11 8 2 l.5 48: 3 513 4 - 6 11 8 l5 10 9 14 1 12 2 7 49: I.3 5 6 9 10 - 15 3 7 8 4 114 12 2 11 50: - 5 6 41012 2 3 9 8 1714Ul511 1. 2. 3. 4. 5. 6. 7. 8. 9. REFERENCES Barr, A. and E. A. Fcigenbaum, eds., The Handbook of Artificial InteIIigence, (WiiIiam Kaufmana, Inc., Los Altos, CA, 1981). De Champeaux, D. aad L. Sint, ‘An Improved Bidirectional Heuristic Search Aigorithm,’ (Journal of the ACM, Vol. 24, No. 2, April 1977, pp. 177-191). De Champeaux, D., ‘Bidirectional Heuristic Search Again,’ (Journal of the ACM, Vol. 30, No. 1, January 1983, pp. 22- W Dijkstra, E., ‘A note oa two problems in connection with graphs,’ (Numerische Mathematik, Vol. 1.1959, pp- 269-271). Doraa, J. and D. Michie, ‘Experiments with the Graph Traverser program,’ (Proceedings of the Royal Society A, Vol. 294,1966, pp. 235-259). Lawler, E. L., M. G. Luby and B. Parker, ‘Finding Shortest Paths in Very Large Networks,’ (unpublished, 1983). 
7. Pohl, I., 'Bi-directional and Heuristic Search in Path Problems,' (SLAC Report 104, Stanford Univ., Stanford, CA, 1969).
8. Pohl, I., 'Bi-directional Search,' (Machine Intelligence, Vol. 6, 1971, pp. 127-140).
9. Pohl, I., 'Practical and Theoretical Considerations in Heuristic Search Algorithms,' (Machine Intelligence, Vol. 8, 1977, pp. 55-72).
Initial Assessment of Architectures for Production Systems

Charles Forgy, Anoop Gupta, Allen Newell, Robert Wedig
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

Abstract

Although production systems are appropriate for many applications in the artificial intelligence and expert systems areas, there are applications for which they are not fast enough to be used. If they are to be used for very large problems with severe time constraints, speed increases are essential. Recognizing that substantial further increases are not likely to be achieved through software techniques, the PSM project has begun investigating the use of hardware support for production system interpreters. The first task undertaken in the project was to attempt to understand the space of architectural possibilities and the trade-offs involved. This article presents the initial findings of the project. Briefly, the preliminary results indicate that the most attractive architecture for production systems is a machine containing a small number of very simple and very fast processors.

1. Introduction

Forward-chaining production systems are used extensively in artificial intelligence today. They are especially popular for use in the construction of knowledge-based expert systems [9, 11, 13, 14, 17]. Unfortunately, production systems are rather slow compared to more conventional programming languages. Consequently some computationally intensive tasks that are otherwise suitable for these systems cannot be implemented as production systems. The Production System Machine (PSM) project was created to develop hardware solutions to this problem. The first goal of the project is to understand the space of architectural possibilities for the PSM and the trade-offs involved. This article describes the initial results of the studies performed by the PSM project.

The rest of the paper consists of the following sections. Section 2 provides a brief description of the OPS production systems considered by the PSM project and includes a description of the Rete algorithm that is used to implement them. The Rete algorithm forms the basis for much of the later work. Section 3 elaborates on the need for hardware for production systems. It explains why we do not expect substantial further speed-ups from software techniques. Section 4 presents the results of measurements of some existing production system programs. The measurements enable us to explore the possibility of using parallelism in executing production system programs. Sections 5, 6, and 7 discuss three methods for speeding up the execution of production systems. Section 5 considers the role of parallelism, Section 6 considers processor architectures, and Section 7 considers hardware technology issues. The conclusions are presented in Section 8.

1 With the Department of Computer Science.
2 With the Department of Electrical and Computer Engineering.

2. Background

The PSM project is concerned with the OPS family of production systems [2, 4, 6]. These languages are for writing pure forward-chaining systems. An OPS program consists of a collection of production rules (or just "productions") and a global data base called working memory. Each production has a left-hand side which is a logical expression and a right-hand side consisting of zero or more executable statements. The logical expression in the left-hand side is composed of one or more conditions.
A condition is a pattern; the left-hand side of a production is considered satisfied when every condition matches an element in working memory. The OPS interpreter executes a program by performing the following cycle of operations:

1. Match: The left-hand sides of all the productions are matched against the contents of working memory. The set of satisfied productions is called the conflict set.
2. Conflict Resolution: One of the satisfied productions is selected from the conflict set. If the conflict set is empty, the execution halts.
3. Act: The statements in the selected production's right-hand side are executed. The execution of these statements usually results in changes to the working memory. At the end of this step, the match step is executed again.

In this paper we are primarily concerned with speeding up the match operation. This is because the match operation takes most of the run time of interpreters that are implemented in software on uniprocessors. Moreover, when OPS is run on a parallel machine (which the PSM will be) the three operations can be pipelined, and much of the time required for conflict resolution and act can be overlapped with the time taken for the match. The total run time will consist of the time for the match plus a small amount of start-up time for the other two operations.

The algorithm that will be used in the production system machine is the Rete match algorithm [1, 3]. This algorithm has been used with variations in all the software implementations of OPS. It exploits two basic properties of OPS production systems to reduce the amount of processing required in the match:

- The slow rate of change of working memory. It is common for working memory to contain from a few hundred to over a thousand elements. Typically, executing a production results in two to four of the elements being changed. Thus on each cycle of the system, the vast majority of the information that the matcher needs is identical to the information it used on the previous cycle. Rete matchers take advantage of this by saving state between cycles.
- The similarities among the left-hand sides. The left-hand sides of productions in a program always contain many common subexpressions. Rete attempts to locate the common subexpressions so that at run-time the matcher can evaluate each of these expressions only once.

The Rete interpreter processes the left-hand sides of the productions prior to executing the system. It compiles the left-hand sides into a network that specifies the computations that the matcher has to perform in order to effect the mapping from changes in working memory to changes in the conflict set. The network is a dataflow graph. The input to the network consists of changes to working memory encoded in data structures called tokens. Other tokens output from the network specify the changes that must be made to the conflict set. As the tokens flow through the network, they activate the nodes, causing them to perform the necessary operations, creating new tokens that pass on to subsequent nodes in the network. The network contains essentially four kinds of nodes:

- Constant-test nodes: These nodes test constant features of working memory elements. They effectively implement a sorting network and process each element added to or deleted from working memory to determine which conditions the element matches.
- Memory nodes: These nodes maintain the matcher's state.
They store lists of tokens that match individual conditions or groups of conditions.

- Two-input nodes: These nodes access the information stored by the memory nodes to determine whether groups of conditions are satisfied. For example, a two-input node might access the lists of tokens that have been determined to match two conditions of some production individually and determine whether there are any pairs of tokens that match the two conditions together. In general, not all pairs will match because the left-hand side may specify constraints such as consistency of variable bindings that have to hold between the two conditions. When a two-input node finds two tokens that match simultaneously, it builds a larger token indicating that fact and passes it to subsequent nodes in the network.
- Terminal nodes: Terminal nodes are concerned with changes to the conflict set. When one of these nodes is activated, it adds a production to or removes a production from the conflict set. The processing performed by the other nodes insures that these nodes are activated only when conflict set changes are required.

3. The Need for Hardware

The previous work on the efficiency of OPS systems has concentrated on software techniques. Over the past several years, improvements in the software have brought about substantial speed increases. The first LISP-based version of OPS was OPS2, which was implemented in 1978 [5]. The widely-used LISP version OPS5 was implemented about 1980 [2]. The improvements in software technology during that time made OPS5 at least five to ten times faster than OPS2. OPS5/LISP has been followed by two major reimplementations: an interpreter for OPS5 written in BLISS (a systems programming language) and the OPS83 interpreter [6]. OPS5/BLISS is at least six times faster than OPS5/LISP, and OPS83 is at least four times faster than OPS5/BLISS.3 The speed-up from OPS2 to OPS5/BLISS resulted from a number of factors, including changing the representations of the important data structures and putting in special code to handle common cases efficiently. The additional speed-up of OPS83 resulted primarily from a new method of compiling left-hand sides. In all earlier versions of OPS, the left-hand sides were compiled into an intermediate representation that had to be interpreted at run time; in OPS83, the left-hand sides are compiled into native machine code.

3 In absolute terms, a large production system with a large working memory and moderately complex left-hand sides (e.g., R1 [13]) might be expected to run at a rate of one to two production firings per second with OPS5/LISP running on a VAX 11/780; at a rate of six to twelve firings per second with OPS5/BLISS; and a rate of twenty-five to fifty firings per second with OPS83.

It appears that with the advent of OPS83, further substantial improvements in software techniques have become difficult to achieve. Some amount of optimization of the compiled code is certainly possible, but this is expected to result in rather small increases in speed compared to what has occurred in recent years. The code that the OPS83 compiler produces is fairly good already. A factor of two speed-up due to compiler optimizations might be achieved; a factor of five seems unlikely at this time. Since the importance of achieving further speed increases for OPS is so clearly indicated, we feel that it is essential to investigate hardware support for OPS interpreters.4
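The cycle described in Section 2 can be made concrete with a short sketch. The code below is purely illustrative and is not any of the OPS implementations discussed in this paper: it uses a naive matcher that re-examines all of working memory on every cycle (exactly the repeated work that Rete's saved state avoids), hypothetical data structures (tuples for working memory elements, '?x'-style variables), trivial conflict resolution, and Python simply for brevity.

    # Illustrative sketch only; not OPS5, OPS83, or the PSM design.
    # Working memory is a list of tuples; a production is (name, lhs, rhs),
    # where lhs is a list of condition tuples mixing constants and '?var'
    # variables, and rhs is a function that changes working memory.

    def match_lhs(conditions, wm):
        """Return every consistent variable binding that satisfies all conditions."""
        def unify(cond, elem, env):
            if len(cond) != len(elem):
                return None
            env = dict(env)
            for c, e in zip(cond, elem):
                if isinstance(c, str) and c.startswith('?'):
                    if env.get(c, e) != e:        # variables must bind consistently
                        return None
                    env[c] = e
                elif c != e:                      # constant test
                    return None
            return env

        envs = [{}]
        for cond in conditions:
            envs = [new for env in envs for elem in wm
                    if (new := unify(cond, elem, env)) is not None]
        return envs

    def run(productions, wm, max_cycles=100):
        for _ in range(max_cycles):
            # 1. Match: every production against all of working memory (no saved state).
            conflict_set = [(name, env, rhs)
                            for name, lhs, rhs in productions
                            for env in match_lhs(lhs, wm)]
            if not conflict_set:                  # 2. Conflict resolution: halt if empty,
                break                             #    otherwise pick one instantiation
            name, env, rhs = conflict_set[0]      #    (here, simply the first; real
            rhs(env, wm)                          #    interpreters use recency rules).
        return wm                                 # 3. Act: rhs() changed working memory.

Even this toy version makes the cost structure visible: the match step dominates, and almost all of its work is repeated from one cycle to the next, which is what motivates both Rete's state saving and the hardware studies that follow.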
4. Measurements of Production Systems

One of the first tasks undertaken by the PSM group was to perform extensive measurements of production systems running in OPS5. These measurements were necessary to evaluate the possibilities for speeding up Rete interpreters. Six systems were measured: R1 [13], a program for configuring VAX computer systems; XSEL [14], a program which acts as a sales assistant for VAX computer systems; PTRANS [9], a program for factory management; HAUNT, an adventure-game program developed by John Laird; DAA [11], a program for VLSI design; and SOAR [12], an experimental problem-solving architecture implemented as a production system. The R1, XSEL, and PTRANS programs were chosen because they are three of the largest production systems ever written, and because they are actually being used as expert systems in industry. The DAA program was chosen because it represents a computation-intensive task compared to the knowledge-intensive tasks performed by the previous programs. The SOAR program was chosen because it embodies a new paradigm for the use of production systems. Altogether, the six programs represent a wide spectrum of applications and programming styles. The systems contain from 100 to 2000 productions and from 50 to 1000 working memory elements. A few of the more important results are presented here; more detailed results can be found in [7].

The first set of measurements concern the surface characteristics of production system programs, that is, the characteristics of the programs that can be described without reference to the implementation techniques used in the interpreter. Table 1 shows the results. The first line gives the number of productions in each of the measured programs.5 The second line gives the average number of conditions per production. The number of conditions in a production affects the complexity of the match for that production. The third line gives the average number of actions per production. The number of actions determines how much working memory is changed when a typical production fires. Together these numbers give an indication of the size and complexity of the productions in the systems. They show that productions are typically simple, containing neither large numbers of conditions nor large numbers of actions.

Feature           R1     XSEL   PTRANS  HAUNT   DAA    SOAR
1. Productions    1932   1443   1016    834     131    103
2. Conds/Prod     5.6    3.8    3.1     2.4     3.9    5.8
3. Actions/Prod   2.9    2.4    3.6     2.5     2.9    1.8

Table 1: Summary of Surface Measurements

4 The DADO project at Columbia University is also investigating hardware support for production systems [8, 18].
5 In some cases only a subset of the complete production system program was measured because of problems with the LISP garbage collector. The numbers given in the table indicate the number of productions in the subset of the program that was measured.

The second set of measurements relate to the run-time activity of the OPS5 interpreter. Table 2 shows how many nodes are activated on average after each change to working memory. Line 1 shows the number of constant-test nodes activated. Although constant-test node activations constitute a large fraction (65%) of the total node activations, only a small fraction (10% to 30%) of the total match time is spent in processing them. This is because the processing associated with constant-test nodes is very simple compared to the memory nodes and the two-input nodes. Line 2 shows the number of memory nodes activated, and Line 3 the number of two-input nodes.
Most of the matcher's time is spent evaluating these two kinds of nodes. Line 4 shows the number of terminal nodes activated. Since these numbers are small, updating the conflict set is a comparatively inexpensive operation. There are two major conclusions that can be drawn from this table. First, except for the constant-test nodes, the number of nodes activated is quite small. Second, and perhaps more significantly, except for the constant-test nodes, the numbers are essentially independent of the number of productions in the system.6 This is important in the design of parallel production system interpreters (see the discussion of parallelism below).

Node Type         R1     XSEL   PTRANS  HAUNT   DAA    SOAR
1. Constant-test  136.3  105.3  122.1   88.5    35.9   26.5
2. Memory         12.3   8.7    10.7    12.5    4.0    11.1
3. Two-input      47.1   32.4   35.0    36.8    22.2   39.5
4. Terminal       1.0    1.7    1.7     1.5     2.0    4.0

Table 2: Node Activations per Working Memory Change

6 There are known methods of reducing the effect of production system size on the number of constant-test node activations (see [1]).

5. Parallelism

On the surface, the production system model of computation appears to admit a large amount of parallelism. This is because it is possible to perform match for all productions in parallel. Even after the left-hand sides have been compiled into a Rete network, the task still appears to admit a large amount of parallelism, because different paths through the network can be processed in parallel. It is our current assessment, however, that the speed-up available from parallelism in production systems is much smaller than it initially appears. We are exploring three sources of parallelism for the match step in production system programs: production-level, condition-level, and action-level parallelism. In the following paragraphs we briefly describe each of these three sources, and where possible give the speed-up that we expect from that source.

5.1. Production-level Parallelism

In production-level parallelism, the productions in the system are divided into several groups and a separate process is constructed to perform match for each group. All the processes can execute in parallel. The extreme case for production-level parallelism is when the match for each production is performed in parallel. The major advantage of production-level parallelism is that no communication is required between the processes performing the match, although the changes to working memory must be communicated to all processes. Since the communications requirements are very limited, both shared memory and non-shared memory multiprocessor architectures can exploit production-level parallelism.

The measurements described in Section 4 are useful in determining the amount of speed-up that is potentially available from production-level parallelism. Line 3 of Table 2 shows that on average each change to working memory causes about thirty-five two-input nodes to be activated. Since the sharing of nodes at this level of the network is limited, the number of two-input nodes activated is approximately equal to the number of productions containing conditions that match the working memory element. Thus, on average, when an element is added to or deleted from working memory, the stored state for thirty-five productions must be updated.7 The number of affected productions is significant because most of the match time is devoted to these productions. Thus the immediately apparent upper bound to the amount of speed-up from production-level parallelism is around thirty-five.
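The grouping idea behind production-level parallelism can be sketched as follows. This is an illustration only, not the PSM design: it assumes the naive match_lhs helper from the earlier sketch, hypothetical rule groups, and ordinary Python threads standing in for the match processes.

    # Illustrative sketch of production-level parallelism.  Each worker matches
    # one group of productions against the same working memory; the partial
    # conflict sets are then combined.  Assumes match_lhs() from the earlier sketch.

    from concurrent.futures import ThreadPoolExecutor

    def match_group(group, wm):
        return [(name, env) for name, lhs, _ in group for env in match_lhs(lhs, wm)]

    def parallel_match(production_groups, wm):
        with ThreadPoolExecutor(max_workers=max(1, len(production_groups))) as pool:
            partial_sets = list(pool.map(lambda group: match_group(group, wm),
                                         production_groups))
        return [instance for partial in partial_sets for instance in partial]

Note that parallel_match returns only after the slowest group has finished, so the cycle time is set by the most expensive group; this is exactly the load-imbalance effect discussed next.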
However, it is easy to see that this is a very optimistic upper bound. Measurements show that it is common for a few of the affected productions to require five or more times as much processing as the average production. Thus in a machine that uses substantial amounts of production-level parallelism, the match would be characterized by a brief flurry of parallel activity followed by a long period when only a few processors are busy. The average concurrency would be much lower than the peak concurrency.

5.2. Condition-level Parallelism

In condition-level parallelism, the match for each condition in the left-hand side of a production is handled by a separate process. Condition-level parallelism involves more communication overhead than production-level parallelism. It is now necessary to communicate tokens matching one condition to processes that combine tokens, thus forming new tokens matching several conditions in the left-hand side. This increased communication makes shared-memory multiprocessors preferable to non-shared memory multicomputers. The speed-up expected from condition-level parallelism is quite limited. This is because productions tend to be simple, as Table 1 shows. Since the typical production contains only three to six conditions, even when all the conditions in a left-hand side have to be processed (a rare occurrence) only three to six parallel processes can be run.

5.3. Action-level Parallelism

In action-level parallelism, all the changes to working memory that occur when a production fires are processed in parallel. Action-level parallelism does not require any more data communication overhead than the previous two sources of parallelism, but it does involve a substantial amount of extra synchronization overhead. The speed-up possible from action-level parallelism is also quite limited. A typical production makes two to four working memory changes, so the amount of action-level parallelism available is at most two to four.

5.4. Simulation Results

To gain a more detailed evaluation of the potential for parallelism in the interpreter, a simulator has been constructed, and simulations of the execution of the XSEL, PTRANS, and DAA expert systems have been run. The cost model assumed for the simulation is based on the costs that have been computed for the OPS83 matcher. Since the OPS83 matcher would have to be modified somewhat in order to run in parallel, the costs have been adjusted to take these modifications into account. The graph in Figure 1 indicates the speed-up that is achieved through the use of production-level, condition-level, and action-level parallelism. As the graph shows, the speed-up obtained is quite limited. This is a combined effect of the facts that (1) the processors must wait for all affected productions to finish match before proceeding to the next cycle, and (2) there is a large variance in the computational requirements of the affected productions. The graphs show that a speed-up of four to six times can be obtained with relatively good processor utilization, but to obtain a larger factor requires much more hardware.

7 Note that the number thirty-five is independent of the number of productions in the program. An intuitive explanation for this is that programmers divide problems into subproblems, and at any given time the program execution corresponds to solving only one of these subproblems.
The size of the subproblems is independent of the size of the overall problem and primarily depends on the complexity that an individual can deal with at the same time.

Figure 1: Parallelism in Production Systems

6. Processor Architecture

Because of our experience with the Rete network, we have a good idea of how a machine executing OPS will behave. In the Rete network, there are only a few different types of code sequences to deal with. By calculating the time that a given processor requires to execute these sequences, we can accurately determine how effective the processor is for this task. Typical code sequences from the Rete network are shown in Figures 2 and 3. Figure 2 shows the computation performed by a constant-test node. Figure 3 shows a loop from a two-input node. The loop is executed when the two-input node compares a token from one memory with the tokens in another memory.

    load R1,"active"           ;load the constant
    cmp  R1,1(R.CurWme)        ;compare the value
    jne  L1                    ;if not equal, fail

Figure 2: Assembly Code for a Constant Test

         move R0,R.MPtr1             ;test memory pointer
         jeq  L2                     ;exit if nil
    10$: load R.Wme1,WME(R.MPtr1)    ;get the wme
         jsb  L3                     ;goto tests
         load R.MPtr1,NEXT(R.MPtr1)  ;get next token
         jne  10$                    ;continue if not nil
         jmp  L2                     ;exit

Figure 3: Loop from a Two-input Node

As these code sequences illustrate, the computations performed by the matcher are primarily memory bound and highly sequential. Each instruction's execution depends on the previous one's, leaving little room for concurrent execution of the instructions. Consequently, it is not advantageous to develop a processor with multiple functional units able to extract concurrency and simultaneously execute multiple instructions. It is also not worthwhile to design a computer with a large range of complex instructions and addressing modes since the majority of time is spent executing simple operations. We conclude that a machine for executing production systems should have a simple instruction set and should execute the instructions in as few clock cycles as possible. The processor designs that best satisfy these requirements are the reduced instruction set (RISC) machines such as the Berkeley RISC [15], the Stanford MIPS [10], or the IBM 801 [16]. Such a machine could execute most instructions in two machine cycles. We estimate that a complex instruction set machine requires four to eight cycles per instruction, making the simple machine two to four times faster.

7. Device Technology

Since the correct choice for the machine appears to be a RISC-like processor and rather modest levels of parallelism, we are exploring the use of high-speed logic families in its implementation, such as ECL or GaAs. The difficulties inherent in the use of these technologies are offset to a large degree by the fact that the machine will use relatively little hardware. Certainly designing each component will be more difficult than designing a similar component in TTL or MOS; however the machine will be fairly simple so the total design time will not be excessive. In addition, while the processors will be more expensive than processors implemented in slower technologies, the machine will not contain large numbers of them, and the total cost will not be excessive. We estimate an ECL implementation of the machine would be about four times faster than a TTL implementation, provided the processor did not spend too much time waiting on memory.

8. Conclusions

The PSM project is investigating the use of hardware support for production system interpreters. We expect to obtain speed increases from three sources: parallelism, processor architecture, and device technology. Our studies are not complete, but some initial results are available:

- Parallelism: The task admits a modest amount of parallelism. We expect parallelism to contribute a 5 to 10 fold increase in speed.
- Processor architecture: The most attractive architectures for this task are the simple (or so-called RISC) processors. We estimate that a RISC machine would be 2 to 4 times faster than a complex instruction set machine.
- Device technology: Since speed is of paramount importance in this task, and since very simple processors are appropriate, it will be advantageous to use high-speed device technologies. We estimate that using ECL would provide a factor of 4 increase in speed.

In summary then, a machine built along the lines we suggest would be between 5 * 2 * 4 = 40 and 10 * 4 * 4 = 160 times faster than a complex uniprocessor implemented in a slower speed technology. It should be emphasized that these are preliminary results, and are subject to change as the work proceeds.

9. Acknowledgments

H. T. Kung, John McDermott, and Kemal Oflazer contributed substantially to this research. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-00450.

References

1. Forgy, C. L. On the Efficient Implementation of Production Systems. Ph.D. Th., Carnegie-Mellon University, 1979.
2. Forgy, C. L. OPS5 User's Manual. Tech. Rept. CMU-CS-81-135, Carnegie-Mellon University, 1981.
3. Forgy, C. L. "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem." Artificial Intelligence 19 (September 1982).
4. Forgy, C. L. and McDermott, J. OPS, A Domain-Independent Production System. International Joint Conference on Artificial Intelligence, IJCAI-77.
5. Forgy, C. L. and McDermott, J. The OPS2 Reference Manual. Department of Computer Science, Carnegie-Mellon University, 1978.
6. Forgy, C. L. The OPS83 Report. Department of Computer Science, Carnegie-Mellon University, May 1984.
7. Gupta, A. and Forgy, C. L. Measurements on Production Systems. Carnegie-Mellon University, 1983.
8. Gupta, A. Implementing OPS5 Production Systems on DADO. International Conference on Parallel Processing, August, 1984.
9. Haley, P., Kowalski, J., McDermott, J., and McWhorter, R. PTRANS: A Rule-Based Management Assistant. In preparation, Carnegie-Mellon University.
10. Hennessy, J. L., et al. The MIPS Machine. Digest of Papers from the Computer Conference, Spring 82, February, 1982, pp. 2-7.
11. Kowalski, T. and Thomas, D. The VLSI Design Automation Assistant: Prototype System. Proceedings of the 20th Design Automation Conference, ACM and IEEE, June, 1983.
12. Laird, J. and Newell, A. A Universal Weak Method: Summary of Results. International Joint Conference on Artificial Intelligence, IJCAI-83.
13. McDermott, J. R1: A Rule-based Configurer of Computer Systems. Tech. Rept. CMU-CS-80-119, Carnegie-Mellon University, April, 1980.
14. McDermott, J. XSEL: A Computer Salesperson's Assistant. In Machine Intelligence, J. E. Hayes, D. Michie, and Y. H. Pao, Eds., Horwood, 1982.
15. Patterson, D. A. and Sequin, C. H. "A VLSI RISC." Computer 9 (1982).
16. Radin, G. "The 801 Minicomputer." IBM Journal of Research and Development 27 (May 1983).
17. Stolfo, S. J. and Vesonder, G. T. ACE: An Expert System Supporting Analysis and Management Decision Making. Department of Computer Science, Columbia University, 1982.
18. Stolfo, S. J. and Shaw, D. E. DADO: A Tree-Structured Machine Architecture for Production Systems. National Conference on Artificial Intelligence, AAAI-1982.
FIVE PARALLEL ALGORITHMS FOR PRODUCTION SYSTEM EXECUTION ON THE DADO MACHINE*

Salvatore J. Stolfo
Computer Science Department
Columbia University
New York City, N.Y. 10027

Abstract

In this paper we specify five abstract algorithms for the parallel execution of production systems on the DADO machine. Each algorithm is designed to capture the inherent parallelism in a variety of different production system programs. Ongoing research aims to substantiate our conclusions by empirically evaluating the performance of each algorithm on the DADO2 prototype, presently under construction at Columbia University.

1 Introduction

In this paper we outline five abstract algorithms specifying parallel execution of production system (PS) programs on the DADO machine. Each algorithm offers a number of advantages for particular types of PS programs. We expect to implement these algorithms on the DADO2 prototype and critically evaluate the performance of each on a variety of application programs. Software development is presently underway using the DADO1 prototype that has been operational at Columbia University since April, 1983.

We begin with a brief description of PS's and identify various possible characteristics of PS programs which may not be immediately apparent from a general description of the basic formalism. These characteristics lead to different algorithms which will be discussed in the remaining sections of this paper.

2 Production Systems

In general, a Production System (PS) [Newell 1973, Davis and King 1975, Rychener 1976, Forgy 1982] is defined by a set of rules, or productions, which form the Production Memory (PM), together with a database of assertions, called the Working Memory (WM). Each production consists of a conjunction of pattern elements, called the left-hand side (LHS) of the rule, along with a set of actions called the right-hand side (RHS). The RHS specifies information that is to be added to (asserted) or removed from WM when the LHS successfully matches against the contents of WM. Pattern elements in the LHS may have a variety of forms which are dependent on the form and content of WM elements. In the simplest case, patterns are lists composed of constants and variables (prefixed with an equals sign), while WM elements are simple lists of constant symbols (corresponding to tuples of the relational algebra). An example production, borrowed from the blocks world, is illustrated in figure 1.

    (Goal Clear-top-of Block)
    (Isa =x Block)
    (On-top-of =y =x)
    (Isa =y Block)
    -->
    delete(On-top-of =y =x)
    assert(On-top-of =y Table)

If the goal is to clear the top of a block, and there is a block (=x) covered by something (=y) which is also a block, then remove the fact that =y is on =x and assert that =y is on the table.

Figure 1: An Example Production.

* This research has been supported by the Defense Advanced Research Projects Agency through contract N00039-84-C-0165, as well as grants from Intel, Digital Equipment, Hewlett-Packard, Valid Logic Systems, AT&T Bell Laboratories and IBM Corporations, and the New York State Science and Technology Foundation. We gratefully acknowledge their support.

In operation, the production system repeatedly executes the following cycle of operations:

Match: For each rule, determine whether the LHS matches the current environment of WM: each pattern element is matched by some WM element with variables consistently bound throughout the LHS. All matching instances of the rules are collected in the conflict set of rules.
Select: Choose exactly one of the matching rules according to some predefined criterion.

Act: Add to or delete from WM all assertions specified in the RHS of the selected rule or perform some operation.

During the selection phase of production system execution, a typical interpreter provides conflict resolution strategies based on the recency of matched data in WM, as well as syntactic discrimination. Other resolution schemes are possible, but for the present paper such issues will not significantly change our analysis, and hence will not be discussed. We shall only consider the parallel execution of PS programs with the goal of accelerating the rule firing rate of the recognize/act cycle as well as the number of WM transactions performed. In a later section of this paper, we shall consider other possible parallel activities as, for example, the concurrent execution of multiple PS programs.

The match for each pattern element may be treated as a relational algebraic operation, and the WM changes specified by a selected rule are processed by a parallel update of WM. We illustrate this approach by the abstract algorithm illustrated in figure 2.
Thus, a single rule may not need access to all of WM but to a relatively small subset of data elements. Global tests of WM. Pattern elements in the LHS of a rule may test conditions requiring access to large portions of WM, rather than individual elements (for example, tests which compare the number of WM elements against some constant threshold value). This case may be viewed as the converse of characteristic 5. Multiple rule firings. On each cycle of operation, a number of conflict rules may be executed prior to initiating the match phase of the next cycle. Small PM. The number of rules is restricted to only a few hundred. Small WM. Similarly, WM may consist of only a few hundred elements. Large PM. A PS may consist of several thousands of rules in PM. Large WM. Similarly, WM may consist of thousands of data elements. 4 Five Algorithms In this section we outline five different algori;ha;; suitable for direct execution on the DAD0 machine. will be independently discussed leadin to various conclusions about which characteristics fl t ey are most appropriate for capturing. Ongoing research aims to verify our conclusions by empirically evaluating their performance for different classes of PS programs. The reader is assumed to be knowledgeable about the Rete match algorithm (see [Forgy 19791 and [Forgy 19821 . We will thus freely discuss the details of the Rete mate h when needed without prior explication. We begin with a brief description of the DAD0 architecture. (The reader is encouraged to see 19841 for complete d Stolfo 19831 and [Stolfo and Miranker etails of the system.) 301 4.1 The DAD0 Machine DAD0 is a fine-grain, parallel machine where processing and memory are extensively intermingled. A full-scale production version of the DAD0 machine would comprise a very large (on the order of a hundred thousand) set of processing elements (PE’s , each containing its own processor, a small amount (16 k in the current design of the random access memory P rototype version (RAM , and a I bytes, of local specia ized I/O switch. The PE’s are interconnected to form a complete binary tree. Within the DAD0 machine, each PE is capable of executing in either of two modes under the control of run- time software. In the first, which we will call SIA4D mode Ii for single instruction stream, multiple data stream), the P executes instructions broadcast by some ancestor PE within the tree. (SIMD typically re&hs, t;A;;ingol; stream of “machine-level” instructions. the other hand, SIMD is generalized to mean a single stream of remote procedure invocation instructions. Thus, DAD0 makes more effective use of its communication bus by broadcasting more “meaningful” instructions.) In the second, which will be referred to as MIMD mode (for mult,iple instruction stream, mulifle data stream), each PE executes instructions stored its own local RAM, independently of the other PE’s. A single conventional coprocessor, adjacent to the root of the DAD0 tree, controls the operation of the entire ensemble of PE’s. state When a D,4DO PEsuE;ters MIMD mode, its logical is changed in a WaY as to effectively “disconnect” it and its descendants from all higher-level PE’s in the tree. In particular, a PE in MIMD mode does not receive any instructions that might be placed on the tree-structured communication bus by one of its ancestors. Such a PE may, however, broadcast instructions to be executed by its own descendants, providing all of these descendants have themselves been switched to SIMD mode. 
The DAD0 machine can thus be configured in such a way that an arbitrary internal node in the tree acts as the root of a tree-structured SIMD device in which all PE’s execute a single instruction (on different data) at a given point in time. This flexible architectural design supports multiple- SIMD execution (MSIMD). Thus, the machine may be logically divided into distinct partitions, each executing a distinct task, and is the primary source of DADO’s s eed in executing a large number of primitive pattern mate K ing operations concurrently. Our comments will be directed towards the DAD02 prototype consisting of 1023 PE’s constructed from commercially available chips. Each PE contains an 8 bit Intel 8751 processor, 16K bytes of local RAM, 4K bytes of local ROM and a semi-custom I/O switch. The DAD02 I/O swit,ch, which is being implemented in semi-custom gate array technology, has been designed to support rapid global communication. In addition, a specialized combinational circuit incorporated within the I/O switch will allow for the very rapid selection of a single distinguished PE from a set of candidate PE’s in the tree, a process we call mu-resolving. (The max-resolve instruction computes the maximum of a s ecified register in all PE’s in one instruction cycle, whit Tl can then be used to select a distinct PE from the entire set of PE’s taking part in the operation.) Currently, the 15 processing element version of DAD0 performs these operations in firmware embodied in its off-the-shelf components. 4.2 Algorithm 1: Full Distribution of PM In this case, a very small number of distinct production rules are distributed to each of the 1023 DAD02 PE’s, as well as all WM elements relevant to the rules in question, i.e., only those data elements which match some pattern in the LHS of the rules. Algorithm 1 alternates the entire DAD0 tree between MIMD and SIMD modes of operation. an MIMD process, The match phase is implemented as whereas selection and act execute as SIMD operations. In simplest terms, each PE executes the match phase for its own small PS. One such PS is allowed to “fire” a rule, The 1. 2. 3. 4. 5. 6. 7. however, which is communicated to all other PE’s. algorithm is illustrated in figure 3. Initialize: Distribute a simple rule matcher to each PE. Distribute a few distinct rules to each PE. Set CHANGES to initial WM elements. Repeat the following: Act: For each WM-change in CHANGES do: a. Broadcast WM-change (add or delete a specific WM element) to all PE’s. b. Broadcast a command to locally match. [Each PE operates independently in MIMD mode and modifies its local WM. If this is a deletion, it checks its local conflict set and removes rule instances as appropriate. If this is an addition, it matches its set of rules and modifies its local conflict set accordingly]. C. end do; Find local maxima: Broadcast an instruction to each PE to rate its local matching instances according to some predefined criteria (conflict resolution strategy (see [McDermott and Forgy, 19781). Select: Using the high-speed max-RESOLVE circuit of DADOB, identify a single rule for execution from among all PE’s with active rules. Instantiate: Report the instantiated RHS actions. Set CHANGES to the reported WM-changes. end Repeat; Figure 3: Full Distribution of Production Memory. 4.2.1 Discussion of Algorithm 1 We have left the details of the local match routine unspecified at step 3.b. Thus, a simple precompiled Rete match network and interpreter may be distributed to each processor. 
However, it is not clear whether a simple naive matching algorithm may be more appropriate since only a very small number of rules is present in each PE. Memory considerations may decide this issue: the overhead associated with linking and manipulating intermediate partial matches in a Rete network may be more expensive than direct pattern matching against the local W’M on each cycle. 302 Performance of this algorithm varies with the complexity of the local match. In the best case, the time to match the rule set is bounded by the time to match only a. few rules. The worst case is dependent on the maximum number of WM elements accessed during the match of the rules. If a simple naive match is used at each PE, this may require a considerable amount of computation even though the size of the local WM’s IS limited. Simple hashing of WM may dramatically improve a local naive matching operation, however. We conclude that this algorithm is probably best suited to implementing PS programs characterized by: 1. 3. 5. 9. 11. case Temporal redundancy, since massive changes to WM would require a considerable amount of sequential execution at each PE to modify its local WM. Many rules are affected on each cycle. Thus, depending on the initial distribution of PM, it would be best to partition similar rules separately. Note, though, that characteristic 2 may also be suitable, but a relatively small number of PE’s would be actively computing new match results on each cycle. Restricted scope of pattern matches. Clearly, each rule is required to potentially match against a relatively small local WM. Hence, global tests of WM would not be particularly appropriate. Large PM is possible. Given the above characteristics, three or four rules stored at each PE make it possible for a PM consisting of 3000-4000 rules. Similarly, depending on the average number of common pattern elements between rules, WM may be quite large. Even if an average of one unique WM element is resident in each PE (while a significant number of additional local WM elements are replicated in other PE’s), a minimum of 1000 individual elements may be stored in WM. The most serious drawback of this algorithm is the where a local WM is too large to be conveniently stored in a PE. Clearly, characteristic 5 is appropriate for this algorithm only in the presence of characteristic 9, small WM. Multiple rule firings (characteristic 7) are indeed possible. A discussion of this case is deferred to a later section. 4.3 Algorithm 2: Original DAD0 Algorithm The original DAD0 algorithm detailed in [Stolfo 19831 makes direct use of the machine’s ability to execute in both MIMD and SIMD modes of operation at the same point in time. The machine is logically divided into three conceptually distinct components: a PM-/eve/, an upper tree and a number of WM-subtrees. The PM-level consists of MIMD-mode PE’s executing the match phase at one appropriately chosen level of the tree. A number of distinct rules are stored in each PM-level PE. The WM- subtrees rooted by the PM-level PE’s consist of a number of SIMD mode PE’s collectively operating as a hardware content-addressable memory. WM elements relevant to the rules stored at the PM-level root PE are fully distributed throughout the WM-subtree. The u per SIMD mode PE’s lying above rl tree consists of t e PM-level, which implement synchronization and selection operations. It is probably best to view WM as a distributed relation. Each WM-subtree PE thus stores relational tuples. 
The PM-level PE’s match the LHS’s of rules in a manner similar to processing relational of the Rete match, e’ntraconditkon tests o pattern elements ? ueries. In terms in the LHS of a rule are executed as relational selection, while intercondition tests correspond to equi-join operations. Each PM-level PE thus stores a set of relational tests compiled from the LHS of a rule set assigned to it. Concurrency is achieved between PM-level PE’s as well as in accessing PE’s of the WM-subtrees. The algorithm is illustrated in figure 4. 4.3.1 Discussion of Algorithm 2 This algorithm was specifically designed for PS programs characterized as: 4. 3. 6. 8. Non-temporally redundant. Indeed, the ability to distribute WM elements in a content-addressable memory allows not only parallel access to WM for matching, but large changes to WM may also be efficiently implemented. For such an environment, saving state between cycles has few advantages. Many rules are affected by WM-changes on each cycle. Since massive changes to WM may be permitted on each cycle, many rules may potentially be affected. The concurrency achieved at the PM- level would allow many rule matchings to be achieved efficiently. Global tests are also efficiently handled by the WM- subtrees operating as an SIMD mode parallel device. PM is, unfortunately, rather restricted in size. Since only one level of the tree is used for rule storage, the full capacity of the machine for PM is underutilized. In DAD02, for example, we envisage a PM-level at level 4 of the machine. Thus, 32 PE’s would each store roughly 30 rules for a thousand rule system, potentially decreasing performance. Rule systems with a few hundred rules are more appropriate. 11 A A , WM may be quite large, however. For example, the DAD02 configuration noted above would allow for 32 WM-subtrees, each consisting of 32 PE’s. Since each DAD0 PE has considerable storage capacity, many thousands of WM elements may be easily stored. Furthermore, this allows a 32-way parallel access to WM for each PM-level PE. In total, nearly 1000 WM elements may be accessed in parallel at a given point in time. While attempting to implement temporally redundant systems, Algorithm 2 may recompute much of its match results calculated on previous cycles. This indeed may not be the case if we modify Algorithm 2 to incorporate many of the capabilities of the Rete match. 303 1. 2. 3. 4. 5. 6. 7. 8. Initialize: Distribute a match routine and a partitioned subset of rules to each PM-level PE. Set CHANGES to the initial WM elements. Repeat the following: Act: For each WM-change in CHANGES do; a. Broadcast WM-change to the PM-level PE’s b. The level i. ii. . . . 111. and an instruction to match. match phase is initiated in each PM- PE: Each PM-level PE determines if WM- change is relevant to its local set of rules by a partial match routine. If SO, its WM-subtree is updated accordingly. [If th is is a deletion, an associative probe is performed on the element (relational selection) and any matching instances are deleted. If this is an addition, a free WM- subtree PE is identified, and the element is added.] Each pattern element of the rules stored at a PM-level PE is broadcast to the WM-subtree below for matching. Any variable bindings that occur are reported sequentially to the PM-level PE for matching of subsequent pattern elements (relational equi-join). A local conflict set of rules is formed and stored along with a priority rating in a distributed manner within the WM-subtree. C. 
end do; Upon termination of the match operation, the PM-level PE’s synchronize with the upper tree. Select: The max-RESOLVE circuit is used to identify the maximally rated conflict set instance. Report the instantiated RHS of the winning instance to the root of DADO. Set CHANGES to the reported action specifications. end Repeat; Figure 4: Original DAD0 Algorithm. Simple changes may _ dramatically improve the situation. For example, rather than lteratmg over each pattern element in each rule as in step S.b.ii, we may only execute the match for those rules affected by new WM changes. The selection of affected rules can be achieved quickly using the WM subtree as an associative memory. By distributing pattern elements as relational tu les in a manner similar to WM, associative probing P relational selection) can be used to faster than hashing). select rules for matching (perhaps Consideration of these techniques led us to investigate Rete for direct implementation on DAD02. Algorithms 3 and 4 detail this approach. 4.4 Algorithm 3: Miranker’e TREAT Algorithm Daniel Miranker has invented an algorithm which modifies Algorithm 2 to include several of the features of the Rete match for saving state. The TREe Associative Temporally redundant (TREAT) algorithm [Miranker 19841 makes use of the same logical division of the DAD0 tree as in Algorithm 2. However, the state of the previous match operation is saved in distributed data structures within the WM-subtrees. TREAT views the pattern elements in the LHS of rules as relational algebra terms, as in Algorithm 2. Thus, the evaluation of such rela,tional algebra tests is also executed within the WM-subtrees. State is saved in a WM-subtree in the form of distributed Rete alpha memories corresponding to partial selections of tuples matching various pattern elements. Rule instances in the conflict set computed on previous cycles are also stored in a distributed manner within the WM-subtrees. These two additions substa,ntially improve the performance of A’gorithm 2. v e note that Anoop Gupta of Carnegie- Mellon University analyzed a similar algorithm in TREAT shoul d independently Gupta 1983. 1 Compared to Algorithm 2, perform su stantially better for temporally redundant systems. We note that Gupta’s analysis of algorithm 2, however, depends on certain assumptions that derive misleading results.) Another aspect of TREAT is the clever manner in which relevancy is computed. Pattern elements are first distributed to the WM subtrees. When a new WM element is added to the system, a simple match a,t each WM-subtree PE determines the set of rules at the PM- level which are affected by the change. Those identified rules are subsequently matched by the PM-level PE restricting the scope of the match to a smaller set of rules than would otherwise be possible with Algorithm 2. The TREAT algorithm is outlined in figure 5. 4.4.1 Discussion of Algorithm 3 The TREAT algorithm is a refinement of Algorithm 2 incorporating temporal redundancy. Hence, TREAT is best suited for PS programs characterized as: 1. Temporally redundant. 3. Many rules are affected on each cycle. 6. Global tests of WM are also efficiently handled. 8. Small PM. 11. Large WM. We note, though, that minor changes allow TREAT to implement Algorithm 2 directly (b setting L to all of the rules at the PM-level in step 3. B .ii and ignoring step 3.d.i). Thus, TREAT may also efficiently execute: 4. Non-temporally redundant systems. In step 3.d.iii, TREAT also implements a useful 1. 
Initialize: Distribute to each PM-level PE a simple matcher (described below) and a compiled set of rules. Distribute to the WM-subtree PE’s the appropriate pattern elements appearing in the LHS of the rules appearing in the root PM- level PE. Set CHANGES to the initial WM elements. 2. Repeat the following: 3. Act: For each WM-change in CHANGES do; a. Broadcast WM-change to the WM-subtree PE’s. b. If this change is a deletion, broadcast an instruction to match and delete WM elements and any affected conflict set instances calculated on previous cycles. c. Broadcast an instruction to PM-level PE to enter the Match Phase. d. At each PM-level PE do; i. Broadcast instruction to to WM-subtree PE’s an match the WM-change against the local pattern element. ii. Report the affected rules and store in L. iii. Order the pattern elements of the rules in L appropriately. iv. For each rule in L do; 1. Match remaining patterns of the rules specified in L as in Algorithm 2. 2. For each new instance found, store in WM-subtree with a priority rating. 3. end do; v. end do; e. end for each; 4. Select: Use max-RESOLVE to find the maximally rated instance in the tree. 5. Report the winning instance. 6. Set CHANGES to the instantiated RHS of the winning rule instance. 7. end Repeat; Figure 5: The TREAT Algorithm. strategy. When iterating over each of the rules in L affected by recent changes in WM, those pattern elements with the smallest alpha memories are processed first. This technique tends to process the join operations quickly by filtering out many potentially failing partial joins. As noted above, Gu ta’s algorithm, as well as f analysis of a TREAT-like Miranker [1984], su sequent analysis performed by show TREAT to be highly efficient compared to Algorithm 2 executing temporally redundant systems. (Th e implementation, study and detailed analysis of TREAT forms a major part of Daniel Miranker’s Ph.D. thesis.) 4.5 Algorithm 4: Fine-grain Rete A Rete network compiIed from the LHS’s of a rule set consists of a number of simple nodes encoding match operations. Tokens, representing WM modifications, flow through the network in one direction and are processed by each node lying on their traversed paths. Fortunately, the maximum fan-m of any node in a Rete network is two. Hence, a Rete network can be represented as a binary tree (with some minimal amount of node splitting). This observation leads to Algorithm 4 whereby a logical Rete network is embedded on the ph sical i DAD0 binary tree structure. In the simplest case, eaf nodes of the DAD0 tree store and execute the initial linear chains of one-input, test nodes, whereas internal DAD0 PE’s execute two-input node operations. The physical connections between processors correspond to the logical da.ta flow links in the Rete network. The entire DAD0 machine operates in MIMD mode while executing this algorithm, behaving much like a pipelined data flow architecture. Algorithm 4 is illustrated in figure 6. 4.5.1 Discussion of Algorithm 4 Since this algorithm is a direct implementation of the Rete match, it is most suitable for PS programs characterized as: 1. Temporally redundant 2. Few rules are affected by WM changes. This observation is noted in [Forgy 19791. 10. Large PM. We may, for instance, believe that only 1023 Rete nodes may be processed by DADOB. However, a straight forward overlay technique can be implemented where several Rete networks are embedded in the tree and processed in turn. Thus, large PM may be achievable. 9. Small WM. 
However, since Rete network nodes require significant storage for intermediate partial match results (stored at alpha and beta memories), the limited storage capacity of a DAD02 PE may require restricting the size of WM. Although overlayed Rete networks would be processed sequentially on DADOB, significant performance improvements can be achieved by a natural pipelinin effect. Immediately following a successful match an 3 communication at a node, the next two-input test from the overlayed network is initiated. Thus, while the parent node is processing the first network node, its children are proceeding with their tests of the second overlayed network node. A second source of ninelining can improve performance as well. In this cker the &tire RHS action specification is broadcast at once to the DAD0 leaf PE’s at step 3.a. Immediately following the conclusion of the first match operation and communication of the first WM 305 1. Initialize: Map and load the compiled Rete network on the DAD0 tree. Each node is provided with the appropriate match code and network information (see [Forgy 19821 for details). Set CHANGES to initial WM elements. 2. Repeat the following: 3. Act: For each WM-change in CHANGES do; a. Broadcast WM-change (a Rete token) to the DAD0 leaf PE’s. b. Broadcast an instruction to all PE’s to Match. (First, the leaf processors execute their one-input test sequences on the new token. The interior nodes lay idle waiting for match results computed by their descendants. Those tokens passing the one-input tests are communicated to the immediate ancestors which immediately begin processing their two-input tests, The process is then repeated until the physical root of DAD0 reports changes to the conflict set maintained in the DAD0 control processor). C. end do; Select: The root PE is provided with the chosen instance from the control processor. Set CHANGES to the instantiated RHS. 4. end Repeat; Figure 6: Fine-grain Rete Algorithm. token, the leaf PE’s initiate processing of the second WM token. Hence, as a WM token flows up the DAD0 tree, subsequent WM tokens flow close behind at lower levels of the tree in pipeline fashion. 4.0 Algorithm 5: Multiple Asynchronous Execution In our discussion so far, no mention was made about characteristic 7, multiple rule firings. We may view this as - multiple, independently executing PS programs, or - executing multiple conflict set rules of the same PS program concurrently. In this regard we offer not a single algorithm, but rather an observation that may be put to practical use in each of the abovementioned algorithms. We note that any DAD0 PE may be viewed as a root of a DAD0 machine. Thus, any algorithm operating at the physical root of DAD0 may also be executed by some descendant node. Hence, any of the aforementioned algorithms can be executed at various sites in the machine concurrently! (Th is was noted in [Stolfo and Shaw 1982 .) This coarse level of parallelism, however, will need to II e controlled by some algorithmic process executed in the upper part of the tree. The simplest case is represented by the procedure illustrated in figure 7, which is similar in some respects to Algorithm 2. 1. Initialize. Logically divide DAD0 to incorporate a static Production System-level (PS-level), similar to the PM-level of Algorithm 2. Distribute the appropriate PS program to each of the PE’s at the PS-level. 2. Broadcast an instruction to each PS-level PE to begin execution in MIMD mode. 
4.6 Algorithm 5: Multiple Asynchronous Execution

In our discussion so far, no mention was made of characteristic 7, multiple rule firings. We may view this either as multiple, independently executing PS programs, or as executing multiple conflict-set rules of the same PS program concurrently. In this regard we offer not a single algorithm, but rather an observation that may be put to practical use in each of the above-mentioned algorithms. We note that any DADO PE may be viewed as the root of a DADO machine. Thus, any algorithm operating at the physical root of DADO may also be executed by some descendant node. Hence, any of the aforementioned algorithms can be executed at various sites in the machine concurrently! (This was noted in [Stolfo and Shaw 1982].) This coarse level of parallelism, however, will need to be controlled by some algorithmic process executed in the upper part of the tree. The simplest case is represented by the procedure illustrated in Figure 7, which is similar in some respects to Algorithm 2.

1. Initialize: Logically divide DADO to incorporate a static Production System level (PS-level), similar to the PM-level of Algorithm 2. Distribute the appropriate PS program to each of the PE's at the PS-level.
2. Broadcast an instruction to each PS-level PE to begin execution in MIMD mode. (Upon completion of their respective programs, each PS-level PE reconnects to the tree above in SIMD mode.)
3. Repeat the following:
   a. Test whether all PS-level PE's are in SIMD mode.
   End Repeat;
4. Execution complete. Halt.

Figure 7: Simple Multiple PS Program Execution.

In cases where the various PS-level PE's need to communicate results with each other, step 3 is replaced with appropriate code sequences to report and broadcast values from the PS-level in the proper manner. Each of the programs executed by the PS-level PE's is first modified to synchronize as necessary with the root PE to coordinate the communication acts, at, for example, the termination of the Act phase.

In addition to concurrent execution of multiple PS programs, methods may be employed to concurrently execute portions of a single PS program. These methods are intimately tied to the way rules are partitioned in the tree. Subsets of rules may be constructed by a static analysis of PM, separating those rules which do not directly interact with each other. In terms of the match problem-solving paradigm, for example, it may be convenient to think of independent subproblems and the methods implementing their solution (see [Newell 1973]). Each such method may be viewed as a high-level subroutine represented as an independent rule set rooted at some internal node of DADO. Algorithm 1, for example, may be applied in parallel for each rule set in question. Asynchronous execution of these subroutines proceeds in a straightforward manner. The complexity arises when one subset of rules infers data required by other rule sets. The coordination of these communication acts is the focus of our ongoing research. Space does not permit a complete specification of this approach, and thus the reader is encouraged to see [Ishida 1984] for details of our initial thinking in this direction.
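As a rough illustration of the control pattern in Figure 7, the following Python sketch simulates multiple independently executing PS programs, with threads standing in for PS-level PE's. The MIMD/SIMD mode switching of the real machine is only mimicked by a completion flag, and all names are illustrative assumptions.

import threading
import time

def run_ps_program(program, done, index):
    program()              # execute the whole PS program ("MIMD mode")
    done[index] = True     # stand-in for reconnecting to the tree in SIMD mode

def multiple_ps_execution(programs):
    done = [False] * len(programs)
    workers = [threading.Thread(target=run_ps_program, args=(p, done, i))
               for i, p in enumerate(programs)]
    for w in workers:
        w.start()          # step 2: broadcast "begin execution in MIMD mode"
    while not all(done):   # step 3: poll until every PS-level PE reports back
        time.sleep(0.01)   # (a real implementation would not busy-wait)
    for w in workers:
        w.join()
    return "execution complete"   # step 4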
5 Conclusion

We have outlined five abstract algorithms for the parallel execution of PS programs on the DADO machine and indicated the characteristics for which each is best suited. We summarize our results in tabular form as follows:

   Algorithm                   PS Characteristics
   1. Fully Distributed PM     1, 3, 5, 7, 9, 11
   2. Original DADO            3, 4, 6, 7, 8, 11
   3. Miranker's TREAT         1, 3, 4, 6, 7, 8, 11
   4. Fine-grain Rete          1, 2, 5, 7, 9, 10
   5. Multiple Asynchronous    Applies to all cases

Of the five reported algorithms, only the original DADO algorithm (number 2) has been carefully studied analytically. The performance statistics of the remaining four algorithms have yet to be analyzed in detail. However, much of the performance analysis cannot be carried out without specific examples and detailed implementations. Working in close collaboration with researchers at AT&T Bell Laboratories, in the course of the next year of our research we intend to implement each of the stated algorithms on a working prototype of DADO. In this paper, we have outlined our expectations concerning the suitability of each of the algorithms for a variety of possible PS programs. We expect our reported findings to substantiate our claims, and to demonstrate this with working examples in the near future.

References

Davis, R. and J. King. An Overview of Production Systems. Technical Report, Department of Computer Science, Stanford University, 1975.

Forgy, C. L. On the Efficient Implementation of Production Systems. Ph.D. Thesis, Technical Report, Department of Computer Science, Carnegie-Mellon University, 1979.

Forgy, C. L. Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Matching Problem. Artificial Intelligence, 19, 1982, 17-37.

Gupta, A. Implementing OPS5 Production Systems on DADO. Technical Report, Department of Computer Science, Carnegie-Mellon University, 1983.

Ishida, T., and S. J. Stolfo. Simultaneous Firing of Production Rules on Tree-structured Machines. Technical Report, Department of Computer Science, Columbia University, 1984.

McDermott, J. and C. Forgy. Production System Conflict Resolution Strategies. In Waterman and Hayes-Roth (Eds.), Pattern-Directed Inference Systems, Academic Press, 1978.

Miranker, D. P. Performance Estimates for the DADO Machine: A Comparison of TREAT and RETE. Technical Report, Department of Computer Science, Columbia University, April 1984.

Newell, A. Production Systems: Models of Control Structures. In W. Chase (Ed.), Visual Information Processing, Academic Press, 1973.

Rychener, M. Production Systems as a Programming Language for Artificial Intelligence. Ph.D. Thesis, Technical Report, Department of Computer Science, Carnegie-Mellon University, 1976.

Stolfo, S. J. The DADO Parallel Computer. Technical Report, Department of Computer Science, Columbia University, August 1983. (Submitted to AI Journal.)

Stolfo, S. J., and D. E. Shaw. DADO: A Tree-structured Machine Architecture for Production Systems. Proceedings of the National Conference on Artificial Intelligence, Carnegie-Mellon University, August 1982.

Stolfo, S. J., and D. P. Miranker. The DADO Production System Machine: System-level Details. Technical Report, Department of Computer Science, Columbia University, 1984. (Submitted to IEEE Transactions on Computers.)
Referential Determinism and Computational Efficiency: Posting Constraints from Deep Structure*

Gavan Duffy and John C. Mallery
Department of Political Science
Massachusetts Institute of Technology
Cambridge MA 02139
Arpanet: Gavan at MIT-MC, JCMa at MIT-MC

Abstract

Most transformational linguists would no longer create explicit deep structures. Instead they adopt a surface-interpretive approach. We find deep structures indispensable for projection into a semantic network. In conjunction with a reference architecture based on constraint-posting, they minimize referential non-determinism. We extend Marcus' Determinism Hypothesis to include immediate reference, a foundational subclass of reference. This Referential Determinism Hypothesis constitutes a semantic constraint on theories of syntactic analysis, arguing for theories that minimize referential non-determinism. We show that our combination of deep structures and constraint-posting eliminates non-determinism in immediate reference. We conclude that constraint-posting, deep-structure parsers satisfy the referential determinism hypothesis.

I Determinism

Pragmatic reasoning is necessarily non-deterministic. It is capable of non-monotonic belief revision. It can hypothesize and then, if the hypothesis is rejected, it can backtrack, erase some or all of its structure, and start again. As more resources become available to such a reasoner, its performance improves. Its available resources are maximized when its input does not require revision and when all other components perform deterministically. This view is consistent with Marcus' Determinism Hypothesis [23] and sympathetic to linguists' desires for parsimonious grammar specifications.

Sentence analysis, of course, does not exhaust the range of interpretation necessary for sentence understanding. Reference, for example, is another crucial aspect. We offer as a corollary to Marcus' hypothesis this Referential Determinism Hypothesis: Prefer a sentence analysis that minimizes the non-determinism of reference. We argue below that deep-structure representations with constraint-posting reference satisfy this theoretical constraint because together they minimize referential non-determinism.

II Reference

Russell [27] first distinguished internal (private) and external (public) reference. We do not discuss external reference, neither as the correspondence of terms to an external world [26] nor as the problem of recovering the intended meanings of speakers [1, 10, 21]. Much of pragmatics also concentrates on external reference [1]. Instead, we analyze internal reference, wherein a hearer finds a correspondence between a sentence and his or her model of the world. We focus on the private reference of sentences using public syntax. Research conducted at SRI on the referential resolution of NPs [13, 14, 16] constitutes an important antecedent of our work. We do not, however, limit internal reference to NP resolution.

We distinguish two categories of internal reference: immediate reference and mediated or deliberative reference. Both are conceived within the constraint-posting framework discussed below. They are distinguished by their computational complexity.

* This research was done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
Immediate reference is simpler. tl uses constraints that just “read otf” relations from explicit representations in memory, creating no new structure. Immediate .* reference may at times utilize spontaneous inierences, e.g., viftua! copy inheritance [3, 121. We consider an inference spontaneous if it can be computed deterministically, creates no new structure, and requires no deliberation [8] or reflection [2].* * * Deliberative reference is more complex. It may incorporate constraints requiring complex reasoning to discriminate arnong possible referents. Deliberative referential ability depends first on the capacity to locate the terms involved in reasoning. lmmcdiate reference thus provides a bootstrapping foundation for deliberative reference. In our preliminary view, contradictory. incommensurate, and null references for immediate references, as w?tt as certain ambiguous syntactic constructions, signat the need for deliberation. Its inherent complexity puts further discussion of deliberative reference beyond the scope of this paper. III Al and Transformational Grammar Transformational Grammar (TG) has remained controversial in Al circles due primanly to its alleged computational intractability [cf. 301. Berwick [4] argues that Al researchers should view modern TG not as a system of computationally mtractabte rules, but rather as a tractable system of constraints. Berwick advocates a surtace-interpretive (SI) approach [6], In surface structure annotatIon guides both analytic and generative derivations. No longer are the traditionat deep structure trees exptrcitty created. They exist only rmplrcitly in the annotation. Berwick urges Al researchers to reconsider their views of TG in that light. The transition from the traditional deep-structure approach to SI runs briefly as follows. Chomsky’s “Standard Theory” [5] was open to certain criticisms. Most crucial was the cbservation that logical l *Ken Haase [15] proposes “deliberative” inference. the distinction between “spontaneous” and Since class inclusion can be only bc known to be inheritable computed quickly [24] and default relations need (not actually inherited), we place our version of vittual-copy inheritance m the category of spontaneous inference. that the class hierarchy does not have exceptions and that the virtual justifies nothing aside from reference. We assume inheritances From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. operations can only be determined at the surface level. Jackendoff [18] suggested using deep structures for the Interpretation of grammatical relations while retaining surface structures for determining quantifier scopes and other logical relations. Shortly thereafter, Fiengo [l l] began development of “trace theory”, in which grammatical relations are determined by the Interpretation of traces (pointers) left In the surface representation. Since both the grammatical relations and logical relations could be computed from the surface level, parsimony appeared to dictate that deep structures be abandoned. We find SI to be less parsimonious than it appears. We have found explicit deep structures indispensable for computattonnlly efficient reference. Together with a reference architecture based on constraint p%stmg, they eliminate needless recomputation and backtracking. Below, we examine the reference mechanism situated between the transformational Relatus parser* and the Relatus knowledge base, Gnoscere. 
l l Below, we Illustrate the efficiencres we gain by using explicit deep-structures for the analysis of grammatical relations. IV Immediate Reference Using Constraint Posting A precedent for our use of constraint-posting is MUMBLE [25], which posts co&traints in sentence generation rather than analysis. The general principle embodied in constraint-posting is to “wait and see” before committing to any particular course of action, thereby avoidrng false starts. Backtracking is avoided because decisions are made only when full information is available. Mapping from syntax to semantics decomposes into two recursive phases. First, the parse-tree is traversed depth-first and *** constramts are posted on the deep-structure nodes. This traversal composes a constraint tree, constructed from a subset of existing parse-tree nodes. Second, the constraint tree is traversed depth-first and nodes are referenced or created (when necessary) in some semantic notwork: * l l Both phases sub-divide their respective tasks, localizing them to the deep-structure nodes. In the reference phase, localization makes possible the successful reference of subtrees of the constraint tree In the semantic representation even thO?lgh other parts of the sentence may be new. Using the constraint tree as a match key, the reference phase implements a hierarchical match of the semantic network. Before posting its constraints, a sentence specialist creates a canonicalized deep structure tree composed of expert nodes and headed l The Relatus parser is a de!erministic, !ransformational parser which creates deep syntactic structure trees compossd of intelligent nodes. was implemented by Duffy 191. Credit for demonstratiny the computational feasibility of a real-time transformational parse should go to Katz [20]. by itself. 7 he sentence specrailst supervises constraint-postmg within its scope by telling each of its immedtate constituents to post their constraints, and so on, recursively. When this recursion unwinds, it leaves a constraint tree attached to the top-level VP node. The constrair?t tree conserves the canonical grammatical relationshlps explicit in deep structure. For us, these constraints constitute logical form (LF). The constraints are “public”, or independent of any particular semantic network. Because they support input to a network environment suitable for logical inference, the constramts differ from the LF of TG [7]. (REFERENCE WANT :SUBJECT (REFERENCE POLICE :CONSTRAINTS ((INDIVIDUAL-P) (PNUMBER-OF-SUBJECT-RELATIONS HQ 1) (SUBJECT-RELATION HQ SECRET ((TRUE)))) :FORCE-NEW-P NIL :PARTICULAR-P T) :OBJECT (REFERENCE ELMER :CONSTRAINTS ((INDIVIDUAL-P)) :FORCE-NEW-P NIL :PARTICULAR-P T) :CONSTRAXNTS ((TRUE) (PMSUBJECT-RELATION HAS-TENSE PAST) (PMSUBJECT-RELATION HAS-ASPECT PERFECT) (SUBJECT-RELATION FOR (REFERENCE CRIME :CONSTRAINTS ((INDIVIDUAL-P) (SUBJECT-RELATICN HAS-QUANTITY PLURAL) (PMDBJECT-RELATION-TO-UNKNOWN COMMI r (RCFERENCE-UNKNOWN *SOMETHING* :CONSTRAINTS ((INDIVIDUAL-P))) ((TRUE) (PMSUBJECT-RELATION TRANSFORMED-BY PASSIVE-TRANSFORM (PMSUBJECT-RELATION HAS-TENSE PAST) (PMSUBJECT-RELATION HAS-ASPECT PERFECT) (SUBJECT-RELATION AGAINST (REFERENCE STATE :CONSTRAINTS ((INDIVIDUAL-P) (PNUMBER-OF-SUBJECT-RELATIONS HQ 1) (SUBJECT-RELATION HQ TOTALITARIAN ((TRUE) ))) :FORCE-NEW-P NIL :PARTICULAR-P r))))) :FORCE-NEW-P NIL :PARTICULAR-P T)))) Figure 1: Constraint tree for “The secret police wanted crimes that were committed by the totalitarian sfate. 
” Elmer tor some l * The knowledge base is a semantic network that provides a constraint-posting reference mechanism and a set of reference contraints. It was implementgd by Mallery. Gnoscere is built out of a frame system and implements multiple semanttc networks. one for each represented bellcf system These netwol ks ate lel&onal Winston [31] showed uses of a lelational semantic representation in analogical reasoning The mechanism that converts syntxtic defap structure tlees Into constramt trees suited to Gnoscele’s reference system was rmol~?rnented by hlallery in close collaboration with Duffy. Research on the f7eldtl:s system IS still preliminary. .L. A of the parallel algorithm deep structure. would fan out top-down according to the branching factor ..I. A parallel algorithm would perform reference of the constraint tree bottom-up from the bottom-most non-terminal nodes. “fanning into” the sentence at the top. The constrailit tree for the sentence “The secret police wanted Elmer for some crimes that were committed against the totalitarian State” is presented in Figure 1. I\lote that every appearance of the symbol reference marks points where a single reference takes place, and thus the points of semantic composltlon. The symbol following reference is the token type to be referenced. The keyword xonsfrainfs is followed by the list of constraints appiied In the reference. The keywords subject and/or :oblect srgnify that a relation IS being referenced. They designate the subject and object of that relation (oblecls are omitted for unergative verbs). So in Figure 1, the subject of the ‘want’ relation is the ‘police’ reference and the object is the reference to an ‘Elmer’. In the reference to ‘police’, the indlvrdual-p constraint means we are looklng for an individual ‘pokce as opposed to Its universal (the class of ‘police’). The subject-relatiorr constraint means that we want a ‘police’ which participates as the subject of an l+iQ (has-quality) relation to ‘secret’. A subject-relation constraint IS at1 ordinary constraint. Ordinary constraints are necessary. A successful referent must satisfy them. The inverse of a subject-relation constraint is an object-refalion constraint. Only relations have truth value constraints (e.g. true). Relations and objects may both have quantiflcational constraints (e.g., un/versa/-p). A P prefix distinguishes preference constraints from ordinary constraints. Successful referents need not satisfy preference constraints, These order successful candidates by unweighted voting, aiding selection of the most promising candldate for reference. Pnumber-of-subject-re/at,ons, for example: finds a referent preferring one tICI relation over others. Preferred-mandatory constraints, prefixed with PM, also use the voting scheme to order the referential possibility space, but lhey require the successful referent to have the feature they specify. For example, if the referent for the ‘want’ relation in the top-level of Figure 1 does not have <subject-relation has-tense past>, this relation is created for it. Constraints are posted directly on the deep-structure nodes. They may be displaced from lower nodes to a higher node. When a node’s constraints are raised onto the constraints of another, the displaced node does not itself appear in the resulting constraint tree. It iS represented instead in the raised constraints. 
An example of displacement occurs in Figure 1 where the ‘commit’ relation uses the pmo@ect-relation to-unknown constraint to restrict the reference for ‘crime’. requlnng It to possess a certain object relation with an unspecific subject. Constraint displacement occurs for all adverbs, relative CiauseS, and prepositional phrases. Deep structure. by virtue of its hierarchical connectivity, explicrtly encodes a unique, grammatically canonical plan for corlstralnt-tree construction. Parse-tree nodes exploit their internal structure to post reference constraints. Constraint posting is simplified by the presence of a deep-structure !ree because all positional decisions, including embcddedness, are pre-determined. Restrictive relative clauses, for example, appear in deep structure as adjectival modifiers of an NP. The reference Constraint:; for I’ 1 he man who was wanted by the police laughed”, presented in Figure 2, contain a restrictive relative on ‘man’ (<PMobject-relation want . . .>). Smce the parse tree includes an explicit trans!orm of the passive embedded S, the transform’s constraints are easily percolated to ‘man’. Because the transformation is already represented expltcltly. there is no riced to “unpack” the embedded S. The constraints then select the appropriate ‘want’ referent (created previously by the constraint tree in Figure 1) from the network. (REFERENCE LAUGH :SUBJECT (REFERENCE MAN :CONSTRAINTS ((INDIVIDUAL-P) (PMDBJECT-RELATION WANT (REFERENCE POLICE :CONSTRAINTS ((INDIVIDUAL-P)) :FORCE-NEW-P NIL :PARTICULAR-P T) ((1RUE) (PMSUBJECT-RELATION TRANSFORMED-BY PASSIVE-TRANSFORM) (PMSUBJECT-RELATION HAS-TENSE PAST) (PMSUEIJECT-RELATION HAS-ASPECT PERFECT)))) :FORCE-NEW-P NIL :PARTICULAR-P T) :CONSTRAINTS ((TRUE) (PMSUBJECT-RkLATION HAS-TENSE PAST) (PMSUBJECT-RELATION HAS-ASPECT PERFECT))) Figure 2: faug hed. ” Constraint tree for “The man who was wanted by the police Relative clauses exemplify local decision-making by an NP specialist. lf the NP scopes a clause, it tells the c&se to pass back its constraints and posts them as referential restrrctions on the noun. Local decision-making also facilitates implementation of inter-constituent syntactic constraints, e.g., the interconstraints between the subject and object (predicate nominal) of the verb “to be”. Whenever a “be” VP is told to post its constraints, it knows to classify its subject and object as individuals or universals according to the definiteness or classness of each [19, 221. It then adds this as constraints on the subject and object. Once the constraint tree is complete the reference phase begins. In immediate reference we must always find any referent indexed by the symbol and its associated constraints. Only when no node in the semantic network satisfies the referential constraints is a new node created. Any other behavior would constitute a reference failure, requiring subsequent backtracking. Sentence constituents are referenced in a semantic network bottom-up, working up the constraint tree to the sentence at the top. Thus, any extant semantic objects corresponding to constituents at any level in the trees may successfully be referenced. Each non-terminal deep-structure node references itself, using the symbol of its head and its associated constraints. The reference mechanism of the semantic representation finds an existing semantic object which satisfies its constraints. If no suitable node is found, a new one IS created according to the symbol and constraints. 
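A minimal sketch of this bottom-up reference loop may make the control structure plainer. It is written in Python rather than the Lisp of the Relatus/Gnoscere implementation, and the network operations (find_satisfying, create_node) and node attributes used here are illustrative assumptions, not Gnoscere's actual interface.

# Bottom-up reference of a constraint tree against a semantic network.
def reference(node, network):
    # Reference the constituents first (e.g. the :SUBJECT and :OBJECT
    # subtrees and any embedded REFERENCE forms in the constraints).
    resolved = {role: reference(child, network)
                for role, child in node.children.items()}
    # Substitute the referents obtained below into this node's constraints.
    constraints = node.instantiate_constraints(resolved)
    # Immediate reference: find an existing object satisfying the
    # constraints; only if none exists is a new node created.
    referent = network.find_satisfying(node.symbol, constraints)
    if referent is None:
        referent = network.create_node(node.symbol, constraints)
    return referent      # handed back to the superior constraint-tree node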
Once a referent is found (or created), it is returned to the superior constraint tree node. The supenor object then references itself using the returned referent in the position designated by the constraint tree, and in turn, returns its own reference. The structure of the constraint tree determines the position of referents and order of reference for each node. V Determinism of immediate Reference Minimally, an SI approach would require recomputation of intermediate data structures for each interpretive “access” of a parse object. Quite possibly, an St approach would be forced to backtrack in the composition of constraint trees. A third alternative would not canonlcalizo the syntax, forcing the reference and reasoning components to unpack the non-canonical structures that are passed forward. Whichever option is chosen, the SI approach would clearly be less elegant and less efficient than the deep-structure approach. It would add considerable overhead in determining (often repeatedly) positional and order-of-evaluation relationships between the various parse-objects. In contrast, the deep structure approach caches this information explicitly. The information is simply read off the tree without repeatedly “unpacking” the information. Constraint posting from deep structure is the critical factor that makes immediate reference deterministic. No action is taken without full information. Use of full information precludes any need to retract premature references. False starts can therefore only occur in deliberative reference. Canonical parse trees simplify constraint-posting by constituents because the constituents need only enough knowledge to handle the canonical case. There is no need to recompute constraints or store them in ad hoc structures because parse tree nodes remember their constraints. The deep structures localize and simplify the algorithms for constraint composition and reference. The transformation of sentences and constituents to canonical form before reference has three major beneftcial effects. First, it guarantees the (syntactically) canonical nature of the semantic network and thus mmimlzes semantic backtracking (belief revision). The examples In the figures above illustrate this point.* Second, the resulting canonical semantic representation further simplifies reference. Since both the constraint tree and the representation are syntactically canonical, reference will not fall due to syntactic irregularities. Third, canonicalizatron of the semantic representation eliminates any need for syntactic transformations for all operations on the representation requiring reference (e.g., learning). The parser effectively compiles out 103 syntax, makmg the semantic representation more efficient. Consider the constramts presented in Figure 3 for the NP “the wanted man” in the sentence “Elmer knew then that he was the wanted man”. The partlcipial adjective, ‘wanted’, has been expanded into an embodded S, with the proviso that the subject of the ‘want’ relation is unknown. The constraint mechanism successfully discovers that this ‘want’ relation IS the same ‘want’ relation as that of Figures 1 and 2.’ If, as in [4], particlpial adjectives and participles in what are ordmarily termed passive constructs (e.g., Figure 2) were treated as one-place properties just as any adjective is treated, measures to control for syntactic variation would be required. and consequsntly, a significant inefficiency would resul!. 
For every reference, each property would need to be checked to see whether it has previously been expressed as a relation, and vice versa. The avallabilrty of effective resources would be further reduced by a factor which is a function of the size of the semantic units manipulated. The cost becomes steeper as application size increases and must be paid repeatedly -- on each reference. (REFERENCE KNOW :SUBJECT (REFERENCE ELMER :CONSTRAINTS ((INDIVIDUAL-P)) :FORCE-NEW-P NIL :PARTICULAR-P T) :OBJECT (REFERENCE BE :SUBJECT (REFERENCE ELMER :CONSTRAINTS ((INDIVIDUAL-P)) :FORCE-NEW-P NIL :PARTICULAR-P T) :OBJECT (REFERENCE MAN :CONSTRAINTS ((INDIVIDUAL-P) (PMOBJECT-RELATION-TO-UNKNOWN WANT (REFERENCE-UNKNOWN *SOMETHING* :CONSTRAINTS ((INDIVIDUAL-P))) ((TRUE) (PMSUEJECT-RELATION HAS-TENSE PAST) IPMSUBJECT-RELATION HAS-ASPECT PERFECT)))) :FORtE-NEW-P NIL :PARTICULAR-P T) :CONSTRAINTS ((TRUE) (PMSUBJECT-RELATION HAS-TENSE PAST) IPMSUBJECT-RELATION HAS-ASPECT PERFECT))) :CONSTRAINTS ((TRUE) (PMSUBJECT-RELATION HAS-TENSE PAST) (PMSUBJECT-RELATION HAS-ASPECT PERFECT) (SUBJECT-RELATION HQ THEN NIL))) Figure 3: Constraint tree for “Elmer knew then that he was the wanted man. ” VII Conclusions V/e have i!lustrated how deep strtictures combine with constramt.posting reference to improve the cfflclency of immediate reference: a central aspect of any interlace between syntax and semantics. Canonicallzatlon of syntax makes immediate reference deterministic, and thus increases the efficiency of all operations which build on Immediate reference, including deliberative reference, reasoning, and learning. l Some oppose the derivation of such adjectival participials from sentential sources as involving ad hoc rules. such as “Whiz” deletion [cf. 291 However, lngria [17] argues that this analysis may be restated without resort to ad hoc processes. Constraint-posting reference is not impossible, In principle, within the context of an SI approach to sentence analysis. From a syntactic standpoint, our approach and the SI approach are mere notational variants [7]. Only when mapping from syntax to semantics do the full set of ccmputatlons relevant to parsimony claims emerge. At the very least, an SI parser with constraint-posting reference would be forced to repeatedly recompute i[s relationships to deep-structure, or be forced to backtrack in the computation of constramts. Alternatively, it may simply export Its non-canonicalities to the semantic component. Because this alternative would force addltional chores on the semantic component and thus slow ail reasoning actlvitles, our objection to it is concomitantly more strenuous. Canonicalization of its input enhances consistency in the semantic component. Explicit representation of both surface and deep structures enhances the elegance and efficiency of the composltion of referential constramts. These constraints, in turn, maximize the determinism of reference, thereby minimizing the amount of non-deterministic bzcktraskmg (belief-revision) necessary in the semantic component. The structure of the referential constramts inherit the canonicalized grammatical relations of deep structure. Inference operations in the semantic network can rely on that cnnonlcalization and thus perform better. A variant of SI which supports a canonical semantic representation may someday be invented, but the parsimony argument for SI would no longer remam tenable. Acknowledgements This paper was Improved by comments from Steve Bagley, John f3atali, Bob Berwick. 
Carl Hewitt, Joel Isaacson, and Rick Lathmp. Margaret Fleck provided valuable comments. Robert tngria helped refine our arguments with detailed comments. Responsiblllty for the content, of course, remains with the authors. References [l] Bach, K. and R. M. tiarnlsh, L,nguist/c Commumcatio/l and Speech Acts. Cambridge, Mass.: MIT Press, 1979. [2] Satall, J. “Computational Introspection.” Memo No. 701, MIT Artificial lntelltgence Laboratory, Cambridge, Mass.. February 1983. [3] Hawdell, A., D. tilllls, and D. A. McAllester, “Virtual Copy CompressIon.” seminar, MIT Artificial Intelligence Laboratory, Spring, 1982. [4] Berwick, R. C., “Transformational Grammar and Arttficlal Intelligence: A Contemporary View.” Cc)gn/t/o1I and 6ra/l, Theory. 6:4 (1983) 383-416. [5] Chomsky, N., Aspects of the Theory of Syntax. Cambridge, Mass.: MIT Press, 1965. [6] Chomsky, N., Lectures on Government and Binding. Dordrecht: Foris, 1981. [7] Chomsky, N. “On the Representation of Form and Function.” The Lingulsflc Review 1 :l (1981-82) 3-40. [O] Doyle, J., A Model for [>e!iberalion, Action, and lntros;pection. Doctoral dissel t&on. MIT. May 1980. [3] Duffy, G., “Parsing with Smart Nodes.” forthcoming. 1984. [lo] Evans, G., The Varieties 0, f Reference.. New York: Oxford University Press, 1982. [l t] Fiengo, R. W., Semantic Conditions on Surface Sfructure. Doctoral disse: t&on. MIT. 1974. 104 [12] Fahlman, S., NETL: A System for Representing Knowledge., Cambridge, Mass.: MIT Press, 1979. and Using Real- World [ 131 Grosz, B. J., “Discourse Analysis.” in [28]. 235’268. [14] Grosz, 8. J., “Resolving Deflmte Noun Phrases.” in [28]. 287-298. [15] Haase, K., ARLO: The Implementation of a Language Representation Languages. Bachelor’s Thesis. MIT. 1984. for Describing [I61 Hendrix, 1281. 121-181. G. G., “The Representation of Semantic Knowledge.” in [17] Ingria, R. J. P., “Rehabilitating a Classical for Nominal Modifiers.” forthcoming. 1984. [la] Jackendoff, R. S., Semantic Interpretation Cambridge, Mass.: MIT Press. 1972. Analysis: Clausal Sources in Generative Grammar. [lOI Jackendoff. Press. 1983. n. S., Semantics and Cognifion. Cambridge, [20] Katz. B.. and P. H. Winston “Parsing and Generating English Using Commutattve Transfcrmations.” Memo No. 677. MIT Artlflclal Intelligence Laboratory. Cambndge. Mass. May 1982. [21] Lyons, J.. Press. 1977. Semanhcs., Vol. Cambridge, U.K.: Cambndge University [22] Mallery, J. C., “Unlversalrty and Individuality: The Interaction of Noun Phrase Determiners in The Case of The Verb ‘To Be’.” forthcoming. 1984. [23] Marcus, M. P., A Theory of Syntactic Language. Cambrrdge, Mass.: MIT Press. 1980. Pocognifion fO1 Natural [24] McAllester, D. A., “An Outlook on Truth Maintenance.” Memo No. 551. MIT Artificial Intelligence laboratory. Cambridge, Mass. August, 1980. [25] McDonald, D. M., “Natural Language Generation as a Computational Problem: An Introduction.” In R. C. Berwick and M. Brady, Computatronal Models of Discourse. Cambridge, Mass.: MIT Press, 1983, pp. 208-265. [26j Putnam, l-i., Reason Umversity Press, 1981. Truth and Hisrory. Cambridge, U.K.: Cambridge [27] Russell, Bertrand, Human Know/edge: Ifs Scope and Limits. New York: Simon and Schuster, 1948. [28] Walker, 0. E., ed., North-Holland, 1978. Understanding Spoken Language. New York: Pgl Williams, E. S., “Small Clauses in English.” in J. Kimball, and Semantics/ Vol 4., New York: Academic Press, 1974. 
ed., Syntax [30] Winograd, T., Procedures as a Representation for Data in a Computer Program for Underslanding Natural Language. Doctoral dissertation. MIT. February 1971. [31] Winston, P. H. , “Learning New Pnnciples Exercises.” Artificial lntelltgence 1913 (1982). From Precedents And 105
A SEMANTIC PROCESS FOR SYNTACTIC DISAMBIGUATION Graeme Hirst Department of Computer Science University of Toronto Toronto, Canada M5S lA4 ABSTRACT Structural ambiguity in a sentence cannot be resolved without semantic help. We present a process for struc- tural disambiguation that uses verb expectations, presupposition satisfaction, and plausibility, and an algorithm for making the final choice when these cues give conflicting information. The process, called the Semantic Enquiry Desk, is part of a semantic inter- preter that makes sure all its partial results are well- formed semantic objects; it is from this that it gains much of its power. 1. INTRODUCTION It is universally accepted that syntactic analysis of natural language requires much semantic knowledge, and it is generally accepted that semantic analysis requires much syntactic knowledge. (Convincing argu- ments for the latter are presented by Marcus 1984.) The goal of the present research is a system in which syntax and semantics relate well to one another, and are both properly deployed to find the semantic interpretation of the input, dealing with ambiguities of word sense, case slot filling, and syntactic structure. We are assuming a frame-like representation of knowledge with a suitable retrieval and inference engine - in particular, we are using the FRAIL frame system (Charniak, Gavin and Hendler 1983). In Hirst 1983a, 1983b, we showed how such a representation can pro- vide an adequate notion of “semantic object”, in the Montague (1973) sense, and developed a system named Absity in which semantic rules operated in tandem with corresponding syntax rules upon corresponding objects. The system has some of the flavor of Montague’s, but replaces possible worlds with A.I.-style representations and the categorial grammar with an A.I.-style parser with wider syntactic coverage. A mechanism for word sense and case slot disambi- guation that worked in conjunction with Absity was presented by Hirst and Charniak (1982; Hirst 1983a). This mechanism, called Polaroid Words, drew much of This work was carried out while I was at the Department of Computer Science, Brown University, Providence, Rhode Island. Financial support was provided in part by the U.S. Office of Naval Research under contract number NOOO14-79-C-0592. Preparation of this paper was supported by grants from the Connaught Fund, University of Toronto, and the Natural Sciences and Engineering Research Council of Canada. its power from the design of Absity, which ensured that all semantic entities in the system were always well- formed semantic objects in the FRAIL representation and inference system. It remained, however, to deal with ambiguities of syntactic structure. We now present a mechanism for this, the Semantic Enquiry Desk @ED). There are many types of structural ambiguity (see Hirst 1983a for a long list); the SED handles two important kinds - prepositional phrase attachment and problems of gap-finding in relative clauses - and pro- vides a foundation for the development of methods for dealing with other kinds. In this paper, we will look at prepositional phrase (Pp) attachment, in which a PP may be attached to either the verb phrase (VP) of the clause as a case slot filler, or to a noun phrase (N.) as a modifier. We are using a parser similar to Marcus’s (1980) limited-lookahead deterministic parser, Parsifal. Our approach could, however, be adapted to other types of parser, provided only that they are able to give the SED sufficient information. 2. 
TWO THEORIES OF STRUCTURAL DISAMBIGUATION The SED synthesizes two rather different theories of structural disambiguation: The lexical preference theory of Ford, Bresnan and Kaplan (1982) and the presuppo- sition minimization theory of Crain and Steedman (1984). We explain each briefly. Ford, Bresnan and Kaplan #XX) show that disam- biguation strategies such as Minimal Attachment (Fra- zier and Fodor 1978) that are based solely on syntactic preferences are inadequate to account for the resolution preferences that people exhibit in experiments. Rather, the preferred structure can change with the verb: (1) The women discussed the dogs on the beach. (i.e., NP attachment: The dogs on the beach were discussed by the women.) (2) The women kept the dogs on the beach. fi. e., VP attachment: On the beach was where the women kept the dogs.) FBK propose a theory of lexical preferences, in which each verb is marked with the cases that are generally used with it. . Each. PP is assumed to be one of these expected cases, to be attached to the VP, and is inter- preted as such if at all possible, until the last expected 148 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. case is filled; subsequent PPs are assumed to be NP modifiers of the final expected case. These assumptions are dropped if an anomalous interpretation would result, or if pragmatics overrule them. FBK show that this principle accounts for some other kinds of struc- tural ambiguity as well as PP attachment. A very different theory of structural disambiguation has been proposed by Crain and Steedman (19841, who claim that discourse context and, in particular, presuppo- sition and plausibility, are paramount in structural disam- biguation. The presuppositions of a sentence are the facts that a sentence assumes to be true and the entities that it assumes to exist. If a sentence presupposes information that the reader does not have, she has to detect and invoke these unsatisfied presuppositions. People have no trouble doing this, though there is evi- dence that it increases comprehension time (Haviland and Clark 1974); Weischedel (1979) has shown how presuppositions may be determined as the sentence is parsed. Crain and Steedman hypothesize The Principle of Parsimony: the reading that leaves the fewest presuppo- sitions unsatisfied is the one to be favored, other things being equal. This is a particular case of the Principle of A Priori Plausibility: prefer the reading that is more plau- sible with regard to either general knowledge about the world or specific knowledge about the universe of discourse, other things being equal. These principles can explain well-known garden-path sentences such as (3): (3) The horse raced past the barn fell. The correct parse presupposes both the existence of a particular horse and that this horse is known to have raced past a barn, presuppositions unsatisfied in the null context. The incorrect parse, the one that garden- paths, only presupposes the first of these; the other is taken as new information that the sentence is convey- ing. The Principle of Parsimony claims that the garden-path parse is chosen just because it makes fewer unsatisfied presuppositions. Experiments by Crain and Steedman support this analysis, and suggest that Ford, Bresnan and Kaplan’s results are just artifacts of their use of the null context, not controlling for unsatisfied presuppositions. 
Nevertheless, FBK’s experiments found ambiguities whose preferred resolutions do seem to require an explanation in terms of lexical preference rather than presupposition or plausibility (Hirst 1983a). A more detailed discussion of the two approaches may be found in Hirst 1983a. 3. PREPOSITIONAL PHRASE ATTACHMENT Many easy cases of prepositional phrase attachment can be handled by simple and absolute lexical and syntactic knowledge about allowed attachment. For example, few verbs will admit the attachment of a PP whose preposition is oJ; and such knowledge may be included in the lexical entry for each verb. For those cases where deeper consideration is necessary, the SED’s approach to PP attachment is to synthesize the two approaches of the previous section. There are four things needed for this: An annotation on each verb sense as to which of its cases are “expected”. A method for determining the presuppositions that would be engendered by a particular PP attach- ment, and for testing whether they are satisfied or not. A method for deciding on the relative plausibility of a PP attachment. A method for resolving the matter when the preceding strategies give contradictory recommen- dations. 3.1. Verb annotations The first requirement, annotating verbs for what they expect, is straightforward once we have data on verb preferences. These data should come from formal experiments on people’s preferences, such as the one Ford, Bresnan and Kaplan (1982) ran, or from textual analysis; however, for a small, experimental system such as ours, the intuitions of the author and his friendly informants will suffice. We classify cases as either compulsory, preferred, or unlikely. 3.2. Testing for presupposition satisfaction The next requirement is a method for deciding whether a particular PP attachment would result in an unsatisfied presupposition. Now, there is a simple trick, first used by Winograd (1972)) for determining many PP attachments: try each possibility and see if it describes something that is known to exist. For exam- ple, sentence (4): (4) Put the block in the box on the table. could be asking that the block be placed in the box on the table, or that the block in the box be placed on the table. The first reading can be rejected if the block does not in context uniquely identify a particular block, or if there is no box on the table, or if the box on the table does not uniquely identify a particular box. Similar considerations may be applied to the second reading. (If neither reading is rejected, or if both are, the sen- tence is ambiguous, and Winograd’s program would seek clarification from the user.) Crain and Steedman have called this technique The Principle of Referential Success: a reading that succeeds in referring to an entity already established in the hearer’s mental model of the domain of the discourse is favored over one that does not. We will show that the Principle of Referential Suc- cess suffices in checking for unsatisfied presuppositions. We observe the fol1owing.l First, a definite non-generic NP presupposes that the thing it describes exists and is available in the focus or knowledge base for felicitous (unique) reference; an indefinite NP presupposes only the plausibility of what it describes. Thus, _a blue chip- munk presupposes only that the concept of a blue chip- munk is plausible; & blue chipmunk further presup- poses that there is exactly one blue chipmunk available for ready reference. 
Second, the attachment of a PP to an NP results in new presuppositions for the new NP thus created, but cancels any uniqueness aspect of the referential presuppositions of both its constituent NPs. Thus, the red tree with the blue chipmunk presupposes that there is just one such tree available for reference (and that such a thing is plausible); the plausibility and existence of a red tree and a blue chipmunk continue to be presupposed, but their uniqueness is no longer required. Third, the attachment of a PP to a VP creates no new presuppositions, but rather always indicates new (unpresupposed) information. 2 These observations allow us to “factor out” most of the presupposition testing: the candidate attachments will always score equally for unsatisfied presuppositions, except that VP attachment wins if the NP candidate is definite but NP attachment would result in reference to an unknown entity. On the other hand, if NP attach- ment would result in a felicitous definite reference, the number of unsatisfied presuppositions will remain the same for both attachments, but by the Principle of Referential Success we will prefer the NP attachmente3 Testing for this is easy for the SED because of the property of Absity that the semantic objects associated with the syntactic constituents are all well-formed FRAIL objects. The SED puts them into a call to FRAIL to see whether the mooted NP-attachment entity exists in the knowledge base or not. (The entity may be there expli- citly, or its existence may be inferred; that is up to FRAIL.) If the entity is found, the presupposition is satisfied, and the PP should be attached to the NP; otherwise, if the presupposition is unsatisfied, or if no presupposition was made, the VP is favored for the PP. As an example, let’s suppose the SED needs to decide on the attachment of the PP in (5): (5) ROSS saw the man with the telescope. It will have the semantic objects for see, the man, and with the telescope, the last having two possibilities, one for each attachment mooted. It constructs the FRAIL statement (6) for the NP attachment: (6) (the ?x (man ?x (attr = (the ?y (telescope ?y))))) ‘The proof of the generality of these observations is by ab- sence of counterexample. If the reader has a counterexample, she should notify me promptly. 2This is not quite true; sentences asserting a change of state presuppose that the new state did not previously hold. 3A coiollary of this is that a PP is never attached to an indefinite NP if VP attachment is at all possible, except if the NP is the final expected argument. This seems too strong, and our rule will probably need toning down. This corollary is not completely out of line, however, as definiteness does influence attachment; see Hirst 1983a. If this returns an instance, man349 say, then the SED knows that presupposition considerations favor NP attachment; if it returns nil, then it knows they favor VP attachment. 3.3. Plausibility Now let’s consider the use of plausibility to evaluate the possible PP attachments. In the most general case, deciding whether something is plausible is extremely difficult, and we make no claims to having solved the problem. In the best of all possible worlds, FRAIL would be able to answer most questions on plausibility, and the slot restriction predicates on frames would be de$ned to guarantee plausibility; but, of course, we don’t know how to do that. However, there are two easy methods of testing plausibility that we can use that, though non-definitive, will suffice in many cases. 
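Both the referential-success test illustrated above and the plausibility tests described next reduce, for the SED, to the same operation: compose a FRAIL-style description from the semantic objects Absity supplies and ask whether the knowledge base can instantiate it. The following Python sketch shows that shape; the tuple encoding of the queries and the kb.find call are assumptions of this sketch, not FRAIL's actual interface.

# kb.find is a stand-in for a FRAIL retrieval call: given a description,
# it returns a matching instance (e.g. man349) or None.
def definite_np_query(noun, slot, complement):
    # e.g. (the ?x (man ?x (attr = (the ?y (telescope ?y)))))
    return ("the", "?x", (noun, "?x", (slot, "=", ("the", "?y", (complement, "?y")))))

def np_attachment_referent(kb, noun, slot, complement):
    # Referential success: does the NP-plus-PP describe a known entity?
    return kb.find(definite_np_query(noun, slot, complement))

def exemplar_exists(kb, head, slot, filler):
    # Plausibility: has an instance of something like this been seen before?
    # e.g. (a ?x (cake ?x (attr = (some ?y (candle ?y)))))
    query = ("a", "?x", (head, "?x", (slot, "=", ("a", "?y", (filler, "?y")))))
    return kb.find(query) is not None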
The first of these, used in many previous systems, is selectional restrictions. In the present system, these are applied as slot restriction predicates by the case slot disambiguation part of Polaroid Words even before the SED becomes involved, and are often adequate by themselves. While satisfying the predicates does not guarantee plausibility, failing the predicates indicates almost certain implausi- bility. The second method is what we shall call the Exem- plar Principle (a weak form of the Principle of Referen- tial Success): an object or action should be considered plausible if the knowledge base contains an instance of such an object or action, or an instance of something similar. Again, the SED can easily construct from the semantic objects supplied to it the FRAIL call to deter- mine this. For example, if the SED wants to test the plausibility of a cake with candles or operate with a slug, it looks in the knowledge base to see if it has run across such a thing before: (7) (a ?x (cake ?x (attr = (some ?y (candle ?yWN (8) (a ? x (operate ?x (instrument=(a ?y (slug ?y))))) If it finds an instance, it takes the attachment to be plausible. If no such item is found, the matter is unresolved.4 Thus the results of plausibility testing by the SED will be either exemplar exists or can’t te1L5 3.4. Making the attachment decision The SED’s last requirement is a method for deciding on the PP attachment, given the results of verb expecta- tion _ and presupposition and plausibility testing. If all 4Various recovery strategies suggest themselves; see Hirst 1983a. ‘With a large knowledge base it may be possible to assign rat- ings based on the number of exemplars found; an item that has a hundred exemplars would be considered more plausible than one with only one exemplar, other things being equal. See Hirst 1983a for discussion. 150 TABLE 1. DECISIONALGORITHMFOR RESTRICTIVE pp ATTACHMENT (ONE VP AND ONE NP) [Referential success] if NP attachment gives referential success then attach to NP [Plausibility] else if an exemplar is found for exactly one attachment then make that attachment [Verb expectations] else if verb expects a case that the preposition could be flagging then attach to VP else if the last expected case is open then attach to NP [Avoid failure of reference] else if NP attachment makes unsuccessful reference then attach to VP else sentence is ambiguous, but prefer VP attachment anyway. agree on how the attachment should be made, then everything is fine. However, as Ford, Bresnan and Kaplan (1982) make clear, verb expectations are only biases, not absolutes, and can be overridden by conflicting context and pragmatic considerations. Therefore, the SED needs to know when overriding should occur. Table 1 shows a decision algorithm for this that assumes that one VP and one NP are available for attaching the PP to. (An algorithm for the case of several available NPs is presented in Hirst 1983a.) The algorithm gives priority to ruling out implausible read- ings, and favors NP attachments that give referential success (referential success is tried first, since it is a stronger condition); if these tests don’t resolve matters, it tries to use verb expectations.6 If these don’t help either, it goes for VP attachment (i.e., Minimal Attach- ment), since that is where structural biases seem to lie, but it is more confident in its result if an unsatisfied presupposition contraindicates NP attachment. Some sentences for which the algorithm gives the correct answer are shown in Table 2. 
We also show a couple of sentences on which the algorithm fails. The fault in these cases seems to be not in the algorithm but rather in the system’s inability to use world knowledge as well as people do, I can’t believe that people have some sophisticated mental algorithm that tells them how to attach PPs in those awkward cases where several different possibilities all rate approxi- mately the same; rather, they use a simple algorithm and lots of knowledge, and in the rare awkward (and, probably, artificial) case, either ask for clarification, 6There are sentences in which verb expectations prevail over plausibility; see Hirst 1983a. Ideally, the SED would react to these sentences the way people do; however, the procedure we present errs on the side of common sense. 151 TABLE 2. PPS THAT ARE AND AREN’T CORRECTLY ATTACHED PPs THAT ARE CORRECTLY ATTACHED The women discussed the dogs on the beach. NP-a ttached. The women discussed the tigers on the beach. NP-attached if there are tigers on the beach, but VP- attached tf no examples of tigers on the beach are found. Ross bought the book for Nadia. VP-attached unless there is a book for Nadia available for reference. Ross included the book for Nadia. NP-a ttached, as per FBK’S preference data. PPs THAT ARE NOT CORRECTLY ATTACHED The women discussed dogs on the beach. NP-attached because dogs on the beach is plausible and doesn’t fail referentially, though VP attachment seems to be preferred by informants. The women discussed the dogs at breakfast. NP-attached like the dogs on the beach, because the subtle unusualness of the dogs at breakfast is not detected. choose an attachment almost at random, or use cons- cious higher-level inference (perhaps the kind used when trying to figure out garden paths) to work out what is meant. 4. MUFFLING COMBINATORIAL EXPLOSIONS The preceding discussion assumed that while the mean- ing of the preposition of the PP may be unresolved, the potential attachment heads (i.e., the noun of the NP and the verb of the VP) and the remainder of the PP were all either lexically unambiguous or already disam- biguated. Now let’s consider what happens if they are not, that is, if the words that must be used by the SED to decide on an attachment are ambiguous. We will see that the SED’s decision will often as a side effect allow the words to be disambiguated as well. In principle, the number of combinations of mean- ings of the words that are not yet disambiguated could be large. For example, if the two potential attachment heads, the preposition, and the prepositional comple- ment all have three uneliminated senses, then 81 (i.e., 34) combinations of meanings could be constructed. In practice, however, many combinations will not be semantically possible, as one choice will constrain another - the choice for the verb will restrict the choices for the nouns, for example. Moreover, such multiple ambiguities are probably extremely rare. (I was unable to construct an example that didn’t sound artificial.) It is my intuition that verbs are almost always disambiguated by the NP or PP that immediately follows them, before any PP attachment questions can arise. 
Moreover, the SED could use the strategy that if the verb remains ambiguous when PP attachment is being considered and combinatorial explosion seems imminent, the verb is required by the SED to disambi- guate itself forthwith, even if it has to guess.7 (This is in accord with Just and Carpenter’s (1980) model of reading, in which combinatorial explosion is avoided by judiciously early choice of word senses.) Given, then, a manageably small number of lexical ambiguity combinations, structural disambiguation by the SED may proceed as before. Now, however, each attachment must be tried for each combination. The type of attachment that scores best for some combina- tion is then chosen, thereby also choosing that combi- nation as the resolution of the lexical ambiguities. For example, if combination A suggests NP attachment on the basis of referential success, thus beating combina- tion 8s suggestion of VP attachment on the basis of plausibility, then both NP attachment and the word senses in combination A are .declared winners. Ties are, of course, possible, and may indicate genuine ambiguity; see Hirst 1983a for discussion. 5. OTHER STRUCTURAL AMBIGUITIES In Hirst 1983a, I show how similar techniques may be used for gap-finding in relative clauses, and give some preliminary suggestions on how the SED may also han- dle particle detection, relative clause attachment, and adverb attachment. 6. CONCLUSION Like Polaroid Words, the Semantic Enquiry Desk gains much of its power from the property of Absity that its partial results, the constituents with which the SED works, are always well-formed FRAIL objects, enabling it to use the full power of a frame and inference system. Even if the correct choice of object for an ambiguous word is not yet known, the alternatives will be well- formed and easily accessible. ACKNOWLEDGEMENTS I am grateful to Eugene Charniak for discussions from which this work developed, to Stephen Crain for infor- mation about his work, and to Nadia Talent and Yawar Ali for their comments upon an earlier draft of the paper. REFERENCES CHARNIAK, Eugene; GAVIN, Michael Kevin and HENDLER, James Alexander (1983). “The FRAIL/NASL reference manual.” Technical report CS-83-06, Department of Computer Science, Brown University, Providence, RI 02912. February 1983. CRAIN, Stephen and STEEDMAN, Mark (1984). “On not being led up the garden path: The use of context by the psychological parser.” in Dowry, David R; KARTTUNEN, Lauri Juhani and ZWICKY, Arnold M (editors). Syntactic theory and how people parse sentences (= Studies in natural language processing 1). 7This strategy is not yet implemented in the SED. Cambridge University Press, 1984. FORD, Marilyn; BRESNAN, Joan Wanda and KAPLAN, Ronald M (1982). “A competence-based theory of syntactic closure.” in BRESNAN, Joan Wanda (editor). The mental representation of grammatical relations. Cambridge, MA: The MIT Press, 1982. 727-796. FRAZIER, Lyn and FODOR, Janet Dean (1978). “The sausage machine: A new two-stage parsing model.” Cognition, 6 (41, December 1978, 291-325. HAVILAND, Susan E and CLARK, Herbert H (1974). “What’s new? Acquiring new information as a process in comprehen- sion.” Journal of verbal learning and verbal behavior, 13(S), October 1974, 512-521. HIRST, Graeme (1983a). Semantic interpretation against ambiguity. Doctoral dissertation [available as technical report CS-83-251, Department of Computer Science, Brown University, 1983. HIRST, Graeme (1983b). 
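For reference, the cascade of Table 1 can be written out as a single procedure. In the Python sketch below the inputs are the precomputed outcomes of the SED's knowledge-base and lexicon tests; reducing them to booleans is a simplification of this sketch, not a claim about the SED's actual interface.

def attach_pp(referential_success, np_exemplar, vp_exemplar,
              verb_expects_prep, last_case_open, np_reference_fails):
    # [Referential success] an NP attachment that refers felicitously wins.
    if referential_success:
        return "NP"
    # [Plausibility] if exactly one attachment has an exemplar, take it.
    if np_exemplar != vp_exemplar:
        return "NP" if np_exemplar else "VP"
    # [Verb expectations]
    if verb_expects_prep:
        return "VP"
    if last_case_open:
        return "NP"
    # [Avoid failure of reference]
    if np_reference_fails:
        return "VP"
    return "VP"   # sentence is ambiguous; prefer VP attachment anyway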
"A foundation for semantic interpretation." [1] Proceedings, 21st Annual Meeting of the Association for Computational Linguistics, Cambridge, Massachusetts, June 1983. 64-73. [2] Technical report CS-83-03, Department of Computer Science, Brown University, Providence, RI 02912. January 1983.
HIRST, Graeme and CHARNIAK, Eugene (1982). "Word sense and case slot disambiguation." Proceedings, National Conference on Artificial Intelligence (AAAI-82), Pittsburgh, August 1982. 95-98.
JUST, Marcel Adam and CARPENTER, Patricia A (1980). "A theory of reading: From eye fixations to comprehension." Psychological review, 87(4), July 1980, 329-354.
MARCUS, Mitchell P (1980). A theory of syntactic recognition for natural language. Cambridge, MA: The MIT Press, 1980.
MARCUS, Mitchell P (1984). "On some inadequate theories of human language processing." In BEVER, Thomas G; CARROLL, John M and MILLER, Lance A (editors). Six distinguished lectures on language. Cambridge, MA: The MIT Press [to appear].
MONTAGUE, Richard (1973). "The proper treatment of quantification in ordinary English." [1] In HINTIKKA, Kaarlo Jaakko Juhani; MORAVCSIK, Julius Matthew Emil and SUPPES, Patrick Colonel (editors). Approaches to natural language: Proceedings of the 1970 Stanford workshop on grammar and semantics. Dordrecht: D. Reidel, 1973. 221-242. [2] In THOMASON, Richmond Hunt (editor). Formal philosophy: Selected papers of Richard Montague. New Haven: Yale University Press, 1974. 247-270.
WEISCHEDEL, Ralph M (1979). "A new semantic computation while parsing: Presupposition and entailment." In OH, Choon-Kyu and DINNEEN, David A (editors). Syntax and semantics 11: Presupposition. New York: Academic Press, 1979. 155-182.
WINOGRAD, Terry (1972). [1] "Understanding natural language." Cognitive Psychology, 3(1), 1972, 1-191. [2] Understanding natural language. New York: Academic Press, 1972.
** Aravind Joshi and Bonnie Webber
Department of Computer and Information Science
Moore School/D2
University of Pennsylvania
Philadelphia PA 19104

Ralph M. Weischedel
Department of Computer & Information Sciences
University of Delaware
Newark DE 19716

ABSTRACT

In cooperative man-machine interaction, it is necessary but not sufficient for a system to respond truthfully and informatively to a user's question. In particular, if the system has reason to believe that its planned response might mislead the user, then it must block that conclusion by modifying its response. This paper focusses on identifying and avoiding potentially misleading responses by acknowledging types of "informing behavior" usually expected of an expert. We attempt to give a formal account of several types of assertions that should be included in response to questions concerning the achievement of some goal (in addition to the simple answer), lest the questioner otherwise be misled.

In cooperative man-machine interaction, it is necessary but not sufficient for a system to respond truthfully and informatively to a user's question. In particular, if the system has reason to believe that its planned response might mislead the user to draw a false conclusion, then it must block that conclusion by modifying or adding to its response. Such cooperative behavior was investigated in [7], in which a modification of Grice's Maxim of Quality - "Be truthful" - is proposed:

If you, the speaker, plan to say anything which may imply for the hearer something that you believe to be false, then provide further information to block it.

* This work is partially supported by NSF Grants MCS 81-07290, MCS 83-05221, and IST 83-11400.
** At present visiting the Department of Computer and Information Science, University of Pennsylvania, Philadelphia PA 19104.

This behavior was studied in the context of interpreting certain definite noun phrases. In this paper, we investigate this revised principle as applied to responding to users' plan-related questions. Our overall aim is to:

1. characterize tractable cases in which the system as respondent (R) can anticipate the possibility of the user/questioner (Q) drawing false conclusions from its response and hence alter it so as to prevent this happening;

2. develop a formal method for computing the projected inferences that Q may draw from a particular response, identifying those factors whose presence or absence catalyzes the inferences;

3. enable the system to generate modifications of its response that can defuse possible false inferences and that may provide additional useful information as well.

In responding to any question, including those related to plans, a respondent (R) must conform to Grice's first Maxim of Quantity as well as the revised Maxim of Quality stated above:

Make your contribution as informative as is required (for the current purposes of the exchange).

At best, if R's response is not so informative, it may be seen as uncooperative. At worst, it may end up violating the revised Maxim of Quality, causing Q to conclude something R either believes to be false or does not know to be true: the consequences could be dreadful. Our task is to characterize more precisely what this expected informativeness consists of. In question answering, there seem to be several quite different types of information, over and beyond the simple answer to a question, that are nevertheless expected. For example:

1.
klhen a task-related question is posed to an expert (k), R is expected to provide additional information that he recognizes as necessary to the performance of the task, of which the questioner (Q) my be unaxxe. Such 169 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. 2. 3. response behavior was discussed and implemented by Allen [II in a system to simulate a train infomtion booth attendant respondi% to requests for schedule and track information. In this case, not providing the expected additional information is simply uncooperative: Q mn't conclude the train doesn't depart at any time if R fails to volunteer one. With respect to discussions and/or argments, a speaker contradicting another is expected to support his contrary contention. Again, failing to provide support wmld simply be vie+& as uncooperative [2, 51. With respect to an expert's responses to questions, if Q expects that RwGLd infoxmhimofP if P wxe true, then Q my interpret R's silence regarding P as implying ** P is not true. Thus ifRknowPtobetrue,his silence my lead to Q's being misled. This third type of expected informativeness is the basis for the potentially misleading responses that wz are trying to avoid and that constitute the subject of this paper. What is of interest to us is characterizing tile Ps that Q muld expect an expert R to inform him of, if they hold. Notice that these Ps differ fran script-based expectations [ll], which are based on tit is taken to be the ordinary caxse of events in a situation. In describing such a situation, if the speaker doesn't explicitly reference sune element P of the script, the listener simply assumes it is true. Cn the other hand, the Ps of interest here are based on no& cooperative discourse behavior, as set out in Crice's maxims. If the speaker doesn't n&e explicit sane information P that the listener believes lx muld possess and inform the listener of, the listener assunes it is false. In this paper, w attempt to give a formal account of a subclass of Ps that should be included (in addition to the simple -r> in response to questions involving Q's **** achievirgsanegoal - e.g., "Can I drop CIS577?", "I want to enrol in CIS577?", "J&w do I get to Marsh CLeek on the Fxpressmy?", etc., lest that response otherwise mislead Q. In this endeavor, our first step is to specify that kncwledge that an expert Rmust have in order to identify the Ps th& Q wuld expect to be informed of, in response to his question. Cur second step is to formalize that knowledge and show how the system can use it. Cur third step is to show how the systefn can mdify its planned response so as to convey those PS. In this palm, Section 2 addresses the iirst step of this process and Sections 3 and 4 address the second. The third step wz mention here only in passing. *** 'Ihis is an interactional version of what Reiter [lb] has called the "Closed World Assunption" and what WCarthy [12] has discussed in the context of "Circunscription". *w* A canpanion paper [8] discusses responses which my mislead Q into assuning saw default tiiich R knows not to hold. F&elated wrk [6] discusses providing indirect or modified responses to yes/no questions where a direct response, tiile truthful,mightmislead Q. 
II FACIORSINCX.WUIIl'GLIKELYIl'lE'O~BRHAVIQR -__I__I__II__w Before discussing the factors involved in canputing this desired system behavior, wz wmt to call attention to the distinction we are drawing between actions and events, and --- betwen the stated goal of a question aml its intended goal. -- - We limit themyction to things that Q has some control over. Ihiqs beyond Q'strol wz kl-l call events, even if performed by other agents. Wile eventsmay be likely or even necessary, Q and R nevertheless can do nothing mre than wait for to happen. Ihis distinction between actions and events shows up in R's response behavior: if an action is needed, Rcan suggest that Qperform it. If an event is, Rcan dononmethaninfomQ. CW second distinction is between the stated goal or 11. %goal" of a request and its intend4 goal orTG$al . Ihe former is the goal nrxt direcec:a with Q's request, beyond that Q know the informtion. lhat is, bE take the S-goal of a request to be the goal directly achieved by using the information. Underlying the stated goal of a request though my be another goal that the speaker wmts to achieve. This intended gcd or "I-goal"mybe related to the S-goal of the request inny of a mmber of mys: -The I-godlmybethe saw as the S-goal. -lhe I-goal may bemre abstract than the S-goal,tich addresses only part of= (This is the standard _ - goal/sub-goal relation found in hierarchical planning [17].) For example, Q's S-goal my be to delete sare files (e.g., "How can I delete all but the last version of FOO.MSS?"), tiile his I-goal may be to bring his file usage under qmta. lhis more abstract goal my also involve archiving saw other files, tnoviug saw into another person's directory, etc. - 'Ihe S-goal my be an enabliog condition for the I-goal. p___I_ For example, Q's S-goal my be to get read/write access to a file, &.le his I-goal my be to alter it. -lhe I-goal may be mm general than the S-goal.For example, Q's S-goal ma% Row how to repeat a control-N, while his I-goalmaybeto knowbowto effect multiple sequential instances of a control character. - Conversely, the I-goal may be mre specific than the -- - s-g& - for example, Q's S-goal my be to koow how to send files to sawone on another machine, &ile his I-goal is just to semi a particular file to alocal network user, which may allm for a specialized procedure. Inferring the I-goal corresponding to an S-goal is an active area of research [I, 3, 13, 141. Ma assme for the purposes of this paper'that R can successfully do so. Qle problem is that the relationship that Q believestohold betweenhis -- --. -- S-goal and his I-goal my not actually hold: for example, the -- S-goal rmy not fulfill part of the I-goal, or it may not instantiate it, or it my not be a pr&condition for it. In fact, the S-goal may not even be possible to effect! Ihis failure, under the rubric "relaxibg the appropriatf+qwry assunption", is discussed in more detail in [13, 141. It is also reason for augmenting R's response with appropriate Ps, as we note informally in this section and rtme focally in the next. Having dram these distinctions, w now claim that in order for the system to canpute both a direct ansmr to Q's request and such Ps ashewxldexpect tobeinfomed of,=re they 170 A frame axian states that only pl, . . . . pn have changed. 
t=, the system mst be able to draw upon knowledge/beliefs atout - the events or actions, if any, that cau bring about a gd - their enablir7g conditions - the likelihood of an event cccuring or the enabling conditions for an action holding, with respect to a state - Wys of evaluating n&mds of achieving goals - for =-de, with respect to simplicity, 00m consequences (side effects), likelihmd of mcess, etc. - general characteristics of cooperative expert behavior The roles played by these different types of knowledge (as wzll as specific exmples of than) are wzll illustrated in tte next section. III J!UWALIzm KNWLEDX FOREGEURESKNSE ----- In this section wz give examples of lmw a formal model of user beliefs about cooperative expert behavior can be used to avoid misleading responses to task-related questions - in particular, tit is a very representative set of questions, those of the form 'Tbw do I do X?". Although we use logic for the mdel because it is clear and precise, wz are not proposir7g theorem proving as the IlYans of callplting cooperative behavior. In Section 4 wz suggest a canputatiooal umhanisn. The examples are fran a danain of advising students and involve responding to the request "I mnt to drop CIS577". The set of individuals includes not only students, instructors, courses, etc. but also states. Since events and actions Change states, wz represent them as (possibly paramterized) functions fran states to states. AL1 terms corresponding to events or actions will be underlined. For these examples, the following notation is convenient: Q theuser R the expert Z;(P) the current state of the student R believes proposition P RBQB@) R believes that Q believes P admissible(e(S)) event/action e cm apply in state S likely(~,s)- a is a likely-event/action in state S holds(P,S) P, a proposition, is true in S want(x,P) x mnts P to be true. To encode the preconditions and conseq~?~~~s of performing an action, wz adopt an axiamtization of STRIPS operators due to [4, 10, 181. Ihe preconditions on an action being applicable are encoded using "holds" and "admissible" (essentially defining "admissible"). Nmely, if cl, . . . . are preconditions on an action?, holds(cl,s) &...& holds(cn,s) -> admissible($s)) a's inmdiate consequences pl, . . . . pm can be stated as admissible(a(s)) -> holds(pi,+)) & l -0 E' hold+% a(s)) -(p=pl) & . . . & -(P=P> & holds(p,s) & admissible(a(s)) -> holds,?(s)) In particular, w can state the preconditions and consequences of droppillg CIS577. (h and n are variables, tie - - C starxis for CIS577.) R.E(holds(enrolled(~, C, fall), n) 6 holds(date(n)Wvl6, n) - -> adn&sible(drop(h C)(n))) - -'- - R.R(admissible(drop(h,C)(n)) --- - -> ho&(-enrolled(&,C,fall),drop(h,C)(n))) --_- - RE(-(p=enrolled(~,C,fall)) & admissible(drop(h,C)(n)) & holds(p,nJ --_ - -> holds(p,drop(h,C)(n))) --- - Of course, this only partially solves the frme problem, since there will be implications of pl, . . . . p in general. For instance, it is likely that one might have an axian stating that one receives a grade in a course 0ril.y if the individual is enrolled in the course. Q's S-goal in dropping CIS577 is not being in the course. By a process of reasoning discussed in [13, 141, Rmy conclude that Q's likely intended goal (I-goal) is not failing it. 
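The operator encoding just given can also be pictured, purely as a sketch (in Python rather than the paper's logic; the fact names below are simplified stand-ins, not the actual representation), as sets of ground facts with preconditions, add lists and delete lists. The frame axiom then amounts to carrying over every fact that is not explicitly deleted.

def admissible(op, state):
    return op["pre"] <= state            # all preconditions hold in the state

def apply_op(op, state):
    assert admissible(op, state)
    return (state - op["del"]) | op["add"]

drop_cis577 = {
    "pre": {("enrolled", "Q", "CIS577", "fall"), ("date-before", "Nov16")},
    "del": {("enrolled", "Q", "CIS577", "fall")},
    "add": {("dropped", "Q", "CIS577")},
}

Sc = {("enrolled", "Q", "CIS577", "fall"), ("date-before", "Nov16"),
      ("registered", "Q", "fall")}       # an unrelated fact, carried over unchanged

if admissible(drop_cis577, Sc):
    print(apply_op(drop_cis577, Sc))     # ("registered", ...) survives: the frame axiom

R's beliefs about what Q wants, and about what the proposed action would achieve, are then layered on top of such operator definitions, as the formulae below make explicit.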
That is, Rmay believe: WWolcWfafi(Q,C>, drop(Q C)(k))) - -'- RlXwant(Q,-faUQ,C)) > What w claim is: (1) R must give a truthful response addressing at least Q's S-goal; (2) in addition, Rmay have to provide informtion in order not to mislead Q; and (3) R may give additional infonmtion to be cooperative in other mys. In the s&sections below, wz enmerate the cases that Rmust check in effecting (2). In each case, wz give both a formal representation of the additional information to be conveyed and a possible English gloss. In that gloss, the part addressing Q's S-goal will appmr in normal type, tie the additional information will be underlined. For each case, W. give tm formulae: a statement of R's beliefs about the current situation and an axian stating R's beliefs about Q's expectations. Formulae of the first type have the form RR(P). Foxmulae of the second type relate such It will also the RI@(admissible(drop(Q,C)(Sc))) "f Q's asks 'ccan I dt:k --- CIS577?", but not if he asks "Can I drop CIS577?". In the latter case, Q must of course believe that it my be admissible, or why ask the question. -i- In either case, R s subsequent behavior doesn't seen contingent on his beliefs about Q's beliefs about admissibility. 171 beliefs to performing an informing action. Ihey involve a statenent of the form RB[P] -> likely(i, SC), where i is an informing act. For emmple, if R believes there is a Gtter my to achieve Q's goal, R is likely to inform Q of that better wy. Since it is assamad that Q has this belief, m have QB( RB[P] -> likely(i, Sc) ). where wz can equate “Q believes i is likely" with "Q expects z." Since R has no direct access to Q's beliefs, this must be &bedded in R's mdel of Q's belief space. Therefore, the axians have the fotm (rmdulo quantifier placement) Rl@( RB[P] -> likely(i, SC) ). An informing act ismeant to serve as a cammxlto a natural language generator thich selects appropriate lexical items, phrasing, etc. for a natural language utterance. Such an act has the form infomthat(R,Q,P) Rinforms Qthat Pistrue. ----- A. Failure of enabling conditions --- Suppose that it is past the November 15th deadline or that the official records don't show Q enrolled in CIS577. Iher~ the enabling conditions for dropping it are not met. That is, K believes Q's S-goal cannot be achieved fran SC. 111 RB(wnt(Q,-fail(Q,C) > & %dmissible(drop(Q,C)(Sc))) --- lhus R initially plans to answzr "You can't drop CIS577". Beyond this, there are tm possibilities. 1. A=Y If Rknm another action b that muld achieve Q's goals (cf. formula [2]), Q w&d e%pect to be informed about it. If not so informed, Qmymistakenly conclude that there is no other way. Formula [3] states this belief that R has about Q's expectations. [2] RB((Eb) [admissible(b(Sc)) -id ~~w-f~(Q,C), b< SC) > I> 131 RBQB(f@mnt(Q,-fail(Q,C)) 6 Wmissible(drop(Q,C)(Sc))] & RB[(Eb)[admissible(b($)) & -> l%ely(infonn-that(R, 4, holds(-fail(Q,C),l$Sc))]] _I_-- (Eb) [admissible(b(Sc)) & - -- R's full response is therefore "You can't drop 577; you cm -- b." For instance, b could be changiq status to auditor, %ich may be perfoxkl until December 1. 2. Ibwiy If R doesn't knw of any action or event that could achieve Q's goal (cf. [4]), Qwmld expect to be so informed. Formula [5] states this belief abcut Q's expectations. [4] RB(N(Ea)[admissible(a(Sc)) - & holds(-f~(Q,C),a(Sc))l) [5I RBQB(RB(=nt(Q,-faWQ,C)) & -(Ea)[admissible(a(Sc)) & holds(-fail(Q,C),-a(Sc))]) -> likelv(infonn-that(R. 
0, - .- m.-- -L’ -(E a)[admissible(a(Sc)) -- & horclswuQ,C) ,&> > I > $4) To say only that Q cannot drop the course does not &bit expert cooperative behavior, since Q wuld be uncertain as to whether R had considered other alternatives. Iherefore, R's full response is 'You can't drop 577; there isn't anything you can do to prevent failing." -__--- -.-.-..--A Notice that R'xysis of the situation my turn up additional infomation which a cooperative expert could provide that does not involve avoiding misleading Q. For instance, R cou?%i?&icate enabling conditions that prevent there being a solution: suppose the request to drop the course is made after the Kovenber 15th deadline. Then R muld believe the following, in addition to [l] RB(holds(enrolled(Q,C,fall),Sc) & holds(date(Sc)>Novl5,Sc)) tire generally, m need a schema suchasthe following about Q's beliefs: RBQB(RHwmt(Q,-f~i1(Q,C)) & (kUs(P1, S) &..A holds(Pn, S) -> adnrissible(a(S))) & (~olds(Pi, S), f<r scme Pi above)] -> likely(infomthat(R,Q,%olds(Pi,S)),S)) ___---- --- In this case the response should be "You can't drop 577; Pi isn't true." --_ -- Alternatively, the language generator migE paraphrase the dole response as, "if Pi wxe true, you could drop." Of caxse there are potentially many WAYS to try to achieve a goal: by a single action, by a single event, or by an event and an action, . . . In fact, the search for a sequence of events or actions that wmld achieve the goal may consider amy alternatives. If all fail, it is far fran obvious which blocked condition to notify Q of, and lawwledge is seeded to ho&(-fail(Q,C),b(Sc)) & --- --- can(Q b)),Wl) --'- 172 guide tlw choice. Sane heuristics for dealirg with that problen aw given in [15]. B. An nonproductive act - Suppose the proposed action does not achieve Q's I-goal, cf. [6]. For example, dropping the course my still man that failing status muld be recorded as a WF (witklrawal tie failing). Rmay initially plan to answzr "You can drop 577 by II . . . . k+.wer, Q mild expect to be told that his proposed action does not achieve his I-goal. Fonmla [7] states R's belief abcut this expectation. [61 RB(%ldKfail(Q,C), drop(Q,C)@c)) ---- & admissible(drop(Q,C)(Sc)) ) --- 171 WNW =nt(Q,"fafl(Q,C)) & ~lds(-fail(Q,C),drop(Q,C)(Sc)) --- 6 admissible(drop(Q C)(Sc))] - -'- -> likly(inform-that(R,Q, ---- -ho&(-fail(Q,C), --- ---- drop(Q,C)(W)),*)) -_- -- R's full response is, "You can drop 577 by . . . . kbmver, you - -- will still fail." Furthermore, given the reasoning in section ------7- 3.1.1 above, R s full response wnild also infoxm Q if there is an action b that the user can take instead. - C. A better tsy --- Suppose R believes that there is a better way to achieve Q's I-goal, cf. [8] - for exmple, taking an incauplete to have additional tim to perform the wrk, and thereby not losis all the effort Q has already expended. Q wmld expect that R, as a ccmperative expert, mild inform him of such a better my, cf. [9]. If R doesn't, R risks misleading Q that there isn't one. [al RB( (e) bldd-fafl(Q,C) , _b(W) & adrnissible(b(Sc)) & better(b drop(Q C)(Sc))]) -. -'-- -'-. - 91 RBQB(RB[mnt(Q,-fail(Q,C))] & RBCOi'b) bldd-faUQ,C), _b_(W) & %lmissible(b(Sc)) & -> 1 better(b drop(Q C)(k)) -'--- -'- - ikely(inform-that(R,Q, ~---- (Eb)[holds(-fail(Q,C),b(Sc) --- -- -&missible(b(Sc)) & >& 'Ihus even when adhering to expert response behavior in terms of addressing an I-goal, wz mst keep the system aksre of potentially misleading aspects of its modified response as kell. 
Note that Rmay believe that Q expects to be told the best =Y* Thismuld change tl= second axianto include within the scope of the existential quantifier (A a){-(a=b) -> [ho&(-fail(Q,C), a(k)> & admissible(~(Sc)) & better(b,a)l} - -. D. Iheonlymy --- Suppose there is nothing inconsistent about what the user has proposed - i.e., all preconditions are met and it will achieve the user's goal. R's direct response muld simply lx to tell Q how. Wver, if R notices that that is the only wy to achieve the goal (cf. [lo]), it could optionally notify Q of that, cf. [ll]. [lo] RB((E!a)[holds(-fail(Q,C),a(Sc)) &~&missible(aJSc)) 6 a=drop(Q,C)(sC)l) _--_- - 1111 RBQB(RB(-nt(Q,"fti(Q,C))) & RB((E!a)[holds(-fail(Q,C), a(%)) & admissible(a(Sc)) & a=drop(Q,C>(Sc)]> _--.- - -> likely(infonn-that(R, Q, --- -- (E!a)[holds(-fail(Q,C),a(Sc)) -- -- --- --- & admissible(a(Sc)) 6 I- & a=drop(Q,C)(Wl>, SC)) ----- - R's full respmse is "You can drop 577 by . . . . That is the __-- only wiy to prevent failing." -__--.I__-- E. SanethingkmingUp - -- - Suppose there is no appropriate action that Q can take to achieve his I-goal. That is, RB( -(E a)[admissible($Sc)) & holds(g, a(sc)>l> There my still be sane event e out of Q's control that could bring about the intended goal: This gives several mre cases of R's edifying his response. -:-- better(b,drop(Q,C)(Sc)))l, WI> ____-.--. - - -. 1. lui!Tcelyevent R's direct response is to indicate how f can be done. R's full response includes, in addition, "b is abetter WY." ------ Notice that if R doesn't explicitly tell Q that he is presenting a better wy (i.e., he just presents the mthod), Q may be misled that the response addresses his S-goal: i.e., he my falsely conclude that he is being told how to drop the ccurse. (The possibility shows up clearer in other ewmples - e.g., if R omits the first sentence of the response below Q: lbw do I get to Marsh &eek on the Expressway? R: It's faster and shxter to take Route 30. GO out Lancaster Ave until.... If e is unlikely to occur (cf. [12]), Q wwld expect R to info& him of_e, while noting its implausibility, cf. I131 [12] RB((Ee)[admissible(e(Sc)) -_ 6 holds(-f-%l(Q,C), +)) & -likely(e, SC)]) 173 1131 RBQB(REK=t(Q,-fcl(Q,W & RB(-(Ea)[admksible(a(Sc)) & - ho&(-f&(Q,C),a(Sc))] & (Ee)[admissible(e(Sc.) & holds(-fail(Q,C),e(Sc)) & "li.ly(~,Sc)]) -> likely(infom-that(R, Q --- --' (E e)[admissible(e.Sc) --- 2 --Pm-‘ -- & holds("fail(Q,C), e(k)) - ---- -7 & YikWe, WI), *I) -- -_. -. Thus R's full response is, "You can't drop 577. g-e occurs, you will not fail 577, but e is unlikely." --- --- ------ 2. Iikelyevent If the event e is likely (cf. [14]), it does not swm necessary to state it, but it is certainly sate to do so. A fotia representing this case follows. [14] RJ3((E&)[admissible(_q(Sc)) & holds(-fail(Q,C),@)) & R's beliefs about Q's expectations are the same as the previous case except that likely(2, SC) replaces Yikely(e, SC). 'lhus R's full response may be "You can't drop 577. Elowever, e is likely to occur, in which case you will not fail -- --___-- --_--__-_-- 577." 3. JZventfollom2d byaction If event e brings about a state in Wtlich tte enabling cooditions of-an effective action? are true, cf. 
[15] [I51 RB((Ee)(Ea)L~ssible(e(Sc)) & ad&sible(a(e(Sc))) & -- holcWfaUQ,(=), 44W))l) -- WI RBQB(RB((Ee)(Ea)[~nt(Q,-fail(Q,C)) & admissible(e(Sc)) & admissible(g(e(Sc))) -- & ~ldCfaiUQ,C), a(e(W>>l> -- -> likely(infomthat(R,Q, ---- wd(W -- - bl~(~fa~(Q,C) ,a(e(W>>> d ---- - -.- aduissible(a(e(Sc))])),Sc)) -~--- then the same principles about informing Q of the likelihood or unlikelihood of e apply as they did before. In addition, R must inform Q of a, cf. [16]. Thus R's full response muld be "You can't drop 577. If e wxe to occur, whichis (un)likely -m-w- -- --' you could a and thus not fail 577." -----m--w_ IvltEAmmG CW intent in using logic has been to have a precise representation language whose syntax inform R's reasoning about Q's beliefs. Having caaputed a full response that conforms to all these expectations, Rmy go on to 'trim' it according to principles of brevity that wz do not discuss here. Our proposal is that the informing behavior is "pre-canpiled". That is, Rdoes notreason explicitly about Q's expectations, but rather has canpiled the conditions into a case analysis similar to a discrimination net. For instance, w2 can represent informally several of the cases in section 3. if admissible(drop(Q C)(Sc)) - -'- then if %olds(Wfail(Q,C),drop(Q,C)(Sc)) -._ ---- then begin nonproductive act if (E b)[adudssible(_b(Sc)) & -_ holds(Wfail(Q,C) thenamy -- else no vay -- -- eril else if (Eb)[admissible(b(Sc)) & -- - holds(-fail(Q,C) & better(b,f)] then a better my -_ ~ - else if (Eb)[admissible(b(Sc)) & ho&(-fail(Q,C), _9< thenamy --- elsenokay -- . . . ,b(Wl ,gw W)l Wte that w are assuning that R assumes the most dmamliog expectations by Q. Therefore, R can reason solely within its own spacewithoutmissing things. Since the behavior of expert systems will be interpreted in tern of the behavior users expect of cooperative hunan experts, w (as systgn designers) mst understand such behavior patterns so as to implement them in our systems. If such system are to be truly cooperative, it is not sufficient for then to be simply truthful. Mditionally, they must be able to predict limited classes of false inferences that users might draw fran dialogue with them and also to respond in a way to prevent those false inferences. Ihe current enterprise is a s&l but nontrivial step in this direction. In addition to questions about achieving goals, w2 are investigating other cases where a cooperative expert should prevent false .inferences by another agent, including preventing inappropriate default reasonirlg [8, 9]. F'uturewxkshould include - identification of additional cases where an expert mst prevent false inferences by another agent, - formal statement of a general principle for constaining the search for possible false inferences, ami - design of a natural language planning ccmponent to carry out the informing acts assuned in this paper. 174 'tk wuld liketothank~rtha PoUack, &torah cahl,JuIia Hirschberg, Kathy NzcOy and the AA&I program cmrnittee reviwxs for their cuunents on this paper. 1. Allen, J. Recognizing Intentions fm Natural Iarguage Utterances. In CanputationaI wels of Discourse, M. Brady, -_-P Ed.,MIT Press, Cambridge MA, 1982. 2. Birnbaun, L., Flcwrs, M. & ti@ire, R. 'Rwards an AI l%deI of Argunentation. Proceedings of 1980 Conference, brican Assoc. for Artificial Intelligexe, Stanford CA, August, 1980. 3. Car-berry, S. Tracking User Coals in an Information Seeking Environment. Proceedirgs of the National Conference on Artificial Intelligence, AAAI, 1983, pp. 59-63. 4. &ester, D. L. 
. . personal cartxmm 'cation, 1983 5. Cohen, R. A Theory of Discourse Coherence for Argmrent Understanding. Proceedings of the 1984 Conference, Canadian Society for CanputationaI St&ies of Intelligence, University of Western Cntario, Iondon Chtario, by, 1984, pp. 6-10. 6. Hirschberg,J. Scdlar Implicature and Indirect%qxxxses in QzstiowAnswring. Proc. CSCSI-84, Iondon, Ontario, by, 1984. 7. Joshi, A.K. titual Beliefs in Question Answxing Systans. In titual Belief, N. Bnith, Fd.,Academic Press, New York, pm__ 1982. 8. Joshi, A., Nabber, B. & Weischedel, R. Preventing False Inferences. Proceedings of COLlNG-84, Stanford CA, July, 1984. 9. Joshi, A. K., Wzbber, B., and Nzischedel, R. M. &fault Reasonirg in Interaction. %txnitted to Nxkshop on No*notoI-'ic Reasoning 10. &w&ski, l&art. Logic for Problem Solving. North - --- ~ ^~ Holland, New York, 1979. 11. Lehnert, W. A Gxnputational 'Itleory of HLanan @estion Answering. &I El-nts‘of Discourse Understanding, A. Joshi, p---e- B. kbber & I. Sag, Fd.,Cambridge University Pre&, 1981. 12. &Carthy, John. "Circwxscription - A Form of Non-Monotonic Reasoning." Artificial Intelligence 13 (1980), --- 27-39. 14. pbllack, M. Bad Answers to bad @estions. Proc. Canadian Society for Canputati0na.l Sttiies of Intelligence (CSCSI), Univ. of Wastern titario, hterloo, Canada, May, 1984. 15. Ramshaw, Lance and Ralph M. Weischedel. Problem Localization Strategies for Pragnwics Processing in &0xral LanguageFront E. Proceedings of 03LINX34,JuIy, 1984. 16. Reiter, R. aosed World I&abases. In Iogic and Databases, H. Gallaire & J. Minker, Ed.,Ple~~~l978, -149-177. 17. Sacerdoti, Earl D.. A Structure for Plans and Behavior. f&ricanEIsevier, &wYo&,1977.---- 18. brren, D.H.D. NARPIAN: A System for Generating Plans. Props?dings of IJCAI-75, &gust, 1975. 13. Ibllack, Martha E. Goal Inference in Expert Systesn. Tech. l%pt. MS-CIS-84-07, University of Pennsylvania, 1984. Doctoral dissertaion proposal. 175
FRAME SELECTION IN PARSING* Steven L. Lytinen Dept. of Computer Science, Yale University New Haven CT 06520 ABSTRACT The problem of frame selection in parsing is discussed, with the focus on the selection of a frame for texts which contain highly ambiguous or vague words. An approach to frame selection is presented which involves the use of a small number of general inference rules, in conjunction with a hierarchically-organized conceptual memory. This is in contrast to various other methods, which rely either on disambiguation rules stored in the dictionary definitions of ambiguous words, or on previously-activated frames to guide the selection of new frames. The selection of frames for vague or ambiguous words using these previous methods is shown to be problematic. The method presented here does not suffer from these same problems because of the hierarchical organization of memory and the general inference rules used. I Introduction One problem which has been encountered in the use of frames [6] or other frame-like structures such as scripts (81 in natural language processing has been the selection of the appropriate frame for a text. How does a system with a large number of frames choose the correct one? Sometimes, particular words in a text point directly to a particular frame, thus trivializing this problem. For example, the word “arrest” refers directly to the 8ARREST script. However, more often it is the case that no one word in a text points definitively to a unique frame. Instead, many of the words in the text are ambiguous or vague, and it is only by considering them in combination that a frame can be selected. An arrest, for instance, can be described without using the word ‘arrest”, as in “Police took a suspect into custody”, or even “They got their man’. In cases like this, frame selection is much more difficult. In this paper, I will present a method for selecting frames for texts containing ambiguous or vague words. This method uses a small set of general inference rules in conjunction with frame-like structures organized in a hierarchical fashion. This method of frame selection has been developed within a multi-lingual machine translation project, called the MOPTRANS system [5], which translates short (l-2 sentences) newspaper articles about terrorism. II Previous Approaches to Frame Selection *This research was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract No. N00014-82K-0149 A. Lexically-based Disambinuation Rules One approach to the frame selection problem in parsing has been to use disambiguation rules which are stored in the dictionary definitions of ambiguous or vague words. I will call this the lexical/y-baaed approach. This approach was first used in Riesbeck’s parser [Riesbeck75], and since then in many other parsers, such as the Word Expert Parser [Small80]. In this approach, disambiguation rules take the form of test- action pairs, called request8 or demone. For each possible meaning of an ambiguous or vague word, one or more requests/demons are responsible for determining if that meaning of the word is the one being used in the given context. The requests in an ambiguous word’s dictionary entry are activated when the parser encounters the word. Then, one of these requests fires, or is executed, when its conditions are met by the state of active memory, thus choosing its word sense as the meaning of the word in that context. 
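Such a request or demon is, at bottom, nothing more than a test-action pair waiting on the state of active memory. The sketch below is illustrative only: it is written in Python with an invented, much-simplified picture of active memory, and renders two demons of the kind described next for the word "throw" rather than reproducing Riesbeck's or Small's actual machinery.

# Minimal sketch of lexically-based disambiguation requests (test-action pairs).
active_memory = {"word": "throw", "frame": "THROW", "agent_type": "person"}

requests = [
    # (test over active memory, action that refines the chosen frame)
    (lambda m: m["frame"] == "THROW" and m["agent_type"] == "person",
     lambda m: m.update(frame="PERSON-THROW")),
    (lambda m: m["frame"] == "PERSON-THROW" and m.get("object_type") == "garbage",
     lambda m: m.update(frame="THROW-OUT-GARBAGE")),
]

changed = True
while changed:                      # keep firing until no request's test is met
    changed = False
    for test, action in requests:
        if test(active_memory):
            action(active_memory)
            changed = True

print(active_memory["frame"])       # PERSON-THROW for this memory state

The difficulty discussed next is not with this mechanism itself but with how many such pairs a very vague word ends up needing.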
The lexically-based approach to frame selection was used in the Word Expert Parser to disambiguate the word “throw* to its many possible meanings. In WEP, the dictionary definition of a vague word consisted in part of a discrimination net of possible concepts to which the word could refer, as well as a group of demons which functioned as the discrimination rules for the d-net of frames. Thus, the frames for ‘throw”, which included frames such as PERSON- THROW, THROW-OBJECT-TO-LOCATION, and THROW- OUT-GARBAGE, were arranged in a discrimination net, with demons for choosing the correct frame such as “If the agent of uthrOwn is a person, then refine uthrown to PERSON-THROW”, and “If the object of PERSON-THROW is garbage, then refine PERSON-THROW to THROW-OUT-GARBAGE.” The problem with the lexically-based approach to frame selection is that in a system with a large number of frames, very vague words require a great many disambiguation rules. This is because the number of disambiguation rules needed for a given word is proportional to the number of frames to which the word could possibly refer, since each possible meaning of a word requires one or more rules to determine whether or not that meaning is being used in a given context. In WEP, the number of demons needed to disambiguate “throw” to all of its possible meanings was quite large, since Yhrow” could refer to a large number of frames. The word “throw” is relatively specific compared to other English verbs like “encounter”, or ‘do”, which could refer to almost any action. Extremely vague words such as these would require an unmanagebly large number of disambiguation rules to handle all the possible frames to which they could refer. 222 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. B. Expectationa from Other Frames --- PTRANS i Another approach to the frame selection problem involves using frames to predict the occurrence of other frames. I will call this approach the frame-based approach. This approach was used in the GUS system [Bobrow77], a system which conversed about airplane trips; and in the Integrated Partial Parser [LebowitzgO], which parsed short newspaper articles about terrorism. In this approach, frames already selected are responsible for predicting other frames that are likely to appear in a text. These predictions can help to disambiguate words which could refer to many different frames. For example, in IPP the word “seized” could refer to many different scripts: SHI JACK, STAKE-OVER (a building), and SKIDNAP. Expectations from already active structures often determined which of these scripts “seized” referred to. Thus, if the structure EXTORT, another frame in IPP, was already active, then ‘seized- was assumed to mean SHIJACK, since hijackings are often part of extortions. The frame-based approach solves some of the problems with frame selection. This approach does not suffer from the rule explosion encountered in the lexically-based approach, since the number of rules does not depend on the number of meanings of a word. However, an obvious problem with this approach is the selection of an initial frame. If no frames are active at the beginning of a story, then no predictions can be made as to what frames will occur in the story. To avoid this problem, the GUS system only dealt with texts having to do with airplane trips. Thus, the trip specification frame was always active at the beginning of the story. 
This frame could then be used to predict other frames that might appear in the text. The IPP parser also relied in part on a restricted domain to deal with the problem of selecting an initial frame. Many words in English which are vague in general are unambiguous in the domain of terrorism, and thus were unambiguous in IPP. For instance, the word “divert” in IPP referred to only one frame, namely SHIJACK. C. Frame Selection by Discrimination Another approach to frame selection was used in the FRUMP system [DeJong70]. FRUMP produced summaries of newspaper articles from many domains. Thus, the frame selection problem was very real in FRUMP. To handle this problem, DeJong used discrimination nets called sketchy script initiator discrimination trees (SSIDTs). One SSIDT existed for each conceptual dependency primitive. An SSIDT, when given a conceptual dependency representation, selected a frame, or “sketchy script”, on the basis of the roles and role fillers contained in the Conceptual Dependency representation. Thus, a text was first decomposed into its CD representation, then parsing rules would fill in various roles in the representation, and finally an SSIDT selected a sketchy script on the basis of what roles were filled in, and how they were filled. FRUMP used the SSIDT in Figure 1 to select the sketchy script SEARTHQUAKE for the sentence “The ground trembled.” First, the word “trembled” was represented by node 2 (OBJECT) / I \ GROUND VEHICLE HUMAN node 3 (ACTOR) / \ EXPLOSIVE GEOLOGICAL FORCE node 4 (MANNER) CYCLICAL Figure 1: SSIDT for PTRANS requiring large, ad hoc discrimination trees, whose only purpose is to disambiguate words. Also, it depends on the ability to initially represent a text in terms of conceptual dependency primitives. While this works well for words such as “trembled”, which refer very clearly to physical actions, it is not as easy to represent the meanings of all words in terms of conceptual dependency primitives. III A Different Approach to Frame Selection - -- I will now discuss a different approach to the problem of frame selection, which uses a small number of general inference rules in conjunction with a hierarchically arranged set of frames. The frames range from very general to very specific, depending on their position in the hierarchy. To explain this approach, I will discuss the frame selection process for a very general Spanish phrase, ‘haccr diligencias”, which was encountered by the MOPTRANS program. Literally, this phrase means “to do diligent actions”. Often, it is equivalent to the English ‘to run errands”, as in the following example: Spanish: Maria no puede ir a la reunion porque tiene que HACER MUCHAS DILIGENCIAS. English: Mary cannot go the gathering because she HAS TO RUN A LOT OF ERRANDS. However, it can mean many other things, depending on the context in which it appears. This is because often the context provides enough information to allow the reader to infer quite specifically what action the phrase refers to. Here are some examples*: Spanish: Juanita salio a HACER UNAS DILIGENCIAS AL MERCADO. English: Juanita went TO SHOP FOR GROCERIES. Spanish: Va a pintar su apartamento? - Si, pero antes tengo que HACER UNAS DILIGENCIAS PARA VER si consigo la pintura que quiero. PTRANS, the CD primitive for physical motion. - “Trembled” also provided the information that the motion was cyclical in manner. Then, parsing rules assigned “ground” to be the OBJECT of this PTRANS. 
This role-filling information guided the script selection process through the SSIDT to the sketchy script SEARTHQUAKE. English: Are you going to paint your apartment? - Yes, but first I have TO GO SEE if I can find the paint that I want. Spanish: La policia REALIZA INTENSAS DILIGENCIAS PARA CAPTURAR a un reo que dio muerte a una mujer. English: The police ARE UhDERTAKING AN INTENSE FRUMP’s appoach to frame selection does not suffer from the same rule- explosion as the lexically-based approach. Also, it can select an initial frame for a story, unlike the frame-based approach. However, it has the disadvantage of INVESTIGATION in order to capture a criminal who killed a *Sometimes, the verb “realizar” (to realize or achieve) is used in place of “hater”. 223 woman. From these examples, we see that a large number of different frames can be used to represent ‘hater diligencias” in different contexts. How can we devise rules which disambiguate a vague phrase like “hater diligencias”? At first glance, one might think that we could find straightforward features of the surrounding context which would discriminate between at least some of the different frames to which the phrase can refer. For instance, in the police investigation story above, the fact that POLICE is the ACTOR of “realizar diligencias” might be enough to discriminate this sense of the phrase. However, this is not the case, as the following example illustrates: Spanish: La reina Isabela va a visitar a la ciudad de Nueva York el lunes. La policia REALIZA DILIGENCLAS para insorar su seguridad durante la visita. English: Queen Elizabeth will visit New York city on Monday. The police ARE TAKING PRECAUTIONS to insure her safety during her visit. Let us continue to look at the police investigation example above. What parts of the context in this example are relevant to determining that ‘realizar diligencias” means POLICE-INVESTIGATION? To answer this, consider the line of reasoning that a human reader might follow in order to infer this. First, since the prepositional phrase “para capturar” (in order to capture) follows %ealizar diligencias”, a human reader knows that the action expressed by “realizar diligencias” somehow will lead to a capture, or that the capture is the goal of the “diligencias”. Capturing something involves getting control of it, and we know that before we can get control of an object, we have to know where it is and we have to find it. This indicates that perhaps “realizar diligencias” refers to some sort of finding. But when police are trying to find something in order to get control of it, they usually do a formal type of search, or an investigation. Therefore, we know that in this case, the phrase refers to a police investigation. What we would like, then, is to devise frame selection rules which parallel this line of reasoning. In essence, this line of reasoning is a refinement process. Each inference limits further the type of action to which “realizar diligencias” could refer. At first, we know nothing more than simply that the phrase is referring to some action. Then, it is limited to be a type of FIND. Finally, we can infer that it is an INVESTIGATION. The frame selection method used in the MOPTRANS system parses this example using a similar refinement process. The frames in MOPTRANS are arranged hierarchically, from most vague to most specific. The dictionary definitions of words consist of pointers into this hierarchy. 
The level of specificity at which the definition of a word points into the hierarchy depends on how vague or ambiguous the word is. Since “realizar diligencias” is a very general phrase, it points into the hierarchy at a very general level, to the frame ACTION. ACTION falls in the hierarchy as is shown in Figure 2. The dotted lines connecting the concepts ACTION, FIND, etc., represent IS-A links. In addition to this hierarchical information, GET and POLICECAPTURE, which are script-like structures, provide information about stereotypical sequences of events. All of the concepts in this IS-A hierarchy have case frames, specifying the prototypical fillers for various slots, such as ACTOR, OBJECT, etc. For example, the case frame for FIND indicates that its ACTOR should be a PERSON, its OBJECT should be a PHYSICAL OBJECT, and its RESULT ---- ACTION -------- I I FIND GET-CONTROL I I POLICE-INVESTIGATION ARREST GET = FIND + GET-CONTROL POLICE-CAPTURE = POLICE-INVESTIGATION + ARREST Figure 2: Memory structures for police investigation example should be a GET-CONTROL. The case frame for POLICE INVESTIGATION indicates that its ACTOR should be an AUTHORITY, its OBJECT should be a CRIMINAL, and its RESULT is an ARREST. With this hierarchical memory organization and scriptal knowledge, very general rules can be used to select the correct frame for this example, along the same lines as I suggested that a human reader would follow. The dictionary definition of the word ‘capturarn points to the concept GET-CONTROL in the hierarchy above. From the event sequence GET, we know that GET-CONTROL is often preceded by the event FIND. Since the story says that an ACTION, “diligencias”, precedes the GET-CONTROL, we can infer that the action is probably a FIND. This suggests the following general inference rule: If a scene of a script is mentioned in a story, then other scenes of the same script can be expected to be mentioned. Thus, if an abstraction of another scene of the script is mentioned, we can infer that the abstraction actually is the other scene. In more concrete terms, in this example GET-CONTROL is a scene of the script GET. Another scene of GET is the scene FIND. “Realizar diligencias” refers to an abstraction of the concept FIND, namely ACTION. Since GET-CONTROL was mentioned, indicating that other scenes of the script GET are likely to be encountered, we can infer that the ACTION is actually a FIND, since ACTION is an abstraction of FIND. Put more precisely, this line of inferencing can be expressed in the following rules: SCRIPT ACTIVATION RULE: If an action which is part of a stereotypical event sequence is activated, then activate the stereotypical event sequence, and expect to find the other actions in that sequence. EXPECTED EVENT SPECIALIZATION RULE: If a word refers to an action which is an abstraction of an expected action, and the slot-fillers of the action meet the prototypes of the slot-fillers of the more specific action, then change the representation of the word to the more specific expected action. Next, consider how we can infer that the FIND is a POLICEINVESTIGATION. First, in the story the ACTOR of the FIND is the POLICE. One piece of knowledge that we have about POLICE is that often they are the ACTORS of POLICEINVESTIGATIONs, since that is part of their job. Then, since the IS-A hierarchy tells us that POLICE- INVESTIGATION is a refinement of the concept FIND, we can infer that in this story, the FIND is most likely a POLICEINVESTIGATION. 
This suggests the following inference rule: SLOT-FILLER SPECIALIZATION RULE: If a slot of concept A is filled by concept B, and B is the prototypical filler for that slot of concept C, and concept C IS-A concept A, then change the representation of concept A to concept C. In this case, concept A is FIND, and concept B is the POLICE. The POLICE are the prototypical ACTORS of concept C, a POLICEINVESTIGATION. Since FIND is above POLICEINVESTIGATION in the IS-A hierarchy, then we can conclude that FIND in this case refers to POLICE- INVESTIGATION. Thus, I have suggested three general inference n:les, the script activation rule, the expected event specialization rule, and the slot-filler specialization rule; which can perform the disambiguation of ‘realizar diligencias” in the example above. These rules require the organization of knowledge structures in a hierarchical fashion, so that they can use this hierarchy to guide the refinement of concepts. They also require the existence of event sequences (scripts) in memory, to provide expectations as to what actions are likely to occur together in stories. Given these rules, frame selection for the police investigation example proceeds as follows: first a general representation is built for (Irealizar diligencias”; simply, the concept ACTION. Then, the ACTOR of ACTION is filled in with the concept AUTHORITY, since “policia” is the subject of the verb “realiza*. Next, the concept GET-CONTROL is built to represent the word Ycapturar”. This also causes the event sequence GET to be activated, because of the Script Activation Rule above. This, in turn, causes the concept ACTION to be changed to FIND, due to the Expected Event Specialization Rule. Finally, since the ACTOR slot of FIND is filled by AUTHORITY, and since the prototype of the ACTOR slot of POLICE-SEARCH is AUTHORITY, and since POLICESEARCH is the only concept under FIND in the IS-A hierarchy for which this is t,rue, the concept FIND is further refined to POLICE-INVESTIGATION, due to the Slot-filler Specialization Rule. filled by AUTHORITY, and since the prototype of the ACTOR slot of POLICE-SEARCH is AUTHORITY, the concept FIND would be changed to be POLICEINVESTIGATION because of the slotfiller specialization rule above. IV Conclusion I have presented three general inference rules, the Script Activation Rule, the Expected Event Specialization Rule, and the Slot-filler Specialization Rule, which can be used to select frames for very vague words such as “diligencias”. These rules draw on information from a hierarchically organized conceptual memory, w hit h provides know ledge about abstractions of events and sequences of events. This frame selection method is in contrast to the lexically-based and frame-based methods discussed earlier. In the lexically-based method, since at least one disambiguation rule is needed for each sense of an ambiguous word, very vague or general rules require a very large number of disam biguation rules. The frame-based method is limited in that selection of an initial frame must be done by some other method. The disambiguation method which I have presented here does not suffer from these limitations. MOPTRANS’s frame selection method is most similar to the method used in FRUMP (DeJong791, involving the use of SSIDTs. However, the MOPTRANS method haa several advantages over DeJong’s method. First, text does not need to be represented in terms of Conceptual Dependency primitives at the beginning of the frame selection process. 
In DeJong’s system, “hater diligencias” would initially need to be represented in CD. Second, although the organization of frames in a hierarchy serves much the same function as the discrimination nets used by DeJong, the traversal of the hierarchy in the approach I have presented is less ad hoc than in DeJong’s system. A small number of inference rules perform the traversal of the hierarchy, in conjunction with the definitions of the frames in the system. In FRUMP, arbitrary tests were used to determine what path in the discrimination net should be followed. The frame selection process which I have described here is also similar in some respects to the Incremental Description Refinement process used in RUS [Bobrow&Weber80]. In this system, a taxonomic lattice [woods78] is used to refine the semantic interpretation of a sentence as it is being parsed. The refinement process is similar to the frame selection method I have described here, in that it relies on the structure of the hierarchy to provide it with the information needed to discriminate to more specific concepts in the hierarchy. For example, the sentence “John ran the drill press” was parsed in this system using a taxonomic lattice containing nodes RUN- CLAUSE, PERSON-RUN-CLAUSE, RUN-MACHINE- CLAUSE. The parser refined its semantic interpretation of the sentence from RUN-CLAUSE to the more specific PERSON-RUN-CLAUSE and finally RUN-MACHINE CLAUSE as more information was provided by the parse of the sentence. Although the refinement processes in RUS and MOPTRANS are similar, the content of the nodes in the hierarchies used in the two systems is completely different. The nodes in the taxonomic lattice in RUS are in no way independent of lexical items, since nodes represent both semantic and syntactic functions of lexical items. Thus, the system would presumably have a separate node, RUNNING- OF-MACHINE-NOUN-PHRASE, to represent a noun phrase like “The running of the drill press”, even though the noun phrase has virtually the same meaning as RUN-MACHINE- CLAUSE. This is in contrast to the nodes in the hierarchy used in MOPTRANS, which are meant to be elements in a conceptual representational system, and therefore independent of the specific lexical items which built the representation. REFERENCES PI PI PI 141 151 PI PI 181 191 Bobrow, D.G., Kaplan, R.M., Kay, M., Norman, D.A., Thompson,H., and Winograd, T!. “GUS, a Frame Driven Dialog System.” Artificial Intellrgenec 8:l (1977). Bobrow, R.J., and Weber, B.L. Knowledge Representation for Syntactic/Semantic Processing. Proc. AAAI-80, Stanford University, August, 1980, pp. 316323. DeJong, G. Skimming Stork8 in Real Time: An Ez eriment in Integrated Undercrtanding. Ph.D. Thesis, Ya e University, May 1979. Y Lebowitz, M. Generalization and Memory in an Integrated Understanding System. Ph.D. Thesis, Yale University, October 1980. Lytinen, S.L., and Schank, R.C. Translation.” n Representation and Text 2:1/3 (1982), pp. 83-112. yinsky, M. “A framework for represe;ti;fnknowledge.” In c Psycholog I of Computer P. Winston, Ed.,McGraw-Hi! , New York, 1975, ch. 6, pp: 211-277. Riesbeck, C. “Conceptual Analysis.” In Conceptual Informatron Roceeaing, North-Holland, Amsterdam, 1975. Schank, R.C. and Abelson, R.. Scripts, Plane, Goale and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1977. Small, Steven. Word Ezpert Pareing: A Theory of Distrrbuted Word-baaed Natural Language Understanding. Ph.D. Thesis, Department of Computer Science, University of Maryland, 1980. 
[10] Woods, W.A. "Taxonomic Lattice Structures for Situation Recognition." In TINLAP-2, July, 1978.
A PRODUCTION RULE SYSTEM FOR MESSAGE SUMMARIZATION Elaine Marsh and Henry Hamburger* Navy Center for Applied Research in Artificial Intelligence Naval Research Laboratory - Code 7510 Washington, D.C. 20375 *Also at George Mason University ABSTRACT In summarizing a message, it is necessary to access knowledge about linguistic relations, subject matter knowledge about the domain of discourse, and knowledge about the user’s goals for the summary. This paper investigates the feasibility of integrating these knowledge sources by using computational linguistic and expert system techniques to generate one-line summaries from the narrative content of a class of Navy mes- sages. For deriving a knowledge representation of the narra- tive content, we have adapted an approach developed by Sager et al. at New York University. This approach, called informa- tion formatting, uses an explicit grammar of English and a classification of the semantic relationships within the domain to derive a tabular representation of the information in a mes- sage narrative. A production system, written in OPS5, then interprets the information in the table and automatically gen- erates a summary line. The use of a production rule system provides insight into the mechanisms of summarization. A comparison of computer-generated summaries with those obtained manually showed good agreement, indicating that it is possible to automatically process message narrative and gen- erate appropriate, and ultimately useful, summaries. INTRODUCTION Behavior modeled in expert systems has generally been held distinct from that modeled in natural language under- standing systems. Attempts at practical expert systems have been directed toward design [McDermott 19801, diagnosis [Shortliffe 19761, and interpretation [Buchanan 19781, among others. Practical systems for natural language understanding have concentrated largely on database interfaces [Grosz 1983, Ginsparg 1983, Grishman 19831 and database creation [Sager 19781. In this paper we investigate the feasibility of integrat- ing techniques from computational linguistics and expert sys- tem technology to summarize a set of Navy equipment failure messages called CASREPs (casualty reports). A natural language analysis procedure automatically generates a tabular representation of the information contained in message narra- tive. A production rule system then interprets the tabular representation and identifies a clause that is appropriate as a message summary. We have chosen to use a production system for a natural language application because it facilitates understanding and modification of the system. More impor- tant for research purposes, a production system makes the operations involved in summarization explicit and, thus, can provide insight into the genera! problem of summarization. Summarization can be approached at several different lev- els. Typically, strategies for summarization have taken a single-level approach. Summaries of stories have been derived at the high level of conceptual representation. Structural features of a graph reveal the central concepts of a story [Lehnert 19801. Goal-directed summaries have also been inves- tigated in some detail [Fum 19821. We, on the other hand, have taken a multi-level approach, incorporating several Ralph Griahman Department of Computer Science New York University New York, New York 10012 sources of knowledge in the linguistic analysis and prod-uction rule system. 
This permits us to investigate not only the requirements of individual knowledge sources, but also their interactions.

NATURAL LANGUAGE PROCESSING

Each CASREP message contains a set of structured (i.e. pro forma) fields and a narrative describing the equipment failures. These narratives typically consist of two to twelve sentences and sentence fragments.

The central task of narrative analysis is the extraction and representation of information contained in narrative portions of a message. This task is difficult because the structure of the information, and often much of the information itself, is implicit in the narrative. Several formalisms, such as scripts and frames, have been developed to describe such information and have been used in text analysis [Schank 1977; Montgomery 1983]. We are using an approach called information formatting that was developed at New York University for the representation of the information in medical narratives [Sager 1978, Hirschman 1982]. In simple terms, an information format is a large table, with one column for each type of information that can occur in a class of texts and one row for each sentence or clause in the text. It is derived through a distributional analysis of sample texts.

The narrative is automatically transformed into a series of entries in the information format table. This procedure involves three stages of processing: (1) parsing, (2) syntactic regularization, and (3) mapping into the information format.

First the text sentences are parsed using a top-down parser and the broad-coverage Linguistic String Project English grammar [Sager 1981] extended to handle the sentence fragments and special sublanguage constructions (e.g. date expressions, such as NLT 292M? 2 SEP 88) that appear in these messages. The grammar consists of a set of context-free definitions augmented by grammatical restrictions. It also uses a Navy sublanguage lexicon that classifies words according to their major parts of speech (e.g. noun, verb, adjective), as well as their special subfield classes (e.g. PART, FUNCTION, SIGNAL), and certain English syntactic subclasses. The parsing procedure identifies the grammatical relations that hold among parts of the sentence, principally subject-verb-object relations and modifier-host relations.

The syntactic regularization component utilizes the same machinery as the parser, augmented by standard transformational operations. The principal function of the regularization component is to reduce the variety of syntactic structures and word forms to be processed, without altering the information content of the sentences, thereby simplifying the subsequent mapping into the information format. Regularization includes: (1) standardization into subject-verb-object word order, e.g. passive to active; (2) expansion of conjoined phrases into conjoined assertions; (3) reduction of words to "canonical form" plus information marker(s); (4) filling in of certain omitted or reduced forms of information.

The third stage of processing moves the phrases in the syntactically regularized parse trees into their appropriate format columns. It involves two steps: (1) identifying connectives and (2) mapping into the information format. A connective word indicates a causal, conjunctional, or time relation between the two clauses it connects.
The connective is mapped into the CONNective column of the format table; arguments of the connective are mapped into separate format rows, and their words are mapped into the appropriate format columns. The mapping process is controlled in a large part by the sublanguage (semantic) word classes associated with each word in the lexicon. In general, the formatting procedure is straightforward because most word classes are in a one-to-one correspondence with a particular format column. The production system for message summarization operates on the information format that is generated for each message.

PRODUCTION RULE SYSTEM FOR SUMMARIZATION

We have implemented prototype knowledge bases for two application areas: dissemination and summary generation [Marsh 1984]. While the dissemination application relies on information obtained from both pro forma and narrative data sets of a message, summary generation is based entirely on information contained in narrative portions of the messages. Such summaries, which up to now have been generated by hand, are used to detect patterns of failure for particular types of equipment. This failure information is crucial to decision-makers who procure equipment for new and existing ships. Typically, the manually derived summary consists of a single clause, extracted from the sentences of text. Only rarely is a summary generated from material not explicitly stated in the narrative. The single line summary results in a five- to ten-fold reduction of material. Clearly, the sharp reduction in reading material can ease the decision-making process, provided that the key information from the report regularly finds its way into the summary.

Our current system consists of a set of productions, implemented in a Lisp-based version of the OPS5 production system programming language. OPS5 permits the assignment of attributes and numerical values, or scores, to the working memory elements, and our system takes advantage of this. Productions operate on an initial database of working memory elements that includes data from the information formats and identify the crucial clause that will be used for the summary. Criteria for production rules are based on the manual summarization that is currently performed.

Several types of knowledge are required for message summarization. Knowledge of the possible relationships is reflected in the initial choice of what fields are available in the format system devised for the domain. This is represented by the columns of each message's information format table. Additional domain knowledge and knowledge of the nature of the application are embodied in the production rules of the expert system.

Each production rule incorporates one of three different types of knowledge necessary for summarization. The first type reflects an understanding of the subject matter of the equipment failure reports. These production rules assign semantic attributes or categories to working memory elements by explicitly specifying these words in a list in the rule. For example, the working memory element containing the word inhibit is assigned a category IMPAIR. Elements indicating a bad status (e.g. broken, corroded, failure, malfunction, etc.) have the category BAD assigned, and so on. Other category assignment rules are concerned with level of generality, flagging equipment failures at the assembly level, and not at the more detailed part or more general system level, since assemblies are most important to the summary.
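To make this first type of knowledge concrete, the category-assignment step might look roughly like the Python sketch below. It is purely illustrative: the actual rules are OPS5 productions, and the word lists beyond the examples given in the text, as well as the ASSEMBLY flagging shown here, are assumptions.

    # Illustrative sketch only (the real system uses OPS5 productions).
    # A format row is a dict mapping column names to words; this pass attaches
    # semantic categories such as IMPAIR, BAD, and ASSEMBLY to the row.

    IMPAIR_WORDS = {"inhibit"}                                   # only "inhibit" is named in the text
    BAD_WORDS = {"broken", "corroded", "failure", "malfunction"}
    ASSEMBLY_HEADS = {"circuit", "driver"}                       # assembly-level heads (assumed)

    def assign_categories(row):
        categories = set()
        if row.get("CONN") in IMPAIR_WORDS:
            categories.add("IMPAIR")
        if row.get("STATUS") in BAD_WORDS:
            categories.add("BAD")
        part = row.get("PART", "")
        if part and part.split()[-1].lower() in ASSEMBLY_HEADS:
            categories.add("ASSEMBLY")      # flag equipment failures at the assembly level
        row["CATEGORIES"] = categories
        return row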
Other production rules are based on general principles of summarization, and these rules are typically inferencing rules. These identify causal relationships among working memory elements and may add information to the data base in the form of new working elements. We will see an example of this type below. Finally, the end use that will be made of the summaries is also a guiding factor in some of the productions. To guide future equipment specification and procurement, one must know not only what went wrong and how often, but also why. Format rows that contain such information are identified as being more important by having the score of the row boosted. For example, causality is important to the summaries. Once a causal relationship is identified, the row specifying the 'cause' has its score boosted. Taken together, the productions are attentive to such matters as malfunction, causality, investigative action, uncertainty, and level of generality. In addition, the system has rules excluding from summaries format rows containing very general statements. For instance, universal quantification and mention of the top level in a part-of tree betray a clause that is too general for a summary line.

Summarization proceeds in three stages: (i) inferencing, (ii) scoring the format rows for their importance, and (iii) selection of the appropriate format row as the summary.

First, inferences are drawn by a set of production rules. For example, the presence of one of the words in the IMPAIR category triggers an inferencing rule. If part1 impairs part2, we can infer that part1 causes part2 to be bad, and we can also infer that part1 is bad. A set of production rules, summarized as rules (1) and (2) below, operate on the format lines to draw such inferences. The production rule in (1) infers that the second argument (part2) of CONN is bad.

(1) if both
      (a) CONN contains an 'impair' word and
      (b) the STATUS column of the 2nd argument of CONN [the connective] is empty
    then both
      (c) fill the STATUS column of the 2nd argument with 'bad' and
      (d) assign the word in CONN the attribute 'cause'.

For example, in Table 1, the connective word inhibit has been mapped by the formatting procedure into the CONN column, connecting two format rows, its first argument, APC-PPC circuit, a PART, and its second, PA driver, also a PART. Both rows have the PART column of the format filled.

    CONN       PART              STATUS
               APC-PPC circuit
    inhibit
               PA driver

    Table 1: Simplified information format for the sentence:
             APC-PPC circuit inhibiting PA driver

By a previous production rule, the inhibit has been categorized in the class of impairment verbs. Rule (1) replaces impairment by a format version of "cause to be bad." Specifically, the verb inhibit in the CONN column gets assigned the attribute 'cause'. Since the STATUS column of the second argument is empty, bad is inserted into that STATUS column. Thus, it is inferred that the PA driver is bad because it has been impaired.

Another production rule, summarized as (2), infers that the STATUS column of the first argument (part1) of CONN is also 'bad' and inserts bad into that STATUS column, since it has caused something else to be bad.

(2) if
      (a) CONN has the attribute 'cause' and
      (b) the STATUS of the first argument of CONN is empty and
      (c) the STATUS of the second argument of CONN is 'bad'
    then
      (d) insert 'bad' into the empty STATUS column.

In our example Table 1, 'inhibit' in the CONNective column has been assigned the attribute 'cause', and the STATUS of APC-PPC circuit is empty.
The STATUS of the PA driver contains 'bad', by rule (1). So 'bad' is inserted into the STATUS column of the first argument, yielding APC-PPC circuit bad.

The second stage of the summarization system rates the format rows for their importance to the summary. When it comes time to score the various format rows to determine the most appropriate one for the summary, since "bad" is a member of the class of words signifying malfunction, it will cause both arguments of inhibit to be promoted in importance. An additional scoring increment will accrue to the first argument but not the second because it is a cause rather than an effect. Another rule increments a format row referring to an assembly (a mid-level component), since such a format is more revealing than a format containing a statement about a whole unit or an individual part (such as a transistor). For example, circuit, the head of the PART phrase of the first argument, is identified as belonging to a class of components at the assembly level. As a result, the score of the row containing APC-PPC circuit bad is incremented again.

The third and final stage of summarization is to select the format row or rows with the highest rating. As a result of the various production rule actions, the winning format row is "PART: APC-PPC circuit; STATUS: bad." While other format rows may also have positive scores, only the row with the highest score is selected. The system does not preclude selecting several format rows if they have equally high scores.
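The three stages just described can be caricatured in a few lines of Python. This sketch is not the OPS5 implementation; the data layout, the particular score increments, and the helper names are assumptions chosen only to trace the APC-PPC example through inferencing, scoring, and selection.

    # Illustrative sketch of the three summarization stages (not the OPS5 code).
    # A connective row carries CONN and ARGS (indices of its two argument rows).

    def apply_inference_rules(rows):
        for row in rows:
            if row.get("CONN") and "IMPAIR" in row.get("CATEGORIES", set()):
                arg1, arg2 = row["ARGS"]
                if not rows[arg2].get("STATUS"):      # rule (1): second argument is bad
                    rows[arg2]["STATUS"] = "bad"
                row["ATTR"] = "cause"                 # rule (1): the connective marks a cause
            if row.get("ATTR") == "cause":            # rule (2): the cause is itself bad
                arg1, arg2 = row["ARGS"]
                if rows[arg2].get("STATUS") == "bad" and not rows[arg1].get("STATUS"):
                    rows[arg1]["STATUS"] = "bad"

    def score_rows(rows):
        for r in rows:
            r["SCORE"] = 0
            if r.get("STATUS") == "bad":
                r["SCORE"] += 2                       # malfunction (increment assumed)
            if "ASSEMBLY" in r.get("CATEGORIES", set()):
                r["SCORE"] += 1                       # assembly-level component (increment assumed)
        for r in rows:
            if r.get("ATTR") == "cause":
                rows[r["ARGS"][0]]["SCORE"] += 1      # boost the 'cause' side of the connective

    def select_summary(rows):
        parts = [r for r in rows if "PART" in r]
        best = max(r["SCORE"] for r in parts)
        return [r for r in parts if r["SCORE"] == best]

    rows = [
        {"PART": "APC-PPC circuit", "CATEGORIES": {"ASSEMBLY"}},
        {"CONN": "inhibit", "ARGS": (0, 2), "CATEGORIES": {"IMPAIR"}},
        {"PART": "PA driver", "CATEGORIES": set()},
    ]
    apply_inference_rules(rows)
    score_rows(rows)
    print(select_summary(rows))    # -> the row for "APC-PPC circuit" with STATUS "bad"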
IMPLEMENTATION

The LSP parser is implemented in about 15,000 lines of Fortran 77 code. The parser runs on a DEC VAX 11/780 under the UNIX and VMS operating systems and requires 2 megabytes of virtual memory when executing, of which two-thirds is list space for holding the grammar, dictionary entries, etc. The English grammar, regularization component, and information formatting components are written in Restriction Language, a special language developed for writing natural language grammars [Sager 1975]. The dissemination and summary generation applications programs are written using the OPS5 production system. In total, there are 63 production rules in the applications programs.

EXPERIMENTAL RESULTS

The purpose of this experiment was to test the feasibility of automatically summarizing narrative text in Navy equipment failure messages using techniques of computational linguistics and artificial intelligence. Computer-generated results were compared to those obtained by manual summarization procedures to evaluate the performance of the system. The manual summaries were prepared independently of our experiment by experts who routinely summarize such messages. Since both the natural language processing components and the applications programs were under development while this experiment was being carried out, 12 casualty reports were used for debugging the programs. Subsequently, 12 other reports were used for the computer-human comparison.

For an appropriate summary line to be generated, it is necessary that 100% of the sentences in a text be processed correctly by the natural language procedures. The natural language analysis procedures processed 100% of the sentences contained in the documents; this percentage includes 9 sentences (25%) that were paraphrased and rerun because they were not correctly processed on their first run. Paraphrasing these sentences brought the total number of sentences from 30 to 38. The sentences were paraphrased to expedite processing, since the major purpose of running the messages was to investigate methods of summarization and not the performance of the natural language processing system. 70 format lines were generated from 38 sentences in 12 messages.

The computer-generated results of the summarization program compare favorably to those obtained manually. Figure 1 shows a comparison of the two sets of results for the 12 test documents. The discrepancies between the computer-generated results and the manual results are summarized in Figure 2.

    Doc.   Machine          Manual         Agreement
           # format rows    # sentences    Machine/Manual
     1.    1                1              1/1
     2.    1                1              1/1
     3.    1                1              1/1
     4.    1                1              0/1
     5.    1                2              1/2
     6.    2                1              1/1
     7.    1                1              1/1
     8.    2                1              1/1
     9.    1                1              0/1
    10.    1                2              1/2
    11.    1                1              1/1
    12.    1                2              1/2
           14               15             10/15

    Fig. 1: Comparison of machine and manual summary results

    - word not included in category list
    - second manual summary not about bad-status
    - second manual summary not in narrative text

    Fig. 2: Analysis of machine and manual summary results

Agreement between machine and manual summaries was 10 out of 15 (Figure 1). The most significant discrepancies (a total of 2) were caused by the system selecting more specific causal information than was indicated in the manual summary. In message 9, which contains the sentence Loss of lube oil pressure when start air compressor engaged for operation is due to wiped bearing, the manual summary line generated was Loss of LO pressure, while the system selected the more specific information that indicated the cause of the casualty, i.e. wiped bearing. Similarly, in message 12, the system selected the line low output air pressure from the assertion low output air pressure resulting in slow gas turbine starts, since it indicated a cause. The program did not identify the second part of the manual summary because its score was not as high as that of the cause low output air pressure. However, its score was the second highest for that message. This suggests that it may be more appropriate to select all the summary lines in some kind of score window rather than only those lines that have the highest score.

In two cases (messages 6 and 8) the system generated two summary texts, although the manual summary consisted of only one sentence. Two summary lines were selected because both had equally high scores. Nonetheless, one of the two summaries was also the manual summary.

In conclusion, the summarization system was able to identify the same summary line as the manual summary 10/15 times (66.7%). For 10 out of 12 messages, the summarization system selected at least one of the same summary lines as the manual generation produced. For two messages, the system was not able to match the manual summary, in one case because the crucial status word was not in the appropriate list in the production rule system and, in a second case, because the automatic procedure identified the more specific causal agent.

CONCLUSION

The results of our work are quite promising and represent a successful first step towards demonstrating the feasibility of integrating computational linguistic and expert system techniques. We recognize that much remains to be done before we have an operational system. Our work up to now has pointed to several areas that require further development.

Refinement of the semantic representation. Our current information format was developed from a limited corpus of 38 messages, including those in the test set. Even within that corpus not all
types of information have been captured - for example, modes of operation, relations between parts and signals, and relations and actions involving more than one part. Some of this information has been incorporated into the expert system. For example, part-assembly-system information has been encoded as a categorization rule. However, it is clear that enrichment of our semantic representation is a high priority. We are considering the use of some external knowledge sources to obtain this information. One possibility is to access machine-readable listings of Navy equipment.

Intersentential processing. Our current implementation does almost no intersentential processing. This has proved marginally adequate for our current applications, but clearly needs to be remedied in the long run. One aspect of this processing is the capture of information that is implicit in the text. This includes missing arguments (subjects and objects of verbs) and anaphors (e.g. pronouns) that can be reconstructed from prior discourse (earlier format lines); such processing is part of the information formatting procedure for medical records [Hirschman 1981]. It should also include reconstruction of some of the implicit causal connections. The reconstruction of the connections will require substantial domain knowledge, of equipment-part and equipment-function relations, as well as "scriptal" knowledge of typical event sequences (e.g. failure - diagnosis - repair).

ACKNOWLEDGMENTS

This research was supported by the Office of Naval Research and the Office of Naval Technology PE-62721N. The authors gratefully acknowledge the efforts of Judith Froscher and Joan Bachenko in processing the first set of messages and providing the specifications for our dissemination system.

REFERENCES

[Buchanan 1978] Buchanan, B.G., Feigenbaum, E.A. DENDRAL and Meta-DENDRAL: their applications dimension. Artificial Intelligence 11, 5-24.

[Fum 1982] Fum, D., Guida, G., Tasso, C. Forward and backward reasoning in automatic abstracting. COLING 82 (J. Horecky (ed)). North Holland Publishing Company.

[Ginsparg 1983] Ginsparg, J.M. A robust portable natural language data base interface. Proceedings of the Conference on Applied Natural Language Processing, ACL.

[Grosz 1983] Grosz, B.J. TEAM, a transportable natural language interface system. Proceedings of the Conference on Applied Natural Language Processing, ACL.

[Grishman 1983] Grishman, R., Hirschman, L., Friedman, C. Isolating domain dependencies in natural language interfaces. Proceedings of the Conference on Applied Natural Language Processing, ACL.

[Hirschman 1982] Hirschman, L., Sager, N. Automatic information formatting of a medical sublanguage. In Sublanguage: Studies of Language in Restricted Domains (Kittredge and Lehrberger, eds). Walter de Gruyter, Berlin.

[Hirschman 1981] Hirschman, L., Story, G., Marsh, E., Lyman, M., Sager, N. An experiment in automated health care evaluation from narrative medical records. Computers and Biomedical Research 14, 447-403.

[Lehnert 1980] Lehnert, W. Narrative text summarization. AAAI-80 Proceedings, 337-339.

[Marsh 1984] Marsh, E., Froscher, J., Grishman, R., Hamburger, H., Bachenko, J. Automatic processing of Navy message narrative. NCARAI Internal Report.

[McDermott 1980] McDermott, J. R1: A rule-based configurer of computer systems. Carnegie-Mellon University, Department of Computer Science Report CMU-CS-80-119, April 1980.

[Montgomery 1983] Montgomery, C. Distinguishing fact from opinion and events from meta-events.
Proceedings of the Conference on Applied Natural Language Processing, 55-61, Assn. for Computational Linguistics.

[Sager 1975] Sager, N., Grishman, R. The Restriction Language for computer grammars of natural language. Communications of the ACM, 18, 390.

[Sager 1978] Sager, N. Natural language information formatting: The automatic conversion of texts to a structured data base. In Advances in Computers 17 (M.C. Yovits, ed), Academic Press.

[Sager 1981] Sager, N. Natural Language Information Processing. Addison-Wesley.

[Schank 1977] Schank, R., Abelson, R. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates.

[Shortliffe 1976] Shortliffe, E.H. Computer-based medical consultations: MYCIN. American Elsevier.
An Interactive Computer-based Tutor for LISP*

Robert G. Farrell, John R. Anderson, Brian J. Reiser
Advanced Computer Tutoring Project
Department of Psychology, CMU
Pittsburgh, PA 15273 USA

* This research was supported by grants N00014-61-C-0335 and N00014-84-K-0064 from the Office of Naval Research.

Abstract

This paper describes an intelligent computer-based tutor for LISP that incorporates some of the ingredients of good private tutoring. The tutor consists of a problem-solver that generates steps toward a solution and an advisor that compares the problem-solver's steps to the student's steps. Our system can interact with students in a number of different problem spaces for algorithm design and coding. The tutor reduces memory demands by displaying relevant contextual information and directs problem-solving by immediately intervening when a student generates an unacceptable partial answer. Initial experiments indicate that our tutor is approximately twice as effective as classroom instruction.

Introduction

Students have extreme difficulty learning their first programming language. This difficulty is magnified by the learning environment - a cold terminal, an unforgiving textbook, and an inaccessible teacher. The student may be entirely lost until an experienced student or teaching assistant volunteers their expertise. We estimate that private instruction is between two and four times as effective as classroom instruction. Students taught by private tutors learn both more quickly and more deeply than students in classrooms (McKendree, Reiser & Anderson, 1984). Our goal is to capture private tutors' expertise by constructing intelligent computer-based tutors that can interactively help students solve problems. We also want to test our theory of how people learn complex skills (Anderson, 1983) and more specifically how people learn to program (Anderson, Farrell, & Sauers, 1984).

A good human tutor can follow a student's problem solution, giving suggestions when the student makes an incorrect or non-optimal step or when the student is lost. Human tutors can give this type of tutorial assistance because they infer a model of the student's knowledge. We follow students' problem-solving through a similar process, called interactive student modelling. Our system continually monitors the student's progress and tries to assess the knowledge that the student must have in order to produce the given behavior. This knowledge is represented in the form of GRAPES production rules and goals. In addition, the tutor has a set of common "buggy" rules (Brown & Burton, 1978) and "buggy" goals that it can recognize. Interactive student modelling is achieved by inferring which rules and goals in the tutor's catalog could possibly produce the observed student behavior. Because of our detailed model, our system can convey the heuristic knowledge needed to solve a wide range of beginning programming problems.

Our tutor works through the problems with the student interactively. It consists of a problem-solver and an advisor. We first describe how the problem-solver helps to interactively model students as they learn to program. We then describe the advisor and its tutoring strategy. Finally, we discuss three features of the tutor which we feel contribute to its effectiveness:

1. Use of different problem spaces to cover a broad range of programming behavior
2. Use of graphic reminders to reduce the amount of information that a student must remember while programming
3. Use of immediate feedback to direct problem solving and reduce learning time.
Interactive Student Modelling

In previous work (Anderson, Farrell, & Sauers, 1984; Anderson, Pirolli, & Farrell, in press) we outlined a detailed theory of how students learn to program in LISP. We used GRAPES (Sauers & Farrell, 1982), a Goal-Restricted Production System, to model students at many different levels of performance. Our current work incorporates these models into a tutoring system that can interactively assess a student's knowledge during problem-solving.

In this paper, we describe an initial version of a computer-based LISP tutor that incorporates some of the ingredients of good private tutoring (Anderson, Boyle, Farrell, and Reiser, 1984). Students learn LISP with our tutor by first reading some short instructional material and then working through a series of problems. We plan to use our tutor to teach a 30 hour course in LISP which covers the basic structures and functions of LISP, function definition, conditionals and predicates, helping functions, recursion, and iteration. We have currently implemented 18 hours of this instruction.

The Tutoring System Control Structure

The LISP tutoring system consists of two major components: the problem-solver and the advisor. The problem solver consists of the GRAPES interpreter, novice model rules, and buggy or "mal" rules (Sleeman, 1982). The advisor is a production system interpreter much like GRAPES; it also provides a tutorial strategy and many facilities for creating tutoring sessions including graphics, text generation, and parsing. The tutor interpreter executes tutoring rules (t-rules) (Clancey, 1982) which contain patterns for creating explanations and menu entries.

A problem input to the tutor consists of a small data base of facts and an initial goal. The problem-solver tries to decompose the initial goal into easier subgoals, using the novice model rules and the facts in the data base. The student also tries to decompose the goal using a goal description generated by the tutor and the facts that appeared in the instruction booklet given before each problem session.

The advisor matches the problem-solver's next step against the student's next step and categorizes the student's response. Since there are many ways to solve any interesting programming problem, the problem-solver must generate a list of possible correct and incorrect actions and the advisor matches all of them against the student's answer. If the student generates a correct step toward the solution, the advisor directs the problem-solver to execute the rule corresponding to the student's step. If the student displays a bug, the system generates text explaining why the answer was incorrect. If the student fails to produce a correct answer after a number of trials, the system provides the best answer and generates an explanation of why the answer was the best choice.
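In outline, one cycle of this control structure can be sketched as follows. The Python fragment is only a schematic rendering of the behavior just described; the rule interface (matches/apply) and the feedback policy labels are assumptions, not the GRAPES implementation.

    # Schematic sketch of one advisor cycle (not the actual GRAPES/tutor code).
    # The problem-solver proposes every step derivable from correct and buggy
    # rules; the advisor classifies the student's step against them.

    def advisor_cycle(state, goal, novice_rules, buggy_rules, student_step):
        correct = [(r, r.apply(state, goal)) for r in novice_rules if r.matches(state, goal)]
        buggy = [(r, r.apply(state, goal)) for r in buggy_rules if r.matches(state, goal)]

        for rule, step in correct:
            if step == student_step:
                return ("accept", rule)        # commit to this rule and update the student model
        for rule, step in buggy:
            if step == student_step:
                return ("explain-bug", rule)   # generate text from the buggy rule's explanation pattern
        return ("query", None)                 # unrecognized step: hint, and eventually show the best answer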
Interacting with the System

The tutor's top window displays explanations, hints, and queries, the "code" window provides a structured editing environment for entering LISP code, and the bottom window displays the problem statement and any planning information or reminders. The tutor brings the student's attention to new information by flashing the appropriate window.

The tutor interprets each keystroke typed by the student and gives immediate feedback about correct and incorrect steps. At any time, the student can press a clarify key to get additional help about the problem or an info key to access a tree-structured help facility. When a student types a function name, place holders appear for the function arguments. The structured editor allows the student to code these arguments in any order. A spelling corrector and parentheses checker help the student enter code in a graceful manner.

Using Different Problem Spaces

Producing a program in any language consists of a medley of algorithm design, coding, and debugging (Brooks, 1977). A good human tutor can converse with the student in a variety of problem spaces. In this section we describe how our tutor communicates in the problem-spaces involved in algorithm design and coding. We are not concerned with debugging since the tutor never allows the student to produce a final solution that is incorrect. Our tutor currently utilizes three problem spaces for coding and algorithm design: a coding space, a means-ends analysis space, and a problem decomposition space.

The LISP Coding Problem Space

The LISP coding problem space is used in normal problem-solving. The student enters LISP code in a syntax-based editor. The hierarchical structure of the solution is represented by symbols to be expanded. The student's plan for the solution is represented by the structure and name of the symbols. For example:

    (defun subset (list1 list2)
      (cond <TERMINATING-CASE>
            <RECURSIVE-CASE>
      )
    )

illustrates that the student is using CDR recursion to solve the subset problem. The student can choose to code either the terminating case or recursive case first.

CDR recursion is a programming plan (Soloway, 1980; Rich & Shrobe, 1978) well known to expert LISP programmers but difficult for novices to induce on their own. Part of the utility of a programming tutor is to introduce powerful programming techniques like CDR recursion during problem-solving (Anderson, Boyle, Farrell, and Reiser, 1984).

The Means-Ends Analysis Problem Space

The means-ends analysis (Newell & Simon, 1972) space is used when the student is having trouble producing code for a problem that can be characterized by a set of successive operations on an example. In this problem space, the student can develop a solution by supplying LISP operators that reduce differences between the current state and the goal state in the example. Figure 1 illustrates a sample interaction with the tutor during means-ends analysis. The student is trying to produce some code to get all but the last element of a list.

Menus list both correct and incorrect ways of performing an operation. The menu entries are generated from patterns associated with both good and buggy rules in the novice model. Once the student picks a correct entry, he or she must provide a function that will perform the operation described. The tutor separately assesses the student's knowledge of what operations must be performed and their ability to implement those operations in LISP.

Problem Decomposition

The problem decomposition space is used when the student is having trouble producing code for a problem that can be easily decomposed into pieces. The conceptual pieces of the problem may not correspond exactly to the form of the code. The system displays a menu of possible decompositions of the problem and the student must pick the correct answer.
The tutor makes sure that the student actually implements their algorithm when finally producing the LISP code. Again, the tutor separately assesses the students' ability to derive the algorithm from their ability to implement the algorithm.

Reducing Memory Demands

Solving programming problems requires holding a great deal of requisite information in a mental working memory. This requisite information consists of unsolved goals, partial products of calculations, and descriptions of LISP functions. We estimate that half of students' time spent solving programming problems is spent recovering from working memory failures (Anderson, Farrell, & Sauers, 1984). Anderson and Jeffries (1984) demonstrated that working memory load in one part of a task causes students to err on other parts of the task, even if those parts are logically unrelated. Therefore, it is extremely important that tutors keep working memory load to a minimum.

One way that the tutor keeps working memory load low is by displaying descriptions of the student's goals on the terminal screen. The student's goals are represented in GRAPES and the tutoring system uses this representation to generate English descriptions. The tutor displays the overall goal and the current goal as well as the goals along the shortest path between these two goals. For example, if the student is solving for the second argument to lessp in the following code:

    (defun lessoreqp (x y)
      (or (equal x y)
          (lessp x A)))

then the system would display the following goal context:

    Write a function called lessoreqp.
    Test if x is less than or equal to y.
    Test if x is less than y.
    Write code for the second argument to lessp.

Students solving LISP problems also have trouble remembering partial results. In our LISP tutor, any calculations that the student performs on examples are displayed in a window for later reference. In addition, the partially-correct code is always displayed on the screen.

Immediate Feedback

Novices spend a large amount of time exploring incorrect solutions that result in little learning. A good human tutor directs the student toward correct answers, while still letting the student learn from mistakes. Lewis and Anderson (1984) have shown that students learn more slowly when they are given delayed feedback about their erroneous applications of operators. In our studies of LISP learning (Anderson, Farrell, & Sauers, 1982), our subjects spent more than half of their time exploring wrong paths or recovering from erroneous steps.

Our tutor monitors the student with every keystroke, giving immediate feedback when it detects an error. Since the student never strays more than one step off of a correct solution path, our tutor can model the student in great detail. When the student makes an error, an explanation is generated from a pattern stored with the buggy rule and a query is generated from the student's current goal, directing the student toward a correct answer.

Our tutor cannot generate immediate feedback when the student's behavior does not disambiguate which goal he or she is pursuing. The tutor is silent until it can disambiguate the goal. If the student is generating an especially ambiguous piece of code, the tutor may display a menu of goals and ask the student to decide among them. Once the student's goal is known, the tutor can then intervene with tutorial assistance.
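As a rough illustration of this silence-until-disambiguation policy, consider the sketch below. Representing each candidate goal by the code the tutor predicts for it, and the three policy labels, are assumptions made purely for illustration; the actual tutor works over GRAPES goals and keystroke-level events.

    # Illustrative sketch of goal disambiguation (assumed representation):
    # each candidate goal maps to the code the tutor predicts for it.

    def feedback_policy(typed_so_far, goal_predictions):
        consistent = {goal: code for goal, code in goal_predictions.items()
                      if code.startswith(typed_so_far)}
        if not consistent:
            return ("intervene", None)                 # no goal explains the input: immediate feedback
        if len(consistent) == 1:
            goal = next(iter(consistent))
            return ("track", goal)                     # goal disambiguated: tutoring can target it
        return ("silent-or-menu", sorted(consistent))  # ambiguous: stay silent or offer a goal menu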
Conclusion

Our computer-based tutor for LISP incorporates some abilities of good human tutors. Our system can interact with students in a number of different problem spaces for algorithm design and coding. The tutor reduces memory demands by displaying relevant contextual information and directs problem-solving by immediately intervening when a student generates an unacceptable partial answer. Our system interactively models the student by updating a set of production rules. These production rules also serve as a novice model that follows the student as he or she solves the problem. We performed an evaluation study on our tutor (McKendree, Reiser, & Anderson, 1984) which confirms our belief that it is about twice as effective as classroom instruction. We plan to further test the tutor's pedagogical effectiveness by automating a 30 hour LISP course taught in the fall of 1984.

References

Anderson, J.R. The Architecture of Cognition. Cambridge, MA: Harvard University Press, 1983.

Anderson, J.R., Farrell, R., and Sauers, R. Learning to program in LISP. Cognitive Science, 1984, in press.

Anderson, J.R., Pirolli, P., and Farrell, R. Learning recursive programming. In forthcoming book edited by Chi, Farr, & Glaser.

Anderson, J.R., Boyle, C.F., Farrell, R.G., and Reiser, B.J. Cognitive Principles in the Design of Computer Tutors. Paper submitted to the CACM.

Brooks, R.E. Towards a theory of the cognitive processes in computer programming. International Journal of Man-Machine Studies, 1977, 9, 737-751.

Brown, J.S. and Burton, R.R. Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science, 1978, 2, 155-192.

Clancey, W.J. Tutoring rules for guiding a case method dialogue. In D. Sleeman and J.S. Brown (Eds.), Intelligent Tutoring Systems, New York: Academic Press, 1982.

Lewis, M. and Anderson, J.R. The role of feedback in discriminating problem-solving operators. Submitted.

McKendree, J., Reiser, B.J., and Anderson, J.R. Tutorial goals and strategies in the instruction of programming skills. Paper submitted to the 1984 conference of the Cognitive Science Society.

Newell, A. and Simon, H. Human Problem Solving. Englewood Cliffs, N.J.: Prentice-Hall, 1972.

Rich, C. and Shrobe, H. Initial report of a LISP programmers' apprentice. IEEE Trans. Soft. Eng., 1978, 456-466.

Sauers, R., and Farrell, R. GRAPES user's manual. ONR Technical Report ONR-82-3, Carnegie-Mellon University, 1982.

Sleeman, D. Assessing aspects of competence in basic algebra. In D. Sleeman and J.S. Brown (Eds.), Intelligent Tutoring Systems, New York: Academic Press, 1982.

Sleeman, D. & Brown, J.S. (Eds.). Intelligent Tutoring Systems. New York: Academic Press, 1982.

Soloway, E.M. From problems to programs via plans: The context and structure of knowledge for introductory LISP programming. COINS Technical Report 80-19, University of Massachusetts at Amherst, 1980.

    What can we do to change (a b c d) into (a b c)?

    PRESS:   IF YOU WANT TO:
      1      remove d from (a b c d)
      2      get each element and gather them into a list
      3      have the tutor choose
      4      get information about LISP

    Menu choice: 1

    Very Good. What function will remove just 1 element from a list
    and return the result?

    Function name: I don't know

    The function CDR will remove an element from a list. You will have
    to apply CDR to (a b c d) to remove d. Remember that in this case,
    d is the last element of the argument list. So, in order to apply
    CDR, you have to get d in the first position of the list.

    Figure 1: The LISP Tutor Teaching with Means-Ends Analysis
Intention-Based Diagnosis of Programming Errors

W. Lewis Johnson - Elliot Soloway
Yale University, Computer Science Department
New Haven, Ct. 06520

Abstract

PROUST is a system which identifies the non-syntactic bugs in novices' programs and provides novices with help as to the misconceptions under which they were laboring that caused the bugs. In this paper we will discuss the methods which PROUST uses to identify and diagnose non-syntactic bugs. Key in this enterprise is PROUST's ability to cope with the significant variability exhibited by novices' programs: novice programs are designed and implemented in a variety of different ways, and usually have numerous bugs. We argue that diagnostic techniques that attempt to reason from faulty behavior to bugs are not effective in the face of such variability. Rather, PROUST's approach is to construct a causal model of the programmer's intentions and their realization (or non-realization) in the code. This model serves as a framework for bug recognition, and allows PROUST to reason about the consequences of the programmer's decisions in order to determine where errors were committed and why.

1. Introduction

We have been constructing a system, PROUST, that can identify the non-syntactic bugs in novices' programs and provide students with help in resolving the misconceptions that caused the bugs. In this paper we will discuss the principal techniques PROUST uses to identify and explain non-syntactic bugs. PROUST's analysis techniques were designed to cope with the key feature of our domain: the high degree of variability in novice programs. Novice programmers often have misconceptions about programming language syntax and semantics, resulting in large numbers of seemingly bizarre bugs. They also lack the expert's knowledge about how to analyze program specifications and design and implement algorithms. The result is that the intentions underlying novice programs, and the methods used for realizing these intentions, tend to vary greatly.

We present an approach to error diagnosis which integrates identifying program errors with discovering the programmer's intentions. In this view, bug diagnosis involves reasoning in the space of intentions as well as in the space of program behavior. This contrasts with conventional fault diagnosis methods which do not take intentions into account, or which fail to distinguish between intended program behavior and actual program behavior. The intentional model that PROUST constructs provides a framework for testing bug hypotheses, and for comparing alternative hypotheses using differential analysis. Causal reasoning about the programmer's intentions makes it possible to determine which of the programmer's intentions is faulty and why.

In what follows, we first present arguments in support of the intention-based approach, followed by a more detailed description of PROUST's approach to error diagnosis. Results of empirical tests of PROUST with novice programmers will then be presented.

This work was co-sponsored by the Personnel and Training Research Groups, Psychological Sciences Division, Office of Naval Research and the Army Research Institute for the Behavioral and Social Sciences, under Contract No. N00014-82-K-0714, Contract Authority Identification Number, Nr 154-492. Approved for public release; distribution unlimited. Reproduction in whole or part is permitted for any purpose of the United States Government.
2. Three Approaches to Diagnosis

We can identify at least three types of diagnostic reasoning techniques which might be applicable to bug diagnosis: classificatory reasoning about symptoms, causal reasoning about behavior, and intention modeling.

- In classificatory reasoning about symptoms, the diagnostician knows what classes of symptoms different classes of faults exhibit. The diagnostician extracts important facts from the findings, uses them to suggest types of faults which might explain the findings, and then does further analysis in order to refine the diagnosis. The diagnostician's knowledge about the domain can thus be boiled down to a collection of classificatory rules relating symptoms to disease classes. Medical diagnosis systems tend to depend particularly heavily upon classificatory reasoning [15, 3]. Classificatory approaches to program debugging have also been attempted [6].

- Causal reasoning about behavior uses an understanding of the structure and function of a system and its components to identify faults which are responsible for faulty behavior [11, 4, 5, 14]. In program debugging the causal reasoning usually takes the form of analysis of control and data flow. Causal reasoning is useful for diagnosing errors in domains involving a degree of complexity and variability, where empirical associations between faults and symptoms are unavailable or inconclusive.

- Intention modeling is necessary, however, when the programmer's intentions are at odds with what the programming problem requires. Here, the programmer's view of the problem must be determined and then tied to the manner in which it has/has not been realized. Knowledge of intentions assists the diagnostic process in several ways. Predictions derived from the intention model enable top-down understanding of buggy code, so that diagnosis is not thrown off when bugs obscure the code's intended behavior. The right fix for each bug can be found [8]. Finally, causal reasoning about the implications of the implementor's intentions makes it possible to test bug hypotheses by looking at other parts of the program and verifying the implications of bug hypotheses.

We will argue that neither classificatory reasoning about symptoms nor causal reasoning about behavior is adequate for fault diagnosis in novice programs. Rather, accurate bug diagnosis depends upon intention modeling.

3. Intention-Based Diagnosis vs. Other Approaches: An Example

We will first walk through an example of bug diagnosis in order to contrast the intention-based approach with the other diagnostic approaches. Further details of how intention-based diagnosis works will be provided in subsequent sections.

Figure 1 shows a programming problem, to compute the average of a series of inputs, and an actual buggy student solution. Instead of reading a series of inputs and averaging them, this program reads a single number New, and outputs the average of all the values between New and 99999, i.e., (New+99999)/2. We believe that the error in this program is that the student wrote New := New+1 at line 12 instead of Read(New), as indicated by PROUST's output, which is shown in the figure. This bug is probably the result of a programming misconception: novices sometimes overgeneralize the counter increment statement and use it as a general mechanism for getting the next value.
    Problem: Read in numbers, taking their sum, until the number 99999 is
    seen. Report the average. Do not include the final 99999 in the average.

     1 PROGRAM Average( input, output );
     2 VAR Sum, Count, New: INTEGER;
     3     Avg: REAL;
     4 BEGIN
     5   Sum := 0;
     6   Count := 0;
     7   Read( New );
     8   WHILE New<>99999 DO
     9     BEGIN
    10       Sum := Sum+New;
    11       Count := Count+1;
    12       New := New+1
    13     END;
    14   Avg := Sum/Count;
    15   Writeln( 'The average is ', Avg );
    16 END.

    PROUST output:
    It appears that you were trying to use line 12 to read the next input
    value. Incrementing NEW will not cause the next value to be read in.
    You need to use a READ statement here, such as you use in line 7.

    Figure 1: Example of analysis of a buggy program

A symptom classification approach would make use of general heuristic rules for relating symptoms to causes. Example rules might be the following:

- If a program terminates before it has read enough input, it may have an input loop with a faulty exit test.
- If a program outputs a value which is too large, check the line that computes the value and make sure that it is correct.

Neither of these rules addresses the true cause of the bug; instead of focusing on the way new values are generated, one rule focuses on the exit test, and the other focuses on the average computation. In general, many different program faults can result in the same symptoms, so knowledge of the symptoms alone is insufficient for distinguishing faults.

The principal diagnostic methods which employ causal analysis of behavior are symbolic execution, canonicalization, and troubleshooting. If we followed a symbolic execution paradigm, as in PUDSY [10], we would go through the following sequence of steps: 1) use causal knowledge of program semantics to derive a formula describing the output of the program, 2) compare it against a description of what the program is supposed to do, in order to identify errors, and then 3) trace the erroneous results back to the code which generated them. In the example in Figure 1, we would determine that the program computes New/2+49999.5, compare this against what it should compute, namely (ΣNew)/Count(New), then examine the parts of the program which compute the erroneous parts of the formula. The main problem here is that it is hard to compare these two expressions and determine which parts are wrong. This requires knowledge of which components of the first expression correspond to which components of the second expression. Expression components correspond only if their underlying intentions correspond. Thus some knowledge of the programmer's intentions is necessary in order for the symbolic execution approach to generate reliable results.

Canonicalization techniques [1] translate the student's program into a canonical dataflow representation and compare it against an idealized correct program model. Again the aim is to determine intentions by comparison. Such comparisons are easier to make, but only if the student's intended algorithm is the same as the model algorithm. Non-trivial programming problems can be solved in any of a number of ways. Thus an approach which compares against a single model solution cannot cope with the variability inherent in programming.

Troubleshooting approaches suffer from similar difficulties as symbolic execution. In program troubleshooting, the user is expected to describe the specific symptoms of the fault, rather than give a description of the intended output. The system then traces the flow of information in the program to determine what might have caused the symptoms. In this example the symptom is that the program computes New/2+49999.5. We have already seen in the case of symbolic execution that this information alone is not sufficient to pinpoint the bug.

Because of the problems that analysis of symptoms and behavior pose in debugging, a number of implementors of bug
The system then traces the flow of information in the program to determine what might have caused the symptoms. In this example the symptom is that the program computes New/2+49999.5. We have already seen in the case of symbolic execution that this information alone is not sufficient to pinpoint the bug. Because of behavior pose the problems in debugging, that analysis of symptoms a number of implementors of and bug 163 diagnosis systems have augmented these techniques with recognizers for stereotypic programming plans [13, 12, 161. This is an attempt at determining the intentions underlying the code; by recognizing a plan we can infer the intended function of the code which realizes the plan, which in turn helps in localizing bugs. Unfortunately plan recognition by itself is not adequate for inferring intentions. First, bugs can lead to plan recognition confusions. In the example, the loop looks like a counter-controlled iteration; diagnosing the bug requires the realization that the loop was not intended to be counter controlled. Second, bugs may arise not in plans themselves, but in the way that they interact or in the manner that the programmer has employed them. To a certain extent one can determine the interactions of plans in a program by analyzing the flow of information among the plans. However, we will consider examples in the next section where the intended plan interactions and the actual plan interactions are different. In such cases a better understanding of the programmer’s intentions is needed than what plan recognition alone can provide. In contrast to these other approaches, the intention-based approach attempts to construct a coherent model of what the programmer’s intentions were and how they were realized in the program. Instead of simply listing the plans which occur in the program, one must build a goal structure for the program; i.e., one must determine what the programmer’s goals were, and how he/she went about realizing those goals, using plans or some other means. This is accomplished in PROUST as fo!!ows. PROUST is given an informal description of what the program is supposed to do. It then makes use of a knowledge base of relations between goals and plans, on one hand, and rules about how goals combine and interact, on the other hand, to suggest possible goal structures for the program. The goal structure that fits the best suggests that the Read at line 7 satisfies the goal of initializing the YHILE loop; the loop is organized to process each value and then read the next value at the bottom of the loop, in a process-n-read-nextn fashion. Lines 5 and 10 are responsible for totaling the inputs, lines 6 and 11 are responsible for counting them, and line 14 computes the average. Given this model, there is no role for the New := New+1 to serve other than as a means to read the next value inside the loop. This leads directly to the conclusion that the student has overgeneralized the use of counter increment statements. 4. Examples of Intention-Based Diagnosis Program analysis in PROUST involves a combination of shallow reasoning for recognizing plans and bugs, and causal reasoning about intentions. The relative importance of each kind of reasoning depends upon the complexity of the program’s goal structure and the extent to which PROUST must analyze the implications of the programmer’s design decisions and misconceptions to determine what bugs they cause. We will show how PROUST reasons about programs and bugs by way of a series of examples. 
The first program example has no bugs, and has a simple goal structure; accordingly, the reasoning processes involved in understanding it are primarily shallow. The next example has bugs, and the goal structure is somewhat more complex; although the bugs are discovered via shallow recognition tactics, a greater amount of reasoning about intentions is required to construct the right goal structure and test the bug hypothesis. In the third example the programmer's intentions are not reflected directly in the code, so PROUST must use knowledge about goal interactions to hypothesize and differentiate possible intention models for the program.

4.1. Shallow reasoning about correct programs

Figure 2 shows a typical introductory programming problem. Figure 3 shows the plan analysis of a straightforwardly correct solution, i.e., one in which the intentions are correct and the program is implemented in accordance with rules of programming discourse [17]. PROUST is supplied with a description of the programming problem, shown in Figure 4, which reflects the problem statement which the students are given. This description is incomplete, in that details are omitted and terms are used which must be defined by reference to PROUST's knowledge base of domain concepts. PROUST derives from the problem description an agenda of goals which must be satisfied by the program. PROUST must go through a process of building hypothetical goal structures using these goals, and relating them to the code. We call this process constructing interpretations of the code. The goal structure for a given program is built by selecting goals to be processed, determining what plans might be used to implement these goals, and then matching them against the code.

Let us consider what happens to the Sentinel-Controlled Input Sequence goal. This goal specifies that input should be read and processed until a specific sentinel value is read. PROUST must determine what plans might be used to realize this goal. PROUST has a knowledge base of typical programming plans, and another knowledge base of programming goals. Each plan is indexed according to the goals it can be used to implement, and each goal is linked to plans and/or collections of subgoals which implement it. PROUST retrieves from these databases several plans which implement the goal. One of these, the SENTINEL-CONTROLLED PROCESS-READ WHILE PLAN, is shown in Figure 3. This plan specifies that there should be a WHILE loop which reads and processes input in a process-and-read-next fashion; we saw a buggy instance of this plan in the program in Figure 1.

    Noah needs to keep track of rainfall in the New Haven area in order to
    determine when to launch his ark. Write a program which he can use to
    do this. Your program should read the rainfall for each day, stopping
    when Noah types "99999", which is not a data value, but a sentinel
    indicating the end of input. If the user types in a negative value the
    program should reject it, since negative rainfall is not possible.
    Your program should print out the number of valid days typed in, the
    number of rainy days, the average rainfall per day over the period,
    and the maximum amount of rainfall that fell on any one day.

    Figure 2: The Rainfall Problem

Each plan consists of a combination of statements in the target language, Pascal, and subgoals. The syntax and semantics of plans, and the methods used for matching them, are discussed in detail in [7].
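To make the two knowledge bases concrete, a drastically simplified sketch follows. The real plan language and matcher are those described in [7]; the Python structures, the keyword-based "matching", and the subgoal strings below are assumptions made only for illustration.

    # Drastically simplified sketch of goal and plan knowledge bases (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Plan:
        name: str
        keywords: list       # crude stand-in for the plan's Pascal statement templates
        subgoals: list       # goals the plan poses in addition to its template

    PLANS_FOR_GOAL = {
        "Sentinel-Controlled Input Sequence": [
            Plan("SENTINEL-CONTROLLED PROCESS-READ WHILE PLAN",
                 keywords=["READ", "WHILE"],
                 subgoals=["Input before the loop", "Input at the bottom of the loop"]),
            Plan("SENTINEL-CONTROLLED READ-PROCESS REPEAT PLAN",
                 keywords=["REPEAT", "READ", "UNTIL"],
                 subgoals=["Input at the top of the loop", "Sentinel Guard around the body"]),
        ],
    }

    def candidate_plans(goal, program_text):
        """Return (plan, missing-template-parts) pairs for every plan indexed under the goal."""
        text = program_text.upper()
        results = []
        for plan in PLANS_FOR_GOAL.get(goal, []):
            missing = [k for k in plan.keywords if k not in text]
            results.append((plan, missing))
        return results

A real matcher works structurally over parsed code rather than over keywords, and the mismatches it records become the plan differences discussed below.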
Matching the plan against the code involves 1) finding statements which match the Pascal part of the plan, and 2) selecting and matching additional plans to implement the plan's subgoals. For example, the WHILE loop at line 4 matches the Pascal part of the plan. This plan also has two subgoals, both Input goals. The plans for implementing these goals which match the code are both READ SINGLE VALUE plans, i.e., READ statements which read single values. The SENTINEL-CONTROLLED PROCESS-READ PLAN thus matches the program exactly, so this plan is incorporated into the goal structure.

[Figure 3: Plan Recognition - the goals Sentinel-Controlled Input Sequence(?Rainfall, 99999), Output(Average(?Rainfall)), and Sum(?Rainfall), with their plan templates (e.g., a WHILE ?New<>?Stop loop with subgoal Input(?New), and a running total with Init ?Total = 0 and Update ?Total = ?Total + ?New), matched against an annotated Pascal solution to the Rainfall Problem.]

    DefProgram Rainfall;
    DefObject ?Rainfall.DailyRain Type ScalarMeasurement;
    Sentinel-Controlled Input Sequence( ?Rainfall.DailyRain, 99999 );
    Input Validation( ?Rainfall.DailyRain, ?Rainfall.DailyRain<0 );
    Output( Average( ?Rainfall.DailyRain ) );
    Output( Count( ?Rainfall.DailyRain ) );
    Output( Count( ?Rainfall.DailyRain s.t. ?Rainfall.DailyRain>0 ) );
    Output( Maximum( ?Rainfall.DailyRain ) );

    Figure 4: Representation of the Rainfall Problem

PROUST continues selecting goals from the agenda and mapping them to the code, until every goal has been accounted for. This involves some analysis of implications of plans. For example, the choice of plan for computing the Average goal implies that a Sum goal be added to the goal agenda. This is in turn implemented using a RUNNING TOTAL PLAN, shown in the figure. However, in a program such as this relatively little work is involved in manipulating the goal agenda; most of the work in understanding this program is in the plan recognition.

PROUST is thus able to analyze straightforwardly correct programs primarily using shallow plan recognition techniques. This is not a surprising result. We have argued elsewhere [17] that programmers make extensive use of stereotypic plans when writing and understanding programs. Furthermore, we have evidence that novice programmers acquire plans early on [2]. We encourage this by including plans in our introductory programming curriculum. We can therefore assume that plans will play a major role in the construction of the programs that PROUST analyzes. We assume furthermore that if a programmer uses plans correctly, and if they fit together into a coherent design, then the functionality of the plans corresponds closely to the programmer's intentions.

4.2. Differentiating program interpretations

We will now look at an example which involves integrating bug recognition into the process of constructing program interpretations. Recall that the problem statement in Figure 2 requires that all non-negative input other than 99999 should be processed. However, the program in Figure 5 goes into an infinite loop as soon as a non-negative value other than 99999 is read. The reason is that the WHILE statement at line 13 should really be an IF statement.
The programmer is probably confused about the semantics of nested WHILE statements, a common difficulty for novice Pascal programmers [9]. Otherwise the loop is constructed properly. Apparently the programmer understands how WHILE loops work when the body of the loop is straight-line code, but is confused about how multiple tests are integrated into a single loop. We will show how PROUST develops this interpretation for the program.

The bug in this example is encountered while PROUST is processing the Sentinel-Controlled Input Sequence goal. Two plans implementing this goal match the code: the SENTINEL PROCESS-READ WHILE PLAN, which we saw in the previous example, and the SENTINEL READ-PROCESS REPEAT PLAN. The WHILE loop plan matches the loop starting at line 13, while the REPEAT loop plan matches the loop starting at line 3.

(Figure 5 also shows the templates of the two candidate plans, the PROCESS-READ WHILE plan and the READ-PROCESS REPEAT plan, each with constant ?Stop and variable ?New.) The student's program:

    3   REPEAT
    4     WRITELN ('ENTER RAINFALL');
    5     READLN;
    6     READ (RAIN);
    7     WHILE RAIN < 0 DO
    8     BEGIN
    9       WRITELN (RAIN:0:2, ' NOT POSSIBLE, REENTER');
    10      READLN;
    11      READ (RAIN)
    12    END;
    13    WHILE RAIN <> 99999 DO
    14    BEGIN
    15      DAYS := DAYS + 1;
    16      TOTALRAIN := TOTALRAIN + RAIN;
    17      IF RAIN > 0 THEN
    18        RAINDAYS := RAINDAYS + 1;
    19      IF HIGHRAIN < RAIN THEN
    20        HIGHRAIN := RAIN
    21    END
    22  UNTIL RAIN = 99999;

Figure 5: Program Requiring Shallow Bug Reasoning

Although the Pascal portions of these plans match fairly closely, difficulties arise when the subgoals are matched against the program. Consider first the REPEAT loop plan. It indicates that there should be an Input subgoal at the top of the loop, and the remainder of the loop should be enclosed in a Sentinel Guard, i.e., a test to see if the sentinel value has been read. There is in fact a READ statement at line 6 which could satisfy the Input goal. However, the code which follows, at line 7, is not a sentinel guard; instead, it is a loop that performs more input. This indicates that there is a flaw in this model of the code. A similar problem arises when PROUST tries to match the WHILE loop plan. The problem there is that the plan indicates that there should be an Input goal above the loop, but PROUST finds the loop at line 7 interposed between the initial read and the apparent main loop at line 13.

Mismatches between plans and code are called plan differences; whenever a plan fails to match exactly, a plan difference description is constructed describing the mismatches. There are two mechanisms which are used for resolving plan differences. One is to look for some other way of structuring the subgoals of the plans to match the code better. The other is to come up with an explanation of the plan mismatch in terms of bugs or plan transformations. Both mechanisms are needed in this example.

The first step in resolving the plan differences associated with the looping plans is to restructure the subgoals in order to reduce the differences. Besides the READ SINGLE VALUE plans, PROUST has other plans which can be used for input. One plan is a WHILE loop which tests the input for validity as it is being read, and rereads it if the data is not valid. This plan satisfies two goals simultaneously, Input and Input Validation. However, Input Validation is also on the goal agenda, so PROUST combines the two goals and matches the plan.
The result is that in the case of the SENTINEL-CONTROLLED PROCESS-READ WHILE PLAN lines 6 through 12 are viewed as performing the initial input, and in the case of the REPEAT plan these same lines of code are viewed as the main input inside the loop.

Given these interpretations of the subgoals, the main loop plans still do not quite match. The remaining differences are compared against PROUST's bug catalog. This catalog has been built via empirical analyses of the bugs in hundreds of novice programs [9]. It consists of production rules which are triggered by plan differences and which chain, if necessary, in order to account for all the plan differences. In the case of the REPEAT loop plan, the plan difference is that a WHILE statement is found instead of an IF statement; this is listed in the bug catalog, along with the probable associated misconception. In the case of the WHILE loop plan, there are two plan differences: the Input subgoal is missing from inside the loop, and the entire loop is enclosed inside of another loop. The missing input is listed in the bug catalog; novices sometimes have the misconception that a READ statement is unnecessary in the loop if there is a READ statement elsewhere in the program. The enclosing loop bug is not listed in the catalog; that does not mean that this is an impossible bug, only that there is no canned explanation for this kind of plan difference.

We have thus constructed two different interpretations of the implementation of the Sentinel-Controlled Input Sequence in this program. There are others which we have not listed here. It is necessary to construct all these different interpretations, instead of just picking the first one that appears reasonable, because there is no absolute criterion for when a plan should be considered to match buggy code. The only way to interpret code when bugs are present is to consider the possible interpretations and pick one which appears to be better than the others. In other words, PROUST must perform differential diagnosis in order to pick the right interpretation. The intention model makes it possible to construct such a differential; PROUST uses it to predict possible plans and subgoal structures which might be present, thus enumerating hypotheses to consider.

Choosing from among the possible interpretations proceeds as follows. If there is an interpretation which is reasonably complete, and which is superior to competing interpretations, PROUST picks it, and saves the alternatives, in case evidence comes up later which might invalidate the decision. Here the REPEAT loop interpretation is superior, because each part of the plan has been accounted for, albeit with bugs. The WHILE loop interpretation is not as good, because the embedded loop plan difference is unexplained, and because the Input subgoal inside the loop was never found. PROUST therefore adopts the REPEAT loop interpretation, and adds the "while-for-if" bug to the current diagnosis for this program.

We see in this example that although shallow reasoning is used to recognize plans and bugs, this works only because causal reasoning about the programmer's intentions provides a framework for performing plan and bug recognition, and for interpreting the results. The intention model makes it possible to employ differential diagnosis techniques, which helps PROUST arrive at the correct interpretation of the program even when bugs make it difficult to determine what plans the programmer was trying to use.
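The bug-catalog rules and the preference for interpretations that account for all of the code can be sketched as follows. The rule contents, the scoring function, and all names are illustrative assumptions, not PROUST's actual rules or weights.

```python
# Sketch of plan-difference explanation and differential diagnosis.
# Rules, names, and the scoring scheme are assumptions for illustration only.

BUG_RULES = [
    # (plan difference observed, bug label, associated misconception)
    ("WHILE-instead-of-IF", "while-for-if",
     "confusion about how multiple tests are integrated into one loop"),
    ("missing-Input-inside-loop", "missing-read-in-loop",
     "belief that one READ elsewhere in the program suffices for the loop"),
]

def explain_differences(differences):
    """Split the plan differences into those the catalog explains and those it does not."""
    explained, unexplained = [], []
    for diff in differences:
        matches = [rule for rule in BUG_RULES if rule[0] == diff]
        (explained if matches else unexplained).append(diff)
    return explained, unexplained

def score(interpretation):
    """Prefer interpretations whose plan differences are all explained."""
    explained, unexplained = explain_differences(interpretation["differences"])
    return len(explained) - 10 * len(unexplained)   # unexplained code is heavily penalized

repeat_interp = {"plan": "SENTINEL READ-PROCESS REPEAT PLAN",
                 "differences": ["WHILE-instead-of-IF"]}
while_interp = {"plan": "SENTINEL PROCESS-READ WHILE PLAN",
                "differences": ["missing-Input-inside-loop", "loop-enclosed-in-loop"]}

best = max([repeat_interp, while_interp], key=score)   # picks the REPEAT interpretation
```

Under this scoring the REPEAT interpretation wins for the same reason given in the text: every part of it is accounted for, albeit with a catalogued bug, while the WHILE interpretation leaves a plan difference unexplained.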
In contrast, analysis methods which analyze the program's behavior would probably be fooled by this program, because they would treat the WHILE statement at line 13 as a loop, rather than as an IF statement. Such a system might be able to determine that the program goes into an infinite loop, but it would not be able to explain to the programmer why his/her intentions were not realized.

4.3. Differentiating intention models

Figure 6 shows a program which requires deeper analysis of the programmer's intentions. This example illustrates how programming goals are sometimes realized indirectly in a program, by interacting with the implementation of other goals. Debugging such programs requires the ability to reason about goal interactions in order to differentiate models of the programmer's intentions. Causal reasoning about intentions is essential in this enterprise.

Let us examine how PROUST maps the goal Input Validation onto this program, i.e., how it determines how bad input is filtered from the input stream. One plan which implements this goal is the BAD INPUT SKIP GUARD, which encloses the computations in the loop with an IF-THEN-ELSE statement which tests for bad input. PROUST discovers plan differences when it tries to match this plan against the program. PROUST can find a test for bad input in the loop, but it is too far down in the loop, and it does not have an ELSE branch. It also contains an unexpected counter decrement statement, NUMBER := NUMBER - 1. It turns out that these plan differences are explained not by postulating a bug in the BAD INPUT SKIP GUARD plan, but by inferring an altogether different goal structure for the program.

(Figure 6 shows the BAD INPUT SKIP GUARD plan, with variables ?Val and ?Pred, and the plan differences noted against it: (1) the test part is misplaced, appearing in the Process: segment of the Read & process goal instead of at the top of the loop; (2) the ELSE branch, with its Output diagnostic subgoal, is missing; (3) there is an unexpected counter decrement. It also shows the student's main loop, WHILE RAINFALL <> 99999 DO, which increments NUMBER, updates DAYS, HIGHEST and TOTAL, and contains a test IF RAINFALL < 0 THEN that writes 'BAD INPUT' and executes NUMBER := NUMBER - 1, annotated with the two competing hypotheses: hypothesis 1, the contingent goal is present; hypothesis 2, the contingent goal is absent.)

Figure 6: Differentiating Intention Models

PROUST assumed that a single plan would be used to implement the Input Validation goal. Instead, two plans are used in this program: one prints out a message when bad input is read in, and the other decrements the counter NUMBER when bad input is read in. In fact, for this design to be correct, there would have to be a third plan, which subtracts bad input from the running total, TOTAL. We must therefore reformulate Input Validation as a contingent goal:

    Contingency( Effected-by( ?Plan, (?Rainfall:DailyRain < 0) ),
                 Compensate( ?Plan, (?Rainfall:DailyRain < 0) ) ).

This goal states that whenever a plan is effected by the rainfall variable being less than zero, a goal of compensating for this effect must be added to the goal agenda.
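The contingent-goal reformulation can be illustrated with a minimal sketch. The affects() test and the plan names are assumptions made for the example; the Contingency notation itself is the one given above.

```python
# Sketch of contingent-goal expansion: whenever a plan is affected by the
# bad-input condition, a compensation goal is added to the goal agenda.
# The affects() predicate and the plan names are illustrative assumptions.

BAD_INPUT = "?Rainfall:DailyRain < 0"

def affects(plan, condition):
    """Stand-in test: does bad input change this plan's result?"""
    # e.g. the counter and the running total are affected, the maximum is not
    return plan in ("count-plan", "running-total-plan")

def expand_contingent_goal(plans_in_program, agenda):
    """Contingency( Effected-by(?Plan, cond), Compensate(?Plan, cond) )."""
    for plan in plans_in_program:
        if affects(plan, BAD_INPUT):
            agenda.append(("Compensate", plan, BAD_INPUT))
    return agenda

agenda = expand_contingent_goal(
    ["count-plan", "running-total-plan", "maximum-plan"], [])
# The agenda now demands compensation on the counter and on the running total,
# but not on the maximum.  The example program compensates only the counter,
# which is exactly the discrepancy PROUST must explain.
```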
If we assume that bad input was being filtered in a contingent fashion, then input will be tested when it might affect the result; it would not be tested when the maximum, HIGHEST, is computed, for example. This hypothesis is generated by a bug rule which fires when PROUST tries to explain the plan differences listed in the previous paragraph. This rule stipulates that whenever a guard plan only guards part of the code that it is supposed to guard, a new goal structure should be created in which the guard goal is reformulated as a contingent goal.

Testing a contingent goal hypothesis is difficult, because it depends upon what plans are used to implement the other goals in the agenda, and PROUST does not yet know what those plans are. PROUST must construct a differential of two goal structure hypotheses: hypothesis 1 holds that Input Validation has been reformulated as a contingent goal, and hypothesis 2 holds that the programmer neglected Input Validation altogether, and the code which appears to guard against bad input really serves another purpose. In order to test these hypotheses, PROUST activates two demons. The first demon tests hypothesis 1, by looking for plans which satisfy the contingency test and checking to see that bad input has been accounted for. The other demon tests hypothesis 2, by looking for cases where bad input should have been checked for but was not, and for alternative explanations for the code which is attributed to the contingent goal.

Each demon finds one case supporting its respective hypothesis. The program compensates for the effect of bad input on the counter, NUMBER, which serves as evidence for the contingent goal hypothesis. However, the program does not compensate for the effect of bad input on the running total, TOTAL; this serves as evidence that the Input Validation goal was not implemented at all. This does not mean that the hypotheses are equally good, however. Hypothesis 1 can account for the running total being unguarded if we presume that the programmer left that case out by mistake. Hypothesis 2 cannot account for the NUMBER := NUMBER - 1 line at all; it would have to be dismissed as spurious code. PROUST avoids program interpretations which cannot account for portions of the code. Therefore PROUST can assume that the programmer has the contingent input validation goal in mind.

Knowledge about how a goal structure was derived by the student makes it possible for PROUST to help the student improve his programming style. This program can be fixed by adding a line TOTAL := TOTAL - RAINFALL next to the NUMBER := NUMBER - 1 line. This is not the right correction to suggest: it was a mistake for the programmer to validate the input in a contingent fashion in the first place. A single plan could have implemented the input validation directly, and the fact that the programmer overlooked one of the contingencies demonstrates that indirect goal implementations are harder to perform correctly. As we see in the output which PROUST generates for this bug, shown in Figure 7, PROUST suggests to the student that he re-implement the Input Validation goal. Thus causal reasoning about intentions not only makes it possible to find bugs in complex programs, it makes it possible to correct the reasoning that led to the occurrence of the bugs.

6. The sum is not shielded against invalid input. If the user types in a bad value it will be included in the sum. I noticed that you do test for bad input elsewhere.
Your program would really be simpler if it tested the input in one place, before it is used, so bugs like this would not crop up.

Figure 7: PROUST output for the program in Figure 6

5. Results

To test PROUST, we performed off-line analysis of the first syntactically correct versions of 206 different solutions to the Rainfall Problem. A team of graders debugged the same programs by hand, to determine the actual number of bugs present. The results are shown in Figure 8, labeled "Test 1". For each program PROUST generates one of three kinds of analyses.

- Complete analysis: the mapping between goal structure and code is complete enough that PROUST regards it to be fully reliable.
- Partial analysis: significant portions of the program were understood, but parts of the program could not be analyzed. PROUST deletes from its bug report any bugs whose analysis might be affected by the unanalyzable code.
- No analysis: PROUST's analysis of the program was very fragmentary, and unreliable, and was therefore not presented to the student.

In Test 1, 75% of the programs received complete analyses; PROUST found 95% of the bugs in these programs, including many that the graders missed. 20% of the programs were partially analyzed, and 5% got no analysis.

                                      Test 1       Test 2      Test 2 Repeated
    Total number of programs:        206          76          76
    Number analyzed completely:      155 (75%)    30 (39%)    53 (70%)
      Total number of bugs:          531          133         252
      Bugs recognized correctly:     502 (95%)    131 (98%)   247 (98%)
      Bugs not recognized:           29 (5%)      2 (2%)      5 (2%)
      False alarms:                  46           5           18
    Number analyzed partially:       40 (20%)     33 (43%)    19 (25%)
      Total number of bugs:          220          163         105
      Bugs recognized correctly:     79 (36%)     65 (40%)    42 (40%)
      Bugs deleted from bug report:  80 (36%)     58 (36%)    36 (34%)
      Bugs not recognized:           61 (28%)     40 (25%)    27 (26%)
      False alarms:                  36           20          17
    Number unanalyzed:               11 (5%)      13 (17%)    4 (5%)

Figure 8: Results of running PROUST

We have recently made on-line tests of PROUST in an introductory programming course. The column labeled "Test 2" summarizes PROUST's performance. Unfortunately the percentage of complete analyses went down. This turned out to be because of problems in transporting PROUST from the research environment to the classroom environment, and was not due to essential flaws in PROUST itself. We corrected these problems and re-ran PROUST on the same set of data; this time PROUST's performance was comparable to what it was in Test 1. We also ran PROUST on another programming problem; the results of that test have yet to be tabulated.

6. Concluding Remarks

We have argued that intention-based understanding is needed in order to diagnose errors effectively in novice programs. Knowledge of intentions makes it possible for PROUST to grapple with the high degree of variability in novice programs and novice programming errors, and achieve a high level of performance. Intention-based diagnosis is complex, but our results suggest that it is tractable for non-trivial programs. This gives us optimism that the remaining obstacles to achieving high performance over a wide range of student populations and programming problems can be overcome in due course.

References

1. Adam, A. and Laurent, J. "LAURA, A System to Debug Student Programs." Artificial Intelligence 15 (1980), 75-122.
2. Bonar, J. and Soloway, E. Uncovering Principles of Novice Programming. SIGPLAN-SIGACT Tenth Symposium on the Principles of Programming Languages, 1983.
3. Chandrasekaran, B. and Mittal, S. Deep Versus Compiled Knowledge Approaches to Diagnostic Problem-Solving. Proc. of the Nat. Conf. on Artificial Intelligence, AAAI, August, 1982, pp. 349-354.
4. Davis, R. Diagnosis via Causal Reasoning: Paths of Interaction and the Locality Principle. Proc. of the Nat. Conf. on Artificial Intelligence, AAAI, August, 1983, pp. 88-94.
5. Genesereth, M. Diagnosis Using Hierarchical Design Models. Proc. of the Nat. Conf. on Artificial Intelligence, 1982, pp. 278-283.
6. Harandi, M.T. Knowledge-Based Program Debugging: a Heuristic Model. Proceedings of the 1983 SOFTFAIR, SoftFair, 1983.
7. Johnson, W.L. Intention-Based Diagnosis of Programming Errors. Tech. Rept. forthcoming, Yale University Department of Computer Science, 1984.
8. Johnson, L., Draper, S., and Soloway, E. Classifying Bugs is a Tricky Business. Proc. NASA Workshop on Software Engineering, 1983.
9. Johnson, W.L., Soloway, E., Cutler, B., and Draper, S. Bug Collection: I. Tech. Rept. 296, Dept. of Computer Science, Yale University, October, 1983.
10. Lukey, F.J. "Understanding and Debugging Programs." Int. J. of Man-Machine Studies 12 (1980), 189-202.
11. Pople, H. E. Heuristic Methods for Imposing Structure on Ill-Structured Problems: The Structuring of Medical Diagnostics. In Szolovits, P., Ed., Artificial Intelligence in Medicine, West View Press, 1982.
12. Sedlmeyer, R. L. and Johnson, P. E. Diagnostic Reasoning in Software Fault Localization. Proceedings of the SIGSOFT Workshop on High-Level Debugging, SIGSOFT, Asilomar, Calif., 1983.
13. Shapiro, D. G. Sniffer: a System that Understands Bugs. Tech. Rept. AI Memo 638, MIT Artificial Intelligence Laboratory, June, 1981.
14. Shapiro, E. Algorithmic Program Debugging. MIT Press, Cambridge, Mass., 1982.
15. Shortliffe, E.H. Computer-Based Medical Consultations: MYCIN. American Elsevier Publishing Co., New York, 1978.
16. Soloway, E., Rubin, E., Woolf, B., Bonar, J., and Johnson, W. L. "MENO-II: An AI-Based Programming Tutor." Journal of Computer-Based Instruction 10, 1 (1983).
17. Soloway, E. and Ehrlich, K. "Empirical Investigations of Programming Knowledge." IEEE Transactions on Software Engineering SE-10, in press (1984).
CONTEXT-DEPENDENT TRANSITIONS IN TUTORING DISCOURSE

Beverly Woolf
David D. McDonald
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003

ABSTRACT

Successful machine tutoring, like other forms of human-machine discourse, requires sophisticated communication skills and a deep understanding of the student's knowledge. A system must have the ability to reason about a student's knowledge and to assess the effect of the discourse on him. In this paper we describe Meno-tutor, a LISP program that deliberately plans the rhetorical structure of its output and customizes its responses to the level of understanding of the individual student.

The Nature of Tutoring

The goal of this research has been to identify the kinds of machinery and knowledge that are necessary to carry on an acceptable tutoring discourse. We have studied human tutoring protocols and have identified some of the rules and structures that govern this kind of behavior [Woolf & McDonald, 1983; Woolf, 1984]. In this paper we describe how we used this information to build a preliminary version of a machine tutor.

Tutoring suffers from the same problems that afflict other forms of communication: the system cannot know with certainty whether a student understands the topics being discussed or what meanings can be attributed to his answers. Because students are not aware of what they do not know, tutors, even more than typical speakers, must take care to define for both participants the topics and portions of the student's knowledge that are missing or ambiguous. A machine tutor should have the ability to adapt its discourse to the context of student and discourse history; it should, for instance, engage the knowledgeable student in a way that is fundamentally different from the way it engages the confused one. We call this kind of system "context-dependent" and contrast it with what we call "retrieval-oriented" systems, such as the original WEST system [Brown & Burton, 1975] or the BIP project [Barr et al., 1976].

While we have placed our emphasis on choosing among alternative discourses that respond best to what the tutor knows about the student's knowledge and the discourse history, the retrieval-oriented system has been directed at retrieving the correct answer, which is stored in the expert knowledge base. In the latter system the input/output routines act as a front end to the knowledge retrieval system. In contrast, the context-dependent response considers the sensitive response more effective than the correct response. For instance, correcting a wrong answer may be appropriate at times.

Tutor: Do you know what the climate is like in Washington and Oregon?
Student: Is it cold?
Tutor: No, it is rather mild. Can you guess about the rainfall there?
Student: Normal, I guess.
Tutor: Well, the Japan current, which starts in the southeastern Pacific, goes along the coast of Japan and across the North Pacific, ending up off the coast of Washington and Oregon. How do you think that current affects the climate there?
Student: It's probably rainy.
Tutor: It is indeed very wet; there are rain forests in both Washington and Oregon. What does that tell you about the temperature of the Japan current?
Student: It's warm.

Figure 1: An Example Meno-tutor discourse.
However, it may not be appropriate if the student's wrong answer follows a series of wrong answers in which the student has shown a lack of knowledge of the domain and if related topics exist that might help focus the student's attention onto the correct answer. In such a case, the better approach might be to briefly acknowledge the wrong answer and move on to provide more supplemental data.

Examples from Meno-tutor

As an example of a discourse produced by Meno-tutor we present Figure 1. This discourse is modeled directly on a human tutoring dialogue recorded by Stevens et al. [1982, pg. 18].* That is, we analyzed the common transitions and speech patterns used in this and twelve other human dialogues and defined the structures and knowledge necessary for a machine tutor to behave the same way. In the example, the student's first two answers were wrong,** and the system marked the student as having limited knowledge about the topic. The question-answer approach was then changed because it was not working and the system chose, instead, to tell the student about the "Japan Current," which is a dominant influence behind the region's climate.

Moving beyond this "reverse engineering," we have begun to explore the "tutoring space" our apparatus defines by varying the domain and the particulars of the rules. The discourse in Figure 2, for example, is based on the same domain as the first, but is done in an alternative tutoring style, brought about by modifying the "meta-rules" that govern whether the tutor explores the student's frontier of knowledge (Figure 1) or probes the student's misconceptions about the current topic as soon as the first mistake is made (Figure 2).

Two meta-rules were modified to achieve this second discourse. The first makes the tutor change discourse tactic after a set of topics has been completely discussed and the tutor has some confidence in its assessment of what the student knows. In the first discourse, it was set at a more conservative value. In the second discourse, it caused the shift in strategy after a single wrong answer. The second modified meta-rule shifts the discourse to focus on the student's misconception. Typically, this rule is triggered only after all topics have been covered and a misconception has been observed. In the second discourse this rule was modified to eliminate the first precondition, resulting in the rule being triggered after a single

* The input text from the student is straight from Stevens et al. We feed the conceptual equivalent of the student's input to the system by hand (i.e., what would have been the output of a natural language comprehension system). The output is presently produced by standard incremental replacement techniques, but in a way that should permit us to easily move over to a more capable generator (we expect to use MUMBLE [McDonald, 1983]) at the point when we invest in a larger knowledge base and a richer representation.

Tutor: Do you think it rains much in Oregon?
Student: No.
Tutor: Why do you think it doesn't rain much in Oregon?
Student: Because the neighboring states are dry.
Tutor: The neighboring states are dry, but you can not generalize from that. In fact, it does rain a lot in Oregon. Can you guess what causes the rain there?
Student: Well, there are high mountains in the region and the states border the Pacific Ocean.
Tutor: Yes. The Pacific borders Oregon. How do you think it is involved in the heavy rainfall there?
Student: Ocean air rises and cools upon reaching the land.
Tutor: That is right.

Figure 2: A Second Meno-tutor Discourse.

wrong answer occurs which is linked to a common misconception.

Our second exploration of the tutoring space was to substitute a new knowledge base for the facts about rainfall, namely elementary PASCAL looping concepts. The focus of this PASCAL tutor is on the misconceptions behind a student's explicit programming errors. The model for the misconceptions drew on the results of extensive cognitive studies about how novices learn PASCAL constructs [Bonar, 1984; Soloway et al., 1981].

The Meno-tutor defines a general framework within which tutoring rules can be defined and tested. It is not an exhaustive tutor for any one subject but rather a vehicle for experimenting with tutoring in several domains. Though the number of discourses produced is still small (i.e., 5), the fact that our architecture has been adapted to two quite different domains and that we can produce varied but still quite reasonable discourses in short order by changing the particulars of the rules is evidence of its potential.

** It's not that those answers were simply "wrong," but that they reflect reasonable default assumptions about what happens in "northern states." An attempt to probe such assumptions is made in the next discourse, in Figure 2.

The Architecture of the Meno-tutor

Meno-tutor separates the planning and the generation of a tutorial discourse into two distinct components: the tutoring component and the surface language generator. The tutoring component makes decisions about what discourse transitions to make and what information to convey or query, and the surface language generator takes conceptual specifications from the tutoring component and produces the natural language output. These two components interface at the third level of the tutoring component as described below. The knowledge base for the tutor is a KL-ONE network annotated with pedagogical information about the relative importance of each topic in the domain.

The tutoring component is best described as a set of decision-units organized into three planning levels that successively refine the actions of the tutor (see Figure 3). We refer to the network that structures these decisions, defining the default and meta-level transitions between them, as a Discourse Management Network or DMN. The refinement at each level maintains the constraints dictated by the previous level and further elaborates the possibilities for the system's response. At the highest level, the discourse is constrained to a specific tutoring approach that determines, for instance, how often the system will interrupt the student or how often it will probe him about misconceptions. At this level a choice is made between approaches which would diagnose the student's knowledge (tutor) or introduce a new topic (introduce). At the second level, the pedagogy is refined into a strategy, specifying the approach to be used. The choice here might be between exploring the student's competence by questioning him, or describing the facts of the topic without any interaction. At the lowest level, a tactic is selected to implement the strategy. For instance, if the strategy involves questioning the student, the system can choose from half a dozen alternatives, e.g., it can question the student about a specific topic, the dependency between topics, or the role of a subtopic.
Again, after the student has given his answers, the system can choose from among eight ways to respond, e.g., it can correct the student, elaborate on his answer, or, alternatively, barely acknowledge his answer.

The tutoring component presently contains forty states, each organized as a LISP structure with slots for functions that are run when the state is evaluated. The slots define such things as the specifications of the text to be uttered, the next state to go to, or how to update the student and discourse models.

Figure 3: The Discourse Management Network (DMN).

The DMN is structured like an augmented transition network (ATN); it is traversed by an iterative routine that stays within a predetermined space of paths from node to node. Paths, however, are not statically defined; the default path can be preempted at any time by meta-rules that move Meno-tutor onto a new path, the action of the meta-rule corresponding functionally to the high-level transitions observed in human tutoring. These preemptions move the discourse to paths which ostensibly are more in keeping with student history or discourse history than the default path. The ubiquity of the meta-rules (the fact that virtually any transition between tutoring states, i.e., nodes, may potentially be preempted) represents an important deviation from the standard control mechanism of an ATN. Formally, the behavior of Meno-tutor could be represented within the definition of an ATN; however the need to include arcs for every meta-rule as part of the arc set of every state would miss the point of our design.

The system presently contains 20 meta-rules; most originate from more than one state and move the tutor to a single, new state. The preconditions of the meta-rules determine when it is time to move off the default path: they examine data structures such as the student model (e.g., Does the student know a given topic?), the discourse model (e.g., Have enough questions been asked on a given topic to assess whether the student knows it?), and the domain model (e.g., Do related topics exist?). Two meta-rules are described in an informal notation in Figure 4 and in more detail in the next section.

An Example of Discourse Planning

In this section, we show how the decision-units and meta-rules interact in the tutoring process. We describe the generation of a portion of the discourse in Figure 1. The example discourse begins after the student's second incorrect answer. Snapshots 1-6 show Meno-tutor's passage through a small portion of the Discourse Management Network (DMN) as it plans and generates the sample discourse.

As shown in Snapshot 1, the tutor begins in the state explicit-incorrect-acknowledgement, which is a tactical state the principal action of which is to say something, in this case "No." Having said this, the tutor still has "control" of the discourse and can continue to elaborate its response to the student's wrong answer. In the present design there is no default path out of the state at the tactical level.*** We decided, in designing these rules, that the best thing to do at this point is to move to a higher planning level and to consider reformulating either the strategy or the pedagogy of the utterance.

*** With a different set of rules, the tutor might, for example, continue speaking or it might reinforce the student's answer, perhaps by repeating it or elaborating part of it.
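The flavor of these decision-units and meta-rules can be conveyed with a small sketch. Meno-tutor itself is a LISP program; the Python rendering below, and all of its slot and register names, are illustrative assumptions rather than the system's actual data structures.

```python
# Sketch of DMN decision-units (states) and a meta-rule, in the style
# described above.  All names and thresholds are illustrative assumptions.

DMN_STATES = {
    "explicit-incorrect-acknowledgement": {
        "level": "tactic",
        "utter": lambda topic: "No.",
        "default-next": None,            # no default path: climb back up to the parent
        "parent": "teach-data",
    },
    "implicit-incorrect-acknowledgement": {
        "level": "tactic",
        "utter": lambda topic: "Well, ...",
        "default-next": None,
        "parent": "teach-data",
    },
}

META_RULES = [
    {   # loosely corresponds to T6-AIMPLICITLY in Figure 4
        "name": "T6-AIMPLICITLY",
        "from": ["explicit-incorrect-acknowledgement"],
        "to": "implicit-incorrect-acknowledgement",
        "precondition": lambda student, discourse:
            discourse["wrong-answers"] >= discourse["wrong-answer-threshold"]
            and student["seems-confused"],
    },
]

def next_state(current, student, discourse):
    """Meta-rules preempt the default transition whenever their preconditions hold."""
    for rule in META_RULES:
        if current in rule["from"] and rule["precondition"](student, discourse):
            return rule["to"]
    state = DMN_STATES[current]
    return state["default-next"] or state["parent"]
```

The important property illustrated here is that the meta-rule check is made on every transition, so any default arc of the network can be preempted by a rule whose preconditions over the student and discourse models are satisfied.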
Activation: The present topic is complete and the tutor has little confidence in its assessment of the student’s knowledge. Bebavlor: Generates an expository shift from detailed examination of a single topic to a shallow examination of a variety of topics on the threshold of the student’s knowledge. T6-AJMPLKITLY - a Tactical Meta-rule From: explicit-incorrect-acknowledgement To: implicit-incorrect-acknowledgement Description: Moves the tutor to utter a brief acknowledgement of an incorrect answer. Activation: The wrong answer threshold has been reached and the student seems confused. Behavior: Shifts the discourse from a explicit correction of the student’s answer to a response that recognizes, but does not dwell on, the incorrect answer. FIgme 1: luformd Notation of the Mebrdes. do at this point is to move to a higher planning level and to consider reformulating either the strategy or the pedagogy of the utterance. Therefore, the tutor returns to the strategic level and to the parent state, teachdata, as indicated by the up arrow in Snapshot 1. Once in teach-data, we move along the default path down to the tactical level to teach-spetificdato. In general, at this point, a meta-rule might have applied to take the tutor to a more particular tactical state. At teach-specific-data an utterance is constructed from the specification spezlfic-value (current-topic), where current-topic has been carried forward from the previous ply of the discourse and is “the climate in Washington and Oregon.” The attribute value of this topic is “rather mild” (a canned phrase), and the surface language generator renders it in this discourse context as “It’s rather mild.” From teach-specific-km&edge there is again no default path and the tutor moves up again to teuchdata (Snapshot 2). This time, however, the context has changed and before teuchduta can move along the 358 ITUTORI UTTER RLM-75 ** SPCClrlC TCAC+d .% CULZR=NI ROLE i ImID aoLC suapshot 1 default path as before, a meta-rule takes the tutor to a different decision-unit. The context has changed because the topics brought up until this point in the discourse have been answered or resolved. In detail, what happened was that, when the tutor supplied the correct answer to its own question (i.e., “It’s rather mild”), the DMN register *question-complete* was set, satisfying one of the preconditions of the meta-rule, Sl-EXPLORE (shown in Figure 4). The other precondition for this meta-rule was already satisfied, namely that some topics related to the current topic remain to be discussed (as indicated by another register). When Sl-EXPLORE is triggered it moves the [TUTOR] tutor to explore-competency, in effect establishing that previous topics are complete and that a new topic can be explored. The next most salient topic in the knowledge base is “rainfall in Washington and Oregon” and it becomes the current topic. Once in explore-competency, the tutor takes a default path to the tactical level and to exploratory- question (Snapshot 3), where it asks another question on a topic at the threshold of the student’s knowledge. The utterance this time is constructed from the specification question-model (current-topic) - “Can you quess about the rainfall there?” At this point Meno-tutor continues along a default path and enters the tactical state evuluufe-input (not shown) which receives and evaluates the student’s answer. 
This answer is again wrong and the default path moves the tutor, once again to explicit-incorrect-acknowledgement, where it would normally correct the student, as before. However, this state is not evaluated because the context is different and a new meta-rule, T6-AIMPLICITLY (Figure 4) fires first, moving the tutor to another decision-unit (Snapshot 4). The difference in context is two-fold: 1) the student seems confused and 2) the test for wrong answers threshold is met. Recognizing a confused student is admittedly a subjective and imprecise inference for a machine tutor. In this implementation, we have chosen to measure it as a function of the number of questions asked, the number of incorrect responses given, and the extent to which the student’s frontier of knowledge has been explored. In the example discourse, two questions have been asked, two answers have been incorrect, and the student’s frontier of knowledge is barely explored. Therefore, the student is judged to be confused and the meta-rule T6-AIMPLICITLY is triggered, forcing the system to move to the tactical state implicit-incorrect- 359 acknowledgement. Instead of correcting the student, this state causes a response which implicitly recognizes, but does not dwell on, the incorrect answer. The tutor responds with “Well, . . .” There is no default path from implicit-incorrect- acknowledgement and the tutor moves up to teuch-data (Snapshot 5). Once here, a meta-rule, S3-DESCRIBE, moves the tutor to describe-domain, terminating the question-answer approach and beginning a lengthy descriptive passage about a single topic. This happened because the threshold of wrong answers has been met (as recorded by a register) and there is a link from the major topic, “climate in Washington and Oregon,” to an undiscussed geographical factor on which it is dependent, namely the “Japan Current.” I TUTOR I ITUTOR) I IMPUCIT r4CceKC’ AcY*(Owl.LoeeMch Snapsbot 6 From describedomain, the tutor takes the default path to describe-specific-knwfedge at the tactical level (Snapshot 6) and constructs an utterance from the specification specificdescribe (current-topic). specific- describe enunciates each attribute of its argument and Meno-tutor says “the Japan Current, which starts in the Southeast Pacific, goes along the coast of Japan and across the North Pacific, ending up off the coast of Washington and Oregon.” snapshot 5 360 Current status At this point in our research, the tutor’s knowledge of two domains is shallow and, as mentioned above, we have not yet interfaced the tutoring component with the surface language generator. Our intent is to develop a complex knowledge base, in either the domain of rainfall or PASCAL, to extend the surface language generator to deal with the domain, and to build a simple natural language parser to understand the student’s input. REFERENCES Barr, A., Beard, M., & Atkinson, R. C., ‘The Computer as a Tutorial Laboratory: The Stanford BIP Project,” in the Intemational Journal of Man-Machine Studies, 8, 1976. Bow, J., Understanding the Bugs of Novice Programmers, Ph.D. Dissertation, Department of Computer and Information Science, University of Massachusetts, Amherst, Mass., 1984. Brown, J. S. & Burton, R. R., “Multiple Representations of Knowledge for Tutorial Reasoning,” in D. Bobrow h A. Collins, (Eds), Representation and Understanding: Studies in Cognitive SCikTlCC, Academic Press, New York., 1975. McDonald, D., “Natural Language Generation as a Computational Problem: an Introduction,” in M. Brady & R. 
Soloway, E., Woolf, B., Rubin, E., Barth, P., "Meno-II: An Intelligent Tutoring System for Novice Programmers", Proceedings of the International Joint Conference on Artificial Intelligence, Vancouver, British Columbia, 1981.

Stevens, A., Collins, A., & Goldin, S., "Diagnosing Student's Misconceptions in Causal Models," in International Journal of Man-Machine Studies, 11, 1978; also in Sleeman & Brown (eds.), Intelligent Tutoring Systems, Academic Press, Cambridge, MA, 1982.

Woolf, B., Context-Dependent Planning in a Machine Tutor, Ph.D. Dissertation, Computer and Information Sciences, University of Massachusetts, Amherst, MA, 1984.

Woolf, B., & McDonald, D., "Human-Computer Discourse in the Design of a Pascal Tutor," CHI 83: Human Factors in Computer Systems, ACM, 1983.
HOW TO COPE WITH ANOMALIES IN PARALLEL APPROXIMATE BRANCH-AND-BOUND ALGORITHMS

Guo-jie Li and Benjamin W. Wah
School of Electrical Engineering
Purdue University
West Lafayette, Indiana 47907

Research was supported by National Science Foundation Grant ECS81-05968.

Abstract: A general technique for solving a wide variety of search problems is the branch-and-bound (B&B) algorithm. We have adapted and extended B&B algorithms for parallel processing. Anomalies owing to parallelism may occur. In this paper sufficient conditions to guarantee that parallelism will not degrade the performance are presented. Necessary conditions for allowing parallelism to have a speedup greater than the number of processors are also shown. Anomalies are found to occur infrequently when optimal solutions are sought; however, they are frequent in approximate B&B algorithms. Theoretical analysis and simulations show that a best-first search is robust for parallel processing.

1. INTRODUCTION

The search for solutions in a combinatorially large problem space is very important in artificial intelligence (AI) [8]. Combinatorial-search problems can be classified into two types. The first type is decision problems that decide whether at least one solution exists and satisfies a given set of constraints. Theorem-proving, expert systems and some permutation problems belong to this class. The second type is optimization problems that are characterized by an objective function to be minimized or maximized and a set of constraints to be satisfied. Practical problems, such as traveling salesman, job-shop scheduling, knapsack, vertex cover, and game-tree search, belong to this class.

A general technique for solving combinatorial searches is the B&B algorithm [6]. This is a partitioning algorithm that decomposes a problem into smaller subproblems and repeatedly decomposes until infeasibility is proved or a solution is found [6]. It can be characterized by four constituents: a branching rule, a selection rule, an elimination rule and a termination condition. The first two rules are used to decompose the problem into simpler subproblems and appropriately order the search. The last two rules are used to eliminate generated subproblems that are infeasible or that cannot lead to a better solution than an already-known feasible solution. Kumar et al. have shown that the B&B approach provides a unified way of formulating and analyzing AND/OR tree searches such as SSS* and Alpha-Beta search [4]. The technique of branching and pruning in B&B algorithms to discover the optimal element of a set is the essence of many heuristic procedures in AI.

To enhance the efficiency of implementing B&B algorithms, approximations and parallel processing are two major approaches. It is impractical to use parallel processing to solve intractable problems with exponential complexity because an exponential number of processors must be used to solve the problems in polynomial time in the worst case. For these problems, approximate solutions are acceptable alternatives. Experimental results on vertex-cover, 0-1 knapsack and some integer-programming problems reveal that a linear reduction in accuracy may result in an exponential reduction in the average computational time [10]. On the other hand, parallel processing is applicable when the problem is solvable in polynomial time (such as finding the shortest path in a graph), or when
the problem is NP-hard but is solvable in polynomial time on the average [9], or when the problem is approximately solvable in polynomial time (such as game-tree search).

Analytical properties of parallel approximate B&B (PABB) algorithms have been rarely studied. In general, a k-fold speedup (ratio of the number of iterations in the serial case to that of the parallel case) is sought when k processors are used. However, simulations have shown that the speedup for PABB algorithms using k processors can be (a) less than one ("detrimental anomaly") [3,5]; (b) greater than k ("acceleration anomaly") [3,5]; or (c) between one and k ("deceleration anomaly") [3,5,10]. Similar anomalous behaviors have been reported by others. For instance, the achievable speedup for AND/OR-tree searches is limited by a constant (5 to 6) independent of the number of processors used (parallel-aspiration search), or sqrt(k) with k processors (tree-splitting algorithm) [1]. So far, all known results of parallel tree searches showed that a near-linear speedup holds only for a small number of processors. It is desirable to discover conditions that preserve the acceleration anomalies, eliminate the detrimental anomalies and minimize the deceleration anomalies. The objectives of this paper are to provide conditions for achieving the maximum speedup and to find the appropriate parallel search strategy under which a near-linear speedup will hold for a considerable number of processors.

2. PARALLEL APPROXIMATE BRANCH-AND-BOUND ALGORITHMS

Many theoretical properties of serial B&B algorithms have been developed [2], and a brief discussion is given here. In this paper minimization problems are considered. Let P_i be a subproblem, i.e., a node in the state-space tree, and f(P_i) be the value of the best solution obtained by evaluating all the subproblems decomposable from P_i. A lower bound, g(P_i), is calculated for P_i when it is created. If a subproblem is a feasible solution with the best objective-function value so far, the solution value becomes the incumbent z. The incumbent represents the best solution obtained so far in the process. During the computation, P_i is terminated if:

    g(P_i) >= z.                                            (1)

The approximate B&B algorithm is identical to the optimal algorithm except that the lower-bound test is modified to:

    g(P_i) >= z/(1 + e),     e >= 0, z > 0,                 (2)

where e is an allowance parameter. The final incumbent value z_F obtained by the modified lower-bound test deviates from the optimal solution value, z_0, by:

    (z_F - z_0)/z_0 <= e.                                   (3)

Let L denote the lower-bound cutoff test; that is, P_j L P_i means that P_j is a feasible solution and f(P_j)/(1 + e) <= g(P_i), e >= 0. For example, in Figure 1, the feasible solution with value 91 cuts off the node with lower bound 85, since 91/1.1 < 85; the feasible solution with value 100, however, does not, because 100/1.1 > 85.

Ibaraki mapped breadth-first, depth-first and best-first searches into a general form called heuristic searches [2].

Figure 1. Example of a detrimental anomaly under a parallel depth-first search (e = 0.1).
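A minimal sketch of the serial approximate B&B loop with the allowance-based lower-bound test of Eq. 2 is given below, for a generic minimization problem. The branch(), bound(), is_feasible() and value() callables, and the depth-first selection order, are assumptions made only to keep the sketch self-contained.

```python
# Sketch of a serial approximate branch-and-bound search using the
# lower-bound test of Eq. 2.  The problem-specific callables are assumptions.

import math

def approximate_bnb(root, branch, bound, is_feasible, value, eps=0.1):
    """Returns an incumbent z_F satisfying (z_F - z_0)/z_0 <= eps (Eq. 3)."""
    incumbent = math.inf            # the incumbent z
    active = [root]                 # the active list U
    while active:
        node = active.pop()         # depth-first selection, for simplicity
        if bound(node) >= incumbent / (1.0 + eps):   # Eq. 2: terminate this node
            continue
        if is_feasible(node):
            incumbent = min(incumbent, value(node))
            continue
        active.extend(branch(node))
    return incumbent
```

With eps = 0 the test reduces to Eq. 1 and the exact optimum is returned; larger eps prunes more aggressively at the cost of the bounded error of Eq. 3.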
Lastly, in a depth-first search, subproblems with the maximum level numbers are expanded first. The negation of the level number can be taken as the heuristic function. - Branch-and-bound algorithms have inherent parallelism: (a) Parallel selection of .wbprobfema: A set of subproblems less than or equal to in size to the number of processors have to be selected for decomposition in each iteration. A selection function returns k subproblems with the minimum heuristic values from U, where k is number of processors and U is the active list of subproblems. The selection problem is especially critical under a best-first search because a set of subproblems with the minimum lower bounds must be selected. (b) Parallel 6 ranching: The subproblems assigned to the processors can be decomposed in parallel. In order for the pro- cessors to be we!! utilized, the number of active subproblems should be greater than or equal to k. (c) Parallel termination test: Multiple infeasible nodes can be eliminated in each iteration. Further, multiple feasible solutions may be generated, and the incumbent may have to be updated in parAle!. (d) Parallel elimination teat: If the incumbent is accessible to all‘the processors, the lower-bound test (Es’s 1 or 2) can be carried out in parallel. The above sources of parallelism has been studied in MANIP, a multiprocessor implementing PABB algorithms with a best-first search and lower-bound tests (lo]. 3. ANOMALIES ON PARALLELISM In this section anomalies are studied under lower-bound elimination and termination rules. The results on anomalies with dominance tests are shown elsewhere [7]. For simplicity, only the search for a single optima! solution is considered here. A synchronous model of PABB algorithm is used. The incumbent is stored in a global register that can be updated concurrently. Active subproblems can be stored in a central- ized list or multiple lists. The distinction lies in the memory configuration. When a!! the processors are connected to a cen- tralized memory, the subproblem list is global to the proces- sors. When each processor has a private memory, only the local subproblem list can be accessed. The sequence of operations performed in an iteration are selection, branching, feasibility and elimination tests, and inserting new! generated subproblems into the list(s). Let TC(k,c) and Td k,c) denote Q the number of iterations required for expanding a B&B tree using centralized and k subproblem lists respectively, where k is the number of processors used, and c is the allow ante parameter. An example of a detrimental anomaly is illustrated in Figure 1. In a serial depth-first search, subtree T, is te?- minated owing to the lower-bound test of P, : f(P;)/(l +E) Q$P,) where c ~0.1. In a parallel depth-f&t search with two processors, a feasible solution, P,. that ter- minates P1 and Py is found in the second iteration. As men- tioned before, P, is not eliminated by P,. Consequently, sub- tree T, has to be expanded that will eventually terminate sub- tree TJ. If the size of T, is much larger than the size of T,, the time it takes to expand T, using two processors will be longer than the time it takes to expand T, using one proces- sor. Note that the above anomaly does not happen in a best- first search because subtree T, is not expanded in both the serial and the parallel cases. An example of an acceleration anomaly is shown in Fig- ure 2. 
When a single processor with a depth-first search is used, subtree T will be expanded, since f(P_1)/(1 + e) > g(P_2), where e = 0.1. When two processors are used, P_2, and hence T, will be terminated by a lower-bound test with P_3: f(P_3)/(1 + e) <= g(P_2). If T is very large, an acceleration anomaly will occur.

4. GENERALIZED HEURISTIC SEARCHES

Recall that the selection function uses the heuristic values to define the order of node expansions. In this section we show that detrimental anomalies are caused by ambiguity in the selection rule. A generalized heuristic search is proposed to eliminate detrimental anomalies in a single subproblem list.

Consider the serial depth-first search. The subproblems are maintained in a last-in-first-out list, and the subproblem with the maximum level number is expanded first. When multiple subproblems have identical level numbers (heuristic values), the subproblem chosen by the selection function depends on the order of insertion into the stack. Suppose the rightmost son is always generated and inserted first. Then the leftmost son will be the subproblem inserted last and expanded first in the next iteration.

In a parallel depth-first search with a single subproblem list, the mere extension of the serial algorithm may cause anomalous behavior. For example, the order of expansion in a serial depth-first search for the tree in Figure 3 is A, B, D, I, J, E, etc. When two processors are used, nodes B and C are expanded in the second iteration, which results in nodes D, E, F, G and H.

Figure 2. Example of an acceleration anomaly under a parallel depth-first search (e = 0.1).

Figure 3. The path numbers of a tree (levels 0 through 4).
The path number is now included in the heuristic func- tion. The primary key is still the lower-bound value or the level number. The secondary or ternary key is the path number and is used to break ties in the primary key. I (level number, path number) breadth-first search (path number) h(Pi)=’ (1 depth-first search ower bound, level number, path number) (4 or (lower bound, path number) best-first search For a best-first search, two alternatives are defined that search in a breadth-first or depth-first fashion for nodes with identical lower bounds. The heuristic functions defined above belong to a general class of heuristic functions that satisfy the following properties: (a) NP,)#h(P,) if P,#Pj, P,, P, E U (all heuristic values in the active list are distinct) (b) h(P,)< h(P,) if Pd is a descendant of Pi (5) (heuristic values o not decrease) (6) In general, any heuristic function with a tie-breaking rule that satisfy Eq’s 5 and 6 will not lead to detrimental anomalies. Due to space limitation, the results are stated without proof in the following theorems. The proofs can be found in [7]. Theorem 1: Let c=O, i.e., an exact optimal solution is sought. TC(k,O) LT’( 1,0) holds for parallel heuristic searches of a single optimal solution in a centralized list using any heuristic func- tion that satisfies Eq’s 5 and 0. When approximations are allowed, detrimental anomalies cannot always be avoided for depth-first searches even though path numbers or other tie-breaking rules are used (see Figure 1). The reason for the anomaly is that lower-bound tests under approximation, L, are not transitive. That is, PiL P and P. L Pk do not imply Pi L Pk, since fhPi)/( 1 +c) 5 g(Pj) and f[Pj)/(iI +c{ < g(Pk) implies f(Pi)/( 1 tc) 5 g(Pk) rather than f Pi)/(l + c 5 g(Pk). In this case detrimental anomalies can be avoided for best-first or breadth-first searches only. Theorem 2: TC(k,c) 5 TC(l,c), c>O, holds for parallel best- first or breadth-first searches for a single optimal solution when a heuristic function satisfying Eq’s 5 and 6 is used. Since the lower-bound function is used as the heuristic function in best-first searches, Eq’s 5 and 6 are automatically satisfied if all the lower-bound values are distinct. Otherwise, path numbers must be used to break ties in the lower bounds. In Section 7 a more general condition will be given for best- first searches. For depth-first searches, the conditions of Theorem 2 are not sufficient, and the following condition is needed. For any feasible solution Pi, all nodes whose heuristic values are less than h(P,) cannot be eliminated by the lower- bound test due to Pi, that is, f(P,)/( 1 +c) 5 g(Pj) implies that h(P,) < h(Pj) for any l’j. Generally, this condition is too strong and cannot be satisfied in practice. CONDITIONS TO ENSURE kCC%ii%t?i ANOMALIES IN A SINGLE SUB- PROBLEM LIST When an exact optimal solution is sought, acceleration anomalies may occur if a depth-first search is used or some nodes have identical heuristic values. This is characterized by the incomplete consistency between the heuristic and the lower-bound functions. A heuristic function, h, is said to be not completely consistent with g if there exist two nodes Pi and Pj such that h(P,) > h(Pj) and g(P,) 5 g(Pj). Theorem 3: Let c = 0. Assume that a single optimal solution is sought. The necessary condition for TC(k,O) < TC( 1,0)/k is that the heuristic function is not completely consistent with g. 
For a breadth-first search, no acceleration anomaly will occur if the heuristic function defined in Eq. 4 is used. For a best-first search, acceleration anomalies may exist if the level number is not used in the heuristic function. It is important to note that the condition in Theorem 3 is not necessary when approximate solutions are sought. An example showing the existence of an acceleration anomaly when h is completely consistent with g is shown in Figure 2. A looser necessary condition is that h is not completely consistent with the lower-bound test with approximation; that is, there exist Pi and Pj such that h(Pi) > h(Pj) and Pi L Pj.

6. MULTIPLE SUBPROBLEM LISTS

When there are multiple subproblem lists, one for each processor, a node with the minimum heuristic value is selected for decomposition from each local list. This node may not belong to the global set of active nodes with the minimum heuristic values; however, the node with the minimum heuristic value will always be expanded by a processor as long as the nodes are selected in a consistent order when there are ties. Since it is easy to maintain the incumbent in a global data register, the behavior of multiple lists is analogous to that of a centralized list. However, the performance of using multiple lists is usually worse than that of a single subproblem list [10].

So far, we have shown conditions to avoid detrimental anomalies and to preserve acceleration anomalies under lower-bound tests only. The results are summarized in Table 1. The corresponding results when dominance tests are used will not be shown here due to space limitation [7].

Table 1. Summary of results for the elimination of detrimental anomalies and the preservation of acceleration anomalies in parallel B&B algorithms with lower-bound tests. Conditions: I: the heuristic function satisfies Eq's 5 and 6; II: h is not completely consistent with g. Entries: "anomaly" -- the sufficient conditions are impractical; "exists" -- the necessary conditions are too loose.

7. ROBUSTNESS OF PARALLEL BEST-FIRST SEARCHES

The preceding sections have shown that best-first searches are more robust for parallel processing in the sense of avoiding detrimental anomalies and preserving acceleration anomalies. In this section we show that best-first searches are also more robust as far as deceleration anomalies are concerned. Figure 4 shows the computational efficiency of a parallel optimal B&B algorithm using a best-first or a depth-first search for solving knapsack problems in which the weights, w(i), are chosen randomly between 0 and 100 and the profits are set to p(i) = w(i) + 10. This assignment is intended to increase the complexity of the problem. In the simulations each processor has a local memory. Load balancing is incorporated so that an idle processor with an empty subproblem list can get a subproblem from its neighbor. It is observed that the speedup is sensitive to T(1,0), and the speedup is better for best-first searches. For instance, when 64 processors are used, the average speedup is 48.8 for best-first searches and 27.9 for depth-first searches. Moreover, it should be noted that the generalized heuristic search presented in Section 4 cannot guarantee T(k2,0) <= T(k1,0), k2 > k1 > 1, for depth-first and breadth-first searches. Similar results were observed for vertex-cover problems.

The following theorem gives the performance bound of parallel best-first searches. The maximum number of processors within which a near-linear speedup is guaranteed can be predicted.
a-‘6 7i A 114 t - 2 f oY. , * . . . . . lo 1 2 3 4 5 6 7 a 9 log2(number of processors) Figure 4. Average speedups and space requirements of paral- lel optimal B&B algorithms for 10 knapsack prob- lems with 35 objects (average T(1,0)=15180 for best-first searches; average T( 1,0)=15197 for depth-first searches). Theorem 4: For a parallel best-first search with k processors, c=O, and g(Pi)#f’ if Pi is not an optimal-solution node (f’ is the optimal-solution value), where P is the maximum number of levels of the B&B tree to be searched. Since the performance is not affected by using single or multiple subproblem lists, the superscript in T is dropped. Since 9 is a polynomial function of (usually equal to the problem size while T(l,O) is an exponential function o I the problem size for NP-hard problems, the first term on the R.H.S. of Eq. 7 is much greater than the second term as long as the problem size is large enough. Eq. 7 implies that the near-linear speedup can be maintained within a considerable range of the number of processors for best-first searches. As an example, if P ~50, T(l,O)=lO” (for a typical traveling- salesman problem), and k=lOOO, then T(lOOO,O)< 1049. This means that almost linear speedup can be attained with 1000 processors. Furthermore, it can be shown that there is always monotonic increase in performance for all r < k < k < dm. For this example, there will not be any ’ detrimental anomaly for any combinations of 15 k, < k& 141 if the assumptions of Theorem 4 are satisfied. Before ending this paper, it is worth saying a few words about the space required by parallel B&B algorithms. In the serial case, the space required by a best-first search is usually more than that required by a depth-first search. Somewhat surprisingly, the simulation results on O-l knapsack problems show that the space required by parallel best-first searches is not increased significantly (may also be decreased) until the number of processors is so large that a near-linear speedup is not possible. In contrast, the space required by parallel depth-first searches is almost proportional to the number of processors (Figure 4). Note that the space e5ciency is problem-dependent. For vertex-cover problems, the space required by parallel best-first searches is not increased significantly regardless of the number of processors used. REFERENCES PI PI Finkel, R., “Parallelism in Alpha-Beta Search,” Artificial Intelligence, (1982) 84106. Ibaraki, T., “Theoretical Comparisons of Search Strategies in Branch-and-Bound Algorithms,” Int? Jr. of Comp. and Info. SC;., 5:4 (1976) 315-344. Imai, M., T. Fukumura and Y. Yoshida, “A Parallelizz~ Branch-and-Bound Algorithm Implementation Efficiency,” Systema, Computers, Controls, 10:3 (1979) 62- 70. Kumar, V., and L. Kanal, “A General Branch-and-Bound Formulation for understanding and synthesizing AND/OR Tree-search Procedures,” Artificial Intelligence, 14 (1983) 179-197. Lai, T.H. and S. Sahni, “Anomalies in Parallel Branch- and-Bound Algorithms,” in Proc. 1983 Int’l Con/. on . Parallel Proceasing, Bellaire, Michigan, Aug. 1983, pp. 183-190. Lawler, E. L., and D. W. Wood, ‘)) Branch-and-Bound Methods: A Survey,” Operations Research, 14 (1966) G99- 719. Li, G.-J., and B. W. Wah, “Computational E5ciency of Parallel Approximate Branch-and-Bound Algorithms,” Tech. Report TR-84-6, School of Electrical Engineering, Purdue University, West Lafayette, Indiana, March 1984; a shorter version appears in Proc. 1984 Int’l Conf. on Parallel Processing, Bellaire, Michigan, Aug. 1984. 
[8] Pearl, J., Heuristics, Addison-Wesley, 1984.
[9] Smith, D. R., "Random Trees and the Analysis of Branch-and-Bound Procedures," Journal of the ACM, 31:1 (1984) 163-188.
[10] Wah, B. W., and Y. W. Ma, "MANIP - A Multicomputer Architecture for Solving Combinatorial Extremum-Search Problems," IEEE Trans. on Computers, C-33:5 (1984) 377-390.
A General Bottom-up Procedure for Searching And/Or Graphs Vipin Kumar Department of Computer Sciences University of Texas at Austin Austin, TX 78712 ABSTRACT This paper summarizes work on a general bottom- up procedure for searching AND/OR graphs which includes a number of procedures for searching AND/OR graphs, state-space graphs, and dynamic programming procedures as its special cases. The paper concludes with comments on the significance of this work in the context of the author’s unified approach to search pro- cedures. 1. INTRODUCTION AND/OR graphs are extensively used in such wide domains as pattern recognition, theorem proving, deci- sion making, game playing, problem solving, and plan- ning. A number of problems in these domains can be formulated as: “Given an AND/OR graph with certain cost functions associated with the arcs, find a least-cost solution tree of the AND/OR graph”. This paper presents a general heuristic bottom-up procedure for finding a least-cost solution tree of an AND/OR graph when the cost functions associated with the arcs are monotone. Since monotone cost functions are very gen- eral, the procedure is applicable to a very large number of problems. The procedure develops solutions for the subproblems of an AND/OR graph (in an order deter- mined by the heuristic information) until an optimal solution tree of the AND/OR graph is found. This framework is different from the heuristic top-down search of AND/OR graphs (e.g., AO’ [20]) and game trees (e.g., alpha-beta [ZO], B’ [l], SSS* [22]), in which an optimal solution tree of the AND/OR graph is found by selectively developing various possible solutions [9], [ll]. In principle, a least-cost solution tree of an AND/OR graph may be found by performing search in either top- down or bottom-up fashion. But depending upon the specific problem being solved, one technique may be superior to the other. Many breadth-first, depth-first and heuristic strategies for conducting top-down search are already well known (e.g., AO*, alpha-beta, SSS’, B’). But all the known bot,tom-up search procedures to date (with the exception of algorithms in [8] and [15], which use a limited amount of problem-specific information to constrain search) are essentially breadth-first. The pro- cedure presented in this paper provides a mechanism for using problem-specific heuristic information in the bottom-up search of AND/OR graphs. The actual amount of benefit gained is dependent upon the kind of heuristic information available and the problem domain itself. In Section 2 we briefly review AND/OR graphs, define a cost function on the solution trees of an AND/OR graphs, and discuss the relationship between the problem of finding a least-cost solution tree of an AND/OR graph and the problems solved by dynamic programming. In Section 3 we present a general bottom-up procedure which includes a number of impor- tant procedures for searching AND/OR graphs and state-space graphs as well as dynamic programming pro- cedures as its special cases. Section 4 comments upon the significance of this work in the context of author’s previous work on a unified approach to search pro- cedures. 2. AND/OR Graphs Following the terminology in [20], [16], we define AND/OR graphs as hypergraphs. Each node of an AND/OR graph represents a problem, and a special node root(G) called root of G represents the original problem to be solved. Transformation of a problem into a set of subproblems is depicted by a hyperarc directed from a parent node to a set of successor nodes. 
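Purely as an illustration of this representation (the names and data layout below are assumptions, not the paper's), an AND/OR graph can be stored as a map from each nonterminal node to its outgoing hyperarcs, each hyperarc listing the tuple of successor subproblems it points to.

```python
# Sketch: an AND/OR graph as a hypergraph (illustrative representation only).
# Each nonterminal maps to a list of hyperarcs (k-connectors); each k-connector
# is the tuple of subproblems that together solve the parent problem.

from typing import Dict, List, Tuple

ANDORGraph = Dict[str, List[Tuple[str, ...]]]   # node -> list of connectors

graph: ANDORGraph = {
    "root": [("n1", "n2"),     # 2-connector: solve root by solving n1 AND n2
             ("n3",)],         # 1-connector: or by solving n3 alone
    "n1":   [("a", "b")],
    "n2":   [("b",)],
    "n3":   [("a",)],
}
terminals = {"a", "b"}          # primitive problems (no outgoing connectors)

def is_terminal(node: str) -> bool:
    return node in terminals

def successors(node: str):
    """All hyperarcs (k-connectors) directed out of `node`."""
    return graph.get(node, [])

# e.g. enumerate the connectors of the root problem
for connector in successors("root"):
    print("root ->", connector)
```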
These hyperarcs are also called connectors. A hyperarc p: n -+ nl,...,nk is a k-connector which shows that the problem n can be solved by solving the subproblems nl,...,nk. A Node hav- ing successors is called nonterminal. In general, a non- terminal node can have more than one hyperarcs directed from it. Nodes with no successors are called terminal, and each terminal node represents a primitive problem. An AND/OR graph G is acyclic if no node of G is a successor of itself. An AND/OR graph G is called an AND/OR tree if G is acyclic and every node except root(G) has exactly one parent. Given an AND/OR graph representation of a prob- lem, we can identify its different solutions, each one represented by a “solution tree”. A solution tree T of an A-ND/OR graph G is an AND/OR tree with the follow- ing properties: (i) root(G) = root(T). From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. (ii) if a nonterminal node n of the AND/OR. graph G is in T, then exactly one hyperarc p: n --) nl,...,nk is directed from it in T, where p is one of the hyperarcs directed from n in G. A solution tree T of G represents a plausible “prob- lem reduction scheme” for solving the problem modeled by the root node of G. The subgraph G’n of G rooted at a node n is in fact a problem reduction formulation of the problem represented by n, and a solution tree of G’n represents a solution to that problem. By a solution tree rooted at n we mean a solution tree of G’,. Often, a cost function f is defined on the solution trees of G, and a least-cost solution tree of G is desired.*** There are various ways in which a cost func- tion can be defined; the one defined below is applicable in a large number of problem doma.ins. For a terminal node n of G, let c(n) denote the cost of n, i.e., the cost of solving the problem represented by n. Let a k-ary cost function t,(.,...,.) be associated with each k-connector p: n-+nl,...,nk; tp(rl,...,rk) denotes the cost of solving n if n is solved by solving nl,...,nk and if the costs of solving the nodes nl ,..., nk are rl ,..., rk. For a node n of a solution tree T, we define CT(n) to be the cost of n if the problem modeled by n is solved by the problem reduction scheme prescribed by T. It fol- lows that 2.la if n is a terminal node, then CT(n) = c(n); 2.lb if n is a nonterminal node and if p:n -+ nl,...,nk is the hyperarc originating from n in T, then c&) = t,(cT(n~),...,cT(nk)). If T is a solution tree rooted at some node n, then we define f(T) = c*(n). Thus the cost of a solution tree is defined recursively as a composition of the cost of its subtrees. Fig. 1 shows an AND/OR graph, associated cost functions, and the computation of the cost of one of its solution trees. We define c*(n) for nodes n of an AND/OR graph G to be the minimum of the costs of the solution trees rooted at n. Note that if n is nonterminal, then c*(n) may be undefined, as there may be an infinite number of solution trees of decreasing costs rooted at n. The following theorem provides a way of computing c*(n) for the nodes n of an AND/OR graph. Theorem 2.1: If the cost functions t,(.,...,.) are mono- tonically nondecreasing in each variable, and if c*(n) is defined for all nodes n of G, then for the nodes n of an AND/OR graph the following recursive equations hold. (i) If n is a terminal node, then c’(n) = c(n). ***In many problem domains, f(T) denotes the mer- it of the solution tree T, and a largest-merit solution tree of G is desired. 
The discussion in this paper is applica- ble to such cases with obvious modifications. (ii) If n is a nonterminal node, then c*(n) = min{tP(c*(n,) ,..., c’(Q)1 p:n --) nl ,..., nk is a hyperarc directed from n}. Proof: See [Q). Thus if the cost functions tp are monotone, then c’(root(G)), th e smallest of the costs of the solution trees of G, can be found by solving the above system of equa- tions. The procedures for solving these equations can often be easily modified to build a least-cost solution tree of G. Relationship with Dynamic programming Note that solving an optimization problem by Bellman’s dynamic programming technique also involves converting the optimization problem into a problem of solving a set of recursive equations. Interestingly, most of the discrete optimization problems solved by dynamic programming can be formulated as the problem of finding a least-cost solution tree of an AND/OR graph with suitably defined monotone cost functions [9], [4]. We can also state a principle similar to Bellman’s princi- ple of optimality (all subpolicies of an optimum policy are also optimal). First, let us define the optimality cri- terion for a solution tree (the counterpart of Bellman’s “policy” in our formulation). A solution tree rooted at a node n of G is called an optimum sohtion tree rooted at n if its cost is the smallest of all the solution trees rooted at n. Lemma 2.1: If the cost functions tp are monotone and if c*(n) is defined for all nodes n of G, then for every node n of G, there exists an optimum solution tree rooted at n, all of whose subtrees (rooted at the immediate succes- sors of n) are also optimal. Proof: See (91. This lemma says that due to the monotonicity of q.‘...‘.)’ an optimal solution tree can always be built by optimally choosing from the alternate compositions of only the optimal subtrees. This technique of first finding the optimal solution to small problems and then using them to construct optimal solutions to successively bigger problems is at the heart of all bottom-up pro- cedures for searching AND/OR graphs and of all dynamic programming algorithms. 3. A General Bottom up Search procedure In this section we present a general bottom-up search procedure for finding an optimum solution tree of an AND/OR graph with monotone cost functions. The procedure makes use of a “lower bound” function defined as follows. If n is a node of G and x is the cost of some solution tree T rooted at n, then lb(n,x) is defined as a lower-bound on the cost of a solution tree of G (i.e., rooted at root(G)) and having T as a subtree; i.e., lb(n,x) 5 min{f(TT1) 1 T, is a solution tree of G, and T is a subtree of TI}. For a given AND/OR graph G, the following procedure finds (on terminating successfully) an optimum solution tree of G. 183 Procedure BUS (1) @)a cw c4c (3) (4) and Initialize a set OPEN to the empty set, and a set CLOSED to the set of terminal symbols of G. For all terminal symbols n (in CLOSED), set q(n) - c(n). For all nodes n of G which are neither in OPEN nor in CLOSED, if p:n-nl,...,nL is a connector such that b,,..., nL} C CLOSED, th en add n to OPEN, and com- pute q(n) = min{t,(q(n,),...,q(n,))j p:n-nl,...,nk is a connec- tor and {nl,..., nk} C_ CLOSED}. For all nodes n in CLOSED, if p:n-nl,...,nk is a connec- tor such that {n,,...,nL} E CLOSED and q(n) > tp(q(nl),...,q(nk)), then recompute q(n) = min{t.,(q(n,),...,q(nJ)I p:n-nl,...,nk is a connec- tor and {nl,...’ nk} 5 CLOSED}, and remove n from CLOSED and put it back in OPEN. 
For all nodes n in OPEN, if p:n-nl,...,nL is a connector such that { n,,...,nt} c CLOSED and q(n) > tp(q(n,),...,q(nJ), then recompute q(n) = min(t,(q(n,),...,q(n,))I p:n-nl,...,nk is a conrec- tor and {n,,..., nk} & CLOSED}. (Termination Test) If root(G) is in OPEN or CLOSED and q(root(G)) 5 Ib(n,q(n)) for all n in OPEN, then ter- minate. The cost of an optimum solution tree of G is q(root(G)). Otherwise, if OPEN is empty, then ter- minate with failure. Select and remove a node from OPEN and add it to CLOSED. Go to step (2)a. The procedure maintains two sets of nodes: OPEN CLOSED. Due to step (1) (initialization) and step (2)a, a node n of G is on CLOSED or OPEN if and only if there exists at least one solution tree rooted at n whose all other nodes are in CLOSED. For a node n on OPEN or CLOSED, it is easily seen that q(n) denotes the cost of a solution tree rooted at n. Furthermore, due to steps (2)a, b, and c (and due to the monotonicity of the cost functions t,), for all nodes n on OPEN or CLOSED, q(n) 5 min{ f(T) 1 T is a solution tree rooted at n whose all nodes except possibly n are in CLOSED}. The following theorem, needed for the correctness proof of BUS, is proved in [lo]. Theorem 3.1. In BUS, when root(G) is in OPEN or CLOSED and q(root(G)) 5 lb(n,q(n)) for all n E OPEN then q(root(G)) = c*(root(G)). Correctness Proof If BUS terminates unsuccessfully, then OPEN is empty and root(G) is not in CLOSED; hence, obviously G does not have any solution tree. Otherwise, if BUS terminates successfully, then from Theorem 3.1, q(root(G)) = c*(root(G)). By keeping track (during the execution of BUS) of those connectors directed out of the nodes n on OPEN and CLOSED which result in the current q(n) value for the node n, an optimum solution tree of G can be constructed at the successful termina- tion of BUS. Even though upon successful termination the pro- cedure is guaranteed to find an optimum solution tree of G, but the termination itself is not guaranteed. As proved in [9], the general problem of finding an optimum solution tree of an AND/OR graph with monotone cost functions is unsolvable. But if sufficient problem-specific information is available, termination can be guaranteed. Using Heuristic to Select a Node from OPEN For a node n on OPEN, let hf(n,x) denote the (heuristic) promise that a solution tree rooted at n of cost x will be a subtree of an optimum solution tree of G. If available, this information can be used to select, the most promising node from OPEN in step (4). If hf provides reasonable estimates, then the procedure can be speeded up substantially. A useful heuristic is hf(n,x) = lb(n,x); because if lb(.,.) is a tight bound, then smaller the lb(n,x) value greater the possibility that a solution tree rooted at n of cost x is a part of an optimal solution tree of G. When hf(n,x) = lb(n,x), hf is called a lower- bound heuristic function. If in step (4) of BUS a node n with smallest lb(n,q(n)) is moved from OPEN to CLOSED, then we call it procedure BUS*. The following lemma (proved in [lo]) gives the con- dition on lower bound, under which procedure BUS’ can terminate whenever root(G) is transferred from OPEN to CLOSED. Lemma 3.1. If lb( root(G),x) = x, then in BUS* when- ever root,(G) is selected from OPEN in step 4, q(root(G)) = c*(root(G)). Hence, if lb(root(G),x) = x, then steps (3) and (4) of BUS* can be modified as follows: (3) If OPEN is empty, then terminate with failure. (4) Let n be a node on OPEN such that Ib(n,q(n)) < Ib(m,q(m)) for all nodes m on OPEN. 
If n = root(G), then terminate (q(n) is the cost of an optimal solution tree of G), else remove n from OPEN and put it in CLOSED. A lower bound function is logically consistent if for ail nodes n of G, x > y *lb(n,x) > lb(n,y). A lower bound function is heuristically consistenf if whenever T, is a solution tree of cost x rooted at a node nl, and ‘I’, is a solution tree of cost y rooted at n2, and T, is a subtree i of ‘I’,, then lb(nl,x) > lb(n,,y). The following lemma proved in [lo] states the condition under which a node will never be transferred from CLOSED back to OPEN (i.e., step (2)b would become superfluous in BUS*). Lemma 3.2. If lb(.,.) is both logically and heuristically consistent, then in BUS* whenever a node is selected and transferred from OPEN to CLOSED, q(n) = c*(n). Knuth’s Generalization of Dijkstra’s Algorithm A function t(xl,...! xk) is positive monotone if in addi- tion to being monotone nondecreasing in each variable it satisfies the following propert,y: t(xp..JJ 2 max{xl,...,xk}. For example, tpl, tp2’ tr, in Fig. 1 are positive monotone. If all the cost functions tp of G are positive monotone, then it is easily seen that we can use lb(n,x) = X. It follows that this lower bound function is logically consistent and (due to the positive monotonicity of tP) heuristically consistent. In this case BUS* becomes identical to Knuth’s gen- eralization of Dijkstra’s algorithm [8]. The heuristic bottom-up algorithm for searching AND/OR graphs by Martelli and Montanari [15] is also a special case of this procedure. Searching Acyclic AND/OR Graphs When G is acyclic, we can number the nonterminal nodes of G such that, for any two nonterminal nodes n and m, if n is a successor of m in G, then number(n)<number(m). If number(n) is used as a heuris- tic for selecting a node from OPEN, then it is easily seen that (because G is acyclic) whenever a node n is transferred from OPEN to CLOSED, q(n) = c*(n). When root(G) is transferred from OPEN to CLOSED then (due to the numbering scheme used) OPEN becomes empty and the procedure terminates successfully. Note that this procedure does not use a lower bound function; hence the bottom-up search is essentially unin- formed (the termination is guaranteed due to the acyclic nature of G). This is how most of the dynamic program- ming procedures (with some exceptions, e.g., (181, (71) perform search. Relationship with State-Space Search Procedures We define regular AND/OR graphs to be those AND/OR graphs which have only two types of connec- tors: (i) 2-connectors n + nr n2 such that nr is a nonter- minal and n2 is a terminal; (ii) l-connectors n + nr such that nr is a terminal. There is a natural correspondence between regular AND/OR graphs and regular grammars w-hich follows from the natural correspondence between AND/OR graphs and context-free grammars [6]. Furth- ermore, due to the equivalence of finite state-space graphs, finite-state automata and regular grammars, it is possible to construct a regular AND/OR graph given a state-space graph and vice versa. See Fig. 2 for a regular AND/OR graph and its equivalent state-space graph. Note that in the context of regular AND/OR graphs, BUS* is essentially a generalization of the classi- 185 cal A’ algorithm for state-space search. A* works on a restricted set of regular graphs in which (i) tJxr,xJ = x1 + x2; (ii) tp(xl) = xi; (iii) c(n) 2 0 (i.e., arc costs in the state-space graph are positive). 
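To make the correspondence concrete, here is a minimal sketch (assumed data layout and names, not the paper's code) of the bottom-up procedure restricted to regular AND/OR graphs with t_p(x1, x2) = x1 + x2, t_p(x1) = x1 and nonnegative terminal costs, written directly over the equivalent state-space graph. With lb(n, x) = x it behaves like Dijkstra's algorithm, and with lb(n, x) = x + h(n) and a consistent (monotone) h it behaves like A*.

```python
# Sketch of BUS* specialized to regular AND/OR graphs, i.e. shortest paths in a
# state-space graph.  The graph, the heuristic h, and all names are assumptions
# for illustration; with h = 0 this is Dijkstra-like, with a consistent h, A*-like.

import heapq

def bus_star_shortest_path(arcs, source, goal, h=lambda n: 0):
    """arcs: {node: [(succ, cost), ...]}; returns cost of a cheapest path or None."""
    q = {source: 0}                      # q(n): cost of the best solution tree found so far
    open_heap = [(h(source), source)]    # ordered by lb(n, q(n)) = q(n) + h(n)
    closed = set()
    while open_heap:
        _, n = heapq.heappop(open_heap)
        if n in closed:
            continue                     # stale queue entry
        if n == goal:                    # lb(root, x) = x, so terminate on selection
            return q[n]
        closed.add(n)
        for succ, cost in arcs.get(n, []):
            new_q = q[n] + cost          # cost via the connector (n, arc n -> succ)
            if new_q < q.get(succ, float("inf")):
                q[succ] = new_q          # improve q(succ) and (re)insert into OPEN
                heapq.heappush(open_heap, (new_q + h(succ), succ))
    return None

arcs = {"M1": [("M2", 2), ("M3", 5)], "M2": [("M3", 1), ("M4", 7)], "M3": [("M4", 2)]}
print(bus_star_shortest_path(arcs, "M1", "M4"))   # -> 5
```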
Hence, it is possible to define lb(n,x) = x + h(n), where x represents the cost of the current “regular” solution tree rooted at n, and h(n) represents the lower bound on the remaining cost (in the context of state-space graphs, x is the cost of the path from source node to n, and h(n) is the lower bound on the cost of the path from n to the goal node). Since lb(root( G),x) = x (because h(root(G) = 0), BUS* (like A*) terminates whenever root(G) is transferred from OPEN to CLOSED. Clearly lb(n,x) as defined here is logically consistent. Furthermore the heuristic con- sistency assumption on the lower bound function (lb(n,x) = x + h(n)) is virtually identical to the so called “mono- tonic” restriction*** on h in 1201 (which, if satisfied. guarantees that a node is never transferred back from CLOSED to OPEN). The modifications to A’ presented in [14] and [17] can also be applied to BUS* (see [lo] for details). Various dynamic programming procedures for finding a shortest path in a graph (e.g., (31, (71) are also special cases of procedure BUS for finding a least-cost solution tree of a regular AND/OR graph (see [lo]). 4. Concluding Remarks It was discussed in [9], [12], [21] that most of the procedures for finding an optimum solution tree of an AND/OR graph can be classified as either top-down or bottom-up. In [9] we presented a general top-down search procedure for AND/OR graphs, which subsumes most of the known top-down search procedures (e.g., ,40*, B’, SSS*, alpha-bet,a). Here, we have presented a general bottom-up search procedure which subsumes most of the bott,om-up procedures for searching AND/OR graphs. Almost all of the dynamic program- ming (DP) procedures can also be considered special cases of our bottom-up procedure. On the other hand, it is natural to view the top-down search procedures for AND/OR graphs as branch-and-bound (B&B) [9], [19]. Note that state-space search procedures like A* can be considered both top-down [19] and bottom-up. The reason is that for any state-space graph, it is possible to construct two equivalent regular AND/OR graphs such that the top-down search in one is equivalent to the bottom-up search in the other, and vice versa. This explains the confusion prevalent, in the operations research literature as to whether certain shortest path algorithms are DP or B&B. For example, Dijkstra’s algorithm for shortest path [2] (an algorithm very similar to A’) has been claimed to be both DP (31 and B&B (51. ***Note that the monotone restriction on heuristic function h a.s defined in [20] has no connection with the monotonicity property of the cost functions tp. Constructing a general procedure is quite useful, as it is applicable to a large number of problems and pr+ vides us insights into the nature and interrelationships of different search procedures. This is particularly impor- tant in an area which has been full of confusion. For a sample of confusing and contradictory remarks regarding the interrelationships of B&B, DP, and heuristic search procedures see [S], [ 121. By identifying two natural groups of search procedures, we have resolved much of this confusion [12], [9]. Furthermore, our approach has also helped synthesize variations and parallel implemen- tations of a number of search procedures (e.g., see Ill], I131 1. PI PI PI PI PI P-4 PI PI PI PO1 REFERENCES H. Berliner, The B’ Tree Search Algorithm: A Best-First Proof Procedure, ArtijGal Intelligence 1.2, pp. 23-40, 1979. E. W. Dijkstra, A Note on Two Problems in Con- nection with Graphs, Numer. Math. 
1, pp. 269-271, 1959. S. E. Dreyfus and A. M. Law, The Art and Theory oj Dynamic Programming, Aca,demic Press, New York, 1977. S. Gnesi, A. Martelli, and U. Montanari, Dynamic Programming as Graph Searching, JACM 28, pp. 737-751, 1982. P. A. V. Hall, Branch-and-Bound and Beyond, Proc. Second Internat. Joint Con/‘. on Artif. Intell., pp. 641-658, 1971. P. A. V. Hall, Equivalence between AND/OR Graphs and Context-Free Grammars, Comm. ACM 16, pp. 444-445, 1973. T. Ibaraki, Solvable Classes of Discrete Dynamic Programming, J. Math. Analysis and Applications 43, pp. 642-693, 1973. D. E. Knuth, A Generalization of Dijkstra’s Alge rithm, Information Processing Letters 6, pp. l-6, 1977. V. Kumar, A Unified Approach to Problem Solving Search Procedures, Ph.D. thesis, Dept. of Com- puter Science, University of Maryland, College Park, December, 1982. V. Kumar, A General Heuristic Bottom up PC+ cedure for Searching And/Or Graphs. Working paper. 1984. Ill1 WI I131 PI P51 PI I171 WI PI PO1 I211 V. Kumar and L. Kanal, A General Branch and Bound Formulation for Understanding and Synthesizing And/Or Tree Search Procedures, Artificial Intelligence 21, 1, pp. 179-198, 1983. V. Kumar and L. N. Kanal, The Composite Deci- sion Process: A Unifying Formulation for Heuristic Search, Dynamic Programming and Branch & Bound ?ocedures, 1983 National Conference on Artificial Intelligence (AAAI-831, Washington, D.C., pp. 220-224, August 1983. V. Kumar and L. N. Kanal, Parallel Branch and Bound Formulations for And/Or Tree Search, IEEE Trans. on Pattern Analysis and Machine Intelligence (to appear), 1984. A. Martelli, On the Complexity of Admissible Search Algorithms, Artificial Intelligence 8, pp. l- 13, 1977. A. Martelli and U. Montanari, Additive AND/OR Graphs, Proc. Third Internat. Joint Conf. on Artif. Intell., pp. l-11, 1973. A. Martelli and U. Montanari, Optimizing Decision Trees Through Heuristically Guided Search, Comm. ACM 21, pp. 1025-1039, 1978. L. Mero, Some Remarks on Heuristic Search Alg+ rithms, IJCAI-81, Vancouver, Canada, pp. 572- 574, 1981. T. L. Morin and R. E. Marsten, Branch and Bound Strategies for Dynamic Programming, Operations Research 24, pp. 611-627, 1976. D. S. Nau, V. Kumar, and L. N. Kanal, General Branch-and-Bound and its Relation to A* and AO’, to appear in Artificial Intelligence, 1984. N. Nilsson, Principles of Artificial Intelligence, Tioga Publ. Co., Palo Alto. CA, 1980. D. R. Smith, Problem Reduction Systems, Unpub- lished report, 1981. [22] G. C. Stockman, A Minimax Algorithm Better than Alpha-Beta ?, Artificial Intelligence 12, pp. 179- 196, 1979. 186 Cost functions associated with the hyperarcs of G: tpl(xl’xp) = x1 + x2; t&J = 2*x1; tp&Qx,) = n~in(x~,xJ; $&,,x,) = XI - x2- Terminal cost function c: c(a) = 10; c(b) = 2. I:ig. l(3). iIn And/Or graph G, and t,hc associated cost functions. CT(A)=min(4, =4 C,(S)=2*2=4 f(T) = CT(S) = 4+8 Pl =lO = 12 CT(B) = lo-2= 8 CT(b)=2 23 b CT(b)=2 Fig. l(b). Computation of f(T) of a solution tree T of G. (a> (b) Fig. 2(a). A state space graph S. Fig. 2(b). A regular And/Or graph G equivalent to the state space graph S. A nonterminal node Ni depicts the problem of going from the source node Ml to node Mi in the state space graph S. A terminal node ni j 9 in G depicts the problem of going from Node M. to M. in S. 1 J 187
META-LEVEL CONTROL THROUGH FAULT DETECTION AND DIAGNOSIS Eva Hudlicka and Victor R. Lesser Department of Computer and Information science university of Ma!BachuMts Amherst, Massachusetts, 01003 ABSlTtACT Control strategies in most compla p&lem-s0lving systems, though highly parameter&d, are not adaptive to the characteristics of the particular task being solved. If the characteristics of the task are atypical, a fiied control strategy may cause incorrect or inefficient mg. We present an approach for adapting the control strategy by introducing a meta-level control component into the problem-solving architecture. This meta-level control component is based on the paradigm of Fault Detection/Diagnosis. Our presentation will concentrate on modeling the problem-solving system and on the inference techniques necessary to use this model for diagnosis. We feel that meta-level control based on the Fault Detection/Diagnosis paradigm represents a new approach to introducing more sophisticated control into a problem- solving system. I INTRODUCTION This paper explores the use of meta-level control in a problem-solving system to adaptively change the system’s control parameters in order to make problem solving more robust and efficient. In many complex problem-solving systems the control strategies are highly parameterized. These parameters antrol decisions such as: 1. what importance to attach to information generated by different sources of knowledge; 2. what type of search to perform (e.g., breadth vs. depth first; data vs. goal directed); 3. what type of predictions to generate from partial results; 4. what criteria to use to @dge whether a solution is acceptable. These parameter settings, which are often determined in an ad hoc manner, are based on typical characteristics of the tasks being posed to the problem-solving system and the characteristics of the problem-solving system itself. Even though such a parameterization makes it relatively easy to change control strategies, the system is rarely allowed to change its own control parameters as the task or system characteristics change during p-g. Thus, This research was sponsored, in part, by the National !kiencc Foundation under Grant Mw and by the Defense Advanced Research Projects Agency (DOD, monitored by the Office of Naval Research under Contract N k 049441. if the characteristics of a particular task are atypical or the system characteristics* change during execution, the resulting incorrect parameter settings may cause inefficient or incorrect processing. Our approach to adapting these problem solving control parameters is to introduce a meta-level control component into the problem-solving system architecture, based on an extension of the Fault **Detection/Diagnosis (FDD) paradigm [4, 51 to handle problem-solving control errors resulting from inappropriate parameter settings. The FDD system has three components: the Fault Detection module, the Fault Diagnosis module, and the Strategy Replanning module. See Figure 1 for a diagram of the system architecture. The Fault Detection module monitors the state of problem solving in order to detect when the problem-solving system’s behavior deviates from the expected behavior. The criteria for expected behavior are based on standards for acceptable problem solving performance and internal consistency in the problem- solving system data base. Examples of detection criteria are: 1. a large number of highly rated proces&g goals not being achieved; 2. 
tasks on the problem solving agenda being too low rated or the agenda being empty; 3. low credibility of intermediate results or contradictory information being generated; I 4. results not being produced in a timely fashion or no results being produced for problems where a solution is expected. If such a situation is encountered by the Fault Detection module, the Fault Diagnosis module is invoked to analyze why the situation occurred. The Diagnosis module, using a detailed model of the problem-solving system and the current state of problem solving, determines which control parameter settings were responsible for reaching the undesirable situation. A Strategy Replanning module is then invoked to adjust the parameters so that appropriate problem solving activities are performed. l Previous work has examined this approach in a distributed problem-solving environment where it is likeIy for pocessn communication channels, and sensors to be faulty [9]. ’ M We use the term fault in a very liberal sense to i.ncIude inappropriate parameter values. 153 From: AAAI-84 Proceedings. Copyright ©1984, AAAI (www.aaai.org). All rights reserved. MET&LEVEL COmROL D&ii pmba of uatc Abnraacd uatc of New &tnctcl uttinpr of problem 5olving problem wkiag Y > PROBLEM SOLVING SYSTEM Figure 1: System Archlteeture. This approach to meta-level control, which involves adapting the control strategies, is a generalization and extension of earlier work by Hayes-Roth and Lesser on policy knowledge sources for Hearsay-II [8], the Hayes-Roths multi-level control structure for planning ml, and Wilensky’s work on meta-level control [l3]. It is, however, much different in character and emphasis from the work on meta-level control by Davis [3], Genesereth and Smith [6], and B. Smith [l2]. Though the general frameworks they posit for meta-level control can be used to build the type of meta-level control proposed here, their emphasis is different. Their work is oriented more towards how to layer control knowledge within a single uniform inference framework to accomplish each control decision rather than the type of knowfedge and inference required to introspect about the behavior and the performance of the system. It is this latter orientation which will be the focus of the remainder of this paper. We will illustrate the use of our approach to adaptive control by examining the knowledge and inference structure necessary to implement the Fault Diagnosis module for a problem-solving system based on a goal-directed Hearsay-II architecture, the Vehicle Monitoring Testbed (VMT) [ll]. The task of this system is to interpret acoustic signals produced by vehicles moving through a twodimensional area and generate a map of the environment, indicating what types of vehicles there are and what paths they took. Section II describes how we model the VMT system structure and function. Section III illustrates by way of example how this model is used by the Diagnosis module of the FDD system to diagnose a faulty parameter setting. Section IV describes the status of the system and directions for future research. II MODELING A PROBLEMSOLVING SYSTEM This section describes our model of the Vehicle Monitoring Testbed (VMT) problem-solving system and explains how this model can be used to understand why the system arrived at a particular state. 
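As a hypothetical illustration of how such detection criteria might be monitored (the thresholds, record fields, and function names below are invented, not part of the VMT system's interfaces), each check maps onto one of the symptoms listed above and yields a state-object pair for the Diagnosis module.

```python
# Sketch of a Fault Detection pass (illustrative only; field names and
# thresholds are assumptions).  Each symptom is a (state, object) pair.

def detect_symptoms(goals, agenda, results, now,
                    goal_rating_thresh=0.8, stale_time=50, min_result_cred=0.3):
    symptoms = []
    # 1. Highly rated goals that have stayed unsatisfied for too long.
    for g in goals:
        if g["rating"] >= goal_rating_thresh and not g["satisfied"] \
           and now - g["created_at"] > stale_time:
            symptoms.append(("GOAL-SATISFIED", g["id"]))
    # 2. Scheduling agenda empty or uniformly low rated.
    if not agenda or max(t["rating"] for t in agenda) < goal_rating_thresh / 2:
        symptoms.append(("AGENDA-LOW", None))
    # 3. Intermediate results with low credibility.
    for r in results:
        if r["credibility"] < min_result_cred:
            symptoms.append(("LOW-CREDIBILITY-HYP", r["id"]))
    # 4. No results produced where a solution is expected.
    if not results:
        symptoms.append(("NO-RESULTS", None))
    return symptoms

goals = [{"id": "VMT-GOAL#1", "rating": 0.9, "satisfied": False, "created_at": 0}]
print(detect_symptoms(goals, agenda=[], results=[], now=120))
```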
The VMT system derives its results from the input data (see Figure 2a) by incrementally constructing and aggregating intermediate level hypotheses until hypotheses that represent a complete map of the environment are generated. As part of the processing of the system, the creation of an intermediate hypothesis causes the generation of several types of goals. These goals are descriptions of the classes of higher level hypotheses that can potentially be generated given the existence of the newly created hypothesis [2]. Once a goal has been generated, the system attempts to satisfy the goal by scheduling and executing knowledge sources to produce the higher level hypotheses. This is the basic system cvcle. LOW-HYP-CREATED KSI-SCHEDULED HIGHER-HYPCREA’IED \ \/\ PART B: THE AB.Sl-RACTED OBJECT MODEL HYPOTHESES -> gcamtc upcctatiw5 DATA BLACKBOARD GOAL BLACKBOARD aI QUEUES PART A: PROCESSING Sl-RUcTuRE OF THE VMT SYSTEM plgorc 2: Moaellng the VMT Prohlemsolvtng System. This figure illustrales the state transition/abstracted object model C# the VMT system, a high level view 4 the system Structure, and the relmhip among them. 154 The system behavior thus consists of a series of events. Each event results in the creation of an object (e.g., hypothesis, goal, or knowledge source instantiation) . or the modification of the attributes of some existing objects. We can represent the system behavior by specifying either the events or the changes these events cause in the system in terms of their effects on the attributes of the system objects. We chose the latter as the basis for our representation and model the problem- solving system behavior by a state transition diagram (see Figure 2c). Each state represents a specific state of some object in the VMT system in terms of its attribute values. Each state is specified by a schema, which contains finks to other states in the model (such as states prececding it and following it), pointers to the descriptions of the system objects the state refers to (these descriptions of the VMT objects are called abstracted objects; see Figure 2b), and a constraint expression over the abstracted ob@ts’ attribute values. This constraint expression is evaluated during diagnosis to determine whether the state has been reached by the VMT system; i.e., whether there exist objects in the problem-solving system whose attribute values satisfy the constraint expression associated with the state. For example, the process of generating a hypothesis at a higher level of abstraction from one at a lower level of abstraction can be described as follows: the creation of a lower level hypothesis causes the creation of a goal to produce a specific result (i.e., the higher level hypothesis) that incorporates the lower level hypothesis. This causes the scheduling of a knowledge source instantiation (KU) which later executes and produces the higher level hypothesis. In our model this serie!s of events is represented as the sequence of states: LOW- HYP-CREATED, GOAGCREATED, KU-SCHEDULED, KSI-EXECUTES, and HIGHER-HYP-CREATED (see Figure 2). The state transition arcs, which co~cct the individual states in the model, represent causal relationships among the states. In some cases there may be more than one state transition arc coming in or out of a given state. For example, in Figure 3, states A, B, and C precede state D. The model needs to represent the exact relationship among the four states. 
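The sketch below shows one plausible encoding of a state schema and an abstracted object (all field names are illustrative assumptions, not the actual VMT data structures); the constraint expression attached to a state is evaluated against the problem solver's objects to decide whether the state has been reached.

```python
# Sketch of an SBM state schema and abstracted object (illustrative encoding).

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class AbstractedObject:
    attributes: Dict[str, object]          # e.g. {"LEVEL": "VT", "TIMES": (1, 5)}
    vmt_object: Optional[object] = None    # link to the real VMT object, if found

@dataclass
class State:
    name: str
    obj: AbstractedObject
    constraint: Callable[[AbstractedObject, List[object]], bool]
    predecessors: List["State"] = field(default_factory=list)
    cluster_link: Optional[str] = None     # link to a more detailed cluster

def evaluate(state: State, vmt_db: List[object]) -> bool:
    """True iff some object in the problem solver's data base satisfies the constraint."""
    for candidate in vmt_db:
        if state.constraint(state.obj, [candidate]):
            state.obj.vmt_object = candidate
            return True
    return False

# Example: VT-HYP-EXISTS is true if a vehicle-track hypothesis spans the required times.
vt_obj = AbstractedObject({"LEVEL": "VT", "TIMES": (1, 5)})
vt_hyp_exists = State(
    name="VT-HYP-EXISTS",
    obj=vt_obj,
    constraint=lambda ao, cands: any(
        c.get("level") == ao.attributes["LEVEL"]
        and c.get("times") == ao.attributes["TIMES"] for c in cands),
)
print(evaluate(vt_hyp_exists, vmt_db=[{"level": "VT", "times": (1, 4)}]))  # False
```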
If all three states A, B, and C are necessary before state D can be reached, then the relationship among the three states preceding state D is logical AND (Figure 3a). If any one of the states A, B, or C is sufficient to reach state D, then the relationship among the three states is logical OR (Figure 3b). States are related not only by their causal connections but also by constraint relationships among the abstracted objects associated with them. The abstracted objects are represented as schemas consisting of attributevalue pairs. (The three parts of Figure 2 illustrate how the State Model and the Abstracted Object Model and the actual objecfs in the VMT system relate to one another.) Each object contains information that allows the system to determine the values for that object’s attributes using objects whose attribute values are already known. Constraints among states can then be specified by states sharing the same object or via the relationships among the attributes of the objects attached to the states. For example, each HYP object (see Figure 2b and 2c) has an attribute LEVEL. The relationships among the LEVEL attributes of the HYP objects attached to the states LOW-HYP-CREATED and HIGHER-HYP-CREATED is expressed by the following sets of constraints. The value of attribute LEVEL of object HYP attached to state LOW-HYP-CREATED is obtained by calling the function GET-LOWER-LEVEL with the value of attribute LEVEL of object HYP attached to state HIGHER-HYP-CREATED. Conversely, the value of attribute LEVEL of objxt HYP attached to state HIGHER-HYP-CREATED is obtained by calling the function GET-HIGHER-LEVEL with the value of attribute LEVEL of object HYP attached to state LOW- HYP-CREATED. The abstracted objects either point to existing objects in the VMT system or specify characteristics of objects that should exist in the system. The ability to represent not only objects that already exist in the problem-solving system but also objects whose existence is nece~~afy in order for the system to achieve a particular state allows the model to serve as the basis for a high level simulation of the underlying problem-solving system. This simulation is accomplished by propagating attribute values among the interrelated abstracted objects based on the causal relationship among the states. In addition to reasoning about system behavior in terms of sequences of states, we also need to reason qualitatively about how system object attribute values are computed from the attribute values of other objxts and from system control parameters. This requires modeling some of the internal computations performed by the problem-solving system. In order to model the problem-solving system at this level, we use a model very similar to the one used for modeling the behavior of the system. In this case, the states represent values of attributes of the system objects, values of controls parameters, and values of important intermediate states of the internal computation. The transition arcs represent how the value of a state is computed from the values ass&ated with the states that precede it. We are currently using a simple causal model in which the arcs are labelled as either having an increasing or decreasing PART A: S~arn related by AND PART B: Stata related by OR FIgure 3: Lqical Rclatlonshlps among States ln the Model. 155 effect on the value of the state that represents the result of the computation [l]. 
Two states are connected by an increasing arc if an increase in the value of one state causes an increase in the value of the other state. In some cases not shown in this paper, we also need to reason using the exact formula representation of the computation. The states in the model can thus represent different aspects of the underlying VMT system. One of the attributes in the state schema is the STATE-VALUE attribute. This attribute can represent one of several aspects of the problem-solving system. In some cases we are interested in whether a particular intermediate state has been reached; i.e., is there an object in the VMT system that matches the characteristics of the abstracted object associated with that state. In these cases the STATE-VALUE is true if the object does exist, and false otherwise. In other cases we need to reason about the value of some attribute of a particular object and relate it to the value of the corresponding attribute of another object. For example, we need to reason about the relatively low rating of a hypothesis with respect to another hypothesis. In these cases the STATE-VALUES represent the relationships among two or more objects in the VMT system. The values of the STATE-VALUES attributes are then low, high, or equivalent. The model is organized into clusters of states (Figure 4 illustrates three such clusters). Each cluster GLHYP-- VX,HYP-m VT-ANSWEX.HYP-EXISTS , GOALSATISRED stat- /’ \ \ / PART A: (A Part of the) Answer Dctivrtioa MO&J \ / KSI-SCHEDULED HIGHER-HYtiCREATU) GOAMXEATED / KSI-ExEcurEs LOW-HYP-CREA~D /’ I I / PART 6: KS1 Schcduiing Mock4 I I I / I KSI-SCHEDULED K!x-u<EcurEs PART C: KS1 Exautiw hfdd a---- 3 duuct Linl: F’lgun 4: System Behavior Model Clusters. represents an aspect of the system behavior at some level of detail. The representation is hierarchical in that only certain events are represented at any one level of the hierarchy. For example, the Answer Derivation Mudel represents only the answer hypotheses and their support structure in terms of intermediate hypotheses; vehicle track (VT) preceded by vehicle location (VL) preceded by group location (GL). It does not represent any of the knowledge sources scheduled and executed in the process. This information is represented in clusters at a lower level of the model hierarchy. Because of this hierarchical representation two states may be contiguous in one cluster while in fact a number of other states occur in between which are represented by a cluster at a lower level of the model hierarchy. Equivalent states in clusters at different levels of abstraction are connected via cluster links. Objects may be shared across the different clusters. This hierarchical structure allows fast focusing into the problem area during diagnosis by avoiding detailed analysis until the part of the model that is relevant has been identified. The system model represents a subset of all the possible system behaviors, which we think is sufficient fo! detecting and diagnosing a significant number of faults; We call this model the system behavior model (SBM). The SBM is used by both the Fault Detection module and the Fault Diagnosis module. The Detection module identifies a specific undesirable situation in the monitored system; i.e., a specific abstracted object along with an associated state. This state-object pair constitutes the symptom detected by th e Detection module, which is passed on to the Diagnosis module. 
Diagnosis is accomplished by constructing a representation of the current system state, constructing a model of how this state was reached and comparing this with the correct system behavior as represented by the model. Any points of departure from this expected behavior are traced to the states at the lowest level in the SBM. These states are marked as primitive. A primitive state that is found to be false during diagnosis constitutes a reportable failure. The current system state representation is constructed using information from the SBM and the VMT system data structures. The construction begins with locating the symptom state in the SBM. The predecessor states of this state are then found, along with their abstracted objects descriptions. First, the attributes of these abstracted objects are evaluated, using the constraint relationships between the existing abstracted objext and the one being evaluated. once these attributes have been evaluated, the Diagnosis module looks for the corresponding objects in the VMT system. If such objects are found, they are linked to the abstracted object. Finally, for each abstracted object the corresponding state is created and the STATE-VALUE l The system model could be extended to represent the de level of the VMT system. However we have not found it ==-y to represent the VMT system at such a low level of detaiI in order to effectively reaso~l abwt iti behavior. 156 attribute is evaluated. Depending on the type of state and its value, the type of reasoning may now change. The next paragraph describes the different types of reasoning. The underlying mechanism for all the different types of diagnostic reasoning is bidirectional constraint propagation, which begins at one or more state-object pairs in the SBM whose values have already been determined. This constraint propagation m*es possible sophisticated diagnostic reasoning. In the next section we show how the system model supports four different types of reasoning necessary to diagnose inappropriate parameter settings: 1. Backward cuusul tracing: given a particular state and its value the system can go back through the model and explain, in terms of the model states, why that state was reached. 2. Comptua!ive reasoning : the system can compare two different objects and explain why they were different, in terms of the model states. 3. Unknown value derivation: the system can determine a value of an unknown state in the system model by finding the value which is consistent with the known values of the surrounding model states. 4. Resoiving inco?uistencies : having found two inconsistent objects, the Diagnosis module can decide which one is correct by comparing both objects to a model of an ideal or expected objzct. III AN EXAMPLE OF FAULT DIAGNOSIS The following example (see Figure 5a) represents a scenario in the VMT system in which the system is receiving data from two input sources; sensors, A and B. The two sensors overlap, so some data are sensed by both, but because the system is more confident about sensor B the sensor weight parameters are set such that the data generated by that source are valued more than the data generated by sensor A. This results in the data from sensor B being rated high and the data produced by sensor A in the same area being rated low. In the example scenario the supposedly reliable source of data for the particular task (sensor B) does not in fact generate reliable data because it is malfunctioning. 
It is instead generating very short noise segments that cannot be incorporated into a single vehicle track. BecaUSe sensor B’s sensor weight parameter has such a high value, these short noise segments are very highly rated. The goal of the diagnosis is to recog&e that sensor B is malfunctioning and change the sensor weight parameters so that the systems begins to process data generated by sensor A. A vehicle is moving through the monitored area, from left to right, generating signals at locations 1 through 8 (see Figure 5a). Sensor A sensesall PART A: Diagram of the signals generated by the moving vehicle (locations I through 8) and the sensor layout. The sensors send the sensed SigMlS to the processing lwde. NO“r pm B: After some time. the system ger~ates a vehicle track (VT) hypothesis connecting /oca.tiorrr I through 4 sensed 6~ SEIVSOR A. It also generates several short track scgment~ which are the result of the ~&SC generated b the faulty SENSOR B. Figure 5: Faclk sanrrto. locations but, becuse of the sensor-weight parameter, locations 5 through 8 are rated low. Sensor B, because it is malfunctioning, is not sensing the vehicle SigndS but rather is generating very highly rated noise segments. The VMT system generates a vehicle track (VT) hypothesis connecting locations 1 through 4 based on the strong data from sensor A (see Figure 5b). As a result of sensor A’s data being weighted low in the area where SigdS 5 through 8 appear, sensor B malfunctioning, and sensor BS sensor weight parameter being high, the knowledge source instantiation (ICSI) that would extend the partial track to include the location in time 5 is rated low.’ Because the short segments of noise generated by sensor B are rated high, they cause the scheduling of knowlege sources which are highly rated. The system queue ha- a number of these highly rated KSIs that delay the execution of the low rated K!%s which would extend the true vehicle track hypothesis. As a result, the system spends all its time forming short segments from the noise signals and the true vehicle track remains unextended. b A KS1 rating is a function of, among other things, the input data. 157 This situation can generate a number of symptoms. Due to lack of space we will illustrate the diagnosis by pursuing only one of the symptoms. The symptom we pursue here is a highly rated goal, VMT-GOAL#l, which represents the system’s intent to extend the existing vehicle track hypothesis connecting locations 1 through 4 to include location 5 (see Figure 5b). This goal has remained unsatisfied for a long time and has therefore been selected by the Fault Detection module as a representative symptom. Diagnosis begins with the arrival of the symptom from the Detection module. A symptom consists of a stateobject pair; the unachieved state is GOAL-SATISFIED and the abstracted object is GOAL- OBJECT, which points to the object VMT-GOAL#l in the VMT system. v-r-IIYP-Exlsn VT-HYP-Eixsn GOALSATISFED \ PART A (A Par-~ of the) Anrwcr Dcrivaticm Mcdcl \ LOW-HYP-CREATED / I Fii, the SBM cluster that contains the state GOAL-SATISFIED and its associated abstracted objects must be located. This is the Answer Derivatbn Model cluster. The relevant. objects and states in this cluster are evaluted, using the constraint expressions in the SBM and the already evaluated attributes of the symptom state and its object. The values of the states in this cluster can be either true or false depending on whether objects of the desired characteristics exist in the VMT system or not. 
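To make the scenario concrete, the toy computation below uses invented numbers and a deliberately simplified rating formula (not the VMT system's actual one) to show how a high sensor weight on the malfunctioning sensor makes the noise-driven KSIs outrank the KSI that would extend the true track.

```python
# Toy illustration of the faulty scenario: rating formula and numbers are
# invented; only the qualitative effect matters.

SENSOR_WEIGHT = {"A": 0.3, "B": 0.9}   # misset: B is trusted although it is faulty

def data_component_rating(sensor, signal_strength):
    return SENSOR_WEIGHT[sensor] * signal_strength

def ksi_rating(data_ratings):
    return sum(data_ratings) / len(data_ratings)

# KSI that would extend the true track 1-4 through location 5 (sensed only by A):
extend_track = ksi_rating([data_component_rating("A", 0.8)])
# KSIs built from sensor B's short noise segments:
noise_ksis = [ksi_rating([data_component_rating("B", s)]) for s in (0.7, 0.75, 0.8)]

print(extend_track, noise_ksis)
# The scheduler always runs the highest-rated KSI first, so the noise KSIs
# (about 0.63-0.72) starve the track-extension KSI (about 0.24) and the goal
# remains unsatisfied.
```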
In this case the state GOAL-SATISFIED is false because the associated object (VMT-GOAL#l) has not been satisfied in the VMT system (i.e., there is no vehicle track hypothesis connecting locations 1 through 5). We continue backward causal tracing through the SBM model to the state preceeding the GOALSATISFIED state: the state VT-HYP-EXISTS and its associated object, VT-HYP. The attribute values of this object are determined from the attribute values of the object VMT-GOAL#l using the constraint relationships described in the previous section. The state VT-HYP-EXISTS evaluates to false, since no VT hypothesis of the desired characteristics exists in the VMT system. The reasoning continues backwards through the SBM attempting to find the first state that evaluates to true (i.e., the last point where desired system behavior stopped). Because a vehicle track can be formed from a shorter vehicle track or a set of vehicle locations (VL) the state VT-HYFEXISTS is preceeded by the states VT-HYP-EXISTS or VCHYP-EXISTS. The objects associated with these states are VT-HYP and VL-HYP respectively. Again, we Look for the associated objects in the VMT system in order to evaluate the states. In this case the objects are track fragments containing locations 1 through 5, or the locations 1 through 5 themselves, which could lead to the desired hypothesis. This brings us to another instantiation of the state VT-HYP-EXISTS and object VT-HYP, this time with the hypothesis connecting locations 1 through 4. E3ecause such a hypothesis does exist in the VMT system, this state evaluates to true. This is where the generation of the vehicle track that would satisfy the goal VMT-GOAL#l stopped. The evaluated model is in Figure 6a. / PART B, KS1 Scheduling Model I I I KSI.SCHEDULED Ksr-ExEcuxEs ----- 3 PART C: KS1 ticcuriw Mdd TRUE STATE El FALSE STATE Figure 6: Evaluated System Model. At this point we cannot continue reasoning using the Answer Derivatbn Mudef cluster because it does not represent the events occurring in between the last true state (VT-HYP-EXISTS; VT hypothesis connecting locations 1 through 4) and the first false state (V’I-HYP- EXISTS; VT hypothesis extending the hypothesis l-4 through location 5). Anytime such a truestate/falsestate pair is found, we must find the cluster which represents the states occurring between those two states. The cluster pointed to by the VT-HYP-EXISTS state is the KSI Scheduling Model cluster shown in Figure 4b. We continue determining the types of objects and evaluating the states. The result is the evaluated model in Figure 6b. We find another gap in the expected processing: the KS1 that would produce the desired hypothesis was scheduled but did not execute. Again, following the cluster links, we switch to a cluster that describes in more detail what occurs in between the true state (KSI-SCHEDULED) and the false state (K!31- EXECUTE!S). This is the cluster KSI Execution Modcf in Figure 4c. We eventually arrive the state KSI-RATED- MAX. This state represents the fact that a KS1 must be “The state VT-HYPEXBTS represents aU track h !I- h-up to sqne fixed track length. Therefore it is a re exive state, *Comparative reasoning contains many complexities which we poixihg back to itself. cannot go into in tls paper. of the types of re awning For more detailed descriptien mentioned in this paper set [lOl 158 rated the highest of all the KsIs on the queue in order to execute. 
This state is false since the KSI that could extend the 1-4 VT hypothesis is rated low with respect to the other KSIs on the queue. The evaluated model is in Figure 6c. The state KSI-RATED-MAX is a different type of state. Unlike the states mentioned so far, which represent the existence of some object in the VMT system, the state KSI-RATED-MAX represents a relationship among a group of objects; in this case, the relationship among the knowledge source instantiations on the scheduling queue. Whenever this type of state is reached, the system switches to comparative reasoning. This involves comparing some attributes of two objects in the system: one that achieved a desired state (in this case, the KSI that is maximally rated) and one that did not (in this case the low-rated KSI that would extend the VT hypothesis 1-4 to include location 5). The system builds a model of how those objects were created and attempts to discover what differences along the object creation paths were responsible for the different outcomes. Two slots in the state schema are important here: the ACTUAL-VALUE slot, which represents the value of the attribute of interest, and the RELATIVE-VALUE slot, which represents the relationship between the ACTUAL-VALUEs of the two objects in the parallel investigation. In this type of reasoning the states do not represent the existence or non-existence of some object but rather the relationship among the values of a particular attribute of some object (for example the rating of a knowledge source or a hypothesis) as compared to the corresponding attribute of the other object in the parallel investigation. In this case the relevant attribute is the RATING attribute of the KSI object. The two objects being investigated here are the two KSIs (the low-rated KSI to create a hypothesis connecting locations 1 through 5 and the KSI which is rated the highest on the scheduling queue). We investigate, in parallel, how the ratings of the two KSIs were derived in an attempt to identify what caused the lower rating of the KSI that would extend the 1-4 track. We first switch to a cluster where the attribute of interest (KSI-RATING) is represented by a state. This is the KSI and Hypothesis Rating Model in Figure 7. Because we are investigating two objects we must instantiate two copies of this cluster. One copy will represent the creation of the low-rated KSI that would extend the VT hypothesis through location five (we will call this the low ksi path). The other will represent the creation of the highest rated KSI on the queue (we will call this the high ksi path). We begin with the state KSI-RATING. Because the rating of the KSI of interest is lower than the highest rated KSI, we assign the value low to the RELATIVE-VALUE attribute of the state representing the relationship between the two values. We go back through the SBM and find that what determines a KSI rating is the DATA-COMPONENT-RATING of the KSI. We compare the data components of the two KSIs and again find that the DATA-COMPONENT-RATING of the low-rated KSI is lower than the corresponding DATA-COMPONENT-RATING of the high-rated KSI. We continue evaluating the model for the derivation of the KSI rating for both KSIs, via the KSI data components at various levels of abstraction (vehicle location, VL, preceded by group location, GL, preceded by signal location, SL), arriving finally at a point that represents how the sensor weights and the strength of the data signal determine the value of the sensed signal for each sensor.
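As a rough illustration of how the RELATIVE-VALUE slot might be filled during such a parallel investigation, the sketch below compares the ACTUAL-VALUEs of corresponding attributes on the two paths; the list-of-pairs representation of a path is an assumption made for the example, not the system's actual schemata.

;; Sketch of RELATIVE-VALUE assignment in comparative reasoning.
;; Each path is represented as a list of (attribute . actual-value) pairs.
(defun relative-value (low-value high-value)
  (cond ((< low-value high-value) 'low)
        ((> low-value high-value) 'high)
        (t 'equal)))

(defun lower-rated-attributes (low-path high-path)
  "Return the attributes on which the low path is rated below the high
path; these are the branches the parallel investigation follows further."
  (loop for (attr . low) in low-path
        for (nil . high) in high-path
        when (eq (relative-value low high) 'low)
          collect attr))

;; For example:
;; (lower-rated-attributes '((ksi-rating . 2) (data-component-rating . 3))
;;                         '((ksi-rating . 8) (data-component-rating . 7)))
;; => (KSI-RATING DATA-COMPONENT-RATING)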
Because the signal location rating on the low ksi path is lower than the signal location rating on the high ksi path, the value of the state SL-HYP-RATING for the low-rated KSI is low. We reason that in order for this value to be lower than the corresponding value in the high ksi path, the two objects that influence this value (sensed-value by sensor A and sensed-value by sensor B) must be rated lower than the corresponding objects on the other path. When we enumerate the relationships among the two pairs of sensed-values we get four relationships:

1. Sensed-value for sensor A on the low ksi path < sensed-value for sensor B on the high ksi path.
2. Sensed-value for sensor A on the low ksi path > sensed-value for sensor A on the high ksi path.
3. Sensed-value for sensor B on the low ksi path = sensed-value for sensor A on the high ksi path. (They are both 0 because no signal was sensed at that place by the other sensor.)
4. Sensed-value for sensor B on the low ksi path < sensed-value for sensor B on the high ksi path.

[Figure 7: Parallel investigation of two KSI rating derivation paths. Part A: the low KSI path; Part B: the high KSI path. The figure shows states such as KSI-RATING (RELATIVE-VALUE: LOW), DATA-COMPONENT-RATING, and DATA-SIGNAL (ACTUAL-VALUE: INCONSISTENT).]

In this case the RELATIVE-VALUE attribute of the state SENSED-VALUE for SENSOR A can have two values, depending on which of the corresponding sensed values in the other path we compare the state to: the values are low for case 1 above and high for case 2 above. Because we are trying to determine why the SL-HYP-RATING is lower, we follow paths to any states that contain a lower relationship. In this case, both the state SENSED-VALUE of SENSOR A and the SENSED-VALUE of SENSOR B contain a lower relationship, so both are followed in parallel. We have two paths to follow now: investigating why the SENSED-VALUE for SENSOR A was low with respect to SENSED-VALUE for SENSOR B in the high ksi path investigation, and investigating why the SENSED-VALUE for SENSOR B was low, again with respect to SENSED-VALUE for SENSOR B in the high ksi path investigation. We first follow the path from state SENSED-VALUE for SENSOR A backwards. We reason that the sensor weight was low, the data signal was low, or both. We then find that the value for SENSOR-WEIGHT for SENSOR A is indeed low compared to the SENSOR-WEIGHT for SENSOR B. Because this state is a primitive state (no transition or cluster arcs connect it to any other part of the model), we can report this finding as one fault responsible for the low KSI rating that led to the original symptom. We have found one problem that explains the low KSI rating but the investigation is not complete. We still need to find the value for the state DATA-SIGNAL and follow the path of LOW SENSED-VALUE by SENSOR B. This latter path also leads to the state DATA-SIGNAL since it is one of the predecessor states of the state SENSED-VALUE for SENSOR B. Since there is no way of knowing what the actual data signal was, we must employ the unknown value derivation type of reasoning, where an unknown value is determined by examining the values of the neighboring states. This type of reasoning is necessary anytime the state value cannot be determined from the problem-solving system's data base. In this type of reasoning the ACTUAL-VALUE (or STATE-VALUE) attributes of the states represent the value that is derived by looking at the surrounding states.
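The next paragraph describes how this derived value is interpreted; as a minimal sketch of the derivation step itself, the value of an unknown state can be computed from the values its neighboring states imply, with disagreement flagged as INCONSISTENT. The accessor name and the symbolic values below (LOW, NONE) are illustrative assumptions.

;; Sketch of unknown value derivation: the value of a state that cannot
;; be read from the problem solver's data base is inferred from the
;; values implied by its surrounding states.
(defun derive-unknown-value (implied-values)
  "IMPLIED-VALUES holds the value implied by each neighboring state;
NIL entries mean a neighbor implies nothing about the unknown state."
  (let ((vs (remove nil implied-values)))
    (cond ((null vs) 'unknown)
          ((every (lambda (v) (equal v (first vs))) vs) (first vs))
          (t 'inconsistent))))

;; Sensor A implies a LOW data signal while sensor B implies NONE:
;; (derive-unknown-value '(low none))  =>  INCONSISTENT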
Depending on the types of values represented by those states, this value can be either the value the states agree on, or inconsistent if contradictory values can be determined from the surrounding states. The unknown state is DATA-SIGNAL. We attempt to derive the value for this state, which represents the actual value of the data signal in the environment, by examining the ACTUAL-VALUE slots in its surrounding states: SENSOR-WEIGHT for both sensors and SENSED-VALUE for both sensors. In fact we cannot find a consistent assignment for all these states. According to sensor A the value sensed is low; according to sensor B, no value is sensed at all. The value for the state DATA-SIGNAL is therefore INCONSISTENT. In a case where an inconsistency is discovered between two objects in the VMT system we have to use inconsistency resolving reasoning, in which we compare the two objects (in this case the two disagreeing sensors) with a model of the expected behavior of that object (a sensor) and try to determine which one is correct. In this case we compare the characteristics of each of the two sensors with the characteristics of an ideal sensor, which produces correlated data. We determine that data from sensor A is well correlated (all data fits into one track) whereas data from sensor B is only correlated for at most 2-location track segments. We therefore conclude that sensor B is faulty. We have now found both reasons for the initial symptom (unsatisfied goal): the faulty sensor B in conjunction with the low SENSOR-WEIGHT parameter for sensor A.

IV STATUS AND FUTURE RESEARCH

The basic model and the constraint propagation mechanisms have been implemented. We are currently extending the system to handle the comparative reasoning. Currently the system behavior model represents only the system behavior. It does not make an attempt to represent the reasons for the expected behavior in terms of the system architecture (e.g., a goal represents the intent to produce a hypothesis in the goal's area) or in terms of the assumptions about the domain (e.g., the characteristics of goals based on hypotheses that led to them). We believe that such deeper models of both the architecture and the domain would increase the PDD system's expertise by allowing it to detect more subtle errors (e.g., redundant satisfaction of goals) and to detect a wide range of faulty assumptions about the task domain. An example of the latter case is having a model of how the goal characteristics depend on the hypothesis characteristics, for example, the maximum acceleration of a vehicle and its turning radius. We also believe that such a deeper model of the problem-solving system could serve as a knowledge base that the system could use to automatically generate the complex criteria necessary for fault detection and the knowledge needed to implement the Strategy Replanning module. We feel that meta-level control based on the Fault Detection/Diagnosis paradigm represents a new approach to introducing more sophisticated control into a problem-solving system. In addition, the system can be of great help in debugging complex problem-solving systems. It also presents interesting issues in modeling and reasoning about a problem-solving system.

ACKNOWLEDGEMENTS

We would like to thank Daniel Corkill for his help in developing the ideas and implementation presented in this paper.

V REFERENCES

1. Stephen E. Cross. Qualitative sensitivity analysis: A new approach to expert system plan justification. Technical Report, AI Lab., Dept.
of Electrical Engineering, Air Force Institute of Technology, Wright-Patterson AFB, 1983.
2. Daniel D. Corkill, Victor R. Lesser, and Eva Hudlicka. Unifying data-directed and goal-directed control: An example and experiments. In Proceedings of the National Conference on Artificial Intelligence, pages 143-147, August 1982.
3. R. Davis. Meta-rules: Reasoning about control. Artificial Intelligence, 15:179-222, 1980.
4. R. Davis, H. Shrobe, W. Hamscher, K. Wieckert, M. Shirley, and S. Polit. Diagnosis based on descriptions of structure and function. In Proceedings of the National Conference on Artificial Intelligence, pages 137-142, August 1982.
5. M. Genesereth. Diagnosis using hierarchical design models. In Proceedings of the National Conference on Artificial Intelligence, pages 278-283, August 1982.
6. Michael R. Genesereth and D. E. Smith. Meta-Level Architecture. Heuristic Programming Project Memo HPP-81-6, Stanford University, December 1982.
7. Barbara Hayes-Roth and Frederick Hayes-Roth. A cognitive model of planning. Cognitive Science, 3(4):275-310, October-December 1981.
8. Frederick Hayes-Roth and Victor R. Lesser. Focus of attention in the Hearsay-II speech understanding system. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 27-35, August 1977.
9. Eva Hudlicka and Victor Lesser. Design of a knowledge-based fault detection and diagnosis system. In Proceedings of the 17th Hawaii International Conference on System Sciences, Vol. 1, pages 226-230, January 1984.
10. Eva Hudlicka and Victor Lesser. Diagnostic reasoning in fault detection and diagnosis for problem-solving systems. COINS Technical Report (in preparation).
11. Victor Lesser and Daniel D. Corkill. The Distributed Vehicle Monitoring Testbed: A tool for investigating distributed problem solving networks. AI Magazine, 4(3):15-33, Fall 1983.
12. B. Smith. Reflection and Semantics in a Procedural Language. Artificial Intelligence Laboratory Memo AI-m-272, MIT, January 1982.
13. Robert Wilensky. Meta-planning: Representing and using knowledge about planning in problem solving and natural language understanding. Cognitive Science, 5(3):197-233, July-September 1981.
1984
9
360
TEACHING A COMPLEX INDUSTRIAL PROCESS

Beverly Woolf, Computer and Information Science, University of Massachusetts, Amherst, Massachusetts, 01003
Darrell Blegen, Johan H. Jansen, Arie Verloop, J. H. Jansen Co., Inc., Steam and Power Engineers, 18016 140 Ave. NE, Woodinville (Seattle), WA 98072

ABSTRACT

Computer training for industry is often not capable of providing advice custom-tailored for a specific student and a specific learning situation. In this paper we describe an intelligent computer-aided system that provides multiple explanations and tutoring facilities tempered to the individual student in an industrial setting. The tutor is based on a mathematically accurate formulation of the kraft recovery boiler and provides an interactive simulation complete with help, hints, explanations, and tutoring. The approach is extensible to a wide variety of engineering and industrial problems in which the goal is to train an operator to control a complex system and to solve difficult "real time" emergencies.

1. Tutoring Complex Processes

Learning how to control a complex industrial process takes years of practice and training; an operator must comprehend the physical and mathematical formulation of the process and must be skilled in handling a number of unforeseen operating problems and emergencies. Even experienced operators need continuous training. A potentially significant way to train both experienced and student operators for such work is through a "reactive computer environment" [Brown et al., 1982] that simulates the process and allows the learner to propose hypothetical solutions that can be evaluated in "real time".

[This work was supported by The American Paper Institute, Inc., a non-profit trade institution for the pulp, paper, and paperboard industry in the United States, Energy and Materials Department, 260 Madison Ave., New York, NY, 10016. Preparation of this paper was supported by the Air Force Systems Command, Rome Air Development Center, Griffiss AFB, New York, 13441 and the Air Force Office of Scientific Research, Bolling AFB, DC 20332 under contract No.]

However, a simulation without a tutoring component will not test whether a student has actually improved in his ability to handle the situation. In addition, a simulation alone might not provide the conceptual fidelity [Hollan, 1984] necessary for an operator to learn how to use the concepts and trends of the process or how to reason about the simulation. For instance, evaluating the rate of change of process variables and comparing their relative values over time is an important pedagogical skill supporting expert reasoning; yet rate of change is a difficult concept to represent solely with the gauges in a traditional simulation. We have built a Recovery Boiler Tutor, RBT, that provides tools for developing abstract models of a complex process. The system does not actually represent the mental models that a learner might develop; rather, it provides tools for reasoning about that complex process. These tools include graphs to demonstrate the relationship of process parameters over time; meters to measure safety, emissions, efficiency, and reliability; and interactive dialogues to tutor the operator about the on-going process. The system renders a mathematically and physically accurate simulation of a kraft boiler and interacts with the student about those concepts needed for his exploration of the boiler.
Our goal has been to couple the motivational appeal of an interactive simulation with the tutoring and modeling ability of an artificial intelligence system to direct the student in his experimentation. The tutor was built in direct response to a serious industrial situation. Many industrial accidents, caused in part by human errors, have led to dangerous and costly explosions of recovery boilers in pulp and paper mills. The American Paper Institute built the interactive tutor to provide on-site training in the control room of recovery boilers. The tutor is now being beta tested in pulp and paper mills across the United States and is being prepared for nationwide distribution.

[Figure 1: Sectional view of the recovery boiler as it appears on the RBT screen, with the composite meters (safety, emissions, reliability) and the feedwater and fuel lines.]

2. The Recovery Boiler Tutor

A recovery boiler is an extremely complex machine found in hundreds of pulp and paper mills around the world. Figure 1, from the RBT screen, pictures a typical boiler and some of the meters used by an operator to control the process. The actual boiler can cost up to $70 million and can be up to 20 stories high. Its function is to process the spent black liquor that can be produced in a pulp mill, up to 1500 tons/day, and to allow recovery and reuse of the pulping chemicals. It is a process in which hot and molten inorganic chemicals may lead to smelt/water and combustible gas explosions, endangering life and property. The goal behind the tutor is to challenge an operator to solve boiler problems and to maintain his incentive to "play" with the tutor. The operator is encouraged to move setpoint controllers and to watch the results of his actions on the animated boiler or control board. The tutor monitors the operator's freewheeling trials and errors and provides help, hints, explanations, and tutoring when needed or requested. An example interaction between the student and tutor is shown in Figure 2. [The dialogue of Figure 2 was not actually produced in natural language; student input was handled through menus (Figure 3) and tutor output was produced by cutting text from emergency-specific text files loaded when the emergency was invoked.] As the operator changes setpoint controllers and requests information about the boiler, the tutor selectively discusses the optimality of his actions (we show how in Section 3) and suggests how he might better focus his action or better utilize his data. An important feature to note about this dialogue is that at any point during the

[Note: A partial blackout is in progress and it will lead to deterioration of the combustion process if no action is taken. Meters indicate that CO, TRS, and O2 readings are abnormally high.]
Tutor: What action are you going to take?
Student: Check smelt spout cooling water.
[Student's action does not result in a change in the boiler.]
Tutor: It looks as though the liquor isn't burning . . .
Student: Help. What is going on? [Student requests help.]
Tutor: A partial blackout is occurring and this causes TRS, CO, SO2, and O2 to increase. Steam flow and steam temperature are dropping. I think part of the bed is not burning.
Student: Rod the primary air ports.
[A small improvement in the boiler readings can be observed.]
Tutor: Your action is correct; however, at this time rodding alone is not effective . . .
Student: Raise the liquor temperature.
[Student's action does not reverse the emergency and a new alarm buzzer sounds.]
Tutor: Rodding the air port apparently was not enough to undo the blackout.
Student: Increase the oil fuel supply.
[Student's action reverses the emergency.]
Tutor: That solved the problem alright. Good thinking. Analysis of the problem: You had a partial blackout caused by plugged primary air ports and a cold bed. Partial blackout can be effectively treated through a combination of rodding the primary air ports and adding more heat. The problem can be avoided by keeping the air ports clean.

Figure 2: Dialogue between tutor and operator.

[Figure 4: Focused view of the fire bed, with the safety, emissions, efficiency, and reliability meters.]

simulated emergency there are a large number of actions an operator might take and, as the problem worsens, an increasing number of actions that he should take to correct the operating conditions. Thus, an immediate and correct response might require only one action, such as rodding the primary air ports, but a delayed response causes the situation to worsen and requires the addition of auxiliary fuel. The operator interacts with the tutor through a hierarchy of menus, one of which is shown in Figure 3. This menu allows an operator to select a physical activity to be performed on the boiler, such as to check for a tube leak or to rod the smelt spout. Another menu allows the operator to select a particular computer screen, such as the alarm board or control panel board.

[Figure 3: Menu to select a physical task to perform on the boiler. The menu, headed "What Are You Going to Do", lists: Determine source of dilution; Check instrumentation; Check dissolving tank agitators; Rod smelt spout; Use portable auxiliary burner; Remove liquor guns; Put in liquor guns; Clean liquor guns; Rod primary air ports; Rod secondary air ports; Check smelt spout cooling water; Start standby feedwater pumps; Restore water flow to deaerator; Quit.]

While the simulation is running, the operator can view the boiler from many directions and can focus in on several components, such as the fire bed in Figure 4. The tutor provides assistance through visual clues, such as a darkened smelt bed; acoustic clues, such as ringing alarm buzzers; and textual help, explanations, and dialogues, such as that illustrated in Figure 2. The operator can request up to 30 process parameters on the complete panel board (Figure 5) or can view an alarm board (not shown). The tutor allows the student to change 20 setpoints and to ask menued questions such as "What is the problem?", "How do I get out of it?", "What caused it?", and "What can I do to prevent it?" The operator can request meter readings, physical and chemical reports, and dynamic trends of variables. All variables are updated in real time (every 1 or 2 seconds). The student can initiate any of 20 training situations, emergencies, or operating conditions or ask that one be chosen for him. He might also trigger an emergency as a result of his actions on the boiler. Once an emergency has been initiated, the student should adjust meters and perform actions on the simulated boiler to solve the emergency. In addition to providing information about the explicit variables in the boiler, RBT provides information about implicit processes through reasoning tools, with which an operator can understand and reason about the complex processes. One such tool is composite meters (left side of Figures 1 and 5).
These meters record the state of the boiler using synthetic measures for safety, emissions, efficiency, and reliability of the boiler. The meter readings are calculated from complex mathematical formulae that would rarely, if ever, be used by an operator to evaluate the same characteristics of their boiler. [The four questions above are answered by cutting text from a file which was loaded with the specific emergency. These questions do not provide the basis of the tutor's knowledge representation, which will be discussed in Section 3.2.]

[Figure 5: The complete control panel, showing the safety, emissions, efficiency, and reliability meters together with steam, feedwater, flue gas, dissolving tank, liquor, and make-up readings.]

For instance, the safety meter is a composition of seven independent parameters, including steam pressure, steam flow, steam temperature, feedwater flow, drum water level, firing liquor solids, and combustibles in the flue gas. Meter readings allow a student to make inferences about the effect of his actions on the boiler using characteristics of the running boiler. These meters are not presently available on existing pulp and paper mill control panels; however, if they prove effective as training aids, they could be incorporated into actual control panels. Other reasoning tools include trend analyses (Figure 6) and animated graphics, such as shown in Figures 1 and 4. Trend analyses show an operator how essential process variables interact in real time by allowing him to select up to 10 variables, including liquor flow, oil flow, and air flow, etc., and to plot each against the others and time. Animated graphics, another reasoning tool, are provided as a part of every view of the boiler. These animations include moving realistic drawings of components of the boiler, such as steam, fire, smoke, black liquor, and fuel.

3. Knowledge Representation

Multiple concepts and processes were represented in RBT, some procedurally, some declaratively, and some in both ways. For example, emergencies in the steam boiler were first represented as a set of mathematical formulae so that process parameters and meter values could be produced accurately in the simulation. Then these same emergencies were encoded within the tutor's knowledge base as a frame-like data structure with slots for preconditions, optimal actions, and conditions for solution satisfaction so that the tutor could evaluate and comment upon the student's solution. RBT can recognize and explain: equipment and process flows; emergencies and operating problems as well as normal conditions; solutions to emergencies and operating problems; processes for implementing solutions; and tutoring strategies for assisting the student. Four modules were used to represent this knowledge: simulation, knowledge base, student model, and instructional strategies. The simulation uses a mathematical foundation to depict processes in a boiler through meter readings and four animated views of the boiler. It reacts to more than 35 process parameters and generates dynamically accurate reports of the thermal, chemical, and environmental performance of the boiler (not shown) upon request. An alarm board (not shown) represents 25 variables whose button will turn red and an alarm sounded when an abnormal condition exists for that parameter. The simulation is interactive and inspectable in that it displays a "real time" model of its process, yet allows the student to "stop" the process at any time to engage in activities needed to develop his mental models [Hollan et al., 1984]. The operators who tested RBT mentioned that they like being able
The simulation is interactive and inspectable in that it displays a “real time” model of its process, yet allows the student to “stop” the process at anytime to engage in activities needed to develop his mental models mollan et al., 19&F]. The operators who tested RBT mentioned that they like being able AI AND EDUCATION I 725 to stop the process to ask questions or to explore boiler characteristics. One advantage of a formal representation of the process is the availability of a “database” of possible worlds into which information based on typical or previous moves cm be fed into the simulation at anytime prawn et al., 19821 and a solution found. In this way, a student’s hypothetical cases ~811 be proposed, verified, and integrated into his mental model of the boiler. The knowledge base contains preconditions, postconditions, and solutions for emergencies or operating conditions, described as Scenarios. Scenarios are represented in frame-like text files containing preconditions, postconditions, and acceptable solutions for each scenario. For example, in Lisp notation, a true blackout would be described as: preconditions: (or (c= blackout-factor 1) (C heatinput soo0)) postconditions: (or (increasing 02) (decreasing steamflow) (increasing TR!3) (increasing CO) (increasing 2502)) solutiollsatisfaction: (and (= blackout-factor 1) (> heatinput !5200)) qn;eiattrins details about the steam and chemical parameters i.n RBT and the boiler simulation capabilities can be found in fJanscn et al.. m&l. The efficiency of the student’s action is evaluated both through the type of action performed, such as $creasina oa or jncreasina steamflow for a true blackout, and the effect of that action on the boiler. Thus, if an inappropriate action nevertheless resulted in a safe boiler, the student would be told that his action worked, but that it was not optimal. The student tnudef records actions carried out by the student in solving the emergency or operating problem. It recognixes correct as well as incorrect actions and identifies each as relevant, relevant but not optimal, or irrelevant. The instnrcrionuf sfru2egies contain decision logic and rules to guide the tutor’s intervention of the operator’s actions. In RBT, the intent has been to “subordinate teaching to learning” and to allow the student to experiment while developing his own criteria about boiler emergencies. The tutor guides the student, but does not provide a solution as long as the student’s perfprmance appears to be moving closer to a precise goal. Represented as if/then rules based on a specific emergency and a specific student action, the instructional rules are designed to verify that the student has ‘asked” the right questions and has made the correct inferences about the Saliency of his data. Respmses are divided into three categories: 726 / ENGINEERING Redirect student: “Have you considered the rate of increase of 021” “If what you suggest is true, then how would you explain the low emissions reading?” Synthealze data: “Both 02 and TRS have abnormal trends.” “Did you notice the relation between steam flow and liquor flow?” Chfinu action: “Yes, It looks like rodding the ports worked this time”. The instructional strategies are designed to encourage an operator’s generation of hypotheses. 
Evidence from other problem solving domains, such as medicine [Barrows and Tamblyn, 1980], suggests that students generate multiple (usually 3-5) hypotheses rapidly and make correct diagnoses with only 2/3 of the available data. The RBT tutor was designed to be a partner and co-solver of problems with the operator, who is encouraged to recognize the effect (or lack of same) of his hypotheses and to experiment with multiple explanations of an emergency. No penalty is exacted for slow response or for long periods of trial and error problem solving. This approach is distinct from that of Anderson et al. [1985] and Reiser et al. [1985], whose geometry and Lisp tutors immediately acknowledge incorrect student answers and provide hints. These authors argue that erroneous solution paths in geometry and Lisp are often so ambiguous and delayed that they might not be recognized for a long time, if at all, and then the source of the original error might be forgotten. Therefore, immediate computer tutor feedback is needed to avoid fruitless effort. However, in industrial training, the trainee must learn to evaluate his own performance from its effect on the industrial process. He should trust the process itself to provide the feedback, as much as is possible. In RBT we provide this feedback through animated simulations, trend analyses, and "real-time" dynamically updated meters. The textual dialogue from the tutor provides added assurance that the operator has extracted as much information as possible from the data and it establishes a mechanism to redirect him if he has not.

4. Developmental Issues

RBT was developed on an IBM PC AT (512 KB RAM) with enhanced graphics and a 20 MB hard disk. It uses a math co-processor, two display screens (one color), and a two-key mouse. The simulation was implemented in Fortran and took 321 KB; the tutor was implemented in C and took 100 KB. Although we tried to implement the tutor in Lisp, we found extensive interfacing and memory problems, including segment size restrictions (64K), incompatibility with the existing Fortran simulator, and addressable RAM restrictions (640K). To circumvent these problems the tutor was developed in C with many Lisp features implemented in C, such as functional calls within the parameters of C functions. Meter readings and student actions were transferred from the simulation, in Fortran, to the tutor, in C, through vectors passed between the two programs.

5. Evaluations

The tutor has been well-received thus far. It is presently used in actual training in the control rooms of several pulp and paper mills throughout the US. Formal evaluation will be available soon. However, informal evaluation suggests that working operators enjoy the simulation and handle it with extreme care. They change parameters slowly, with great intention, and use small intervals in adjusting meters. They behave as they might at the actual control panel of the pulp mill; they check each action and examine several meter readings before moving on to the next action. Both experienced and novice operators engage in lively use of the system after about a half hour introduction. When several operators interact with the tutor, they sometimes trade "war stories" advising each other about rarely seen situations. In this way, experienced operators frequently become partners with novice operators as they work together to simulate and solve unusual problems.
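As a concrete complement to the scenario frames shown earlier in Lisp notation, the fragment below sketches one way postcondition predicates such as (increasing O2) could be checked against a short history of meter readings. The history representation, the sample values, and the two-point trend test are assumptions made for illustration; they are not the Fortran/C implementation described above.

;; Illustrative check of scenario postconditions against recent meter
;; readings.  The alist-of-histories representation is an assumption.
(defvar *meter-history*
  '((o2 . (3.1 3.6 4.2)) (steamflow . (760 741 722)) (co . (40 55 90))))

(defun readings (meter)
  (cdr (assoc meter *meter-history*)))

(defun increasing (meter)
  (let ((r (readings meter)))
    (and (rest r) (> (car (last r)) (first r)))))

(defun decreasing (meter)
  (let ((r (readings meter)))
    (and (rest r) (< (car (last r)) (first r)))))

;; A subset of the postconditions of the true-blackout scenario:
(defun blackout-postconditions-p ()
  (or (increasing 'o2) (decreasing 'steamflow) (increasing 'co)))

With the sample history above, (blackout-postconditions-p) returns true, which is the sort of evidence a scenario frame would use to confirm that the emergency is still in progress.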
[Medical students have been found to ask 60% of their questions while searching for new data and obtain 75% of their significant information within the first 10 minutes after a problem is stated [Barrows and Tamblyn, 1980].]
1986
1
361
INTEGRATION OF MULTIPLE KNOWLEDGE SOURCES IN ALADIN, AN ALLOY DESIGN SYSTEM M. D. Rychener,’ M. L. Farinacci,2 I. Hulthage,’ and M. S. Fox1 1 Intelligent Systems Laboratory, Robotics Institute, Carnegie-Mellon University, Pittsburgh, PA 15213 2ALCOA Laboratories, ALCOA Center, PA 15069 ABSTRACT ALADIN’ is a knowledge-based system that aids metallurgists in the design of new aluminum alloys. Alloy design is characterized by creativity, intuition and conceptual reasoning. The application of artificial intelligence to this domain poses a number of challenges, including: how to focus the search, how to deal with subproblem interactions, how to integrate multiple, incomplete design models and how to represent complex, metallurgical structure knowledge. In this paper, our approach to dealing with these problems is described. 1. INTRODUCTION ALADIN (ALuminum Alloy Design INventor) is a knowledge- based system that aids metallurgists in the design of new aluminum alloys. The alloy design task produces a material composition and thermal-mechanical processing (TMP) plan whose resulting alloy satisfies a set of criteria, e.g., Ultimate Tensile Strength. The system can be operated in several modes. As a decision support system, it accepts alloy property targets as input and suggests alloying additives, processing methods or microstructural features to meet the targets. As a design assistant, it can evaluate designs supplied by a metallurgist, or provide information that is useful for design from a knowledge bank. As a knowledge bank, it provides information to supplement the usual sources such as books, journals, databases and specialized consultants. Alloy design in an industrial setting involves teams of experts, each of whom is a highly-trained specialist in a different technical area. The primary application objective of ALADIN is to systematize and preserve the expertise of such teams, as an expert system. There is some hope that by fusing together multiple sources of knowledge from different experts, a system will be developed that exceeds the capabilities of individual experts. At the same time the expertise can be applied more widely to design problems. We also hope with such a system to shorten the design cycle, which is often on the order of five years, from specification of properties until commercial production begins. Alloy design raises a number of issues as an Al problem. First, the search space is combinatorially complex due to the number and amount of elements that may participate in the composition and the number of alternative processing plans**. The knowledge available to guide the search is primarily heuristic, gained over many years of experimentation, coupled with some metallurgical models. *This research has been supported by the Aluminum Company of America. “The MOLGEN system [14] focused primarily on process planning. As a result, there exist multiple partial models of alloys which relate: 0 composition to alloy properties, l thermal-mechanical processing to alloy properties, and 0 micro-structure to alloy properties. This raises two questions for Al: what is the appropriate architecture for the explicit representation and utilization of multiple, parallel theories, and how is search to be focused in this architecture? A second issue is the degree to which design decisions are dependent. Each change in composition or process alters a number of properties. 
This level of dependence results in a level of interaction among sub-problems which exceeds that experienced in the planning literature, and is not amenable to simple constraint propagation techniques due to the size and complexity of the search space. Issue three is the result of issues one and two. The complexity of the search places a tremendous burden on how to focus attention in complex solution spaces. Lastly, issue four is concerned with representation. Knowledge of the relationship between alloy structure and its resultant properties is at best semi-formal. Much of it is composed of diagrams of 3D structure and a natural language description. Quantitative models rarely exist. The problem lies in representing spatial information in which structural variations are significant. The rest of this paper describes the alloy design problem in more detail. This is followed by a description of the ALADIN problem-solving architecture. Then there is a discussion of knowledge representation, multi-model reasoning, and focus of attention.

2. ALLOY DESIGN REASONING

An alloy design problem begins with the specification of constraints on the physical properties of the material to be created. The objective of the designer is to identify element additions with percent levels and processing methods that will result in an alloy with the desired characteristics. The line of reasoning that designers use is similar to the generate-and-test model. The designer selects a known material that has properties similar to the design targets or other interesting features. The designer then alters the properties of the known material by making changes to the composition and processing methods. The effects of these changes on the various physical properties are estimated, and discrepancies are identified to be corrected in a later iteration. In order to select fabrication variables that improve the properties, the designer may consider known cause and effect relations, such as:

- IF Mg is added THEN the strength will increase
- IF the aging temperature is increased beyond the peak level THEN the strength will decrease

Often, however, these relationships are not available or cannot be generalized sufficiently. In that case, the designer may construct a model of the microstructure that will produce the required properties. The microstructure can be defined to be the configuration in three-dimensional space of all types of non-equilibrium defects in an idealized phase. These defects include voids, cracks, particles and irregularities in the atomic planes. They are visible when the material is magnified several hundred times with a microscope. The geometric, mechanical and chemical properties of the microstructural elements, as well as their spatial distributions and interrelationships, have a major influence on the macroscopic properties of the material. The microstructure is often described in abstract, conceptual terms and is rarely characterized numerically. However, these concepts provide a powerful guide for the search process since they constrain composition and processing decisions. For example, if meta-stable precipitates are required, then the percentage of additives must be constrained below the solubility limit, certain heat treatment processes must be applied, and aging times and temperatures must be constrained within certain numerical ranges.
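To make that last point concrete, a microstructure requirement of this kind can be read as a small set of constraints posted against composition and process variables. The Common Lisp sketch below is only illustrative; the feature name, the slot layout, and the numeric aging range are assumptions, not ALADIN's actual schemata.

;; Illustrative mapping from a required microstructural feature to
;; constraints on composition and process decisions.  Slot names and
;; the numeric range are assumptions for the example.
(defun constraints-for-feature (feature)
  (case feature
    (meta-stable-precipitates
     '((composition additive-percent (:below solubility-limit))
       (process     required-step     solution-heat-treatment)
       (process     aging-temperature (:range 100 200))))
    (otherwise '())))

;; (constraints-for-feature 'meta-stable-precipitates)
;; => ((COMPOSITION ADDITIVE-PERCENT (:BELOW SOLUBILITY-LIMIT)) ...)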
While the human design approach can generally be characterized with the generate-and-test model, a more detailed study of metallurgical reasoning reveals complexities and deviations from the idealized artificial intelligence-based models. To some extent, knowledge is applied in an opportunistic fashion. When relationships or procedures are identified that can make some progress in solving the problem, then they may be applied. However, there are many regularities to the search process, Furthermore, the strategies that designers use to select classes of knowledge to be applied varies among individuals. For example, in the selection of the baseline alloy to begin the search, some designers like to work with commercial alloys and others prefer experimental alloys produced in a very controlled environment. Still others like to begin with a commercially pure material and design from basic principles. When searching for alternatives to meet target properties, some designers construct a complete model of the microstructure that will meet all properties and then they identify composition and processing options. Many designers prefer to think about one property at a time, identifying a partial structure characterization and implementation plan that will meet one property before moving to the next. Still other designers prefer to avoid microstructure reasoning whenever possible by using direct relationships between decision variables and design targets. All designers occasionally check their partial plans by estimating the primary and secondary effects of fabrication decisions on structure and properties. However, the frequency of this activity and the level of sophistication of the estimation models varies among designers. 3. ALADIN ARCHITECTURE ALADIN is a multi-spatial reasoning architecture akin to a blackboard model [3,6] . It is composed of five spaces: 1. Property Space: The multi-dimensional space of all alloy properties. 2. Structure Space: The space of all alloy microstructures 3. Composition Space: This is an space where each dimension represents a different alloying element (e.g., Cu, Mg). 4. Process Space: The space of all thermo-mechanical alloy manufacturing processes. 5. Meta Space: This is the focus of attention planning space which directs all processing. The meta space holds knowledge about the design process and control strategies. Planning and search takes place in this space in that goals and goal trees are built for subsequent execution. Each space can be viewed as a separate blackboard system with its own search space, hypotheses and abstraction levels. Activity is generated on different planes and levels in a way similar to Stefik’s MOLGEN system [14]. ALADIN’s planes are: Meta or strategic plane, which plans for the design process itself, establishing sequencing, priorities, etc.; Structure planning plane, which formulates targets at the phase and microstructure level, in order to realize the desired macro-properties; Implementation plane, encompassing chemical composition and thermal and mechanical processing subplanes. We treat the alloy design problem as a planning problem because the final alloy design is a sequence of steps to be taken in a production plant in order to produce the alloy. The design plan is only partly ordered since the time ordering of some steps is unimportant. The planning process in ALADIN utilizes the existence of the microstructure model. 
The alloy design therefore typically starts in the structure space with decisions on microstructural features that imply desirable properties. These decisions are thereafter implemented in composition and process space. Overall, the search is organized according to three principles that have proven successful in past AI systems:

- Meta planning;
- Least commitment: meaning that values within hypotheses are expressed as ranges of values that are kept as broad as possible;
- Multiple levels: under which plans are developed first at an abstract level, and then gradually made more precise.

[Figure 3-1: Spaces of Domain Knowledge (Property, Structure, Composition, Process, and Meta spaces).]

The general trend of execution is to start generating a plan in the Meta plane, and to complete the alloy design within the processing plane. However, it will always be necessary to jump back and forth between spaces and levels and to backtrack. The qualitative and quantitative levels of the Structure, Composition and Processing spaces are activated as appropriate, to generate hypotheses that specify design variables in their own range of expertise. Hypotheses generated on other planes and levels constrain and guide the search for new hypotheses in many ways. An existing qualitative hypothesis obviously suggests the generation of a quantitative hypothesis.
Certain microstructure elements can be produced by compositional additives, while others are produced by specific processes with the composition restricting the choices available. The final product of the design process is a plan in the composition and process spaces. More details on the ALADIN architecture are available in [ll]. 4. KNOWLEDGE REPRESENTATION ALADIN utilizes three forms of knowledge representations: 1. Declarative knowledge base of alloys, properties, products, processes, and metallurgical structure concepts; 2. Production Rules in the form of IF-THEN rules of many types: control of search among competing hypotheses, empirical associations of causes and effects, rankings and preference orderings, processing of user commands, decisions about when to call upon knowledge in other forms, and others; 3. Algorithms knowledge expressed as functions: detailed physical, chemical, thermo-mechanical, statistical, etc. calculations. 4.1. Declarative Knowledge The ALADIN system contains representations for metallurgical charts, alloys, physical properties, compositions and processing methods [8]. Each of these classes of knowledge admit a relatively simple representation using well known ideas about schemata (frames) and inheritance. The representation of microstructure presents some interesting problems and is discussed in more detail here. Microstructure is the configuration in three-dimensional space of all types of non-equilibrium defects [7] in an ideal phase. Metallurgical research has shown that many microstructural features have important consequences for macroscopic properties. The objective of the microstructure representation in ALADIN is to classify and quantify the microstructure of alloys in order to facilitate the formulation of rules that relate the microstructure to the macroscopic properties of alloys. Although much of the heuristic knowledge about alloy design involves the microstructure, it is usually poorly represented. Metallurgists have attempted to describe microstructural features systematically [7] and there is also a field called quantitative metallography that describes quantitative information about the three-dimensional microstructure of alloys [15]. In practice, neither of these approaches is commonly used. Instead, metallurgists rely on visual inspection of micrographs, which are pictures of metal surfaces taken through a microscope. Information is communicated with these pictures and through a verbal explanation of their essential features. In order to represent microstructure data and rules it was necessary to develop a symbolic representation of alloy microstructure. The two main features of an alloy microstructure are the grains and the grain boundaries, and are described by an enumeration of the types of grains and grain boundaries present. Each of these microstructural elements are in turn described by any available information such as size, distribution, etc., and by its relations to other microstructural elements such as precipitates, dislocations, etc. This representation allows important facts to be expressed even if quantitative data is unavailable, an important example being the presence of precipitates on the grain boundaries. It is interesting to note that most of the expert reasoning about microstructure deals with qualitative facts, with quantitative information typically not available. 4.2. Procedural Knowledge Most of the procedural knowledge is encoded in OPS5 production rules [5, l] in well-known ways. 
However, some procedures are best represented as algorithms, for whose coding we have chosen Common Lisp [13]. Especially important is the fact that alloy design requires the simultaneous use of both qualitative or symbolic reasoning and the application of suitable mathematical models. The subject of coupling symbolic and numeric methods is of general interest. Accordingly, Kitzmiller and Kowalik [lo] point out that in order to solve many problems in business, science and engineering, both insight and precision are needed. ALADIN currently contains the following types of mathematical routines: l Regressions, in order to interpolate and extrapolate from known alloy properties to those of new alloys; l Models of structure-insensitive properties, such as density; l Solutions of systems of multi-dimensional constraints; l Retrieval of constraints from phase diagrams. ALADIN couples qualitative and quantitative reasoning in several ways. The design is made at two levels, first on a qualitative and second on a quantitative level. Examples of design decisions that are made first are what alloying elements to add and whether the alloy should be artificially aged or not. These decisions are followed on the quantitative level by a determination of how much of each alloying element should be added and at what temperature aging should take place. The ALADIN system attempts to couple symbolic and numeric computation deeply by not treating algorithms as black boxes. A calculation is typically broken down into calculations of the various quantities involved, and the exact course of a computation is determined dynamically at the time of execution through the selection of methods to determine all the quantities needed to obtain the final result. These selections are based on heuristic knowledge that estimates the relative advantage and accuracy of the choices and by the availability of data [9]. 5. MULTIPLE DESIGN MODELS It is a feature of the alloy design domain that several partly independent models of alloys are used. The simplest model of alloys deals only with the relationship between chemical composition and alloy properties. From the point of view of modern metallurgy only a few structure-independent properties like density and modulus can be described in this way. However, empirical knowledge does exist 880 / ENGINEERING about other properties, eg. Beryllium causes embrittlement in Aluminum. Quantitative comparisons can also be made between alloys of varying composition, everything else being equal. This yields some useful quantitative knowledge about properties through regression. A more complete model includes the relationship between thermo-mechanical processes and properties. Since only composition and process descriptions are needed to manufacture an alloy, it could be assumed that no other models are needed to design alloys. As a matter of fact, historically many alloys have been designed with composition and process models only. The progress of research in metallurgy is giving new insights in the relationship between the microstructure of alloys and their physical properties. The deepest understanding of alloy design therefore involves models of microstructure effects on properties and models of composition and processing effects on microstructure. The microstructure decisions serve as an abstract plan that cuts down the number of alternatives in the composition and process spaces. 
In this way the role of the microstructure has both similarities and differences with abstract planning as described by Sacerdoti [12]. The main differences are: l Microstructure concepts are distinct from composition and process concepts, not merely a less detailed description. l The microstructure plan is not a part of the final design in the sense that an alloy can be manufactured with composition and process information only. l The microstructure domain is predefined by metallurgical expertise, not defined during implementation or execution of the ALADIN system. These differences introduce a number of differences from a MOLGEN-like system: . Instead of one hierarchy of plans there are three; Structure, Composition and Process, each of which has abstraction levels. l Since structure decisions don’t necessarily always have the highest criticality (as defined by [12]), opportunistic search is important. l The effect of abstract hypotheses is more complex because decisions in the structure space cut the search by constraining the choice of both composition and process hypotheses. The existence of more than one level in each space also introduces new types of interactions. Ideally, the models taking microstructure into account should be sufficient for all design decisions, but in reality they are incomplete. As a result, empirical models that relate composition and processes directly to properties have to be used. Utilizing several design models introduces another important deviation from standard abstract hierarchical planning: One or more levels of the Structure space can be bypassed during hypothesis generation or property evaluation. It is the combined use of the five design models plus a set of global control strategies for dealing with multiple models that enables ALADIN to design an alloy. 6. DESIGN STRATEGY PLANNING AND FOCUS OF ATTENTION ALADIN has a model of alloy design strategies that is encoded in OPS5 rules and associated with the meta space. This space is used to guide and control the search for solutions. The strength of this strategic model comes from the partition of the detailed metallurgical knowledge into knowledge sources. Facts, rules and procedures are each associated with a knowledge source that is characterized by a context, a goal, a space and a level. Rules and procedures can be applied only if the corresponding goal and context is active. The design strategy model guides the search by building goals. Several types of information are included: l The status of the search, l The history of the solution process, l Constraints on strategic alternatives, and l The effectiveness of various strategic alternatives. The status of the search is characterized by the constraints, hypotheses and estimates that have been created and indicate what problems remain to be dealt with. These schemata have the following definitions: l Hypothesis. Partial description and commitment regarding the alloy that is designed to meet the targets. l Constraint. The design target, and therefore a condition to be met and a criterion for selecting hypotheses. l Estimate. Prediction of the effect of fabricating an alloy according to the components of the current hypothesis; the effects will show up as characteristic properties and microstructure. The history of the solution process is retained in the goals. ALADIN has a rather elaborate set of rules for managing goals in a general way. 
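A rough sketch of the flavor of that bookkeeping, with goal records whose statuses are combined through logical AND/OR relations among subgoals as detailed in the next paragraphs, is the following; the slot names and status symbols are simplified stand-ins, and the sequential nextgoal relation is left out:

```lisp
;; A hedged sketch of goal records and status propagation; the slots and
;; status symbols are simplified stand-ins, and the sequential "nextgoal"
;; relation is omitted.
(defstruct goal
  name
  (status :pending)   ; e.g. :pending, :succeeded, :failed
  (relation :and)     ; how subgoal outcomes combine: :and or :or
  subgoals)           ; list of child GOAL structures

(defun propagate-status (g)
  "Recompute G's status from its subgoals; leaf goals keep their own."
  (let ((results (mapcar #'propagate-status (goal-subgoals g))))
    (when results
      (setf (goal-status g)
            (ecase (goal-relation g)
              (:and (cond ((some  (lambda (s) (eq s :failed)) results) :failed)
                          ((every (lambda (s) (eq s :succeeded)) results) :succeeded)
                          (t :pending)))
              (:or  (cond ((some  (lambda (s) (eq s :succeeded)) results) :succeeded)
                          ((every (lambda (s) (eq s :failed)) results) :failed)
                          (t :pending))))))
    (goal-status g)))
```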
Each goal has a status, a symbol from a fixed set of possibilities, which are in turn understood and managed by general goal rules. The outcome of work on a goal is propagated according to its final status and the logical (e.g., AND and OR) and sequential relations (nextgoal) that the goal has to other goals. Constraints on control alternatives are easily represented in rule form. Some examples are: l If numerical decisions regarding composition and process have not yet been made, then quantitative evaluation models can not be applied; l If decisions have not yet been made regarding what processing steps to use, then it makes no sense to reason about temperatures and rates. Finally, the system has a notion of what strategies will have the greatest impact on the search, based on heuristic knowledge obtained from the metallurgists. Rules include: l If it is possible to reason about microstructure, composition or process, then microstructure reasoning is preferred; l If many fabrication alternatives have been identified to meet a single target, then use simple heuristics to evaluate each and prune the search. Due to the complex interdependence of design decisions on an alloy’s final properties, simple concepts of goal protection are inappropriate. Instead a combination of least commitment and over-compensatory planning is utilized. This means to over- compensate when achieving a goal. In particular, if a certain tensile strength is required, the planning system sets even higher goals to achieve at this point in the search knowing that later decisions may result in a reduction of this property. This approach works because the property goals are values on a continuum. APPLICATIONS / 881 ALADIN begins in the meta space and frequently returns there for new direction. When the meta space is activated, strategy rules identify activities that are reasonable and create top level goals in memory, with context, space, and level information. Often, several alternative strategies are possible at any point in the search, and the user is offered a menu of possibilities. The system recommends the strategy that is felt to be most effective. After the user makes a selection, the meta rules expand the goals by creating more detailed subgoals. These goal trees constitute a plan for how to accomplish the requested activities. Control then returns to the domain spaces, which process the goals. Control remains in the domain spaces until the success or failure of each goal is determined. At that point, control returns again to the meta space. Iteration between meta and domain space continues until the ALADIN problem solving process is complete. With the meta space, numerous design strategies, obtained from different people, are integrated into a single system. As a result, ALADIN can develop several solutions to a single problem by applying different approaches. The flexible user control allows the metallurgist to experiment with different strategies. The designer may, in fact, explore solutions arising out of the application of hybrid strategies that are not usually integrated into a single problem. 7. SYSTEM PERFORMANCE AND RESULTS ALADIN runs on a Symbolics Lisp Machine within the Knowledge Craft [2] environment at a speed that is comfortable for interaction with expert alloy designers. A typical design run takes about an hour, and involves considerable interaction with the user, whose choices influence the quality of the outcome. 
Its development is at the mature, advanced-prototype stage, where it can begin to assist in the design process. We must point out, though, that its knowledge is presently focused on narrow areas of alloy design, with expertise on only three additives, two microstructural aspects and five design properties. We are dealing in depth only with ternary alloys. But these restrictions are by our own choice, so that we can go into depth and train the system on the selective areas of greatest import to our expert informants and sponsors. Within these restrictions lie a number of commercially important alloys, whose rediscovery and refinement by ALADIN will be a major milestone.

Performance measures to date are strictly anecdotal. Our experts work with the system in the interactive mode described earlier. Three milestones have been reached:

1. The representation of structural knowledge is considered by the experts to be an advance over what was available previously.
2. The experts have made the transition from being sceptics to believing the system is of value to their work.
3. The system is beginning to produce non-trivial results that are of interest to designers, and that would require too much tedious work to generate manually. These include partial designs on several spaces and levels.

Though two years have passed since the commencement of the project, we continue to work with the experts to refine and extend the voluminous knowledge and data not yet added to the system. More details about the current state of user acceptance and future plans for technology transfer are supplied in [4].

8. CONCLUSIONS

ALADIN is primarily an application of existing artificial intelligence ideas to an advanced, difficult problem domain. Alloy design is thought to require a high degree of creativity and intuition. However, we have found that generate-and-test, abstract planning, decomposition and rule-based heuristic reasoning can reproduce a significant portion of the reasoning used by human designers on prototype cases. Furthermore, the attempt to build a knowledge-based system has helped alloy designers to systematize their knowledge and characterize interrelationships, particularly in the area of microstructural representation.

9. ACKNOWLEDGMENTS

We are grateful to all our expert metallurgist informants from Alcoa, especially Marek Przystupa and Warren Hunt.

10. REFERENCES

1. Brownston, L., Farrell, R., Kant, E., Martin, N. Programming Expert Systems in OPS5: An Introduction to Rule-Based Programming. Addison-Wesley, Reading, MA, 1985.
2. Carnegie Group, Inc. Knowledge Craft, Version 3.0. Pittsburgh, PA, October, 1985.
3. Erman, L. D., Hayes-Roth, F., Lesser, V. R. and Reddy, D. R. "The Hearsay-II speech-understanding system: integrating knowledge to resolve uncertainty". Computing Surveys 12, 2 (June 1980), 214-253.
4. Farinacci, M. L., Fox, M. S., Hulthage, I. and Rychener, M. D. The development of Aladin, an expert system for Aluminum alloy design. Third International Conference on Advanced Information Technology, Amsterdam, The Netherlands, 1986. Sponsored by Gottlieb Duttweiler Institute, Zurich, Switzerland, November, 1985; also Tech. Rpt. CMU-RI-TR-86-5.
5. Forgy, C. L. OPS5 User's Manual. CMU-CS-81-135, Carnegie-Mellon University, Dept. of Computer Science, July, 1981.
6. Hayes-Roth, B. "A blackboard architecture for control". Artificial Intelligence 26 (1985), 251-321.
7. Hornbogen, E. "On the microstructure of alloys". Acta Metall. 32, 5 (1984), 615.
8. Hulthage, I., Farinacci, M. L., Fox, M. S., Przystupa, M., and Rychener, M. D. The Metallurgical Database of Aladin. Carnegie-Mellon University, Intelligent Systems Laboratory, Robotics Institute, 1986. In preparation.
9. Hulthage, I., Rychener, M. D., Fox, M. S. and Farinacci, M. L. The use of quantitative databases in Aladin, an alloy design system. Coupling Symbolic and Numerical Computing in Expert Systems, Amsterdam, The Netherlands, 1986. Presented at a workshop in Bellevue, WA, August, 1985; also Tech. Rpt. CMU-RI-TR-85-19.
10. Kitzmiller, C. T. and Kowalik, J. S. Symbolic and numerical computing in knowledge-based systems. Coupling Symbolic and Numerical Computing in Expert Systems, Amsterdam, The Netherlands, 1986.
11. Rychener, M. D., Farinacci, M. L., Hulthage, I. and Fox, M. S. Integrating multiple knowledge sources in ALADIN, an alloy design system. Carnegie-Mellon University, Intelligent Systems Laboratory, Robotics Institute, 1986. In preparation; long version of AAAI-86 paper.
12. Sacerdoti, E. D. "Planning in a hierarchy of abstraction spaces". Artificial Intelligence 5 (1974), 115-135.
13. Steele, G. L. Common Lisp, the Language. Digital Press, Burlington, MA, 1984.
14. Stefik, M. J. "Planning with constraints (MOLGEN: part 1); Planning and meta-planning (MOLGEN: part 2)". Artificial Intelligence 16 (1981), 111-170.
15. Underwood, E. E. Quantitative Stereology. Addison-Wesley, Reading, MA, 1970.
PHYSICS FOR ROBOTS James G. Schmolze BBN Laboratories Inc. 10 Moulton Street Cambridge, MA 02238 ABSTRACT Robots that plan to perform everyday tasks need knowledge of everyday physics. Physics For Robots (PFR) is a representation of part of everyday physics directed towards this need. It includes general concepts and theories, and it has been applied to tasks in cooking. PFR goes beyond most AI planning representation schemes by including natural processes that the robot can control. It also includes a theory of mater-la1 composition so robots can identify and reason about physical objects that break apart, come together, mix, or go out of existence. Following on Naive Physics (NP), issues about reasoning mechanisms are temporarily postponed, allowing a focus on the characterization of knowledge. However, PFR departs from NP in two ways. (1) PFR characterizes the robot’s capabilities to act and perceive, and (2) PFR replaces the NP goal of developing models of actual common sense knowledge. Instead, PFR includes all and only the knowledge that robots need for planning, which is determined by analyzing proofs showing the effectiveness of robot I/O programs. 1. Introduction Physics For Robots (PFR) represents knowledge of everyday physics according to the physical capabllltles and planning needs of robots. This knowledge is intended to be an important part of the overall knowledge given to a robot. Physical capabihties are represented within PFR by specifying the perceptual and action functionality of a (hypothetical) robot. This specification is comprrsed by an I/O programming language, whose primitive instructions correspond to primitive perceptions and actions, and an operational semantics, which describes the real world effects of executing I/O programs. (Given the complexity of the real world, this semantics is necessarily rncomplete.) The hypothetical robot used for this research has capabilities that are beyond current, but are within near future technology. Some of the robot’s capabilities and an I/O program are presented later in this paper. This research was supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by ONR under Contract Nos. N00014-77-C-0378 and N00014-85-C-0079. The views and conclusions contained in this document are those of the author and shouid not be interpreted as necessar i ly represent i ng the official pal icies, either expressed or imp1 ied, of the Defense Advanced Research Projects Agency or the U.S. Government. PFR’s similar representation of everyday physics is very in style to Hayes’ Naive Physics (NP) formalizations [Hayes 85a, Hayes 85b]. Like NP, PFR focuses on characterizrng knowledge while postponing implementation considerations. However, NP is ultimately after realistic models of common sense (see [Hayes 85a], page 5) whereas PFR 1s after the knowledge that robots need to plan for everyday tasks. As a result, PFR includes a specification of the robot’s I/O capabilities whereas NP postpones such considerations. More importantly, PFR includes a criteria for judging the value of its representations whereas NP must rely on the existing, and small, body of what is known about common sense along with one’s own intuitions. One begins to evaluate a PFR representation by selecting a set of everyday tasks for the robot to perform, and for each task, designing an I/O program that, when executed, will cause the robot to successfully perform the task. 
An I/O program is one whose primitive instructions are only perceptions and actions for the robot to perform (see Section 4). The test for PFR is whether or not its theory of everyday physics is adequate to prove that the execution of each program will accomplish its corresponding task. The more programs/tasks that can be proven correct using a PFR representation, the greater the PFR’s expressive power and the better the PFR. Further, given two expressively similar PFR representations, one should choose the simpler of the two, and one should choose the representation that is most in keeping with what is known about common sense. I point out that there are two notions of correctness here. One is whether or not executnrg a program w-ill actually accomplish the given task in the real world. PFR cannot be used to show this directly. For hypothetical robots, only informal arguments can used here. For actual robots, the programs can be executed and the robots observed. The second notion of correctness corresponds to whether or not executing the program accomplishes the task according to the theories of a PFR representation. The extent to which these two notions of correctness are in agreement is the extent that the representation is successful. 2. Composition of Materials Physical obJects in the everyday world can come into or go out of existence, break apart, come together or mix Examples from cooking include water that boils and turns to steam, or the pouring of hot water over coffee grounds to create a cup of coffee. PFR must provide the robot with knowledge to deal with such phenomena by giving it a theory of material t 6 i SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. composition skills to: Such a theory provides a robot with the 0 identify physical ObJects as they come into or go out of existence, or go through transformations, and o detsrmine the properties of whole ObJects from the properties of their parts, and vice versa, including when the parts are not readily identifiable (such as the portion of the hot water that went into a cup of coffee) M >- theory of material composition includes three components. (1) a theory of what constitutes the physical ObJeCtS, (2) the part--whole relation along with a theory that identifies parts from wholes and vice versa, and (3) a theory that determines the properties of parts from the properties of wholes and vice versa. In this paper, I wrll only touch on (1) and (2), and will ignore (3) completely given that I will focus on processes. (See [Schmolze 861 for a fuller treatment of material composition.) Before discussing physical objects, I now introduce some basic elements of PFR. Instants of time are represented as individuals where they form a continuum. Let “seconds” map real numbers to instants where “seconds(n)” denotes n seconds. Points in space form a 3-dimensional continuum. Changing relations are represented as functrons on instants of time. Formulas and terms for these relations are written with the time argument separated. For example, “occ.space(x)(t)” denotes the set of points in space that x occupies at time t. “occ.space(x)(t)” is defined iff x is a physical ObJect, t is an instant of time. and x exists at t. Further, x must occupy a non-empty set. Also, “vol(x)(t)” denotes the volume occupied by a physical ObJect at time t, which is defined as the volume of “occ.space(x)(t)“, and which 1s greater than zero for existing physical objects. 
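As a rough illustration of this curried treatment of change (PFR itself is a logical theory, not a program), a time-varying property such as occ.space can be read as a function that, applied to an object, yields a function of an instant. The plist-based representation and the :history slot below are invented for the illustration.

```lisp
;; A hedged sketch, not part of PFR: quantity constructors as tagged
;; magnitudes, and a time-varying property read as a curried function of
;; an instant.  The :history alist is invented for this illustration.
(defun seconds (n) (list :seconds n))   ; instants / durations
(defun cups (n) (list :cups n))         ; volumes

(defun occ-space (x)
  "Return a function of an instant giving the region X occupies then,
in the spirit of occ.space(x)(t)."
  (lambda (instant)
    (cdr (assoc instant (getf x :history) :test #'equal))))

;; Example:
;;   (funcall (occ-space (list :history
;;                             (list (cons (seconds 0) 'region-a)
;;                                   (cons (seconds 5) 'region-b))))
;;            (seconds 5))
;;   => REGION-B
```

Nothing in PFR depends on such an encoding; the point is only that every changing property is indexed by an instant.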
obJects -- possibly objects at the level of atoms and molecules. For example, the process of evap oration can be described by having small pieces of liquid turn to gas and leave the container holding the liquid. Also, by adding some sugar to water and stirring, the entire glass of water becomes sweet. By using small pieces again, one can describe mixtures and show the spread of the sweetness as a dispersion of small pieces of sugar. When hot water IS poured over coffee grounds. a new ObJect is created. coffee. It too is a mixture, which can be useful for determlnlng that, say% the coffee is hot of water that 1t because were ho 1s primarily composed of pieces ust a few seconds earlier Hayes ( [Hayes 85b], page 74) eschews an atomistic theory because he considers it to be beyond the realm of common sense. In traditional physics, there is a complicated gap to bridge between the microscopic and macroscopic versions of certain properties such as temperature, volume and state. Does the robot need to know about actual atoms and molecules, and if not, what simpler theory will meet the robot’s needs? Fortunately, there IS a way to meet the robot’s needs without introducing microscopic versions of temperature, volume and state. To this end, I invent a class of physical objects that I call granules. Their essential properties are that: o they are small enough to be a part of all solid, liquid and gaseous physical objects -- they are too small to be seen individually, o they are large enough to have the usual macroscopic properties of temperature, state and volume (each has a volume greater than zero), o they are pure, be they purely water, wood, or whatever, and o they have no proper parts, and consequently, no two granules share parts nor occupied space. A quantity, borrowed from Hayes [Hayes 85a], is a set of measurements of a given type. For example, the temperatures and the volumes each form a quantity. Each quantity forms a continuum. I will introduce functions from the reals to various quantities, in the style of Hayes, as needed. For example, “cups(4)” denotes a volume of 4 cups. Types that are not trme-varying are called basic types. An example is being a physical object or a temperature. (See [Schmolze 861 for the reasons for the above design choices.) Regarding notation, boolean function names, i.e., predicate names, will be capitalrzed. Other function names are written in all lower case. Names of constants are written in all capital letters. Names of variables are written in lower case. Variable names beginning with “t” are implicitly of type “Instant”, which denotes the basic type for instants of time. I will write “(t,..Q)” to denote the open interval from t, to t,. Also, I will use the following shorthand when a time varying predicate, say P, is true over an open interval. h UP( t, . . tp) - [Vt& tt, 1 ~t*m(t)ll (1) Further, granules of the same type are similar. For example, two water granules with the same heat content will have the same temperature and state. Granules form the smallest physical objects in my ontology. I let “Granule” denote a basic type for granules. By coupling the part-whole relation with granules, I have a powerful too1 for describing material composition. Let “Part(x,y)(t)” be true iff x and y are physical objects that exist at t and x 1s a part of y at t. “Part” forms a partial order over existing physical objects at each instant. 
From these relations, I can define a function, called “gset”, from physical objects to the sets of granules that comprise them at an instant. [v.tl[gsetb)ttj = 1 x tG ranule(x) A Part(x,y)(t)l] (2) I will use the ability to determine an object’s “gset” as the criteria for identifying the object. For example, let there be a glass called G that contains some liquid at time T. If G and T are identified to the robot, I can identify the liquid in G as W with the following. gset(W)(T) = {x[Granule(x) /: Liquid(x)(T) /; (3) Contains(G,x)(T){ Being a physical object is a basic type, and I write “Phys.obj(x)” when x 1s an individual physical ObJect. In order to represent physical obJects coming into and going out of existence, I introduce existence as a property of physical obJects. Let “Exists(x)(t)” be true when x is a physical ObJect that exists at time t Physical ObJeCtS include those ObJects normally considered as such, e g., books, cars, computers, the atmosphere, oceans and glasses of water. However, for certain types of transformations that physical ObJects undergo, it will be useful to include very small physical “Liquid(x)(t)” is true iff x exists and is entirely liquid at t. (“Solid” and “Gas” are defined similarly for the solid and gaseous states.) “Contains(x,y)(t)” iff x and y exist and x contains y at t. Borrowrng from Hayes [Hayes 85b], I have used containment to ldentlfy this liquid ObJect. 1 can go a step further and write a general rule that allows the robot to identify a contained quantity of liquid as a physical object. The first line in Formula 4 requires that there is some liquid rn a container and Planning: AUTOMATED REASONING i t5 the remainder asserts the existence of the object formed by all the liquid in the container. hl K [ZIx][Contains(c,x)(t) A Liquid(x)(t)]) + (4) [ZIl][Phys.obj(l) A Exists(l)(t) A gset(l)(t)=(ylGranule(y) A Liquid(y)(t) A Contains(c,y)(t) I]] Here, x can be a single liquid granule. Space does not permit a thorough examination of the utlhty of granules: The interested reader should refer to [Schmo<ze 861 where there are rules that allow the robot to identify liquid ObJects elsewhere, are mixed with other that are poured liquids, partially evaporate, etc. In addition, there are rules that allow the- robot to infer various properties of these transformed objects, such as their temperature, volume or composition,- all without special knowledge about the properties of microscopic objects. Further, the robot needs to reason about granules only when necessary; it can reason about normal physical objects without considering granules. The general PFR representation thus far allows a wide variety of such rules to be formulated. objects to application However, the actual rules for identifying be given dependent. to a particular robot will be 3. Simple Processes Any-robot that deals with the everyday world must be able to predict changes due to nature. An important source of natural changes is natural processes, and so, PFR includes them. I have limited my study to a class of process types that I call simple. All simple process types have an enabling condition and an effect, -both of which depend only on the physical condition of the world (and not on, say, the intention of any agent). Basically, an instance of a simple process type occurs when and only when the enabling condition is true for some set of physical objects, and the process has the given effect on the world while it is occurring. 
For example, whenever a faucet’s knob is open, water flows from the faucet. Or, whenever two physical objects are of different temperatures and are in thermal contact, heat flows from the hotter to the cooler object. I note that many real processes are not simple. Given instances of simple process types (i.e., simple processes), a robot must be able to determine when they occur, how to identify them (e.g., deciding when two processes are the same or different), and what their effects are. Further, these factors must be determinable from limited information. For example, it must be possible to determine a process’ effects without knowing when the process will end. Also, the manner of describing effects must allow for either discrete or continuous changes. For example, heat is measured on a continuum, so heat transfer causes continuous changes. However, water flowing from a faucet is (eventually) measured by the transfer of whole water granules, so faucet flow causes discrete changes. Finally, the representation must allow for situations where several processes affect the same property of the very same objects, such as a heating and cooling process occurring simultaneously on the same pot of water. I note that Hayes [Hayes 85a, Hayes 85b] does not address these points direct&. Others, such as [Forbus 851 and [Hendrix ‘731 have addressed some but not all of them. I represent simple processes as individuals. Let “Occurs(x)(t)” be true iff x is an event that is occurring at time t. “Occurs” for events is analogous to “Exists” for physical objects. I will illustrate the essential properties of simple process types by describing the process type for water flowing from a kitchen faucet. Along with that, I will describe faucets, objects associated with faucets (such as their controlling knobs), and their operation. Let “Faucet.flow” be a basic type for faucet flow processes. Each simple process has a set of players, i.e., the physical objects that are involved. For “Faucet.flow”, the only player IS the faucet, with which I assocrate other objects. In my model, a faucet has a knob, a head, a sink, a supply container that holds the faucet’s SUPPlY and, of course, the water In the supply container. Let “Kltchen.faucet” and “Faucet.knob” be basic types for kitchen faucets and their controlling knobs, respectively. The knob has fully closed and fully open positions, and there are positions in between. Let “closed.position(k)(t)” denote the space that a faucet knob, k, must occupy in order to be fully closed at time t. Let “open.position(k)(t)” be similar, but for the fully open position. From these functions, I can define “Closed.knob(k)(t)” as true iff k is a faucet knob that is fully closed at t and “Open.knob(k)(t)” as true iff k is a fully open faucet knob at t. [Vk,t][Closed.knob(k)(t)+Faucet.knob(k) A (5) occ.space(k)(t)=closed.position(k)(t)] A [Vk,t][Open.knob(k)(t)eFaucet.knob(k) A occ.space(k)(t)=open.position(k)(t)] If neither is true, the knob is in between. In addition, let “knob.of.faucet(f)(t)“, “supply.cont.of.faucet(f)(t)” and “supply.of.faucet(f)(t)” denote the existing knob, supply container and water supply, respectively, of f when f is an existing faucet. The enabling condition for the “Faucet.flow” process type is written over an interval of time (I will soon explain why) and is true iff a faucet, f, is not fully closed over some open interval, “(t,..t,)“. The following is written with f, t, and tp free. k is used to simplify the formula. 
[VtG(t,..t2)][-CIosed.knob(k)(t)] (6) where “k” is “knob.of.faucet(f)(t)” I will write “Faucet.not.closed(f){t,,t.$ as a shorthand for Formula 6. The effect of a “Faucet.flow” process is that water flows from the faucet’s supply container to a receiving container, which is either the faucet’s sink, or an open container under the faucet’s head. To describe the effect, I rely on two defined predicates, “Liq.xfer” and “rate.liq.xfer” (only “rate.liq.xfer” will be formally presented here). “Liq.xfer(c,,c2,tb,te)” is true iff the following holds. 1. 2. There is some liquid in a container, c,, at t,. Throughout the open time interval from t, to t,, where “Q,<t,“, granules from the liquid in c, are transferred to a different container, c2. The transfer could have begun before t, and could have ended after t,. “Liq.xfer” only states that a transfer occurred throughout the particular interval “(tb..te)“. Further, the liquid need not remain in c2 (e.g., it could be transferred elsewhere). “rate.liq.xfer(c, ,c2,t,.t,)” denotes the average rate of a liquid transfer satisfying “Liq.xfer(c, ,c2,t,,,t,)“. It is just the volume of the liquid actually transferred divided by the time of transfer. I calculate this volume by summing over the volumes of granules transferred 46 / SCIENCE since (1) all the liquid that is transferred may not form a single individual (e.g., if part of it was transferred elsewhere from c2 during “($,..ta)“), and (2) granules share no parts, so I will get an accurate measurement of volume. Since the number of granules transferred 1s discrete, I place a minimum length on the time interval over which. this rate can be calculated -- this minimum being large enough so that a reasonably large number of granules are certain to have transferred. If these intervals are allowed to be arbitrarily small, inaccurate measurements can result. Let “atLx” denote this minimum interval length, which I set to one-tenth second. [Vr,c, ,c2~tb~t,l (7) [r=liq.xfer.rate(c,,C~,t~,t,)~Liq.Xfer(C,,CP,tb,te)A te--tGAtLx A 1 r=- * vol.gset(+(c, ,c2,tb,t,)(t,))l %+b where +b, +‘tb’t,,)tt,)= (8) (xlGranule(x) A [3t,~(tb..te)ltp~(tb..t,)] [t,$ * fiwid(x)(t, _ +) A COntainS(C,,x)(t,) A Gontains(cZ,x)(tp)]~ and where “vol.gset(x)(t)” is just the volume of a set of existing granules, x, at time t. [Vx,y,t] [y=vol.gset(x) * (9) Set(x) A [VzGx][Granule(z) A Exists(z)(t)] A Y=~~xpl(z)l’)] “Set(x)” is true iff x is a set. I define the effect of a “Faucet.flow” process to be that, if the faucet is fully open, water transfers from the faucet’s supply container to a receiving container at the rate of one-quarter cup per second. If it is partially open, the rate is between one-sixtieth and one-quarter cup per second (this is idealized to simplify its presentation). The following describes the effect of a “Faucet.flow” process, p, that is occurring during “(t,..t2)” ( remember, for p to occur, the faucet must not be closed). Let “faucet.of.flow(p)” denote the faucet involved with p. c, r and k are introduced to simply the formula. 
Liq.xfer(c,r,t, ,t& A [ Open.knob(k) ( t, ..q + cups( 1) rate.liq.xfer(c,r,t,,t2)=seconds(4~ A 1 CuPsW trate.liq.xfer(c,r,t, ,t,)< cups(l) seconds(60) seconds(4) 1 where “c” is “supply.cont.of.faucet(faucet.of.flow(p))(t)” “r” is “receptacle.of.flow(p)(t)” “k” is “knob.of.faucet(faucet.of.flow(p))(t)” “receptacle.of.flow” is a function that is defined using geometrical primitives; I will not discuss it in this paper except to state that, for a “Faucet.flow” process, it refers either to the faucet’s sink or to an open container directly below the faucet’s head. For the formulas that follow, I will use “Effect(p)(t,,tZ)” to refer to Formula 10. The effect of a water flow process is written over an interval of time because there is a discrete quantity being measured, as I explained above. For this reason, I will place a minimum length on the intervals over which the effect of a faucet flow process is calculated (as will be seen in Formula 15). Let “Atef f” denote this minimum, which, like “AtLx”, is one-tenth second. For simple process types whose effects can be measured on a continuum, “Aterr” is zero, making it possible to describe such process types using instantaneous rates, if desired. I note that enabling conditions are expressed over intervals for similar reasons, although for the enabling condition of “Faucet.flow”, there is no need for a minimum length interval. There are 5 essential properties of simple process types. For each, I include a formula written for “Faucet.flow” that describes the property. Each simple process type will have 5 similar formulas. 1. An instance begins when (or just after) the enabling condition goes from false to true for some set of players. t, represents the beginning time for a process. [ Vf:Kitchen.faucet,tb 1 (11) [ -[3t][t<tb A Faucet.not.closed(f)(t,tb)] A [3t][t>tb A Faucet.not.closed(f)(tb,t)] -+ [ IL 3P Faucet.flow(p) A f=faucet.of.flow(p) A [Vt][t<t, + ~Occurs(p)(t)] A [Vt][t>t,A Faucet.not.closed(f)(tb,t) + Occurstd~tb.. t) 111 i.e., for appropriate tb’s, a faucet flow process begins at t, whose player -- its faucet -- is f and which continues while the faucet is not closed. 2. An instance continues as long as the enabling condition remains true for those players. [ Vf:Kitchen.faucet,t, ,tS 1 (12) [ t,<tp A Faucet.not.closed(f)(t,,tJ + [ I[ 3P Faucet.flow(p) A f=faucet.of.flow(p) A -- Occurs(p)( t, . . tP) 11 3. An instance ends when (or just before) the condition first becomes false after the process starts for those players. t, represents the ending time for the process. [ Vf:Kitchen.faucet,t. 1 (13) [[3t][t<t,, A Faucet.not.closed(f)(t,tJ] A -[3t][t>t, A Faucet.not.closed(f)(t,,t)] + [ IL 3P Faucet.flow(p) A f=faucet.of.flow(p) A [vt][t>t* + -Occurs(p)(t)] A [Vt][t<t, A Faucet.not.closed(f)(t,t,) + Occurdp+t.. tell 11 i.e., for appropriate t,‘s, a faucet flow process ends at t, whose player -- its faucet -- is f and which has continued for as long as the faucet has not been closed. 4. If two individual simple processes of the same type and with the same players overlap in the times of Planning: AUTOMATED REASONING / 47 their occurrences, they are the very same process. [ Vp, .Faucet.flow,p2:Faucet.flow 1 (14) [ faucet.of.flow(p,)=faucet.of.flow(p2) A E~tl[Occursb,)(t) A occurs(p,)Wl --+ P, =PJ 5. The effect applies to the players while the process occurs over intervals larger than the given minimum length. 
[ Vp Faucet flow,t, ,t2 1 (15) [ -It ef fZt*-tl ’ occurs(p)( t, t2) -+Effect(p)(t, ,t2)] This knowledge allows the robot to determine when faucet flow processes begin, continue and end. It provides identity criteria for these processes and it describes their effect in the real world. Thus, the robot is well equipped to plan to control such processes. In Section 5, this knowledge is used to show the effectiveness of an I/O program. 4. Robot Perception and Action Any robot that plans must know the consequences of executing its perceptual and action routines, i.e., its own I/O programs. In this section, I specify the IjO functionality of a hypothetical robot as part of PFR. In order to describe the effects of executing programs, a model of the robot’s internal state and capabilities is needed. The robot can move about, grasp certain kinds of objects with its (single) arm and hand, and can determine certain kinds of situations by “looking” through its (single) camera eye. Let “Near(x)(t)” be true iff the robot is near ObJect x at t. To be near an object means that the robot is able to see it and reach it. “Grasped(x)(t)” iff the robot is grasping object x at t. In order to be grasped, the ObJect must be of a certain shape, which I denote with “Graspable(x)(t)“. Only one obJect can be grasped at a time. In order to represent the robot’s ability to identify and find objects at given times, I introduce “Identifiable(x)(t)“, which partially models the robot’s internal memory state. The I/O language includes calls to primitive input and output procedures, sequencing, compound statements, if-then-else statements and while loops. Output procedure calls are program statements. Input procedure calls are program functions. There is no assignment statement. Constants denote individuals such as physical objects or instants of time. For simplicity, I assume that the execution of the control portion of statements takes zero time. This includes calls to input procedures, so they also take zero time to execute Also for simplicity, output procedures take fixed, greater-than-zero time to execute. In the descriptions that follow, each output procedure takes 2 seconds. (For a full specificat.ion, see [Schmolze 861.) Let “E(S)(t,,t&” d enote the execution of statement S by the robot where execution begins at t, and ends at t2, such that a new statement can begin executing at tp grasp x. If x is identifiable, graspable, near the robot and nothing is already grasped, the robot will grasp x. [ vx,t, $2 I[ Ekrasp x)(t, J,)--+ t-1 6) tp-t, =seconds(2) A ( Identifiable(x)(t,) A Near(x)(t, j A Graspable A -[3y][Grasped(y)(t,>] -+ Grasped(x open.knob k. If k is a faucet knob that is currently being grasped, this causes the robot to move k (if necessary) to its open position. It takes 2 seconds. For simplicity, I assume that the robot knows the current open position for k. If k is already open, the robot takes no action. If k is not open, it begins to move k immediately. At some point during execution of this procedure, k is in the open position, after which the robot stops moving it. Before describing “open.knob”, I define “Stationary(x)(t, ,t2)” to be true iff x does not change location from t, through t2. [Yx,t, ,tJ[Stationary(x)(t, ,t2> # (17) bwt,..t,)l[ occ.space(x)(t)= occ space(x)(t, i]] [ Vk.Faucet.knob,t, ,tp 1 (18) [ E(open.knob k)(t,,$) + t2-t,=seconds(Z) A ( Grasped A Open.knob(k)(tl) + Open.knob(k)(t , . . 
tZl A Stationary(k)(t, ,t2)) A ( Grasped A wOpen.knob(k)(t,) -+ Wwt,..t,M occ.space(k)(t)# occ.space(k)(t,)] A [3tG (t,. .t,)][Open.knob(k)(t) A Open.knob(k)(t.. tZ) A Stationary(k)(t,t&] A Cvt~(t,..tpmP en.knob(k)(t) -+ Own.knob(k)(t.. ,$] close.knob k: If k is a faucet knob that is currently being grasped, this causes the robot to move k (if necessary) to its closed position. It is very similar to the “open.knob” procedure. [ Vk:Faucet.knob,t, ,tp 1 1191 [ E(close.knob k)(t,,t& -+ tp-t,=seconds(2) A (Grasped A Closed.knob(k)(t,) --+ Closed.knob(k)(t,. . t2) A Stationary(k) ,t*)) A ( Grasped A wClosed.knob(k)(t,) + mw,..t~m occ.space(k)(t)# occ.space(k)(t,)] h [3tG(t,..tp)][Cl osed.knob(k)(t) ,\ Closed.knob(k)(tf. t2) 1~ Stationary(k)(t,t2)] A [vtG(t,..t,>][cl osed.knob(k)(t) -+ Closed.knob(k)( t. I ,,)I)] release. The robot releases whatever is being grasped. It takes 2 seconds. [Vt, ,tp][E(release)(t, ,t,j --+ c201 t2--t, =seconds(2) A -[3y][Grasped(y)(t2)]] Less.full(C,P). An input procedure that 1s true iff container C is less than a certain fraction full of solid and/or liquid material, P IS the fraction if ? 1s 1. then this is true whenever C is not full C must be identified beforehand and the robot must be near it The robot estimates the value of this function using its visual capabilities along with knowledge of the container’s shape. However, for this paper, this ability of the robot is idealized. Let “o(P)(t)’ be true iff the evaluation of input procedure P at time t would be true. -tx , SCIENCE c IL vt Identifiable(C)(t) A Near(C)(t) --+ ( o(Less.full(C,P))(t) * vol.gset(Z)(t) contained.vol(C)(t) <P >I where “Z” is “(xlGranule(x) A Contains(C,x)(tj A -Gas(x Here, “contalned.vol(x)(t)” denotes the maximum volume of liquid material that x can contain at time t 5. Filling a Pot with Water In this section, I present an I/O program that, when executed under given conditions, will cause the robot to partially fill a pot with water. The given conditions are that a pot (P) is upright, in a sink (S), and under the head of a faucet (F) that is controlled by a knob (K) with a water supply (W) that is stored in a supply container (C). K is in the closed position. The robot is near the faucet. FP. S,. grasp K; S2. open knob K; (22) S3. while Less.full(P,0.5) do idle.for seconds(0.1); S,: close.knob K, S5: release, When FP is executed, the robot grasps K and moves K to the open position. At this point, water begins flowing into P. In S3, the robot waits until the accumulated water occupies more than half of P. The robot then closes K and releases it, leaving P about half full of water. PFR can be used to show the effectiveness of the FP program The ontology and theories presented so far will be used to show that each statement of FP, when executed, produces a set of conditions needed for the next statement execution, and that at the end, the FP program has caused the robot to partially fill P with water. Furthermore, I will demonstrate how the robot has the knowledge to infer the identity of a faucet flow process, even though no such process is mentioned in the FP program. I will only sketch a proof in this paper. (A full proof, excluding program termination, of a similar I/O program can be found in [Schmolze 861.) I introduce T, through T,, where S, is executed from T, through T,, S2 is executed from T, through T,, etc. The relevant given conditions are. 
Faucet(F) A Pot(Pj A (23) K=knob.of.faucet(F)(Tg) A W=supply of.faucet(F)(TJ A C=supply.cont.of.faucet(F)(TO) A Contalns(C,W)(Tg) A v~l(W)(T~)>cups( 1000) A Exists(F)(T@) A Exists(K)(TJ A Exists(P)(T& f\ Exists(C)(Ta) A Exists(W)(TJ A contained.vol(P)(T&=cups( 1) A All.water(W)(TO) A Identrfiable(P)(T& A Identifiable(K)(T@) A Near(P)(T@) A Near(K)(T& A Graspable A Closed.knob(K)(Tg) A -[XIy][Grasped(y)(T,)] Here I have used “Pot”, which denotes a basic type for kitchen pots, and “All.water(x)(t)“, which is true iff x is composed entirely of water granules at time t (definition not shown here). The goal is that P contains at least half a cup of water at time TG. [Sl][Exists(l)(T,) A All.water(l)(TG) ?\ (24) Contains(P,l)(TG) A vol(l)(TG)>cups(O 5j] Throughout this proof sketch, I will need to make default assumptions. However, I have not investigated theories for making appropriate default assumptions in this research. Instead, I will simply make those assumptions that are needed and reasonable. As a result, I have a set of examples that a theory for making default assumptions must be able to produce. My first assumptions correspond to conditions that will not change throughout the execution of FP. Default assumptzon [VWTa..T5)] (25) [K=knob of faucet(F)(t) A W=supply of faucet(F)(t) “\ C=supply cant of.faucet(Fj(t) A Contains(C,W)(t) A vol(W)(t)>cups( 1000) A Exists(F)(t) A Exists(K)(t) A Exists(P)(t) A Exists(C)(t) A Exists(W)(t) A contained.vol(P)(t)=cups( 1) A All.water(W)(t) A Identifiable(P)(t) A Identifiable(K)(t) A Near(P)(t) A Near(K)(t) A Graspable(K)(t)] Additional assumptions are needed in a complete proof, such as that certain ObJects do not move throughout, that the open and closed positions for K do not change, etc. After executing S,, the knob K is grasped, i.e., “Grasped(K This follows trivially since the given condition in Formula 23 satisfies the condition of Formula 16. While executing Sp, the robot moves K (the currently grasped object) to its open position. Let T’, denote the instant that K first becomes fully open, after which it remains open. T’, is in the interval “(T,..T2)“. Also, according to Formula 18, the robot begins to move K immediately at T,. Open.knob(K)(T,, . .T2) A (26) [Vt~(T,..T’,)][~Open.knob(K)(t)n~Closed.knob(K)(t)] For similar reasons, during the execution of S,, there 1s some instant when K becomes fully closed and remains closed (using Formula 19). Let this instant be T’,, which is in the interval “(T3,.T4)“. Closed.knob(K)(T, 5..T41A (27) [VtG (T3..T’3)][-0pen.knob(Kj(t) A -Closed.knob(K)(t)] I will now sh/ow that a “Faucet.flow” process begins at T, and ends at T’3. However, first I make the default assumption that K remains fully closed during “([email protected],)“, fully open during “(T2..T3)“, and fully closed during “(T4. .Ts)“. Default assumption. (28) Closed.knob(K)(T @. J1> * OPenknob CT*. , T3) A Closed.knob(K)(T 4- .Ts> As a result, K is fully closed before T, and it is not fully closed just after T, (note that nothing needs to be said about K’s status precisely at T,). This satisfies the left side of Formula 11 with “tb=T,“, leading me to conclude that there is a “Faucet.flow” process, which I’ll call FF, with F as its “faucet.of.flow”, that begins at 5 and continues while K is not closed. However. Formula 11 will not let me conclude that FF ends at T’,; Formula 13 is needed to determine process endings. 
Letting “t,=T’3” in Formula 13, I conclude that a “Faucet.flow” process, which I’ll call FF2, has F as its “faucet.of.flow”, ends at T’,, and has continued for as long as K has not been closed. Of course, there is Planning: ALJTOMATED REASONING 1 -t9 only one process here, which is concluded from Formula 14. Since FF and FF2 use the same faucet, F, and their occurrences overlap (e.g., at T3), then “FF2=FF”. Faucet.flow(FF) A F=faucet.of.flow(FF) A (29) Occurs(FF)(T,. +Tv3) A [Vt][t<T, +-Occurs(FF)(t)] A [Vt][t>T’3-+-Occurs(FF)(t)] Thus, the robot can identify a faucet flow process and can determine its times of occurrence. Given the times of occurrence of FF, I can now determine its effect. First, I assume that P receives the water flowing from F (space does not allow a discussion of the necessary geometry). [VtG (T, ..T’3)][P= receptacle.of.flow(FF)(t)] (30) By applying the formula describing the effects of “Faucet.flow”, Formula 15, to the above times for FF’s occurrence, 29, I conclude that a liquid transfer took place from C to P during “(T,..T’,)“. Liq.xfer(C,P,T, ,T’,) (31) So, granules are accumulating in P that come from C (i.e., are part of what was W). From this, I can conclude that water is accumulating in P (and if I added more theories, that this water has properties similar to those of W, such as being either hot or cold). Also, given that FF is occurring, I can conclude the approximate rates of transfer. During “(T’, . .T$“, it transfers at the maximum rate of 1 cup every four seconds. During the other times it transfers at a rate somewhere between 1 cup per minute and 1 cup per 4 seconds. I now make the default assumptions that the liquid transferred by FF remains in P throughout execution of FP and that it remains liquid. Also, any non-gaseous object in P during execution of FP came from F’s water supply, w. Default assumption: [Vx,W&] (32) K Liquid(x)(t) A Contains(P,x)(t)+ [Vt’G(t..Tg)][Liquid(x)(t’) A Contains(P,x)(t’)]) A (-Gas(x)(t) A Contains(P,x)(t)+ wt(x)W smeWMTg))] Given the above, I conclude that P will continue to fill with water and that, eventually, “Less.full(P,0.5)” will be false. In fact, this will happen between 0 and 2 seconds after Tp, taking into account the varying rate of water flow and the fact that the time of T’, is not precisely known. Therefore, S, takes between 0 and 2 seconds to execute, and the entire program takes between 8 and 10 seconds. So, the robot should begin execution at “Te=TC-seconds(lO)” to be sure P will be filled in time. It turns out that during the execution of s,, another half cup of water could flow, so P will be between half and completely full. I am nearly at the given goal, Formula 24, but it is stated in terms of a liquid object and not in terms of a set of liquid granules that are contained in P. However, Formula 4 lets the robot identify the liquid in P as a physical object, and so the goal is achieved. 6. Conclusions Physics For Robots (PFR) represents the everyday physics that a robot needs to use in planning to perform everyday tasks. Using a PFR representation scheme, a robot can reason about natural processes as well as actions. It can take into account the time events take, the gradual changes they cause and the fact that many processes, once initiated, continue without further attention. Therefore, it can plan to control many processes simultaneously. PFR also specifies identity criteria for physical objects that break apart, come together, mix, or come into or go out of existence. 
Therefore, the robot can plan to recognize and manipulate objects undergoing transformations, and to determine the properties of these objects based on their material composition. The contributions of this research are: 0 a strategy to develop and evaluate representations of everyday physics for robot planning, 0 a general representation for part of everyday physics: including an ontology of time, space, physical objects and events, theories governing processes, material composition, etc. o an application specific representation: describing everyday phenomena from cooking, such as water flow from a faucet, etc. The crucial research to be done next is not only to extend these types of representations to more areas, but to use these results to design reasoning mechanisms that will allow robots to plan for everyday tasks. 7. Acknowledgements Many, many thanks go to David Israel, David McDonald, Candy Sidner, Brad Goodman, N. S. Sridharan, Andy Haas, Marc Vilain and Krithi Ramamritham for their ideas and comments. [Forbus 851 [Hayes 85a] [Hayes 85b] [Hendrix ?3] [Schmolze 861 REFERENCES Forbus, K. D. The Role of Qualitative Dynamics in Naive Physics. In Formal Theories of the Commonsense World, pages 185-228. Ablex, 1985. Hayes, P. The Second Naive Physics Manifesto. In Formal Theories of the Commonsense World, pages l-38. Ablex, 1985. Hayes, P. Naive Physics 1: Ontology for Liquids. In Formal Theories of the Commonsense World, pages ?l- 108. Ablex, 1985. G.G. Hendrix. Modeling Simultaneous Actions and Continuous Processes. Artificial Intelligence 4: 145- 180, 1973. Schmolze, J. G. Physics FOT Robots. PhD thesis, University of Massachusetts, February, 1986. (Also BBN Laboratories Report No. 6222, July 1986). 50 / SCIENCE
COOPERATION WITHOUT COMMUNICATION

Michael R. Genesereth, Matthew L. Ginsberg, and Jeffrey S. Rosenschein*

Logic Group, Knowledge Systems Laboratory, Computer Science Department, Stanford University, Stanford, California 94305

*This research has been supported by the Office of Naval Research under grant number N00014-81-K-0004 and by DARPA under grant numbers N00039-83-C-0136 and N00039-86-C-0033.

ABSTRACT

Intelligent agents must be able to interact even without the benefit of communication. In this paper we examine various constraints on the actions of agents in such situations and discuss the effects of these constraints on their derived utility. In particular, we define and analyze basic rationality; we consider various assumptions about independence; and we demonstrate the advantages of extending the definition of rationality from individual actions to decision procedures.

I Introduction

The affairs of individual intelligent agents can seldom be treated in isolation. Their actions often interact, sometimes for better, sometimes for worse. In this paper we discuss ways in which cooperation can take place in the face of such interaction.

A. Previous work in Distributed AI

In recent years, a sub-area of artificial intelligence called distributed artificial intelligence (DAI) has arisen. Researchers have attempted to address the problems of interacting agents so as to increase efficiency (by harnessing multiple reasoners to solve problems in parallel [29]) or as necessitated by the distributed nature of the problem domain (e.g., distributed air traffic control [30]).

Smith and Davis' work on the contract net [6] produced a tentative approach to cooperation using a contract-bid metaphor to model the assignment of tasks to processors. Lesser and Corkill have made empirical analyses of distributed computation, trying to discover cooperation strategies that lead to efficient problem solutions for a network of nodes [3,4,7,21].

Georgeff has attacked the problem of assuring non-interference among distinct agents' plans [12,13]; he has made use of operating system techniques to identify and protect critical regions within plans, and has developed a general theory of action for these plans. Lansky has adapted her work on a formal, behavioral model of concurrent action towards the problems of planning in multi-agent domains [20].

These DAI efforts have made some headway in constructing cooperating systems; the field as a whole has also benefited from research into the formalisms necessary for one agent to reason about another's knowledge and beliefs. Of note are the efforts of Appelt [1], Moore [24], Konolige [19,18], Levesque [22], Halpern and Moses [8,16].

B. Their assumptions

Previous DAI work has assumed for the most part that agents are mutually cooperative through their designer's fiat; there is built-in "agent benevolence." Work has focused on how agents can cooperatively achieve their goals when there are no conflicts of interest. The agents have identical or compatible goals and freely help one another. Issues to be addressed include those of synchronization, efficient communication, and (inadvertent) destructive interference.

C. Overview of this paper

1. True conflicts of interest

The research that this paper describes discards the benevolent agent assumption. We no longer assume that there is a single designer for all of the interacting agents, nor that they will necessarily help one another. Rather, we examine the question of how high-level, autonomous, independently-motivated agents ought to interact with each other so as to achieve their goals. In a world in which we get to design only our own intelligent agent, how should it interact with other intelligent agents?
Rather, we examine the question of how high-level, au- tonomous, independently-motivated agents ought to interact with each other so as to achieve their goals. In a world in which we get to design only our own intelligent agent, how should it interact with other in- telligent agents? Planning: AUTOMATED REASONING / 5 1 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. There are a number of domains in which au- tjonomous, independently-motivated agents may be ex- pected to interact. Two examples are resource man- agement applications (such as an automated secre- tary [15]), and military applications (such as an au- tonomous land vehicle). These agents must represent the desires of their designers in an environment that includes other intelligent agents with potentially con- flicting goals. Our model of agent interaction thus allows for true conflicts of interest. As special cases, it includes pure conflict (i.e., zero sum) and conflict-free (i.e., common goal) encounters. By allowing conflict of interest inter- actions, we can address the question of why rational agents would choose to cooperate with one another, and how they might coordinate their actions so as to bring about mutually preferred outcomes. 2. No communication Although communication is a powerful instrument for accomodating interaction (and has been examined in previous work [28]), in our analysis here we consider only situations in which communication between the agents is impossible. While this might seem overly re- strictive, such situations do occur, e.g., as a result of commmunications equipment failure or in interactions between agents without a common communications protocol. Furthermore, the results are valuable in the analysis of cooperation with communication [28,27]. Despite the lack of communication, we make the strong assumption that sufficient sensory information is available for the agents to deduce at least partial information about each other’s goals and rationality. For example, an autonomous land vehicle in the battle- field may perceive the actions of another autonomous land vehicle and use plan recognition techniques [9] to deduce its destination or target, even in the absence of communication. 3. Study of coristrairrts In this paper we examine various constraints on the actions of agents in such situations and discuss the effects of these constraints on the utility derived by agents in an interaction. For example, we show that it can be beneficial for one agent to exploit information about the rationality of another agent with which it is interacting. We show that it can also be beneficial for an agent to exploit the similarity between itself and other agents, except in certain symmetric situa- tions where such similarity leads to indeterminate or nonoptimal action. The study of such constraints and their conse- quences is important for the design of intelligent, inde- pendently motivated agents expected to interact with other agents in unforeseeable circumstances. Without such an analysis, a designer might overlook powerful principles of coooperation or might unwittingly build in interaction techniques that are nonoptimal or even inconsistent. Section 2 of this paper provides the basic frame- work for our analysis. The subsequent sections analyze progressively more complicated assumptions about in- teractions between agent.s. 
Section 3 discusses the consequences of acting rationally and exploiting the rationality of other agents in an interaction; section 4 analyzes dependence and independence in decision making; and section 5 explores the consequences of rationality across situations. The concluding section discusses the coverage of our analysis.

II Framework

Throughout the paper we make the assumption that there are exactly two agents per interaction and exactly two actions available to each agent. This assumption substantially simplifies our analysis, while retaining the key aspects of the general case. Except where indicated to the contrary, all results hold in general [10,11,14,27].

The essence of interaction is the dependence of one agent's utility on the actions of another. We can characterize this dependence by defining the payoff for each agent i in an interaction s as a function p_i^s that maps every joint action into a real number designating the resulting utility for i. Assuming that M and N are the sets of possible moves for the two agents (respectively), we have p_i^s : M x N -> R.

In describing specific interactions, we present the values of this function in the form of payoff matrices [23], like the one shown in figure 1. The number in the lower left hand corner of each box denotes the payoff to agent J if the agents perform the corresponding actions, and the number in the upper right hand corner denotes the payoff to K. For example, if agent J performs action a in this situation and agent K performs action c, the result will be 4 units of utility for J and 1 unit for K. Each agent is interested in maximizing its own utility.

[Figure 1: A payoff matrix]

Although the utilities present in a payoff matrix can generally take on any value, we will only need the ordering of outcomes in our analysis. Therefore, we will only be using the numbers 1 through 4 to denote the utility of outcomes.

An agent's job in such a situation is to decide which action to perform. We characterize the decision procedure for agent i as a function W_i from situations (i.e., particular interactions) to actions. If S is the set of possible interactions, we have W_i : S -> M. In the remainder of the paper we take the viewpoint of agent J.

III Basic Rationality

We begin our analysis by considering the consequences of constraining agent J so that it will not perform an action that is basically irrational. Let R_i^s denote a unary predicate over moves that is true if and only if its argument is rational for agent i in situation s. Then agent J is basically rational if its decision procedure does not generate irrational moves, i.e.,

    not R_J^s(m)  implies  W_J(s) != m.

W_J here is a function that designates the action performed by J in each situation, as described above. In order to use this definition to judge which actions are rational, however, we need to further define the rationality predicate R_J^s.

An action m' dominates an action m for agent J in situation s (written D_J^s(m', m)) if and only if the payoff to J of performing action m' is greater than the payoff of performing action m (the definition for agent K is analogous). The difficulty in selecting an action stems from lack of information about what the other agent will do. If such information were available, the agent could easily decide what action to perform. Let the term A_K^s(m) denote the action that agent K will perform in situation s if agent J performs action m:

    W_K(s) = A_K^s(W_J(s)).

In what follows we call A_K^s the reaction function for K. Then the formal definition of dominance is

    D_J^s(m', m)  iff  p_J^s(m', A_K^s(m')) > p_J^s(m, A_K^s(m)).

We can now define the rationality predicate. An action is basically irrational if there is another action that dominates it:

    (exists m'  D_J^s(m', m))  implies  not R_J^s(m).
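To make these definitions concrete, the following is a minimal sketch (not from the paper; the encoding and all names are illustrative) of a two-agent, two-move interaction in Python. A situation is represented by a payoff dictionary over joint actions, K's reaction function maps J's move to K's move, and the dominance relation D_J^s is checked against that reaction function.

```python
# Illustrative sketch of the framework above; names and the figure-1 entries
# other than the stated (a, c) cell are assumptions, not taken from the paper.

def p_J(payoffs, m, n):
    """Payoff to agent J when J plays m and K plays n."""
    return payoffs[(m, n)][0]

def dominates(payoffs, m_prime, m, reaction):
    """D_J^s(m', m): m' dominates m, given K's reaction function A_K^s."""
    return p_J(payoffs, m_prime, reaction(m_prime)) > p_J(payoffs, m, reaction(m))

def basically_rational_moves(payoffs, moves_J, reaction):
    """Moves that a basically rational J may still perform (no dominating alternative)."""
    return [m for m in moves_J
            if not any(dominates(payoffs, m2, m, reaction) for m2 in moves_J)]

# A matrix in the spirit of figure 1: only the (a, c) entry -- 4 for J, 1 for K --
# is given in the text; the remaining cells are invented for the example.
figure_1 = {('a', 'c'): (4, 1), ('a', 'd'): (3, 2),
            ('b', 'c'): (2, 3), ('b', 'd'): (1, 4)}

# If K were known to always play c, action b would be irrational for J.
print(basically_rational_moves(figure_1, ['a', 'b'], lambda m: 'c'))   # ['a']
```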
Even if J knows nothing about K's decision procedure, this constraint guarantees the optimality of a decision rule known as dominance analysis. According to this rule, an action is forbidden if there is another action that yields a higher payoff for every action of the other agent, i.e.,

    (exists m'  forall n, n'  p_J^s(m, n) < p_J^s(m', n'))  implies  W_J(s) != m.

Theorem: Basic rationality implies dominance analysis.
Proof: A straightforward application of the definition of rationality. []

As an example of dominance analysis, consider the payoff matrix in figure 2. In this case, it is clearly best for J to perform action a, no matter what K does (since 4 and 3 are both better than 2 and 1). There is no way that J can get a better payoff by performing action b.

[Figure 2: Row Dominance Problem]

Of course, dominance analysis does not always apply. As an example, consider the payoff matrix in figure 3. In this situation, an intelligent agent J would probably select action a. However, the rationale for this decision requires an assumption about the rationality of the other agent in the interaction.

[Figure 3: Column Dominance Problem]

In dealing with another agent it is often reasonable to assume that the agent is also basically rational. The formalization of this assumption of mutual rationality is analogous to that for basic rationality:

    not R_K^s(m)  implies  W_K(s) != m.

Using this assumption one can prove the optimality of a technique called iterated dominance analysis.

Theorem: Basic rationality implies iterated dominance analysis.
Proof: For this proof, and those of several following theorems, see [10] and [11]. []

Iterated dominance analysis handles the column dominance problem in figure 3. Using the basic rationality of K, we can show that action d is irrational for K. Therefore, neither ad nor bd is a possible outcome, and J need not consider them.** Of the remaining two possible outcomes, ac dominates bc (from J's perspective), so action b is irrational for J.

**We write mn to describe the situation where J has chosen action m, and K has chosen action n; we call this a joint action.

IV Action Dependence

Unfortunately, there are situations that cannot be handled by the basic rationality assumptions alone. Their weakness is that they in no way account for dependencies between the actions of interacting agents. This section offers several different, but inconsistent, approaches to dealing with this deficiency.

The simplest case is complete independence. The independence assumption states that each agent's choice of action is independent of the other. In other words, each agent's reaction function yields the same value for every one of the other agent's actions. For all m, m', n, and n', we would then have

    A_K^s(m) = A_K^s(m')    and    A_J^s(n) = A_J^s(n').

The main consequence of independence is a decision rule commonly known as case analysis. If for every "fixed move" of K, one of J's actions is superior to another, then the latter action is forbidden. The difference between case analysis and dominance analysis is that case analysis allows J to compare two possible actions for each action by K without considering any "cross terms."
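The two decision rules can be written down directly. The sketch below is illustrative (not the paper's code): dominance analysis forbids a move when some alternative beats it against every pair of opposing moves, case analysis forbids a move when some alternative beats it for each fixed opposing move, and iterated dominance repeatedly deletes forbidden moves for both agents under the mutual-rationality assumption.

```python
def dominance_forbidden(payoffs, moves_J, moves_K, m):
    """Dominance analysis: m is forbidden for J if some m' yields a higher payoff
    for *every* combination of K-moves paired with m and m' (all cross terms)."""
    return any(all(payoffs[(m, n)][0] < payoffs[(m2, n2)][0]
                   for n in moves_K for n2 in moves_K)
               for m2 in moves_J if m2 != m)

def case_analysis_forbidden(payoffs, moves_J, moves_K, m):
    """Case analysis (assumes independence): m is forbidden if some m' is better
    for each *fixed* move n of K -- no cross terms are compared."""
    return any(all(payoffs[(m, n)][0] < payoffs[(m2, n)][0] for n in moves_K)
               for m2 in moves_J if m2 != m)

def iterated_dominance(payoffs, moves_J, moves_K):
    """Repeatedly remove dominated moves for both agents (mutual rationality)."""
    changed = True
    while changed:
        changed = False
        for m in list(moves_J):
            if len(moves_J) > 1 and dominance_forbidden(payoffs, moves_J, moves_K, m):
                moves_J.remove(m)
                changed = True
        for n in list(moves_K):
            # The same test from K's point of view (index 1 of the payoff pair).
            if len(moves_K) > 1 and any(
                    all(payoffs[(mm, n)][1] < payoffs[(mm2, n2)][1]
                        for mm in moves_J for mm2 in moves_J)
                    for n2 in moves_K if n2 != n):
                moves_K.remove(n)
                changed = True
    return moves_J, moves_K
```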
As an example, consider the payoff matrix in figure 4. Given independence of actions, a utility-maximizing agent J should perform action a: if K performs action c, then J gets 4 units of utility rather than 3, and if K performs action d, then J gets 2 units of utility rather than 1. Dominance analysis does not apply in this case, since the payoff (for J) of the outcome ad is less than the payoff of bc.

[Figure 4: Case Analysis Problem]

Theorem: Basic rationality and independence imply case analysis.

By combining the independence assumption with mutual rationality, we can also show the correctness of an iterated version of case analysis.

Theorem: Mutual rationality and independence imply iterated case analysis.

As an example of iterated case analysis, consider the situation in figure 5. J cannot use dominance analysis, iterated dominance analysis, nor case analysis to select an action. However, using case analysis K can exclude action c. With this information and mutual rationality, J can exclude action a.

[Figure 5: Iterated Case Analysis Problem]

Note that, if two decision procedures are not independent, the independence assumption can lead to nonoptimal results. As an example, consider the following well-known "paradox." An alien approaches you with two envelopes, one marked "$" and the other marked "¢". The first envelope contains some number of dollars, and the other contains the same number of cents. The alien is prepared to give you the contents of either envelope. The catch is that the alien, who is omniscient, is aware of the choice you will make. In an attempt to discourage greed on your part, he has decided to put one unit of currency in the envelopes if you pick the envelope marked $ but one thousand units if you pick the envelope marked ¢. Bearing in mind that the alien has decided on the contents of the envelope before you pick one, which envelope should you select?

[Figure 6: Omniscient Alien Problem]

The payoff matrix for this situation is shown in figure 6. Since the payoff for $ is greater than that for ¢ for either of the alien's options, case analysis dictates choosing the $ envelope. Assuming that the alien's omniscience is accurate, this leads to a payoff of $1.00. While selecting the envelope marked ¢ violates case analysis, it leads to a payoff of $10.00.

We can easily solve this problem by describing the alien's reaction function and abandoning the independence constraint. The appropriate axioms are A_K^s($) = 1 and A_K^s(¢) = 1000. These constraints limit J's attention to the lower left hand corner and the upper right hand corner of the matrix. Since the payoff for selecting the ¢ envelope is better than the payoff for selecting the $ envelope, a rational agent will choose ¢. Although the example given here is whimsical, there are real-world encounters where the assumption of independence is unwarranted, and where the effect illustrated above must enter the rational agent's analysis.

Another interesting example of action dependence is common behavior. The definition requires that we consider not only the current situation s but also the permuted situation s' in which the positions of the interacting agents are reversed. An agent J and an agent K have common behavior if and only if the action of K in situation s is the same as that of agent J in the permuted situation s'.***

    forall s  W_K(s) = W_J(s')

Common behavior is a strong constraint. While it may be insupportable in general, it is reasonable for interaction among artificial agents, especially those built from the same design. Unfortunately, it is not as strong as we would like, except when combined with general rationality.

***This constraint is similar to the similar bargainers assumption in [28,27].
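The alien example shows how dropping the independence assumption changes the verdict. The sketch below is illustrative only; the dollar amounts follow the story above, and the reaction function encodes the stated axioms A($) = 1 and A(¢) = 1000.

```python
def value(envelope, units):
    """Dollar value of taking `envelope` when the alien has put in `units` of currency."""
    return units * (1.00 if envelope == '$' else 0.01)

def alien_reaction(envelope):
    """The alien's (omniscient) choice of contents as a function of your pick."""
    return 1 if envelope == '$' else 1000

# Case analysis treats the contents as fixed independently of your choice:
# for any fixed number of units, the $ envelope is worth more, so it picks $.
assert all(value('$', u) > value('¢', u) for u in (1, 1000))

# Modelling the dependence instead: compare the payoff each choice actually induces.
payoff = {env: value(env, alien_reaction(env)) for env in ('$', '¢')}
best = max(payoff, key=payoff.get)   # -> '¢', worth $10.00 rather than $1.00
```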
V General Rationality

General rationality is a stronger version of basic rationality, the primary difference being that general rationality applies to decision procedures rather than to individual actions. We introduce a new set of relations and functions to define general rationality. Let R_i denote a unary predicate over procedures that is true if and only if its argument is rational for agent i. A generally rational agent can use a procedure only if it is rational:

    not R_J(P)  implies  exists s  (W_J(s) != P(s)).

Recall that W_J here is a function that designates the action performed by J in each situation, as described above. In order to use this definition to judge which actions are rational, we of course need to define further the rationality predicate R_J.

The definition of rationality for a decision procedure is analogous to that for individual actions. A procedure is irrational if there is another procedure that dominates it:

    (exists P'  D_J(P', P))  implies  not R_J(P).

One procedure dominates another if and only if it yields as good a payoff in every game and a better payoff in at least one game. Let the term A_K^s(P) denote the action that agent K will take in situation s if agent J uses procedure P. Then the formal definition of dominance is:

    D_J(P', P)  iff  forall s  p_J^s(P'(s), A_K^s(P')) >= p_J^s(P(s), A_K^s(P))
                 and  exists s  p_J^s(P'(s), A_K^s(P')) > p_J^s(P(s), A_K^s(P)).

The advantage of general rationality is that, together with common behavior (defined in the last section), it allows us to eliminate joint actions that are dominated by other joint actions for all agents, a technique called dominated case elimination.

Theorem: General rationality and common behavior imply dominated case elimination.

Proof: Let s be a situation with joint actions uv and xy such that p_J^s(u, v) > p_J^s(x, y) and p_K^s(u, v) > p_K^s(x, y), and let P be a decision procedure such that P(s) = x and P(s') = y (where s' is the permuted situation, in which J and K's positions have been reversed). Let Q be a decision procedure that is identical to P except that Q(s) = u and Q(s') = v. Under the common behavior assumption, Q dominates P for both J and K and, therefore, P is generally irrational. []

In other words, if a joint action is disadvantageous for both agents in an interaction, at least one will perform a different action. This conclusion has an analog in the informal arguments of [5] and [17].

No-conflict situations are handled as a special case of this result. The best plan rule states that, if there is a joint action that maximizes the payoff to all agents in an interaction, then it should be selected.

Corollary: General rationality and common behavior imply best plan.

Proof: Apply dominated case elimination to each of the alternatives. []

As an example of best plan, consider the situation pictured in figure 7. None of the preceding techniques (e.g., dominance analysis, case analysis, iterated case analysis) applies. However, ac dominates all of the other joint actions, and so J will perform action a and K will perform action c (under the assumptions of general rationality and common behavior).

[Figure 7: Best Plan]
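A sketch of dominated case elimination and the best plan rule follows. It is illustrative only: the figure 7 payoffs used below are reconstructed from the garbled figure and should be treated as assumptions; the text states only that ac dominates every other joint action for both agents.

```python
def dominated_case_elimination(payoffs):
    """Delete joint actions that are strictly worse for *both* agents than some other
    joint action (justified by general rationality plus common behavior)."""
    joint = list(payoffs)
    return [a for a in joint
            if not any(payoffs[b][0] > payoffs[a][0] and payoffs[b][1] > payoffs[a][1]
                       for b in joint)]

def best_plan(payoffs):
    """If a single joint action maximizes every agent's payoff, return it."""
    best_J = max(p[0] for p in payoffs.values())
    best_K = max(p[1] for p in payoffs.values())
    for action, (uj, uk) in payoffs.items():
        if uj == best_J and uk == best_K:
            return action
    return None

# Assumed reconstruction of figure 7: ac is best for both agents, and none of the
# earlier decision rules (dominance or case analysis) applies to this matrix.
fig7 = {('a', 'c'): (4, 4), ('a', 'd'): (2, 2),
        ('b', 'c'): (1, 1), ('b', 'd'): (3, 3)}

assert best_plan(fig7) == ('a', 'c')
assert dominated_case_elimination(fig7) == [('a', 'c')]
```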
General rationality and common behavior also handle difficult situations like the prisoner's dilemma [2,5,25] pictured in figure 8. Since the situation is symmetric (i.e., s = s', using our earlier notation), common behavior requires that both agents perform the same action; general rationality eliminates the joint action bd since it is dominated by ac. The agents perform actions a and c respectively, and each receives 3 units of utility. By contrast, case analysis dictates that the agents perform actions b and d, leading to a payoff of only 2 units for each.

[Figure 8: Prisoner's Dilemma]

Unfortunately, general rationality and common behavior are not always consistent. As an example, consider the battle of the sexes problem in figure 9. Again the situation is symmetric, and common behavior dictates that both agents perform the same action. However, both joint actions on the ac/bd diagonal are forbidden by dominated case elimination.

[Figure 9: Battle of the Sexes]

This inconsistency can be eliminated by restricting the simultaneous use of general rationality and common behavior to non-symmetric situations. Nevertheless, in a no-communication situation, the resolution of a conflict such as that in figure 9 remains undetermined by the constraints we have introduced.

VI Conclusions

A. Coverage of this approach

There are 144 distinct interactions between two agents with two moves and no duplicated payoffs. Of these, the techniques presented here cover 117. The solutions to the remaining 27 cases are unclear, e.g., the situation in figure 10. For a discussion of a variety of other techniques that can be used to handle these situations, as well as a discussion contrasting all of these approaches with those used in game theory, see [27].

[Figure 10: Anomalous situation]

B. Suitability of this approach

This paper's analysis of interactions presupposes a variety of strong assumptions. First, the agents are assumed to have common knowledge of the interaction matrix, including choices of actions and their outcomes. Second, there is no incompleteness in the matrix (i.e., there are no missing utilities). Third, the interaction is viewed in isolation (i.e., no consideration is given to future interactions and the effects current choices might have on them). Fourth, there must be effective simultaneity in the agents' actions (otherwise, there are issues concerning which agent moves first, and the new situation that then confronts the second agent).

Admittedly, these are serious assumptions, but there are some situations where they are satisfied. Consider as an example two ALVs approaching opposite ends of a narrow tunnel, each having the choice of using the tunnel or trying one of several alternate routes. It is not unreasonable to assume that they have common knowledge of one another's approach (e.g., through reconnaissance). Nor is it unreasonable to assume that the agents have some models of one another's utility functions. Finally, in the domain of route navigation, the choices are often few and well-defined. There might be no concern (in this case) over future encounters, and the decisions are effectively simultaneous. The types of analysis in this paper are an appropriate tool to use in deciding what action to take.
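The coverage figure of 144 distinct interactions can be checked mechanically: with each agent's four payoffs a permutation of 1 through 4, there are 24 x 24 = 576 labelled matrices, and identifying matrices that differ only by renaming each agent's two moves leaves 576 / 4 = 144. A small enumeration sketch (illustrative, not from the paper):

```python
from itertools import permutations

def canonical(game):
    """Canonical form of a 2x2 game under renaming of each agent's two moves.
    A game is a tuple of payoff pairs for the joint actions (ac, ad, bc, bd)."""
    ac, ad, bc, bd = game
    variants = [
        (ac, ad, bc, bd),   # original labelling
        (bc, bd, ac, ad),   # swap J's two moves (rows)
        (ad, ac, bd, bc),   # swap K's two moves (columns)
        (bd, bc, ad, ac),   # swap both
    ]
    return min(variants)

games = set()
for pj in permutations((1, 2, 3, 4)):        # J's payoffs over the four joint actions
    for pk in permutations((1, 2, 3, 4)):    # K's payoffs
        games.add(canonical(tuple(zip(pj, pk))))

print(len(games))   # -> 144, the count quoted above
```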
For most domains, of course, the assumptions listed above are far too limiting, and clearly more work needs to be done in developing this approach so that each of the most restrictive assumptions can be removed in turn. The work in [28,27] represents steps in that direction. Cur- rently, research on the question of incomplete matrices is being pursued, so that the type of conflict analysis pre- sented in this paper can be applied to interactions with incomplete information [26]. Future work will focus on is- sues arising from multiple encounters, such as retaliation and future compensation for present loss. Intelligent agents will inevitably need to interact flexibly with other entities. The existence of conflicting goals will need to be handled by these automated agents, just as it is routinely handled by humans. The results in this paper and their extensions should be of use in the design of in- telligent agents able to function successfully in the face of such conflict. REFERENCES [l] D. E. Appelt. Planning Natural Language Utterances to Satisfy Multiple Goals. PhD thesis, Stanford Univ., 1981. [2] Robert Axelrod. The Evolution of Cooperation. Basic Books, Inc., New York, 1984. [3] D. Corkill. A F ramework for Organizalional Self-Design in Distributed Problem-Solving Networks. PhD thesis, University of Massachusetts, Amherst, MA, 1982. [4] Daniel D. Corkill and Victor R. Lesser. The use of meta- level control for coordination in a distributed problem solv- ing network. IJCAI-83, pp. 748-756. [5] L. Davis. Prisoners, paradox and rationality. American Philosophical Quarterly, 14, 1977. [S] Randall Davis and Reid G. Smith. Negotiation as a metaphor for distributed problem solving. Artificial In- telligence, 20(1):63-109, 1983. j’i’] Edmund H. Durfee, Victor R. Lesser, and Daniel D. Corkill. Increasing coherence in a distributed problem solving network. IJCAI-85, pp. 1025-1030. [8] R. Fagin and J. Y. Halpern. Belief, awareness, and limited reasoning: preliminary report. IJCAI-85, pp. 491-501. [9] Michael R. Genesereth. The role of plans in automated consultation. IJCAI-79, pp. 311-319. [lo] Michael R. G enesereth, Matthew L. Ginsberg, and Jef- frey S. Rosenschein. C ooperation without Communication. HPP Report 84-36, Heuristic Programming Project, Corn- puter Science Department, Stanford University, September 1984. [ll] Michael R. G enesereth, Matthew L. Ginsberg, and Jef- frey S. Rosenschein. Solving the Prisoner’s Dilemma. Re- port NO. STAN-CS-84-1032 (HPP-84-41), Computer Sci- ence Department, Stanford University, November 1984. [ 121 Michael Georgeff. Communication and interaction in multi-agent planning. AAAI-83, pp. 125-129. [13] Michael Georgeff. A theory of action for multi-agent plan- ning. AAAI-84, pp. 121-125. [14] M. L. Ginsberg. Decision procedures. In Proceedings of the Distributed Artificial Intelligence Workshop, pages 43- 65, AAAI, Sea Ranch, CA, December 1985. [l5] Ira P. G Id t o s ein. Bargaining Between Goals. A. I. Work- ing Paper 102, Massachusetts Institute of Technology Ar- tificial Intelligence Laboratory, 1975. [16] Joseph Y. Halpern and Yoram Moses. Knowledge and Common I(nowledge in a Distributed Environment. Re- search Report IBM RJ 4421, IBM Research Laboratory, San Jose, California, October 1984. [17] D. R. Hofstadter. Metamagical themes-computer tour- naments of the prisoner’s dilemmasuggest how cooperation evolves. Scientific American, 248(5):1626, May 1983. [IS] Kurt Konolige. A computational theory of belief intro- spection. IJCAI-85, pp. 502-508. 
[19] Kurt Konohge. A Deduction Model of Belief and ita Log- its. PhD thesis, Stanford University, 1984. [20] A. L. Lansky. Behavioral Specification and Planning for Multiagent Domains. Technical Note 360, SRI Interna- tional, Menlo Park, California, November 1985. [21] V. Lesser and D. Corkill. The distributed vehicle moni- toring testbed: a tool for investigating distributed problem solving networks. AI Magazine, 4(3):15-33, Fall 1983. [22] Hector J. Levesque. A logic of implicit and explicit belief. AAAI-84, pp. 198-202. [23] R. Duncan Lute and Howard Raiffa. Games and Deci- aions, Introduction and Critical Survey. John Wiley and Sons, New York, 1957. [24] R. Moore. A formal theory of knowledge and action. In J. R. Hobbs and R. C. Moore, editors, Formal Theories of the Commonsense World, Ablex Publishing Co., 1985. [25] D. Parfit. Reasons and Persons. Clarendon Press, Ox- ford, 1984. [26] Jeffrey S. Rosenschein. Cooperation in the Presence of Im- complete Information. Technical Report, Knowledge Sys- tems Laboratory, Computer Science Dept., Stanford Univ., 1986. In preparation. [27] Jeffrey S. Rosenschein. Rational Interaction: Cooperation Among Intelligent Agents. PhD thesis, Stanford Univer- sity, 1986. Also published as STAN-CS-85-1081 (KSL85- 40), Department of Computer Science, Stanford University, October 1985. (281 Jeffrey S. R osenschein and Michael R. Genesereth. Deals among rational agents. IJCAI-85, pp. 91-99. [29] R. G. Smith. A F Ta4meWOTk fOT Problem Solwing in a Dis- tributed Processing Environment. PhD thesis, Stanford University, 1978. (301 R. Steeb, S. C ammarata, F. Hayes-Roth, and R. Wesson. Distribzlted intelligence for air fleet control. Technical Re- port WD-839-ARPA, The Rand Corporation, Dec. 1980. Planning: AUTOMATED REASONING / 5’
INCREMENTAL PLANNING TO CONTROL A BLACKBOARD-BASED PROBLEM SOLVER Edmund H. Durfee and Victor R. Lesser Department of Computer and Information Science University of Massachusetts Amherst, Massachusetts 01003 ABSTRACT To control problem solving activity, a planner must resolve uncertainty about which specific long-term goals (solutions) to pursue and about which sequences of actions will best achieve those goals. In this paper, we describe a planner that abstracts the problem solving state to recognize possible competing and compatible solutions and to roughly predict the importance and expense of developing these solutions. With this information, the planner plans sequences of problem solving activities that most efficiently resolve its uncertainty about which of the possible solutions to work toward. The planner only details actions for the near future because the results of these actions will influence how (and whether) a plan should be pursued. As problem solving ‘proceeds, the planner adds new details to the plan incrementally, and monitors and repairs the plan to insure it achieves its goals whenever possible. Through experiments, we illustrate how these new mechanisms significantly improve problem solving decisions and reduce overall computation, We briefly discuss our current research directions, including how these mechanisms can improve a problem solver’s real- time response and can enhance cooperation in a distributed problem solving network. I INTRODUCTION A problem solver’s planning component must resolve control uncertainty stemming from two principal sources. As in typical planners, it must resolve uncertainty about which sequence of actions will satisfy its long-term goals. Moreover, whereas most planners are given (possibly prioritized) well-defined, long-term goals, a problem solver’s planner must often resolve uncertainty about the goals to achieve. For example, an interpretation problem solver that integrates large amounts of data into “good” overall interpretations must use its data to determine what specific long-term goals (interpretations) it should pursue. Because the set of possible interpretations may be intractably large, the problem solver uses the data to form promising partial interpretations and then extends these to converge on likely complete interpretations. The blackboard-based architecture developed in Hearsay-II permits such data-directed problem solving [ 7). In a purely data-directed problem solver, control decisions can be based only on the desirability of the This research was sponsored, in part, by the National Science Foundation under Grant MCS-8306327, by the National Science Foundation under Support and Maintenance Grant DCR-8318776, by the National Science Foundation under CER Grant DCR-8500332, and by the Defense Advanced Research Projects Agency (DOD), monitored by the Office of Naval Research under Contract NRO&-041. expected immediate results of each action. The Hearsay-II system developed an algorithm for measuring desirability of actions to better focus problem solving [lo]. Extensions to the blackboard architecture unify data-directed and goal-directed control by representing possible extensions and refinements to partial solutions as explicit goals [2]. Through goal processing and subgoals, sequences of related actions can be triggered to achieve important goals. 
Further modifications separate control knowledge and decisions from problem solving activities, permitting the choice of problem solving actions to be influenced by strategic considerations [9]. However, none of these approaches develop and use a high-level view of the current problem solving situation so that the problem solver can recognize and work toward more specific long-term goals. In this paper, we introduce new mechanisms that allow a blackboard-based problem solver to form such a high-level view. By abstracting its state, the problem solver can recognize possible competing and compatible interpretations, and can use the abstract view of the data to roughly predict the importance and expense of developing potential partial solutions. These mechanisms are much more flexible and complex than those we previously developed [6] and allow the recognition of relationships between distant as well as nearby areas in the solution space, We also present new mechanisms that use the high- level view to form plans to achieve long-term goals. A plan represents specific actions for the near future and more general actions for the distant future. By forming detailed plans only for the near future, the problem solver does not waste time planning for situations that may never arise; by sketching out the entire plan, details for the near-term can be based on a long-term view. As problem solving proceeds, the plan must be monitored (and repaired when necessary), and new actions for the near future are added incrementally. Thus, plan formation, monitoring, modification, and execution are interleaved [1,3,8,12,13]. We have implemented and evaluated our new mechanisms in a vehicle monitoring problem solver, where they augment previously developed control mechanisms. In the next section, we briefly describe the vehicle monitoring problem solver. Section 3 provides details about how a high-level view is formed as an abstraction hierarchy. The representation of a plan and the techniques to form and dynamically modify plans are presented in Section 4. In Section 5, experimental results are discussed to illustrate the benefits and the costs of the new mechanisms. Finally, Section 6 recapitulates our approach and describes how the new mechanisms can improve real-time responsiveness and can lead to improved cooperation in a distributed problem solving network. 58 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. II A VEHICLE MONITORING PROBLEM SOLVER A vehicle monitoring problem solving node in the Distributed Vehicle Monitoring Testbed (DVMT) applies simplified signal processing knowledge to acoustically sensed data in an attempt to identify, locate, and track patterns of vehicles moving through a two-dimensional space [ll]. Each node has a blackboard-based problem solving architecture, with knowledge sources and levels of abstraction appropriate for vehicle monitoring. A knowledge source (KS) performs the basic problem solving tasks of extending and refining hypotheses (partial solutions). The architecture includes a goal blackboard and goal processing module, and through goal processing a node forms knowledge source instantiations (KSIs) that represent potential KS applications on specific hypotheses to satisfy certain goals. KSIs are prioritized based both on the estimated beliefs of the hypotheses each may produce and on the ratings of the goals each is expected to satisfy. 
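As a purely illustrative rendering of the agenda mechanism just described, the sketch below rates a knowledge source instantiation from the estimated beliefs of the hypotheses that stimulate it and the ratings of the goals it is expected to satisfy, and keeps pending KSIs on a priority queue. The names and the equal weighting are assumptions, not the testbed's actual rating formula.

```python
import heapq

def rate_ksi(stimulus_hyps, goals, w_belief=0.5, w_goal=0.5):
    """Combine the estimated belief of the stimulus hypotheses with the ratings of
    the goals the KSI is expected to satisfy (the 50/50 weighting is an assumption)."""
    belief = max((h['belief'] for h in stimulus_hyps), default=0.0)
    goal_rating = max((g['rating'] for g in goals), default=0.0)
    return w_belief * belief + w_goal * goal_rating

def post_ksi(agenda, ks_name, stimulus_hyps, goals):
    """Add a pending KSI; heapq is a min-heap, so the rating is negated."""
    rating = rate_ksi(stimulus_hyps, goals)
    heapq.heappush(agenda, (-rating, ks_name, stimulus_hyps, goals))

def next_ksi(agenda):
    """Pop the highest-rated KSI; this is the next problem-solving action invoked."""
    neg_rating, ks_name, hyps, goals = heapq.heappop(agenda)
    return ks_name, -neg_rating

agenda = []
post_ksi(agenda, 'extend-track', [{'belief': 0.7}], [{'rating': 0.9}])
post_ksi(agenda, 'synthesize-group', [{'belief': 0.4}], [{'rating': 0.5}])
print(next_ksi(agenda))   # the higher-rated KSI ('extend-track') is invoked first
```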
The goal processing component also recognizes interactions between goals and adjusts their ratings appropriately; for example, subgoals of an important goal might have their ratings boosted. Goal processing can therefore alter KS1 rankings to help focus the node’s problem solving actions on achieving the subgoals of important goals [2]. A hypothesis is characterized by one or more time- locutions (where the vehicle was at discrete sensed times), by an event-class (classifying the frequency or vehicle type), by a belief (the confidence in the accuracy of the hypothesis), and by a blackboard-level (depending on the amount of processing that has been done on the data). Synthesis KSs take one or more hypotheses at one blackboard-level and use event-class constraints to generate hypotheses at the next higher blackboard-level. Extension KSs take several hypotheses at a given blackboard-level and use vehicle movement constraints (maximum velocities and accelerations) to form hypotheses at the same blackboard- level that incorporate more time-locations. For example, in Figure 1 each blackboard-level is represented as a surface with spatial dimensions z and y. At blackboard-level s (signal level) there are 10 hypotheses, each incorporating a single time-location (the time is indicated for each). Two of these hypotheses have been synthesized to blackboard-level g (group level). In turn, these hypotheses have been synthesized to blackboard-level v (vehicle level) where an extension KS has connected them into a single track hypothesis, indicated graphically by connecting the two locations. Problem solving proceeds from this point by having the goal processing component form goals (and subgoals) to extend this track to time 3 and instantiating KSIs to achieve these goals. The highest rated pending KS1 is then invoked and triggers the appropriate KS to execute. New hypotheses are posted on the blackboard, causing further goal processing and the cycle repeats until an acceptable track incorporating data at each time is created. One of the potential solutions is indicated at blackboard-level v in Figure 1. III A HIGH-LEVEL VIEW FOR PLANNING AND CONTROL Planning about how to solve a problem often requires viewing the problem from a different perspective. For example, a chemist generally develops a plan for deriving a new compound not by entering a laboratory and envisioning possible sequences of actions but by representing the Blackboard-levels are represented as surfaces containing hypotheses (with associated sensed times). Hypotheses at higher blackboard-levels are synthesized from lower level data, and a potential solution is illustrated with a dotted track at blackboard-level v. Figure 1: An Example Problem Solving State. problem with symbols and using these symbols to hypothesize possible derivation paths. By transforming the problem into this representation, the chemist can more easily sketch out possible solutions and spot reactions that lead nowhere, thereby improving the decisions about the actions to take in the laboratory. A blackboard-based, vehicle monitoring problem solver requires the same capabilities. Transforming the node’s problem solving state into a suitable representation for planning requires domain knowledge to recognize relationships-in particular, long-term relationships-in the data. 
This transformation is accomplished by incrementally clustering data into increasingly abstract groups based on the attributes of the data: the hypotheses can be clustered based on one attribute, the resulting clusters can be further clustered based on another attribute, and so on. The transformed representation is thus a hierarchy of clusters where higher-level clusters abstract the informat ion of lower-level clusters. More or less detailed views of the problem solving situation are found by accessing the appropriate level of this abstraction hierarchy, and clusters at the same level are linked by their relationships (such as having adjacent time frames or blackboard-levels, or having nearby spatial regions). We have implemented a set of knowledge-based clustering mechanisms for vehicle monitoring, each of which takes clusters at one level as input and forms output clusters at a new level. Each mechanism uses different domain- dependent relationships, including: temporal relationships: the output cluster combines any input clusters that represent data in adjacent time frames and that are spatially near enough to satisfy simple constraints about how far a vehicle can travel in one time unit. spatial relationships: the output cluster combines any input clusters that represent data for the same time frames and that are spatially near enough to represent sensor noise around a single vehicle. blackboard-level relationships: the output cluster combines any input clusters that represent the same data at different blackboard-levels. Planning: AUTOMATED REASONING / 59 l event-class relationships: the output cluster combines any input clusters that represent data with the same event-class (type of vehicle). l belief relationships: the output cluster combines input clusters representing data with similar beliefs. The abstraction hierarchy is formed by sequentially applying the clustering mechanisms. The order of application depends on the bias of the problem solver: since the order of clustering affects which relationships are most emphasized at the highest levels of the abstraction hierarchy, the problem solver should cluster to emphasize the relationships it expects to most significantly influence its control decisions. Issues in representing bias and modifying inappropriate bias are discussed elsewhere [4]. To illustrate clustering, consider the clustering sequence in Figure 2, which has been simplified by ignoring many cluster attributes such as event-classes, beliefs, and volume of data and pending work; only a cluster’s blackboard- levels (a cluster can incorporate more than one) and its time-regions (indicating a region rather than a specific location for a certain time) are discussed. Initially, the problem solving state is nearly identical to that in Figure 1, except that for each hypothesis in Figure 1 there are now two hypotheses at the same sensed time and slightly different locations. In Figure 2a, each cluster CL (where 1 is the level in the abstraction hierarchy) corresponds to a single hypothesis, and the graphical representation of the clusters mirrors a representation of the hypotheses. By clustering based on blackboard-level, a second level of the abstraction hierarchy is formed with 19 clusters (Figure 2b). As is shown graphically, this clustering ‘Lcollapses” the blackboard by combining clusters at the previous abstraction level that correspond to the same data at different blackboard-levels. In Figure 2c, clustering by spatial relationships forms 9 clusters. 
Clusters at the second abstraction level whose regions were close spatially for a given sensed time are combined into a single cluster. Finally, clustering by temporal relationships in Figure 2d combines any clusters at the third abstraction level that correspond to adjacent sensed times and whose regions satisfy weak vehicle velocity constraints. The highest level clusters (Figure 2d) indicate four rough estimates of potential solutions: a vehicle moving through regions R1R2R3R4&&, through Ri&R&RkRL, through R~R!&R4R5RG, or through R\RLR3R4Rk.Rk. The problem solver could use this view to improve its control decisions. For example, this view allows the problem solver to recognize that all potential solutions pass through Rs at sensed time 3 and R4 at sensed time 4. By boosting the ratings of KSIs in these regions, the problem solver can focus on building high-level results that are most likely to be part of any eventual solution. In some respects, the formation of the abstraction hierarchy is akin to a rough pass at solving the problem, as indeed it must be if it is to indicate where the possible solutions may lie. However, abstraction differs from problem solving because it ignores many important constraints needed to solve the problem. Forming the abstraction hierarchy is thus much less computationally expensive than problem solving, and results in a representation that is too inexact as a problem solution but is suitable for control. For example, although the high-level clusters in Figure 2d indicate that there are four potential solutions, three of these are actually impossible based on the more stringent constraints applied by the KSs. The high-level view afforded by the abstraction hierarchy therefore does not provide answers but only rough indications about the long-term promise of various areas of the solution space, and this additional knowledge can be employed by the problem solver to make better control decisions as it chooses its next task. IV INCREMENTAL PLANNING The planner further improves control decisions by intelligently ordering the problem solving actions. Even with the high-level view, uncertainty remains about whether each long-term goal can actually be achieved, about whether an action that might contribute to achieving a long-term goal will actually do so (since long-term goals Cluster Time- BB- regions levels (hY1)(252Y2) 21 (1XlYI) 9 (2X2Y2) 9 Subclusters/ X /‘: - 43 (6xjyg’) 4 . - (4 Cluster Time- BB- Subclusters regions levels C2 4 (~~~Y1)(~~2Y2)~~~~~c:,c~,c~,c~,c~ % 1 C6 4 d CL (62;‘~;‘) s 43 (b) Cluster Time- BB- regions levels Subclusters Cluster Time- BB- regions fevels Subclusters (1h)(2&)(3&) 3 3 3 “: (4&)(5&)(6R,) ‘jg’ ’ Cl, C.i>Cb, c;, c; wvP~:H3~3) 3 3 3 ” (4Rzr)(5R;)(6R;) ’ c2, c3, Cd> 3 3 3 c ~5 > c 7 , %I * (4 A sequence of clustering steps are illustrated both with tables (left) and graphically (right). cf represents cluster z at level 1 of the abstraction hierarchy. initial clusters (a), are clustered by blackboard-level (b), then by spatial proximity (c), and finally by temporal relationships (d). Figure 2: Incremental Clustering Example. 60 i SCIENCE are inexact), and about how to most economically form a desired result (since the same result can often be derived in different ways). The planner reduces control uncertainty in two ways. 
First, it orders the intermediate goals for achieving long-term goals so that the results of working on earlier intermediate goals can diminish the uncertainty about how (and whether) to work on later intermediate goals. Second, the planner forms a detailed sequence of steps to achieve the next intermediate goal: it determines the least costly way to form a result to satisfy the goal. The planner thus sketches out long-term intentions as sequences of intermediate goals, and forms detailed plans about the best way to achieve the next int)ermediate goal. A long-term vehicle monitoring goal to generate a track consisting of several time-locations can be reduced into a series of intermediate goals, where each intermediate goal represents a desire to extend the track satisfying the previous intermediate goal into a new time-location.* To order the intermediate goals, the planner currently uses three domain-independent heuristics: Heuristic-l Prefer common intermediate goals. Some intermediate goals may be common to several long- term goals. If uncertain about which of these long- term goals to pursue, the planner can postpone its decision by working on common intermediate goals and then can use these results to better distinguish between the long-term goals. This heuristic is a variation of least-commitment 1141. Heuristic-2 Prefer less costly intermediate goals. Some intermediate goals may be more costly to achieve than others. The planner can quickly estimate the relative costs of developing results in different areas by comparing their corresponding clusters at a high level of the abstraction hierarchy: the number of event-classes and the spatial range of the data in a cluster roughly indicates how many potentially competing hypotheses might have to be produced. This heuristic causes the planner to develop results more quickly. If these results are creditable they provide predictive information, otherwise the planner can abandon the plan after a minimum of effort. Heuristic-3 Prefer discriminative intermediate goals. If the planner must discriminate between possible long- term goals, it should prefer to work on intermediate goals that most effectively indicate the relative promise of each long-term goal. When no common intermediate goals remain this heuristic triggers work where the long-term goals differ most. These heuristics are interdependent. For example, common intermediate goals may also be more cost,ly, as in one of the experiments described in the next section. The relative influence of each heuristic can be modified parametrically. Having identified a sequence of intermediate goals to achieve one or more long-term goals, t,he planner can reduce its uncertainty about how to satisfy these intermediate goals by planning in more detail. If the planner possesses models of the KSs that roughly indicate both the costs of a particular action and the general characteristics of *In general terms. an intermediate goal in any interpretation t.ask is to process a new piece of information and to integrate it into the current partial interpretation. the output of that action (based on the characteristics of the input), then the planner can search for the best of the alternative ways to satisfy an intermediate goal. 
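The three ordering heuristics can be combined into a single score, as in the illustrative sketch below. The representation of an intermediate goal, the scoring form, and the weights are all assumptions; the paper states only that the heuristics' relative influence can be adjusted parametrically.

```python
def order_intermediate_goals(intermediate_goals, w_common=1.0, w_cost=1.0, w_disc=0.5):
    """Rank intermediate goals by the three heuristics described above.

    Each intermediate goal is assumed to be a dict with:
      'shared_by'      - number of long-term goals it contributes to      (Heuristic-1)
      'est_cost'       - rough cost estimate from the abstraction hierarchy (Heuristic-2)
      'discrimination' - how strongly its outcome separates the competing
                         long-term goals, e.g. 1.0 if unique to one of them (Heuristic-3)
    """
    def score(goal):
        return (w_common * goal['shared_by']
                - w_cost * goal['est_cost']
                + w_disc * goal['discrimination'])
    return sorted(intermediate_goals, key=score, reverse=True)

# Example in the spirit of situation A: the common tracking work in d3-d4 is ordered
# first; raising w_cost lets cheaper, more discriminative goals overtake a costly
# common goal, as in situation A'.
goals = [
    {'name': 'track d3-d4',   'shared_by': 4, 'est_cost': 1.0, 'discrimination': 0.0},
    {'name': 'extend to d2',  'shared_by': 2, 'est_cost': 1.0, 'discrimination': 0.5},
    {'name': "extend to d2'", 'shared_by': 2, 'est_cost': 1.0, 'discrimination': 0.5},
]
plan_order = order_intermediate_goals(goals)
```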
We have provided the planner for our vehicle monitoring problem solver with coarse KS models that allow it to make reasonable predictions about short sequences of actions to find the sequences that best achieve intermediate goals.“ To reduce the effort spent on planning, the planner only forms detailed plans for the next intermediate goal: since the results of earlier intermediate goals influence decisions about how and whether to pursue subsequent intermediate goals, the planner avoids expending effort forming detailed plans that may never be used. Given the abstraction hierarchy in Figure 2, the planner recognizes that achieving each of the four long-term goals (Figure 2d) entails intermediate goals of tracking the vehicle through these regions. Influenced predominantly by Heuristic-l, the planner decides to initially work toward all four long-term goals at the same time by achieving their common intermediate goals. A detailed sequence of actions to drive the data in R3 at level s to level v is then formulated. The planner creates a plan whose attributes their values in this example) are: the long-term goals the plan contributes to achieving (in the example, there are four); the predicted, underspecified time-regions of the eventual solution (in the example, the time regions are (1 RlorR:)(2 Rzor$)(3 &) . . . ); the predicted vehicle type(s) of the eventual solution (in the example, there is only one type); the order of intermediate goals (in the example, begin with sensed time 3, then time 4, and then work both backward to earlier times and forward to later times); the blackboard-level for tracking, depending on the available KSs (in the example, this is level v); a record of past actions, updated as actions are taken (initially empty); a sequence of the specific actions to take in the short- term (in the example, the detailed plan is to drive data in region R3 at level s to level v); a rating based on the number of long-term goals being worked on, the effort already invested in the plan, the average ratings of the KSIs corresponding to the detailed short-t*erm actions, the average belief of the partial solutions previously formed by the plan, and the predicted beliefs of the partial solutions to be formed by the detailed activities. As each predicted action is consecutively pursued, the record of past actions is updated and the actual results of the action are compared with the general characteristics predicted by the planner. When these agree, the next action in the detailed short-term sequence is performed if there is one, otherwise the planner develops another detailed sequence for the next intermediate goal. In our example, after forming results in R3 at a high blackboard- level, the planner forms a sequence of actions to do the same in R4. When the actual and predicted results disagree **If the predict,ecl cost of satisfying an intermediate goal deviates substantially from the crude estimate based on the abstract view, the ordering of the intermediate goals may need to be revised. Planning: AUTOMATED REASONING / 6 1 (since the planner’s models of the KSs may be inaccurate), the planner must modify the plan by introducing additional actions that can get the plan back on track. If no such actions exist, the plan is aborted and the next highest rated plan is pursued. 
If the planner exhausts its plans before forming a complete solution, it reforms the abstraction hierarchy (incorporating new information and/or clustering to stress different problem attributes) and attempts to find new plans. Throughout this paper, we assume for simplicity that no important new information arrives after the abstraction hierarchy is formed; when part of a more dynamic environment, the node will update its abstraction hierarchy and plans with such information. The planner thus generates, monitors, and revises plans, and interleaves these activities with plan execution. In our example, the common intermediate goals are eventually satisfied and a separate plan must be formed for each of the alternative ways to proceed. After finding a partial track combining data from sensed times 3 and 4, the planner decides to extend this track backward to sensed time 2. The long-term goals indicate work in either Rz or RL. A plan is generated for each possibility, and the more highly rated of these plans is followed. Note, however, that the partial track already developed can provide predictive information that, through goal processing, can increase the rating of work in one of these regions and not the other. In this case, constraints that limit a vehicle’s turning rate are used when goal processing (subgoaling) to increase the ratings of KSI’s in R&, thus making the plan to work there next more highly rated.* The planner and goal processing thus work in tandem to improve problem solving performance. The goal processing uses a detailed view of local interactions between hypotheses, goals, and KSJs to differentiate between alternative actions. Goal processing can be computationally wasteful, however, when it is invoked based on strictly local criteria. Without the knowledge of long-term reasons for building a hypothesis, the problem solver simply forms goals to extend and refine the hypothesis in all possible ways. These goals are further processed (subgoaled) if they are at certain blackboard- levels, again regardless of any long-term justification for doing so. With its long-term view, the planner can drastically reduce the amount of goal processing. As it pursues, monitors, and repairs plans, the planner identifies areas where goals and subgoals could improve its decisions and selectively invokes goal processing to form only those goals that it needs. As the experimental results in the next section indicate, a planner with the ability to control goal processing can dramatically reduce overhead. V EXPERIMENTS IN INCREMENTAL PLANNING We illustrate the advantages and the costs of our planner in several problem solving situations, shown in Figure 3. Situation A is the same as in Figure 2 except that each region only has one hypothesis. Also note that the data in the common regions is most weakly sensed. In situation B, no areas are common to all possible solutions, and issues in plan monitoring and repair are therefore stressed. Finally, situation C has many potential solutions, where each appears equally likely from a high-level view, ‘In fact the turns to RZ and Rk exceed these constraints, SO the only track that satisfies the constraints is R~R~&R~&.&. d14 4 L 4 - solution = d:dad3d4dsdG A solutions = dldzdaddds, d’ d’ d’ d’ d’ 1 2 3 4 5 C solutions = dld2dsd4d5, d;d;d&d;d; B d, = data for sensed time i, l = strongly sensed, l = moderately sensed, 0 = weakly sensed Three problem solving situations are displayed. 
The pos- sible tracks (found in the abstraction hierarchy) are indi- cated by connecting the related data points, and the ac- ceptable solution(s) for each situation are given. Figure 3: The Experimental Problem Situations. When evaluating the new mechanisms, we consider two important factors: how well do they improve control decisions (reduce the number of incorrect decisions), and how much additional overhead do they introduce to achieve this improvement. Since each control decision causes the invocation of a KSI, the first factor is measured by counting KSIs invoked-the fewer the KSIs, the better the control decisions. The second factor is measured as the actual computation time (runtime) required by a node to solve a problem, representing the combined costs of problem solving and control computation. The experimental results are summarized in Table 1. To determine the effects of the new mechanisms, each problem situation was solved both with and without them, and for each case the number of KSIs and the computation time were measured. We also measured the number of goals generated during problem solving to illustrate how control overhead can be reduced by having the planner control the goal processing. Experiments El and E2 illustrate how the new mechanisms can dramatically reduce both the number of KSIs invoked and the computation time needed to solve the problem in situation A. Without these mechanisms (El), the p ro bl em solver begins with the most highly sensed data (di, da, db, and d:). This incorrect data actually corresponds to noise and may have been formed due to sensor errors or echoes in the sensed area. The problem solver attempts to combine this data through ds and da but fails because of turning constraints, and then it uses the results from d3 and d4 to eventually work its way back out to the moderately sensed correct data. With the new mechanisms (E2), problem solving begins at d3 and da and, because the track formed (d3d4) triggers goal processing to stimulate work on the moderate data, the solution is found much more quickly (in fact, in 62 / SCIENCE Expt Situ Plan. 3 KSIs Rtime Goals Comments El A no 58 17.2 262 - E2 E3 2 yes 24 8.1 49 - yes 32 19.4 203 1 E4 A’ no 58 19.9 284 2 E5 A’ yes 64 17.3 112 2,3 E6 A’ yes 38 16.5 71 214 no 73 21.4 371 - yes 45 11.8 60 - E9 B yes 45 20.6 257 1 El0 C no 85 29.8 465 El1 C yes 44 19.3 75 - Situ: Plan?: KSIs: Rtime: Goals: Comments: Legend The problem situation. Are the new planning mechanisms used? Number of KSIs invoked to find solution. The total CPU runtime to find solution lin minutes). The number of goals formed and processed. I Additional asoects of the exneriment: 1 = independint goal procesiing and planning 2 = noise in da and d4 3 = Heuristic-l predominates 4 = Heuristic-2 predominates Table 1: Summary of Experimental Results. optimal time 151). The planner controls goal processing to generate and process only those goals that further the plan; if goal processing is done independently of the planner (E3), the overhead of the planner coupled with the only slightly diminished goal processing overhead (the number of goals is only modestly reduced, comparing E3 with El) nullifies the computation time saved on actual problem solving. Moreover, because earlier, less constrained goals are subgoaled, control decisions deteriorate and more KSIs must be invoked. The improvements in experiment E2 were due to the initial work done in the common areas d3 and d4 triggered by Heuristic-l. 
Situation A’ is identical to situation A except that areas d3 and d4 contain numerous competing hypotheses. If the planner initially works in those areas (E5), then many KSIs are required to develop all of these hypotheses-fewer KSIs are invoked without planning at all (E4). However, by estimating the relative costs of the alternative intermediate goals, the planner can determine that d3 and dq, although twice as common as the other areas, are likely to be more than twice as costly to work on. Heuristic-2 overrides Heuristic-l, and a plan is formed to develop the other areas first and then use these results to more tightly control processing in d3 and dq. The number of KSIs and the computation time are thus reduced (E6). In situation B, two solutions must be found, corresponding to two vehicles moving in parallel. Without the planner (EV), problem solving -begins with the most strongly sensed data (the noise in the center of the area) and works outward from there. Only after many incorrect decisions to form short tracks that cannot be incorporated into longer solutions does the problem solver generate the two solutions. The high-level view of this situation, as provided by the abstraction hierarchy, allows the planner in experiment E8 to recognize six possible alternative solutions, four of which pass through di (the most common area). The planner initially forms plani, pZan2, and plans, beginning in dg, ds, and d$ respectively (Heuristic-l triggers the preference for dz; and subsequently Heuristic-3 indicates a preference for d3 and d$). Since it covers the most long-term goals, plan1 is pursued first-a reasonable strategy because effort is expended on the solution path if the plan succeeds, and if the plan fails then the largest possible number of candidate solutions are eliminated. After developing di, pl an1 is divided into two plans to combine this data with either d2 or d\. One of these equally rated plans is chosen arbitrarily and forms the track dzd’,‘, which then must be combined with di. However, because of vehicle turning constraints, only dldz rather than dld2dg is formed. The plan monitor flags an error, an attempt to repair the plan fails, and the plan aborts. Similarly, the plan to form d\did!J eventually aborts. Plan2 is then invoked, and after developing d3 it finds that d2 has already been developed (by the first aborted plan). However, the plan monitor detects that the predicted result, dzd3 was not formed, and the plan is repaired by inserting a new action that takes advantage of the previous formation of dldE to generate dld2d3. The predictions are then more than satisfied, and the plan continues until a solution is formed. The plan to form the other solution is similarly successfully completed. Finally, note once again that, if the planner does not control goal processing (E9), unnecessary overhead costs are incurred, although this time the control decisions (KSIs) are not degraded. Situation C also represents two vehicles moving in parallel, but this time they are closer and the data points are all equally well sensed. Without the new mechanisms (ElO), control decisions in this situation have little to go on: from a local perspective, one area looks as good as another. The problem solver thus develops the data points in parallel, then forms all tracks between pairs of points, then combines these into larger tracks, until finally it forms the two solution tracks. 
The planner uses the possible solutions from the abstraction hierarchy to focus on generating longer tracks sooner, and by monitoring its actions to extend its tracks, the planner more quickly recognizes failed extensions and redirects processing toward more promising extensions. The new mechanisms thus improve control decisions (reduce the KSIs) without adding excessive computational overhead (El 1). However, the planner must consider 32 possible solutions in this case and does incur significant overhead. For complex situations, the planner may need additional control mechanisms to more flexibly manage the many possibilities. VI THE IMPLICATIONS OF ABSTRACTION AND PLANNING We have described and evaluated mechanisms for improving control decisions in a blackboard-based vehicle monitoring problem solver. Our approach is to develop an abstract view of the current problem solving situation and to use this view to better predict both the long- term significance and cost of alternative actions. By interleaving plan generation, monitoring, and repair with plan execution, the mechanisms lead to more versatile planning, where actions to achieve the system’s (problem solving) goals and actions to satisfy the planner’s needs (resolve its own uncertainty) are integrated into a single plan. Although incremental planning may be inappropriate in domains where constraints must be propagated to determine an entire detailed plan before acting (141, the approach we have described is effective in unpredictable domains where plans about the near future cannot depend on future states that may never arrive. Planning: AUTOMATED REASONING / 63 This approach can be generally applied to blackboard- based problem solvers. Abstraction requires exploiting relationships in the data-relationships that are used by the knowledge sources as well-such as allowable combinations of speech sounds [7] or how various errands are related spatially or temporally 191.’ Planning requires simple models of KSs, recognition of intermediate goals (to extend a phrase in speech, to add another errand to a plan), and heuristics to order the intermediate goals. We believe that many if not all blackboard-based problem solvers (and more generally, problem solvers whose long-term goals depend on their current situation) could incorporate similar abstraction and planning mechanisms to improve their control decisions. The benefits of this approach extend beyond the examples demonstrated in this paper. The more global view of the problem provided by the abstraction hierarchy helps the problem solver decide whether a goal is adequately satisfied by indicating areas where improvements are possible and potentially worthwhile. The ability to enumerate and compare possible solutions helps the problem solver decide when a solution is the best of the possible alternatives, and so, when to terminate activity. These mechanisms also help a problem solver to work under real-time constraints. The KS models provide estimates of the cost (in time) to achieve the next intermediate goal, and by generalizing this estimate to the other intermediate goals, the time needs for for the entire plan can be crudely predicted. With this prediction, the planner can modify the plan (replace expensive actions with actions that inexpensively achieve less exact results) until the predicted time costs satisfy the constraints. Finally, planning and prediction are vital to cooperation among problem solvers. 
A network of problem solvers that are cooperatively solving a single problem could communicate about their plans, indicating what partial solutions they expect to generate and when, to better coordinate their activities [4,5,6]. In essence, the problem solvers incrementally form a distributed plan together. The inherent unpredictability of actions and interactions in multi-agent domains makes incremental planning particularly appropriate in distributed problem solving applications. We are currently augmenting our mechanisms with capabilities to perform effectively in more dynamic environments with multiple problem solvers. The mechanisms, though they address issues previously neglected, should also be integrated with other control techniques (such as a blackboard architecture for control [9]) to be fully flexible, as seen in experiment E11. Based on our experiences, we anticipate that the further development of these mechanisms for planning in blackboard-based problem solvers will greatly enhance the performance of these problem solving systems, will lead to improved real-time response and to better coordination in distributed problem solving networks, and will increase our understanding of planning and action in highly uncertain domains.

¹In fact, the WORD-SEQ knowledge source in the Hearsay-II speech understanding system essentially is a clustering mechanism: by applying weak grammatical constraints about pairwise sequences of words, WORD-SEQ generated approximate word sequences solely to control the application of the more expensive PARSE KS that applied full grammatical constraints about sequences of arbitrary length [7].

REFERENCES
[1] R. T. Chien and S. Weissman. Planning and execution in incompletely specified environments. In Proceedings of the Fourth International Joint Conference on Artificial Intelligence, pages 169-174, August 1975.
[2] Daniel D. Corkill, Victor R. Lesser, and Eva Hudlicka. Unifying data-directed and goal-directed control: an example and experiments. In Proceedings of the Second National Conference on Artificial Intelligence, pages 143-147, August 1982.
[3] Randall Davis. A model for planning in a multi-agent environment: steps toward principles of teamwork. Technical Report MIT AI Working Paper 217, Massachusetts Institute of Technology Artificial Intelligence Laboratory, Cambridge, Massachusetts, June 1981.
[4] Edmund H. Durfee. An Approach to Cooperation: Planning and Communication in a Distributed Problem Solving Network. Technical Report 86-09, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003, March 1986.
[5] Edmund H. Durfee, Victor R. Lesser, and Daniel D. Corkill. Coherent Cooperation Among Communicating Problem Solvers. Technical Report 85-15, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003, April 1985.
[6] Edmund H. Durfee, Victor R. Lesser, and Daniel D. Corkill. Increasing coherence in a distributed problem solving network. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 1025-1030, August 1985.
[7] Lee D. Erman, Frederick Hayes-Roth, Victor R. Lesser, and D. Raj Reddy. The Hearsay-II speech understanding system: integrating knowledge to resolve uncertainty. Computing Surveys, 12(2):213-253, June 1980.
[8] Jerome A. Feldman and Robert F. Sproull. Decision theory and artificial intelligence II: the hungry monkey. Cognitive Science, 1:158-192, 1977.
[9] Barbara Hayes-Roth.
A blackboard architecture for control. Artificial Intelligence, 26:251-321, 1985.
[10] Frederick Hayes-Roth and Victor R. Lesser. Focus of attention in the Hearsay-II speech understanding system. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 27-35, August 1977.
[11] Victor R. Lesser and Daniel D. Corkill. The distributed vehicle monitoring testbed: a tool for investigating distributed problem solving networks. AI Magazine, 4(3):15-33, Fall 1983.
[12] Gordon I. McCalla, Larry Reid, and Peter F. Schneider. Plan creation, plan execution, and knowledge acquisition in a dynamic microworld. International Journal of Man-Machine Studies, 16:89-112, 1982.
[13] Earl D. Sacerdoti. Problem solving tactics. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, pages 1077-1085, August 1979.
[14] Mark Stefik. Planning with constraints. Artificial Intelligence, 16:111-140, 1981.
An Adaptive Planner
Richard Alterman
Computer Science Division
University of California, Berkeley
Berkeley, California 94720

ABSTRACT
This paper is about an approach to the flexible utilization of old plans called adaptive planning. An adaptive planner can take advantage of the details associated with specific plans, while still maintaining the flexibility of a planner that works from general plans. Key elements in the theory of adaptive planning are its treatment of background knowledge and the introduction of a notion of planning by situation matching.

1. Introduction
A planner that has access to general plans (alternately abstract or high-level plans) is flexible because such plans will apply to a large number of situations. A problem for a planner working exclusively with general plans is that many of the details associated with more specific plans (e.g., sequencing information and causal relationships) must be recomputed. For a planner that works from more specific plans the situation is reversed: there is a wealth of detail, but there are problems with flexibility. I will refer to planners with the capacity to use a mix of old specific plans and general plans as adaptive planners [1-3]. Adaptive planners foreground specific plans but gain flexibility, in situations where the old plan and the planner's current circumstances diverge, by having access to more general plans.

The adaptive planning techniques that will be described in this paper are sufficiently robust to handle a wide range of relationships between an old specific plan and the planner's current circumstances. For example, suppose a planner is about to ride the NYC subway for the first time, and attempts to treat an old plan for riding BART (Bay Area Rapid Transit) as an example to guide the current planning activity. Consider the steps involved in riding BART. At the BART station the planner buys a ticket from a machine. Next, the ticket is fed into a second machine which returns the ticket and then opens a gate to let the planner into the terminal. Next the planner rides the train. At the exit station the planner feeds the ticket to another machine that keeps the ticket and then opens a gate to allow the planner to leave the station. Compare that to the steps involved in riding the NYC subway: buy a token from a teller, put the token into a turnstile and then enter, ride the train, and exit by pushing thru the exit turnstile.

There are a great number of differences between the BART Plan and the plan that the planner must eventually devise for riding the NYC subway.
• In the BART case a ticket is bought from a machine; in the NYC subway case there is no ticket machine and instead a token is bought from a teller.
• In the BART case the ticket is returned after entering the station; in the NYC subway case the token is not returned after entry.
• In the BART case the ticket is needed to exit; in the NYC subway case the token is not needed to exit.

This paper will describe an adaptive planner called PLEXUS that can overcome these differences and in an effective manner use the BART Plan as a basis for constructing a plan for the NYC subway situation. Two versions of PLEXUS have already been constructed. This paper gives an overview of adaptive planning and PLEXUS. It includes a discussion of adaptive planning in relation to the literature, descriptions of four key elements of adaptive planning, and some details of PLEXUS' adaptation mechanism.
2. Adaptive Planning
There are four keystones to the adaptive planning position on the flexible utilization of old plans.
• An adaptive planner has access to the background knowledge associated with an old plan.
• In adaptive planning the exploitation of the background knowledge is accomplished by a process of situation matching.
• An adaptive planner foregrounds specific plans.
• Adaptive planners treat the failing steps of a plan as representative of the category of action which is to be accomplished.

Adaptive planning makes the background knowledge associated with an old specific plan explicit. Previous approaches to re-using old plans have dealt with an old plan in relative isolation, and therefore the task of re-using an old plan has been considerably more complicated. By making the content and organization of the background knowledge explicit, it becomes possible to re-use an old plan in a wider variety of situations. Background knowledge includes general plans, categorization knowledge, and causal knowledge.

Exploitation of the background knowledge is accomplished by a process of situation matching. Adaptive planning uses the position of the old plan in a planning network as a starting point for finding a match to the planner's current circumstances. The interaction of planning knowledge and the current situation determines a plan which fits the current context and realizes the goal. The interaction works in both directions. In the direction of planning knowledge to situation, the old plan serves as a basis for interpreting the actions of other agents and the various objects in the new situation. Moreover, it provides the planner with a course of action. In the direction of situation to planning knowledge, it is the situation which provides selection cues that aid the planner in determining an alternate course of action when complications arise.

Adaptive planning foregrounds specific plans. It has been previously argued by Carbonell [4] that the importance of being able to plan from more specific plans is that many times a more general plan is not available. But there are other reasons why the capacity to work from more specific examples is important. Many times a more specific plan is tailor-made for the current planning situation. Furthermore, the more specific plans make available to the planner previously computed causal and ordering relationships between steps. For a more general plan these cannot be determined until the steps are instantiated. Consequently, even in the cases where the more specific plan must be re-fit, many times the cost of such changes is much less than the cost of dealing with the subgoal and subplan interactions inherent in a process that works by instantiating more general plans.

Adaptive planning treats the failing steps of the old plan as representative of the category of action which is to be accomplished. In the case of the BART-NYC planning problem, each of the failing steps is representative of the category of action the planner eventually wants to take.

*This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4031, monitored by the Naval Electronic Systems Command under Contract No. N00039-C-0235. This research was also supported by the National Science Foundation (ISI-…).
An adaptive planner uses the category knowledge, as represented by the failing step, to access more general ver- sions of that step and also to determine its eventual course of action. For example, the first step of the BART Plan, ‘buying a BART ticket’, is representative of the planner’s eventually course of action - adapting a plan to ‘buy a theatre ticket’. 3. PLEXUS - An adaptive planner For PLEXUS the background knowledge associated with an old plan is determined by the old plan’s position in a knowledge network. The network includes taxonomic, par- tonomic, causal, and role knowledge: the network acts as a structural backbone for its contents. PLEXUS uses the taxo- nomic structure not only for the purposes of property inheri- tance. but also as a basis for reasoning about categories The partonomic structure (i.e. step-substep hierarchy) is used to aid in determining the pieces of network which need to be refitted in a given situation. The causal knowledge serves several functions: The purpose relation identifies the abstraction which maintains the purpose of a step in a plan. The precondition, outcome, and goal relations act as appropriateness conditions. The reason relation provides dependency links between a step and its justification rc.f Stallman & Sussman, 1977) [5j. Roughly, in PLEXUS, pur- pose is synonymous with ‘intent’, goal with ‘aim’, and rea- son with )istification’. The purpose of ‘buying a BART ticket’ is to ‘gain access’, the goal associated with it is to ‘have a ticket’, and the reason for doing it is that it makes it possible to ‘enter the BART station’ (see figure 1) ASSOCI- ated with roles are type constraints on the types of objects which can fill them The role relations are used by PLEXUS for both cross indexing purposes and to control inferencing. For further arguments on the importance of background knowledge see Alter-man (1985), and for more details on the representation of the background knowledge see Alterman (1986) [3]. PLEXUS uses the old plan to interpret its course of action in its current circumstances. It considers the steps, one step at a time, in order. If a step is not an action it adapts substeps in a depth-first fashion before moving onto the next step in the plan. When a given step of the old plan has been adapted to the current circumstances, PLEXUS simulates a planner taking action on that step before moving onto the next step in the plan - thus, as did NASL (McDer- mott, 1978 Es]), PLEXUS interleaves planning and acting. Associated with each step (substep) in a plan are appropriatness conditions. The appropriatness conditions are intended to be suggestive that a particular course of action is reasonable to pursue. Before a step is applied, PLEXUS treats the preconditions and goals of the old plan as appropriateness conditions. After a step has been applied, PLEXUS treats the expected outcomes as appropriateness conditions. Appropriateness conditions are checked by test- ing the type constraints associated with each of the roles attached to the appropriateness condition. The type con- straints are interpreted in terms of the network. A rough outline of the top-level decision procedure is shown below: 1) Are any of the before conditions associated with the old plan failing? a) Is this a case of step-out-of-order? b) Is this a case of failing precondition? 2) Has the current circumstances aroused a goal not accounted for by the current step? a) This is a case of differing goals. 3) Is the current step an action? a) If yes, perform the action. 
b) If no, proceed to adapt substeps. 4) Are any of the outcomes associated with the current step failing? a) This is a case of failing outcome? 5) Adapt next step. If one of the before appropriatness conditions fails, or the current circumstances indicate a goal not accounted for by the old plan, one of three different types of situation difference is occurring: failing precondition, step-out-of- order, or differing goals. There is a fourth kind of situation difference, failing outcome, that occurs when one of the expected outcomes of a given step fails to occur. Associated with each of the types of situation difference are varying strategies that will be briefly described in the fifth section of this paper. PLEXUS does not always consider the steps in order, under certain circumstances it looks ahead to the latter steps of the plan and adjusts them in anticipation of certain changes - thus PLEXUS has an element of oppor- tunism (Hayes-Roth & Hayes-Roth, 1979) [7l. The core of PLEXUS are the matching techniques it uses for finding an alternate version of a step once it deter- mines that the step needs to be refit. To find an alternate matching action for a given situation, PLEXUS treats the failing step as representative of the category of action it needs to perform, and then it proceeds to exploit the back- ground knowledge in two ways. By a process of abstraction PLEXUS uses the back- ground knowledge to determine a category of plans in common between the two situations. 66 / SCIENCE By a process of specialization PLEXUS uses the back- ground knowledge to determine an alternate course of action which is appropriate to the current cir- cumstances. PLEXUS accomplishes abstraction by moving up the categor- ization hierarchy until it finds a plan where all the before appropriatness conditions are met. PLEXUS accomplishes specialization by moving down the categorization hierarchy until it finds a plan that is sufficiently detailed to be action- able. 4. Core of the Matcher (Managing the Knowledge) There are at least two important considerations con- cerning the control of access to knowledge. One considera- tion is that there is a danger of the planner becoming overwhelmed by the wealth of knowledge (cf. saturation, Davis 1980 [8]) that is available. The problem is that there are potentially too many plans that the planner might have to consider, and consequently, the planner could get bogged down in evaluating each candidate plan. Somehow the planner needs to be able to selectively consider the various alternatives available to it. Another consideration in the control of access to knowledge comes form the cognitive science literature and is referred to as the problem of enumeration (e.g. Kolodner, 1983 [91). The problem of enumeration is that humans do not appear to be capable of listing all the instances of a category without some other kind of prompting. When asked to list the states of the union, human subjects do not accom- plish this by simply listing all the members of the category of states. For the concerns of adaptive planning the problem of enumeration comes in a slightly different guise. Given an abstract plan it is not reasonable to assume that a human planner could enumerate all of the specializations of that abstract plan. The first of these considerations dictates that PLEXUS be selective in its choice of planning knowledge to use. 
The second of these considerations acts as a sort of termination condition: sometimes the planner knows the right plan but circumstances are such that it cannot find it. As a result of these considerations, PLEXUS abstraction and specializa- tion processes must be constrained. While moving up the abstraction hierarchy PLEXUS maintains the function of the step in the overall plan. Movement down the abstraction hierarchy, towards more detailed plans, is controlled by the interaction between the planner’s knowledge and the current circumstances. 4.1. Abstraction The way to think about abstraction of a plan is that it removes details from that plan: if a particular plan fails to match the current situation, some of the details of that par- ticular plan must be removed. Moving up the abstraction hierarchy removes the details that do not work in the current situation while maintaining much of what is in com- mon to the two situations. Effectively, the movement of abstraction is discovering the generalization which holds between the old and new situations given that a difference has occurred. A given plan step can have any number of abstractions associated with it. Choosing the wrong abstraction can lead to the wrong action. The planner can avoid this problem by applying the following general rule: Ascend the abstraction hierarchy that maintains the purpose of the step in the plan that is being refitted. By moving up the abstraction hierarchy that maintains the purpose of the step, PLEXUS attempts to maintain the func- tion of the step in the overall plan and thereby mitigate the propagated effects of changes. In general PLEXUS uses two techniques for moving up the abstraction hierarchy. . If a plan is failing due to the existence of a particular feature of a plan, move to the point in the abstraction hierarchy from which that feature was inherited. . Incremently perform abstraction on a failing plan The first technique applies in situations where there is a specific feature in the old plan that does not exist in the current situation. The second technique of abstraction applies in situations where there is no identifiable feature which has to be removed. In such cases, PLEXUS incremen- tally moves up the abstraction hierarchy. In either case, for each abstraction it tries to find a specialization that will work in the current context. If it fails to find a specializa- tion for a given abstraction, it moves to the next abstraction in the abstraction hierarchy. 4.2. Specialization I Via the process of specialization PLEXUS moves from a more abstract plan towards more specific examples. PLEXUS navigation thru the network is dependent on the planner’s current circumstances. PLEXUS descends down the classification hierarchy one step at a time, PLEXUS tests the applicability of a specialization by checking the before appropriateness conditions; if one of these conditions fails the movement is rejected. At each point in the hierarchy PLEXUS is faced with one of five options: 1) Is the plan sufficiently detailed to act on? 2) Is there a feature suggested by the type of situation difference which cross indexes some subcategory of the current category of plan? 3) Is there an observable feature which cross indexes some subcategory of the current category of plan‘? 4) Is there an observable feature with an abstraction that cross indexes a subcategory of the current category? 5) Is there a salient subcategory? PLEXUS stops descending the categorization hierarchy when it gets to a leaf node (option 1). 
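The constrained abstraction and specialization just described can be rendered schematically as follows. The node structure, condition tests, and cue matching below are simplifying assumptions made for illustration; PLEXUS's actual network also carries the partonomic, causal, and role knowledge that this sketch omits.

```python
# Schematic sketch of the matcher: ascend the abstraction chain that
# preserves the step's purpose until the before conditions hold, then
# descend, guided by observable cues, to an actionable specialization.
# The node structure and condition/cue tests are simplifying assumptions.

class PlanNode:
    def __init__(self, name, abstraction=None, specializations=None,
                 conditions=None, actionable=False, cues=None):
        self.name = name
        self.abstraction = abstraction          # parent preserving the step's purpose
        self.specializations = specializations or []
        self.conditions = conditions or []      # before appropriateness conditions
        self.actionable = actionable            # detailed enough to act on (a leaf)
        self.cues = cues or set()               # features that cross-index this node

def appropriate(node, situation):
    """Test the node's before appropriateness conditions against the situation."""
    return all(cond(situation) for cond in node.conditions)

def find_alternative(failing_step, situation):
    """Return an actionable alternative for a failing step, or None."""
    # Abstraction: remove the failing details while keeping the step's purpose.
    node = failing_step
    while node is not None and not appropriate(node, situation):
        node = node.abstraction
    # Specialization: descend one level at a time toward an actionable plan.
    while node is not None and not node.actionable:
        candidates = [s for s in node.specializations
                      if appropriate(s, situation) and
                      (not s.cues or s.cues & situation.get("features", set()))]
        node = candidates[0] if candidates else None
    return node
```

On the BART-to-NYC example, a node for 'buy a BART ticket' would ascend to a more general buying plan and, cued by the current circumstances, descend to something like 'buy theatre ticket'; the sketch only mirrors that control flow, not the content of the network.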
If the node is not a leaf it continues to descend (options 2-5). Sometimes the type of situation difference suggests cues for subcategory selections (option 2). Sometimes ‘observable features’ act as cues for subcategory selection (options 3-4). These ‘observable features’ can either directly cross index some subcategory of plan (option 3), or have an abstraction which cross indexes a subcategory of plan [option 4). Certain subcategories are salient regardless of context and can always be selected (option 5). Many of these techniques are employed in the following example: Suppose a planner wants to transfer between planes at the Kennedy Airport in NYC. The planner’s nor- mal plan for transferring between planes is to walk from the arrival to the departure gate. But when the planner arrives at Kennedy Airport the arrival and departure gates turn out to be in different terminals. Suppose the planner decides that the walk between terminals is too strenuous, and thus a new goal is aroused: preserve energy. The detection of this goal has no correspondent in the old plan and it is deter- mined that the plan must be adjusted to account for this goal; this is a case of the differing goals type of situation difference. By a process of abstraction, PLEXUS moves up Planning: AUTOMATED REASONING i 6’ the categorization hierarchy from the plan to ‘walk’ to the more general plan of ‘travelling’. Next PLEXUS must determine an alternate plan, within the category of ‘travel- ling’, from which to act. The newly aroused goal acts as a cue for selecting ‘vehicular travel’ as a potential subcategory of plan from which to act (option 2). Suppose the planner has never used a shuttle before at an airport, but it sees (observable feature) a sign concerning ‘airport shuttles’. An abstraction of ‘shuttle’ acts as a cue for selecting ‘mass tran- sit travel’ as a subcategory of ‘vehicular travel’ (option 4). Moreover, ‘shuttle’ is a cue for selecting ‘shuttle travel’ as a subcategory of ‘mass transit travel’ (option 3). ‘Shuttle travel’ is sufficiently detailed for PLEXUS to attempt to adapt (option 1). See Alterman (1986a) [31 for further details and a trace of PLEXUS handling this planning problem. 5. Four Types of Situation Difference PLEXUS currently recognizes four kinds of situation difference: failing precondition, failing outcome, different goals, step-out-of-order. A failing precondition situation difference occurs when one of the preconditions of a step (plan) fails. For fail- ing preconditions PLEXUS moves up the abstraction hierar- chy, according to the purpose of the step, to a point at which the failing condition has been abstracted out. In the event that PLEXUS cannot find a specialization of that category of plans, it continues to incrementally move up the abstraction hierarchy indicated by the purpose relation. For failing preconditions either of PLEXUS specialization techniques are appropriate. A failing outcome situation difference occurs, if after applying a plan (step) PLEXUS discovers that one of the expected outcomes of that plan was not achieved. There are three courses of action available. The obvious course of action is to try the plan again. A second course of action, is to use the reason relation to determine the other steps of the plan which are effected by the failed outcome, and deter- mine, via abstraction and specialization, if the planner can continue on its course action because there is an alternate interpretation of the latter step which does not require the failed outcome. 
If all else fails, the third option available to the planner is to find and perform an alternate version of the failing step. For failing outcomes, if the current plan step is being re-interpreted, abstraction occurs incrementally. If PLEXUS is trying to re-interpreted a step related to the current step by a reason relation, abstraction occurs using the failing outcome as a feature to abstract out of the plan. For the second and third cases PLEXUS uses both of the spe- cialization techniques available to it. A differing goal situation difference occurs if the planner’s current circumstances arouse a new goal not accounted for by the old plan. For this kind of situation difference, abstraction occurs incrementally, and specializa- tion requires that the new plan be indexed under both old and new goals, A stewout-of-order situation difference occurs, when PLEXUS encounters a situation where it needs to apply a step out of order. There are two adjustments that are possi- ble when a step-out-of-order situation difference occurs, PLEXUS can either delete the intermediate step(s), or re- order the steps of the old plan. If a step can be applied out of order, PLEXUS uses abstraction and specialization in an attempt to find an alternate version of the plan with the correct ordering of steps. Under such a situation, PLEXUS can use the new ordering constraint as an index for speciali- zation purposes. In the event an alternate plan with a dif%erent ordering of steps can not be found, PLEXUS per- forms the step-out-of-order, removes it from the sequence of steps, and proceeds with attempting to apply the failing step. 6. An example The BART-NYC subway planning problem provides examples of three of the types of situation difference (see figure 1). Adapting buy a BART ticket. The first step of the BART plan fails in the NYC subway situation because there is no ticket machine. This is a case of failing precondition, and therefore PLEXUS abstracts out the failing condition, ‘exist ticket machine’, and special- izes, using the salient subcategory, to ‘buy theatre ticket’, which it proceeds to adapt to the NYC subway situation. During the process of adapting this step ‘ticket’ gets bound to ‘token’. Adapting enter BART station. The second step of the BART plan involves entering the sta- tion. The first substep of this step is to insert the token into the entrance machine, which the planner successfully accom- plishes. The next step of ‘BART enter’ is that the ticket is returned by the machine. But in the NYC subway situation the ticket is not accessible, but it is possible to push thru the turnstile (the third step of ‘BART enter’). Hence this is a case of step out of order. Having accomplished the last step of ‘BART enter’, PLEXUS must determine whether it should act on the intermediate step or instead delete it. Re-interpreting BART exit. In order to delete intermediate steps PLEXUS must treat the outcomes of each intermediate step as a case of a failing outcome and test to see if the latter steps in the plan effected by the failing outcome can be adapted. In this case there is only one intermediate step, ‘ticket returned’. The outcone associated with this intermediate step is that the planner ‘has the ticket’ (or in this case ‘token’). PLEXUS applies the second strategy associated with the situation difference type failed outcome: Find an alternate interpre- tation of the situation where that outcome is no longer necessary. 
PLEXUS uses the reason relation associated with ‘ticket return’ to determine which of the latter steps are effected by the failing outcome. In this case, the reason that the ticket is returned is so it can be used when exiting the station. PLEXUS must try to re-interpret ‘BART-EXIT’ in such a manner that it can exit without a ticket. This leads to a situation of failed precondition for the step ‘BART- EXIT’. Via abstraction PLEXUS extracts that ‘exiting an institution’ ia what is in common between the old plan and the new situation. PLEXUS ‘observes’ the exit turnstile and uses it as a cue for determining ‘exit-building’ as an alter- nate plan for ‘exiting the station’, where ‘exit turnstile’ plays the role of ‘locking door’. Since it can find an alternate interpretation to ‘exiting the station’ that does not involve using a ticket, PLEXUS treats the step-out-of-order situa- tion that occurs during execution of the plan ‘BART enter’ as a case of deletion. For a more detailed discussion of this problem and a trace see Alterman (1986) [31. 7. Discussion Like the early general problem solving planners [lo,111 adaptive planning is concerned with the problems of gen- erality and flexibility. Unlike them it explores these issues in the context of increased amounts, and larger chunks of, knowledge. Where the early general problem solvers accom- plished generality and flexibility by working with a small 68 / SCIENCE number of atomic operators, adaptive planning works with increased amounts of knowledge and achieves these twin goals by exploiting the structure of that knowledge. Like the work on MACROPS [12], adaptive planning is concerned with larger chunks of actions, but adaptive planning extends their utilization to planning problems like the BART-NYC subway problem, Adaptive planning is concerned with tasks [61 and commonsense planning [131 problems. It is knowledge-based in that its approach to refitting old plans is baaed on the accessibility of the structure and content of the background knowledge associated with an old plan. As in the case of other knowledge-based planning approaches [8,14,15], adaptive planning is concerned with control of access to knowledge; its approach is dependent on the interaction of the planner’s knowledge with the planner’s current circumstances. Like the work on analogical plan- ning [4,16,17], adaptive planning attempts to re-use old specific plans, but its strategies take greater advantage of the available knowledge, exploit categorization knowledge, and its processing is novel in that it takes the form of situa- tion matching. Where other researchers have emphasized the problem of initial retrieval of old plans [18-211, the work on adaptive planning balances that view by investigating issues concerning flexibility and usage. Although knowledge acquisition is not the focus of the current research, adaptive planning does provide a framework for dealing with these issues. It promises to promote additivity because its pro- cedures are largely based on the structure of the knowledge and not its content. Moreover, as a by-product of abstraction and specialization, PLEXUS discovers the generalizations over the steps of the old plan and the steps of the new plan, and consequently it provides a framework for the planner to do automatic re-organization and generalization [22-251. 2. 8. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 23. 25. 
References Alterman, R., Adaptive Planning: Refitting old plans to new situations, in The seventh annual conference of the cognilive science eocidy, 1985. Alterman, R., Adaptive Planning: Refitting old planning experiences to new situations, Second Annual Workehop on ‘Theoretical I88UeP in Conceputal Injormation fioceeeing, 1985. Alterman, R., Adaptive Planning: A case of flexible knowing, Technical Report, University of California at Berkeley, 1986. Carbonell, J. G., Derivational analogy and its role in problem solving, AAAI-88, 1983, 6469. Stallman, R. and Sussman, G., Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis, Artificial Intelligence Q, 2 , 135196. McDermott, D., Planning and Acting, Cognitive Science & (1978), 71-109. HayesRoth, B., A cognitive model of planning, Cogniliue Science 3 (1979), 275310. Davis, R., Met&Rules: Reasoning about Control, Arificial Intefligence 15 (1980), 179-222. Kolodner, J. L., Reconstructive memory a computer model, Cognitive Science 7 (1983), 281-328. Ernst, G. and Newell, A., Cm: A caee etudy in generality in problem solving, Academic Press, 1969. Fikes, R. and Niison, N., STRIPS: a new approach to the application of theorem proving to problem solving, Aritificial Znt$figcmce .2 (1971), 189- 208. Fikes, R., Hart, P. and Nilsson, N., Learning and Executing Generalized Robot Plans, Artificial Inlefligence Journal 9 (1972), 251-288. Wllensky, R., Ran&g and Understanding, Addison-Wesley Publishing Company, 1983. Widensky, R., Mets-Planning: Representing and using knowledge about planning in problem solving and natural language understanding, Cognitive Science 5 (1981), 197-233. Stefik, M., Planning and meta-planning, Artificial Intelligence 12, 2 (1981), 141-170. Carbonell, J. G., A computation model of analogical problem solving, IICM 7, 1981. Carbonell, J. G., Learning by analogy: formulating and generalizing plans from past experience, in Machine learning, and artificial intelligence approach, Mitchell, M. C. (editor), Tioga Press, Palo Alto, 1983. Kolodner, J. L. and Simpson, R. L., Experience and problem solving: a framework, Roceedinge of the aizth annutrl conference oj the cognitive science eocidy, 1984. Kolodner, J. L., Simpson, R. L. and SycarsCyranski, K., A process model of cased-based reasoning in problem solving, fioceedingr of the ninth intemationcrl joint conference on artificial intelligence, 1985. Hammond, K., Indexing and Causality: The organization of plans and strategies in memory., Yale Department of Computer Science Technical Report 351, 1985. Hendler, J., Integrating Marker-Passing and Problem Solving, in The eeuent annual conference o/ the cognitive science eociety, 1985. DeJong, G., Acquiring Schemata through Understanding and Generalizing Plans, IJCAI 8, 1983. Schank, R. C., Dynamic Memory, Cambridge University Press, Cambridge, 1982. Kolodner, J. L., Maintaining organization in a dynamic long-term memory, Cognitive Science 7 (1983), 243-280. Lebowitz, M., Generalization from natural language text, Cognitive Sciace 7 (1983), l-40. Figure 1: BAFtT Plan witb some background knowledge. Planning: AUTOMATED REASONING / 69
THE REPRESENTATION OF EmNTS IN MULTIAGENT DOMAINS Michael I’. Georgeff * Artificial Intelligence Center SRI International Menlo Park, California Abstract The purpose of this paper is to construct a mode! of actions ar,d events suited to reasoning about domains involving multiple agents or dynamic environments. A mode! is constructed that provides for simultaneous action, and the kind of facts necessary for reasoning about such actions are described. A model-bssed foul O~~Z~J~J~OIC.T is introduced to describe how actions affect the world. No frame axioms or syntactic frame rules are involved in the specification of any given action, thus allowing a proper mode!-theoretic semantics for the representation. Some serious deficiencies with existing approaches to reasoning about multiple agents are also identified. Finally, it is shown how the law of persistence, together with a notion of causality, makes it possible to retain a simp!e mode! of action while avoiding most of the difficulties associated with the frame problein. 1 Introduction A notion of events md processes is essential for reasoning about problem domains involving one or more agents situated in dy- namic environments. While previous papers [3,4,5&I] discussed the importance of the notion of process, herein we focus on the representation of events and actions. As we will show, the ap proach avoids many of the difficulties associated with other mod- els of events and actions. 2 Events We assume that, at any given instant, the world is in a particular world state. Each world state consists of a number of ob~&ba from a given domain, together with various relations and functiona oYer those objects. A sequence of world states will be called a world history. A given world state has no duration; the only way the pas- sage of time can be observed is through some change of state. The world changes state by the occurrence of ewenta. An event (strictly, an event type) is a set of state sequences, representing all possib!e occurrences of the event in all possible situations (see also [1,12]). !n this paper, we will restrict our attention to atomic eventa. Atomic events are those in which the state sequences are of length * Also affiliated with the Center for the Study of Language and Informa- tion, Stanford University, Stanford, California. This research has been made possible in part by a gift from the System Development Foundation and by the Office of Naval Research under Con- tract N@OO14-85-C-0251. two, and can be modeled as a transition relation on world states. This transition relation must include al! possible state transi- tions, including those in which other eventa occur simultaneously with the given event. Consequently, the transition relation of an atomic event places restrictions on those world relations that are directly affected by the event, but leaves most others to vary freely (depending upon what else is happening in the world). This is in contrast to the classical approach, which views an event aa changing some world relations but leaving most others unaltered. For example, consider a domain consisting of blocks A and B at possible locations 0 and 1. Assume a world relation that rep resents the location of each of the blocks, denoted foe. Consider two events, move(A, l), which has the effect of moving block A to location 1, and moue( B, l), which has a similar effect on block B. Then the classical approach (e.g., see reference [13]) would mode! 
these events as follows:

move(A,1) = { (loc(A,0), loc(B,1)) → (loc(A,1), loc(B,1)),
              (loc(A,0), loc(B,0)) → (loc(A,1), loc(B,0)) }

and similarly for move(B,1). Every instance (transition) of move(A,1) leaves the location of B unchanged, and similarly every instance of move(B,1) leaves the location of A unchanged. Consequently, it is impossible to compose these two events to form one that represents the simultaneous performance of both move(A,1) and move(B,1), except by using some interleaving approximation. In contrast, our model of these events is:

move(A,1) = { (loc(A,0), loc(B,1)) → (loc(A,1), loc(B,1)),
              (loc(A,0), loc(B,1)) → (loc(A,1), loc(B,0)),
              (loc(A,0), loc(B,0)) → (loc(A,1), loc(B,1)),
              (loc(A,0), loc(B,0)) → (loc(A,1), loc(B,0)) }

and similarly for move(B,1). This model represents all possible occurrences of the event, including its simultaneous execution with other events. For example, if move(A,1) and move(B,1) are performed simultaneously, the resulting event will be the intersection of their possible behaviors:

move(A,1) || move(B,1) = move(A,1) ∩ move(B,1)
                       = { (loc(A,0), loc(B,0)) → (loc(A,1), loc(B,1)) }

Thus, to say that an event has taken place is simply to put constraints on some world relations, and leave most others to vary freely.
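The set-of-transitions view lends itself to a direct, if naive, encoding. The sketch below is an illustration of the model, not part of the paper's formalism: states are encoded as sets of location facts, an event is the set of its possible transitions, and simultaneous occurrence is the intersection of the two transition relations.

```python
# A direct encoding of the model just described: an atomic event is a set
# of (state, successor-state) transitions, and simultaneous occurrence is
# the intersection of the two transition relations. States are frozensets
# of location facts; the encoding is illustrative only.

from itertools import product

def states(objects=("A", "B"), locations=(0, 1)):
    """All world states, each a frozenset of ('loc', object, position) facts."""
    for positions in product(locations, repeat=len(objects)):
        yield frozenset(("loc", o, p) for o, p in zip(objects, positions))

def move_event(obj, dest):
    """All transitions in which obj ends at dest; other objects vary freely."""
    return {(s, t)
            for s in states()
            for t in states()
            if ("loc", obj, dest) in t and ("loc", obj, dest) not in s}

move_A_1 = move_event("A", 1)
move_B_1 = move_event("B", 1)

# Simultaneous performance is simply the intersection of the two events.
both = move_A_1 & move_B_1
assert len(both) == 1
((s, t),) = both
assert ("loc", "A", 1) in t and ("loc", "B", 1) in t
```

Enumerating transitions explicitly like this is exactly what the following paragraph notes is infeasible in general; the sketch is only meant to make the intersection semantics concrete.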
Let’s assume that p holds in states 81 and 82 and q holds in the succes- sor states. (We have taken some liberties in naming states, but ‘ We will assume throughout that, in such axioms, u is an element of w. ‘The ever.ts shown in Figure 1 do not satisfy this axiom! that is not important for this example.) Events el and e2 satisfy the above axioms, where 41 = 42 = p and $1 = 92 = q. Given these axioms alone, it is quite consistent to assume that both events occur simultaneously, but there is no way to prove that they can so occur - in fact, given er and e2 as shown in Figure 1, such a statement is clearly false. (Given auficient axioms about the effects of these events, we could, of course, prove that such events could caot occur together.) To describe what conditions constitute the occurrence of an event e, we need axioms cf the form VW, 8 * +9 A Yq succ(s)) 3 occzIrs(c)(s) This statement is intended to capture the fact that, for all world histories, we consider the event e to have occ;lrred if, + holds at the beginning of the event and $ holds afterwards. Facts such as these are critical for reasoning about whether two or more events can proceed simultaneously and cannot be inferred from statements of the former kind about the effects of events. For example, consider that two events cl and ez both satisfy* VW, 3 ’ PM A 4 LWCC(8)) > OCCUt8(ej)(.S) To prove that these events can occur simultaneously. all we need do is prove that, in some world history, p holds of one state and q holds of its successor. Often, we may even be able to make stronger statements than these. For example, the event mooe(A, 1) satisfies VW, a . occura(move(A, l))(g) G loc(A, O)js)A/oc(A, l)(succ(3)) This specification completely characterizes the event move( ‘4,l) - there is nothing more that can be said about the event. Thus, at thi# point of the atory, the frame problem does not arise. Be- cause the event, in and of itself, places no restrictions on the majority of world relations, we do not require (icdeed, it would be false to require) a large number of frame axioms stating what relations the performance of the event lezves unchanged. In con- trast to the classical approach, we therefore need not introduce any frame rule [7] or STRIPS-like assumption [2] regarding the apecifica tion of events. 3 Actions When a process brings about an event we will say that the process performs an action. For now, we can consider an action and the event it brings about to be the same object - that is, a relation on world states. Later on, we shall have to distinguish the two. If we are to form plans in multiagent worlds, one of the more important considerations is whether or not any two or more ac- tions can be performed concurrently - it is of little use to form a plan that calls for the simultaneous performance of actions that simply cannot coexist. Thus, to guarantee the validity of a plan containing simultaneous actions, we need to prove that it is indeed possible to perform the actions simultaneously. Consider two actions al and a2 that bring about, events cl and e2, respectively. In constructing a plan that involves the simultaneous performance of a1 and a2, it is not enough that it simply be consistent that el and e2 occur together. The example discussed in the preceding section is a case in point. Of course, this may be the best one can do given incomplete knowledge cl the world but, in such cases, there is certainly no guarantee that the plan would ever succeed. 
To guarantee the success of such a plan, we need to be able to prove that a1 and a2 can be performed simultaneously. To Planning: ACIUMATED REASONING i - I do this, we need to prove that the intersection of the transition relations corresponding to er and ez is nonempty and that its domain includes the states in which the actions 01 and a2 might be performed. For example, consider that we have VW, 3 . &(3) A $+?ucc(8)) 3 occut8(el)(8) ‘v’w,a . h(a) A $74 duct(8)) 3 occurs(ez)(s) It is easy to see that, if we are in a state in which 41 and 42 hold, both events can occur together if there exists a world his- tory containing a successor state in which $1 and $2 hold. Unforc tunately, ascertaining this involves determining the consistency of (+I A h), w IC is undecidable in the nonpropositional case. h’ h Moreover, determining per-formability of actions on the basis of consistency arguments can lead to nonmonotonicity - addition of further axioms could invalidate any conclusions drawn. In fact, a similar problem arises even for single-agent planning - it is not possible to infer from axioms describing the effects of actions that these effects are indeed satisfiable. To get around this problem, it is usual to assume that no action ever fails, i.e., that there is always a transition from any state satisfying the preconditions of the action to some subsequent state (e.g., [15]). This option is not open to us in the multiagent domain - si- multaneous actions are often not performable. What we need is some way to determine whether or not composite actions will fail on the basis of some property of the component actions. To do this, we introduce a notion of action independence. The approach we adopt is to provide additional axioms specify- ing which relational tuples the action directly a&cf8.S To do this, for every action a and n-ary predicate symbol P, we introduce a formula bp(a, ii), called a direct-eflects formula (5 represents an n-tuple of free variables). The meaning of this formula is that, for all 5, if bp(a, 2) holds in some state 3, only those relational tuples denoted by P(S) may be affected by the performance of action a; any relational tuple that is not a direct effect of action a is thus free to vary independently of the occurrence of a. Thus, P(5) may be forced to take on some particular truth value in any state resulting from the performance of a; conversely, all other atoms involving P are free to take on any truth value. For example, &,,(move(A, l), z, y) z (z = A). This means that the action moue(A, 19 could affect any tuple denoted by loc(A, y), for any y; on the other hand, it would not affect any other tuples in the relation denoted by lot. There are two impor- tant points to note here: (1) this does not mean that the other tuples of lot remain unchanged - some other action could occur simultaneously that affected these tuples also; and (2) if we wish to infer that loc(B, y) does not change for some y, we need to know that .4 and B denote different objects. Given such formulae, it follows that two actions a1 and a2 can occur simultaneously in a state s if 8 is in the domain of each ac- tion and, for each n-ary predicate P, (13S.(bp(al,j;)A6p(02,L))) holds in J - that is, both actions don’t directly affect the same relational tuple. 
For example, assuming unique names, we can infer that the location of B is unaffected by move(A, 1) and that move(A, 1) could be performed simultaneously with any action a’ that changed the location of B, provided that, conversely, a’ did not affect the location of A. In the case that the same relational tuples are affected, it might aIn the general cade, we would also have to specify which functional values and constants were directly affected by the action. This is a draight- forward extension of the described approach, and we will not consider it further (see reference [13)). be that each relational tuple is changed by each action in the same way, and simultaneity would still be possible. But we then get forced back to considering consistency of formulae. There is no difficulty with this if consistency can be determined and does not involve any nonmonotonicity (such as when one con- dition (say, ~61) implies the other (&), and we know that tit is satisfiable). However, if this is not the case, any conclusions drawn must be subject to retraction and thus should be treated as assumptions about the problem domain. Note that all the direct effects of an action need not be involved in any single occurrence of that action - they represent only possible effects. Also, the direct effects of an action do not define the possible state transitions - this is given, as before, by the state transition relation associated with the action. There are some problems with this representation, not the least being that, in many cases of interest, we still have to check con- sistency of formulae. However, knowledge about the relational tuples that actions may affect, and reasoning about interactions on the basis of this knowledge, seems to be an important part of commonsense reasoning. As we will shortly see, such knowledge also plays an important role in determining the effects of actions performed in isolation. 4 The Law of Persistence We have been viewing atomic actions or events as imposing cer- tain constraints on the way the world changes while leaving other aspects of the situation free to vary as the environment chooses. That is, each action transition relation describes all the potential changes of world state that could take place during the perfor- mance of the action. Which transition actually occur in a given situation depends, in part, on the actions and events that take place in the environment. However, if we cannot reason about what happens when some subset of all possible actions occurs - in particular, when only one action occurs - we could predict very little about the future and any useful planning would be impossible. What we need is some notion of persistence that specifies that, in general, world relations only change when forced to [12]. For example, because the action mowe(A, 1) defined in the previous section places no constraints on the location of B, we would not expect the location of B to change when moue(A, 1) was performed in isolation from other environmental actions. One possibility is to introduce the following law o/persistence: VW, 8, ii . (bp(ii)(d) A (da . (occurs(a) A bp(a, l)))(b) 2 hw(~UCC(~99 where #p(5) is either P(2) or -,P(59. This rule states that, provided no action occurs that directly affects the relational tuple denoted by P(2), the truth value of P(2) is preserved from one state to the next. It can be viewed as a generalization of the rule used by Pednault for describing the effects of actions in single-agent worlds [13]. 
For example, we could use this rule to infer that, if move(A, 1) were the only action to occur in some state 8, the location of B would be the same in the resulting state as it was in state S. However, at this point we encounter aserious deficiency in the action model we have been using and, incidentally, in all others that represent actions and events as the set of all their possible behaviors (e.g., [l,lZ]). C onsider, for example, a seesaw, with ends A and B and fulcrum F. We shall assume there are no other entities in the world, that the only possible Iocations for 72 / SCIENCE Location: 2 0 Figure 2: Possible Seesaw State after moves A, F, and B are 0, 1, and 2, and that these are always colinear. Assume that initially A, F, and B are at location 0, and consider an action ??‘&OVeF that moves F to location 1 (see Figure 2), while allowing all possible movements of A and B, depending on what other actions are occurring at the same time (such as someone lifting B). Of course, the objects must always remain colinear. The possible transitions for movI?F are to one of the states (foc(A, l),foc(F, l),loc(B, l)), (foc(A,O),foc(F, l),foc(B,2)), or (foc(A, 2), loc(F, l), loc( B,O)). Furthermore, because the move- ment of F places constraints on both the locations of A and B, the direct affects of the action will include the locations of all objects: d&(mOVeF,x,y) G (x = A) V (z = F) V (z = 8). Thus, the effect of mOVCF, in addition to-changing the location of F, will be to change either the location of A or the location of B or both. The question is, if no other action occurs simultaneously with moVeF, which of the possible transitions can occur? Let’s assume that, because of the squareness of the fulcrum F, the action ?nOVeF always moves A and B to location 1 at the same time, unless some parallel action forces either A or B to behave differently. Unfortunately, using our current action model there is simply no way to represent this. We cannot restrict the transition relation so that it always yields the state in which A, F, and B are all at location 1, because that would prevent A or B from being moved simultaneously with A. Furthermore, the constraint on locations is a contingent fact about the world, not an analytic one - thus, we cannot sensibly escape the dilema by considering any of the relations derived from the others (as many philosphers have pointed out). From a purely behavioral point of view this is how things should be. To an external observer, it would appear that move(A, 1) sometimes changed the location of A and not B (when some simultaneous action occurred that raised A to lo- cation 2), sometimes changed the location of B and not A (when some simultaneous action raised B), and sometimes affected the locations of both A and B. (Of course, the action would always change the location of F). As there is no observation that could allow the observer to detect whether or not another action was occurring simultaneously, there is no way the action mOVep could be distinguished from any other that had the same transition re- lation. For example, there would be no way to distinguish mOveF from an action move> that exhibited the same set of possible be- haviors but, when performed in isolation, left A where it was and moved B to location 2. On the other hand, when reasoning about processes, we do want to be able to make this distinction. For example, there may be two different ways of moving F, one corresponding to moveF and the other to move;. 
In other cases, while an action like moveF might be appropriate to seesaws, an action ana!gous to move; might be needed for describing object movements in other situations. For example, consider the situation where, instead of being parts of a seesaw, A is a source of light and B is F’s shadow. We therefore make a distinction between actions and events - one that is critical for reasoning about processes and plans. That is, an event is simply identified with all its possible occurrences; in particular, two atomic events having the same transition re- lation are considered identical. However, actions with the same transition relation (such as movej7 and move;) are not neces- sarily identical - they may behave differently when performed in isolation and may play different causal roles in a theory of the world. Clearly, therefore, we cannot determine which action we in- tend from knowledge (even complete knowledge) of all the pos- sible state transitions (event occurrences) which constitute per- formance of the action. In particular, we cannot use any general de/auft rufe or minimafity criteria to determine the intended ef- fects of an action when performed in isolation. Indeed, in the case of mOVCF, note that we do not minimize the changes to world relations or maximize their persistence: both A and B change location along with F. It appears, then, that the only thing we can do is to specify what happens when the action occurs in isolation in addition to specifying what happens when other actions occur in paral- lel. This is certainly possible, but the representation would be cumbersome and unnatural. 5 Causality One way to solve this problem is by introducing a notion of causality. As used herein, if an action a1 is stated to cause an action a2, we require that al always occur simultaneously with az. Thus, in this case, a1 could never be performed in isolation - a2 would always occur simultaneously with every occurrence of 01. For example, we might have a causal law to express the fact that whenever a block x is moved, any block on top of z and not somehow restrained (e.g., by a string tied to a door) will also move. We could write this as Vtu, u,z, y, f . (occura(move(z, I)) A on(y, 2) A -redrained( 3 occurs(move(y, f))(b) The notion of causality used by us is actually more general than that described above, and is fully described elsewhere [5]. We use the term in a purely technical sense, and while it has many similarities to commonsense useage, we don’t propose it as a fully-fledged theory of causality. Essentially, we view causal- ity as a relation between atomic actions that is conditional on the state of the world. We also relate causation to the tempo- ral ordering of events, and assume that an action cannot cause another action that precedes it. However, we do allow an event to cause another that occurs simultaneously (as in this paper). This differs from most formal models of causality [8,12,16]. But how does this relate to the problem of persistence and the specification of the effects of actions performed in isolation? The answer is that we can thereby provide axioms that explicity describe how an action affects the world in the context of other actions either occurring or not. For example, consider the action moveF described in the pre- vious section. We begin by modifying the definition of this action so that its only direct effect is the location of the fulcrum F it- Planning: AUTOMATED REASONING / 73 self. This means that the transition relation for ??zovcF wi!! 
have to include world states in which A, F, and B are not colinear, but this is no problem from a technical point of view. Indeed, at least in this case, there is also an intuitive meaning to such worlds; namely, those in which the seesaw is broken. However, there is no problem with requiring all possible world histories (not all world states!) to satisfy the colinearity constraint.

We then add causal laws which force the simultaneous movement of either A or B or both. For example, we might have the following causal law:

∀w, s . occurs(moveF)(s) ∧ (¬∃a' . (occurs(a') ∧ interferes(a', move(A, 1))))(s) ⊃ occurs(move(A, 1))(s)

where interferes(a1, a2)(s) means that it is not possible to perform actions a1 and a2 simultaneously in state s (see Section 3). The intended meaning of this causal law is that, if we perform the action moveF, move(A, 1) is caused to occur simultaneously with moveF unless another action occurs that forces A to occupy a location different from 1. A similar causal law would describe the movement of B. Both laws could be made conditional on the seesaw being intact, if that was desired.

There are a number of things to be observed about this approach. First, it would appear that we should add further causal laws requiring the movement of at least one of A or B in the case that both could not move to location 1. However, this is not necessary. For example, let us assume that, at the moment we perform moveF, some other action occurs simultaneously that moves B to location 2 (without directly affecting the location of A). As the direct effects of neither this action nor the action moveF include the location of A, we might expect application of the above causal law to yield a resulting state in which A is at location 1. However, this is clearly inconsistent with the constraint that A, F, and B must remain colinear. If we examine this more carefully, however, the impossibility of such a world state simply implies that the antecedent of the above causal law must, in this case, be false. That is, there must exist an action that occurs in state s and that cannot be performed simultaneously with move(A, 1). Indeed, this is exactly the action that would have appeared in any causal laws that forced the colinearity constraint to be maintained. The point of this example is that in many cases we do not need to include causal laws to maintain invariant world conditions - we can, instead, use the constraints on world state to infer the existence of the appropriate actions.

Second, the application of causal laws need not yield a unique set of caused actions - it could be that one causal law requires the location of A to change and B not, while another requires the location of B to change and A not. Given only this knowledge of the world, the most we could infer would be that one but not both of the actions occurs - but which one would be unknown. (Interestingly, this bears a strong similarity to the different possible extensions of a theory under certain kinds of default rules [14].)

Third, actions are clearly distinct from events (cf. [1,12,16]). In particular, actions with the same transition relation - i.e., exhibiting the same set of possible behaviors - may play different causal roles. For example, with no outside interference, moveF causes the movement of both A and B, whereas move'F causes the movement of B alone. This is not the same distinction that is made between actions and events in the philosophical literature, but it does have some similarities.
Finally, we may not be able to prove that no interference arises, which, in the above example, would prevent us from inferring that the action move(A, 1) occurs. However, this is not a serious problem - if we cannot prove that the action either occurs or does not, we simply will not know the resulting location of A (unless, of course, we make some additional assumptions about what events are occurring).

Causal laws can be quite complex, and may depend on whether or not other actions occur as well as on conditions that hold in the world. It is the introduction of such laws that allows us to represent what happens when only a subset of all possible actions occur. We gain by having simpler descriptions of actions but, in return, require more complex causal laws. On the other hand, it is now easy to introduce other causal laws, such as ones that describe what happens when a block is moved with a cup on top of it, when the cup is stuck with glue, or tied with a string to a door, or when other blocks are in the path of the movement.

Some predicates are better considered as defined predicates, which avoids overpopulating the world with causal laws. For example, the distance between two objects may be considered a defined predicate. Instead of introducing various causal laws stating how this relation is altered by various move actions, we can simply work with the basic entities of the problem domain and infer the value of the predicate from its definiens when needed.

6 The Frame Problem

The frame problem, as Hayes [7] describes it, is dealt with in our approach by means of the law of persistence. This has a number of advantages.

First, because this law is a property of our action model, and not of our action specification language, we avoid all of the semantic difficulties usually associated with the frame problem.

Second, we avoid the problem of having to state a vast number of uninteresting frame axioms by means of direct-effects formulae, which describe all those relational tuples (and, in the general case, functional values and constants) that can possibly change.

Third, we avoid having unduly complex direct-effects formulae and action representations by introducing causal laws that describe how actions bring about (cause) others. Of course, the causal laws can themselves be complex (just as is the physics of the real world), but the representation and specification of actions is thereby kept simple.

There are also important implementation considerations. The approach outlined here is at least tractable, as the relations and functions that can be affected by the occurrence of an action require, at most, provability of the formulae of interest. Interestingly, one of the most efficient action representations so far employed in AI planning systems - the STRIPS representation [2,10] - is essentially the special case in which (1) the transition relation for each action can be represented by a single precondition-postcondition pair; (2) the postcondition is a conjunction of literals; (3) the direct effects (which correspond to the elements in the delete list) include all the literals mentioned in the postcondition; and (4) no actions ever occur simultaneously with any other. The approach used by Pednault [13] can also be considered the special case in which there are no simultaneous actions.
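To illustrate the STRIPS special case just described, the following sketch (hypothetical Python; the operator format and names are ours, not those of the paper or of the original STRIPS system) shows an action represented as a single precondition-postcondition pair, applied with no simultaneous actions:

# A STRIPS-style operator: one precondition set, postconditions as add/delete sets.
# The delete list plays the role of the direct effects that can possibly change.
move_A_to_1 = {
    "pre":    {("loc", "A", 0)},
    "add":    {("loc", "A", 1)},
    "delete": {("loc", "A", 0)},
}

def apply_operator(state, op):
    # Apply a STRIPS operator to a state (a set of ground literals).
    # Everything not in the delete list persists unchanged - the frame
    # "axioms" are implicit in the representation.
    if not op["pre"] <= state:
        raise ValueError("preconditions not satisfied")
    return (state - op["delete"]) | op["add"]

state = {("loc", "A", 0), ("loc", "F", 0), ("loc", "B", 0)}
print(apply_operator(state, move_A_to_1))
# {('loc', 'A', 1), ('loc', 'F', 0), ('loc', 'B', 0)}  (set ordering may vary)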
Some researchers take a more general view of the frame problem, seeing it as the problem of reasoning about the effects of actions and events with incomplete information about what other actions or processes (usually the environment) may be occurring simultaneously. Unfortunately, this problem is often confused with the representation of actions, with the result that there is usually no clear model-theoretic semantics for the representation.

For example, one of the major problems in reasoning about actions and plans is in determining which actions and events can possibly occur at any given moment. Based on the relative infrequency of "relevant" actions or events, or on the assumption that one would "know about" these if they occurred, it has been common to use various default rules (e.g., [12]) or minimal models (e.g., [9,16]) to constrain the set of possible action occurrences. However, there are many cases where this is unnecessary - where we can prove, on the basis of axioms such as those appearing in this paper, that no actions of interest occur. We may even have axioms that allow one to avoid consideration of whole classes of actions, such as when one knows that certain actions are external to a given process. Thus, in many cases, there is simply no need to use default rules or minimality principles - reasoning about plans and actions need not be nonmonotonic.

In the case that we do need to make assumptions about action occurrences, the use of default rules and circumscription can be very useful. For example, by minimizing the extension of the occurs predicate we can obtain a theory in which the only action occurrences are those that are causally necessary. However, there is no need to limit oneself to such default rules or minimality criteria. There may be domain-specific rules defining what assumptions are reasonable, or one may wish to use a more complicated approach based on information theory. We may be able to make reasonable assumptions about freedom from interference; to assume, for example, that a certain relational tuple will not be influenced by actions in other processes. It is not our intention to consider herein the problem of making useful assumptions about actions and freedom from interference - it is, of course, not a simple problem. However, it is important to keep this problem separate from the issue of action representation. For example, it at first seems reasonable to assume that my car is still where I left it this morning, unless I have information that is inconsistent with that assumption. However, this assumption gets less and less reasonable as hours turn into days, weeks, months, years, and centuries. This puts the problem where it should be - in the area of making reasonable assumptions, not in the area of defining the effects of actions [2,7], the persistency of facts [12], or causal laws [16].

7 Conclusions

We have constructed a model of atomic actions and events that allows for simultaneity, and described the kind of facts required for reasoning about such actions. We introduced a law of persistence that allows the effects of actions to be determined and, most importantly, have shown how the representation of actions and their effects involves no frame axioms or syntactic frame rules.
We also pointed out some deficiencies in existing approaches to reasoning about multiagent domains: for example, that consistency of predications over states or intervals cannot be taken as proof that actions can proceed concurrently, and that models that represent actions simply as the set of all their possible behaviors cannot make certain distinctions critical for planning in multiagent domains. Finally, we showed how the law of persistence, together with the notion of causation, makes it possible to retain a simple model of action while avoiding most of the difficulties associated with the frame problem.

I wish to thank especially Amy Lansky and Ed Pednault, both of whom helped greatly in clarifying many of the ideas presented in this paper.

References

[1] Allen, J. F., "A General Model of Action and Time," Computer Science Report TR 97, University of Rochester, Rochester, New York (1981).
[2] Fikes, R. E., and Nilsson, N. J., "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, 2, pp. 189-208 (1971).
[3] Georgeff, M. P., "A Theory of Action for Multiagent Planning," Proc. AAAI-84, Austin, Texas (1984).
[4] Georgeff, M. P., "A Theory of Process," Workshop on Distributed AI, Sea Ranch, California (1985).
[5] Georgeff, M. P., "Process, Action, and Causality," Workshop on Planning and Reasoning about Action, Timberline Lodge, Mount Hood, Oregon (1986).
[6] Georgeff, M. P., and Lansky, A. L., "Procedural Knowledge," Proc. IEEE, Special Issue on Knowledge Representation (1986).
[7] Hayes, P. J., "The Frame Problem and Related Problems in Artificial Intelligence," in Artificial and Human Thinking, A. Elithorn and D. Jones (eds.), Jossey-Bass (1973).
[8] Lansky, A. L., "Behavioral Specification and Planning for Multiagent Domains," Tech. Note 360, Artificial Intelligence Center, SRI International, Menlo Park, California (1985).
[9] Lifschitz, V., "Circumscription in the Blocks World," Computer Science Working Memo, Stanford University, Stanford, California (1985).
[10] Lifschitz, V., "On the Semantics of STRIPS," Workshop on Planning and Reasoning about Action, Timberline Lodge, Mount Hood, Oregon (1986).
[11] McCarthy, J., and Hayes, P. J., "Some Philosophical Problems from the Standpoint of Artificial Intelligence," in Machine Intelligence 4, pp. 463-502 (1969).
[12] McDermott, D., "A Temporal Logic for Reasoning about Processes and Plans," Cognitive Science, 6, pp. 101-155 (1982).
[13] Pednault, E. P. D., "Toward a Mathematical Theory of Plan Synthesis," Ph.D. thesis, Department of Electrical Engineering, Stanford University, Stanford, California (1986).
[14] Reiter, R., "A Logic for Default Reasoning," Artificial Intelligence, 13, pp. 81-132 (1980).
[15] Rosenschein, S. J., "Plan Synthesis: A Logical Perspective," Proc. IJCAI-81, Vancouver, British Columbia (1981).
[16] Shoham, Y., "Chronological Ignorance: Time, Knowledge, Nonmonotonicity, and Causation," Workshop on Planning and Reasoning about Action, Timberline Lodge, Mount Hood, Oregon (1986).
PLANNING WITH ABSTRACTION

Josh Tenenberg
Department of Computer Science
University of Rochester
Rochester, NY 14620
josh@rochester

Abstract

Intelligent problem solvers for complex domains must have the capability of reasoning abstractly about tasks that they are called upon to solve. The method of abstraction presented here allows one to reason analogically and hierarchically, making the task of formalizing domain theories easier for the system designer while also allowing for increased computational efficiency. It is believed that reasoning about concepts that share structure is essential to improving the performance of automated planning systems by allowing one to apply previous computational effort expended in the solution of one problem to a broad range of new problems.

1. Introduction

Most artificial intelligence planning systems explore issues of search and world representation in toy domains. The blocks world is such a domain, with one of its salient and unfortunate characteristics being that all represented objects (blocks) are modeled as being perfectly uniform in physical features. We would like to model a richer domain, where objects bear varying degrees of similarity to one another. For instance, we might wish to model blocks and trunks, which are both stackable but of different sizes and weights, or boxes and bottles, which are both containers but of different shape and material. As a consequence of solving problems in this richer domain, we will want plans for solved problems to be applicable to new problems based upon the similarities of the objects to be manipulated. So, for instance, a plan for stacking one block on top of another will be applicable to a similar trunk stacking in terms of its gross features, but will differ at more detailed levels. We will present a representation for plans of varying degrees of abstraction based upon a hierarchical organization of both objects and actions that provides a qualitative similarity metric for problems posed to the planner. This plan representation has the following property. When a plan

objects of type v are also objects of type w, and inherit all properties provable of type w. We will call w an abstraction of v, and v a specialization of w. These taxonomies enable us to make assertions about a class of objects that we need not repeat for all of its subclasses. So, for instance, if it is asserted that all supportable objects can be stacked, then it need not be asserted separately that blocks can be stacked, boxes can be stacked, and trays can be stacked. It suffices to assert that blocks, boxes and trays are all supportable objects. This structure is not strictly a tree, which means that each object can be abstracted along several different dimensions, with the effect that every node inherits all of the properties of every other node from which there is a path. For example, a Bottle is both a Container and a Holdable object, since there are paths in the graph from Bottle to both Holdable and Container. Note that this structure admits no exceptions. We prefer instead to weaken those assertions we can make of a class in order to preserve consistency.

[figure 1: partial object hierarchy; recoverable labels include PhysObs, Supporter, Contents, Container, Holdable, Block, Box, Tray, Room, Glass, and Bottle]

We would like to represent actions similarly.
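Before turning to actions, the object taxonomy just described can be viewed as a directed acyclic graph in which a node inherits every property provable of any node reachable from it along inheritance arcs. The sketch below is hypothetical Python (the node names are taken loosely from figure 1, but the specific arcs, data structures, and function names are ours and are illustrative only); it shows the multiple abstraction of Bottle as both a Container and a Holdable object:

# abstraction_of[v] lists the immediate abstractions w of each object type v.
abstraction_of = {
    "Supporter": ["PhysObs"], "Container": ["PhysObs"], "Holdable": ["PhysObs"],
    "Block": ["Supporter", "Holdable"], "Tray": ["Supporter", "Holdable"],
    "Box": ["Supporter", "Container", "Holdable"], "Room": ["Container"],
    "Bottle": ["Container", "Holdable"], "Glass": ["Container", "Holdable"],
}

# Properties asserted once of a class and inherited by all of its specializations.
properties = {"Supporter": {"stackable"}, "Holdable": {"graspable"}}

def abstractions(v):
    # All abstractions of v (every node reachable along inheritance arcs), plus v itself.
    seen, stack = set(), [v]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(abstraction_of.get(node, []))
    return seen

def provable_properties(v):
    # Union of the properties of v and of every abstraction of v.
    return set().union(*(properties.get(w, set()) for w in abstractions(v)))

print("Container" in abstractions("Bottle"))  # True
print(provable_properties("Bottle"))          # {'graspable'}
print(provable_properties("Box"))             # {'stackable', 'graspable'} (order may vary)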
Typically [McCarthy and Hayes 1969], actions are represented in terms of those conditions that suffice to hold before the performance of the action (called preconditions) to ensure that the desired effects will hold after the performance of the action. However, an inherent inefficiency with this is that many actions share preconditions and effects which must be specified separately for each action, providing no means with which to determine which actions are similar and hence replaceable by one another in analogous problems. What we will do alternatively is to provide an action taxonomy, by grouping actions into inheritance classes. An example of a partial action hierarchy is given in figure 2. The boxed nodes denote actions and the dotted arcs between actions denote inheritance. As with the object hierarchy, if there is an inheritance arc from action v to action w, we say that v is a specialization of w, and w is an abstraction of v. The solid arcs from a literal into an action denote necessary preconditions for that action, and the solid arcs from an action to a literal denote effects of that action. Each action inherits all preconditions and effects from every one of its abstractions. So, for instance, CarriedAloft is a precondition of placeIn(x,y) inherited from put(x,y), and In(x,y) is an effect of placeIn(x,y) inherited from contain(x,y). As we proceed down this graph from the root node traversing inheritance arcs backward, by collecting the preconditions for each action encountered, we are adding increasing constraints on the context in which the action may be performed in order to have the desired effects. At the source nodes, which represent the primitive actions, the union of all of the preconditions on every outgoing path constitutes a sufficient set of preconditions. An action can only be applied if its sufficient set of preconditions are all satisfied in the current state. The sufficient set of preconditions for placeInBox has been italicized, and is exactly the union of those preconditions for each action type on all paths from placeInBox to contain. Additional action hierarchies we might have are remove, with specializations pourOut and liftOut, and the hierarchy open, with specializations openDoor and removeLid.

[figure 2: partial action hierarchy; recoverable labels include contain, Open(y), NextTo(ROBOT,y), Container(y), In(x,y), ConnectedTo(ROBOT,x), CarriedAloft, In(ROBOT,y), In(x,z), In(hand(ROBOT),y), Empty(z), and Box(y)]

III. Plan Abstractions

Planning involves finding a temporally ordered sequence of primitive actions which, when applied with respect to the temporal ordering from a given initial state, produces a state of the world in which the desired goals hold, and for which the sufficient preconditions for each primitive action must be satisfied by the state in which the action is performed. In this paper, a total temporal ordering of actions will be used for simplicity, although the ideas presented here can be extended to more general temporal orderings (partial orderings [Sacerdoti 77], concurrent actions [Allen 84]). Such a totally ordered sequence of actions will be called a primitive plan, or simply a plan. Finding plans to solve given problems involves searching for a state of the world which satisfies our goals from those states of the world which are possible from the initial state through the performance of one or more actions.
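The sufficient precondition set of a primitive action in the action hierarchy of the previous section is just the union of the preconditions attached along every path from that primitive up to the root. A minimal sketch (hypothetical Python; the assignment of literals to levels is transcribed loosely from figure 2 and is illustrative only):

# Each action names its immediate abstractions and its own (locally stated) preconditions.
actions = {
    "contain":    {"abs": [],          "pre": {"Open(y)", "Container(y)", "NextTo(ROBOT,y)"}},
    "put":        {"abs": ["contain"], "pre": {"CarriedAloft(x)"}},
    "placeIn":    {"abs": ["put"],     "pre": {"ConnectedTo(ROBOT,x)"}},
    "placeInBox": {"abs": ["placeIn"], "pre": {"Box(y)"}},
}

def sufficient_preconditions(action):
    # Union of the preconditions on every path from `action` to the root:
    # a primitive is applicable only when this whole set holds.
    result = set(actions[action]["pre"])
    for parent in actions[action]["abs"]:
        result |= sufficient_preconditions(parent)
    return result

print(sorted(sufficient_preconditions("placeInBox")))
# ['Box(y)', 'CarriedAloft(x)', 'ConnectedTo(ROBOT,x)', 'Container(y)',
#  'NextTo(ROBOT,y)', 'Open(y)']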
To reduce this search, we will make use of plans that have already been found for solving previous problems. In order to use saved plans, the similarity between the previous problem solved by this plan and the current problem we wish to solve must be evaluated. We describe p/an graphs which are a means for performing this evaluation. These plan graphs are generalizations of triangle tables [Fikes, Hart, and Nilsson 19721. Using explanation based generalization [Mitchell, Keller, and Kedar-Cabelli 1985) techniques, from a primitive plan a plan graph is constructed which embeds the causal structure of the primitive plan such that the purpose of each plan step can be determined. Each action is represented not only as a primitive, but as a path from a primitive to an abstract action taken from an action hierarchy, where the causal structure enables us to determine which hierarchy to choose. Given a new problem, this plan graph is searched for its most specific subgraph whose causal structure is consistent with the new problem. This subgraph represents an abstract plan for the new problem, and will be used as a guide for finding the primitive plan for this problem. If an action in the original plan cannot be applied due to some difference between the old and the new problem, such as a difference in corresponding objects manipulated, (e.g., a ball in one case, a box in the other) we can replace this action by choosing another which is a different specialization of the same abstraction (pickupBall replacing pickup8ox, both of which are specializations of pickup). Thus many problems can be solved by performing search within the constraints of the abstract plan we have retrieved for this problem, rather than having to perform an unconstrained global search. A plan graph of a plan will have nodes for each action in the primitive plan, and nodes with directed arcs for each precondition and effect of these actions. If an effect of an action satisfies a precondition of another, this will appear as an arc from the first action, to its effect, to the second action. These causal chains establish the purpose of each action in terms of the overall goal of the plan. We will formally define plan graphs in two stages. The first stage includes only the causal structure, while the second incorporates abstractions. A plan graph G = (V,E) for primitive plan P is a directed acyclic graph where V and E are defined as follows. The set of vertices is partitioned into two subsets V, and V,, precondition nodes and action nodes. Likewise, E is partitioned into two two subsets Ec and E,, causal edges and specialization edges. For every action in P, there is a node in V, labeled by its corresponding action. If p is an effect of action a in P, then there is a corresponding node in V, labeled p, and the edge (a,p) is in Ec, and for every action b in P that this instance of p satisfies there is an edge (p,b) in E,. For example, if action Al establishes condition K which is a precondition of action A2, then (K,AZ) is in E, if and only if there does not exist action A3 that occurs after Al but before A2 that clobbers K (establishes 1 K). Clearly any precondition of each action that is not satisfied by a previous action must be satisfied by the initial state. For every such precondition p there is a corresponding node in VP labeled p, and for every action a in P for which this instance of p is a precondition, there is an edge (p,a) in E,. 
Each action node in E, is additionally labeled by a number indicating its temporal order, the nth action labeled by n. This graph will be specified further by the addition of action abstractions, but note that as it stands it is similar to a graph Planning: AUTOMATED REASONING / 77 version of triangle tables [Fikes, Hart, and Nilsson 19721, and fulfills much the same function. We can use the same technique as used in triangle tables for generalizing a plan by replacing all constants in the action and precondition nodes by variables, and redoing the precondition proofs to add constraints on variables in different actions of the plan that should be bound to the same object (see previous reference for details). These constraints will have to be added to the graph as additional preconditions, but are left off in our examples for clarity. The preconditions for this plan graph are the set of source nodes (nodes with no incoming arcs), and the goals of this plan graph are the set of sink nodes (nodes with no outgoing arcs). This graph has the property that any subset of its goals can be achieved from any initial situation in which we can instantiate all of the preconditions by applying each of the actions in order. A plan graph for the problem in figure 3 of moving a ball from one box to another is given in figure 4 (nodes representing preconditions satisfied by the initial state rather than by a previous action are not included in this figure). This graph will be altered to include abstract actions in a straightforward fashion. figure 3 1 n(k,y) t 0 5 y IplacelnBox(x,y) 1 y t ConnectedTo(ROBOT, x) CarriedAloft NextTo(ROBOT,y) NextTo(ROBOT,x) \I reachlnBox(hand(ROBOT), z) 1 117 1 a- figure 4 Figure 5 is an example of the altered plan from figure 4 (the outlined subgraph of figure 5 will be explained later). This alteration is done as follows. For each primitive action A0 in V,, we will add nodes to V, labeled Al, AZ, . . . . A, (where n may be different for each primitive action) and edges to E, labeled (Aa, A,), (A,, A*), . . . . (A,-,, A,), where there exists some action hierarchy such that A, is an abstraction of each Ak for k<i, and A, satisfies at least one effect p for which there exists a node in V, labeled p and an edge in E, labeled (Ao, p). More simply, we add an abstraction path from an action hierarchy to the plan graph. We then redirect each precondition arc (p, Ao) to point to the highest abstraction A, for which there is an arc (p, A,) in the chosen action hierarchy. In other words, p is a precondition of abstraction A,, but not of any abstraction of A,. For instance, p/ace/&ox is replaced by the abstraction sequence placeinBox, place/n, put, contain, and the preconditions NextTo and ConnectedTo are redirected to contain, while CarriedAloft is redirected to put. We additionally redirect effect arcs (Aa, p) such that the effects come from the highest abstraction A, for which there is an arc (A,, p) in the chosen action hierarchy. In other words, p is an effect of abstraction A,, but not of any abstraction of A,. Turning again to figure 5, the effect arc into ConnectedTo is redirected to come from attachToAgent, since this will be an effect of every specialization of this abstraction, and the effect arc to Grasped is redirected to come from grasp. We will additionally add temporal numberings to each abstraction on a path from each primitive action (although the examples will only number the highest abstractions for each action). r _ __-_-.-.-.-.-.-. 
-.-.-,-.-.-.-‘ -‘ -‘ -.-‘ -‘ -‘ -‘ -‘ -‘ -’ -’ -’ -’ -.-.~ Inky) ‘. ConnectedTo(ROBOT, x) ’ x., CarriedAloft 4 f NextTo(ROBOT,x) i $x&J I 4 i.-.-.-.-. (_ -.- -.-.-.-.-. IgrarpBailoI mox(hand(ROBOT), x, z) 1 figure 5 The primitive action nodes of this plan graph indicate the primitive plan that solves the problem for which the plan was constructed. The distance between an action node and one of the goals of the entire plan graph along its shortest causal chain is a rough measure of the significance of the action to the overall plan. The shorter the distance, the more likely this action or an abstraction of it will be required in a similar problem; the greater the distance, the less likely this action will be useful in a similar problem. This plan can thus be abstracted by one or both of the following: removing causal chains from one or more precondition nodes, and removing specialization paths from one or more action nodes. Each resultant partial plan graph represents a plan with some of the detail unspecified. -8 / SCIENCE More formally, a partial plan graph P of plan graph P’ is any subset of the nodes and arcs of P’ such that no source nodes are action nodes, at least one sink node (goal) of P’ is in P, and these will be the only sink nodes in P, and for every node in P, there exists at least one path from this node to a sink node (unless that node is itself a sink node). Additionally, if b is an action node, then every node p for which there exists an arc (p,b) in P’ will be added to P along with this arc. We will additionally “mark” each source node in P that was also a source node in P’. This mark indicates that this precondition is satisfied by the initial state of the original problem, as opposed to being satisfied by the performance of a previous action. The reason for marking these nodes will be explained later. From this definition, there will be several partial plan graphs that can be constructed from a given plan graph. The subgraph outlined by the dotted line in figure 5 is one exapmle. As before, the preconditions of a partial plan graph are the formulas attached to the source nodes (not included in the given figures), while the goals of each partial plan graph are the formulas attached to the sink nodes. Figure 7 is the plan graph for a plan to solve the problem from figure 6. Here a box must be moved between rooms. In both this problem and that of figure 3, the goal is to move an object from one container to another. This draws analogies between rooms and boxes, which are both containers according to our object hierarchy, and between placing objects in boxes and pushing objects into rooms, which are both containment actions, according to our action hierarchy. At an abstract level, the plan of attaching the object to the agent, and moving the agent from one container to the other suffices for both problems, and in fact this is the abstract plan represented by the identical partial plan graph that is outlined by the dotted line in both figures 5 and 7. So although the problems that these graphs solve are different, at this level of abstraction they are identical. 1 figure 6 We can generalize from this in that for any partial plan graph P of plan graph P’, there will exist a set II of plan graphs for which P will be a partial plan graph of each of them. That is, P will describe each primitive plan of each element from this set at some level of abstraction. We will use the symbol TIP to denote the largest such set. 
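As a concretization of the plan-graph construction defined above, the following sketch (hypothetical Python; the data structures and the tiny two-step plan are ours, not the paper's) builds the causal edges: an effect of an earlier action that satisfies a precondition of a later action, and is not clobbered in between, contributes an edge through a shared precondition node, while unproduced preconditions become the marked source nodes satisfied by the initial state.

# A primitive plan: each step has preconditions, added effects, and deleted effects.
plan = [
    {"name": "pickup(x)",       "pre": {"Clear(x)", "HandEmpty"}, "add": {"Holding(x)"}, "del": {"HandEmpty"}},
    {"name": "placeInBox(x,y)", "pre": {"Holding(x)", "Open(y)"}, "add": {"In(x,y)"},    "del": {"Holding(x)"}},
]

def causal_edges(plan):
    # Edges (i, p, j): action i establishes precondition p of action j,
    # and no intervening action k (i < k < j) clobbers p.
    edges = []
    for j, consumer in enumerate(plan):
        for p in consumer["pre"]:
            for i in range(j - 1, -1, -1):
                if p in plan[i]["add"]:
                    if all(p not in plan[k]["del"] for k in range(i + 1, j)):
                        edges.append((i, p, j))
                    break
                if p in plan[i]["del"]:
                    break
    return edges

print(causal_edges(plan))   # [(0, 'Holding(x)', 1)]
# Preconditions with no producer (Clear(x), HandEmpty, Open(y)) must be satisfied
# by the initial state; in the plan graph they become the marked source nodes.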
For instance, if we label the outlined partial plan graph of figure 5 K, then the graphs of figures 5 and 7 are in ITK. We will say that the primitive plan of each member of TIP is an expansion of the partial plan graph P. The more general P is, that is, the smaller a subgraph of P’ it is and hence the more abstract each of its constituent actions are and the smaller its causal chains, the larger will be the cardinality of TIP. We will say that partial plan graph P solves problem Q if and only if there exists an element of IIp whose primitive plan solves Q for some instantiation of all of its variables by ground terms. Given a partial plan graph P and a problem instance Q that P solves, we can find an expansion of P that solves Q by only searching for specializations of the abstract actions of P without r’ -.-.-.-.- -.-.-.- _ _ _ _ , _ . _ -,- - - -.- - _ -.-.- - - -.- - - w . 7 Inky) 4 - / ( FI 1 NextTo(;BOT,y) i \ ‘, i I f ConnectedTo(ROBOT, x) . \., I A I I’\ L ._.-.-. -.-.A .-.-.- -.-.A Grasped(x) i / . . i NextTo(ROBOT,x) i I 4 r ._ .- -. - .A - -.- i j~getNeat(x) L.- .-.- *-.-.-.-.- - - i ImoveTo(x)I figure 7 having to backtrack through the actions of P itself. P thus serves as an abstract guide to solving Q. So, for instance, given the partial plan graph outlined in figure 5, we can find the remainder of the primitive plan (those actions not inside the dotted line) that solves the problem from figure 6 by only having to do local search. By this, we mean that for any non-primitive action in this partial plan graph, (such as contain, in figure 7), we follow arcs backward through that abstract action’s specialization tree (figure 2 in this case) until we find a primitive action whose preconditions are all satisfied by the state in which it is executed (push/n, in this example). If no such primitive exists, then additional pnmltlves must be inserted in this plan to establish the sufficient preconditions for some specialization, where these inserted actions do not clobber preconditions of any of the already established succeeding actions Unfortunately, we cannot In general know if a given partral plan solves a given problem instance unless we perform the possibly unbounded local search for the primitive plan that verifies this. It may not be possible to find specializations of each extant abstract action without reordering some of the actions, and therefore backtracking through and altering the partial plan graph itself. Although we are not guaranteed certainty, we can still use the plan graphs as a heu%st:c for search. We will define a partial plan graph P as being applmble to a problem Q If and only if the goals of Q are a subset of the goals of P, and the marked preconditions of P are a subset of the conditions that hold In the initial state of Q. Recall that we marked all of those preconditions in a partial plan graph that were satisfied by the Initial state of the original problem. Applicability thus means that the current Initial state satisfies the same preconditions at this /eve/ of abstractron as the original initial state. Suppose we wish to find a primitive plan for problem Q consisting of an initial state and a set of goals (for simplicity we Planning: AUTOML4TED REASONING , ‘9 will assume that this goal is a single literal). Additionally suppose that the goals of plan graph P are the same as those of Q. We will attempt to find the most specialized partial plan graph P’ of P for which an expansion exists that will solve Q, even though it is possible that no such P’ exists. 
We will do this by traversing P backward from its goal node through the causal and specialization arcs, considering increasingly larger partial plans of P. We will continue this traversal as long as the partial plan represented by all of the paths pursued is still applicable to Q, stopping when we can no longer traverse any arc and still have applicability of the current partial plan to problem Q. The size of the partial plan that we have constructed is thus a qualitative measure of similarity between the original problem and the current one. If there are only insignificant differences, the partial plan may be equivalent to the entire plan graph. If the differences between the problems are large, this may result in a graph of only a few actions expressed at high levels of abstraction. But given the exponential nature of searching through combinatorial spaces, knowing the temporal ordering of even a few of the action abstractions that will eventually appear specialized in our plan may help significantly. IV. Previous Research Abstraction in planning is typically viewed in terms of decompositional abstraction as used in NOAH-like planners [Sacerdoti 19771. In these planners, action A is an abstraction of actions B,C,D if the latter actions are each steps in the performance of action A. This type of abstraction is thus orthogonal to inheritance abstraction presented here. ABSTRIPS [Sacerdoti 19741, although using different techniques, shares some important similarities. ABSTRIPS is an iterative planner, where increasingly large subsets of preconditions of each action are considered at each successive iteration. The developed plan at each level is then used to guide search at more detailed levels, where the satisfaction of emergent preconditions is attempted locally, similar to what is done in this paper. Of even greater similarity, but within a different domain, is the work presented in [Plaisted 19811, who uses abstraction within a theorem prover. He details how a desired proof over a set of clauses can be obtained by first mapping the clause set to a set of abstract clauses, obtaining a proof in this (hopefully simpler) space, and then using this proof as a guide in finding the proof in the original, detailed space. His mapping process and abstract proof are similar to our search for an abstract plan within our saved plan space - but rather than constructing an abstract plan for each new problem, we attempt to appropriate one from a previously solved problem. V. Conclusion The primary motivation for using abstraction was so that search for solutions to new problems can be improved by using solutions to old problems. We believe that this approach can be used to these ends in a domain in which objects are distinguishable at various levels of detail. We will try matching abstract plans to problems that have the same goals. Any such new problem whose initial state does not contain all of the preconditions of the original initial state will thus not match the abstract plan at every level, but will likely do so at some level. The partial plan graph still provides two important functions in this case. First, it ignores “unimportant” preconditions at the most general levels, where the importance of a precondition is determined by the height at which it appears in the action hierarchy. Second, the search space of the new problem can be explored along those paths that do not match the original problem, while attempting to leave intact those paths that do match. 
We must point out that the abstraction described in this paper has not been implemented for even a small domain. In fact, one of the obstacles to doing such an implementation is that one may likely only see benefits in a large domain. Thus, there will be little point to use this method as a representation for the vanilla blocks world. An additional issue is in the choice of problems that the system will encounter. One can always construct problem sequences given as input to the problem solving system such that the abstractions in the model will optimize performance. By the same token, one can always construct problem sequences where the abstractions will give quite poor performance. The ultimate test of a set of abstractions will therefore be empirical in that they must be cost-effective (in terms of some resource measure) only as compared with other problem solvers (human or machine) for a given domain. We can make no such claims for the particular abstractions of the limited physical world domain illustrated in this paper. The importance of this work is in how we can structure knowledge for solving problems in domains that are far richer than the ones in which the current generation of planners have approached. It is believed that inheritance abstraction will be a powerful technique in this endeavor. Special thanks to my advisor, Dana Ballard, whose energy, knowledge, piercing insights and trust have made it all worthwhile, to Leo Hartman, who always seems to have an answer when an answer is needed, and to Jay Weber, who will hopefully solve the questions of how we go about constructing abstraction hierarchies. References [Allen 841 Allen, J.F., “Towards a General Theory of Action and Time”, Artificial Intelligence 23: 123 - 154, 1984. [Fikes, Hart, and Nilsson 19721 Fikes, R., Hart, P. , and Nilsson, N., “Learning and executing generalized robot plans”, Artificial Intelligence 3:251 - 288, 1972 [Hendrix 19791 Hendrix, G.C., “Encoding Knowledge in Partitioned Networks” in, Associative Networks, ed. Findler, N.V. 1979 [McCarthy and Hayes 19691 McCarthy, J., and Hayes, P., “Some philosophical problems forom the standpoint of artificial intelligence”, In B.Meltzer and D. Michie (editors), Machine Intelligence 4, 1969. [Mitchell, Keller and Kedar-Cabelli 19851 Mitchell, T., Keller, R. and Kedar-Cabelli, S., “Explanation Based Generalization: A Unifying View”, Rutgers Computer Science Dept. ML-TR-2, 1985. [Plaisted 19811 Plaisted, D., “Theorem Proving with Abstraction”, Artificial intelligence 16:47-108, 1981 [Sacerdoti 19741 Sacerdoti, E., “Planning in a hierarchy of abstraction spaces”, Artificial Intelligence 5: 115 - 135, 1974. [Sacerdoti 19771 Sacerdoti, E. A structure for plans and behavior. American Elsevier Publishing Company, New York, 1977 80 / SCIENCE
GENERATING PERCEPTION REQUESTS AND EXPECTATIONS TO VERIFY THE EXECUTION OF PLANS Richard J. Doyle David J. Atkinson Rajkumar S. Doshi Jet Propulsion Laboratory 4800 Oak Grove Drive, Pasadena, CA 91109 ABSTRACT This paper addresses the problem of verifying plan execution. An implemented computer program which is part of the execution monitoring process for an experimental robot system is described. The program analyzes a plan and automatically inserts appropriate perception requests into the plan and generates anticipated sensor values. Real-time confirmation of these expectations implies successful plan execution. The implemented plan verification strategy and knowledge representation are described. Several issues and extensions of the method are discussed, including a language for plan verification, heuristics for constraining plan verification, and methods for analyzing plans at multiple levels of abstraction to determine context-dependent verification strategies. 1. THE PROBLEM In a partially-modelled real world, an agent executing a plan may not actually achieve desired goals. Failures in the execution of plans are always possible because of the difficulty in eliminating uncertainty in world models and of a priori determining all possible interventions. Given these potential failures, the expected effects of actions in a plan must be verified at execution time. In this paper, we address the problem of providing an execution monitoring system with the information it needs to verify the execution of a plan in real time. Our solution is to identify acquirable perceptions which serve as more reliable verifications of the successful execution of actions in a plan than do the inferences directly derivable from the plan itself. Assertions which appear as preconditions and postconditions in plan actions are mapped to appropriate sensor requests and expectations describing a set of values. Observing a value from the expectation set on the indicated sensor at the appropriate time during execution of the action implies that the assertion holds. The strategies for verifying the execution of actions are derived from the intentions behind their use. The knowledge of which perceptions and expectations are appropriate for which actions is represented by verification operators. These ideas have been implemented in a working computer program called GRIPE (Generator of Requests Involving Perceptions, and Expectations). After describing this program, we propose several generalizations of its results by examining the issues involved in Selection, or determining which actions in a plan to monitor, and Generation, or how to verify the successful execution of particular actions. 1.1. CONTEXT OF THE PROBLEM Generating perception requests and expectations to verify the execution of actions in a plan is only one aspect of a robust control system for an intelligent agent. Research on such a control system is underway at the Jet Propulsion Laboratory. In our system, known as PEER (Planning, Execution monitoring, Error interpretation and Recovery) [Atkinson, 19861 [Friedman, 19831 [Porta, 19861 several cooperating knowledge-based modules communicate through a blackboard [James, 19851 in order to generate plans, monitor those plans, interpret errors in their execution, and attempt to recover from those errors. In this paper we concentrate on part of the execution monitoring task The primary application for PEER is the proposed NASA/JPL Telerobot, intended for satellite servicing at the U.S. space station. 
The current testbed scenarios for the Telerobot include a subset of the Solar Max satellite repairs recently accomplished by shuttle astronauts. Broadly speaking, the tasks which must be accomplished by execution monitoring are Selection, Generation, Detection/Comparison, and Interpretation. The Selection task must determine which effects of actions in the plan require monitoring. The Generation task involves determining the appropriate sensors to employ to verify assertions, and the nominal sensors values to expect. This is the task accomplished by the GRIPE system, which we discuss in detail below. The Detection/Comparison monitoring task handles the job of recognizing significant events on sensors and then comparing these events with the corresponding expectations. Finally, the Interpretation task involves explicating the effects of failed expectations on subsequent plan actions. We will discuss all of these in more detail. 1.2. OTHER WORK Monitoring task execution and feedback have been the topic of research in Al for quite some time. Attention has been focused on monitoring at the task level, the geometric and physical levels, and also at the servo level. Early work which exposed the role of uncertainty in planning and other problems in error recovery includes [Fikes, 19721, [Munson, 19721, and [Chien, 19751. Sacerdoti discussed the issues of monitoring and verification in NOAH in detail [Sacerdoti, 19741 [Sacerdoti, 19771. This work illustrated the role which planning at multiple levels of abstraction could play in monitoring. NOAH used the plan hierarchy as a guide for asking a human to verify plan assertions. Planning: AUTOMATED REASONING / 8 1 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Some recent research in planning and execution monitoring has focused on handling uncertainty. A planner implemented by Brooks reasons about the propagation and accumulation of errors [Brooks, 19821. It modifies a plan by inserting sensing operations and constraints which ensure that the plan does not become untenable. Erdmann’s method for planning trajectory motions utilizes a backprojection algorithm that geometrically captures the uncertainty in motion [Erdmann, 19851. Donald also addressed the the problem of motion planning with uncertainty in sensing, control and the geometric models of the robot and its environment [Donald, 19861. He proposed a formal framework for error detection and recovery. [Wilkins, 19821 and [Wilkins, 19851 deals extensively with planning actions to achieve goals. Wilkins also deals with Error Recovery which is the problem of recovering from errors that could occur at execution time. To be precise, Wilkins does not deal with the problem of planning to monitor the plan generated by the planner. [Tate, 19841 discusses the usefulness of the intent and the rich represtation of plans. He also mentions the issues in goal ordering, goal interaction, planning with time, cost and resource limitations, and interfacing the planner with other subsystems. He also throws some light on some solutions to the problem of Error Recovery. He mentions the problem of Execution Monitoring but has not discussed the problems or issues or any related solutions. Other recent research has also addressed the problem of using sensors to verify plan execution. Van Baalen CVanBaalen, 19841 implemented a planner that inserts sensory action requests into a plan if an assertion of an operator is manually tagged “MAYBE”. 
Miller [Miller 19851 includes continuous monitoring and monitoring functions in a route navigation planner. His focus is on the problem of coordination of multiple time dependent tasks in a well-known environment, including sensor and effector tasks involving feedback. Gini [Gini et al., 19851 have developed a method which uses the intent of a robot plan to determine what sensor conditions to check for at various points in the plan. In the final executable plan, the system inserts instructions after each operator to check approriate sensors for all possible execution errors. Fox and Smith [Fox et al., 19841 have also acknowledged the need to detect and react to unexpected events in the domain of job shop scheduling. 2. IMPLEMENTATION The algorithms presented in this paper have been implemented in a working computer program called GRIPE. In addition to the knowledge sources supplied to the program, the basic input is a plan specification as described below. GRIPE’s output consists of a modified plan which includes sensing operations, expectations about sensor values to be used by a sensor monitoring program, and subgoals for the planner to plan required sensor operations or establish preconditions for sensing. GRIPE has been tested on a segment of the JPL TeleRobot demonstration scenario and generates a plan of 67 steps modified to include perception requests and additional output, as described above. The examples shown below are drawn from this test case. GRIPE has been tested on examples from the Solar Max satellite repair domain. The system generates a modified plan which include perception requests, as well as expectations about those perceptions and subgoals for acquiring those perceptions. The following sections describe this process in more detail. 2.1. VERIFICATION STRATEGY The basic input to the GRIPE system is a plan specification. GRIPE prepares a plan for execution monitoring by examining the preconditions and postconditions of each action in the plan. For each of these assertions GRIPE generates an appropriate perception request and an expectation which, if verified, implies that the assertion holds. During execution, an action is commanded when all of its preconditions are verified and its successful execution is signalled when all of its postconditions are verified. GRIPE prepares a plan for execution monitoring by examining the preconditions and postconditions of each action in the plan. For each of these assertions GRIPE generates an appropriate perception request and an expectation which, if verified, implies that the assertion holds. An action is commanded when all of its preconditions are verified and its successful execution is signalled when all of its postconditions are verified. GRIPE uses dependency information between conditions established as postconditions in one action and required as preconditions in another. If the establishment and use of a condition occurs in consecutive actions, the condition is verified only once. Otherwise, the condition is verified when it is established, and re-verified when it is needed. The method of verifying the execution of an action is derived by examining the intention behind its use. The knowledge of which perceptions and expectations are appropriate for which actions is encoded in verification operators, described below. In the prototype GRIPE implementation, we assume that actions have a single intention. In general, however, the intent of an action may vary according to the context in which it appears. 
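The verification strategy just described amounts to a single pass over the plan: for each precondition and postcondition of each action, look up a matching verification operator and emit the perception request and expectation it specifies, checking preconditions before the action is commanded and postconditions after it completes. The sketch below is hypothetical Python, not GRIPE's actual code; the operator table, relation names, and helper functions are illustrative assumptions only.

# A verification operator maps an assertion to a perception request and an
# expectation over the corresponding sensor (cf. Figures 3 and 4).
verification_ops = {
    "position-of": {
        "sensor": "ARM-KINESTHETIC",
        "perception":  lambda obj, t: ("WHERE", obj, t),
        "expectation": lambda obj, val, t: ("SENSE=", ("POSITION-OF-SENSED", obj, t), val),
    },
    "force-of": {
        "sensor": "FORCE",
        "perception":  lambda obj, t: ("FEEL", obj, t),
        "expectation": lambda obj, val, t: ("SENSE=", ("FORCE-OF-SENSED", obj, t), val),
    },
}

def instrument(plan):
    # Emit perception requests/expectations for preconditions before each action
    # and for postconditions after it.
    monitored = []
    for action in plan:
        for relation, obj, value in action["pre"]:
            op = verification_ops.get(relation)
            if op:
                monitored.append(op["perception"](obj, action["start"]))
                monitored.append(op["expectation"](obj, value, action["start"]))
        monitored.append(("DO", action["name"]))
        for relation, obj, value in action["post"]:
            op = verification_ops.get(relation)
            if op:
                monitored.append(op["perception"](obj, action["stop"]))
                monitored.append(op["expectation"](obj, value, action["stop"]))
    return monitored

plan = [{"name": "MOVE(right-end-effector, handle)", "start": 2, "stop": 3,
         "pre":  [("position-of", "right-end-effector", "NEAR(handle)")],
         "post": [("position-of", "right-end-effector", "AT(handle)")]}]
for step in instrument(plan):
    print(step)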
As an example, consider moving a robot’s arm as a precondition to a grasp. This operation may require high accuracy. A combination of sensors such as position encoders in the arm, proximity sensors at the end effector and vision could be used to ensure that the end-effector is properly placed to grasp this object. On the other hand, moving the arm away from the object after the release operation may require very little verification. The available latitude in the final position of the arm may be large. In this case, a cursory check on the position encoders may suffice. In the prototype GRIPE implementation, we assume that actions have a single intention. 2.2. REPRESENTATION Before examining in detail how GRIPE generates perception requests and expectations, we describe our representation for plans and our models for actions and sensors in the JPL TeleRobot domain. A plan is a totally ordered sequence of actions representing a schedule of commands to an agent. The dependencies among actions are maintained explicitly. X2 / SCIENCE Actions are modelled in the situation calculus style [Fikes, 19711, with specified preconditions and postconditions. In addition, an explicit duration for each action is determined and represented by a start and stop time. Currently, we model all actions as having the same duration. As an example, the action operator for GRASP is shown in Figure #l. 2.3. VERIFICATION OPERATORS The knowledge of how to verify the assertions which appear as preconditions and postconditions in actions is captured by verification operators. Verification operators map assertions to appropriate perception requests, expectations, and possibly subgoals. The definition of verification operators is shown in Figure #3. As an example, the verification operator for determining that an object is at a particular location is shown ‘in Figure #4. (create-action-operator :type GRASP :action (GRASP End-Effector Object before after) : precondit iona ((MODEL= (POSITION-OF-MODELLHD Object before) (POSITION-OF-MODELLED End-Effector before) (MODEL= (FORCE-OF-MODELLHD End-Effector before) 0) (MODEL= (WIDTH-OF-MODELLED End-Effector before) 'Open) 1 :postconditions ((MODEL= (FORCE-OF-MODELLED End-Effector after) (COMPLIANT-GRASP-FORCE-FOR Object after)) (MODEL= (WIDTH-OF-MODELLED End-Effector after) (GHASP-WIDTH-FOR Object after) ) ) ) Figure 1: GRASP Action Operator There are four types of sensors currently modelled for the TeleRobot: position encoders for the arms, force sensors at the end-effecters, configuration encoders for the end-effecters which tell how wide the grippers are held, and vision system cameras. The table shown in Figure #2 associates with each sensor type the actual perception request which GRIPE grafts into plans. We assume that all the sensors except vision can be read passively at any time. The TeleRobot vision system uses CAD/CAM-type models and requires an expected position and orientation to effectively acquire objects. Sensor Perception Request -------------___------------------------------------- Ann-Kinesthetic (WHERE End-Effector when) Hand-Kinesthetic (CONFIGURATION End-Effector when) Force (FEEL End-Ef fector when) Vision (SEE Object Position when) Figure 2: Perception Requests for Sensors Note that there are two equivalence predictes, MODEL= and SENSE=. The MODEL= predicate appears in the preconditions and postconditions of action operators and represents a comparison between two assertions in the world model. All reasoning done during planning occurs within the world model. 
Assertions in the world model are identified by the suffix -MODELLED. The SENSE= predicate, appearing in expectations generated by GRIPE, represents comparisons between a perception and an assertion in the world model. These comparisons are the essence of verification. Perceptions are identified by the suffix -SENSED. (define-verification-operator ;Assertion to be verified. assertion ;Actions partially verified by this operator. actions ;Constraints on assertion. constraints ;The sensor to be used. sensor ;Perception request which can verify assertion. perception ;Preconditions for obtaining the perception. preconditions ;Sensor value which verifies assertion. expectation) Figure 3: Verification Operator Definition Each verification operator is relevant to a single assertion which may appear as a precondition or postcondition in several different actions. Verification operators are indexed under the actions which they help to verify. In our example, there are three steps involved in determining the relevance of the verification onerator shown in Figure #4 to preconditions of the GRASP action shown in Figure #l. First, the relevant verification operators for the GRASP action are retrieved. Next, an attempt is made to unify the precondition against the assertion pattern specified in each retrieved verification operator. Finally, any constraints specified in the verification operator are checked. These constraints constitute a weak context mechanism; in our example, the specified constraint distinguishes the use of position encoders to verify the location of an end-effector from the use of the vision system to verify the location of an external object. Once the relevant verification operator has been identified, a perception request and expectation for verifying that the precondition holds at execution time are generated from the appropriate fields of the verification operator. This information is then passed to the real-time execution monitor. Perception requests are themselves actions to acquire perceptions via various sensors. The use of sensors may also be subject to the establishment of preconditions, In our example, the simulated vision system can acquire an object only if there are unobstructed views from the cameras to the object. Currently, the other three sensors we simulate are passive and do not have preconditions on their use. In this case, GRIPE generates and submits subgoals generated by particular perception Planning: AUTOMATED REASONING / 83 (create-verification-operator : sensor VISION : actions (GRASP RELEASE) : assertion (MODEL= (POSITION-OF-MODELLED Object Moment) Position) :constraints ( (NOT (MEMQ Object ’ (Left-End-Effector Hight-End-Effector) ) )) : percept ion (SEE Object Position Moment) :preconditions ( (DNOBSTHUCTFD-PATH (POSITION-OF-MODELLED ‘Left-Camera Moment) Position Moment) (UNOBSTRUCTED-PATH (POSITION-OF-MODELLED ‘Right-Camera Moment) Position Moment) ) : expectation (SENSE= (POSITION-OF-SENSED Object Moment) Position) ) Figure 4: Example Verification Operator requests to the planner. The task of the planner is to further modify the plan, which now includes perception requests, so that preconditions on the use of sensors are properly established. This process details the extent of the interaction of monitoring and planning and suggests the issue of how closely the two processes should be interleaved, a problem which has not yet received much close attention. 3. 
3. AN EXAMPLE

The following example is drawn from a satellite repair scenario for the JPL Telerobot, described above. Part of the servicing sequence in the previous example involves grasping the handle of a hinged panel on the satellite. A segment of this plan is shown in Figure #5. When GRIPE processes this plan segment for execution monitoring, it inserts appropriate perception requests into the plan and generates expectations about nominal sensor values. The plans given to GRIPE have been hand-generated.

Perform the action (MOVE right-end-effector handle
                         (NEAR (POSITION-OF-MODELLED handle 2))
                         (POSITION-OF-MODELLED handle 3) 2 3).
Perform the action (GRASP right-end-effector handle 3 4).

Figure 5: Example plan input to GRIPE

Verify and do the action (MOVE ... 2 3) using the ARM-KINESTHETIC sensor:
  (WHERE right-end-effector 2)
  (SENSE= (POSITION-OF-SENSED right-end-effector 2)
          (NEAR (POSITION-OF-MODELLED handle 2)))
  (MOVE ... 2 3)
  (WHERE right-end-effector 3)
  (SENSE= (POSITION-OF-SENSED right-end-effector 3)
          (POSITION-OF-MODELLED handle 3))

Verify and do the action (GRASP ... 3 4) using the VISION sensor, the FORCE sensor, and the HAND-KINESTHETIC sensor:
  (SEE handle (POSITION-OF-MODELLED right-end-effector 3) 3)
  (SENSE= (POSITION-OF-SENSED handle 3)
          (POSITION-OF-MODELLED right-end-effector 3))
  (FEEL right-end-effector 3)
  (SENSE= (FORCE-OF-SENSED right-end-effector 3) 0)
  (CONFIGURATION right-end-effector 3)
  (SENSE= (WIDTH-OF-SENSED right-end-effector 3) open)
  (GRASP ... 3 4)
  (CONFIGURATION right-end-effector 4)
  (SENSE= (WIDTH-OF-SENSED right-end-effector 4) (GRASP-WIDTH-FOR handle 4))
  (FEEL right-end-effector 4)
  (SENSE= (FORCE-OF-SENSED right-end-effector 4) (COMPLIANT-GRASP-FORCE-FOR handle 4))

Figure 6: Example plan output from GRIPE

GRIPE's strategy for verifying the successful execution of these two actions is: Use the position encoders of the arm to verify that the end-effector is in the correct position before and after the MOVE-TO. Before the GRASP, use the vision system to verify that the handle is in the expected location, use the force sensor to verify that the end-effector is not holding anything, and use the configuration encoder of the end-effector to verify that it is open. After the GRASP, read the force sensor and configuration encoder of the end-effector and verify that the values on these sensors are appropriate for gripping the handle. The modified plan is shown in Figure #6.

4. ISSUES

A number of issues have been raised during our development of the GRIPE system, some of which were handled in the initial implementation by making certain assumptions. In this section we examine these issues in detail and propose some preliminary solutions.

4.1. PERCEPTION VERSUS INFERENCE

The essence of verification is gathering a perception which implies that an assertion holds. A motivating assumption of our work is that inferences which have a basis in perception are more reliable as verifications than inferences (such as the specification of postconditions in an action) which are not so based. Thus our basic strategy of verification is to substitute relevant perceptions for the assertions that appear in plans.

Verification operators embody essentially one-step inferences between perceptions and assertions. There is no reason why such inferences could not be more indirect. An example appears implicitly in the GRASP action in our example above. One of the preconditions for the GRASP action is that there must be no forces at the end-effector.
Implicit in this assertion is the inference that the gripper is not holding any object when the forces at the end-effector are zero. This reasoning can be made explicit by making the assertion that the gripper is empty appear as the precondition in the GRASP action. An additional inference rule relating no forces at the gripper to the gripper being empty allows the same perception request involving the force sensor to be generated. However, now it is possible to define other strategies (e.g. using the vision system) to verify this restatement of the precondition for the GRASP action. We intend to develop our verification knowledge base so that GRIPE can construct more complicated chains of inferences to determine how to verify the assertions appearing in plans. Under this extended verification knowledge base, there should often be several ways to verify a particular assertion. The considerations involved in choosing a verification strategy are discussed in the remainder of this section.

4.2. WHEN SHOULD THE EFFECTS OF PLAN ACTIONS BE MONITORED?

As others have pointed out [VanBaalen, 1984], [Gini et al., 1985], it is too expensive to check all the assertions in a plan. In many domains, it may be impossible. Sensors should be viewed as a resource of the agent which must be planned and scheduled just like other resources [Miller, 1985]. However, the process is aided by the observation that exhaustive monitoring may not be necessary and that selection criteria exist which can effectively limit the scope of monitoring. Some of these criteria are listed below. How they may best be combined in an assertion selection process is an open research topic.

- Uncertainty Criteria. Uncertainty in a number of forms may exist which requires that actions be closely monitored. This area has been the most extensively investigated [Brooks, 1982], [Donald, 1986], [Erdmann, 1985] and [Gini et al., 1985]. Uncertainty may exist in the world model which is used for planning. Uncertainty may exist about the effects of actions themselves; multiple outcomes may be possible. The effects of actions may be "fragile" and easily become undone (e.g., balancing operations). Actions may have known failure modes which should be explicitly checked. If the effects of actions have a duration, there may be uncertainty about their persistence.

- Dependency Criteria. There is a class of assertions which do not need to be verified at all. These are assertions which appear as postconditions of an action but are not required as preconditions of later actions, i.e., side effects. The assertions not on this critical path of explicit dependencies between actions in a plan can be ignored in the verification process. The dependency information in the plan can be used to prune out these irrelevant effects of actions.

- Importance Criteria. If we have an explicit representation of the dependencies among effects and actions in a plan, we can prioritize assertions for monitoring based on their criticality. The simplest metric is the number of subsequent actions which depend directly or indirectly on an assertion. More complicated metrics might take into account the importance of the dependent actions as well. The failure to achieve highly critical effects could have profound implications for subsequent error recovery.

- Recovery Ease Criteria. These criteria interact with the importance criteria.
If an effect may be trivially re-achieved after a failure, the effect of a failure to verify even highly critical assertions is somewhat mitigated. Consequently, the need to monitor the assertion closely is not so severe.

4.3. WHICH PERCEPTION(S) CAN BEST VERIFY AN ASSERTION?

The current set of verification operators for GRIPE provides only a single, context-independent perception request for verifying individual assertions. In previous sections, we discussed how a more extensive verification knowledge base could support reasoning about multiple ways to verify assertions. These options often will be necessary. For example, consider the difference between an arm movement which sets up a GRASP and a movement after a RELEASE. The location of the end-effector is critical to the success of a GRASP. In this case a battery of sensors such as position encoders, proximity sensors, force sensors, and vision might be indicated to verify that the end-effector is properly in place. On the other hand, a movement of an arm after a RELEASE may be performed relatively sloppily, particularly if this movement terminates a task sequence. A simple check on a position encoder (or even no check at all) may be sufficient.

4.4. HOW ACCURATELY SHOULD ASSERTIONS BE VERIFIED?

Using the same example, the latitude in the position of the end-effector for a GRASP is small; this position must be verified with a great deal of precision. On the other hand, the latitude in the position of the arm after movement away from a RELEASE is presumably quite large.

4.5. SHOULD AN ASSERTION BE VERIFIED INSTANTANEOUSLY OR CONTINUOUSLY?

In the current version of GRIPE, we assume that the successful execution of actions can be verified by instantaneously verifying the action's preconditions before its execution, and instantaneously verifying its postconditions after its execution. This approach proves inadequate for some actions. For example, consider a MOVE-OBJECT action which transports an object gripped by the end-effector of an arm by moving the arm. The force sensors in the end-effector should be checked continuously because the object might be dropped at any point along the trajectory. Instantaneous monitoring is also insufficient for those actions which involve looping, for example, when a running hose is being used to fill a bucket with water. In this case, monitoring must not only be continuous but conditionalize the performance of the filling action itself.

4.6. SHOULD ASSERTIONS WITH PERSISTENCE BE RE-VERIFIED?

GRIPE's strategy for verifying assertions which are established as postconditions in one action and required as preconditions in a later, non-consecutive action is to verify the assertion twice -- both at the time of its establishment and at the time of its use. Like the issue above, this issue concerns assertions across actions rather than assertions during actions. An error interpretation and recovery system should know as soon as possible if a condition which is needed later during the execution of a plan becomes unsatisfied. For example, suppose a part is to be heated and used in a delayed, subsequent action. If there is uncertainty about how quickly the part will cool (i.e., the duration or persistence of the "heated" assertion), then the temperature of the part should be frequently checked to verify it stays within the desired parameters. If it cools too quickly, additional heat may need to be applied before the subsequent action can be executed.
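Before turning to the verification language, the selection criteria of Section 4.2 can be made concrete with a small sketch. The following Python fragment is our own illustration (not GRIPE code): it prunes side effects using the dependency criterion and ranks the remaining assertions by the simplest importance metric, the number of later actions that need them.

# Illustrative sketch (ours) of the dependency and importance criteria:
# postconditions that no later action needs are side effects and are skipped;
# the rest are ranked by their number of direct dependents.  A fuller selector
# would also count indirect dependents and top-level goals as consumers.

def rank_assertions_for_monitoring(plan):
    scores = {}
    for i, act in enumerate(plan):
        for assertion in act["post"]:
            dependents = sum(assertion in later["pre"] for later in plan[i + 1:])
            if dependents == 0:
                continue          # side effect: pruned from the verification process
            scores[(act["name"], assertion)] = dependents
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    plan = [
        {"name": "MOVE",  "pre": [],                      "post": ["at(gripper, handle)"]},
        {"name": "GRASP", "pre": ["at(gripper, handle)"], "post": ["holding(handle)"]},
        {"name": "PULL",  "pre": ["holding(handle)"],     "post": ["open(panel)"]},
    ]
    for (action, assertion), n in rank_assertions_for_monitoring(plan):
        print(f"{assertion} (established by {action}): {n} dependent action(s)")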
5. THE VERIFICATION LANGUAGE

The verification operators described earlier capture a restricted style of verification. In this section, we develop an extended language for verification which makes explicit a set of issues relevant to determining how to verify assertions in plans (the language is not yet implemented in GRIPE). The need to perform both instantaneous and continuous monitoring functions suggests two fundamental types of perception requests, called Brief and Prolonged. Any particular perception request is exclusively one of these types. Brief-type perception requests handle instantaneous monitoring tasks which involve simple pattern matches against sensor or data-base values. Prolonged-type perception requests handle continuous or repetitive monitoring tasks which may involve extended modification of the plan. However, both types of perception requests may require preconditions to the use of sensors to be established by the planner. In addition, the planner ensures that any sensor resources specified are appropriate and available at the desired time. If sensor resources are not explicitly specified, the planner must choose appropriately from those available. The current GRIPE implementation does not yet interact with a planner and therefore its sensor resource management is not this facile.

Since monitoring an assertion may itself involve planning and the generation of additional plan actions, the process may recursively involve monitoring of the plan generated to achieve the original monitoring request. In the current GRIPE implementation, we allow a maximum recursive depth of two. However, to be satisfactory, we need first to relax the depth restriction and second to use heuristics to constrain the recursion depth. The second requires a priori assumptions about the success of some plan operations.

<Perception-Request>    == <Brief-Type> | <Prolonged-Type>
<Brief-Type>            == IF <Quick-Condition> THEN <Action>
<Quick-Condition>       == data-base-query | NOT data-base-query |
                           <Cond-Operator> <Sensor> <Value-Spec>
<Cond-Operator>         == IN-RANGE | <Relational-Op>
<Relational-Op>         == < | = | > | <= | >=
<Sensor>                == any available and appropriate sensor
<Value-Spec>            == [<integer> ... <integer>] | <integer>
<Prolonged-Type>        == CHECK <assertion> <Time-Spec> <Stopping-Cond>
<Time-Spec>             == <Time-Relationship> | <Time-Designation>
<Time-Relationship>     == <Timing> <Action-Spec>
<Timing>                == BEFORE | AFTER | DURING
<Action-Spec>           == an instance of a plan-action-node
<Time-Designation>      == FOR <Time-Designation-Spec> WITH FREQUENCY <integer>
<Time-Designation-Spec> == TIME <Relational-Op> <integer> | <integer> NUMBER-OF-TIMES |
                           NEXT <integer> ACTION-NODES
<Stopping-Cond>         == STOP MONITORING <Brief-Type>

Figure 7: Verification Planning Language

Figure #7 gives a grammar for a Verification Planning Language which addresses these considerations. Requests of the syntax defined in Figure #7 are generated by an expectation generator module such as GRIPE and recursively input to the planner. Eventually, this iteration flattens Prolonged-type perception requests. The final executable perception request in the plan is always of the Brief-type. For example, if a Prolonged-type perception request stated that an assertion should be monitored 5 times then the final plan would state IF predicate THEN action 5 times. Miller [Miller, 1985] has discussed similar ideas.
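The flattening step can be illustrated with a short sketch. The class names Brief and Prolonged below mirror the two request types of Figure 7, but the Python rendering, the field names, and the example assertion are our own assumptions, not a defined part of the language.

# Hypothetical sketch: a Prolonged request that asks for an assertion to be
# monitored N times is expanded into N Brief requests in the final plan.
from dataclasses import dataclass

@dataclass
class Brief:
    condition: str      # a sensor comparison or data-base query
    action: str         # what to do if the condition holds

@dataclass
class Prolonged:
    assertion: str      # assertion to CHECK
    action: str
    times: int          # the "<integer> NUMBER-OF-TIMES" form of the grammar
    frequency: int      # WITH FREQUENCY <integer>

def flatten(request):
    """Reduce a perception request to the Brief requests placed in the plan."""
    if isinstance(request, Brief):
        return [request]
    return [Brief(condition=request.assertion, action=request.action)
            for _ in range(request.times)]

if __name__ == "__main__":
    prolonged = Prolonged(assertion="(SENSE= (FORCE-OF-SENSED right-end-effector t) 0)",
                          action="(SIGNAL possible-drop)", times=5, frequency=1)
    for brief in flatten(prolonged):
        print("IF", brief.condition, "THEN", brief.action)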
6. DETERMINING CONTEXT AND THE INTENTS OF ACTIONS

Our overall approach to verifying the execution of plans is driven by the following observation: The appropriate means of verifying the execution of an action is constrained by the intent of the action. In general, the intent of an action may vary according to context. Our results so far are restricted by the assumption that actions have a single intention. In this section, we describe our approaches to determining the intent of actions from context. They are similar to those described in [Gini et al., 1985].

One approach is top-down and assumes the existence of a hierarchical planner, as in [Sacerdoti, 1974]. Recall the example concerning two movements of an arm, one to set up a GRASP operation, and one after a RELEASE operation. The movement in the context of the GRASP operation needs to be verified quite accurately; the movement in the context of the RELEASE operation requires only cursory verification. For example, the expanded movement operator before the GRASP might be a MOVE-GUARDED which indicates the need for careful verification; the expanded movement operator after the RELEASE might be a MOVE-FREE which requires less exacting verification. Even when actions are not distinguished during expansion, the context provided by higher-level actions of which they are a part may be sufficient to distinguish them for the purpose of verification. In our example, the GRASP might have been expanded from a GET-OBJECT task while the RELEASE might have been expanded from a LEAVE-OBJECT task. The knowledge that the MOVE within a GET-OBJECT task is critical while the MOVE within a LEAVE-OBJECT task is not can be placed in the verification knowledge base.

The context of an action also may be determinable through a more local, bottom-up strategy. In this same example, the two MOVE actions at the lower level might be distinguished by noting that one occurs before a GRASP and the other occurs after a RELEASE. These contexts then can be used in the same way to retrieve appropriate verification strategies from the verification knowledge base.

7. INTERFACING WITH GEOMETRIC AND PHYSICAL LEVEL REASONING SYSTEMS

GRIPE reasons at what is commonly referred to as task level. We envision GRIPE and the other knowledge-based modules of our proposed PEER system interfacing with systems that can reason directly about the geometry and physics of task situations. Examples of such systems are described in [Erdmann, 1985] and [Donald, 1986]. Erdmann has refined a method for computing the accuracy required in the execution of motions to guarantee that constraints propagated backward from goals are satisfiable. His approach could be incorporated into our system to generate expectations for verifying the execution of motions (only). An expectation would be a volume; a perception which indicates that a motion has reached any point within the volume would verify the successful execution of the motion. Donald has developed a complementary technique for planning motions in the presence of uncertainties in the world model (as opposed to uncertainties in the execution of motions). He also proposes a theoretical framework for constructing strategies for detecting and recovering from errors in the execution of motion planning tasks.

8. CONCLUSIONS

The problem addressed in this paper is that of verifying the execution of plans. We have implemented a system which analyzes a plan and generates
appropriate perception requests and expectations about those perceptions which, when confirmed, imply successful execution of the actions in the plan. Typically, not all the assertions in a plan can or should be verified. We have proposed a number of heuristic criteria which are relevant to the selection of assertions for verification. In general, verification strategies must be context-dependent; this need can be supported by an ability to analyze plans at multiple levels of abstraction. Finally, we are developing a language for verification which makes explicit the relevant considerations for determining verification strategies: the appropriate perceptions, the degree of accuracy needed, discrete vs. continuous verification, and the need for re-verification.

9. ACKNOWLEDGEMENTS

The work described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We would like to thank Leonard Friedman of the USC Information Sciences Institute for instigating the PEER project at JPL and stimulating our research. Rajkumar Doshi would like to thank his advisor, Professor Maria Gini, University of Minnesota, for providing inspiration and ideas.

REFERENCES

[1] Atkinson D., James M., Porta H., Doyle R. Autonomous Task Level Control of a Robot. In Proceedings of Robotics and Expert Systems, 2nd Workshop. Instrument Society of America, June, 1986.
[2] Brooks, Rodney. Symbolic Error Analysis and Robot Planning. A.I. Memo 685, Massachusetts Institute of Technology, September, 1982.
[3] Chien, R.T., Weismann, S. Planning and Execution in Incompletely Specified Environments. In International Joint Conference on Artificial Intelligence, 1975.
[4] Donald, Bruce. Robot Motion Planning with Uncertainty in the Geometric Models of the Robot and Environment: A Formal Framework for Error Detection and Recovery. In IEEE International Conference on Robotics & Automation, San Francisco, CA, 1986.
[5] Erdmann, Michael. Using Backprojections for Fine Motion Planning with Uncertainty. In IEEE International Conference on Robotics & Automation, St. Louis, MO, 1985.
[6] Fikes, R.E., Nilsson, N.J. STRIPS: A new approach to the application of Theorem Proving to Problem Solving. Artificial Intelligence Journal 2(3-4), 1971.
[7] Fikes, R.E., Hart, P.E., Nilsson, N.J. New Directions in Robot Problem Solving. Machine Intelligence 7, 1972.
[8] Fox, Mark S., Smith, Stephen. The Role of Intelligent Reactive Processing in Production Management. In CAM-I, 13th Annual Meeting & Technical Conference, November, 1984.
[9] Friedman, Leonard. Diagnosis Combining Empirical and Design Knowledge. Technical Report JPGD-1328, Jet Propulsion Laboratory, December, 1983.
[10] Gini, Maria, Doshi, Rajkumar S., Garber, Sharon, Gluch, Marc, Smith, Richard, Zualkernain, Imran. Symbolic Reasoning as a basis for Automatic Error Recovery in Robots. Technical Report 85-24, University of Minnesota, July 1985.
[11] James, Mark. The Blackboard Message System. Technical Memorandum, Jet Propulsion Laboratory, 1985. Write to author, stating reason.
[12] Miller, David P. Planning by Search through Simulations. PhD thesis, Yale University, October, 1985.
[13] Munson, John. Robot Planning, Execution and Monitoring in an Uncertain Environment. In International Joint Conference on Artificial Intelligence, 1972.
[14] Porta, Harry. Dynamic Replanning. In Proceedings of Robotics and Expert Systems, 2nd Workshop. Instrument Society of America, June, 1986.
[15] Sacerdoti, Earl. Planning in a Hierarchy of Abstraction Spaces. Artificial Intelligence Journal 5(2), 1974.
[16] Sacerdoti, Earl. A Structure for Plans and Behaviour. Elsevier North-Holland Inc., 1977.
[17] Tate, Austin. Planning and Condition Monitoring in a FMS. University of Edinburgh, Artificial Intelligence Applications Institute, AIAI TR #2, July 1984.
[18] Van Baalen, Jeffrey. Exception Handling in a Robot Planning System. IEEE Workshop on Principles of Knowledge-Based Systems, Denver, CO, December, 1984. Not published due to late submission.
[19] Wilkins, David. Domain Independent Planning: Representation & Plan Generation. SRI, Technical Note #266, August 1982.
[20] Wilkins, David. Recovering From Execution Errors in SIPE. SRI Technical Note #346, January 1985.
1986
106
369
A Representation of AC tion Structures Erik Sandewall and Ralph Rgnnquist Department of Computer and Information Science Linkiiping University Linkiiping, Sweden Abstract: We consider structures of actions which are partially ordered for time, which may occur in parallel, and which have lasting effects on the state of the world. Such action structures are of interest for problem-solving with multiple actors, and for understanding narrative texts where several things are going on at the same time, They are also of interest for other branches of computer science besides AI. Actions in the action structure are characterized in terms of preconditions, postconditions, and prevail conditions, where the prevail condition is a requirement on what must hold for the duration of the action. All three conditions are partial states of the world, and therefore elements of a lattice. We develop the formalism, give an example, and specify formally the criterion for admissible action’ structures, where postconditions of earlier actions serve as prevail- or preconditions of later actions in a coherent way, and there are no conflicting attempts to change (“update”) a feature in the world. 1. Introduction. Our topic is the formal analysis of actions, i.e. things that happen in the world. The phenomenon described by the phrase ‘John gives the ball to Mary’, and the operation where an industrial robot moves a workpiece from a conveyor to an NC-machine, are examples of actions. Actions have a duration in time, and several actions may occur in parallel; these are essential properties of actions. It is not essential that there should be an identifiable actor who ‘does’ the action, so ‘thunder’ or ‘a thunderstorm’ could also qualify as an action. We shall use the term ‘action structure’ for a set of actions together with information about them, in particular, information about their relative order in time. A formal analysis of action structures must be very much concerned with their effects: will (or may) a given action structure have a certain effect on the world; are two given action structures equivalent with respect to their effects on the world, etc. Action structures have been studied in several branches of computer science (and also of course in several other disciplines), but with different and sometimes complementary assumptions or constraints. Usually action structures have been seen as ‘plans’ or ‘programs’, i.e. pre-scriptions for intended behavior in a machine. It is however also possible to see an action structure as a tie-scription of what has happened, an account of history. In A.I., research on ‘planning and problem-solving’ has traditionally focused on the analysis of effects of sequences of actions, and has only recently begun to address the complicating issue of parallel1 actions. (A discussion of related work in this field is in section 11 of this paper). This research was supported by the Swedish Board of Technical Development. On the other hand, the Operating systems and Data base fields have for a long time included work on concurrent programming, where the main issue is “how two or more sequential programs may be executed concurrently as parallel processes” [Andr83]. In principle there should not be any particular difference between structures of actions in the real world, and actions inside computers. 
In practice, however, there is a difference which is also indicated by the very term “concurrent program”: one deals with a number of programs, one for each processor, and uses special constructs for synchronization. This works well if each of the programs is relatively complex, and synchronization can be perceived as a relatively marginal annotation. For action structures in the real world, it is less natural to use the “concurrent program” viewpoint. One would prefer to make statements about what actions happen, and in which order they happen. That is therefore the approach that is taken in the present paper. - Section 11 also contains a more extensive discussion of how results from concurrent programming research relates to our topic. 2. Key ideas and results. The key ideas in this paper are: - An action structure is viewed as a set of actions, each of which has a start-point and an end-point. The set of such points is partially ordered for time. - Partial state descriptions are used, where each ‘feature’ or proposition in a partial state description can have a definite value (e.g. a truth-value), or be undefined. The relation of ‘containing more information than’ defines a lattice over the space of partial state descriptions. - Each action is associated with a prc-condition, a post-condition. and a prevail-condition. All three conditions are partial states. The precondition and postcondition characterize what must hold at the beginning and the end of the action, respectively. The effect of the action on the world is therefore characterized (explicitly or implicitly ) by the pre- and postconditions. The prevail-condition characterizes what must hold for the whole duration of the action. For a concrete example, consider a small car rental company with only one outlet. The action of ‘customer renting a car’ has as its precondition and as its post-condition that the car is in the company premises, but it does not have to be in the premises during the rental period. On the other hand, the action of ‘mechanic on duty fming the car’ requires the car to be on the premises for the duration of an action, which we would express as a prevail-condition. Planning: AUTOMATED REASONING / 89 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Thus in the case of pre-/ postcondition, it is possible but not necessary for the postcondition to be different from the precondition, in which case the action has an effect on those aspects or features of the world that are recorded in the conditions. In the case of prevail- condition, the condition must necessarily be equal at the beginning and the end of the action, since it should hold and be constant for the duration of the action. In a sense that will become clear below, the pre- and postconditions correspond to the concept of ‘non-shared resource’ or ‘read and write access’ in operating systems terms. Prevail-conditions correspond to the concept of ‘shared resource’ or ‘read only access’. Also, the pre- and post-conditions correspond to the ‘delete’ and ‘add’ clauses in Strips-like problem solving systems. The prevail-conditions do not have a direct counterpart in systems like STRIPS, since the problem of whether another, parallel action can violate a prevail-condition of an action, does not arise when all actions are assumed to happen in sequence. In programming languages such as Occam [May83], parallel programs are constructed using the seq, par, and Kleene star (repetition) operators. 
Those operators are however not sufficient for constructing all possible (and useful) action structures; Figure 1 shows an example of an action structure that can not be constructed from them, Our approach does not have those restrictions, and is in that sense similar to the Petri net approach [Pete82]. In this paper, we formulate the model and discuss its motivation. We also propose how the admissibility criterion for action structures can be expressed formally, and work out an example. 3. Partial models We described above how, in our approach, the set of start-points and end-points of actions is viewed as a partially time-ordered set. But in every actual train of occuring actions, the set of time-points is of course totally ordered (as long as we can assume common-sense, Newtonian time). The action structure is therefore a way of characterizing a set of similar trains of actions. As such, it is an alternative to other ways of characterizing a set of ‘admissible worlds’. One other, well-known method would be a logical system, where formulas and their truth-values are defined, and the admissible worlds are characterized using formulas that have the value true exactly in the admissible worlds [Krip63, Resch71]. When we characterize the momentary state of the world where actions take place, we can similarly choose to use pavtial states. In a very simple example’ we might characterize the world only in terms of the position of a number of electrical switches, which can be in position ‘1’ or ‘0’. A partial state in that world would be a function which assigns to each switch, either of the values ‘l’, ‘0’) or ‘undefined’. Partial states go well together with partially time-ordered action structures, for the following reason: if the ‘state’ (i.e. those aspects of the world that we consider in the formal characterization) consists of a number of components, and two actions happen in parallel or ‘at the same time’, but they affect distinct and unrelated aspects of the world, then we can analyze their effects without needing to know which of them actually occurred first, and without needing to understand their interactions if they actually occurred at the same time. In such a case, it is reasonable to assign a partial state to the start-point and end-point of each of those actions. After this introduction, the presentation. we can now proceed to the formal part of 4. The lattice of partial states We assume that we have a domain S of partial states of the world, and a partial order E on S such that <S, &> is a lattice. The lattice operations are written u, n, c and the top and bottom elements are written T, I as usual. We pay particular attention to domains which are constructed M the Cartesian product of a finite number of feature domains. For example, the world which is characterized by the position of four different switches would be seen as Fl x F2 x F3 x F4 where each of the Fi is the feature domain consisting of the four elements u, 1, 0, and k, with the following order: uclck uEOSk as shown also in the Hasse diagram in figure 2. One world-state vector is defined to be E another world-state iff corresponding elements are C, as usual. We shall generally use u (‘undefined’) and k (‘contradiction’) for the bottom and top element in feature domains. Let s be an element in a domain Fl x F2 x . . . x Fn. The element of Fi which is used for forming a, will be called the projection of I) into the dimension i, and will be written s[i]. 
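For concreteness, the feature domain just described and the componentwise operations on world-state vectors can be sketched as follows. This is a Python illustration of our own, not part of the paper's formalism; the strings "u", "0", "1" and "k" stand for the four lattice elements, and vectors are written as strings such as "10uu".

# Minimal sketch, assuming the four-element feature domain with u below 0 and 1,
# and k (contradiction) above them.

LEQ = {("u", "u"), ("u", "0"), ("u", "1"), ("u", "k"),
       ("0", "0"), ("0", "k"), ("1", "1"), ("1", "k"), ("k", "k")}

def leq(a, b):
    """a is below-or-equal b in the feature lattice."""
    return (a, b) in LEQ

def join(a, b):
    """Least upper bound of two feature values."""
    if leq(a, b):
        return b
    if leq(b, a):
        return a
    return "k"          # 0 and 1 are incomparable; their join is the contradiction k

def vec_leq(s, t):
    """World-state s is below-or-equal world-state t componentwise."""
    return all(leq(a, b) for a, b in zip(s, t))

def vec_join(s, t):
    """Componentwise join of two world-state vectors."""
    return "".join(join(a, b) for a, b in zip(s, t))

if __name__ == "__main__":
    print(vec_join("u0uu", "1uuu"))    # -> "10uu"
    print(vec_leq("u0uu", "10uu"))     # -> True
    print(vec_join("0uuu", "1uuu"))    # -> "kuuu": inconsistent in the first feature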
The element s will be said to haue the i:th feature iff s(i] is different from u. We write dim(s) for the set of all i such that s has the i:th feature. 90 i SCIENCE Two elements s and s’ are said to be co-dimcnsion.aZ iff dim(s) = dim(s’) and they are said to be anti-dimensionaL iff dim(s) and dim(s’) are disjoint sets. The element equals k. s is said to be consistent iff none of its projections It will be desirable to generalize these concepts to be used also for If, b, v, el those domains that are not formed as Cartesian products, and in particular, for domains that are formed by constraining a Cartesian product using propositions expressed in logic. 6. The domains of operations and actions. We now introduce a domain V of operations. For example, “to turn on switch number 3” might be one operation. In many cases it will be natural to form operations using a “verb” in some sense, combined with a number of “case slots”. In the present treatment we however make no assumptions about the structure of operations. The domain of actions is next defined as: an action is a fourtuple triple An action structure over a set A of valid actions is now a 5, p], where p (for plan) is a set of triples [h a, t’] [T where again t and t’ are time-points in T, a is in A, and every triple in the set. Each member triple It, a, t’l or in expanded form, it, If, b, v, 4, t’l will be called an action occuwencc. t I t’ for obvious constraints are: In order to draw a given action structure graphically, we should in principle make one dot for each time-point; represent the temporal order on time-points using dotted arrows, and then indicate the action occurrences using solid arrows, with the understanding that a solid arrow may be drawn on top of the dotted arrow since the action anyway implies that its beginning- point precedes its end-point. 6. Coherent action structures. The preconditions, postconditions, and prevail-conditions impose a number of constraints on an action structure. Some of the relatively where f, b, and e are states, and v is an operation. In the sense that was described in section 2 above, f,b, and e are the prevail condition, the pre-condition, and the post-condition, respectively. From the domain of actions, we distinguish a subset A of valid actions. Intuitively, valid actions are those fourtuples [f,b,v,e] where f characterizes the state of the world for the duration of the action, b characterizes the state of the world immediately before the operation v takes place, and e characterizes the state of the world at its conclusion. - at the beginning of an action, all its preconditions must be present, either because they were present from the beginning of the action structure, or because they were the result of previous action(s) - several actions which affect the same ‘feature’ of a state but in different ways, must not be allowed to occur in parallell. In other words, the temporal order must guarantee that one of them comes before the other For example, suppose the states of the world are fourtuples which indicate the position of each of four switches, such as 11, 0, u, 11 If TurnOn is the operation of turning on switch 2, in a state where it is off, then the following action should be in the set A of valid actions: [I, [u,O,u,u], TurnOn% [u,l,u,ul1 where of course I = [u,u,u,u]. In this example the prevail- condition is the bottom element of the partial state lattice because there are no constraints in the prevail-condition. 
We always require from valid actions [f,b,v,e] that b and e must be co-dimensional with each other, and anti-dimensional with f. We introduce an identity operation Noop which leaves every state - - several actions which require the same prevail-condition may occur in parallel. However, there must not be other, also parallel actions that have a prevail-condition feature in their pre- or postconditions. Th e purpose of th e present section is to capture these intuitions through a formal definition, which we call for the action structure to be coherent. unchanged, i.e. Is, 1, NOOP, J-I These intuitions actually represent a simplification relative to the real world. Consider for example the scenario of parking a car parallel to the curb, between two other cars, starting from the point where ‘our’ car is positioned to the left of the car in front of the parking slot ( in right-hand traffic) (figure 4). We consider three actions: is a member of A for every s in S. We are now ready to introduce the action structures themselves, first intuitively/graphically and then formally. We will draw an action structure as in figure 3, where full arrows represent actions. If two actions begin at the same time, they start in the same point; if one immediately succeeds another then the endpoint of one arrow is the beginning-point of the next arrow. If a delay is allowed between one action and the next, then a dotted arrow is drawn from the end-point of one to the beginning- point of the next. In this way, we can also express e.g. that two actions (must) begin at the same time, or that the termination of one (must) preceed the termination of another. This very natural structure is formally expressed as follows. We use a set T of tire-points, corresponding to the beginning-points and end-points of the arrows in the figure. A partial order s is defined on T, representing the order of temporal precedence. al: keep the car moving in the reverse direction, at suitable speed a2: keep the car’s front wheels at an angle pointing right a3: keep the car’s front wheels at an angle pointing left. The action plan of course is to do a2 and a3 in sequence, and al in parallel to both of them. These actions affect the same ‘resource’ or ‘feature’ of the state, name?y the position of the car. Still, it is admissible and in fact necessary to perform them in parallel - fist turning the wheels right and left, and only then moving backwards, would not have the intended effect. In the present paper we do not account for such coordinated actions. Here we only wish to capture the intuition of actions which can occur in parallel because they do not interfere with each other. In a wider perspective and in future work, it will however be necessary to deal with the case of coordinated actions. Planning: AUTOMATED REASONING / 0 1 El 92 / SCIENCE Let [T, I, P] b e an action structure. For each member t of T, we define the incoming action occurrences in p to be those of the form It’, a, tl i.e. having the given t as their last element. Graphically, if each action occurrence is represented as an arrow, the incoming action occurrences are those whose arrows end in the given time-point. the partial state where the stove is hot, there is no batter, and otherwise we do not know. 
The actions whose operations occur in figure 5 can now be defined as follows: [I, UOUU, MakeBatter, uluu] [I, Ouuu, HeatStove, luuu] The incoming states for a time-point t are defined as follows: [ luuu, uuu0, MakeCoffee, uuul] - the post-condition of each incoming action occurrence, is an [luuu, ulOu, FryPancakes, uOlu] incoming state; [I, uulu, EatPancakes, uuOu] - the join of the prevail-conditions of all the incoming action (L, uuul, HaveCoffee, uuuO] occurrences, is also an incoming state. [I, luuu, CoolStove, Ouuu] Similarly, the outgoing action occurrences are those of the form It, a, t’l having the given t as their fast element, and the outgoing states are the pre-conditions of the outgoing action occurrences, plus the join of the prevail-conditions of all the outgoing action occurrences. We are now ready to formulate the coherence criterion. If we use these actions in the structure of figure 5, and check out the coherence criterion above, we obtain a violation. The key problem is that the result of making the coffee, i.e. the fact that coffee exists, must be ‘made known’ to the action of having coffee, which of course has coffee existence as a precondition. This is accomplished by adding an action of the form [uuul, I, Noop, I] from node t4 to node t7 in the figure. An action structure [T, 5, p] is defined to be coherent if, for every time-point t in T, 1. the incoming states are consistent and anti-dimensional, 2. the outgoing states are consistent and anti-dimensional, 3. the join of the incoming states equals the join of the outgoing states, if the time-point has both incoming and outgoing states. One may ask why we do not instead augment the existing arcs from t4 to t5 and from t5 to t7 so as to contain also the information that coffee exists. The reason is that in a more general case, there could have been two (or more) parallel1 paths from t4 to t7, and then there would be no reason why one or the other should ‘carry’ the coffee existence information. The join of states mentioned state of the time-point t. in point 3 will be called the cuwent Furthermore, an action structure [T, 5, p] is also coherent if one can add to p some number of action occurrences of the form It, Is, 1, Noop, 11, t’] and the resulting action structure is coherent. The way an incoming state cold be inconsistent is if incoming action occurrences have incompatible prevail conditions, and similarly for outgoing states. There is a similar problem concerning those nodes which are the first ones to have a feature (i.e. no earlier time-point has a current state that haa the feature). We shall call such nodes the first we node(s) for the feature. In order to satisfy the coherence criterion, we have to add Noop actions from the initial time-point t0 to the first use nodes for each feature (at least if the first use node has some predecessor at all). The last nodes have to be similarly connected to the final time- point t9. (This is somewhat inelegant, but we outline below how one can avoid the need to formally introduce those Noop actions). The resulting action structure is shown in figure 6. For simplicity, a Noop action such as [uuul, I, Noop, I] is written just as [uuul] in the figure, and is drawn as a -a-s-*-9 arrow. This definition captures most of the intuitions, but it leaves out some constraints. We shall first motivate this definition with a concrete example, and then proceed to the additional requirements and the formally derived properties of the concept. 
It is now trivial to check off that the action structure in figure 6 is coherent. The current state of the respective time- points, i.e. the join of their incoming or outgoing states, is as follows: to 0000 7. An example. t1 uouu t2 ouuu Suppose we are to prepare and consume a meal consisting of pancakes followed by a cup of coffee. The coffee is to be cooked on the stove, and since there is only room for one pot at a time on the stove, and we do not want to interrupt the eating in order to cook, we decide to make the coffee before the pancakes. (Thus hot pancakes have higher priority than fresh cooked coffee). Figure 5 shows the action structure, including the operations of making the batter, heating the stove, and allowing the stove to cool. t3 luu0 t4 1101 t5 1Olu t6 Ouuu t7 UUOl to8 uuuo t9 0000 When action structures are repeated cyclically (for example, in robotics applications, for the programs of manufacturing cells), it is often undesirable to have a single startpoint and endpoint for the cycle. We would like a cycle to have several, parallel first actions, each of which can start as soon as all its prerequisites have been made available. Our model can easily be adapted for that purpose: instead of having the extra Noop actions that go to the first use node and from the last use node for each state feature, we would form a vector of first use nodes and another vector of last use nodes, across the feature space. The definitions of incoming and outgoing states in action structures must of course be modified In order to analyze the action structure, we use partial states with four truth-value components, namely the answers to the following\ questions: is the stove hot? is there batter? is there pancakes? is there coffee? As before, each component of the partial state is either of u, 0, 1, or k. We write the states without punctuation, so 1Ouu is for example Planning: AUTOMATED REASONING / 93 i ? . I i . I i i ’ .’ 7) i/ =i 9 i i i rol i z! i 3 94 / SCIENCE accordingly. The operation of combining two successive cycles is then be performed by introducing an appropriate Noop action from the last use node of each feature in one cycle, to the first use node of the same feature in the next cycle. 8. Additional requirements. Consider the action structure described in figure 7. It is a prevail-condition of operations vl and v2 that the first dimension feature shall be 1. The operation v3 changes that value from 1 to 0, and v4 changes it back to 1. The action structure in the figure is coherent, according to the definition in section 6. Yet we see that the action structure may possibly not be correctly executable, namely if operation v3 takes effect before vl has concluded. The example illustrates a side-effect problem: the problem which arises if another action, maybe in a remote part of the action structure, locally violates a condition of an action, or at least (with unfortunate timing) threatens to violate it. The following is a possible way of characterizing that constraint formally: Let [T,l,p] be an action structure. A sequence of action occurrences in p is called a chain iff it has the form [tO,al,tl], [tl,a2,t2],(t2,a3,t3] ,... 
An action occurrence [t, lf,b,v,el, ~1 is said to subsume another action occurrence [t’, [f’,b’,v’,e’], u’] in the i:th feature, iff t g t’ c u’ c u and f’[i] E f[i] An action structure [T,<,p] is now said to be aligned for the i:th feature iff there is some subset p’ of p which is a chain, and where every action occurrence whose f,b, or e has the i:th feature, is either a member of p’ or is subsumed by some member of p’. It is easily seen that in an action structure that is aligned for the i:th feature, those actions whose b and e have the i:th feature (active actions, drawn a-/) and those whose f have the i:th feature (passive actions, drawn ----->) together form a structure of the type shown in figure 8. Substructures of passive, possibly parallel1 actions with a single start-point and end-point, are sequentially combined with active actions. Our intuitions for admissible action structures can now be formulated as follows: an action structure [T,l,p] is admissible iff there exists some p’ which is a superset of p, where all the action occurrences i p’-p are formed using the operation Noop, and where [T,<,p’] is coherent, and aligned for all features. The following ‘model existence’ property is stated here without proof: If [T,<,p] is an admissible action structure, and 3 is a total order over T such that t 5 t’ -> t i: t’ then one can assign a consistent state s(t) to each time-point t in T, in such a way that the following holds for every action occurrence [t’, [f,b,v,e], t”] in p: b 5 s(t’) e c s(t”) and for every t such that t’ < t < t”, s(t)[i] = u for each i in dim(b), and for every t such that t’ < t 5 t”, f c s(t) and finally (“frame property”), if u > t is the immediate successor of t, s(u)[i] = s(t)[i] unless the b or e condition of an action forces them to be different according to the above. 9. Verbs or conditions. verb phrases that express post- and prevail- We have not said anything about the intended structure of operations. From a software engineering background, it may be natural to view operations as essentially procedure calls, i.e. names of procedures with their proper parameters. Pre- conditions, post-conditions, and prevail-conditions are then a part of the specification and/or the description of the procedures, but one would not expect to derive those conditions from the name or the definition (the ‘body’) of the operation. If the operations are instead thought of as verb phrases in natural language, this picture changes somewhat. A verb phrase like ‘(to) open the door’ directly suggests what is the postcondition, and also (taking for granted that one can not open a door that is already open) the corresponding precondition. In common sense reasoning, we also have access to a reportoire of ‘methods’ for how to achieve a goal. The method for achieving a goal is often used in place of the mere attainment of the goal. For example, if we say: ‘as John was driving home that afternoon, he was hit by the lightning and died’, we refer to the action which, if properly completed, would have the postcondition ‘John is at home’, but which in this particular occurrence was tragically interrupted. Similarly, if we watch a movie where the hero is asked to ‘please leave the room’, and he does so by crashing through the window, the possible entertainment effect is derived from the non-standard way in which the hero achieved the requested result. 
These examples will suffice here as indications of how we would like to analyze some natural-language verb phrases in terms of intended world-states, and standard methods for achieving them. But there are also plenty of examples of how verb phrases refer to prevail conditions, namely phrases of the form ‘keep’ + condition For example, ‘keep the car on the road’, ‘keep the car at the regulated speed’, ‘keep the pot slowly boiling’, ‘keep the audience interested’, ‘keep all the rooms clean’, all show how a lot of common sense phenomena may be understood in terms of qualitative regulators or feed back loops. The formal characterization of such actions would of course refer to the state that is intended to be kept, as a prevail-condition. In such cases the prevail-condition is not merely a prerequisite for doing the essential action, but it defines the essential action. It is interesting to notice that this could be an entry point to “naive control theory”, which would seem to have a potential for being of high industrial relevance. Also, we should maybe now return to von Neumann’s early insight that feedback systems are of outmost importance for intelligent behavior, and blow new life into the term ‘cybernetics’ that he coined. 10. Non-flat feature domains. Above we have introduced feature-values as domains, but all examples have been chosen from the trivial case of flat, finite domains. It is easy to see how the more general case can be useful especially for prevail conditions. For example, suppose we have actions for painting a wall with color X, for different specific X, and we also have an action of photographing a white statue with that wall as background, which (in the prevail condition of the action) requires the wall to be non-white. If now the feature domain is Planning: AUTOMATED REASONING / 95 organized so that u E non-white g red E k branch is allowed to violate the prevail-condition, one can not characterize what is a parallell branch. basically because then we can organize our action structure so that the paint-red action is succeeded by the photographing action. At the time-point between those two actions, the incoming post-condition (red) is matched against an outgoing prevail condition (non-white). In order to satisfy the last requirement on coherent action structures, we must add an outgoing Noop action whose prevail condition says that the wall is red. The two outgoing prevail conditions ‘red’ and ‘non-white’, are co-dimensional but not equal, but that does not matter - the important thing is that their join (‘red’) is consistent. ~e’cnporuZ logic ([Resch71]) uses the temporal ordering of points in time, as the accessibility relation, with modal operators such as: FA A is true at some future time GA A will be true at all future times etc. As far as we can see, temporal logic (as used e.g. by Halpern, Manna and Moszkowski [Halp83]) leads to the same problem as were just discussed for dynamic logic. Manna and Pnueli [MannBlb] apply temporal logic to the specification of concurrent programs, using the approach of 11. Related work. “cooperating sequential processes” which is not well adapted to our goal, for the reasons quoted above. The theories and languages for concurrent programming address the issue of specifying ‘two or more sequential programs that may be executed concurrently as parallell processes’ (quoted from the survey article of Andrews and Schneider, (Andr831). 
Their goal is therefore different from the goal of the present work, which is to characterize parallell processes in the world outside the computer, but evidently the techniques may sometimes be interchangable. One of the approaches to concurrent programming is to consider cooperating ScquentiaZ processes [Dijk68], i.e. to use a set of sequential programs, equipped with special synchronization operations. That approach may make good sense for concurrent programming, especially in machine-oriented programs, but is not as attractive for describing real-world action structures since there is usually not a good set of ‘processors’ to write programs for and to synchronize. Another approach, path expressions, separates the specification of operations from the constraints on execution order [Camp74]. In that respect they can be considered as similar to the approach taken in the present paper, since the action structure does not specify the ‘procedure’ for performing an operation, but only the allowable orderings. Also, the alignment criterium that was introduced in section 8, can be thought of as a set of path constraints, one for each feature. But the path operators that are used for writing path expressions, such as “,” for concurrency and “;” for sequencing, do not easily lend themselves to expressing structures like the one in figure 1. Also, path expressions have not (to our knowledge) developed the counterpart of the precondition/ postcondition /prevail- condition characterization of operations. A large amount of work has been based on modal Logic, both as a tool for concurrent programming, and in A.I. for characterizing structures of actions or events (which is exactly the goal of the present paper). Basically, the ‘accessibility relation’ that characterizes the Kripke semantics for modal logic [Krip63] is then used as the relation between a world-state and a (or the) succeeding world-state. Dynamic logic ((Prat76j) allows one to use a collection of such accessibility relations. Each elementary operation (from world-state to world-state) may be one such relation, and relations may be composed algebraically, using operators such as “;” for sequential composition, “union” for parallel composition, and the Kleene star for infinite sequential repetition. The big problem with that approach, from our point of view, is that world-states are not explicitly named and talked about. The language only allows you to say things like “in the resulting state after first doing a, and then doing b and c in parallell, the proposition P will hold”. Consequently, the language can not characterize structures like the one shown in figure 1. Also, it becomes quite difficult (probably impossible) to express the constraint of prevail-conditions, namely that no other, parallel1 Yet another approach, which is also frequently called “temporal logic”, is to use a many-sorted first-order logic where e.g. ‘times’, ‘intervals’, ‘states’, and ‘events’ are distinct sorts, and where there are the obvious relations and functions such as During(il,i2) Holds(p,i) and so on. This approach, which we can call “explicit temporal logic” (to distinguish from “modal temporal logic”) has been repeatedly used in A.I. Along with (one interpretation of) the logic programming paradigm, work with this approach is done by defining an ontology, first intuitively and then formally by writing down a large number of axioms in first-order logic. The axioms must of course characterize those sorts and relations. 
McDermott has done this for one particular ontology, which uses states, times, chronicles, etc. ((McDerm82J). Allen has done a similar work for a different ontology, which treats intervals of time as the basic concept ([AlleBl]). A cri i t q ue of these works, which seems to extend to the approach in general, has been written by Turner ([Turn84]). Yet another approach, particularly in AL, has been to extend a temporal logic, of some kind, with additonal constructs which turn it into a programming Language. The procedural logic of Georgeff et al ([Geor85]) is a case in point. Outside the framework of formal logic, early AL research on planning and problem-solving developed methods that have inspired the results in this paper. The handling of preconditions and postconditions builds directly on STRIPS, as has already been discussed. Its successor, the NOAH system ([Sace75]) used a partial order on the actions in the plan, in order not to over-commit itself during the plan-making process. Also, many “scmuntic net” type enterprises (in the broad sense of the attempts to develop adequate knowledge representations to be used for language understanding, scene recognition, question answering, etc., based on common sense and ad hoc notions) have introduced “nodes” “arcs” , etc. for actions or events, and are able to express tempera; relations, preconditions, and/or effects of those actions. Too often, of course, the expressiveness of such representations is so great that a formal analysis of what it is they express, is not possible. In relation to these various approaches, ours can be characterized as an explicit temporal logic, and in that respect it is similar to the approach of McDermott and of Allen. However, we do not tread the usual path of logic, i.e. to define the language, write out axioms, define a semantics, and so on. The structures described above are the ones which would have been used for the semantics, if we had followed the standard path. But we do not see the need for language and axioms, at least not at this point. The purpose of the present paper has been to nail down a minimal set of necessary concepts (a simple ontology for action structures, if you wish), and to characterize the logically admissible action structures. 96 I SCIENCE References. [AIIe81] Allen, J.F. “An interval based representation of temporal knowledge”. Proc. 7th IJCAI, 1981, pp. 221-226. [Andr83] Andrews, Gregory R., and Schneider, Fred B. “Concepts and Notations for Concurrent Programming”. Computing Surveys, Vol. 15, No. 1, March 1983. [Camp741 Campbell, R.H., and Habermann, A.N. “The specification of process synchronization by path expressions”. Lecture notes in Computer Science, vol. 16. Springer Verlag, 1974, pp. 89-102. [Dijk68] Dijkstra, E. W. “Cooperating sequential processes”. In F. Genuys (ed), Programming Languages. Academic Press, New York, 1968. (Geor85] Georgeff, Michael P., Lansky, Amy L., and Bessiere, Pierre “A Procedural Logic I’. Proc. 9th IJCAI, 1985, pp. 516-523. [HaIp83] Halpern, J., Manna, Z., and Moszkowski, B. “A hardware semantics based on temporal intervals”. Proc. 19th ICALP. Springer Lecture Notes in Computer Science, Vol. 54, pp. 278- 292. [Krip63] Kripke, S. “Semantical considerations on modal logic”. Acta Philosophica Fennica, Vol. 16, pp. 83-94. [MannSl] Manna, Z., and Wolper, P. “Synthesis of Communicating Processes from Temporal Logic Specifications”. Proc. of the Workshop on Logics of Programs, Yorktown Heights, NY. Lecture notes in Computer Science, Springer Verlag, 1981. 
[Mann81b] Manna, Z. and Pnueli, A. "Verification of concurrent programs: the temporal framework". In: Boyer, R.S. and Moore, J.S. (eds), The Correctness Problem in Computer Science, pp. 215-273, Academic Press, New York, 1981.
[May83] May, D., Inmos Ltd., Bristol, U.K. "Occam". SIGPLAN Notices, April 1983. (Occam is a trademark of Inmos Ltd.)
[McDerm82] McDermott, D. "A temporal logic for reasoning about actions and plans". Cognitive Science, Vol. 6, pp. 101-155.
[Pete82] Peterson, J.L. "Petri Net theory and the modeling of systems". Prentice-Hall, Inc., 1982.
[Prat76] Pratt, V.R. "Semantical considerations on Floyd-Hoare logic". Proc. 17th IEEE Symp. on Foundations of Computer Science, pp. 108-121.
[Resc71] Rescher, N. and Urquhart, A. "Temporal logic". Springer Verlag, 1971.
[Sace75] Sacerdoti, E.D. "A structure for plans and behavior". Ph.D. thesis, reprinted by Elsevier North Holland Publishing Co., New York, 1977.
[Tate76] Tate, A. "Project planning using a hierarchic non-linear planner". Univ. of Edinburgh, Dept. of A.I. Research, Report 25.
[Turn84] Turner, Raymond. "Logics for artificial intelligence". Ellis Horwood, Ltd., 1984.
ORDER OF MAGNITUDE REASONING Olivier Raiman A.I. Research Department C.F. Picard lab. IBM Paris Scientific Center University P.M. Curie 36 Ave. R. Poincare Paris 75 116 France. ABSTRACT This paper presents a methodology for extending representation and reasoning in Qualitative Physics. This methodology is presently used for various applications. The qualitative modeling of a physical system is weakened by the lack of quantitative information. This may lead a qualitative analysis to ambiguity. One of the aims of this methodology is to cope with the lack of quantitative information. The main idea is to reproduce the physicist’s ability to evaluate the influence of different phenomena according to their relative order of magnitude and to use this information to distinguish among radically different ways in which a physical system may behave. A formal system, FOG, is described in order to represent and structure this kind of apparentty vague and intuitive knowledge so that it can be used for qualitative reasoning. The validity of FOG for an interpretation in a mathematical theory called Non-Standard Analysis is then proven. Last, it is shown how FOG structures the quantity-space. INTRODUCTION Qualitative Physics has had a remarkable development in the last few years. It has shown an increasing capacity to describe the qualitative behavior of physical systems. Nevertheless, the lack of quantitative information can lead a qualitative analysis to ambiguities, and the limits of qualitative simulation have recently been pointed out (Kuipers 1985). In order to overcome these difficulties, the physicist’s basic approach and language can be used as guidelines. This provides us with a way to represent seemingly inaccurate and rather informal knowledge which nevertheless plays a determining role in the physicist’s (or engineer’s) art. This knowledge embodies concepts and rules used to qualify the relative importance of different phenomena on which the whole behavior of a physical system may depend. This is order of magnitude reasoning. Order of magnitude reasoning based on the technique introduced in this paper is being used to: l build the expert system, DEDALE, for troubleshooting analog circuits [2] , l search for “qualitative models” by interpretation of numerical results which represent behaviors of a physical system, such as tires under stress, l build qualitative macroecoanomics [ 13. model of textbook First we go into some of the limitations of qualitative analysis methods, through a simple example of mechanics. Then we introduce the formal system FOG* designed to enable order of magnitude reasoning. We show how FOG removes ambiguity. We then demonstrate FOG’s logical validity with respect to an interpretation in Non-Standard Analysis. Next we explain the relationship between this interpretation of the formal system and its practical applications. Lastly, we show how order of magnitude reasoning is related to the notion of quantity space as defined in qualitative physics (Forbus 1982). We submit that this knowledge structure plays a crucial part in identifying and differentiating between the possible ways a physical system behaves qualitatively. I A SIMPLE EXAMPLE Let’s consider a simple example of mechanics. The impact of two masses of very different weights, M and m, coming from opposite directions, with close velocities Vi and Vi. 
Qualitative reasoning integrating common sense should explain what happens to such a physical system**, and for instance explain what will be the directions of the masses after impact. Following De Kleer’s notation [x] will be the qualitative value of quantity x, i.e. the sign of x, with possible values { + , 0, - }. Then the question is what are the values of [ v,], [v~] ? (“f” designates a value after the impact and “i” a value before). Before impact [ K-J = + [Vi] = - Impact axis --> After impact CV,3=? [v,1=? Figure 1 : Colliding masses 1 In French FOG stands for ‘Formalisation du raisonnement sur I’Ordre de Grandeur’, in English: a Formal system for Order of maGnitude reasoning. 1 We assume the type of collision that occurs is elastic 100 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Momentum and Energy conservation requires that, except during the shock, the following constraints are satisfied: (eJ M.V + m.v = P (eJ M.V.V + m.v.v = E where P and E are constants. A. Qualitative modeling We are tempted to use qualitative equations to describe the behavior of this physical system. However this is constrained by the fact that in order to arrive at “qualitative differential equations” as in [ 3,4], equations (e,) and (eJ must be derived with respect to time. This cannot be done because impact causes a discontinuity of velocities V and v. For the same reason, it is not possible to work with higher order derivatives or to apply any continuity rules for velocities [ 51. Nevertheless, it is still possible to arrive at qualitative equations which link the qualitative values of the velocities before and after impact. Using the classical addition of signs, denoted 8, momentum conservation (e,) implies: (l) C vfl @ CyIl = C vil @ Cvil Conservation of energy does not give a useful qualitative equation directly, but (el) and (eJ imply: (e3) Vf + Vi = Vf + Vi ThUS: (2) [ v/3 @ [ Vi] = [VI] @ [Vi] B. Ambiguity Since Vi and Vi have opposite signs, [ Vi] 8 [Vi] i.e [ Vi] 8 [Vi] = ?. Therefore the right-hand side of equation (1) is undefined. An analysis of equations (1) and (2) shows that such qualitative modeling leaves the sign of v’r and v! unknown. Five solutions for [ VI] and [v!] are equally possible (see table 1). Common sense suggests that the particularity of this situation makes it possible to remove such ambiguity. As we shall see, this can be done by applying FOG. [v/3 Cv,l CVJ [Vi1 CQI @ Cv,l = Evil @ c$l 1+ + + - + ? = 1210 l+l+l- I + ? = I I 31 - l+l+l- I ? = ? I I 41 - lo l+l- I ?=- I Isl- I- l+l- I ?=- I Table 1 : Possible values of [V/1 and Cvfl II FOG What are the key concepts of order of magnitude reasoning? We introduce three operators Ne, Vo, Co. They are used to represent intuitive concepts: A Ne B stands for A is negligible in relation to B. A Vo B stands for A is close to B, ie (A - B) is negligible in relation to B. A Co B stands for A has the same sign and order of magnitude as B. The underlying idea is that if B Ne C then A Ne C. I We now introduce the FOG formal system. The completness and minimality of FOG is not studied here. Because of the intuitive nature of the rules we won’t explain them in detail. [X] stands for the sign of element X. A. 
The Formal System Axiom: A,: A Vo A Inference rules: R,,: AVo B + B VoA R,: ACoB + BCoA R,: B Vo A + B Co A R,: AVoB, BVoC + AVoC &: A Ne B, B NeC --* ANeC R,: ACoB, BCoC + ACoC &: ACoB, BVoC -+ ACoC R,: A Vo B, B Ne C + A Ne C 4: ANe B, B Co C + A Ne C 4: AVo B -+ [A] = R,,: A Co B -+ [A]- ccB"II . - RI,: A Ne B + -A Ne B R12: [A] # 0 , A Vo B + - (A Ne B) R,+ [A] # 0 , A Co B + 1 (A Ne B) RIfip + B] = +, [A] = - + -(BNeA), =+ R,~:[A] # o, [A] = [B], (A + B)VOC -+ -(C Ne A), -(C Ne B) R,6: [A] = 0 , (A + B) Vo C 4 B Vo C R,,: [A] = [B] , A Vo C --) (A + B) Vo (C + B) R,*: A Ne C, B Co D --) A.B Ne C.D R19: A Ne B, C Vo D + A.C Ne B.D Rm: A Ne C, B Ne D + A.B Ne C.D Rzl: A Co B, C Co D -+ A.C Co B.D Ru: A Vo B, C Vo D + A.C Vo B.D R13: (A + B) Vo C, B Ne A -+ A Vo C R14: (A + B) Vo A -+ B Ne A R15: A.B Ne C.D, C Ne A, [A] # 0 + B Ne D R16: A.B Vo CD, A Vo C, [A] # 0 -+ B Vo D R,,: A.B Vo C.D, A Ne C, [c] + 0 + D Ne B R18: A.B Co C.D, A Ne C, [c] f 0 + D Ne B R29:[A) = - [D.E] Z 0 , (A + B.C) Vo D.E, BNeD + ENeC R,:[A] = - D c Ne B EC~l , (A + B) Vo C, D Ne E + . . B. Basic Properties If Co and Vo are both relations of equivalence, a distinction can be made when they are used in conjunction with the Ne relation: if (A + B) Vo A is true then R2,, implies B Ne A. If instead (A + B) Co A is true, the same conclusion cannot be drawn. Co is obviously less restrictive then Vo. Qualitative Reasoning and Diagnosis: AUTOMATE-D REASONING / 10 1 Rule 4, and Rio , imply that FOG can work with the qualitative values of quantities. Thus relations in FOG contain both information on the signs, and on the relative order of magnitude of the quantities. We call these relations “order of magnitude equations”. One should notice that there is no rule that concludes (B Vo D) from (A Vo C) and (A + B ) Vo (C + D). In fact, the orders of magnitude of B and D may occasionally be concealed by those of A and C. This last remark shows that this calculus is not as simple as it may look at first glance. III BACK TO THE EXAMPLE A. Qualitative Constraints (3) (MVf + ??lVf) VO (MVi + mVi) VO (MViK + mVjVi) (10) K VO -Vi* B. Firing Rules of FOG l RI, to (9, 10) + (I 1) - mvi Ne MY, l RI, to (1 I) + (12) mvi Ne ML’, l RI9 to (11, IO) + (13) mvp, Ne MV,V, l Ra to (3, 12) + (14) (MVf + l?lVf) VO MVi . Ra to (4, 13) -+ (15) (MV,V, + mv~,) Vo MKV, . 4 to (14, 5) + (16) [ MV, + mvf] = + . Hypothesis: [ Vf] = - R,, to (16) 3 (17) [vf] = + RB to (17, 14, 5, 9) + (18) V, Ne vf R, to (15, 18) + (19) MV,V, Ne mv,-q R,, to (15) -+ (20) -(MViVi) Ne mv,v, (19, 20) + Contradiction n Hypothesis: [ Vf] = 0 R,, to (15) --) (21) mvfvf Vo MV,V, R,, to (14) + (22) mvf Vo MV, Rm to (22,9) + (23) Vi Ne VI R,to (21,22) -+ (24) v, Vo vf (23,24) + Contradiction 0 -+ [ v,] = + 3 If instead of asserting V* Vo -vb we weaken this this assertion to Ye Co -vi, the same conclusion 1 or the signs of velocities can be drawn, an d the final result is V~CO vi instead of VfVo 3 Vi C. Results Ambiguity for the qualitative value of I$ is removed. c yfl = - and [ Vf] = 0 lead to contradiction. Thus, the only right solution in table 1 is: [V,l = + and [yfl = + Furthermore the complete analysis [ lo] of the case also implies: V VO Vi and V, VO (Vi + Vi + Vi) , which means that t e velocity of the larger mass remains L about the same after impact, and that the smaller mass resumes with a velocity close to three times the velocity of the larger mass. D. 
Comparing the Results with a Reasoning by Analogy Mass m is negligible as compared to M, so everything happens as if mass m were hitting a wall (mass M). If the frame of reference is mass M, the velocity of mass m before impact is close to ( -2VJ. Mass m rebounds at a velocity of 2 Vi . Since mass M’s velocity is already Vi, after impact the velocity of mass m is 2Vi + Vi . It should be noted that this reasoning implicitly uses the steps proven with FOG For example, the sentence “everything happens as if mass m were hitting a wall” is equivalent to “the momentum and energy of M remains unchanged”. And these conclusions are obtained when using FOG [lo] that infers: MV’Vo Mb and MVfVIVoMViVi E. The Added Information Derived from FOG Analysing another simple case will help illustrate the rewards of using order of magnitude reasoning, and the limitations of focusing only on the sign of quantities. Take the following case: The results in this case are: [Vi] = + m Vo M v- Ne Vi p$= -;7 c’l V; Ne Vi vi = - Cyfl = + V/ VO Vi With a qualitative analysis restricted to signs, ambiguity would remain for both velocities VI and vr With FOG the sign of VI remains ambiguous, but a qualitative property is obtained: VI Ne Vi is provided, and compared to the velocity of mass m, mass M remains steady after impact. So the main phenomenon is derived by FOG, namely that there is a transfer of velocity, momentum and energy from mass M to m. These two cases show that information relative to order of magnitude structures the behavior of the physical system. For more complicated systems, it is often essential for the practitioner to use this order of magnitude knowledge to deduce the different possible behaviors of the physical system. 102 / SCIENCE An interesting question is whether it is preferable to solve the problem symbolically and then use order of magnitude considerations. A first remark is that in some cases the model can only be described in terms of order of magnitude [ 1,2]. Secondly if we look at the resolution of the simple example above, using initially order of magnitude reasoning produces inferences that at each step can be interpreted in terms of velocity, momentum, or energy. We expect the more complex the system the greater the gain, by using order of magnitude reasoning as early as possible in the analysis, for the resolution and for the explanation. IV VALIDITY OF FOG IN NON-STANDARD ANAL YSIS Let’s give a justification for the use of FOG from a logical point of view. Under the name of Non-Standard Analysis, A. Robinson introduces [ 111 a calculus on infinitesimal. In essence and with a gross simplification he describes a way to introduce a halo around quantities. This suggest that Non-Standard Analysis might be a good tool to validate FOG. A. A Quick Glimpse on Non-Standard Analysis Field K of Non-Standard Analysis, noted N.S.A., is a totally ordered non archimedean* field [ 93. The field R of real numbers is imbedded in K. Let F be the ring of finite elements of K, I the set of infinitesimals. Then R n I = (0}, and I is an ideal of F. In particular the sum of two infinitesimals is an infinitesimal and the product of an infinitesimal and a finite element is an infinitesimal. Positive infmitesimals are smaller than any strictly positive real number. B. Definitions For The Qualitative Operators Let A, B, be elements of K: 0 AVoB iff o&I, A = B.(l + 0). One could be tempted to use the definition of a halo to define A Vo B, i.e A Vo B iff (A - B) E I. 
But with this definition A Vo B would not imply [A] = [B] , and FOG would lose it’s capacity to remove ambiguity. a A Ne B iff o E I, A = B.o 0 A Co B iff 0 E F-I, A = B.0 C. Validity of the Inference Rules All the rules of FOG are valid for this interpretation [ 101 . Let us d some rules. emonstrate, for example, the validity of 4 A field is archimedean if for every strictly positive element x of the field, and for every element y of the field, there exists an integer n such that nx > y. R, AVoB,BVoC + AVoC A= B( 1 + oJ, B = C ( 1 + 03, with 0; and 0, elements of I. Hence A = C ( 1 + o1 + 4 + o,.d. Since I is stable by addition and multiplication, definition (dl) shows that A Vo C. Rp (A+ B)VoC, BNeA -+ AVoC According to RI, (A + B) Vo C + C Vo (A + B), i.e. C = (A + B) (1 + ol), B Ne A, gives B = A .ol, with 01 and oJ elements of I. Hence C = A ( 1 + o1 + 4 + o,.q). The stability of I for addition and multiplication still results in C Vo A, in replying R, we finally get A Vo C. Rn A.B Vo CD, ANe C, [Cl # 0 --, D NeB A= C .ol, by applying R, we get: C.D Vo A B, i.e. C.D = A.B ( 1 + oJ, with o, and oz elements of 1. Hence D= B.o, ,( 1 + oJ, by using the properties of stability of I, we get: D Ne B. R,, ACo B, BNeC + A NeC A= B 0, and B = C o,, with 0, element of F-I, Ok element of I. Hence O1.oz is an element of I and A Ne C. V HOW TO USE FOG Getting back to the practical aspect, it is interesting to complete the path in the diagram below: “Natural order of magnitude reasoning” {{ In concrete terms we must go from Robinson’s infmitesimals to sufficiently small reals. To do this one can associate with infinitesimals, sequences of real numbers tending towards 0 [7] . Algebraic computing in N.S.A. then becomes the study of limits in the world of real numbers. Thus the following result completes the path. Let us consider a formal deduction using the rules of FOG a finite number of times. Given neighborhoods I, of 0, I2 of 1 and I3 a finite interval containing Z2 with 4 n 4 = 0 and defining A Ne B iff A/B E II, A Vo B iff A/B E &, A Co B iff A/B E Z3, all results derived from applying FOG will hold, provided that the initial intervals allowed for the use of Ne, Vo, Co are “tight” enough compared to II, I,I,. The above result does not specify the size of these intervals but confirms their existence. If we reason without specifying these ranges, we have a purely symbolic qualitative reasoning. In practice, such symbolic reasoning is applied either because the data available is not accurate enough to use quantitative methods, or because qualitative reasoning has been deliberately chosen. Even in the case of a pure symbolic reasoning with order of magnitude knowledge, we can extract an interesting explanation of the behavior of a system. This is the case for example in the macroeconomic model [ 11. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 103 For certain applications, we must be able to determine whether or not we are within ranges for which this reasoning is acceptable for real numbers. In this case, using order of magnitude reasoning for a given application requires the specific expertise of the system builder. Deciding to use premise A Ne B is only of interest with respect to a given situation, and an expert is capable of deciding which qualitative relations are suited to the system. For instance, as far as the DEDALE expert system is concerned, the choice of initial relations requires expertise. 
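To make the interval reading above concrete, the following minimal sketch (not part of FOG itself; the thresholds EPS_NE, EPS_VO and the band CO_BAND are illustrative assumptions standing in for the expert-chosen acceptable ranges) shows how Ne, Vo and Co could be tested on real numbers:

```python
# A minimal sketch, assuming illustrative ranges; FOG itself only asserts
# that suitable ("tight enough") intervals exist.

EPS_NE = 1e-2            # I1: neighborhood of 0 used for "negligible"
EPS_VO = 1e-2            # I2: width of the neighborhood of 1 used for "close to"
CO_BAND = (0.1, 10.0)    # I3: finite interval of comparable ratios

def ne(a, b):
    """A Ne B: A is negligible in relation to B (A/B lies in I1)."""
    return b != 0 and abs(a / b) < EPS_NE

def vo(a, b):
    """A Vo B: A is close to B (A/B lies in I2, a neighborhood of 1)."""
    return b != 0 and abs(a / b - 1.0) < EPS_VO

def co(a, b):
    """A Co B: A has the same sign and order of magnitude as B (A/B in I3)."""
    return b != 0 and CO_BAND[0] <= a / b <= CO_BAND[1]

M, m = 1000.0, 1.0        # two masses of very different weights
Vi, vi = 3.0, -3.0        # velocities before impact
print(ne(m, M))           # True: m Ne M
print(vo(vi, -Vi))        # True: vi Vo -Vi (close magnitudes, opposite signs)
print(co(vi, -Vi))        # True: the weaker premise vi Co -Vi also holds
print(ne(m * vi, M * Vi)) # True: the momentum of mass m is negligible
```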
The system user will consider the expertise used to specify the acceptable ranges as given initial knowledge when solving a particular case. VI FOG AND THE QUANTITY SPACE FOG’s contribution can be understood through what is referred to in qualitative physics as the quantity space [ 61. The notion of quantity space is used to define ‘landmarks” for values of qualitative variables. The basic structuring of the quantity space is to locate an element [x] in relations to fields [x] = + and [x] = -, Here the landmark considered is the value 0, but there may also be other landmarks. Landmark “L”, for example, then separates the quantity space according to the sign of [X - L]. The set of landmarks defines a partial order on the quantity space [ 61. FOG provides the quantity space with qualitative landmarks and a structure for the regions defined by these landmarks: l The equivalence relations, Co and Vo, define regions within this space containing elements which have the same order of magnitude. l The Ne relation sets up a hierarchy between these regions, in other words a scale of comparison for this space. l The rule: A Ne B + (A + B) Vo B, shows that for orders of magnitude which differ, the regions are stable with respect to addition. l The rule: A Ne B and C Co D + A.C Ne B.D, means that the hierarchy between two regions is maintained, when multiplying their elements by elements which are on comparable scales. The way in which FOG applies these characteristics to the quantity space makes it possible to express what it means to detect a contradiction or to make a hypothesis concerning orders of magnitude. l A contradiction is detected if regions defined by classes of equivalence associated with relations Vo and Co, do not follow the hierarchy between the classes imposed by the Ne relation. l Making a hypothesis concerning the order of magnitude comparing two elements means imposing an additional relation between their classes. This may involve merging them or establishing a hierarchy between them. CONCLUSION The aim of order of magnitude reasoning is to provide a level of description, eliminating secondary aspects and showing the main properties of a system. This implies a quantity space with the added structure derived from the use of the operators Ne, Vo, Co. This allows the introduction of common sense knowledge, and simplifies the representation of complex systems. FOG handles order of magnitude reasoning through symbolic computation. Thus, the formal system FOG creates a framework to represent this category of qualitative knowledge. This representation belongs to the scientist’s traditional and intuitive way of reasonin . Experience in applying FOG to Macroeconomics [ 1 f indicates that it should have a wide range of applications. ACKNOWLEDGMENTS I would like to thank J.P. Adam and J. Fargues, (Paris I.B.M. scientific center), for their constant encouragements and help, Pr. J.L. Lauriere, (C.F. Picard lab.) for his support. I am also greatful to V. Tixier (G.S.I.) for discussing and reviewing this paper. REFERENCES [l] P. Bourgine, 0. Raixnan, “Macroeconomics as reasoning on a qualitative model”, To appear in Economics and Artij?cial Intelligence, First International Conference, Aix-en-provence, France, September 1986. [2] P. Dague, P. De&, 0. Raiman, ‘Rais;lfhy;;; quahtatif dans le diagnostic de pannes”, * International Workshop on Expert System and their Applications, Avignon, France, April 1986. [ 31 J. De Kleer, “Causal and Teleological Reasoning in Circuit Recognition”, M.I.T. 
Lab, 1979.
[4] J. De Kleer, J.S. Brown, "A Qualitative Physics Based on Confluences", Artificial Intelligence, Vol 24, 1984.
[5] J. De Kleer, D.J. Bobrow, "Qualitative Reasoning With Higher Order Derivatives", Proceedings of the National Conference on Artificial Intelligence, pp. 86-91, 1984.
[6] K.D. Forbus, "Qualitative Process Theory", Artificial Intelligence Laboratory, AIM-664, Cambridge: M.I.T., 1982.
[7] J. Henle, J. Kleinberg, "Infinitesimal Calculus", M.I.T. Press, Cambridge, Massachusetts, and London, England, Printed by Alpine Press, U.S.A.
[8] B. Kuipers, "The Limits of Qualitative Simulation", Proc. IJCAI-85, 1985, pp. 128-136.
[9] R. Lutz, M. Goze, "Nonstandard Analysis", Lecture Notes in Mathematics, Vol 881, Springer-Verlag, 1981.
[10] O. Raiman, "Raisonnement Qualitatif", Centre Scientifique IBM France, document F093, November, 1985.
[11] A. Robinson, "Non-Standard Analysis", North-Holland Publishing Company, Amsterdam, 1966.
MULTIPLE FuAULTS Johan de Klcer Intelligent Systems Laboratory XEROX Palo Alto R.esenrch Center 3333 Coyote Hill Road Palo Alto, California 94304 and Brian C. Williams M.I.T. Artilicial Intelligence Laboratory 545 Technology Square Cambridge, Massachusetts, 02139 ABSTRACT Diagnostic tasks require determining the differences between a model of an artifact nnd the artifact itself The differences between the manifested behavior of the artifact and th,e predicted behuvior of the model guide the setsrch for the diflerences between the artifact and its model. The diagnostic procedure presented in this paper is model-based, inferring the behavior of the composite device from knourl- edge oj the structure and function of the individual compo- nents comprising the device. The system (GDE - General Dirtgnostic Engine) has been implemented and tested on ex- amples in the domain of troubleshooting digital circuits. This research makes several novel contributions: First, the system diagnoses failures due to multiple faults. Sec- ond, j&lure candidates are represented and manipulated in terms of minimal sets oj violated assumptions, resulting in an eficient diagnostic procedure. Third, the diagnostic procedure is incremental, reflecting the iterative nature of diagnosis. Finally, a clear separation is drawn between di- agnosis und behavior prediction, resulting in a domain (and injerence procedure) independent diagnostic procedure. 1. Introduction Engineers and scientists constantly strive to under- stand the differences between physical systems and their lllodels. Engineers troubleshoot mechanical syst,cms or electrical circuits to find broken parts. Scientists succes- sively refine a model based on empirical data during the process of theory formation. Many everyday common- sense reasoning tasks involve finding the differcuce between models and reality. Diagnostic reasoning requires a means of assigning credit or blame to parts of the model based on observed behavioral discrepancies observed. If the task is trou- bleshooting, t,hen the model is presumed to be correct and all model-artifact differences indicate part malfunctions. If the task is theory formation, then the artifact is presumed to be correct and all model-artifact differences indicate re- quired changes in the model. Usually the evidence does not admit a unique model-artifact difference. Thus, the diagnostic task requires two phases. The first, mentioned above, identifies the set of possible model-artifact diifer- ences. The second proposes evidence-gathering tests to reline the set of possible model-artifact differences until they accurately reflect the actual differences. This view of diagnosis is very general, encompassing troubleshooting mechanical devices and analog and digi- tal circuits, debugging programs, and modeling physical or biological systems. Our approach to diagnosis is also independent of the inference strategy employed to derive predictions from observations. For troubleshooting circuits, the diagnostic task is to determine why a correctly designed piece of cqltipmcnt is not functioning as it was intended; the explanation for the faulty behavior being that the particular piece of equip- ment under consideration is at variance in some way with its design (e.g., a set of components is not working cor- rectly or a set of connections is broken). To troubleshoot a system, a sequence of measurements must be proposed, executed and then analyzed to localize this point of vari- ance, or fault. 
For example, consider the circuit in Fig. 1, consisting of three multipliers, Ml, Mz, and Ma, and two adders, A, and A,. The inputs are A = 3, D -x 2, C =-I 2, U = 3, and E I= 3, and the outputs are measured showing 132 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. that F = 10 and G = 12.’ From these measurements it is possible to deduce that at least one of the following sets of components is faulty (each set is referred to as a can- didate and is designated by [...I): [A,], [M,], [A2,M2], or [AJZ, MS]. Furthermore, mea>uring X is likely to produce the most useful information in further isolating the faults. Intuitively, X is optimal because it is the only measure- ment that can differentiate between two highly probable singleton candidates: [Al] and [Ml]. A r 1 3 Ml I B F 2 Al I - C 2- M2 -I’ D - G 1 A2 - Fig. 1: A familiar circuit. Earlier work in diagnosis has concentrated primarily on diagnosing failures produced by a single faulty com- ponent. Wh en one entertains the possibility of multiple faults, the space of potential candidates grows exponen- tially with the number of faults under consideration. This work is aimed specifically at developing an efficient gen- eral method for diagnosing failures due to any number of simultaneous faults. The focus of this paper is the process of analyzing the results of measurements to identify potential causes of variance (see [3] f or an extensive discussion on the use of probabilistic information to guide the measurement pro- cess). This paper describes a general framework for di- agnosis which, when coupled with a predictive itlference component provides a powerful diagnostic procedure for dealing with multiple faults. In addition it also demon- strates the approach in the domain of digital electronics, using propagation as the predictive inference engine. 2. Model-artifact Differences The model of the artifact describes the physical struc- ture of the device in terms of its constituents. Each type of constituent obeys certain behavioral rules. For csample, a simple electrical circuit consists of wires, resistors and so forth, where wires obey Xirchoff’s Current Law, resistors obey Ohm’s Law, <and so on. In tliaguosis, it is given that Ihe behavior of the artifact differs frown its model. It is I This Systclns. cirruit is ;dso 1rwt1 by b3t.h [z] nntl [8] in cxpl:.il!ir!;; thei! then the task of the diagnostician to determine what these differences are. The model for the artifact is a description of its phys- ical structure, plus models for each of its constituents. A constituent is a very general concept, including compo- nents, processes and even steps in a logical inference. In addition, each constituent has associated with it a set of one or more possible model-artifact differences which es- tablishes the grain size of the diagnosis. Diagnosis takes (1) the physical structure, (2) models for each constituent, (3) a set of possible model-artifact differences and (4) a set of measurements, and produces a set of candidates, each of which is a set of differences which explains the observations. Our diagnostic approach is based on characterizing model-artifact differences as assumption violations. A con- stituent is guaranteed to behave according to its model only if none of its associated differences <are manifested, i.e., all the constituent’s assumptions hold. If any of these assumptions are false, then the artifact deviates from its model, thus, the model may no longer apply. 
An impor- tant ramification of this approach ([1,2:3,6,8,11]) is that WC need only specify correct models for constituents - explicit fault models <are not needed. Reasoning about model-artifact differences in terms of assumption viol&ons is very general. For example, in elec- tronics the assumptions might be the correct functioning of each component and the absence of any short circuits; in a scientific domain a faulty hypothesis; in a common- sense domain an assumption such as persistence, defaults or Occam’s Razor. 3. Detection of Symptoms We presume (as is usually the case) that the model- artifact differences are not directly observable.2 Instead, all assumption violations must be inferred indirectly from behavioral observations. In section 8 we present a gen- eral inference architecture for this purpose, but for the moment we presume an inference procedure which makes behavioral predictions from observations and assumptions without being concerued about the procedure’s details. Intuitively, a symptom is any difference between a pre- diction made by the inference procedure and an observa- tion. Consider our example circuit. Given the inputs, A = 3, B = 2, C = 2, D = 3, and E = 3, by simple calculation (i.e., the inference procedure), F = X x 1’ = A x C + R x D = 12. However, F is measured to be 10. Thus “J’ is observed to be 10, not 12” is a symptom. More generally, a symptom 1. .C u,ny inconsistency detected by the inference proccdurc, and way occur ber.wecn two prcdic- tions (inl’erred from distinct oleasureiuents) as well as n Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 133 measurement and a prediction (inferred from some other measurements). 4. Conflicts The diagnostic procedure is guided by the symptoms. Each symptom tells us about one or more assumptions that are possibly violated (e.g., components that nmy be faulty). Intuitively, a conflict is a set of assumptions sup- porting a symptom, and thus leads to an inconsistency. In electronics, a conflict might be a set of components which cannot all be functioning correctly. Consider our example sylllptonl “F is observed to be 10, not 12.” Our calcu- lation that E’ = 12 depends on the correct operation of Ml, M2 and Al, i.e., if Ml, A42, and A, were correctly functioning, then F = 12. Since F is not 12, at least one of Ml, M2 and Al is faulted. Thus the set (Mi,M2,Ai) (conIlicts are indicated by (...)) is a conflict for the synlp- tom. Because the inference is monotonic with the set of assumptions, the set (Ml, M’2, Ai, AZ), and any other su- perset of (Ml, Mz, Al) are conflicts as well; however, no subsets of (Ml, M;!, Al) are necessarily confticts since all the components in the conflict were necessary to constrain the value at F. A measurement might agree with one prediction and yet disagree with another, resulting in a symptom. For example, starting with the inputs B = 2, C = 2, D = 3, and E = 3, <and assuming M2, MS and A2 are correctly funct,ioning we calculate G to be 12. However, starting with the observation F = 10, the inputs A = 3, C = 2, and E = 3, and assuming that Al, AZ, Ml, and MS, (i.e., ignoring 1W2) arc correctly functioning we calculate G = 10. Thus, when G is measured to be 12, even though it agrees with the first prediction, it still produces a conflict based on the second: (Al, AZ, Ml, MS). For complex domains any single symptom can give rise to a large set of comlicts, including the set of all com- ponents in the circuit. 
To reduce the combinatorics of diagnosis it is essential that the set of conflicts be repre- sented and manipulated concisely. If a set of components is a conflict, then every superset of that set must also be a conflict. Thus the set of conflicts can be represented con- cisely by only identifying the minimal conIlicts, where a conflict is minimal if it has no proper subset which is also a conflict. This observation is central to the performance of our diagnostic procedure. The goal of conflict recogni- tion is to identify the complete set of minimal conIlicts.3 ‘JJyJ>ically, but not always, each symptom corresponds to a single minimal conflict. 5. Candidates A cnndidate is a particular hypothesis for how the ac- tual artifact differs from the model. Ultimately, the goal of diagnosis is to identify, and refine, the set of candidates consistent with the observations thus far. A candidate is represented by a set of assumptions (indicated by [...I). Every assumption mentioned in the set must fail to hold. As every candidate must explain every symptom (i.e., its conflicts), each set representing a candidate must have a non-empty intersection with every conflict. For electronics, a candidate is a set of failed compo- nents, where (any components not mentioned a.re guaran- teed to be working. Before any measurements have been taken we know nothing about the circuit. The size of the initial candidate space grows exponentially with the num- ber of components. Any component could be working or faulty, thus the candidate space for Fig. 1 initially consists of 2” = 32 candidates. [MI.M2.Ml.Al.A2) [MI.MZ.MJ.Al) [Mull PfMwI &i%All IMU9 [AL421 Fig. 2 Initial candidate space for circuit example. It is essential that candidates be represented concisely as well. Notice that, like conflicts, candidates have the property that any superset of a candidate must be a can- didate as well. Thus the space of all candidates consistent with the observations can be represented by the minimal candidates. The goal of candidate generation is to idcn- tify the complctc set of niinimnl candidates. The space of candidates can be vislinlixed in tcrri1s of a slll)sct,-snI)cl.,~ct l;~tlicf: (Fig. 2). ‘1‘1 ic tnirliurnl camflitl~~lcs tltcn flcfitie a bonritli~ry suc!i t.hat f~vcrytliing fro~il Idie boundary up is a vnlifl candidate, ;vhilc: everything below is not. Given no measurements every component might be working correctly, thus the single minimal candidate is the empty set, [], which is the root of the lattice at the bottom of Fig. 2. To summarize, the set of candidates is constructed in two stages: conflict recognition and candidate generation. ConIlict rccoguition uses the observations made along with a model of t,he device to construct a complete set of min- 134 / SCIENCE imal conflicts. Next, candidate generation uses the set of minimal conflicts to construct a complete set of minimal candidates. Candidate generation is the topic of the next section, While conflict recognition is discussed in Section 7. 0. Candidate Generation Diagnosis is an incremental process; as the amgnos- tician takes measurements he continually relines the can- didate space and then uses this to guide further measure- ments. Within a single diagnostic session the total set of candidates must decrease monotonically. This corresponds to having the minimal candidates move monotonically up through the candidate superset lattice towards the candi- date represented by the set of all components. 
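As a concrete illustration of this candidate space, the following brute-force sketch (assumed here for exposition; it is not the incremental algorithm described next) computes the minimal candidates directly as the minimal sets of assumptions that intersect every minimal conflict; on the two conflicts of the example circuit it reproduces the four minimal candidates derived in the walkthrough that follows.

```python
# Brute-force candidate generation: a candidate is any set of assumptions
# that intersects every conflict; only the minimal ones are kept.
from itertools import combinations

def minimal_candidates(components, conflicts):
    minimal = []
    for size in range(len(components) + 1):
        for cand in combinations(sorted(components), size):
            s = set(cand)
            if any(not (s & c) for c in conflicts):
                continue                      # misses some conflict
            if any(m <= s for m in minimal):
                continue                      # superset of a kept candidate
            minimal.append(s)
    return minimal

comps = {"A1", "A2", "M1", "M2", "M3"}
conflicts = [{"A1", "M1", "M2"},              # from the symptom "F = 10, not 12"
             {"A1", "A2", "M1", "M3"}]        # from the symptom "G = 12, not 10"
for c in minimal_candidates(comps, conflicts):
    print(sorted(c))
# -> ['A1'], ['M1'], ['A2', 'M2'], ['M2', 'M3']
```

The incremental procedure described next reaches the same set without re-enumerating the lattice after each new conflict.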
Similarly, the total set of conflicts must increase monotonically. This corresponds to having the minimal conflicts move mono- tonically down through a conflict superset lattice towards the conflict represented by the empty set. Candidates care generated incrementally, using the new conflict(s) and the old candidate(s) to generate the new candidate(s). The set of candidates is incrementally modified as fol- lows. Whenever a new conflict is discovered, any previous minimal candidate which does not explain the new con- flict is replaced by one or more superset candidates which are minimal based on this new information. This is ac- complished by moving up along the lattice from the old minimal candidate, recording the first candidate reached which explains the new conflict; i.e., when the candidate’s intersection with the new conflict is non-empty. When moving up past a candidate with more than one parent a consistent candidate must be found along each branch. Eliminated from those candidates recorded are any which are subsumed or duplicated; the remaining candidates are added to the set of new candidates. Consider our example. Initially there are no conflicts, thus the minimal candidate [] (i.e., everything working) explains all observations. We have already seen that the single symptom “F = 10 not 12” produces one conflict (A,, Ml, Mz). This rules out the single minimal candidate [I. Thus, its immediate supersets [Ml], [Mz], [MS], [Al], and [AZ] are examined. Each of the candidates [MI], [Mz], and [A,] explain the new conflict and thus are recorded; however, [AZ] and [MS] do not. All of their immediate superset candidates except for [AZ, MS] are supersets of the three minimal candidates discovered above. [AZ, MS] does not explain the new conflict, however, its immedi- ate superset candidates are supersets of the three minimal candidates and thus are implicitly represented. Therefore, the new minimal candidate set consists of [Ml], [Mz], and [AI]. The second conflict (infcrrcd from observation G == 1% (4, A2, Ml, MA only eliminates minimal Catldidid! [&fz]; the unaffected candidntcs [Ml], aud [ tll] remain min- imal. Ijowever, to complete the set of minimal candidates we must consider the supersets of [Mz]: [Al, Mz], [A2, n/r,], [W, M2], an d [M2, M3]. Each of these candidates explains the new conflict, howcvcr, [A,,MzJ and [n/r,, M21 are SU- persets of the minimal candidates [Al] ad [MI], respec- tjvely. Thus the new minimal candidates are [A2, n/iz], and [p/f2, M3], resulting in the Il;inimal candidate set: [Al], [Ml], [AZ, Mz], and [Mz, Mz]. Candjdate generation has several interesting proper- ties: First, the set of minimal candidates may increase or decrease in size as a result of a measurement; however, a candidate, once eliminated can never reappear. AS mea- surements accumulate the sizes of the minimal candidates never decrease. Second, if an assumption appears in every minimal candidate (and thus every candidate), then that assumption is necessarily false. Third, the presupposition that there is only a single fault (exploited in all previous model-based troubleshooting strategies), is equivalent to assuming all candidates are singletons. In this case, the set of candidates can be obtained by intersecting all the conflicts. 7. Conflict Recognition Strategy The remaining task involves incrementally construct- ing the conflicts used by candidate generation. In this sec- tion we first present a simple model of conflict recognition. 
This approach is then refined into an efficient strategy. A conflict can be identified by selecting a set of as- sumptions, referred to as an environment, and testing if they are inconsistent with the observations.4 If they are, then the inconsistent environment is a conflict. This re- quires an inference strategy C(OBS,ENV) which given the set of observations OBS made thus far, and the cnviron- merit ENV, determines whether the combination is consis- tent. In our example, after measuring F = 10, and before measuring G = 12, C({F = lo}, {M~,M2,A,}) (leaving off the inputs) is false indicating the conflict (Ml, M2, Al). This approach is refined as follows: Refinement I: Exploiting minimality. To identify the set of minimal inconsistent environments (and thus the minimal conflicts), we begin our search at the empty en- vironment, moving up along its parents. This is sinlilar to the search pattern used during candidate generation. At each environment we apply C(OBS,ENV) to dcterrnine 4 An environment should not be confused with n calldidnte. An environment is n set of assumptions all of which are assnrned to be true (e.g., A41 alld M2 WC CSSIII~~CI to bc working correctly), a cnndidntc is a set of assumptions all of which arc assumed to be false (e.g., colllpollents Ml and A42 are liot fuuctionillg correctly). A conflict is n, set of ;lssun~~~l.iolls, at least one of which is f&c. Intuitivrly an rnvironmtmt is t,bc% set of assulllptions tlmt defijl:? il "contc!ut" in a deductive infc~rcrtcc engin(B, in this cnx: t,llc engilM2 i:; IISCX~ for pdict.iotr and t.ho assurnpt,iom ;IIC ;hout the 1:lck of particular n~otlcl-artifact dilfcrctlccs. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 13 j whether or not ENV is a conflict. Before a new environ- ment is explored, all other environments which are a subset of the new environment must be explored first. If the envi- ronment is inconsistent then it is a minimal conflict and its supersets are not explored. If an environment has already been explored or is a superset of a conflict then C is not run on the environment and its supersets are not explored. We presume the inference strategy operates entirely by inferring hypothetical predictions (e.g., values for vnri- ables in environments given the observations made). Let P(OBS,ENV) b e all behavioral predictions which follow from the observations OBS given the assumptions ENV. For example, P({A = 3,B = 2,C = 2,D = 3}, {Al,Ml, Mz}) produces {A = 3, B = 2, C == 2,D = 3,X = 6, Y = 6, F = 12). C can now be implemented in terms of P. If P com- putes two distinct values for a quantity (or more simply both z and - TC), then ENV is a conflict. Refinement 2: Monotonicity of measurements. If in- puts are kept constant, measurements are cumulative and our knowledge of the circuit’s structure grows monotoni- cally. Given a new measurement M, P(OBSU{M}, ENV) is always a superset of P(OBS,ENV). Thus if we cache the values of every P, when a new measurement is made we need only infer the incremental addition to the set of predictions. Refinement 3: Monotonicity for assumptions. Analo- gous to refinement 2, the set of predictions grows monoton- ically with the environment. If a set of predictions follow from an environment, then the addition of any assump- tion to that environment only expands this set. Therefore P(OBS,ENV) contains P(OBS,E) for every subset E of ENV. This makes the computation of P(OBS,ENV) very simple if all its subsets have already been analyzed. Refinement 4: Redundant Inferences. 
P must be run on every possible environment. Thus, we need a large set of data-bases, and the same rule will be executed over and over again on the same antecedents. All of this overlap can be avoided by utilizing ideas of Truth Maintenance such that every inference is recorded as a dependency and no inference is ever performed twice [ 71. Refinement 5: Exploiting locality. This is primarily an observation of why the previous refinements care suc- cessful. The first four refinements allow the strategy to ignore (i.e., to the extent of not even generating its name) any enviromnent which doesn’t contain some interesting inferences absent in every one of its subsets. If every envi- ronment contained a new unique inference, then we would still be faced with a computation exponential in the num- bcr of potential model-artifact differences, However, in practice, as the components are weakly connected, the in- ferences rules are weakly connected. Our strategy depends on this empirical property. For example, in electronics the only assumption sets of interest will be sets of components which are connected and whose signals interact - typ- ically circuits are explicitly designed SO that colllporlent interactions are limited. 8. Inference Procedure Architecture To completely exploit the ideas discussed in the pre- ceding section we need to moclify and augmcn t, Ihe itn- plementation of P. We presume that P meels (or can be modified to) the two basic criteria for utilizing truth main- tenance: (1) A dependency (i.e., justification) can be con- structed for each inference, and (2) belief or disbelief in a datum is completely determined by these dependencies. In addition, we presume that, during processing, whenever more than one inference is simultaneously permissible, that the actual order in which these inferences are performed is irrelevant and that this order can be ext.ernally controlled (i.e., by our architecture). Finally, we presume that the in- ference procedure is monotonic. Most Al inference proce- dures meet these four general criteria. For example, many expert rule-based systems, constraint propagation, demon invocation, taxonomic reasoning, qualitative simulations, natural deduction systems, and many forms of resolution theorem-proving fit this general framework. We associate with every prediction, V, the set of envi- ronments, ENVS(V), from which it follows (i.e., ENVS(V) E {envlV E P(OBS, env)}). We call this set the support- ing environments of the prediction. Exploiting the mono- tonicity property, it is only necessary to represent the min- imal (under subset) supportiug environments. Consider our example after the measurements F = 10 and G = 12. In this case we can calculate X = 6 in two different ways. First, Y = B x 15) = 6 assuming n/r, is functioning correctly. Thus, one of its supporting environnlents is {Mz}. Second, Y = G - 2 = G - (C x I;-‘) == 6 assl~ming 112 and ,‘l13 nre working. Therefore the supporting environments of Y := 6 are {{Mz}{A:!, MS}). Any set of assumptions used to derive Y = G is a superset of one of these two. By exploitin, m dependencies no inference is ever done twice. If the supporting environment of a fact changes, then the supporting environments of its consequents are updated automatically by tracing the dcpcndencies created when the rule was first, run. 
l’his achieves the effect of rerunning the rule without incurring any computational overhead, Wc control the inference IJrixCSS such that whenever two inFerenccs are posslhle, the one producing a datum in the smaller environment is performed first. A simple agenda lJlCChc?~JiSlll srlIfices for this. Whenever a symptom is rccoguizcd, the enviroamcnl is marked a conflict and all inf::ro~icing stops on lhat or:‘* -iron nlcut. Using this control schr~lle facts are ;dwnys dctlucetl in their nlinilual environ- IllClJi,, ;Lcllicving oho dcsircd property th,at only minimal 136 I SCIENCE conflicts (i.e., inconsistent environments) arc geueratcd. In this architecture P can be incomplete (in praclice it usually is). The only consequence of incolnplcteness is that fewer conflicts will be detected and thus fewer candidates will be eliminated than the ideal - no candidate will be mistakenly eliminated. 9. Circuit Diagnosis Thus far we have descrihcd a very general diagnos- tic strategy for handling multiple faults: whose applica- tion to a specific domain depends only on the selection of the function P. During the remainder of this paper, WC demonstrate the power of this approach, by applying it to the problem of circuit diagnosis. For our example we make a number of simplifying pre- suppositions. First, we assume that the model of a circuit is described in terms of a circuit topology plus a behavioral description of each of its components. Second, that the only type of model-artifact difference considered is whether or not a particular component is working correctly. Fi- nally, all observations are made in terlns of measurements at a component’s terminals. Measurements are expensive, thus not every value at every terminal is known. Instead, some values must be inferred from other values and the component models. Intuitively, symptoms are recognized by propagating out locally through components from the measurement points, using the component models to de- duce new values. The application of each model is based on the assumption that its corresponding component is working correctly. If two values are deduced for the same quantity in different ways, then a coincidence has occurred. If the two values differ then the coincidence is a symptom. The conflict then consists of every component propagated throng11 from the measurement points to the point of coin- cidence (i.e., the sympt,om implies that, at least one of the components used to deduce the two values is inconsistent,). 10. Constraint Propagation Constraint propagation [12,13] operates on cells, val- ues, and constraints. Cells represent state variables such as voltages, logic levels, or fluid flows. A constraint stipu- lates a condition that the cells must satisfy. For example, Ohm’s law, ZI = iR, is represented as a constraint among the three cells V, i, and R. Given a set of initial values, constraint propagation assigns each cell a value that sat- isfies the constraints. The basic inference step is to find a constraint that allows it to determine a value for a pre- viously unknown cell. For example, if it has discovered values v = 2 and i = 1, then it rises the constraint v = iR to calculate the value R = 2. In addition, the propa.gnt,or records If’s depclltloricy on 21, i and the constraitit 1~ -z ill. The newly recorded value ~lrny cnusc other conslrnints to trigger and more values to be deduced. Thus, constraints may be viewed as a set of conduits along which values can be propagated out locally from the inputs to other cells in the system. 
The dependencies recorded trace out a par- ticular path through the constraints that the inputs have taken. ,i synlptom is manifcstcd when two different values are deduced for the same cell (i.e., a logical inconsistency is identified). In this event dependencies are used to con- struct the conflict. Sometimes the constraint propagation process tcrmi- nates leaving some constraints unused and some cells unas- signed. This usually arises as a consequence of insufficient informatiou about device inputs. However, it can also =arise as the consequence of logical incompleteness in the propa- gator. In the circuit domain, the behavior of each component is modeled as a set of constraints. For example, in analya- ing analog circuits the cells represent circuit voltages and currents, the values are numbers, and the constraints are mathematical equations. In digital circuits, the cells repre- sent logic levels, the values are 0 and t, and the constraints are boolean equations. Consider the constraint model for the circuit of Fig. 1. There are ten cells: A, B, C, D, E, X, Y, 2, F, and G, five of which are provided the observed values: A = 3, B = 2, C = 2, D = 3 and E = 3. There are three lnultipliers and two adders each of which is modeled by a single constraint: MI : X = A x C, M, : Y = I3 x D, MS : 2 = CXE, AI : F = X+Y, and A2 : G = Y+Z. The following is a list of cleductions and dependencies that the constraint propagator generates (a dependency is indicated by (component : antecedents): X=6(MI:A=3,C=2) Y=6(M2:B=2,D=3) Z=6 (M3:C=2,E=3) F=12(Al:X=6,Y=6) G=12(AZ:Y=6,Z=6) A symptom is indicated when two values are detcrnlincd for the same cell (e.g., measuring F to be 10 not 12). Each symptom leads to new conflict(s) (e.g., in this example the symptom indicates a conflict (A,, MI, Mz)), This approach has some important properties. First, it is not necessary for the starting points of these paths to be inputs or outputs of the circuit. A path may begin at, any point in the circuit where a measurement has been taken. Scconcl, it is not necessary lo make any assumy- tions about Ilie direction that sigllnls flow tliroiigh conipo- ncnts. Tn most digital circuits a signal can only flow from inputs to outputs. For cxa~l~plc, a subtracI.or cannot bc constructed by sinlply reversing xi input ,and the output Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 137 of an adder since it violates the directionality of signal flow. However, the directionality of a component’s signal flow is irrelevant to our diagnostic technique: a component places a constraint between the values of its terminals which can be used any way desired. To detect discrepancies, infor- mation can flow along a path through a component in any direction. For example, although the subtractor does not function in reverse, when we observe its outputs we can infer what its inputs must have been. 11. Generalized Constraint Propagation Each step of constraint propagation takes a set of an- tecedent values and computes a consequent. We have built a constraint propagator within our inference architecture which explores minimal environments first. This guides each step during propagation in an efficient manner to in- crementally construct minimal conflicts and candidates for multiple faults. Consider our example. We ensure that propagations in subset environments are performed first, thereby guar- anteeing that the resulting supporting environments and conflicts arc minimal. We use 15, el, e2, . 
..I to represent the assertion x with its associated supporting environments. Before any measurements or propagations take place, given only the inputs, the data base consists of: [A = 3, {}I, [B = 2, {}], [[C = 2, {>], [[D = 3, -OD, and UE = 3, On- Observe that when propagating values through a compo- nent, the assumption for the component is added to the dependency, and thus to the supporting environment(s) of the propagated value. Propagating A and C through Ml we obtain: [[X = 6, {Ml}]. The remaining propa- gations produce: [[Y = 6, w2)n, uz = 6, w3n up = 12, {AI, wafdn, and [G = 12, (A2, M2, it&}]. Suppose we measure F to be 10. This adds [IF = 10, {}I to t,he data base. Analysis proceeds as follows (starting with the smaller assumption sets first): [[X = 4, {Al,M2}~, and [Y = 4, {Al,Ml}j. NOW the symptom between [[F = 10, {}I and [TF = 12, {Al, Ml,M2}~ is rec- ognized indicating a new minimal conflict: (Al, MI, M2). Thns the inference architecture prevents further propaga- tion in the environment {Al, Ml, Mz} and its supersets. The propagation goes one more step: [G = 10, {Al, AZ, MI, MS}]. There are no more inferences to be made. Next, suppose we measure G to be 12. Propaga- tion gives: [I2 = 6, {A2,M3}], [Y = 6, {&,~I~}], 12 = 8, (4, A2, MIIII, and [X = 4, {A 1, A2, &}]. The symp- tom “G = 12 not 10” produces the conflict (Al,A,, Ml, MS). The final data-base state is:5 A= 3,0 B= 2,{) c= 2,o D= 3,0 E= 390 F= lO,{} G= 12,{} X = 4, (AI, j&&G, Ad&) ww Y I= 4, {Al, M,} %{M2}{Az,M3) Z = 8, {&,~MI) 6,{M3}{A2,M2) This results in two minimal conflicts: (AI, 4, Mdf3) Note that at no point during propagation is effort wasted in constructing non-minimal conflicts. The algorithm discussed in section 6 uses the two min- imal conflicts to incrementally construct the set of mini- mal candidates. Given new measurements the propaga- tion/candidate generation cycle continues until the candi- date space has been sufficiently constrained 12. Connected Research Our approach has been completely implemented and tested on numerous examples. Our implementation con- sists of four basic modules. The first maintains the mini- mal supporting environments for each prediction and con- structs minimal conflicts. It is based on Assumption-Based Truth Maintenance [4]. Tl le second controls the inference such that minimal conflicts are discovered first and records the dependencies of inferences. It is based on the consumer and agenda architectures of [5]. The third is a general con- straint language based on the first two modules. The last module, the candidate generator, incrementally constructs the minimal candidates from the minimal conflicts. As all the work within the model-based paradigm, our approach presumes measurements and potential model- artifact differences are given. In [3] we exploit the frame- work of this paper in two ways to generate measurements which are information-theoretically optimal. First, the data structures constructed by our strategy (e.g., the data base state of Section 11) make it easy to consider and eval- uate hypothetical measurements. Second, as we construct all minimal environments, conflicts, and candidates, it is relatively straight forward to compare potential measure- ments (using probabilistic information of component fail- ure rates). The work presented here represents <another step to- wards Ihe goal of automatecl diagnosis, nevcrthcless there remains much to be done. 
Plans for the future include: 1) incorporating the predictive engine discussed in [14] in order to diagnose systems with time-varying signals and state, and 2) controlling the set of model-artifact differences being considered.

13. Related Work

This research fits within the model-based debugging paradigm: [1,2,3,6,8,9,11]. However, unlike [1,2,6,8,9], we propose a general method of diagnostic reasoning which is efficient, incremental, handles multiple faults, and is easily extended to include measurement strategies. Reiter [11] has been exploring these ideas independently and provides a formal account of many of our "intuitive" techniques of conflict recognition and candidate generation.

ACKNOWLEDGMENTS

Daniel G. Bobrow, Randy Davis, Kenneth Forbus, Matthew Ginsberg, Frank Halasz, Walter Hamscher, Tad Hogg, and Ramesh Patil provided useful insights. We especially thank Ray Reiter for his clear perspective and many productive interactions.

BIBLIOGRAPHY

1. Brown, J.S., Burton, R.R. and de Kleer, J., Pedagogical, natural language and knowledge engineering techniques in SOPHIE I, II and III, in: D. Sleeman and J.S. Brown (Eds.), Intelligent Tutoring Systems (Academic Press, New York, 1982) 227-282.
2. Davis, R., Shrobe, H., Hamscher, W., Wieckert, K., Shirley, M. and Polit, S., Diagnosis based on description of structure and function, in: Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, PA (August, 1982) 137-142.
3. de Kleer, J. and Williams, B.C., Diagnosing multiple faults, Artificial Intelligence (1986) forthcoming.
4. de Kleer, J., An assumption-based truth maintenance system, Artificial Intelligence 28 (1986) 127-162.
5. de Kleer, J., Problem solving with the ATMS, Artificial Intelligence 28 (1986) 197-224.
6. de Kleer, J., Local methods of localizing faults in electronic circuits, Artificial Intelligence Laboratory, AIM-394, Cambridge: M.I.T., 1976.
7. Doyle, J., A truth maintenance system, Artificial Intelligence 24 (1979).
8. Genesereth, M.R., The use of design descriptions in automated diagnosis, Artificial Intelligence 24 (1984) 411-436.
9. Hamscher, W. and Davis, R., Diagnosing circuits with state: an inherently underconstrained problem, in: Proceedings of the National Conference on Artificial Intelligence, Austin, TX (August, 1984) 142-147.
10. Mitchell, T., Version spaces: An approach to concept learning, Computer Science Department, STAN-CS-78-711, Palo Alto: Stanford University, 1978.
11. Reiter, R., A theory of diagnosis from first principles, Artificial Intelligence, forthcoming. Also: Department of Computer Science Technical Report 187/86 (University of Toronto, Toronto, 1985).
12. Steele, G.L., The definition and implementation of a computer programming language based on constraints, AI Technical Report 595, MIT, Cambridge, MA, 1979.
13. Sussman, G.J. and Steele, G.L., CONSTRAINTS: A language for expressing almost-hierarchical descriptions, Artificial Intelligence 14 (1980) 1-39.
14. Williams, B.C., "Doing Time: Putting Qualitative Reasoning on Firmer Ground," Proceedings of the National Conference on Artificial Intelligence, Philadelphia, Penn. (August, 1984).

Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 139
1986
109
372
ARTIFICIAL INTELLIGENCE AND DESIGN: A MECHANICAL ENGINEERING VIEW John R. Dixon Department of Mechanical Engineering University of Massachusetts Amherst, MA., 01003 ABSTRACT Most AI research into design has been based on or directed to the electrical circuit domain. This paper presents a mechanical engineer's view. Design of mechanical parts and products differs from design of electrical circuits in several fundamental ways: materials selection, sensitivity to manufacturing issues, non-modularity, high coupling of form and function, and especially the role of 3-D geometry. These differences, and the role of analysis in mechanical design, are discussed. A model for design is also presented based on the basically iterative nature of the design process. A brief summary of the related research at the University of Massachusetts into application of AI to mechanical design is included. INTRODUCTION "The proper study of mankind is the science of design." (Simon, 1969) Probably so, but since 1969 the subject has hardly been a major theme of AI research. The Handbook of AI barely discusses it. Now, however, AI has rediscovered design, especially engineering design (Mostow, 1985). There has, of course, been attention some paid to design by AI researchers over the intervening years. See, for example, Sussman, 1977, 1978; McDermott, 1978, 1981, 1982; deKleer and Sussman, 1978; Bennett and Englemore, 1979; Stefik et al, 1982, Brown, H. et al, 1983, Mitchell et al, 1983 ; Steinberg and Mitchell, 1985). The vast majority of this research is directed to or based on the domain of circuit design. Unfortunately, one cannot obtain an adequate general model of design, even engineering design, from the circuit domain alone. Recently some AI researchers have become interested in mechanical engineering design (Brown, D. C., and Chandrasakeran, 1983, 1984, 1985, 1986; Brown, D. C., 1985a, 1985b, 1985c; Brown, D. C. and Breau, 1986; Popplestone, 1984; Mittal, 1985). These initiatives into the application of AI in the mechanical domain illustrate very well not only that AI can contribute important perspectives to mechanical design, but also that studies of mechanical design can enrich the AI view of design generally. The goal of this paper is contribute a mechanical engineer's view (and model) of design to the AI discussion of design. The view of design and the model presented here are based on the author's engineering experience and also on recent research at the University of Massachusetts into the possible application of AI to mechanical design automation (Dixon and Simmons, 1984, 1985; Dixon et al, 1984, 1985; Dym , 1985; Simmons and Dixon, 1986; Kulkarni et al, 1985; Vaghul et al, 1985; Howe et al, 1986; Libardi et al, 1986; Luby et al, 1986). MECHANICAL DESIGN COMPARED WITH CIRCUIT DESIGN The Rutgers Workshop on which Mostow's article is based consisted of mostly electrical engineers and AI researchers. A workshop involving both mechanical engineers and AI researchers has since been held, with quite different results (Cole et al, 1985). Though it may be possible that at some (high) level of abstraction the design processes in the circuit and mechanical domains are similar, design in the two domains is so very different at the practicing level that we are not likely to discover the generalizations until we understand design much more fully in the two domains separately. There are some important ways that mechanical design and circuit design are alike. 
Both are engineering, and thus there are bodies of heuristics (e.g., "good design practices") as well as much reasoning from basic science principles. Both domains also use analysis methods of various degrees of sophistication to support design. Despite the similarities, there are four issues in mechanical design that make it fundamentally different from circuit design. These are: (1) the wide spectrum of material choices available to mechanical designers; (2) the critical role and often hyper-sensitive effect of manufacturing concerns on mechanical designs; (3) the non-modularity of mechanical designs; and (4) the intimate role of complex 3-D geometry in mechanical design. 872 / ENGINEERING From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Perhaps the most significant of these differences between mechanical design and circuit design is in the role of 3-D geometry. In addition to a material specification, a mechanical design & a description of a 3-D geometry. Is. Though Mostow's paper does not even mention geometry as an issue in developing better models of the design process, it is a critical central issue in mechanical design. Moreover, geometry, manufacturing, and function are highly coupled in mechanical design (Rinderle, 1986). Geometry is the natural language of mechanical design. Mechanical designers "think" in 3-D geometry. There is no mathematical connection between the mental image or concept of a design and its 3-D visual representation in a drawing. Whether on paper or on a CAD terminal, mechanical designers sketch, erase, and sketch again. A mechanical design is seldom the solution to a set of constrained equations; it is instead a representation -- a drawing -- of a 3-D object. It may be noted that three of the points of difference listed above -- materials, manufacturing, and geometry -- each constitutes a huge body of knowledge. There are also important interactions among these topics. Moreover, this is knowledge that is not naturally represented by equations. How shall materials be represented so that design programs can reason about them? How shall manufacturing processes and machine tools be represented so that design programs can be written that properly consider the manufacturability of designs as well as plan the production process? And especially, how shall the geometry of a design be represented so that design programs can reason about that geometry, relating it to both function and manufacturability? These are issues that distinguish mechanical design from circuit design, and which must be studied in addition to common issues if we are to develop adequate models of the engineering design process generally. THE ROLE OF ANALYSIS IN MECHANICAL DESIGN Before discussing a model of engineering design from a mechanical view, it is necessary to consider the role of analysis in mechanical design. In mechanical design, equations and mathematical analyses enter the design process only after a trial design has been developed. Analyses are used to simulate or predict the performance of a prospective design for its intended use. It is a fundamental intellectual error, therefore, to believe that analyses, by themselves, can produce designs. Analysis supports and assists design by providing useful information about how a proposed design will perform, but an analysis does not produce a design since there must already be a design in order for an analysis to be performed. 
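To make this point concrete, the following sketch (illustrative only; the design variables, analysis equations, and acceptance limits are invented and not taken from this paper) shows analysis acting purely as an evaluator inside the kind of iterative loop discussed below: a trial design must already exist before any performance can be computed about it.

def analyze(design):
    # Invented analysis equations standing in for real domain analyses
    # (closed-form equations, finite-element runs, etc.).
    return {"stress": 100.0 / design["width"],
            "cost": 3.0 * design["width"] + design["length"]}

def acceptable(performance):
    return performance["stress"] <= 25.0 and performance["cost"] <= 40.0

design = {"width": 2.0, "length": 10.0}   # an initial trial design must already exist
for _ in range(20):
    performance = analyze(design)         # analysis evaluates the design; it does not create one
    if acceptable(performance):
        break
    design["width"] += 0.5                # a heuristic redesign step guided by the evaluation
print(design, performance)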
Sometimes parts of a design problem can be formulated as optimization problems, and in these instances, sub-problem optimization methods are very useful. Seldom, however, is a whole, real mechanical design problem expressable as a mathematical optimization problem. The designer's lot will never be such an easy one. Hopes persist that design can somehow be done directly by some form of "analysis", that is, without iteration. I doubt it. The essential nature of design -- at least mechanical design -- is iterative. Therefore, to learn to construct programs that can design, we need to learn first how to guide' iterative processes intelligently so that acceptable designs can be produced efficiently. MODELLING THE MECHANICAL DESIGN PROCESS We need a model or models of the design process in order to formulate design problems, acquire and represent design knowledge, and to develop design inference engines. Most mechanical engineers accept that the basic nature of design is iterative. However, there are variations in the exact nature of how a design problem is to be formulated, and just what the process is in detail. We view design as a hierarchy of nested iterative processes of (1) decomposition and redecomposition, (2) specification and respecification, and (3) design and redesign. Figure 1 shows the decomposition aspect of this model as a simple tree. That is, node A designates a complex problem to be solved. Nodes B, C, and D represent a decomposition of the problem into sub-problems, and so on. Decomposition continues until sub-problem Figure I. Decomposition into Sub-Problems. APPLICATIONS / 873 size and complexity is reduced to the point where the problem can be managed intellectually without further decomposition. Usually this is a problem with a relatively small number of design variables, perhaps three to ten or twelve. These sub-problems are then solved (i.e., designs developed) by a process we call "redesign". We shall now discuss this redesign process in some detail and then return to the decomposition part of the model. Redesign. The redesign model of design is shown in Figure 2. A problem is specified in terms of problem parameters. An initial design procedure generates an initial trial design. (Seldom is this the final design. If it is -- that is, if the initial design procedure is so good that it produces acceptable designs, then the domain is of no further interest intellectually. Such procedures exist in some domains but we are not concerned with such well understood problems.) Next the trial design is evaluated; that is, analyzed to determine its expected performance in terms of performance parameters that may include cost, function, and manufacturability issues. Then a decision is made as to the trial design's acceptability. (Simon coined the word "satisficing" for this concept.) If the design is acceptable, the task is complete. If not, the design is redesigned and re-evaluated, and so on iteratively. Often in practice, redesign ultimately fails and the process returns to the initial step with the information that the problem requirements need to be changed, usually relaxed, in some manner. A great deal of mechanical design is done by this iterative redesign process. Despite its importance, however, it appears to me that many AI researchers tend to belittle or at least ignore, redesign. There are perhaps two reasons for this. 
One stems from an association of redesign with "generate and test" (Lindsay et al., 1980); the other from an association of redesign with with process of "debugging of almost right" solutions (Sussman, 1977). Generate and test & a rather poor process for design if what one means is to generate randomly and exhaustively. But this is not at all what is meant by redesign. In redesign, the analysis results and the reasons for unacceptability are used heuristically to guide the changes made in the design. The phrase "debugging of almost right solutions" also suggests something rather trivial. After all, if the design is already "almost" right, the really hard intellectual work has already been done, hasn't it? In mechanical design, usually not. When designing a refrigerator, one does not begin with the design of a tractor. When designing an automobile drive shaft, one does not start with an initial design of a fender. In all but the most novel and unstructured design situations (say, starting out to design the first Polaroid camera), getting an "almost" right solution is relatively easy, whereas getting rid of all the unacceptabilities and making all the tradeoffs is excruciatingly hard. In summary, for sub-problems that need no further decomposition to be managed, design is done by iterative redesign. It is an important intelligent process. "Domain Independent" Redesign. Our work with redesign has progressed to the point where we have a working prototype of a program (called Dominic) that designs redesign class problems in several domains (Howe et al, 1986). Dominic has designed successfully to date in four distinctly different domains, and other tests are in progress. Dominic is essentially a hill-climbing algorithm. In this sense it is similar to otimization techniques. It differs, however, in two respects. First, it is guided by heuristic domain knowledge obtained from a domain expert. Second, its input format is a kind of low level language more natural to design problem formulation than the formal mathematics of optimization methods. Dominic is neither a very strong nor a very weak problem solving method. It lies in-between these extremes. It works "generally", but only on a sub-class of design problems. It makes use of domain knowledge, but also possesses some general knowledge of how to go about solving problems in its class. It may be that such strong/weak methods can be a practical way to apply AI in design. 1 1 GET SPECIFICATIONS fails I REDESIGN Figure 2. The Redesign Model of Design. 874 / ENGINEERING Decomposition. We return now to the decomposition process that leads to the sub-problems solvable by redesign. A tentative architecture of a decomposition node is shown in Figure 3. A problem specification is received from above in the hierarchy. If the problem can be solved by redesign, this is done, and the results returned upward. If not, an intial decomposition is made. Using this decomposition, initial specifications are assigned to the sub-problems created. (This assignment of specifications is a key step; the sub-problem specs are, in fact design variables at this stage.) These problems are then passed to the modules below, which are similar in structure to the one being described. The results returned from the various sub-problems are then integrated and analyzed as a complete system. If the complete system result is acceptable, it is passed upward. If not, new sub-problem specifications are assigned, and the process repeated. 
If the respecifier must admit defeat, then a new decomposition may be tried. If the re-decomposer must admit defeat, the system reports failure up the line, and asks for some change in the overall problem assignment. It is to be noted that this model does not allow for passing of information or "constraints" between sub-problems in the hierarchy. Only chaos can result from such communication. Our model is autocratic; information is passed only up or down. What we do include, however, is a mechanism for each sub-problem to pass up a re-specification request. This information is then a part of the information used to determine what the next round of specification changes should be. In other words, the sub-problems can request a change that will affect the other sub-problems, but they cannot impose it. Only the module in a position to tradeoff competing requests has the power to impose or "propagate" constraints on sub-problems. This is as it must be to retain control of this very complex process. Others are also working on design models that decompose problems for solution. The most advanced is the excellent and useful work by (Brown and Chandrasakeran, 1986). GEOMETRY How shall we represent design geometries? Answering this question is key to our future ability to construct knowledge-based systems that can serve to integrate CAD, CAM, and engineering analysis (CAE). Since so much of our knowledge of manufacturing is currently expressed in terms of geometric features, the answer appears to be "In terms of features." But, what, exactly is a "feature", and how shall the desired features representations of designs be obtained? We have been addressing these questions about features in our research. So far, the definition of a feature is simply "any geometric form or entity whose presence or dimensions in a domain are germain to manufacturing evaluation or planning, or to automation of functional analyses". We have experimented with several different types of features in several domains COMMUNICATIONS INTERFACE I INITIAL DECOMP I c I I , fails 1 I c l I I INITIAL e RE- < ACCEPT? *SPECS OK DECOMP fails I OK I COMMUNICATIONS INTERFACE I TO SIMILAR SUBPROBLEM MODULES Figure 3. A Typical Decomposition Module. APPLICATIONS / 875 (extrusion, injection molding, and casting), and have constructed working prototype research programs that provide a "design-with-features" environment for designers (Dixon, 1985; Vaghul, 1985; Libardi, 1986; Luby, 1986). These programs create a features representations of the design, and use this representation to draw the part in wire frame or solid form. In the extrusion program, the features representation is then used to develop an automatic finite element analysis for stresses and deflections in a loaded extrusion. In the injection molding and casting programs, the representation of the geometry is used as a basis for on-line evaluation of the manufacturability of the in-progress designs. These programs function a bit like a manufactururing expert who is looking and commenting over the shoulder of the designer while he or she designs. The programs are simple; it remains to be seen whether adding complexity will create insurmountable difficulties. Others have been also been working on the features concept in various ways (Pratt, 1984; Latombe, 1978; Popplestone, 1984). 
Also, rather than designing with features as we are doing, some are attempting to extract or infer features from the points, lines, and surfaces representation created by existing CAD systems (Henderson, 1984). SUMMARY AND CONCLUSION The themes of this paper are: (1) that mechanical design is very different from electrical design, and must also be studied in order to obtain general models of the design process; (2) that the differences involve the degree of involvement in the two domains with materials selection, manufacturing processes, and especially geometry; (3) that good analysis is important for good design, but design is not and cannot be done by analysis alone; (4) that the design process is iterative; (5) that design can be modelled as iterative redesign inside iterative respecification inside iterative decomposition, but redesign and respecification are the most important; (6) that learning how to guide iterative processes is intellectually important; (7) that strong/weak methods can be developed for design problem solving; (8) that learning how to represent the geometry of designs is a key issue for applying AI to mechanical design; and (9) that most likely the way to represent design geometry is in terms of features. If this paper has expanded the reader's model of design by providing useful insight into the nature of the mechanical design process, it has served its purpose well. ACKNOWLEDGEMENT Research at the University of Massachusetts into the application of AI to mechanical design automation is sponsored in part by grants from General Electric. REFERENCES Bennett, J. S. and Englemore, , R. S. (1979). "SACON: A knowledge-Based Consultant for Structural Analysis", Sixth IJCAI, Palo Alto. Brown, D. C., (1985a) "Capturing Mechanical Design Knowledge", Proceedings ASME Computers in Engineering Conference, Boston, MA., August. Brown, D. C. (1985b) "Failure Handling in a Design Expert System" CAD Journal, Computer Aided Design, Vol 17, No 9, November. Brown, D. C. (1985c) "Capturing Mechanical Design Knowledge", ASME Computers in Engineering Conference, Boston, MA. August. Brown, D. C. and Breau, R, (19861, "Types of Constraints in Routine Design Problem-Solving", First International Conference on Application of AI to Engineering Problems, Southampton, England, April. Brown, D. C. and Chandrasakeran, B., (1983) "An Approach to Expert Systems for Mechanical Design", Proceedings IEEE Trends and Applications, Gaithersburg, MD. Brown, D. C. and Chandrasakeran, B. (1984) "Expert Systems for a Class Of Mechanical Design Activity, Proceedings IFIP WG5.2 Working Conference on Knowledge Engineering in Computer Aided Design, Budapest, Hungary, September. Brown, D. C. and Chandrasakeran, B. (1985) "Plan Selection in Design Problem Solving" AIS85, Warwick, England, April. Brown, D. C. and Chandrasakeran, B. (1986) "Knowledge and Control for a Mechanical Design Expert System", IEEE Computer, July. Brown, H., Tong, C., and Foyster, G. (1983) "Palladia: An Exploratory Environment for Circuit Design", Computer, Vol 16, No 12, December. Cole, J. H., Stall, H. W., Parunak, V. (1985) Machine Intellipence in Machine, Report No. 85-20, Industrial Technology Inst., Ann Arbor, MI Dixon, J. R. and Simmons, M. K. (1984) "Expert Systems for Design: Standard V-Belt Drive Design as an Example of the Design-Evaluate-Redesign Architecture", Proceedings ASME Computers in Engineering Conference, Boston, MA. August. Dixon, J. R. and Simmons, M. K. 
(1985) "Expert Systems for Mechanical Design: A Program of Research", ASME Paper No. 85-DET-78, Design Engineering Conference, Cincinnati, Ohio, September. Dixon, J. R., Simmons, M. K., and Cohen, P. R. (1984) "An Architecture for Applying Artificial Intelligence to Design", Proceedings IEEE Design Automation Conference, Albuquerque, NM, June. 876 / ENGINEERING Dixon, J. R., Libardi, E. C., Luby, S. C., Vaghul, M. V., and Simmons, M. K. (1985) "Expert Systems for Mechanical Design: Examples of Symbolic Representations of Design Geometries", in Applications of Knowledge-Based Systems to Engineering Analysis and Design, ASME Publication No. AD-lo, New York. de Kleer, J. and Sussman, G. J. (1978) "Propagation of Constraints Applied to Circuit Synthesis", MIT Artificial Intelligence Memeo 485. Dym, C. L. (1985) Applications of Knowledge-Based Systems to Ennineerinp and Design, Publication No. AD-lo, American Society of Mechanical Engineers, New York. Henderson, M. R. (1984) "Extraction of Feature Information From Three Dimensional CAD Data", Ph. D. Thesis, Purdue University, Lafayette, Indiana. Howe, A., Dixon, J. R., Cohen, P. R., and Simmons, M K. (1986) "Dominic: A Domain Independent Program for Mechanical Engineering Design", Proceedings First International Conference on Application of Artificial Intelligence to Engineering Problems, Southampton, England, April. Kulkarni, V. M., Dixon, J. R., Simmons, M. K., and Sunderland, J. E. (1985) "Expert Systems for Design: The Design of Heat Fins as an Example of Conflicting Sub-goals and the Use of Dependencies", Proceedings ASME Computers in Engineering Conference, Boston, MA., August. Latombe, J. (1976) "Artificial Intelligence in Computer-Aided Design: The TROPIC System" TR 125, Stanford Research Institute, February. Libardi, E. C., Dixon, J. R., and Simmons, M. K. (1986) "Designing With Features: Extrusions As AN Example" ASME Paper No. 86-DE-4 Design Engineering Conference, Chicago, March. Lindsay, R., Buchanan, B. G., Fiegenbaum, E. A., Lederberg, J. (1980) DENDRAL, McGraw-Hill, New York. Luby, S. L., Dixon, J. R., and Simmons, M. K. (1986) "Designing With Features: Creating and Using a Features Data Base for Evaluation of Manufacturing of Castings", Proceedings ASME Computers in Engineering Conference, Chicago, July. McDermott, D. (1978) "Circuit Design as Problem Solving" Artificial Intelligence and Pattern Recognition in Computer Aided Design, J. Latombe (Ed), North-Holland Publishing Company, Amsterdam. McDermott, J. (1981) "Domain Knowledge and the Design Process", Eighteenth Design Automation Conference, ACM/IEEE, Nashville, Tennessee, July. McDermott, J. (1982) "Rl: A Rule-Based Configurer of Computer Systems, Artificial Intellipence, Vol 19, No 1, September. Mittal, S, Morjaria, M., and Dym, C. L. (1985) "PRIDE: An Expert System for the Design of Paper Handling Systems", in Applications of Knowledge Based Systems to Engineering Analvsis and Design, Publication no. AD-lo, American Society of Mechanical Engineers, New York. Mitchell, T. M., Steinberg, L., Kedar-Cabelli, S Kelly, V., (1983) Shulman, J., and Weinrich, T., "An Intelligent Aid for Circuit Redesign" Proceedings Third NCAI, Washington, D. C. Mostow, J. (1985)s "Towards Better Models of the Design Process" AI Magazine, Vol 6, No 1. Popplestone, R. J. (1984) "The Application of Artificial Intelligence to Design Systems", Proceedings First International Symposium on Design and Synthesis (ISDS, Tokyo, Japan. Pratt, M. J. 
(1984) ffSolid Modelling and the Interface I Between Design and Manufacture", IEEE, CG and A. Rinderle, J. R. (1986) "Function, Form, Fabrication Relations and Decomposistion Strategies in Design" Proceedings ASME Computers in Engineering Conference, Chicago, July. Simmons, M. K. and Dixon, J. R. (1986) "Reasoning About Quantitative Methods in Engineering Design", in Coupling Svmbolic and Numerical Computing Svsdtems in Expert Systems, J. S. Kowalik (Ed), North-Holland, Amsterdam. Simon, H. A. (1969) "The Science of Design" in The Sciences of the Artificial, MIT Press, Cambridge, MA. Stefik, M., Bobrow, D., Brown, H., Conway, L. and Tong, C., (1982) "The Partitionong of Concerns in Digital System Design", Proceedings of the Conference on Advanced Research in VLSI, MIT, Cambridge, MA. Steinberg, L. I. and Mitchell, T. M. (1985). "Redesign System: A Knowledge-Based Approach to VLSI CAD", IEEE Design and Test of Computers, Vol 2, Number 1, February. Sussman, G. (1977), "Electrical Design: A Problem for Artificial Intelligence Research", Fifth IJCAI. Sussman, G. J., (1978) "Slices: At the Boundary Between Analysis and Synthesis" Artificial Intelligence and Pattern Recognition in Computer Aided Design, J. Latombe (Ed), North-Holland, Amsterdam. Vaghul, M. V., Dixon, J. R., and Simmons, M. K. (1985) "Expert Systems in a CAD Environment: Injection Molding as an Example", Proceedings ASME Computers in Engineering Conference, Boston, APPLICATIONS / 877
1986
11
373
PLAUSIBILITY OF DIAGNOSTIC HYPOTHESES: The Nature of Simplicity Yun Peng and James A. Reggia Department of Computer Science University of Maryland College Park, MD. 20742 Abstract In general diagnostic problems multiple disorders can occur simultaneously. AI systems have traditionally han- dled the potential combinatorial explosion of possible hypotheses in such problems by focusing attention on a few “most plausible” ones. This raises the issue of estab- lishing what makes one hypothesis more plausible than others. Typically a hypothesis (a set of disorders) must not only account for the given manifestations, but it must also satisfy some notion of simplicity (or coherency, or parsimony, etc) to be considered. While various cri- teria for simplicity have been proposed in the past, these have been based on intuitive and subjective grounds. In this paper, we address the issue of if and when several previously-proposed criteria of parsimony are reasonable in the sense that they are guaranteed to at least identify the most probable hypothesis. Hypothesis likelihood is cal- culated using a recent extension of Bayesian classification theory for multimembership classification in causal diag- nostic domains. The significance of this result is that it is now possible to decide objectively a priori the appropriateness of different criteria for simplicity in developing an inference method for certain classes of gen- eral diagnostic problems. 1. Diagnostic Problem-Solving During the last decade, a number of artificial intelli- gent (AI) systems have been developed that use an “abductive” * approach to diagnostic problem-solving [Pople73, 821 [Pauker76] [ReggiaSl, 831 [Miller821 [Joseph- son841 [Basili85]. These systems use an associative knowledge base where causal associations between disorders and manifestations are the central component, and inferences are made through a sequential hypothesize-and-test process. An important but as yet unresolved issue in abductive systems for diagnostic problem-solving is what characteristics make a set of disorders a plausible, “best”, or “simplest” explanatory hypothesis for observed manifestations. This issue has long been an important one in philosophy [Peirce55] [Tha- gard78] [Joseph son821 as well as in AI [Rubin75] [Pople73] [Pauker76] [Reggia83] [Josephson84], and is not only of relevance to diagnostic problem-solving but also to many other areas in AI (natural language processing, machine learning, etc. [Charniak85] [Reggia85a]). In particular, to * Abductive inference is ing to the best explanation.” s enerally defined to be “reason- or a given set of facts, and is distinguished from deductive and inductive inference (see [Reggia85a\). Peirce55 Thagard781 [Pople73] [Josephson82] [Charniak85] the authors’ knowledge, all previous suggestions of hypothesis plausibility have generally been proposed pri- marily on intuitive rather than formal grounds. Over the last few years we have been studying a formal model of diagnostic problem-solving referred to as parsimonious covering theory [Reggia83,85b] [Peng86a]. Recently, we have successfully integrated into this causal reasoning model the ability to calculate the relative likeli- hood of any evolving or complete diagnostic hypothesis [Peng86b]. As a result an objective measure (relative likelihood) can now be used to examine several previous subjective criteria of hypothesis plausibility. The rest of this paper examines this issue, and is organized as fol- lows. 
First, the parsimonious covering model of problem- solving, which is based on an underlying causal relation- ship and the use of probability theory in this context, are briefly summarized in Sections 2 and 3. Section 4 then examines several different criteria for hypothesis plausibil- ity used in AI systems with respect to whether they lead to the most probuble diagnostic hypothesis. Situations where the use of each criterion is/is not appropriate are identified. Section 5 concludes by summarizing the impli- cations of these results for AI system development. 2. Parsimonious Covering Theory Causal associations between disorders and manifesta- tions are the central element of diagnostic knowledge bases in many real-world systems, and parsimonious cov- ering theory is based on a formalization of causal associa- tive knowledge [Peng86a] [Reggia85b]. The simplest type of diagnostic problems in this model, and the one we use in this paper, is defined to be a 4-tuple P = <D,M,C,M+> where D = {di, . . . , d,} is a finite non-empty set of disorders; M = {ml, . . . , mk} is a finite non-empty set of manifestations (symptoms); C C D x M is a relation with domain(C) = D and range(C) = M; and M+C M is a distinguished subset of M. The relation C captures the intuitive notion of causal associations in a symbolic form, where <d; ,mj > E C iff “disorder di may cause manifestation mj “. Note that <di ,mi > E C does not imply that mj always occurs when di is present, but only that mj may occur. D, M, and C together correspond to the knowledge base in an abductive expert system. M+, a special subset of M, represents the features (manifestations) which are present 140 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. in a specific problem. Fig. 1 graphically illustrates the symbolic causal knowledge of a tiny abstract diagnostic problem of this type. dl d2 d3 d4 ci I ml m2 m3 m4 Fig. 1. An example of a very simple abstraction of a diagnostic problem. Two functions, “causes” and “effects”, can be defined in the above framework: for all mjE M, causes(mj ) = {di 1 <d i ,mj > E C}, representing all pos- sible causes of manifestation mj ; for all di E D, effects(di ) = {mj 1 <di ,rni > E C}, representing all manifestations which may be caused by di. A set of disorders D1 E D is then said to be a cover of a set of manifestations MJ 5 M if MJ & effects(D1), where by definition effects(D, ) = U effects(di ). Also, we define causes(MJ) = 4~ Dr U CiMlSeS( mj ). mjE M, In parsimonious covering theory, a diagnostic hypothesis must be a cover of M+ in order to account for the presence of all manifestations in M+. On the other hand, not all covers of M’ are equally plausible as hypotheses for a given problem. The principle of parsi- mony, or “Occam’s Razor”, is adopted as a criterion of plausibility: a “simple” cover is preferable to a “complex” one. Therefore, a plausible hypothesis, called an expfana- tion of M+, is defined as a parsimonious cover of M+, i.e., a set of disorders that both covers M+ and satisfies some notion of being parsimonious or “simple”. Since there is, in general, more than one possible explanation for M+, and one is often interested in all plausible hypotheses, the set of all explanations of M+ is defined to be the solution of a given problem. A central question in this theory is thus: what is the nature of “parsimony” or “simplicity”? Put otherwise, what makes one cover of M+ more plausible than another? 
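For concreteness, here is a small sketch (hypothetical code, not from the paper) of the four-tuple P = <D, M, C, M+> and the causes, effects and cover notions just defined. The particular causal links stand in for Fig. 1 and are reconstructed from the examples used later in the paper, so they should be treated as illustrative.

D = {"d1", "d2", "d3", "d4"}
M = {"m1", "m2", "m3", "m4"}
C = {("d1", "m1"), ("d1", "m2"), ("d1", "m3"),
     ("d2", "m1"), ("d2", "m4"),
     ("d3", "m3"), ("d3", "m4"),
     ("d4", "m2"), ("d4", "m4")}

def effects(di):
    return {m for (d, m) in C if d == di}

def causes(mj):
    return {d for (d, m) in C if m == mj}

def covers(DI, M_plus):
    # DI covers M+ iff every present manifestation has at least one cause in DI.
    return all(causes(mj) & DI for mj in M_plus)

M_plus = {"m1", "m3"}
print(covers({"d1"}, M_plus))          # True: d1 alone can account for m1 and m3
print(covers({"d2", "d3"}, M_plus))    # True
print(covers({"d2", "d4"}, M_plus))    # False: nothing in the set causes m3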
A number of different parsimony criteria have been identified both by us and by others doing related work: (1) Single-Disorder Restriction: a cover D1 of M+ is an explanation if it contains only a single disorder [Shu- bin82]. (2) Minimality: a cover D1 of M+ is an explana- tion if it has the minimal cardinality among all covers of M+, i.e., it contains the smallest possible number of disorders needed to cover M+ [Pople73] [Reggia81, 831. (3) Irredundancy: a cover DI of M+ is an explanation if it has no proper subsets which also cover M+, i.e., removing any disorder from D, results in a non-cover of M+ [Nau84] [Reggia84,85b] [Peng86a] [Reiter85] (deKleer861. (4) Relevancy: a cover D1 of M+ is an explanation if it only contains disorders in causes(M+), i.e., every di E DI must be causally associated with some mjE M+ [Peng86a]. Other criteria of parsimony are possi- ble. Assuming at least one manifestation is present, single-disorder covers are minimal. Furt,her, the set of all minimal covers is always contained in the set of all irredundant covers, which in turn is always contained in the set of all relevant covers [Peng86a]. example 1: In Fig. 1, let M+ = {m1,m3}. Then Di T: {dl) is a minimal cover of M+ because it alone covers m1,m3}. The cover D2 = {d,,d,} is irredundant but not minimal because neither d2 nor d, alone can cover {ml,m3}. The cover D3 = {d,,d,,d,} is relevant but redundant because it is a subset of causes({m i,m3}) and one of its proper subsets, namely {d2,d3}, is a cover of M+. Finally, D4 = {dl,d2,d3,d4} is an irrelevant cover of M+ because d 4fG causes( { m i,m 3}). The single-disorder restriction, while appropriate in some restricted domains [Shubin82] [Reggia85b], is obvi- ously not sufficient for general diagnostic problems where multiple, simultaneous disorders can occur (and thus we will not consider it any further). Minimality captures features and assumptions of many previous abductive expert systems. However, our experience has convinced us that there are clearly cases where minimal covers are not necessarily the best ones. For example, suppose that either a very rare disorder d i alone, or a combination of two very common disorders d2 and d3, could cover all present manifestations. If minimality is chosen as the par- simony criterion, d 1 would be chosen as a viable hypothesis while the combination of d2 and d3 would be discarded. A human diagnostician, however, may consider the combination of d2 and d, as a possible alternative. Minimality also suffers from various computational difficulties [Peng86a]. On the other hand, intuition also suggests that relevancy is too loose as a parsimony/plausibility criterion (in Fig. 1 there are only 2 irredundant covers, but 5 relevant ones among all 10 cov- ers of M+ = {m,,m3}). Therefore, solely on an intuitive basis, in our recent work irredundancy has been chosen as the parsimony criterion, and the notion of explanation equated to the notion of irredundant cover. Irredundancy handles situations like that in the above example and avoids some computational difficulties of minimality [Peng86a]. Similar notions are also used in related work by oth- ers, although with different emphasis. For example, in de Kleer’s work, the notion of “minimal conflict” of an abnormal finding corresponds to causes( mj ), while a “minimal candidate” corresponds to an irredundant cover of M+ in parsimonious covering theory [deKleer86]. 
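Continuing the sketch above (reusing D, C, causes and covers), the set-covering parsimony criteria just defined can be checked mechanically for the covers of example 1; again this is illustrative code rather than anything from the paper.

from itertools import chain, combinations

def all_covers(D, M_plus):
    subsets = chain.from_iterable(combinations(sorted(D), r) for r in range(1, len(D) + 1))
    return [set(s) for s in subsets if covers(set(s), M_plus)]

def is_relevant(DI, M_plus):
    # Every disorder in DI is causally associated with some present manifestation.
    return DI <= set().union(*(causes(mj) for mj in M_plus))

def is_irredundant(DI, M_plus):
    # No proper subset of DI still covers M+.
    return not any(covers(DI - {d}, M_plus) for d in DI)

def is_minimal(DI, M_plus, D):
    # DI has the smallest cardinality among all covers of M+.
    return len(DI) == min(len(c) for c in all_covers(D, M_plus))

M_plus = {"m1", "m3"}
cs = all_covers(D, M_plus)
print(len(cs))                                        # 10 covers in total
print(sum(is_relevant(c, M_plus) for c in cs))        # 5 relevant covers
print(sum(is_irredundant(c, M_plus) for c in cs))     # 2: {d1} and {d2, d3}
print(sum(is_minimal(c, M_plus, D) for c in cs))      # 1: only {d1} has minimal cardinality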
Simi- larly, in Reiter’s work, the notion of “minimal conflict set” corresponds to causes(mi ), “hitting set” to relevant cover, and “minimal hitting set” to irredundant cover [Reiter85]. One reason that we choose the term “irredun- dancy” rather than “minimality” is to avoid any confu- sion with the term “minimal cardinality”. 3. Hypothesis Likelihood An alternative approach to determining the plausibil- ity of a diagnostic hypothesis is to objectively calculate its probability using formal probability theory. The Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 1-i 1 difficulty with this approach in the past has been that general diagnostic problems are multimembership classification problems [Ben-Bassat80]: multiple disorders can be present simultaneously. A hypothesis D, = (4, dz, . . . , d, } represents the belief that disorders d 1 and d2 and , . . and d,, are present, and that all die DI are absent. Such problems are recognized to be very difficult to handle [Ben-Bassat [Charniak83]. Among other things, the set of 21Dl diagnostic hypotheses that must be ranked in some fashion is incredibly large in most real-world applications (e.g., in medicine, even very constrained diagnostic problems may have 50 5 IDI 5 100; see [Reggia83]). Recently we have been successful in integrating for- mal probability theory into. the framework of parsimoni- ous covering theory in a way that overcomes these past difficulties [Peng86b]. This is achieved as follows. In the knowledge base, a prior probability pi is associated with each di E D where 0 < pi < 1. A causal strength 0 < Cij 5 1 is associated with each causal association <di, mj > E C representing how frequently di causes mj . For any <di, mj > $ C, cij is assumed to be zero. A very important point here is that cij f P(mj 1 di). The probability cij = P(di causes mj Idi) represents how fre- quently di causes rni when di is present; the probability P(mj 1 di), which is what has been used in previous sta- tistical diagnostic systems, represents how frequently mj occurs when di is present. Since typically more than one disorder is capable of causing a given manifestation mj, P(mj I di ) 2 cij. For example, if di cannot cause mj at all, Cij = 0, but P( mj 1 di ) 2 0 because some other disorder present simultaneously with di may caGuse mj. By introducing the notion of causal strengths, and by assuming that disorders are independent of each other, that causal strengths are invariant (whenever di is present, it causes mi with the probability cii regardless of other disorders that are present), and that -no manifes- tation can occur without being caused by some disorder, a careful analysis derives a formula for P(DI I M+), the posterior probability of any DI given the presence of any M+, from formal probability theory. Here DI, representing a hypothesis, denotes the event that all disorders in DI are present and all other disorders absent, while M+, representing the given findings, denotes that all manifesta- tions in M+ are present and all others absent [Peng86b]. Specifically, we have proven that manifestations are independent under a given D1, and that P(mj ]DI) = 1 - n (1 - cij ) for mj E M, D, C D. Then by Bayes’ 4 EDI theorem, it is easy to show that n (l- Pi) P(DI 1 M+) = dZED . WI, M+) PI p(M+) where n (1 - Pi ) / P(M+) is a constant for all DI given d, ED any M+. L(D[, M+), called the relative likelihood of DI given M+, consists of three components: L(b,M+)= LQh,M+). 
L2(b,M+) .L3(D,, M+), 12al where the first product L(DI 7 Mt) = 11 P(mj I DI 1 Wl,E M+ informally can be thought of as a weight reflecting how likely D1 is to cause the presence of manifestations in the given M+; the second product LPI, Mt) = - IT Rw I DI) m,~ M-M+ =rI II t1 - cd 1 d, E D, ml Eeffects(d, )-M+ PC1 can be viewed as a weight based on manifestations expected with D1 but which are actually absent; and the third product L3m'M+) = dtpD, (1 PiPi) represents a weight based on prior probabilities of D, [Peng86b]. Note that each of these products involves only probabilistic information related to di E DI and mj E M+ instead of the entire knowledge base. For this reason L(DI, M+) is computationally very tractable. Eqs 1 and 2a - d make it possible to compare the relative likelihood of any two diagnostic hypotheses D, and DJ using WI I M+) L(DI, M+) P(DJ 1 M+) = L(DJ, M+) ’ PI Before we use this objective measure to examine various subjective notions of plausibility, a brief example may be helpful. example 2: Let the following probabilities be assigned to the problem given in Fig. 1: p1 = .Ol p2 = .l p3 = .2 p4 = .2 Cl1 = .2 c 12 = .8 c 13 = .l c 14 =o c21 = .9 C22 = 0 C23 = 0 c24 = .3 c31 = 0 c32 = 0 c33 = .9 c34 = .2 c41 = 0 c42 = .5 c43 = 0 C44 = .8 Let Mt = {ml,m3}. Th en the relative likelihood of three covers of M+, {d,}, {d2,d3}, and {dl,d2,d3}, are calculated as follows. L,({d,}, {m1,m3}) = c 11’ c 13= .2 . .l = .02 L(Wh hmd) = (1 - c 12).(1 - c 14) = (1 - .8)*1 = .20 Pl L3(Wj -hmd) = - = 1 - Pl .Ol. Similarly, W&J& 1 ml,m3}) = (1 - (1 - c2J(l - c31)) . (1 ~ (1 - c&*(1 - Cam)) = .9 . .9 = .81 Lz({d,,d,}, {ml,m3}) = (1 - ~24) (1 - ~34) = .7 . .8 = .56 L(&,d& hm3)) = p2 * P3 l- P2 - = .028. Similarly, l- P3 Ll({dl,d2,d3}, {ml,md) = (1 - (1 - cdl - cd1 - ~31)) . (1 ~ (1 - c 13).(1 - ~23).(l - ~33)) = .84 L2({d1,d2,d3}, {ml,m3}) = (1 - C 12) ' (1 - C 24) ' (1 - C34) = .ll UbW,J,L { ml,m3}) = .00028. Thus, WU { m1,m3}) = .00004, WW3h hd) = .013, and WW2rd31, 1 mlrm3}) = .000026, by Eq. 2a. 142 / SCIENCE 4. Hypothesis Plausibility As noted earlier, parsimonious covering theory (as well as the work of others cited earlier) captures the basic notion used in many abductive problem-solvers that a set of disorders DI is an “explanation” (plausible hypothesis) for M+ if (1) D, covers M+, and (2) I)h,i~ “parsimonious”. We now examine intuitive/subjective criteria using the measure L(D,, M+) given above, focusing on the question of when a set of parsimonious covers includes the most probable cover. First, suppose a hypothesis D, & D is not a cover of M+. Then there exists at least one present manifestation, say mj E M+, that is not covered by DI, i.e., for all 4~ D,, <mj ,d; > $ C SO cij = 0. Then, L,(DI, M+) = 0 and hence L(DI, M+) = 0 (by Eqs. 2b and 2a). That is, any DI & D which does not cover M+ will have zero relative likelihood, and P(D, ) M+) = 0. It thus follows that any most likely set of disorders D, must be a cover of M+, and that in search for plausible hypotheses only those sets that are covers of M+ need to be considered (an important savings because usually a large number of DK in zD are not covers). The more difficult issue in hypothesis evaluation, however, has been precisely defining what is meant by the “best” or “most plausible” explanation for a given set of facts [Thagard78] [Josephson821 [Reggia85c] [Peng86a]. 
In th e context of diagnostic problem-solving, it seems reasonable to correlate such subjective and ill- defined concepts with likelihood, i.e., to prefer diagnostic hypotheses that are more likely to be true based on their posterior probabilities. If one accepts P(DI ] M+) as a measure of the plausibility of DI, it then becomes possi- ble to objectively analyze the conditions under which different criteria of parsimony seem plausible. Three such criteria were defined in section 2, namely, relevancy, irredundancy, and minimality, and we now wish to con- sider if and when these criteria identify the most prob- able diagnostic hypothesis. Let DIE D be a cover of M+ in a diagnostic prob- lem P = <D,M,C,M+>. For any dk E D - D1, it follows from Eqs. 2b - d that Ldb W >, M+) = LP,, M+) . I-I (l- ckj + h) ml Eeffects(dk )nM+ Pal where P(mj]Dr)=l- n (I-cij)#O for all mjE d,E DI M+ since D, covers M+. For bid& U{4 h M+) and L3(DIU{ dk }, M+), it is similarly the case that L,(b U{ d,, 1, M+) = L(D, 9 M+). n (1 - ckj I- m, Eeffects(dc )-M+ WI LdJh u@k 1, M+) = LPI, M+) . e- 144 Eqs. 4a - c’ directly support the analysis of the three types of parsimony in question by permitting the direct comparison of L(D, , M+) and L(DI u{dk }, M+). relevant covers: Let D1 be a relevant cover of M+, so by definition D, covers M+ and DI E causes(M+). Let dk $ causes(M+), i.e., dk is irrelevant to M+, so dk $ DI . Then, DI U{ dk } is an irrelevant cover of M+. For such a dk , all of its manifestations are known to be absent, so using Eqs. 4a, b and c, it follows from the preceding that L(& ‘Jbi 1, M+) = ( n Pk I@,, M+) (l - ckl 1) . - m, Eeffects( dt ) 1 - Pk because L,(D1 u{dk }, M+) = L1(D1, M’). In most real world diagnostic problems, pk is gen- erally very small. For example, in medicine pk < 10-l even for very common disorders in the general population, such as a cold or the flu, and is much much smaller (e.g., lo-‘) for rare disorders. Thus, pk /(l - pk) << 1 usually. The product of (1 - ckl )‘s is also less than 1, and is often much less since it is a product of numbers less than one. Thus, in most applications, L(D,u {dk }, M+)<<L(D,, M+) making an irrelevant cover much less likely than any relevant cover it contains. This effect is magnified as a cover becomes “more irrelevant”, i.e., as additional irrelevant disorders dl are included. Thus, generally, it is only necessary to generate relevant covers as hypotheses for which L(D,, M+) is calculated, and in most real world problems this represents an enor- mous computational savings (typically most covers are irrelevant). The only exception would occur when pk is fairly large, and dk has few, weakly causal associations with its manifestations. In particular, L(D,u {dk }, M+) would exceed L(DI, M+) where dk was an irrelevant disorder only if pk > 1 / (1 + n (1 - ck.1 )) > 0.5, m,~effects(d~) a distinctly atypical situation as noted earlier. An interesting consequence of this result is that if M+ = 0, since 0 is the only relevant cover of such a M’, the pro- babilistic causal model generally entails “no disorders are present” as the only reasonable explanation, provided that pi 5 0.5 for all di E Dr. This is consistent with parsi- monious covering theory and with intuition. irredundant covers: If DI is an irredundant cover of M+, then by definition no proper subset of DI covers M+. For dk $! D1 but dk E causes(M+), D,u {dk ) is a redundant but relevant cover of M’. From- Eqs. 4a - c, LdDdJ{dk >I M+) > 1 and L,(Dd{dk h M+) < LIPI, M+) - L,fD, . M+I - 1. 
If pk << 1, then Ls(b U{dk >t M+) -1’ ’ Pk ’ LOI, M+) << 1 - Pk general it is likely that the decrease in LZ and L3 by adding dk will compensate for the increase because pk is typically small. As example 2 although adding d, into irredundant cover bLd,) increases L, from .81 to .84, it reduces L2 from .56 to .ll and L3 from .028 to .00028, thus making the redundant but relevant cover {dl,d2,d3} much less likely than the irredundant cover {d,,d,} (.000026 vs. .013). Therefore, if the prior probabilities pi << 1 for all di E D as in many applications, the most probable covers of M+ are likely to be irredundant covers, consistent with intuitive arguments made in the past [Nau84] (Reggia851 [Peng86a] [Reiter85] [deKleer86). 1. In caused in L1 shows, However, more care must be applied in restricting hypothesis generation to just irredundant covers. A care- ful analysis of Eqs. 4a - c should convince the reader that a redundant but relevant cover D,u {dk } might Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 143 occasionally be more likely than D, if dk is fairly com- mon and ckj >>P( mj 1 D, ) for some mj E M+. This is an intuitively reasonable result, and it represents an insight concerning the nature of “parsimony” that was only recognized after developing the probability calculus sum- marized in Section 3. However, even in the situation where some redundant cover is more irredundant cover it contains, such a probable than an redundant cover might still be less probable’ than the most probable irredundant cover. For instance, in example 2, a redun- dant cover {dl,dt} has relative likelihood I,( { d 1, d 3}, {m i,me}) = .000064 which is greater that of {d 1}, but still less than that of { d,,d,} which is an irredundant cover. minimal covers: It is possible to identify situations where minimal cardinality is a reasonable criterion for hypothesis generation. For example, if, for all d;E D, the prior probabilities are pi << 1 and are about equal, and the Cij 'S are fairly large in general, then a careful analysis of Eqs. 4a - c shows that the most probable cov- ers of M+ are likely to be minimal covers. In this situa- tion, the ratio between L(D1, M+) and L(D,, M+) for two different covers D, and D, of M+ will be dominated by L(DJ ,- M+) - the ratio = (pi) ID,1 - IDfI which LADI, M+) l - Pi would be very much smaller than 1 if 1 DI 1 < 1 DJ 1 . Unfortunately, in many real-world diagnostic situations the assumptions needed to make minimality a useful par- simony criterion are violated. In medicine, for example, prior probabilities among diseases and causal strengths vary by as much as 106, and therefore minimality is gen- erally not a reasonable criterion to adopt to limit hypothesis generation. 5. Discussion BY applying a form of Bayesian classification extended to work in th e framework of parsimonious cov- ering theory, we have been able to examine various intuitive/subjective criteria for hypothesis plausibility in an objective fashion. Consistent with intuition and con- cepts in parsimonious covering theory, probability theory leads to the conclusion that a set of disorders must be a cover to be a plausible hypothesis. Further, conditions can now be stated (Section 4) for when various criteria of “simplicity” are reasonable heuristics for judging plausibil- ity. For example, minimal cardinality is only appropriate to consider when all disorders are very uncommon and of about equal probability, and causal strengths are fairly large. 
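The likelihood comparisons quoted in example 2 and used in the argument above can be reproduced with a short sketch based on the L1, L2, L3 factorization of Section 3; the prior probabilities and causal strengths are those of example 2, and the code itself is illustrative rather than the authors'.

p = {"d1": 0.01, "d2": 0.1, "d3": 0.2, "d4": 0.2}
c = {("d1", "m1"): 0.2, ("d1", "m2"): 0.8, ("d1", "m3"): 0.1,
     ("d2", "m1"): 0.9, ("d2", "m4"): 0.3,
     ("d3", "m3"): 0.9, ("d3", "m4"): 0.2,
     ("d4", "m2"): 0.5, ("d4", "m4"): 0.8}
M = {"m1", "m2", "m3", "m4"}

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def p_mj_given(DI, mj):
    # P(mj | DI) = 1 - product over di in DI of (1 - c_ij)
    return 1.0 - prod(1.0 - c.get((di, mj), 0.0) for di in DI)

def likelihood(DI, M_plus):
    L1 = prod(p_mj_given(DI, mj) for mj in M_plus)                    # present manifestations
    L2 = prod(1.0 - c[(di, mj)] for di in DI
              for mj in M if (di, mj) in c and mj not in M_plus)      # expected but absent
    L3 = prod(p[di] / (1.0 - p[di]) for di in DI)                     # prior-probability weight
    return L1 * L2 * L3

M_plus = {"m1", "m3"}
for DI in [{"d1"}, {"d2", "d3"}, {"d1", "d2", "d3"}]:
    print(sorted(DI), round(likelihood(DI, M_plus), 6))
# roughly 0.00004, 0.0126 and 0.000026, matching the .00004, .013 and .000026
# quoted in example 2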
If some disorders are relatively much more common than others, or if causal strengths in some cases are fairly weak, using minimal cardinality as a heuristic to select plausible diagnostic hypotheses is inadequate. In this latter situation, typical of most real-world problems, the criterion of irredundancy may be appropriate. Irredundancy is generally quite attractive as a plausi- bility criterion for diagnostic hypotheses, and a formal algorithm (with proof of its correctness) for generating all irredundant covers of a set of given manifestations M+ has recently been described [Peng86a]. Unfortunately, there are two difficulties with directly generating the set of all irredundant covers for consideration as diagnostic hypotheses. First, this set may itself be quite large in some applications, and may contain many hypotheses of very low probability. Second, and more serious, it may still miss identifying the most probable diagnostic hypothesis in some cases (see Section 4). This latter difficulty is an insight concerning plausibility criteria that has not been previously recognized. Fortunately, both difficulties are surmountable. A heuristic function based on a modification of L(D[, M+) can be used to guide an A* -like algorithm to first locate a few most likely irredundant covers for M+. Then, a typically small amount of additional search of the “neigh- borhood” of each of these irredundant covers can be done to see if any relevant but redundant covers are more likely. An algorithm to do this and a proof that it is guaranteed to always identify the most likely diagnostic hypothesis has been presented in detail elsewhere [Peng86b]. There are a number of generalizations that could be made to the results presented in this paper, and we view these as important directions for further research. Our use of Bayesian classification with a causal model assumed that disorders occur independently of one another. In some diagnostic problems this is unrealistic, so a logical extension of this work would be to generalize it to such problems. Some work has already been done along these lines in setting bounds on the relative likelihood of disorders with Bayesian classification [Cooper84]. In addi- tion, we have adapted only one method of ranking hypotheses (Bayes’ Theorem) to work in causal domains involving multiple simultaneous disorders. It may be that with suitable analysis other approaches to ranking hypotheses could also be adopted in a similar fashion (e.g.7 Dempster-Shafer theory [Dempster68] [Shafer76]). Some initial work along these lines with fuzzy measures has already been done [Yager85]. Supported in part by ONR award N00014-85-K0390 and by NSF Award DCR-8451430 with matching funds from Software A&E, AT&T Information Systems, and Allied Cor- poration Foundation. PI PI PI M REFERENCES Basili, V., and Ramsey, C., “ARROWSMITH - P: A Prototype Expert System for Software Engineering Management” , Proc. Expert Systems in Government Symposium”, Karna, K., (ed.), Mclean, VA, 1985. Ben-Bassat, M., et al, “Pattern-Based Interactive Diagnosis of Multiple Disorders: The MEDAS Sys- tern”“” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2, March 1980, pp. 148-160. Charniak, E., “The Bayesian Basis of Common Sense Medical Diagnosis”, Proc. of National Conference on Arti&cial Intelligence, AAAI, 1983, pp. 70-73. Charniak, E., and McDermott, D., Introduction to Artificial Intelligence, Addision-Wesley, Reading, MA., 1985, chapters 8, 10. 
[5] Cooper, G., NESTOR: A Computer-Based Medical Diagnostic Aid That Integrates Causal and Probabilistic Knowledge, STAN-CS-84-1031 (Ph.D. Dissertation), Dept. of Computer Science, Stanford University, Nov. 1984.
[6] de Kleer, J., and Williams, B., "Reasoning about Multiple Faults", submitted, 1986.
[7] Dempster, A., "A Generalization of Bayesian Inference", Journal of Roy. Statis. Soc., Ser. B30, 1968, pp. 205-247.
[8] Josephson, J., Explanation and Induction, Ph.D. Thesis, Dept. of Philosophy, Ohio State Univ., 1982.
[9] Josephson, J., Chandrasekaran, B., and Smith, J., "Assembling the Best Explanation", IEEE Workshop on Principles of Knowledge-Based Systems, Denver, CO, Dec. 1984.
[10] Miller, R., Pople, H., and Myers, J., "INTERNIST-1, An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine", New England Journal of Medicine, 307, 1982, pp. 468-476.
[11] Nau, D., and Reggia, J., "Relationship Between Deductive and Abductive Inference in Knowledge-Based Diagnostic Problem Solving", Proc. First Intl. Workshop on Expert Database Systems, Kerschberg, L. (ed.), Kiawah Island, SC, Oct. 1984, pp. 500-509.
[12] Pauker, S., Gorry, G., Kassirer, J., and Schwartz, M., "Towards the Simulation of Clinical Cognition", Am. J. Med., 60, 1976, pp. 981-996.
[13] Peirce, C., Abduction and Induction, Dover, 1955.
[14] Peng, Y., A Formalization of Parsimonious Covering and Probabilistic Reasoning in Abductive Diagnostic Inference, Technical Report TR-1615 (Ph.D. Dissertation), Dept. of Computer Science, University of Maryland, Jan. 1986a.
[15] Peng, Y., and Reggia, J., "A Probabilistic Causal Model for Diagnostic Problem-Solving", submitted for publication, 1986b.
[16] Pople, H., "On the Mechanization of Abductive Logic", Proc. of International Joint Conference on Artificial Intelligence, IJCAI, 1973, pp. 147-152.
[17] Pople, H., "Heuristic Methods for Imposing Structure on Ill-structured Problems: The Structuring of Medical Diagnostics", Artificial Intelligence in Medicine, Szolovits, P. (ed.), 1982, pp. 119-190.
[18] Reggia, J., Knowledge-Based Decision Support Systems: Development Through KMS, Technical Report TR-1121 (Ph.D. Dissertation), Dept. of Computer Science, University of Maryland, Oct. 1981.
[19] Reggia, J., Nau, D., and Wang, P., "Diagnostic Expert Systems Based on a Set Covering Model", Int. J. Man-Machine Studies, Nov. 1983, pp. 437-460.
[20] Reggia, J., and Nau, D., "An Abductive Non-Monotonic Logic", Proc. Workshop on Non-Monotonic Reasoning, AAAI, Oct. 1984, pp. 385-395.
[21] Reggia, J., "Abductive Inference", Expert Systems in Government Symposium, Oct. 1985a, pp. 484-489.
[22] Reggia, J., Nau, D., Wang, P., and Peng, Y., "A Formal Model of Diagnostic Inference", Information Sciences, 37, 1985b, pp. 227-285.
[23] Reiter, R., "A Theory of Diagnosis from First Principles", TR-187/86, Dept. of Computer Science, University of Toronto, Dec. 1985.
[24] Rubin, A., "The Role of Hypotheses in Medical Diagnosis", Proc. of International Joint Conference on Artificial Intelligence, IJCAI, 1975, pp. 856-862.
[25] Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.
[26] Shubin, H., and Ulrich, J., "IDT: An Intelligent Diagnostic Tool", Proc. National Conference on Artificial Intelligence, AAAI, 1982, pp. 290-295.
[27] Thagard, P., "The Best Explanation - Criteria for Theory Choice", Journal of Philosophy, 75, 1978, pp. 76-92.
[28] Yager, R., "Explanatory Models in Expert Systems", Int. Journal of Man-Machine Studies, 23, 1985, pp. 539-549.
Doing Time: Putting Qualitative Reasoning on Firmer Ground
Brian C. Williams
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139
(williams%mit-oz@mit-mc.arpa)
Abstract
Recent work in qualitative reasoning has focused on predicting the dynamic behavior of continuous physical systems. Significant headway has been made in identifying the principles necessary to predict this class of behavior. However, the predictive inference engines based on these principles are limited in their ability to reason about time.
This paper presents a general approach to behavioral prediction which overcomes many of these limitations. Generality results from a clean separation between principles relating to time, continuity, and qualitative representations. The resulting inference mechanism, based on propagation of constraints, is applicable to a wide class of physical systems exhibiting discrete or continuous behavior, and can be used with a variety of representations (e.g., digital, quantitative, qualitative or symbolic abstractions). In addition, it provides a framework in which to explore such tasks as prediction, explanation, diagnosis[4], and design.
I begin with a summary of current qualitative reasoning systems and their limitations. Based on these limitations, a more robust language for describing behavior over time is presented. Next, an inference mechanism, referred to as a Temporal Constraint Propagator (TCP), is introduced; TCP predicts the behavior of the desired class of systems in terms of the behavioral language. Finally, the power of this approach is demonstrated with an example taken from qualitative reasoning.
1 Introduction
The physical world around us is continually changing. Thus in order for an agent to make intelligent decisions about his interaction with the surrounding environment he needs to be able to predict the effects of his actions and of changes he observes. Recent work in qualitative reasoning[9,10,6,5,3] has focused on predicting the dynamic behavior of continuous physical systems (e.g., predicting fluid flow, pressure stability or mechanical oscillations). Significant headway has been made in identifying the principles necessary to predict this class of behavior. However, the predictive inference mechanisms based on these principles exhibit a number of severe limitations, such as 1) forcing one to make unnecessary temporal distinctions, 2) overrestricting the language used for describing temporal behavior, 3) performing weak temporal inference, 4) constructing incomplete justifications, and 5) making irrelevant domain restrictions.
2 Qualitative Reasoning In A Nutshell
Given a description of a physical system and its initial conditions, qualitative analysis typically involves 1) describing the temporal behavior of the system's state variables, in terms of a particular qualitative representation, and 2) explaining how this behavior came about. The description of a physical system consists of a set of state variables (e.g., force and acceleration) and a system of equations, parameterized by time, which describe the interactions between these variables (e.g., f(t) = ma(t)). A qualitative representation divides the range of values a quantity can take into a set of regions of interest (e.g., positive, negative and zero).
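As a quick illustration of this sign-based representation, a numerical sample can be quantized into the regions of interest as follows. This sketch is mine, not the paper's; the function name and the epsilon tolerance are illustrative assumptions.

```python
# Minimal sketch (not from the paper): quantizing a real value into the
# qualitative regions +, 0, - described above.

def qualitative_value(x, epsilon=1e-9):
    """Map a real number onto the regions of interest +, 0, -."""
    if x > epsilon:
        return "+"
    if x < -epsilon:
        return "-"
    return "0"

# Example: with m positive, f(t) = m * a(t) implies [f] = [a] in this algebra.
assert qualitative_value(3.2) == "+"
assert qualitative_value(0.0) == "0"
assert qualitative_value(-0.5) == "-"
```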
representation selected depends on properties of the goals of the analysis. The qualitative value then the region it is in. The particular the domain and of a quantity is The behavior of the system can be viewed in terms of a qual- itative state diagram, where each state describes the qualitative value of every state variable in the system.’ The behavior of the system over time can be viewed as a particular path through this state diagram. Each state along this path represents an interval of time over which the system’s state variables maintain their values. The duration of this interval is dictated by principles involving continuity and rates of change. [9] This paper presents a general approach to behavior predic- tion which overcomes many of these limitation. Generality results from a clean separation among principles relating to time, con- tinuity, and qualitative representations. The resulting inference mechanism, based on propagation of constraints[8], is applicable to a wide class of physical systems exhibiting discrete or contin- uous behavior, and can be used with a variety of representations This work was supported in part by an Analog Devices Fellowship, and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505. AAAI-86 National Conference on Artificial Intelligence We say that the system changes state whenever any state variable changes its qualitative value. The values in the next state are then determined by 1) identifying those quantities which cannot change value (e.g., if Q is positive in a particular state and its derivative is positive or zero then it will remain positive in the next state), and 2) propagating the effects of those quantities that are known to change. The qualitative reasoning system also keeps track of the reason for every deduction in 1) or Z), using the record, among other things, to generate explanations (e.g., “an increase in force causes the mass to accelerate”). ‘The process of constructing a qualitative state diagram is called Envisionment [3]. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 1 OS From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. 3 Limitations of the Existing Approach “Fred was born in 1910 and raised in Montana, went to school in Massachusetts from 1928 to 1936, and then spent the remain- der of his life’in Alaska, where he died in 1980.” In this example The above approach has been adequate for describing the behav- ior of a number of simple systems such as pressure regulators, harmonic oscillators, RC networks and rudimentary plumbing. However, the approach has a number of limitations which make “uniform behavior” means Fred’s state of residence In general, what we mean by “uniform behavior” is unchanged. determined 1s it difficult to use for analyzing plex behaviors. A few of these cial difficulties. large systems and describing com- limitations will illustrate the cru- by particular properties of interest based on our analysis goals. For example, we may be interested in the interval over which a state variable maintains a particular value, is bounded within a certain region, or is sinusoidal. One limitation arises from the fact that a state-based ap- preach imposes a total ordering on events. That is in order to describe a system’s behavior as a sequence of states, we must specify the value of every variable at every point in time. The re- sult is a total ordering on events. 
This need to specify the value of every variable in turn forces the inference engine to acquire and manipulate a significant number of irrelevant relations. For example, suppose we are interested in studying the behavior of two bears, a panda bear and a polar bear living oceans apart. Specifically we are interested in how a bear’s sleep cycle (i.e., when they wake and sleep) affects its eating habits. In a state- based approach we are required to determine the order in which the bears fall asleep. However, this ordering doesn’t affect either bear’s eating habits since they will never interact. If instead we were studying one hundred bears living in separate remote areas of the globe then the bedtime of every bear would have to be ordered resulting in 10,000 irrelevent orderings! If a particular The description of a state variable’s behavior over time is referred to as a value history, or simply history. A history is a contiguous, non-overlapping sequence of interval/value pairs, called episodes. The time interval associated with each episode, e, is referred to as the episode’s temporal extent, and is bounded by the end points t-(e) and t+(e). In the above example, the value history for “Fred’s state of residence” is composed of three episodes, the first being “Montana from 1910 to 1928.” We say that histories use a qualitative representation for time because they break the time line into a set of regions of interest. In this case a region of interest is an interval of uniform behavior. This model of behavior is very general, permitting a variety of dis- Crete and continuous temporal further in Sections 10 and 11. representations. This is discussed To avoid the limitations noted fail to satisfy this property. For example, instead of saying “Fred in Section 3, a history must was in Montana from 1910 to 1928,” we could say, “Fred was avoid introducing distinctions that are irrelevant in Montana during 1910,1911, 1912, . . . 1928.” In this case to the analysis. the boundaries between the intervening years are irrelevant and There are many ways of describing a particular behavior which obscure the description. The case becomes absurd if we enumer- ate the same description in terms of days, minutes, or seconds. Nevertheless, incorporating irrelevant temporal distinctions is a ordering cannot be determined (as it is often the case in quali- tative reasoning), we must split cases, creating an explosion in A second limitation is that the lack of an explicit representa- tion for time restricts the class of analyzable behaviors. Typically the number of interpretations. we only specify an initial state of the system. Without an explicit representation for time, there is no easy way to describe inputs Even in the above example the that vary (e.g., number of interpretations is clearly unbearable. an external clock in a digital circuit). Further- more, the lack of an explicit representation of temporal relations prevents adequate reasoning about durations, delays and feed- The additional relations also obscure the resulting behaviors back. Finally, it is difficult to change the model for time or the temporal relations allowed without changing the underlying qual- and dependencies, minimizing their utility for explanation, diag- itative reasoning mechanism. A solution to the last two problems is the focus of Sections 10 and 11. nosis or design. Filtering out irrelevant information at the end of analysis is difficult and computationally expensive. 
real problem encountered in most existing qualitative reasoning systems. To achieve the desired descriptions we want every episode in a history to encompass the largest contiguous interval of time during which the state variable maintains a single qualitative value. More precisely, we say that an episode, el, is maximal if there exists no episode, e2, with the same value such that el’s temporal extent is a proper sub-interval of e2’s extent. We then say that a history is concise if every episode is maximal. Thus every point in a concise history where two episodes meet denotes a change in value. According to this definition, the example of Fred’s residence given at the beginning of this section is a concise history. Representing behavior in terms of concise histories makes ex- plicit all events of interest (i.e., the changes in values), while suppressing “uninteresting” details. These events can be used in expressing temporal relationships. The set of relevant relations between events provides the second component of a behavioral description. Instead of representing all relations, as in a total or- dering, we are only interested in relations between events whose interaction can result in the change of other quantities. The in- teractions of interest are defined by the system’s equations, with each equation typically specifying a single, local interaction. We will see that those interactions which affect the system’s behav- ior can be ident,ified during the analysis process.2 Given this In the next section we will see that a history-based approach allows us to separate out the specification of the behavior of each quantity from the description of the interrelationship between these behaviors. It is then necessary to specify only the rele- vant interactions between behaviors. In addition, relations have become explicit, making it possible for other reasoning mecha- nisms to use them. This is crucial since it allows us to change our underlying model for time without modifying the predictive inference mechanism. 4 Representing Behavior Over Time To avoid the above limitations we describe a system’s behavior in terms of 1) the behavior of each state variable over time, and 2) the relevant temporal relations between events. For each vari- able, we are typically interested in intervals of uniform behavior and points at which these behaviors change. For example, we might describe a person’s life history in terms of where he lives: ‘This is one solution to what Forbus refers to as the intersec- tion/interaction problem: “Which intersections of histories actually corre- spond to interactions between the objects?“[5]. 106 / SCIENCE representation, the problem remaining is to efficiently generate for propagation. This differs from traditional constraint propaga- behavioral descriptions of physical systems. 5 Propagation of Constraints Most systems performing some type of reasoning about physical systems have been based on propagation of constraints[8]. These include a wide variety of applications such as digital, quantita- tive and qualitative analysis, explanation, synthesis, diagnosis, and troubleshooting. One reason for the pervasiveness of this approach is that constraints naturally reflect the structure of the physical world around us. Because of its generality for physical problem solving, constraint propagation provides a framework in which to reason about temporal behavior. 
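Before turning to the propagator itself, here is a minimal sketch of the value histories of Section 4, under an assumed Python representation; Episode, History, and the merge-on-add behavior are my illustrative choices, not the paper's code. Adding an episode whose value matches its predecessor simply extends it, so every episode stays maximal and the history stays concise.

```python
# A sketch, not the paper's implementation: value histories as contiguous
# sequences of episodes, kept concise as episodes are added.

from dataclasses import dataclass
from typing import Any, List

@dataclass
class Episode:
    value: Any      # the value held over the interval
    start: float    # t-(e)
    end: float      # t+(e)

class History:
    """A contiguous, non-overlapping sequence of episodes for one variable."""
    def __init__(self):
        self.episodes: List[Episode] = []

    def add(self, value, start, end):
        # Extend the last episode when the value is unchanged, so that every
        # episode remains maximal and the history remains concise.
        last = self.episodes[-1] if self.episodes else None
        if last and last.value == value and last.end == start:
            last.end = end
        else:
            self.episodes.append(Episode(value, start, end))

# Fred's state of residence as a concise history of three episodes.
fred = History()
fred.add("Montana", 1910, 1928)
fred.add("Massachusetts", 1928, 1936)
fred.add("Alaska", 1936, 1980)
assert len(fred.episodes) == 3
```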
The remainder of this paper incorporates time into constraint propagation, but does so in a way that avoids the limitations described in Section 3. We will see that concise histories play a key role in making this happen. The next section presents a brief overview of traditional constraint propagation. 6 The Basic Constraint Propagator Constraint propagation operates on cells, values and constraints. A cell contains a single value, while a constraint stipulates a con- dition that a set of cells’ values must satisfy. Cells and constraints can be used to model state variables and equations respectively (e.g., f = ma, is represented as a constraint among the three cells f, m, and u). Values can be anything including real numbers, ranges, logic levels, signs or symbolic quantities. A constraint propagator performs two functions. First, given a set of initial values, constraint propagation tries to assign each cell a value that satisfies the constraints. Second, it tries to recog- nize inconsistencies between constraints and values, and identify the cause of this inconsistency. The basic inference step during propagation is to select a constraint that determines a value for a previously unknown cell. For example, if the propagator has discovered values f = 12 and a = 6, then it can use the constraint f = ma to calculate the value m = 2. In addition, the propagator records m’s de- pendency on f, a and the constraint f = ma (typically using a truth maintenance system (TMS)). The newly recorded value re- sulting from this inference may cause other constraints to trigger and more values to be deduced. Thus, constraints may be viewed as a set of conduits along which values can be propagated. The dependencies trace out a particular path through the constraints that the inputs have taken. Constraints are very general. A constraint is implemented as a collection of partial functions or &es, each involving a subset of the cells mentioned in the constraint. For example, f = mu is implemented as three functions: f (m, u) = mu, m( f, a) = f/u and a(f,m) = f/m. A function is applied whenever all of its inputs are known. Since the function may be partial, it may not deduce an output value for every set of inputs. 7 Temporal Constraint Propagation Standard constraint propagation is based on little more than function application. A temporal constraint propagator (TCP) adds to this knowledge about time, delay and feedback, using the concise history representation described in Section 4. Sepa- rating this knowledge from that specific to qualitative reasoning about continuous systems extends the propagator’s range of ap- plicability, including for example both quantitative and digital tion in that the objects being propagated are episodes (i,e., values over time intervals), rather than values. Section 8 describes how recording episodes in terms of histories aids in determining the sets of episodes on which each rule should run. Section 9 de- scribes how newly deduced episodes are checked for consistency and then incorporated into a concise history. To manage and reason about temporal relations, TCP uses a facility referred to as a time boz. The job of the time box is to answer questions like: “Which of the following episodes end first?” The expressive power of the time box determines the way in which delays, du- rations and temporal relations can be specified, and is discussed in Sections 10 and 11. In TCP, values are concise histories, and rules are functions parameterized by time (e.g., f (m, a(t)) = ma(t)). 
New episodes are deduced by applying rules to known episodes. That is, given a rule and an episode for each of the rule’s inputs, a new episode is deduced in two parts. First, the extent of the new episode is the intersection of each input episode’s extent. If this intersection is empty then no new episode is deduced. Second, the value of the new episode is deduced by applying the rule to the values of the episodes corresponding to each of the inputs. For example, given A = 8 over an interval (30,100) and B = 3 over (50,140), then the rule C = A - B is used to deduce C = 5 over (50,100). If the rule is a partial function then it may not return a value; in this case no episode is deduced. Next, the new episode is recorded in the cell for the rule’s result (e.g., the cell for C gets the value 5 with extent (50,100)). We indicate to the time box that the new episode’s extent is the intersection of each supporting episode’s extent. This informa- tion will be used during further propagation to determine the extent of episodes which depend on this new episode. Finally, the propagator uses a TMS to record the new episode’s depen- dence on 1) the applied function, 2) the input episodes, and 3) the deduction that the new episode’s extent is non-empty. This dependency information can be used for a variety of tasks, such as explanation, deduction caching, conjectural reasoning, diagnosis and guiding search. Often a time lag is involved when quantities interact. To model this we can associate a delay with each rule. In the analysis of many systems, this delay is considered infinitesimal; changes propagate almost (but not quite) instantaneously. Infinitesimal delay, along with feedback, is essential for modeling such proper- ties as stability, inertia, memory and causality, and is discussed in Section 11. 8 Histories To produce the desired behavioral descriptions TCP uses the de- duced episodes to construct a set of value histories. Recording sets of episodes as histories has a number of computational ad- vantages. Wh en applying a function to the episodes of a set of input cells, the constraint propagator must determine which combination of episodes and rules will result in new episodes. In a moment we will see that value histories allow us to accomplish this without having to consider the cross-product between the episodes of every input cell. In analyzing a system’s behavior we believe that our models are (or should be) internally consistent. Thus we are particularly interested in detecting any inconsistencies. When TCP records domains. The next few sections describe the basic components of TCP. The remainder of this section describes the basic inference step a new episode it must be checked for consistency with existing episodes. A cell must be single-valued at any point in time; thus an inconsistency arises if two episodes with differing values over- Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 107 lap in extent. Histories allow this test to be performed without having to consider every episode already recorded for the cell. This is discussed in Section 9. For the moment we assume that every rule is a complete func- tion (i.e., deduces a value for every set of inputs) and that each cell has at most one rule mentioning it as an output cell. To per- form propagation, each rule R “walks” along the histories of its input cells from start to end, deducing new episodes and adding them to the end of the output cell’s history. 
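Before looking at how a rule walks along entire histories, the single deduction step just described can be sketched as follows. This is a hypothetical helper, not the paper's implementation; Episode and deduce are assumed names, and a partial rule signals "no value" by returning None.

```python
# Sketch of the basic TCP inference step: the new episode's extent is the
# intersection of the inputs' extents, and its value is the rule applied to
# the inputs' values.

from collections import namedtuple

Episode = namedtuple("Episode", "value start end")

def deduce(rule, inputs):
    """Apply a (possibly partial) rule to one episode per input cell."""
    start = max(e.start for e in inputs)
    end = min(e.end for e in inputs)
    if start >= end:                 # empty intersection: nothing to deduce
        return None
    value = rule(*[e.value for e in inputs])
    if value is None:                # partial function declined to answer
        return None
    return Episode(value, start, end)

# The example from the text: A = 8 over (30,100), B = 3 over (50,140),
# and the rule C = A - B yields C = 5 over (50,100).
a = Episode(8, 30, 100)
b = Episode(3, 50, 140)
c = deduce(lambda x, y: x - y, [a, b])
assert c == Episode(5, 50, 100)
```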
Each propagation step consists of 1) applying R to a set of input episodes E, 2) recording the new result at the end of the output cell’s value history, 3) determining the set C of one or‘more episodes in E which end first, and 4) constructing the next set of episodes to be propagated. This is accomplished by modifying E so that ev- ery episode which appears in C is replaced by its successor. One episode being the successor of another in a history depends on the two episodes being adjacent. This dependency is also recorded. The propagation step is repeated on successive sets of episodes for R, moving monotonically forward along contiguous episodes of each of the input histories until the end of one of the input histories is reached. If that input history is later extended, then propagation using R continues. For example, given cells: A A - B, the following shows how history for C from A and B: B, and C and rule Rl: C= Rl is used to deduce the value cl 2 I3 I3 12 I >t A[ 6 16 1 >t B 1 3 ] 4 I' 6 >t 9 Making Histories Concise It is often the case that a rule deduces the same value for two successive episodes in its output history; thus, the sequence of episodes deduced is not necessarily concise. Before being propa- gated the sequence of deduced episodes is “summarized” by ac- cumulating each contiguous sequence of episodes with the same value into a maximal episode. It is this concise history which is then used for further propagation. To understand how the concise history of values is constructed we need to add one more level of detail to the picture above. Value histories are made concise by constructing them in a two step process. The product of a rule invocation is actually a ~US- tification episode and is added to the justification history for the quantity. A more precise picture is: >t where Dl, D2 and D3 are dependencies left behind by previous invocations of Rl. (Note: the justification history for a quan- tity is shown directly under the quantity’s value history, and is labeled by the rule used to construct it.) A justification episode contains the value computed along with the input episodes used in computing that value. As sug- gested above the concise history for C is constructed by sum- marizing contiguous justification episodes with the same value. Hence, the first two justification episodes for C are summarized into the maximal value episode shown. So in fact the whole process is one in which rules walk along value histories, producing as output new justification histories, which are then summarized into concise value histories. Justification episodes allow us to maintain a complete de- pendency record of the computation, while still maintaining the property that every value history is concise. To avoid irrelevant distinctions in the behavioral description, it is important that justification histories be concise as well. This property is satis- fied since each justification episode is concise with respect to the set of dependencies. Justification histories are an important component of a sys- tem’s behavioral description. For tasks like explanation and di- agnosis, knowing why a quantity has a specific value is at least as important as knowing what the value is. This is one of the important ways in which TCP differs from traditional simulators. Traditional simulators tell you what the values of each quantity are but not how they came about. 
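The walking process just illustrated might be sketched roughly as follows; this is an illustrative approximation rather than the paper's code, using plain tuples, a complete rule, and no justification recording. At each step the rule is applied to the current episodes, and every input history whose episode ends first is advanced to its successor.

```python
# Rough sketch of a rule "walking" along its input histories.

def walk(rule, input_histories):
    """input_histories: one list of (value, start, end) tuples per input cell."""
    indices = [0] * len(input_histories)
    output = []
    while all(i < len(h) for i, h in zip(indices, input_histories)):
        episodes = [h[i] for i, h in zip(indices, input_histories)]
        start = max(e[1] for e in episodes)
        end = min(e[2] for e in episodes)
        if start < end:
            output.append((rule(*[e[0] for e in episodes]), start, end))
        # advance every history whose current episode ends first
        for k, e in enumerate(episodes):
            if e[2] == end:
                indices[k] += 1
    return output

# Toy histories for A and B and the rule C = A - B.
A = [(2, 0, 3), (3, 3, 7), (2, 7, 9)]
B = [(6, 0, 5), (6, 5, 9)]
print(walk(lambda a, b: a - b, [A, B]))
# -> [(-4, 0, 3), (-3, 3, 5), (-3, 5, 7), (-4, 7, 9)]
```

Note that the output contains consecutive episodes with the same value (-3 over (3,5) and (5,7)); summarizing such runs into maximal episodes is exactly the job of the next step.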
In addition to constructing the justification histories, it is important to record the dependence of each maximal episode on the justification episodes used to construct it; that is, both that the justification episodes have the same value and that their extents are adjacent or overlapping. Having the propagator operate directly in terms of concise histories is essential. Suppose the sequence of deduced episodes were propagated without summarization. In this case, each suc- cessive episode with the same value will be propagated forward separately, rather than propagating forward a single maximal episode which encompasses them. In Section 11 we see that, in the worst case, feedback could cause an infinite number of con- secutive episodes with the same value to be created. In addition, this propagation will introduce a number of irrelevant tempo- ral distinctions into the predicted behavior. These result from the fact that each propagation of a set of non-maximal episodes requires an ordering between their ends. If the propagator can- not infer the required ordering, then it must either halt without predicting the rest of the behavior, or split cases on each of the orderings possible, as we saw in section 3. The use of maximal 108 / SCIENCE episodes solves this problem. Earlier we assumed that a cell is used as an output cell of at most one rule. In general several rules may deduce values for the same cell. Thus for a particular cell we must 1) record a justifi- cation history for every rule which uses it as an output, and 2) record the dependence of the value history on each of its justi- fication histories. In addition, we must also make sure that the values deduced in the different justification histories are consis- tent. To accomplish this we construct a procedure which walks forward along the episodes of each justification history, construct- ing the concise value history by creating successive maximal value episodes and associating justification episodes with each value episode. If a justification episode, je, has a value different from its immediate predecessor, then a maximal episode, we, with this value is added to the value history such that t-(ve) is constrained to be equal to t-(je). If ve already exists, then the function checks that ue’s value and t-(ve) are in agreement with je. Earlier we also assumed that each function is complete. If a function is partial then it could produce gaps in its justification history where a behavior cannot be predicted. During the extent of this gap the same value may be maintained, or may change several times. Thus, before incorporating into the value history any justification episodes following a gap, we must make sure that 1) the extent of the gap is covered by a combination of episodes from other justification histories for that cell, and 2) these episodes are incorporated into the value history. If the episodes on either side of a gap have different values, then the maximal episode containing the justification episode immediately preceding the gap must end somewhere within the gap. Consider the example consisting of three cells: A,B and C, and the constraint: C = A OR B. This constraint is modeled with three rules, each being a partial function: RI: IfA= 1 thenC= 1 R2: If B = 1 then C = 1 R3: If A = 0 and B = 0 then C = 0 The following shows the values and justification histories deduced for C, given inputs for A and B. 
Note that the justification histories for Rl, R2 and R3 overlap in places, each contains gaps, and together they cover the two value episodes for C: C 1 0 ,t Rl 1 I R2 1 I A R3 0 A Dl D2 D3 A 1 1 0 \ \ I+ B 1 0 1 0 1st To review briefly, in general a constraint propagator carries out four basic operations: 1) select a constraint and set of values, 2) apply the constraint to deduce a new value, 3) record the new value, and 4) check consistency. Section 8 discussed selecting constraints, Section 7 discussed applying constraints, and Section 9 discussed recording values and checking consistency. Note also that the overall goal of a constraint propagator is to tell us what will happen (i.e., compute values), why it will happen (i.e., record justifications) and to spot inconsistencies. 10 The Time Box Whenever TCP has a question about the relationship between the extents of different episodes it consults the time box. Sepa- rating inferences about time from behavioral prediction produces a system which is more easily extensible and conceptually clear. The demands placed by constraint propagation on the time box can be characterized in terms of 1) the types of questions asked, 2) the temporal information available, and 3) the inference necessary to answer these questions. TCP asks questions about temporal relations when apply- ing a constraint, or when incorporating newly deduced episodes into a value history. Questions asked by the temporal constraint propagator have been of the form “Does this episode have a non- empty extent?“, “Which of these episodes begins/ends first?“, “Are these episodes adjacent or overlapping?“, or “Are these re- lations consistent?” Each of these questions can be reduced to a question about the ordering (<, =, >, 5 or 1) between two or more events. In addition, the events being ordered are either parts of 1) value histories for cells participating in the same con- straint, or 2) justification histories for the same cell. Thus no global ordering is required - all temporal interactions are local to the constraints. Information about temporal relations is provided both exter- nally and by the constraint propagator. Information from the constraint propagator is of the form: “episode A is the inter- section of the following episodes”, “A is contained in B” , “these two episodes begin/end at the same time” or “these two episodes meet .” Each form, except intersection, can be expressed as a conjunction of endpoint orderings. Intersection is more complex and is discussed later in this section. Information provided externally is problem-dependent. For many digital and quantitative problems, precise information is available about the exact times that events occur in the input histories. This information might be provided in terms of precise numerical values, upper and lower bounds, algebraic relations (e.g., A occurs 20 seconds after B), or a total ordering. At the other extreme, qualitative reasoning makes as little commitment about temporal relations as possible; i.e., only when they af- fect the predicted behavior. At this extreme commitments about temporal relations are required by the propagator only when no further inferences can otherwise be made. Finally, the time box must use the temporal information pro- vided to check consistency and answer queries. Inconsistent in- formation leads to wasted effort exploring wrong paths; thus new information should be checked for consistency before it is used. 
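Returning to the C = A OR B example above, the three partial rules can be sketched as ordinary partial functions, under the assumption that None marks "no value deduced"; gaps in a rule's justification history arise exactly where its function declines to answer, and the remaining rules must cover those gaps consistently. The code below is a toy sketch, not the paper's implementation.

```python
# Sketch of the three partial rules for C = A OR B; names follow the text.

def R1(a, b):
    return 1 if a == 1 else None

def R2(a, b):
    return 1 if b == 1 else None

def R3(a, b):
    return 0 if a == 0 and b == 0 else None

# At any instant, whichever rules fire must agree; together they determine C.
for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    votes = {r(a, b) for r in (R1, R2, R3)} - {None}
    assert len(votes) == 1          # single-valued cell: no conflicting values
    print(f"A={a} B={b} -> C={votes.pop()}")
```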
The number of relations queried is small relative to the number of relations deducible; thus relations should be deduced on demand. The inference algorithms used in the time box depend on the types of temporal information supplied and on properties of the constraint network. The simplest case occurs when the con- straint network has no feedback loops, and the end points of the episodes for each input are specified quantitatively. In this case Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 109 the extent of each justification episode can be determined pre- cisely, as the intersection of its support, using simple arithmetic. In addition, the extent of each deduced value episode can be de- termined from its justification episodes before it is propagated. Thus, answering a query involves a simple arithmetic compari- son. In this case TCP acts like an event driven simulator which episode is propagated the time box is told about its relation to other justification and value episodes. End points of episodes are represented symbolically and are constrained by specifying rela- tions with other points. Instead of computing each point’s value immediately, they are only solved for as needed. Thus constraints between points can be updated with no cost.4 To see how this is useful, consider a simple feedback example where a quantity Q is a function of only itself (i.e., Q = f(Q)). In addition, assume the function is an identity function I and there is some delay d from input to output. From this mathematical model the propagator should be able to deduce that, due to the delay, Q will never change its value (i.e. Q has inertia). records dependencies. A second case occurs when the system has no feedback, but the endpoints are specified qualitatively, through inequalities or upper and lower bounds. This case is similar to the first except that, to answer queries the time box must combine inequality rea- soning (e.g., transitivity of > and =) with reasoning about simple arithmetic expressions. In addition, the minimum/maximum of Initially we are given that Q = c over an interval (to, tl) where tl - t0 > d and c is a constant. This fact is recorded by constructing a value episode Vl for Q with value c and extent (to, t2). This is supported by justification episode Jl with extent (to, tl) and value “Given.” VI must at least include the extent of Jl thus tl < t2. Next I is invoked on Vl, producing J2 with extent (t3, t4). We know that t0 < t3 and t2 < t4 because I has delay. Also, 52 overlaps Jl (since the delay through I is less than the extent of Jl), thus J2 is used to support Vl as well. This implies that t2 2 t4, because the extent of Vl at least includes the extent of J2. This however is inconsistent with t2 < t4. The only explanation consistent with these constraints is that Vl never changes. This situation is depicted below: a set of events (used to determine intersection) are determined by querying the ordering between each of the systems are available with this capability [ 7,2]. events. Several The remaining case to consider involves analyzing systems with feedback using a qualitative representation for time (i.e., specifying only partial orders on events). This is the most inter- esting class for qualitative reasoning and is the topic of the next section. 11 Modeling Time for Feedback Systems Feedback is both pervasive and important. Even trivial systems, such as two component circuits and simple harmonic oscillators, exhibit feedback. 
Without feedback, physical systems would not have state or memory. Properties such as damped oscillation and bistability depend critically on feedback. Reasoning with feedback is more difficult than the two cases addressed in the previous section. The problem lies in deter- I C I >t I to c-L+ mining the feedback, t a quantity the extent extent of a maximal value episode. If a system has #hen the delay through the feedback loop may cause to maintain its value (e.g. ,, as in a flip flop). Thus of the episode will depend on itself. This is the im- Given C portant difficulty. If the constraint propagator waits until the extent of the value episode is determined before propagating it, then it cannot determine this cyclic dependency and the extent will not be determined. This is one of the reasons why tempo- ral reasoning systems such as the one described in reference [2] cannot handle feedback. to Jl t1 Consider what requirements this type of problem places on the time box. The type of argument given above can only be made if 1) the constraint propagator generates these relations for the time box, 2) the time box can recognize inconsistencies, and 3) the time box is given and can manipulate symbolic end- points. Consider the relations which TCP must record; there are two cases. First, when a justification episode, J, is incorporated into a value episode, V, we use inequalities to record that the extent of J is a subset of V (i.e., t-(V) 5 t-(J) and t+(V) 2 t+(J)). Second, when a justification episode, J, is deduced from a set of value episodes, V 1, V2, ..Vn, we record that the extent of J is equal to the intersection of episodes in the set (i.e., t- (J) = Max[t-(Vl), t-(V2) , . . . t-(Vn)] and t+(J) = Min[t+(Vl), t+(V2) ) . . . t+(Vn)]). Min and Max, in turn, can be expressed using inequalities and disjunction. For example, a = Min(b, c) becomes “a 5 b, a 5 c, and (u = b or a = c)“. Thus, to reason fully about the temporal relationships deduced during propaga- tion, the time box must be able to handle both inequalities and disjunction. Notice, however, that every relation in the disjunc- tion produced by an expression x = Min [ . ..I mentions the point One solution to this problem is to drop the restriction that episodes be maximal, and instead construct the value histories directly from the result of each propagation.3 In this case prop- *If we are interested in recognizing all inconsistencies immediately then we must pay the cost of testing a temporal relation for consistency when recorded. 1 IO / SCIENCE x.~ Thus the time box need only deal with a limited form of disjunction, rather than the general case. As part of this research, a polynomial time algorithm has been developed which answers questions of validity and consis- tency about relations involving inequalities, given expressions in- volving inequalities and the limited form of disjunction described above. More specifically, given a set of R relations including D disjuncts, determining whether or not a particular relation log- ically follows from this set takes worst case time Q(D * R).6 A number of techniques are used for guiding constraint propagation which significantly reduces this time in practice, without sacri- ficing the completeness or soundness of this algorithm. However, space does not permit a detailed discussion of the algorithm or these techniques here. Even with a powerful time box, modeling systems with feed- back is still a difficult problem. 
For example, TCP inherits the well known limitation of local constraint propagators in that it is not a complete constraint satisfaction system. A number of ex- isting techniques[ 10,8] can be used in TCP to solve this problem, depending on the representation for values being used. 12 Qualitative Reasoning Revisited Given the framework described above, incorporating qualitative reasoning about physical systems into TCP involves 1) mapping time to the reals (e.g., specifying intervals to be open and closed), 2) adding principles of continuity and integration[g], 3) providing a set of constraints which support a qualitative representation and algebra, and 4) expressing the laws of physics in terms of these constraints. Instead of describing each of these steps we demonstrate the approach with a familiar example. Consider a harmonic oscillator, consisting of a mass, M, and a spring, S. Let z denote the position of the mass with respect to its rest point. The spring is extended from its rest point (z > 0) and released at time t0 with zero velocity. The position of the mass then oscillates back and forth, extending and compressing the spring. The initial part of the oscillation can be explained as follows: At time t0 the spring is extended from its rest point (Z > 0) and its velocity is zero. The positive position produces a force on the spring and mass, causing an immediate acceleration. This acceleration causes the spring to begin to move inward towards its rest point immediately after to. Because of the increasing veloc- ity, the spring eventually reaches its rest length (z = 0) after a finite interval of time. We would like to use TCP to generate a prediction correspond- ing to the behavior explained above. To do this the system is described in terms of the state variables, position (z), velocity (u), acceleration (u) and force (f). The qualitative representa- tion used consists of the sign of each quantity (+, 0, or-), and equations used to describe the system are: (El) fs(t) = Icz(t) Hooke’s Law (EZ) fs(t) = -fm(t) Conservation of Force (E3) fm(t) = ma(t) Newton’s First Law where the subscripts s and m on force denote mass and spring respectively. k and m are assumed to be positive and finite. For 5Conceptually, if we view each relation as an edge in a graph, and each point as a vertex, then all the edges mentioned in a disjunction are connected to a common vertex. ‘Answering the same question, but disallowing disjunction takes worst case time O(R). simplicity of presentation we rewrite El-E3 as: (E4) a(t) = -s(t)k/m In addition to these basic constraints we incorporate a few spe- cial rules. A detailed discussion of the principles underlying these rules is presented in [9] and [lo]. We know from continuity that (Cl) a quantity moving through an open/closed interval of space takes an open/closed interval of time. Thus, a quantity, Q, will be in the open interval “positive” (0 < Q < ;nf) for an open interval of time and in the closed interval “zero” for a closed in- terval (possibly an instant). 
From principles of rates of change (integration) we know that: (11) if a quantity, Q, is 0 at some instant, tq, and dQ/dt is negative over an interval immediately following tq, then Q will be negative at least over that interval,’ (12) if a quantity, Q, is positive and its derivative is bounded above by a finite negative value as long as Q is positive, then Q will become zero in a finite amount of time (although it might remain so for only an instant), and (13), if a quantity, Q is nega- tive and its derivative is non-positive over an interval, then Q is bounded above by a finite negative value over that interval. The following naming and notational conventions are used. A pair (v, ;) is used to denote an episode with value u and interval i. (a, b) denotes the open interval from a to b, and [a, b] denotes a closed interval. A justification episode’s value is a list consisting of the rule and value episodes used in the deduction. The nth value episode for state variable z is named “Zion” and the nth justification episode is named “zjn”. In the explanation, if two points are equal then the same name is used for both points. Each relation between points is labeled “Rn” where n is an integer. The remainder of this section shows each sentence of the English explanation given above and the corresponding prediction made by TCP. While constructing a complete dependency record is important for many tasks, presenting these details here would obscure the example, so they are omitted. The analysis begins with the spring extended but not moving (z > 0 and u = 0 at to): xvl: (z = +, [to, t1)) xjl: ((Given), [to, to]) justifying xv1 RI: t0 < tl the extent of xv1 contains xjl vvl: (u = 0) [to, t2]) vjl: ((Given), [to, to]) justifying vvl R2: t0 5 t2 the extent of vvl contains vjl The t- of zvl and wl are both closed because analysis begins at to. t+ of zul is open by Cl since 2 is positive. Likewise, t+ of vu1 is closed by Cl since u is zero. Rl holds, since z is positive at least as long as the extent of the given zjl; likewise for R2. In addition, Rl is a strict inequality since the end of zul’s extent is open. The positive position (zul) produces a force on the spring and mass, causing an immediate acceleration (uvl). By E4, x being positive over [to, tl) causes a to be negative over that interval: avl: (u = -, [to, t3)) ajl: ((E4, xvl), [to, tl)) justifying au1 R3: tl < t3 the extent of au1 contains ujl This acceleration causes the spring to begin to move inward imme- diately after tU. Specifically, u is 0 at t0 (uul) and its derivative, a, is negative immediately following t0 (uvl), so by 11, u becomes negative immediately following to, and remains so as long as a is 711 is actually a special case of the constraint: [Q(t + epsilon)] = [Q(t)] + PQ(t + O/4, h w ere E is an infinitesimal delay and the notation [z] denotes the sign of z. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 111 negative: vv2: (w = -,(t0,t4)) vj2: ((11, vvl, avl), (to, t3)) justifying vv2 R4: t3 2 t4 the extent of vv2 contains vj2 R5: t2 < to vv2 follows vvl vvl ends when v is no longer zero and v is negative over (tO,t3) thus vvl ends at or before t0 (R5). Because of the increasing velocity, the spring eventually reaches its rest length (z = 0) after a finite interval of time. 
Specifically, 1) z is positive (xvl), 2) v is bounded by a finite negative value over (to, t3) by 13, since v and its derivative are negative over this interval (vv2 and avl), and 3) v is bounded as long as x is positive, since t+(xvl) = tl 5 t3 = t+(vv2). Thus by 12, 2: = 0 at tl: xv2: (x = 0, (t1, t5]) xj2: ((12, xvi, vv2, avl), [tl, tl]) justifying xv2 R6: t1 5 t5 extent of xv2 contains xj2 The analysis continues, identifying a deceleration immediately following t5, and eventually predicting the oscillation. The deductions made above are summarized in the following figure: 12 Gvn V I1 ( Gvn f 0 a. f- L I >t A causal explanation can be constructed by tracing forward along the dependencies through each justification episode whose start coincides with the beginning of the value episode it supports. In the figure above the chain of dependencies drawn with a thick line corresponds roughly to the causal explanation printed above in italics. While we have used the oscillator for simplicity of presenta- tion, our system is in fact capable of dealing with considerably more complex devices involving partially ordered, time varying inputs. It has, for example, predicted the bistable behavior of an SR-latch built from cross-coupled NOR gates, accurately model- ing the positive feedback that is crucial for latching values. 13 Summary and Research Status Predicting behavior involves describing both what happens and why. Thus, the propagator must provide a clear description of both a quantity’s values and their justifications. We have seen that concise histories are crucial in describing values and their justifications, as was demonstrated in the issue of feedback. TCP provides a clear separation between inferences about time and behavioral prediction, This allows a variety of temporal repre- sentations to be used without modifying the propagator itself. Finally, by avoiding unnecessary commitments the resulting pre- dictions are more broadly applicable. A prototype of TCP was implemented in the fall of 1984, using Simmons’ Quantity Lattice[‘l] as the time box. The power of this approach has been demonstrated on a number of examples taken from digital electronics and arithmetic with time varying inputs. A second time box has been developed that incorporates disjunction. This, along with principles of qualitative reasoning are currently being incorporated into TCP. Plans for the near future include: 1) augmenting TCP to perform envisionment as well as simulation, 2) incorporating techniques for abstracting or approximating constraints used during analysis, and 3) adding a more robust control strategy for guiding this process. 14 Acknowledgements I would especially like to thank Randy Davis for many hours of help in clarifying these ideas. I would also like to thank the following people for their advice and support during this research: Johan de Kleer, Ron Brachman, Walter Hamscher, Mark Shirley, Reid Simmons, Jeff Van Baalen and Leah Ruby Williams. I would also like to thank AT&T Bell Labs for support and the use of their equipment during the writing of this paper. References ill PI PI PI PI PI PI PI PI WI Allen, J., “Maintaining Knowledge About Temporal Inter- vals,” Comm. ACM, 26 (1983)) 832-843. Dean, T., “Planning and Temporal Reasoning Under Uncer- tainty,” IEEE Workshop on Principles of Knowledge-Based Systems, Denver, CO, (December, 1984). de Kleer, J., and Brown, J.S., “A Qualitative Physics Based on Confluences,” Artificial Intelligence, 24 (1984), 7-84. de Kleer, J., and Williams, B. 
C., “Reasoning about Mul- tiple Faults,” Proceedings National Conference on Artificial Intelligence, Philadelphia, Penn., August, 1986. Forbus, K., “Qualitative Process Theory,” Artificial Intelli- gence, 24 (1984)) 85-168. Kuipers, B., LLCommonsense Reasoning About Causality: Deriving Behavior From Structure,” Artificial Intelligence, 24 (1984) 169-204. Simmons, R., “ ‘Commonsense’ Arithmetic Reasoning,” Proceedings National Conference on Artificial Intelligence, Philadelphia, Penn., August, 1986. Sussman, G.J., and Steele, G.L., ‘CONSTRAINTS: A Lan- guage for Expresssing Almost Hierarchical Descriptions,” Artificial Intelligence, 14 (1980)) l-40. Williams, B.C., “The Use of Continuity in a Qualitative Physics,” Proceedings National Conference on Artificial In- telligence, Austin, TX, (August, 1984). Williams, B.C., “Qualitative Analysis of MOS Circuits,” Ar- tificial Intelligence, 24 (1984), 281-347. 112 I SCIENCE
Interpreting measurements of physical systems Kenneth D. Forbus Qualitative Reasoning Group Department of Computer Science University of Illinois 1304 W. Springfield Avenue Urbana, Illinois, 61801 Abstract An unsolved problem in qualitative physics is generating a qualitative understanding of how a physical system is behaving from raw data, especially numerical data taken across time, to reveal changing internal state. Yet providing this ability to “read gauges” is a critical step towards building the next generation of intelligent computer-aided engineering systems and allowing robots to work in unconstrained envirionments. This paper presents a theory to solve this problem. Importantly, the theory is domain independent and will work with any system of qualitative physics. It requires only a qualitative description of the domain capable of supporting envisioning and domain-specific techniques for providing an initial qualitative description of numerical measurements. The theory has been fully implemented, and an extended example using Qualitative Process theory is presented. 1. Introduction Interpreting numerical data is an important part of monitoring, operating, analyzing, debugging, and designing complex physical systems. A person operating a nuclear power plant or propulsion plant must constantly read and interpret gauges to maintain an understanding of what is happening and take corrective action, if necessary. Designing a new system requires running numerical simulations (or building models of the system) and analyzing the results. Diagnosis requires interpreting behavior, both to see if the system is actually operating correctly and to determine if a hypothesized fault can account for the observed behavior. All of these problems require the ability to deduce the changing internal state of the system across time from measurements. Currently there is a great deal of interest in applying qualitative physics to engineering tasks such as diagnosis (e.g., the articles in (Bobrow, 19851). For such efforts to be successful, a theory about how to translate observed behavior, including numerical data, into useful qualitative terms is essential. This paper presents such a theory. The theory is domain independent and makes only two assumptions about the nature of the underlying domain model. Specifically, it assumes that: 1. Given a particular physical situation, a graph of all possible behaviors - an envisionment - may be generated. 2. Domain-specific criteria are available for quantizing numerical data into an initial qualitative description. The theory is analogous to AI models of speech understanding (e.g., [Reddy, et. al, 1973]). In these models the speech signal is partitioned into segments, each of which is explained in terms of phonemes and words. Grammatical constraints are imposed between the hypothesized words to prune the possible interpretations. In this theory, the initial signal is partitioned into pieces which are interpreted as possible particular qualitative states of the system. By supplying information about state transitions, the envisionment plays the role of grammatical constraints, imposing compatibility conditions between the hypotheses for adjacent partitions. 1.1. Overview The goal of this theory is to produce a general solution for the problem which can be instantiated for any particular physics and domain. Consequently we couch the analysis in an abstract vocabulary and specify what domain-dependent modules are required to produce initial qualitative descriptions. 
We demonstrate how the theory can be instantiated using an example involving Qualitative Process theory [Forbus, 1981, 19841. The theory has been implemented, and the performance of the implementation on these examples is demonstrated. It should be noted that the theory and implementation have also been successfully applied to a completely different system of qualitative physics, the qualitative state vector ontology ([de Kleer, 19751, [Forbus, 1980, 1981b]), as described in (Forbus, 19861. The next section provides a vocabulary for describing the initial data and places constraints on the segmentation process. Section 3 generalizes an earlier theory of interpreting measurements taken at an instant for QP theory [Forbus, 19831 and shows how the envisionment can be used to locally prune interpretations of segments. ’ Section 4 illustrates how global interpretations are constructed and how gaps in the input data can be filled. Finally we discuss planned extensions and some implications of the theory. 2. Input Data and Segmentation First we describe the kinds of inputs the theory handles. We assume a function time which maps measurements to real numbers, and that the duration of an interval is simply the difference between the times for its start and end points. We also assume the temporal relationships described in [Allen, 19811 may be applied to intervals (i.e., Meet, Starts, and Finishes.) We say Observable (Cp>, <I>) when property <p> can be observed in principle by instrument <I>. To say that we can measure the level of water in a can with our eyes we write2 Observable (A [Level (C-S (water, llquld, can) ) 1 , eyes) To say that some property is in fact observable at some time, we use the predicate Observable-at, which takes a time as an extra argument. ’ An “envisionment” is, roughly, “the set of all qualitatively distinct possible behaviors of a system.” However, sometimes it is used to refer to “all behaviors pos- sible from some given initial state” and sometimes to “all behaviors inherent in some fixed collection of objects in some configuration, for each possible initial state ” We call the first type attarnable envisionments, and the second total envisionments. Here we are’only be concerned with total cnvisionments ’ The first argument uses notation from QP theory; A is a function that maps from a quantity to a number representing the value of that quantity, Level is a function mapping from individuals to quantities, and C-S is a function denoting an individual composed of a particular substance in a particular state, distinguished by virtue of being in a particular place. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 11 .i From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. We sav associated with distinct segments of a measurement sequence to be The input of a measurement interpretation problem is a set of measurement sequences, each consisting of a set of measurements totally ordered by the times of the measurements. Suppose we have some “grain” on time, St, such that events of duration shorter than st will not be considered relevant. (The problem of instantaneous events will be discussed in section 4.1.) Then two types of measurement sequences must be considered: Close: The data is complete, in the sense that over the total interval of interest measurements are separated by durations no larger than St. Sampled: There larger than st. 
are temporal gaps in the data whose duration is Given an assumption of a finite “grain size” of analysis, with close data we are justifictl in assuming that centiguous segments of the data correspond to successive states of the system. With sampled data wc can only make this assumption on close subsequences. Regular sequences are a subclass of close sequences where successive measurements are exactly St apart. adjacent if there is no interval in between them (by assumption, I, and 12 cannot overlap). If the minimum distance between the times of the end points is not greater than st, we also say that the intervals Meet, as defined in Allen’s time logic. The function Int maps segments to intervals. The local information provided by the segmentation of measurement sequences must be combined to form global segments, intervals over which the qualitative state of the system is not obviously different. We define global segments as follows. Let {MS13 be a collection of measurement sequences, each of which has a segmentation <Si 8 3. The global segmentation consists of a set of global segments <& $ such that 1. The value of the over GSk. property measured for each MS1 is constant 2. Starts (GSk, Int (S l,3>) for some S1 l , i.e., the start time of each global segment corresponds to the h arting time of some segment in one or more of the segmented measurement sequences. 3. Finishes (GSk, Int (S-& 1 for some S1 k, i.e., the end time of each global segment corresponds to the &d time of some segment in one or more of the segmented measurement sequences. The first constraint prevents a global stgment fmm straddling an obvious qualitative boundary, and the last two constraints ensure it spans the largest possible interval where quaitative values are constant. Thus global segments are good candidates for explanation by a single qualitative state. 2.1. Segmenting the input data 2.2. QP Example, Part 1 The first problem is to partition the measurement sequences into meaningful pieces. We define a segment of a measurement sequence to be the largest contiguous intervaJ over which the measured property is “constant”. A symbolic property is constant over an interval if its value is identical for all measurements within that interval. Notice that in QP theory signs of derivatives are symbolic properties in this sense. A numerical parameter is constant over a segment if the same qualitative value can be used to describe each measurement in the segment. The exact notion of qualitative value depends of course on the choice of domain representation and ontology.3 All we require is that algorithms exist for taking numerical values and producing at least some qualitative description sanctioned by the representation used. In QP theory, for example, numerical values can be described in terms of inequalities, the quantity space representation. If some domain-specific constants are unknown, such as the boiling temperature of a particular substance at a certain pressure, partial information can be delivered. In the worst case, the sign of the derivative can be estimated. Once numerical parameters are translated to qualitative values segmentation becomes simple. However, these segments cannot necessarily be identified with a single qualitative state. First, the qualitative value may be partial, as noted above. Second, a state transit ion may leave the measured parameters constant for some time (possibly forever). Consider a home heating system. Suppose you turn the thermostat up past the ambient temperature. 
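One plausible way to compute the global segments defined earlier in this section (a sketch under assumed data structures, not the ATMI implementation) is to cut the time line at every segment boundary of every measurement sequence and then merge adjacent pieces whose qualitative values all agree, so that each resulting global segment is maximal.

```python
# Sketch: forming global segments from per-sequence segmentations.

def global_segments(segmentations):
    """segmentations: dict property -> list of (value, start, end) segments."""
    cuts = sorted({t for segs in segmentations.values()
                     for (_, s, e) in segs for t in (s, e)})
    pieces = []
    for start, end in zip(cuts, cuts[1:]):
        values = {prop: v
                  for prop, segs in segmentations.items()
                  for v, s, e in segs if s <= start and end <= e}
        pieces.append([start, end, values])
    merged = []
    for p in pieces:                 # merge neighbors whose values all agree
        if merged and merged[-1][2] == p[2] and merged[-1][1] == p[0]:
            merged[-1][1] = p[1]
        else:
            merged.append(p)
    return [tuple(p) for p in merged]

# Two measured properties with differently placed (illustrative) boundaries.
segs = {"Ds(T)": [("1", 0.0, 1.3), ("0", 1.3, 2.1)],
        "Ds(P)": [("1", 0.0, 0.8), ("1", 0.8, 2.1)]}
for g in global_segments(segs):
    print(g)
```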
If you cannot hear the furnace firing or touch a radiator, then you will not know for some time whether or not the system is actually working. This hidden transition problem must be taken into account when pruning interpretations, discussed below. Many changes in the physical world can be characterized as the result of physical processes, such as heat flow, liquid flow, boiling, and motion. Q “l’t t ua I a ive Process theory formalizes this intuitive notion of physical process and provides a qualitative language for differential equations that preserves distinctions required for causal reasoning. QP theory provides several types of measurable properties, including the truth of predicates and relations, whether or not different processes are acting, and of course information about numbers. Ideally measurements of amounts and magnitudes should be segmented whenever their descriptions in terms of quantity spaces change. However, as we will see a great deal of information can be gleaned from just the signs of derivatives (i.e., the DS value of a quantity, which ranges over c-1, 0, 13). Suppose we have a beaker that has a built-in thermometer. Suppose we also know that the beaker either contains some water, some alcohol, or a mixture of both. In this case we can always measure the temperature, i.e. V t E time Observable-at (A [Temperature (Inside (beaker) ) 1 , thermometer, t) If we plot the temperature with respect to time we might get the graph shown in Figure 1.4 If we don’t know the numerical values for the boiling points of water and alcohol, then all we can get from this graph is the DS value for temperature as a function of time. Providing this list of Ds values to the program results in six segments. Since this is the only property measured, each segment gives rise to a single global segment. The program’s output is shown in Figure 2. Each segment of a measurement sequence covers a non-empty collection of data, and since the data is temporally well-ordered there will be a maximum and minimum time associated with this data set. Let the minimum time be the start time and the Ilr:lximum time be the end time. We define two intervals I,, I2 a To be a qualitative representation some such notion must exist; the primary purpoee of such representations ia to provide quantizations of the continuous world which form useful vocabularies for symbolic reasoning. 11-i / SCIENCE Figure 1 - Temperature plotted as tl, function of time Figure 2 - Segments and global segments for the QP problem Here are the segments and global segments generated by the imple- mentation from the data in Figure 1. ATMI: Finding initial segments... lpropertieshavebeenmeasured. For Ds of (T INSIDE-BEAKER) : Starttime =O.O, Endtlme=ll.7. 117 samples, taken O.ltlme units apart. Divided Into 8 segments. DS 0f (TINSIDE-BEAKER) is 1 fromO.Oto 1.3. Ds of (T INSIDE-BEAKER) Is0 froml.4to2.1. Ds of (T INSIDE-BEAKER) Is lfrom2.2to4.l.. Ds of (T INSIDE-BEAKER) Is 0 from4.2t05.8. Ds of (T INSIDE-BEAKER) Is 1 from 5.9 to 8.5. Ds of (T INSIDE-BEAKER) 1s 0 from8.6to 11.7. ATMI: Findlngglobalsegments... There are 6globalsegments. 8. Interpreting segment5 If the segmentation based on domain-specific constraints is correct, a global segment should typically be explained as the manifestation of a single qualitative state. A qualitative state consists of a finite number of components,’ some fraction of which are fixed by the measurement sequences. 
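As a brief aside on the segmentation step that produced the segments of Figure 2: a minimal sketch is given below. This is only an illustration, not the ATMI code; the function name, data layout, and sample times are invented, and the input is assumed to be an already-translated list of (time, Ds-value) samples.

    # Illustrative sketch: group (time, Ds-value) samples into maximal runs
    # of constant Ds value, as in the segments of Figure 2.
    def segment_ds_sequence(samples):
        """samples: list of (time, ds) pairs sorted by time.
        Returns a list of (ds, start_time, end_time) segments."""
        segments = []
        if not samples:
            return segments
        start_t, current = samples[0][0], samples[0][1]
        prev_t = start_t
        for t, ds in samples[1:]:
            if ds != current:                      # qualitative value changed
                segments.append((current, start_t, prev_t))
                start_t, current = t, ds
            prev_t = t
        segments.append((current, start_t, prev_t))
        return segments

    # Hypothetical samples reproducing the first three segments of Figure 2:
    data = [(0.0, 1), (0.1, 1), (1.3, 1), (1.4, 0), (2.1, 0), (2.2, 1), (4.1, 1)]
    print(segment_ds_sequence(data))
    # -> [(1, 0.0, 1.3), (0, 1.4, 2.1), (1, 2.2, 4.1)]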
If every component of the qualitative state is measured, then there can be only a single interpretation for each segment. Usually there are several, so we must generate the set of qualitative states that could give rise to the ’ This graph was generated by a numerical simulation program; it does not represent actual measurements. The numbers were hand-translated to DS values. measurements. The “one look” theory of measurement inf c~r.l.~~tation cited previously describes a solution to this prol,lcl~, 1’1’r ( :I’ theory. We now generalize it. Call the states in the tot:>; 81. i.ioument which are consistent with the measurements represt 1 1 t (1 by some global segment its p-interps. The possible interpretntions of each global segment is exactly this collection of states. As the 1983 paper illustrated, this set may be computed via dependency-directed search over the space of possible qualitative states, pruning those which are not consistent with the measurements. If instead the total envisionment has been explicitly generated, then p-interps can be computed by table-lookup (the implications of this fact are discussed below). However p-interps are computed, any system of reasonable complexity will give rise to many of them. Therefore it is important to prune out inconsistent interpretations as quickly as possible. Any domain-specific information applicable to the one- look case, as described in the 1983 paper, could again be useful in this context. However, when we have close data we can impose “grammatical” constraints, ruling out those p-interps which cannot possibly be part of any consistent pattern of behavior. To impose these constraints we need to refer to the possible transitions between qualitative states contained in the total envisionment. We assume that associated with each qualitative state St is a set of ufters which are the set of states which can be reached from St via a single transition. The following assumption is needed to apply this information: Simplest Action Assumption: The qualitative states St1 and St2 which describe the behavior of two global segments Sl and S2 which are temporally adjacent in a close sequence (i.e., Meet(Int(Sl), Int(S,>)) are temporal successors in the total envisionment, i.e. St2 E Af ters (St11 . In essence, this is a “compatibility constraint” applied to action. For it to be true st, our sampling time, must be small enough so that all important changes are reflected in the data. The temporal adjacency between Sl and S2 implies that any state which serves as an explanation for S1 must have a transition that leads to some state which explains S2. Similarly, any state which explains S2 must result from some state which explains Sl. These facts can be used locally, via Waltz filtering, to prune p-interps as follows: Given global segments Sl, 2 S s.L. Meets (Int (S1) , Int (S2)), ForeachStl Ep-interps(S1) andSt2 Ep-lnterps(S2) if 13 St0 E p-lnterps (S2) s.t. St0 E Afters (Stl), then prune St1 from p-interps (Sl) if 13 St0 E p-lnterps (S1) s.t. St2 E Afters (StO), then prune St2 from p-lnterps (S2) These rules must be applied to each global segment in turn until no more p-interps are pruned. Suppose for some global segment S, p-lnterps(S) = C). Then either (a) the dataisinconsistent or (b) the simplest action assumption is violated, either because there is more than one qualitative state required to explain a particular global segment (the hidden transition problem described previously) or the sample time st is not short enough. 
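The pruning rules above can be read almost directly as code. The following is a hedged sketch rather than the actual implementation: p_interps is assumed to be a list of sets, one per global segment in temporal order for a close sequence, and afters maps each qualitative state to its possible successors in the total envisionment.

    # Apply the two pruning rules to each adjacent pair of global segments
    # until no more p-interps are excluded (Waltz-style filtering).
    def waltz_filter(p_interps, afters):
        changed = True
        while changed:
            changed = False
            for i in range(len(p_interps) - 1):
                s1, s2 = p_interps[i], p_interps[i + 1]
                # prune states of s1 that lead to no state in s2
                keep1 = {st for st in s1 if afters.get(st, set()) & s2}
                # prune states of s2 that follow from no state in s1
                keep2 = {st for st in s2
                         if any(st in afters.get(p, set()) for p in s1)}
                if keep1 != s1 or keep2 != s2:
                    p_interps[i], p_interps[i + 1] = keep1, keep2
                    changed = True
        return p_interps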
Suppose the p-interps for a segment include states that are temporally adjacent, that is, for some St1 and St2 in p- lnterps(S), St2 E Afters(St1). Since Stl*and St2 are in the same set p-lnterps(S), they must be indistinguishable with respect to the measurements provided. This is exactly how the hidden transition problem arises, and in fact is the only way it ca.n arise - otherwise, the set of p-interps would be incomplete. Thus to find hidden transitions it suffices to extend the collection of p - 6 This would not be true if our system model contained an infinite number of parts. We assume such models can always be avoided. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 11 j interps to include all sequences of states from the original collectron which are temporally adjacent. Two points should be made about this pruning algorithm. First, in cases where the measurements are not very constraining the number of such sequences could grow very large. In the limiting case of no relevant measurements, the set of p-interps would correspond to the set of all possible paths and connected subparts of paths through the envisionment ! We suspect such cases could arise when reasoning about a very large system with several loosely- connected components while only watching a little piece of it, and hence suggest instead a scheme combining pruning with backup for those circumstances. Second, the algorithm can easily tolerate extra states in the sets of p-interps, but will be sensitive to missing states. These properties follow directly from the fact that states are only pruned when certain other states cannot be found. This means that gaps in the initial data will show up very rapidly, without extensive global computations. 8.1. QP example, part 2 Consider again the physical situation involving liquids discussed previously. The only processes we will be concerned with are heat flow to the liquids (if any), heat flow to the beaker, and boiling. We ignore any gasses that are produced, the possibility of the beaker melting or exploding, and any heat flow to the atmosphere. While we do not assume knowledge of the actual boiling points of water or alcohol, we assume that the boiling temperature of alcohol is lower than the boiling temperature of water. Given these assumptions, Figure 3 shows the total envisionment for the possible configurations of objects. Since our only available measurement is temperature there is a great deal of ambiguity, as indicated by the p-interp lookup table in Figure 4. Allowing the program to apply the pruning rules, we Figure 8 - Total Envisionment for liquids problem The states in the picture below are divided into rows based on the contents of the beaker. A thick arrow from the burner through the beaker indicates heat flow, and small bubbles indicate boiling. A thin arrow indicates a possible transition from the state at the tail to the state at the head. Alcohol Only Water OdY AkohoV Water Miiture find that after four iterations a unique solution has emerged (se&\ Figure 5). Even with very little initial data, we can conclude from this result that originally there was a mixture of water and alcohol in the beaker (S9). The mixture heated up until the alcohol started to boil (SlO). Aft er the alcohol boiled away the water heated up (SB) and began to boil (S7). After the water boiled away, the beaker heated up (SZ) until thermal equilibrium was attained (Sl). 4. Constructing global interpretations Suppose the initial data is close. 
Then if it is correct we have a complete collection of initial hypotheses, and if the simplest action assumption is not violated and that the data is consistent, as indicated by a non-null set of p-interps for each total segment, then we have an exhaustive set of possibilities for each segment. Furthermore, the hypotheses for each segment are temporally adjacent, i.e. they are plausible candidates to follow one another in a valid description of behavior. Given these assumptions, constructing all the consistent global interpretations is simple: Figure 4 - Table of Ds values and corresponding P-interps 31 1. Select an element of the p-interps for the earliest segment. 2. Walk down the after links between p-interps, depth first. Each such path is a consistent global interpretation. However, close data can be hard to get. Many physically important transitions occur in an instant. For example, collisions can happen very fast; we may see a ball head into a wall and head out again without actually seeing the collision. In general we must live with sparse data. Consequently, we next describe how gaps in the data can be filled. 4.1. Filling gaps in sparse data The procedure above can be modified to work on sparse data, although more ambiguity, and hence more interpretations, are likely. 1. Use the procedure above on all close subsequences. 2. For each gap between close subsequences, let S1 be the segment which ends at the start of the gap, and let S2 be the segment which starts at the end of the gap. 2.1 Select an element of p-lnterps (Sl) . 2.2 Walk down the after links through states in the envisionment until an element of p-lnterps (S2) is reached. Each such path is part of a global interpretation. There are two cases where gaps can arise. Gaps can be small because instantaneous states have been missed, or large because the sequences are sparse. An example of a large gap is when we see a 116 / SCIENCE Figure 6 - Applying Waltz filtering to P-interps Here is the program’s operation on the p-interps shown in Figure 4. ATMI: Flndlngp-interps... Global Segment lhas 4p-interps. Global Segment2 has 7 p-lnterps. Global Segment3 has 4p-lnterps. Global Segment4 has 7 p-lnterps. GlobalSegment5has 4p-lnterps. Global Segment6 has 7 p-lnterps. ATMI: Filterlngp-interps... After4rounds, 27p-lnterps excluded. ATMI: Finding global Interpretations... There Is aunlque global lnterpretatlon: (59 SlO S6 s7 s2 Sl) The qualitative states are: S9: water and alcohol, heatflowtobeaker, temperature Increasing. SlO: water and alcohol, heatflowtobeaker, alcoholbolllng, temperature constant. S6: water, heatflowtobeaker. temperature Increasing. S7: water, heatflowtobeaker, water bolllng, temperature constant. 52: empty, heatflowtobeaker, temperature Increasing. Sl: empty, thermal equlllbrlum, temperature constant. glass of water sitting on a table one day and come back the next day to find the glass turned over and a puddle of water on the floor. The above procedure is quite useful for small gaps, since there will be few states (usually one) between Sl and S2. However, explicitly generating the set of global interpretations for large gaps can lead to combinatorial explosions. In the worse case the number of interpretations is the set of all paths through the envisionment. If the envisionment has cycles, corresponding to oscillations in behavior, the number of paths can be infinite. 
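The two-step construction above amounts to enumerating paths through the after links. A minimal sketch, assuming the same p_interps list and afters table as before and assuming close data, might be:

    # Depth-first enumeration of global interpretations: one state per segment,
    # each state a temporal successor of the state chosen for the previous segment.
    def global_interpretations(p_interps, afters):
        def extend(path, i):
            if i == len(p_interps):
                yield tuple(path)
                return
            for st in p_interps[i]:
                if not path or st in afters.get(path[-1], set()):
                    path.append(st)
                    yield from extend(path, i + 1)
                    path.pop()
        yield from extend([], 0)

For the liquids example this walk returns the single interpretation (S9 S10 S6 S7 S2 S1) once the pruning of Figure 5 has run; for sparse data the same walk is applied across each gap, allowing intermediate envisionment states between the p-interps that bound the gap.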
An alternate strategy is to use the envisionment as a “scratchpad”, using the measurements to directly rule certain states in or out, and using algorithms akin to garbage collection to determme the indirect consequences of these constraints. Algorithms to do this have been implemented (see [Forbus, 1980]), and have been successfully used with the measurement interpretation program (see (Forbus, 19861). 6. Discussion This paper has presented a theory of interpreting measurements taken across time, illustrating its utility by extended example. The theory solves a central problem in qualitative physics, and has many potential applications. For example, this theory is useful for diagnosis problems because it provides a general ability to test fault hypotheses to see if they actually explain the observed behavior. Currently we are coding routines to automatically perform the signal/symbol transformation for several domains and generalizing the implementation to handle sparse data. Importantly, the theory relies on very few assumptions. The small number of assumptions makes the theory applicable to many different representations and domains. The assumptions of a total envisionment and of algorithms which can provide some qualitative description of numerical parameters are very mild restrictions which most systems of qualitative physics can easily satisfy. There is no apparent reason why this theory cannot be used with device- centered models, such as [de Kleer & Brown, 19841, [Williams, 19841, or discrete-process models, such as [Simmons, 19831, [Weld, 19841, or even equation-centered models, ,11ch as [Kuipers, 19841. In fact, we expect that the constraints OII \)ilrtitioning numerical data will be ‘~~l’llllities. the system that uses continuous An interesting opportunilj *r&es when the particular physical syslem is known in advance, as is typically the case when dealing with engineered :;y::~erns. Current qualitative reasoning programs are often slow', clspecially when generating the entire space of possible behaviors while taking different fault modes into account. However, given a description of the structure of the system and an adequate qualitative physics, the total envisionment (or several total envisionments, representing typical fault modes) can be precomputed and preprocessed to provide a set of state tables, indexed by possible values of measurements or sets of measurements. These lookup tables, while possibly quite large, could make the interpretation process very fast. It does not Seem unlikely that, given fast signal-processing hardware to perform the initial signal to symbol translation, special-purpose measurement interpretation programs which operate in real time on affordable computers might be written. As qualitative physics progresses, leading to standardized domain models and fault models, diagnostic expert systems could be automatically compiled from the structural description of a system. 5.1. Acknowledgements Discussions with Johan de Kleer, Dan Weld, Brian Falkenhainer, Dave Waltz, and Dave Chapman led to sienifican t improvements in both form and content. John Hogge provided invaluable programming assjstance. This research js supported by the Office of Naval Elesearch, contract No. N00014-85-K-Q22~ 6. Bibliography Allen, J. "Maintaining knowledge about temporal intervals”, TR-86, Department of Computer Science, January 1981 Bobrow, D., Ed. Qualitative Reasoning about Physical Systems., MIT Press, 1985. de Kleer, J. 
“Qualitative and quantitative knowledge in classical mechanics”, MIT Artificial Intelligence lab TR352, December, 1975 de Kleer, J. and Brown, J. “A qualitative physics based on confluences”, Artificial Inntelligence, 24, 1984 Forbus, K. “Spatial and qualitative aspects of reasoning about motion” Proceedings of the first annual conference of the American Association for Artificial Intelligence, August 1980. Forbus, K. “Qualitative reasoning about physical processes” Proceedings of the seventh International Joint Conference on Artificial Intelligence, August, 1981 Forbus, K. “A study of qualitative and geometric knowledge in reasoning about motion” MIT ArtiEcial Intelligence lab TR-615, February, 1981 Forbus, K. “Measurement interpretation in qualitative process theory” in Proceedings of IJCAI-8, Karlsruhe, Germany, 1983 Forbus, K. “Qualitative process theory” Artificral Intelligence, 24, 1984, pp 85-l 68. Forbus, K. “Interpreting observations of physical systems”, Report No. UIUCDCS-R-86-1248, Department of Computer Science, University of Illinois, Urbana, Illinois. Kuipers, B. “Commonsense reasoning about causality: Deriving behavior from structure”, Artificial Intelligence, 24, 1984, pp 169-204 Reddy, D.R., L.D. Erman, R D Fennel, and R B. Newly “The HEARSAY speech understandlng system, an example of the recognition process” IJCAI-3, 1973, pp 185-193 Simmons, R “Representing and reasoning about change in geologic interpretation” MIT Artificial Intelligence Lab TR-749, December, 1983 Williams, B. “Qualitative analysis of MOS circuits”, Artlficrd Intelltgence, 24, 1984 Weld, D. “Switching between discrete and continuous process models tc predict genetic activity” MlT Artificial Intelligence Lab TR-793, October 1984 ’ Although efficicrlcy can be improved. Our new implementation of QP theory (QPE) 18 currently running 95 times faster than our old implementation (GIZMO). Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 11 ‘I
1986
112
376
“COMMONSENSE” ARITHMETIC REASONING Reid Simmons The Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 Abstract “Arithmetic reasoning” can range in complexity from simple integer arithmetic to powerful symbolic algebraic reasoning of the sort done by MACSYMA. We describe an arithmetic reason- ing system of intermediate complexity called the Quantity Lat- tice. In a computationally efficient manner the Quantity Lattice integrates qualitative and quantitative reasoning, and combines inequality reasoning with reasoning about simple arithmetic ex- pressions, such as addition or multiplication. The system has proven useful in doing simulation and analysis in several domains, including geology and semiconductor fabrication, by supporting useful forms of reasoning about time and the changes that hap pen when processes occur. 1 Introduction “Cogito Ergo Sum” - I think, therefore I add “Arithmetic reasoning” denotes a broad class of inferences which range in complexity from simple integer arithmetic, such as “l+ 1 = 2,” to complex symbolic algebra, such as “sz(2 logz+ l)dz = zc2 log z + C.” We have identified a particular class of arithmetic reasoning which we believe to be very common in tasks such as simulation, planning and diagnosis. This class of arithmetic reasoning is intermediate in both expressive power and computational efficiency. This paper describes an imple- mented system called the Quantity Lattice and details its expres- sive power and potential range of applications. We also indicate where the Quantity Lattice fits in the spectrum of arithmetic reasoning tools. Certain classes of arithmetic inferences seem to crop up fre- quently in everyday life. Some involve solely qualitative relation- ships : l A < B and B 5 C. Is A < C? l Joe is taller than Amy and Jack is shorter than Amy. Is Joe taller than Jack? Others involve mixed qualitative and quantitative information : a X < 1 and Y = 2. What is the Y? relationship between X and l New York is less than 120 miles away and Washington is 138 miles away. Which city is closer? A large class of simple arithmetic reasoning problems qualitative relationships with arithmetic expressions: combine X 5 1 and Y = 2. What is the relationship between X+X and Y? I finish class at 3, then eat for at most 1 hour, and after- wards study for 2 to 3 hours, but have to be at a meeting by 6. Is there enough time to fit in a half hour nap? A geologic formation is eroded all the way down to sea level. Uplift follows. Is it possible for airborne erosion to affect that formation again? Two silicon wafers are oxidized at the same rate for the same amount of time. They are then etched for the same amount of time, but one wafer etches faster than the other. Which wafer will be thicker at the end? These questions, and many more like them, can all be an- swered by reasoning about ordinal relationships (>, <, =, 2, 5 #) between expressions and by reasoning about the value of simple arithmetic expressions (+, *, -, /). To handle cases where the values of the numbers are only partially specified, the rea- soning must also be able to combine qualitative and quantitative information. There are several systems reported in the AI lit- erature which handle various subsets of this class of inferences, including those which deal with time [Allen,Dean,Vere], space [Davis] and actions [Forbus,Simmons]. 
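To make the intended class of inferences concrete, here is a toy illustration (not the Quantity Lattice itself, which is described below) of one of the mixed qualitative and quantitative queries listed above; the variable names and bounds are examples only.

    # Given X <= 1 and Y = 2, how do X + X and Y compare?
    X = (float('-inf'), 1.0)               # bounds on X
    Y = (2.0, 2.0)                         # Y is known exactly
    X_plus_X = (X[0] + X[0], X[1] + X[1])  # naive bound propagation: (-inf, 2.0]
    if X_plus_X[1] <= Y[0]:
        print("X + X <= Y")                # upper bound of X+X meets lower bound of Y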
The Quantity Lattice is an arithmetic reasoning system which handles a wider class of arithmetic inferences than the systems referred to above. The remainder of this paper describes the Quantity Lattice and details the inferences it supports, discusses why the Quantity Lattice is useful and compares it with other arithmetic reasoning systems. 2 The Quantity Lattice The primary significance of the Quantity Lattice’ is that it smooth- ly integrates relationships, arithmetic ezpressions, qualitative and quantitative information, permitting it to handle a wide range of “commonsense” arithmetic inferences. By “integrates” we mean that adding one type of knowledge may constrain other types and thereby enable additional inferences to be made. For example, if we tell the system that “Y = X + 5” it will infer the additional qualitative constraint “Y > X,” even though it does not yet know anything about the actual values of X or Y. If we now tell the system that “X < 2” it will deduce the additional quantitative constraint that “Y < 7.” The Quantity Lattice has been used for reasoning about time ‘The name “Quantity Lattice” is historical representation is a mathematical lattice. and does not imply that the 118 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. and about the effects of processes [Mohammed,Simmons,Williams] In temporal reasoning it has been used to maintain a consistent partial order of time points and to answer queries about relation- ships between time points and about the durations of intervals. Its main application, however, has been in reasoning about the effects of processes. Consider, for example, the geologic model which states “the height of a formation after uplift equals the height before uplift plus the amount of uplift” (from [Simmons!). If the only numeric information known is that the amount of up- lift is positive, the Quantity Lattice can still infer that the height after uplift is strictly greater than the height before uplift. If we later tell the system that the height before uplift is at least 100 and the amount of uplift is at least 50, it can then infer that the height after uplift is at least 150. The Quantity Lattice supports such inferences in a computationally efficient manner. An obvious question is “why implement an arithmetic reason- ing system when existing symbolic algebra packages like MAC- SYMA can perform the same class of inferences and more?” The main answer is efficiency. The Quantity Lattice is designed to efficiently handle problems in which there are thousands of vari- ables, expressions and inequalities, but where each expression contains only a small fraction of the total number of variables. The algorithms and data structures used by the Quantity Lattice are designed to take advantage of this type of arithmetic problem which is often encountered in doing commonsense reasoning such as in the geology or semiconductor manufacture domains. Another major advantage of the Quantity Lattice is that it maintains justifications for all its inferences. This dependency information facilitates doing retraction and is used to generate explanations of how two quantities are related. The system de- scribed in [Mohammed] to diagnose failures in semiconductor fab- rication depends in large part on the explanations generated by the Quantity Lattice to determine how attributes of the wafer relate to parameters of the manufacturing process. 
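The two inferences just mentioned can be replayed in a few lines. This is only a toy rendering, assuming plain numeric bounds; the actual mechanisms (graph search, constraint propagation, interval and relational arithmetic) are described in Section 2.2.

    # Asserting Y = X + 5, then X < 2.
    added_constant = 5

    # Qualitative step: because the added constant is positive,
    # the expression X + 5, and hence Y, must exceed X.
    if added_constant > 0:
        print("Y > X")

    # Quantitative step: the new bound on X propagates through the formula.
    x_upper = 2.0
    y_upper = x_upper + added_constant
    print("Y <", y_upper)                  # -> Y < 7.0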
2 .l Representat ion The Quantity Lattice supports reasoning about ordinal relation- ships between expressions whose values are real numbers. An ordinal relationship is one of >, <, =, 2,s) #. An expression is a simple expression, such as “A,” a numeric expression, such as “5,” or an arithmetic expression, such as “B + 5.” Expressions are represented as nodes in a digraph. The nodes are called quantities and the arcs of the graph are called relation- ships. An arithmetic expression is simply a quantity with an associated formula, a list of its operator and arguments. Information is added to the Quantity Lattice by asserting or retracting relationships between expressions, such as “A = B + 5.” The system constrains the value of an expression by reasoning about its position in the graph and, if it is an arithmetic expression, by the values of its arguments. It uses the assertions to infer relationships between expressions and to infer upper and lower numeric bounds on the values of expressions. The upper and lower numeric bounds are represented by as- sociating a real valued interval with each quantity. The interval indicates that the actual value of the expression falls somewhere within the interval range. For example, if the only constraint on A is that it is positive, A would have the interval (0, oo], denoted by A E (O,OO].~ ‘A parenthesis indicates a half-open interval, a bracket indicates a half- closed interval. As a simple example, the two equations “A = B t 5” and “B > 0” are represented by five quantities : A, B, 5, 0 and (B + 5). The quantities A and (B + 5) are linked by an ‘<=” arc in the graph and B and 0 are linked by a “2” arc. The quantity 5 has the interval [5,5] and the quantity 0 has the interval [0, 0). 2.2 Inferences Two types of inferences are performed by the Quantity Lattice : (i) determining the relationship between two quantities, (ii) constraining the value of an arithmetic expression. These types of inferences are carried out by using five different reasoning techniques : 1. 2. 3. 4. 5. Determining relationships using graph search Determining relationships using numeric constraint propa- gation Constraining the value of arithmetic expressions using in- terval arithmetic Constraining the value of arithmetic expressions using re- lational arithmetic Constraining the value of arithmetic expressions using con- stant elimination arithmetic These reasoning techniques are integrated in the sense that infer- ences performed by one technique can be used by another to per- form further inferences. For example, the relational arithmetic technique infers ordinal relationships between an arithmetic ex- pression and its arguments. These relationships can be used by the graph search technique to find new relationships between quantities. 2.2.1 Graph Search There are two ways for the system to determine the relation- ship between the quantities A and B - one qualitative and one quantitative. The qualitative technique searches the graph of quantities using a simple breadth-first search to find a path be- tween the quantities. Figure 2 presents a small graph in which we are trying to find the relationship between A and B. Each quantity is marked by the order in which it is searched and by its relationship to A. Relationships are found by using a simple transitivity table (see Figure 1). For example, since A = C and C 5 E we can infer that A < E by finding the intersection of the column marked = and the row marked < in the transitivity table. 
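The transitivity lookup can be pictured as a small table. The sketch below is a partial reconstruction for illustration, covering only the <, <=, =, >, >= entries, with '?' standing for an unknown relationship; TRANS[r1][r2] gives the relationship of A to E when A r1 C and C r2 E.

    TRANS = {
        '=':  {'=': '=',  '<': '<', '<=': '<=', '>': '>', '>=': '>='},
        '<':  {'=': '<',  '<': '<', '<=': '<',  '>': '?', '>=': '?'},
        '<=': {'=': '<=', '<': '<', '<=': '<=', '>': '?', '>=': '?'},
        '>':  {'=': '>',  '<': '?', '<=': '?',  '>': '>', '>=': '>'},
        '>=': {'=': '>=', '<': '?', '<=': '?',  '>': '>', '>=': '>='},
    }

    def compose(r1, r2):
        return TRANS.get(r1, {}).get(r2, '?')

    print(compose('=', '<='))    # A = C and C <= E give A <= E

The breadth-first search composes the relationship accumulated along a path with the label of each arc it crosses, and a branch is abandoned as soon as the composition becomes '?'.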
Notice that the search along the bottom branch does not proceed past G because its relationship to A is unknown. The standard breadth-first search will find any path between Figure 1: Transitivity Table for Ordinal Relationships +-g++g’;’ < < < ?? ?? / < ?? - - > 77 77 > > > ?? . . . . > 77 . . - 1 ?? > > > ?? = < i < > : = #/ # ?? : ii ?? I ?? # ?? 1 ?? means that the relationship is unknown. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 119 (, ) B ‘< \ (W D , (5,??) )G = ‘J Figure 2: Graph Search of the Quantity Lattice Figure 3: Combining Paths to Constrain the Relationship the two quantities. However, in some cases there are multiple itly “knows” the ordering of the reals (e.g. from “A < 1” and “B > 2,” infer “A < B”). However, this method alone is not sufficient to answer ques- tions of the form “if A > 1, B < 0 and A < C what is the relationship between B and C?” because we first need to de- termine the upper and lower bounds of the intervals of B and C. This is done by performing a numeric constraint propagation whenever a relationship is asserted between two quantities. This propagation ensures that the intervals of all quantities are con- sistent with the assertion. For example, if we assert “A < C” the system constrains the upper limit of A’s interval to be less than the upper limit of C’s interval. Similarly, it constrains the lower limit of C’s interval to be greater than A’s lower limit. In turn, these constraints propagate to all quantities which are <, 5, or = to A and >, 2, or = to C. This constraint propa- gation algorithm has the same computational complexity as the paths between two quantities with different paths yielding dif- ferent relationships. Since we want to find the most constrained relationship between the quantities (where <,>, and = are more constrained than <,>: and #) we need to modify the search slightly to combine the different relationships found via different paths. For example, Figure 3 presents a graph in which we are try- ing to find the relationship between X and Y. The quantities are again marked with the order of the search and relationship found so far. Notice that W and Y are visited twice, and that the second time they are visited the relationship recorded on the quantity is the combination of the relationships found via the multiple paths. For instance, following the path X, T, W the relationship is < but following the path X, U, V, W the rela- tionship is 2. The combination of 5 and 2 yields =, which is the most constrained relationship between X and W. Thus the most constrained relationship between X and Y is also =. In general, a quantity is revisited only if the relationships found via separate paths combine to yield a more constrained relationship. graph search Section 2.2.1 algorithm, for reasons similar to those presented in Both the qualitative and quantitative inference techniques described above perform consistency checking. When the user asserts a relationship between two quantities, the Quantity Lat- tice checks to see if the relationship is consistent. This involves searching the Quantity Lattice graph to make sure that the in- verse relationship cannot be inferred from the relationships al- ready present. Thus, asserting a relationship in the Quantity Lattice is of complexity O(R). When performing numeric con- straint propagation, the system checks to ensure that the upper bound of an interval is never less than its lower bound. If an in- consistency is found, an exception is raised which the user must handle. 
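A minimal sketch of the numeric constraint propagation and consistency check just described, assuming plain closed intervals (the open/closed endpoints and the epsilon tolerance of the implementation are omitted):

    class Inconsistency(Exception):
        pass

    def assert_leq(intervals, a, c):
        """Tighten the bounds of a and c so they are consistent with a <= c."""
        a_lo, a_hi = intervals[a]
        c_lo, c_hi = intervals[c]
        new_a = (a_lo, min(a_hi, c_hi))    # a's upper limit cannot exceed c's
        new_c = (max(c_lo, a_lo), c_hi)    # c's lower limit cannot fall below a's
        for q, (lo, hi) in ((a, new_a), (c, new_c)):
            if lo > hi:
                raise Inconsistency(q)     # signal the user to retract a justification
            intervals[q] = (lo, hi)
        # a full implementation would now propagate to quantities <=, >=, or = to a and c

    iv = {'A': (1.0, 10.0), 'C': (float('-inf'), 4.0)}
    assert_leq(iv, 'A', 'C')
    print(iv)                              # {'A': (1.0, 4.0), 'C': (1.0, 4.0)}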
Typically, this entails finding the justifications underly- ing the inconsistency and retracting one of them. Although this extension to the standard breadth-first search means that some quantities might be visited more than once, since any combination of unconstrained relationships yields a constrained one, they are visited at most twice. Thus the com- plexity of this algorithm is O(R), where R is the number of re- lationships (i.e. arcs) in the graph, the same order of complexity as that of standard breadth-first search. The result of a search is cached by adding a new relationship the graph. This relationship is justified by a path between the to quantities. There may actually be many equally constraining paths, but for efficiency only one is found and recorded as the justification. 2.2.3 Interval Arithmetic 2.2.2 Numeric Constraint Propagation One of the important features of the Quantity Lattice is that it combines reasoning about ordinal relationships with reasoning about arithmetic expressions. The Quantity Lattice maintains constraints between the two types of knowledge in order to pro- The other method of determining the relationship between two quantities is quantitative. The ordering between two quantities can be determined if the intervals associated with the quantities do not overlap, except possibly at their endpoints,3 since the value of a quantity is constrained to lie within the interval. For example, if A E (- 00, 23, B E [2,00) and C E [5, lo] then we can infer that A 5 B and A < C, but cannot infer anything about the relationship between B and C. In addition, equality can be inferred if both intervals are single points and they have the same value. vide a more expressive system. As mentioned, an arithmetic expression such as “B -L 5” is represented as a quantity with an associated formula. An arith- metic expression can be placed in the Quantity Lattice graph like any other quantity by asserting relationships between it and other quantities, such as “A = B + 5.” Thus, the value of an arithmetic expression may be constrained by the values of other quantities as described in the previous section. This quantitative method for determining relationships be- tween quantities has two advantages over the graph search tech- nique : (i) it is a constant time algorithm, and (ii) it can detect relationships not explicit in the graph, since the system implic- 3The implementation actually allows the intervals to overlap by some c to compensate for the approximate nature of computer arithmetic. There are three other techniques used by the Quantity Lat- tice to constrain the value of an arithmetic expression further. One technique is quantitative (interval arithmetic) and the other two are qualitative (relutionaf nrithmetic and constraint efimina- tion arithmetic). Interval arithmetic computes the value of an 120 / SCIENCE [x4 4 + [Yh YU] = [(xl + Yl), (5u-t w)] [d, xu] * [yl, yu] E 1 min(sl * yl, 21 * yu, 2u * yl, zu * YU), max(sf * yf, xl * yu, zu * yf, 5~ * YU) I [& 4 - [Y4 YU] = [(x1 - YU), (211 - Yf)] -[x1, xu] s [-ml, -xl] if (yf < 0) A (yu > 0) min(sf/yf,zf/yu,zu/yf,z~/yu), max(zf/yf, x1, yu, zu/yf, ZU, yu) I otherwise Figure 4: Interval Arithmetic Operators arithmetic expression by applying the arithmetic operator of the formula to the endpoints of the intervals of its arguments. 
For example, “]3,6) + [-1,5]” yields “[2, ll).” The system main- tains the most constrained interval by applying interval arith- metic when the arithmetic expression is first constructed and whenever the interval of an argument changes. Also, constrain- ing the arithmetic expression through interval arithmetic may in turn constrain the other quantities related to the arithmetic expression via numeric constraint propagation. Figure 4 presents some of the operators used by Quantity Lattice in doing interval arithmetic.’ Although just five basic operations are shown, it is quite easy to add other arithmetic operators. In particular, the trigonometric and absolute value operatorswere added for the version of the Quantity Lattice used in the geologic reasoner of [Simmons]. As an example of interval arithmetic, consider the following set of constraints : l A 2 3, A 5 4, (i.e. A E [3,4]) a B 2 1, B 5 4, (i.e. B E [1,4]) 0 C = 2, (i.e. C E [2,2]) . D= (B*C)/(A+ B) Using interval arithmetic, the system computes that (B * C) E [2,8] and (A + B) E [4,8] constraining D E [0.25,2]. If we now assert “B 2 C” numeric constraint propagation will constrain B E [2,4] (see Section 2.2.2). The system will then recompute the arithmetic expressions, constraining D E [0.5,1.6]. Unlike many constraint propagation systems, for efficiency reasons constraints in the Quantity Lattice are not bi-directional. They have a preferred direction - constraints are propagated up to an-arithmetic expression from its arguments. To achieve inferences in the other direction, the user must explicitly assert constraints for each argument of the arithmetic expression in terms of the expression and its other arguments. For example, given the expression “(A + B)” one would assert “B = (A+ B) - A” and “A = (A + B) - B.” ‘For presentation purposes, the axioms ignore whether the intervals are open or closed. 2.2.4 Relational Arithmetic Interval arithmetic has some serious limitations. First, inter- val arithmetic will often compute intervals which are larger t)han commonsense dictates. For example, suppose we know that A > B, B E [0, 1) and A E (O,l]. Interval arithmetic computes that (A - B) E (- 1, l] but we should be able to infer that (A - B) E (O,l] since A is greater than B. The problem is even clearer when we realize that by using interval arithmetic we cannot determine, in general, that A - A is zero. For example, if A E [l, 21 then by interval arithmetic (A - A) E [- 1, l] since [ 1,2] - [ 1,2] = [ - 1, 11. Only by knowing that both intervals refer to the same quantity can we infer that the answer is iO,O]. Another limitation is that often interval arithmetic cannot increase our knowledge at all. For example, if all we know is that “X = Y + 5,” then Y E (-00, co) and by interval arithmetic we can only constrain X E (-oo,oo). Using interval arithmetic we gain no information about the relationship between X and Y, although we know, in fact, that X is greater than Y. We have compensated for both these deficiencies in interval arithmetic by combining it with an arithmetic technique based on ordering relationships. Relational arithmetic maintains con- straints on the qualitative relationship of an arithmetic expres- sion to its arguments. The relationship depends on the relation- ship of the expression or its arguments to the identity value for the arithmetic operator of the expression. Figure 5 presents axioms encoding this relational arithmetic technique. 
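Before turning to those axioms, note that the interval operators of Figure 4 reduce to simple endpoint arithmetic. The sketch below ignores the open/closed distinction, as the figure's own footnote does, and requires the divisor interval to exclude zero; it reproduces both worked examples above.

    def i_add(x, y): return (x[0] + y[0], x[1] + y[1])
    def i_sub(x, y): return (x[0] - y[1], x[1] - y[0])
    def i_mul(x, y):
        ps = (x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1])
        return (min(ps), max(ps))
    def i_div(x, y):
        assert y[0] > 0 or y[1] < 0, "divisor interval must exclude zero"
        qs = (x[0]/y[0], x[0]/y[1], x[1]/y[0], x[1]/y[1])
        return (min(qs), max(qs))

    print(i_add((3, 6), (-1, 5)))             # (2, 11), cf. [3,6) + [-1,5] = [2,11)
    A, B, C = (3, 4), (1, 4), (2, 2)
    print(i_div(i_mul(B, C), i_add(A, B)))    # (0.25, 2.0), i.e. D in [0.25, 2]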
Using these axioms and the examples presented above, the system infers that since 5 > 0 then (Y + 5) > Y and there- fore X > Y. Similarly, the system infers that since A > B then (A - B) > 0. This inference, combined with numeric constraint propagation, constrains the lower bound of (A - B) to be greater than 0, while interval arithmetic constrains the upper bound to be less than or equal to 1. Thus integrating the two techniques constrains (A - B) E (0, l] w ic is the smallest consistent in- h h terval for this problem. The complexity of the relational arithmetic algorithm is O(R). The algorithm includes three steps : (i) performing several com- parisons of quantities to determine which axioms are applicable, Figure 5: Axioms for Relational Arithmetic For ref E {<, I, >, >,=,#t) x ref 0 =+ (z + y) ref y y rel 0 * (x + y) ref x 2 ref y * (x - y) rel 0 (x>OAy>O) * ( 2 ref 1 * (z * y) ref y) A (y ref 1 * (x * y) rel x) (s>OAy<O) =b- ( x ref 1 3 y rel (z * y)) A (y rel -1 3 (z * y) re/ -z) (x<OAy>O) * ( x ref -1 * (x * y) rel -y) A (y ref 1 =k- 2 Tel (z * y)) (x<OAy<O) 3 (xref -l=+- y rel (x * y)) A (y rel -1 * --2 rel (z * y)) (x > Or\ y > 0) * ((x rel y * (z/y) ref 1)) (x > OAy < 0) * (( 2 ref -y =F- -1 ref (z/y))) (x < Or\ y > 0) * ((x ref -y * (x/y) rel -1)) (2 < 0 A y < 0) =3 ((x rel y 3 1 ref (x/y))) Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 12 1 For rel E {<, 5, >, 2, =, #} z rel y * (z + Z) rel (y + 2) 2 rel y =k- (z - 2) rel (y - t) z rel y * (z - y) rel (2 - z) z > 0 A 2 rel y 3 (z * 2) rel (y * 2) z<OAzrely * (y * 2) rel (z * 2) z 2 OA 2 rel y * (z/z) rel (y/z) 5 5 0 A z rel y 3 (y/z) rel (z/z) Figure 6: Axioms for Constant Elimination Arithmetic (ii) asserting the newly inferred relationship and (iii) per- forming a numeric constraint propagation. All of these steps are O(R) although if one of the quantities being compared is nu- meric the comparisons can be done in constant time, such as in the case of the axiom “z rel 0 + (z + y> rel y.” 2.2.5 Constant Elimination Arithmetic All the above techniques are still not powerful enough to infer that A > C if A = B + X, C = D + X and B > D. To solve this problem, the system must be able to infer that if the same amount is added to two expressions then the results are related in the same way as the original expressions are. That is, A rel B =+ (A + C) rel (B + C). Figure 6 presents axioms which enable the system to reason about relationships between two arithmetic expressions. 5 We call this constant elimination arithmetic because it gives the system the power of a very simple algebraic simplifier - one which can eliminate constants from expressions. Note that these axioms extend the power of the axioms in Figure 5 which infer relationships involving only one arithmetic expression and one simple expression. In fact, the axioms of Figure 5 are only special cases of the ones in Figure 6. For example, by substituting Y = 0 we derive X rel 0 3 (X + 2) rel (0 + 2) h h w ic simplifies to the addition rule in Figure 5. However, for efficiency we have chosen to implement the special cases of Figure 5 separately. Also, for efficiency, we apply the axioms in Figure 6 in a con- sequent manner - that is, only for those arithmetic expressions actually in the system. Otherwise we could create an explo- sion of arithmetic expressions of the form A + q for all quanti- ties q in the Quantity Lattice. Finally, we note that an even more general form of the axioms in Figure 6 are of the form X rel Y =+ (X + 2;) rel (Y + Zj) for any & = Zj. 
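Returning to the relational-arithmetic example of Section 2.2.4, a single Figure 5 rule combined with interval subtraction is enough to replay the (A - B) result; the sketch below is illustrative only and again treats intervals as closed.

    A, B = (0.0, 1.0), (0.0, 1.0)              # A in (0,1], B in [0,1); openness ignored
    lo, hi = A[0] - B[1], A[1] - B[0]          # interval arithmetic alone gives (-1, 1)

    # Relational rule: x rel y implies (x - y) rel 0; here A > B, so (A - B) > 0.
    # Numeric constraint propagation then lifts the lower bound of A - B to 0.
    lo = max(lo, 0.0)
    print((lo, hi))                            # (0.0, 1.0), i.e. (A - B) in (0, 1]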
We limit the expressive power for the sake of efficiency by insisting that i = j. The algorithm for constant elimination arithmetic is O(E * R) where E is the number of arithmetic expressions in the sys- tem. The term R arises because to determine whether the an- tecedent clause of each axiom is true the Quantity Lattice must be searched to determine the relationship between two quanti- ties. The term E arises because when an expression of the form “A op B” is constructed, the appropriate axiom in Figure 6 must be applied for each existing expression of the form “A op C” or “C op B.” In practice, however, the number of such expressions 5There + and are also permutations of the axioms for the commutative operators is usually a rather small percentage of E, and so the average complexity is much better than the worst case complexity. Although the computational complexity of this technique is greater than that of the other four arithmetic reasoning tech- niques presented above, we included constant elimination arith- metic in the Quantity Lattice because the inferences it supports are often needed in the domains we are exploring. For example, in the semiconductor fabrication domain [Mohammed] we often need to make inferences like “two silicon regions which start out the same thickness will end up the same thickness if they are oxidized at the same rate for the same amount of time.” 3 Results Combining the five reasoning techniques of(i) graph search, (ii) nu- meric constraint propagation, (iii) interval arithmetic, (iv) rela- tional arithmetic and (v) constant elimination arithmetic enables the Quantity Lattice to perform a large range of “commonsense” arithmetic inference in a fairly efficient manner. It can, for in- stance, handle all the questions listed in the introduction in O(R) time except for the last question which takes O(E * R). The Quantity Lattice has been tested in several domains, in- cluding geology [Simmons], semiconductor fabrication [Mohammed] and reasoning about temporal constraints [Williams]. In both the geology and semiconductor fabrication domains the Quantity Lattice is used to support qualitative simulation - it maintains and reasons about a partial order of time points which repre- sent when processes occur and when objects are created and de- stroyed, and it helps in reasoning about the changes produced by processes. For example, the effect of the geologic process “uplift” would be represented by equations stating that the height of a formation after the process equals the height before the process plus some positive quantity “uplift-amount.” From this infor- mation the Quantity Lattice would infer that the new height is greater than the old height. To measure the performance of the Quantity Lattice, we used an example from the semiconductor fabrication domain and sim- ulated the fabrication of a pair of resistors using the models of IMohammed]. The simulation involved 27 processing steps and took 384 seconds of CPU time on a Symbolics 3600. Of this, 112 seconds or 29’% was used by the Quantity Lattice. The simulator asserted 3125 relationships among 1357 quantities, taking an av- erage of 0.01 seconds per assertion. It constructed 660 arithmetic expressions, of which about one-third were binary additions. The simulator queried the Quantity Lattice to determine the relationship between quantities over 23,006 times taking an av- erage of 0.003 seconds per query. 
Of this, the vast majority were for relationships between time points and most were relation- ships that the system already knew or had already inferred and cached. In fact, of the 23,507 queries 14,736 were already known to the system and were answered taking an average of 0.0605 sec- onds per query and 751 were determined quantitatively in con- stant time (see Section 2.2.2) taking an average of 0.002 seconds per query. The remaining 8020 were determined using graph search taking an average of 0.0075 seconds per query. Of the 8020 graph searches, 1884 new relationships (paths) were found between quantities. In the geologic domain both a qualitative and quantitative simulation are performed [Simmons], The qualitative simulation is done using the same simulator as in the semiconductor fab- rication domain and the performance of the Quantity Lattice is 122 / SCIENCE similar to that described above. For the quantitative simulation, much more emphasis is placed on constructing arithmetic expres- sions and determining real values for the parameters of processes. Thus the numeric constraint propagation and interval arithmetic techniques are used more heavily than for the qualitative simu- lation. Using a 7 step geology simulation example, the quantitative simulation constructed 718 arithmetic expressions, more than was constructed for the 27 step semiconductor fabrication ex- ample, while asserting less than half as many relationships as for the semiconductor example. Constructing the arithmetic expres- sions consumed 45% of the time spent in the Quantity Lattice, as opposed to only 19% for the qualitative simulation. When re- lationships were asserted between quantities 51% of the time was spent propagating numeric constraints and 29% was spent check- ing for consistency. These figures are reversed for the qualitative simulation in which 56% of the time was spent doing consistency checking with only 26% needed for constraint propagation. 4 Relation to Other Work The Quantity Lattice was designed as a compromise between ex- pressive power and computational complexity. Efficiency of op eration was gained by taking advantage of the expected structure of the problem - many loosely connected variables and expres- sions. This is in contrast to a system like MACSYMA which is designed to handle sets of equations where each equation involves most of the variables, that is, the resulting coefficient matrix will be dense rather than sparse. This expectation leads one to use more powerful algebraic techniques like solving systems of equa- tions, which are polynomial in complexity, rather than using the techniques described above which are mostly linear in the number of equations. There are symbolic algebra algorithms which make use of the structure of the domain to achieve performance comparable to the Quantity Lattice. However, these algorithms are not actually used in MACSYMA because it is designed to solve systems of equations in general. For example, the types of domains handled by the Quantity Lattice are amenable to solution by setting up the equations in band matrix format and representing the matrix of equations as a linked list so that inequalities can be easily inserted into the correct row to preserve the band matrix format. When one has only a few expressions and inequalities, solving systems of equations as MACSYMA does is not too expensive. 
When there are thousands of expressions and inequalities, as in our simulation domains, making inferences by symbolically solv- ing equations becomes computationally infeasible. On the other hand, there are many inferences which MACSYMA handles that the Quantity Lattice cannot. For example, the Quantity Lattice does not .do simplification. Thus, in general, it cannot deduce that X = (X + Y) - Y. The appropriate strategy is to have the problem solving system reason about the class of inferences it needs to make - using the computational efficiency of the Quantity Lattice for simple “commonsense” inferences and do- ing the more complex (and computationally inefficient) problems using a symbolic algebra package. On the other side of the spectrum from general purpose sym- bolic algebra systems there are systems which perform some sub set of the inferences provided by the Quantity Lattice. Like the Quantity Lattice, they use specialized representations to make the inference algorithms more computationally efficient. The temporal reasoning system of [Allen] uses a representa- tion similar to the one used to store qualitative relationships in the Quantity Lattice. Although Allen’s system uses time inter- vals and we use time points, the basic difference is really in the implementation. Where Allen’s system computes the transitive closure of the relationships every time an assertion is made, the Quantity Lattice infers a relationship only upon demand. Al- though it might seem to be more efficient to compute the clo- sure, in practice we have found that the closure algorithm infers many more relationships than are actually needed, and is thus less efficient overall. For example, in the semiconductor fabri- cation simulation example presented in Section 3 there are 13Si quantities and therefore over 1.8 million potential relationships between quantities. However, during the simulation only 5009 (0.27%) of th ose relationships are actually needed. In designing problem solvers which use the Quantity Lattice, we have found it useful to be able to tell the system “1 am inter- ested in the relationship between A and B - let me know if it ever changes.” This feature, also used by [Dean], is implemented with a mechanism which associates demons with relationships in the Quantity Lattice graph. If the relationship changes, then the demon is fired. The main problem with this scheme is that in order to be complete, the system must explicitly check all relationships which have demons to see if they have changed whenever any constraint is added. This would involve one graph search for each such relation and is clearly not reasonable computationally. A com- promise position is to check only those relationships which are reachable by a path length of N or less from the relationship was added. Although this scheme does not necessarily cover all the relationships which logically might have changed, surprisingly an N of only 1 has been found to be sufficient in practice for the domains -which we have explored. This same scheme has also been used by the time-map manager of [Dean] which uses an N of greater than 1. Several researchers have incorporated some degree of quali- tative and quantitative numeric reasoning. The DEVISER plan- ning system [Vere] maintains qualitative temporal relationships in the form of a plan network, but associated with each plan node ate numeric intervals which indicate the range of start and end times for the node. 
The interpretation of these in- tervals is identical to that of the Quantity Lattice - the real value lies somewhere in the interval. DEVISER uses techniques similar to interval arithmetic and numeric constraint propaga- tion described in Section 2.2.2 to maintain the constraint that start time = end time + duration, where duration is a real num- ber. Inconsistencies in a plan’s schedule are detected if the upper bound of an interval is constrained to be less than its lower bound. The Quantity Lattice can perform the same inferences with two major advantages. First, the duration of a plan step can be represented as an arbitrary expression, such as “B + 5.” In DEVISER the duration must be a real number and the temporal constraints cannot be applied until the duration is known exactly. Second, the Quantity Lattice integrates qualitative and quanti- tative knowledge in such a way that new qualitative relationships can be inferred as more quantitative information is known (see Section 2.2.2). This integration is lacking in Vere’s temporal rea- soner. The system of [Allen] represents the quantitative duration of time intervals, but does not allow durations to be added to- gether - something which is necessary to achieve at least the level of performance that Vere’s system reaches. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 123 A system which approaches the Quantity Lattice in expres- sive power is the fuzzy spatial reasoner of IDavis]. A similar fuzzy number representation is used in [Dean], but it only does addi- tion and subtraction and the intervals are bounded by integers, not reals. As in the Quantity Lattice, [Davis] represents the value of expressions using intervals and performs constraint propaga- tion to narrow the intervals. However, evaluation of arithmetic expressions is done using a Monte Carlo technique rather than interval arithmetic. This technique overcomes some of the dis- advantages of pure interval arithmetic, but it is rather expensive computationally. The representation of qualitative relationships is handled by placing one quantity in a local frame of reference of another quantity. However, it is not clear whether a quan- tity can be placed in more than one local frame of reference, that is, whether qualitative partial orders can be represented. In any event, there seems to be no facility for inferring new qual- itative relationships as can be done using relational arithmetic techniques, so the range of inferences performed is still smaller than the Quantity Lattice. 5 Conclusions “Commonsense” arithmetic reasoning is an important form of reasoning. We have presented the Quantity Lattice, a system which performs many of these commonsense arithmetic infer- ences. We believe that the Quantity Lattice offers a reasonable balance between expressive power and computational complexity. The range of inferences performed by the Quantity Lattice was carefully chosen by observing the types of arithmetic rea- soning used in doing the qualitative and quantitative reasoning tasks needed in the domains of geologic interpretation [Simmons] and semiconductor fabrication diagnosis [Mohammed]. The algo- rithms used by the Quantity Lattice are designed for the range of assertions and queries commonly found in these and similar real-world domains. The resulting system smoothly integrates qualitative and quantitative information, ordinal relationships, and arithmetic expressions. 
A system which approaches the Quantity Lattice in expressive power is the fuzzy spatial reasoner of [Davis]. A similar fuzzy number representation is used in [Dean], but it only does addition and subtraction, and the intervals are bounded by integers, not reals. As in the Quantity Lattice, [Davis] represents the value of expressions using intervals and performs constraint propagation to narrow the intervals. However, evaluation of arithmetic expressions is done using a Monte Carlo technique rather than interval arithmetic. This technique overcomes some of the disadvantages of pure interval arithmetic, but it is rather expensive computationally. The representation of qualitative relationships is handled by placing one quantity in a local frame of reference of another quantity. However, it is not clear whether a quantity can be placed in more than one local frame of reference, that is, whether qualitative partial orders can be represented. In any event, there seems to be no facility for inferring new qualitative relationships as can be done using relational arithmetic techniques, so the range of inferences performed is still smaller than that of the Quantity Lattice.

5 Conclusions

"Commonsense" arithmetic reasoning is an important form of reasoning. We have presented the Quantity Lattice, a system which performs many of these commonsense arithmetic inferences. We believe that the Quantity Lattice offers a reasonable balance between expressive power and computational complexity.

The range of inferences performed by the Quantity Lattice was carefully chosen by observing the types of arithmetic reasoning used in doing the qualitative and quantitative reasoning tasks needed in the domains of geologic interpretation [Simmons] and semiconductor fabrication diagnosis [Mohammed]. The algorithms used by the Quantity Lattice are designed for the range of assertions and queries commonly found in these and similar real-world domains. The resulting system smoothly integrates qualitative and quantitative information, ordinal relationships, and arithmetic expressions. The various types of knowledge constrain one another to enable more powerful inferences to be performed.

At the same time, the computational complexity is quite modest. The worst case for each assertion or inference is O(E * R) while, in practice, the average case is much better, as only small portions of the Quantity Lattice need to be traversed for each operation. Finally, all the inferences performed by the Quantity Lattice are recorded along with their justifications, which facilitates retraction and the generation of explanations.

I would like to thank Randy Davis, Walter Hamscher, Dan Carnese, Mark Shirley and Jeff Van Baalen for thoughtful suggestions for improving this paper. Thanks to Rich Zippel for his insights on MACSYMA. I also thank Brian Williams and John Mohammed for their suggestions gained through using the Quantity Lattice.

References

[Allen] Allen, James. "Maintaining Knowledge About Temporal Intervals," CACM, vol. 26, no. 11, 1983.
[Davis] Davis, Ernest. "Representing and Acquiring Geographic Knowledge," Yale University Research Report 292, 1984.
[Dean] Dean, Thomas. "Temporal Imagery: An Approach to Reasoning about Time for Planning and Problem Solving," Yale University Research Report 433, October 1985.
[Forbus] Forbus, Kenneth. "Qualitative Process Theory," AI Journal, vol. 24, 1984.
[Mohammed] Mohammed, John; Simmons, Reid. "Qualitative Simulation of Semiconductor Fabrication," AAAI-86, Philadelphia, PA.
[Simmons] Simmons, Reid. "Representing and Reasoning About Change in Geologic Interpretation," MIT AI Technical Report 749, December 1983.
[Vere] Vere, Steven. "Planning in Time: Windows and Durations for Activities and Goals," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, no. 3, May 1983.
[Williams] Williams, Brian. "Doing Time: Putting Qualitative Reasoning on Firmer Ground," AAAI-86, Philadelphia, PA.
A REASONING MODEL BASED ON AN EXTENDED DEMPSTER-SHAFER THEORY *

John Yen
Computer Science Division
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley, CA 94720

ABSTRACT

The Dempster-Shafer (D-S) theory of evidence suggests a coherent approach to aggregate evidence bearing on groups of mutually exclusive hypotheses; however, the uncertain relationships between evidence and hypotheses are difficult to represent in applications of the theory. In this paper, we extend the multivalued mapping in the D-S theory to a probabilistic one that uses conditional probabilities to express the uncertain associations. In addition, Dempster's rule is used to combine belief update rather than absolute belief to obtain results consistent with Bayes' theorem. The combined belief intervals form probability bounds under two conditional independence assumptions. Our model can be applied to expert systems that contain sets of mutually exclusive and exhaustive hypotheses, which may or may not form hierarchies.

I INTRODUCTION

Evidence in an expert system is sometimes associated with a group of mutually exclusive hypotheses but says nothing about its constituents. For example, a symptom in CADIAG-2/RHEUMA (Adlassnig, 1985a)(Adlassnig, 1985b) may be supportive evidence for rheumatoid arthritis, which consists of two mutually exclusive subclasses: seropositive rheumatoid arthritis and seronegative rheumatoid arthritis. The symptom, however, carries no information in differentiating between the two subclasses. Therefore, the representation of ignorance is important for the aggregation of evidence bearing on hypothesis groups.

Two previous approaches to the problem were based on Bayesian probability theory (Pearl, 1985) and the Dempster-Shafer (D-S) theory of evidence (Gordon and Shortliffe, 1985). While the Bayesian approach failed to express the impreciseness of its probability judgements, the D-S approach was not fully justified because of the difficulty of representing uncertain relationships between evidence and hypotheses in the D-S theory. As a result, the belief functions of the D-S approach are no longer probability bounds.

In this paper, we propose a reasoning model in which degrees of belief not only express ignorance but also form interval probabilities. The multivalued mapping in the D-S theory is first extended to a probabilistic one, so the uncertain relationships between evidence and hypothesis groups are described by conditional probabilities. The probability mass distribution induced from the mapping is then transformed to the basic certainty assignment, which measures belief update. Applying Dempster's rule to combine basic certainty assignments, we obtain the belief function that forms probability bounds under two conditional independence assumptions.

* This research was supported by National Science Foundation Grant ECS-8209670.

II TWO PREVIOUS APPROACHES

A. The Bayesian Approach

In a Bayesian approach presented by Judea Pearl (Pearl, 1986), the belief committed to a hypothesis group is always distributed to its constituents according to their prior probabilities. A point probability distribution of the hypothesis space is thus obtained. However, the distribution is much more precise than what is really known, and the ranges over which the estimated probability judgements may vary are lost.

B.
The Dempster-Shafer Approach Jean Gordon and Edward Shortliffe have applied the D-S theory to manage evidence in a hierarchical hypothesis space (Gor- don and Shortliffe, 1985) but several problems still exist. In order to define the terminology for our discussions, we describe the basics of the D-S theory before we discuss Gordon and Shortliffe’s work. l.Basics of the Dempster-Shafer Theory of Evidence w For simplicity, we assume thnt r does not map any element of the space E to the empty set. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 125 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. In general, the probability distribution of the space 8 is con- strained by the bpa. The probability of a subset B of the frame of discernment is thus bounded below by the belief of B, denoted by Bel(B), and above by the plausibility of B, denoted by Pls(B). These two quantities are obtained from the bpa as follows: Bel(B) = Em(A), Pls(B) = c m(A). (2.3) ACE AflW9 Hence, the belief interval [Be](B), Pls(B)] is the range of B’s proba- bility . An important advantage of the D-S theory is its ability to express degree of ignorance. In the theory, the commitment of belief to a subset does not force the remaining belief to be commit- ted to its complement, i.e., Be1(6) + Bel(L3’) < 1. The amount of belief committed to neither B nor B’s complement is the degree of ignorance. If ml and m2 are two bpa’s induced by two independent evi- dential sources, the combined bpa is calculated according to Dempster’s rule of combination: 2. Gordon and Shortliffe’s Work Gordon and Shortliffe (G-S) applied the D-S theory to com- bine evidence in a hierarchical hypothesis space, but they viewed MYCIN’s CF as bpa without formal justification (Gordon and Shortliffe, 1985). As a result, the belief and plausibility in their approach were not probability bounds. Moreover, the applicability of Dempster’s rule became questionable because it was not clear how one could check t,he independence assumption of Dempster’s rule in the G-S approach. The G-S approach also proposed an efficient approximation technique to reduce the complexity of Dempster’s rule, but Shafer and Logan has shown that Dempster’s rule can be implemented efficiently in a hierarchical hypothesis space (Shafer and Logan, 1985). Hence, the G-S’s approximaticn technique is not necessary. III A NEW APPROACH A. An Extension to the Dempster-Shafer Theory One way to apply the D-S theory to reasoning in expert sys- tems is to consider the space E as an evidence space and the space 8 as a hypothesis space. An evidence space is a set of mutually exclusive outcomes (possible values) of an evidential source. For example, all possible results of a laboratory test form an evidence space because they are mutually exclusive. The elements of an evidence space are called the evidential elements. A hypothesis space is a set of mutually exclusive and exhaustive hypotheses. These hypotheses may or may not form a strict hierarchy. The multivalued mapping in the D-S theory is a collection of conditional probabilities whose values are either one or zero. Sup pose that an evidential element e, is mapped to a hypothesis group Si. This implies that if ei is known with certainty, the probability of Al is one and the probability of AT is zero, i.e., P(AI I el) = 1 and P(Af I el) = 0. However, the mr.pping fails to express uncer- tain relationships such as “the probability of the hypothesis A is 0.8 given the evidence en. 
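For reference, the standard operations this subsection summarizes - belief and plausibility as in (2.3), and Dempster's rule of combination - can be written down in a few lines. The sketch below only illustrates those textbook definitions over explicit hypothesis sets; the frozenset-based representation is an assumption of the sketch, and the code is not drawn from CADIAG-2 or any of the systems discussed here.

from itertools import product

def belief(m, B):
    """Bel(B): total mass of focal elements contained in B."""
    return sum(v for A, v in m.items() if A <= B)

def plausibility(m, B):
    """Pls(B): total mass of focal elements intersecting B."""
    return sum(v for A, v in m.items() if A & B)

def dempster(m1, m2):
    """Dempster's rule: combine bpa's from two independent evidential sources."""
    combined, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Frame of discernment {h1, h2, h3}; note the mass left on the whole frame,
# which is how the theory expresses ignorance.
m1 = {frozenset({'h1', 'h2'}): 0.8, frozenset({'h1', 'h2', 'h3'}): 0.2}
m2 = {frozenset({'h2'}): 0.6, frozenset({'h1', 'h2', 'h3'}): 0.4}
m = dempster(m1, m2)
B = frozenset({'h2'})
print(belief(m, B), plausibility(m, B))   # ~0.6 and ~1.0: the belief interval of {h2}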
In order to represent this kind of uncer- tain knowledge, we extend the multivalued mapping to a proba- bilistic multi-set mapping. A probabilistic multi-set mapping from an evidence space to a hypothesis space is a function that associates each evidential ele- ment to a collection of non-empty disjoint hypothesis groups accompanied by their conditional probabilities. A formal definition is given below. DeflnitIon 1: A probabilistic multi-set mapping from a space E to a space e is a function P’:E + 2 2expV ‘1. The image of an element in E, denoted by I”(ei), is a collection of subset-probability pairs, i.e., that satisEes the following conditions: (1) Aij#Qj, j = 1, . . . , m (2) “iJ@ik =@, jfk (3) P(Ai, I ei) > 0, j = 1, . . . , m (4) FF’(Aij I ei) = 1 where ei is an element of E, Ai, . . . , Aim are subsets of 8, For the convenience of our discussion, we introduce the fol- lowing terminology. A granule is a subset of the hypothesis space 9 that is in the image of some evidential elements under the map ping. The granule set of an evidential element, denoted by G, is a set of all the granules associated with that element. For example, the granule set of e( in the definition is the set of Ai,, . . . ,Ai, i.e., G(ei) = {Ai, . . . , Ai,}. The focal element in the D-S theory is the union of the granules in a granule set; moreover, because these granules are mutually exclusive, they form a partition of the focal element. Since the mapping in the D-S theory has been extended to a probabilistic one, the probability mass of an evidential element ei is now distributed among its granules. More precisely, the portion of et’s probability mass assigned to its granule A is the product of the conditional probability P(A I ei) and the mass P(ci I E’). Thus, the basic probability value of the granule A is the total mass assigned to it by all the evidential elements whose granule sets contain A. Deflnition 2: Given a probabilistic multi-set mapping from an evidence space E to a hypothesis space 8 and a probability distri- bution of the space E, a mass function m is induced: m(A IE’) = c P(A I ei)P(ei I E’) (3.1) A &(ei) where E’ denotes the background evidential source. The mass function defined satisfies the properties of bpa described in (2.2). In fact, the bpa in the D-S theory (2.1) is a special case of our mass distribution with all conditional probabilities being either zero’s or one’s. 124 / SCIENCE The belief and the plausibility obtained from our mass func- tion bound the posterior probability under the conditional indepen- dence assumption that given the evidence, knowing its evidential source does not affect our belief in the hypotheses, i.e., P(A I ei, E’) = P(A I ei). Lemma 1: If we assume that P(A I ei, E’) = P(A I e,-) for any evidential element ei and its granule A, then for an arbitrary sub- set B of the hypothesis space, we have BeZ(Z3 I E’) 5 P(B I E’) 5 Pls(B I E’). (The proofs have been relegated to Appendix) If all the granule sets of all evidential element are identical for a mapping, the basic probability value of a granule is not only its belief but also its plausibility. In particular, If all the granules are singletons, then the mass function determines a Bayesian Belief Function (Shafer, 1976). Lemma 2: If G(ei) = G(ej) for all ei, ej E E, then for any granule A, we have m(A I E’) = Bel(A I E’) = PZs (A I E’) = P(A I E’). B. 
Combination of Evidence In the Dempster-Shafer Theory, bpa’s are combined using Dempster’s rule; nevertheless, using the rule to combine our mass distributions will overweigh the prior probability as shown in the following example. Example 1: El and e2 are two pieces of independent evidence bearing on the same hypothesis group A. If both el and e2 are known with certainty, each of them will induce a mass distribution from Definition 2: m(A I el) = P(A I el), m(AC I el) = P(AC I el) and m(A I e2) = P(A I e2), m(AC I e2) = P(A” 1 e2). The combined belief in A using Dempster’s rule is Bel(A I el, e2) = P(A I el)xP(A I e2) P(A I el)P(A I e2) + P(AC I el)P(A” I e2) P(e1 I A)P(e2 I A)P(A)2 = P(e1 I A)P(e2 I A)P(A)2 + P(e1 I Ae)P(e2 I A”)P(A”)* Because both P(A I el) and P(AIe2) are affected by the prior proba- bility of A, the effect of the prior is doubled in the combined belief. In fact, the more evidential sources are combined, the bigger is the weight of the prior in the combined belief. Even if el and e2 are assumed to be conditionally independent on A, the combined belief could not be interpreted as lower probability. The D-S theory does not have such problem because its bpa does not count prior belief. In order to combine our mass distributions, we define a quan- tity called basic certainty value, denoted by C, to discount the prior belief from the mass distribution. The basic certainty value of a hypothesis subset is the normalized ratio of the subset’s mass to its prior probability*, i.e., (3.29 Hence, any basic probability assignment can be transformed to a I A special case of the in (Grosof, 1885). basic certainty value is the belief measure basic certainty assignment (bca) using the equation abolre. Intui- tively, the basic certainty value measures the belief update, while both the bpa and the belief function measure absolute belief. Since both the CF in MYCIN (Shortliffe and Buchansn, 1975) and the likelihood ratio in PROSPECTOR (Duda, 1976) measure belief update, we may expect a relationship among them. In fact, as shown in the section IV-B, the probabilistic interpretations of CF given by Heckerman (Heckerman, 1985) are functions of basic cer- tainty values. Theorem 1: Consider two evidential spaces E, and E, that bear on a hypothesis space 8. Eli and e2j denote elements in E, and E2. A, and Bl denote granules of eli and e2j respectively. Assuming that P(eli I Ak)P(e2j I B,) = P(eh, e2i I A,nB,) AknBl f @ (3.3) and P(E1’ I elf) P(E; I e2j) = P(El’, Ei I eli, e2j) then E C(Ak I E,‘) C(Bi I E:) AflB,=D 2’ C(Ak I E,‘)C(Bl I E,‘) = C(D I El’, Ez’) (3.5) where El’ and Ea’ denote the evidential sources of the space E, and the space E2 respectively. Proof of Theorem 1 can be found in (Yen, 1985). Based on Theorem 1, we apply Dempster’s rule to combine basic certainty assignments. The aggregated bca can be further combined with other independent bca’s. To obtain the updated belief function, the aggregated bca is transformed to the aggre- gated bpa through the following equation: m(AIE’)= C(A I E’)P(A) 2’ C(A I E’)P(A) AC0 From Lemma 1, the belief and the plausibility of a hypothesis sub- set obtained from the updated bpa are lower probability and upper probability of the subset given the aggregated evidence. In summary, combination of evidence is performed by first transforming bpa’s from independent sources of evidence into bca’s which are then combined using Dempster’s rule. 
The Enal com- bined bca is transformed to a combined bpa, from which we obtain the updated belief function that forms interval probabilities. C. Independence Assumptions of the Combining Rule The two conditions assumed in Theorem 1 correspond to con- ditional independence of evidence and the independence assump tion of Dempster’s rule. In fact, the first assumption (3.3) is weaker than the strong conditional independence assumption employed in MYCIN and PROSPECTOR. The second assumption (3.4) is implicitly made in these systems. 1. The First AssumDt.ion The first assumption (3.3) d escribes the conditional indepen- dence regarding the two evidence spaces and the hypothesis space. Sufficient conditions of the assumption are P(eli I Ak) = P(eli I A,nB,), P(e2j I B,) = P(e2j I A,f”jt3[)(3.7) and Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 127 P(eli I AknBl)P(e2j I A,nBt) = P(eli, e2j I AknB,). (34 The condition (3.7) is the conditional independence assump tion P(e I A, A,) = P(e IA) A,CA stating that if .4 is known with certainty, knowing its subset does not change the likelihood of e. A similar assumption is made in the Bayesian approach of (Pearl, 1986). The Bayesian approach applies the assumption to distribute the subset’s mass to each of the subset’s constituents. In our approach, however, the assump tion is applied only when two bodies of evidence are aggregated to give support to a more specific hypothesis group. The assumption (3.7) is a consequence of the aggregation of evidence, not a deli- berate effort to obtain a point distribution. The equation (3.8) states that elements of different evidential sources are conditionally independent on their granules’ non-empty intersections. Since the granules of an evidential element are dis- joint, the intersections of two granule sets are also disjoint. Hence, two evidential elements of different sources are conditionally independent on a set of mutually diJolnt hypothesis groups. In particular, pieces of evidence are not assumed to be conditionally independent on single hypotheses and their negations (comple- ments) because generally they are not mutually disjoint. There- fore, the equation (3.7) is weaker than PROSPECTOR and h4YCIN’s assumption that pieces of evidence bearing on the same hypothesis are conditionally independent on the hypothesis and its negation. As a result, we solve their inconsistency problems dealing with more than two mutually exclusive and exhaustive hypotheses (Heckerman, 1985)(Konolige, 1979). 2. The Second Assumption The second assumption (3.4) describes the conditional independence regarding the two evidence spaces and their back- ground evidential sources. Sufficient conditions of the assumption (3.4) are 1. The probability distribution of the space E2 conditloned on the evidence in El is not affected by knowing the evidential source of El. P(e2j I cl;) = P(e2’ I eli, El’) (3.10) 2. 3. Similarly , the distribution of the space E, conditioned evidence in E2 is not affected by knowing E2’. on the P(eli I e2j) = P(eli I e2j, E,‘). (3.11) The evidential sources E, and E, are conditionally dent on the joint probability distribution of E,x E2. indepen- P(EI’I eli, e2j, E2’) = P(E,‘I eli, e2j) (3.12) The assumption (3.4) corresponds to the assumption of Dempster’s rule (Dempster, 1967), independence P(eli I E1’)P(e2i I E,‘) = P(eli, e2j I El’, Ei), because (3.4) can be reformulated as P(eli 1 El’I?qe*j 1 Ez?~EI?~~Ez? 
= eli E2j) The Dempster’s independence assumption differs from (3.14) in that it does not contain prior probabilities. This difference is understood because in the D-S theory there is no notion of poste- rior versus prior probability in the evidence space. Therefore (3.4) intuitively replaces the independence of evidential sources assumed in Dempster’s rule of combination. The assumption is always satisfied when evidence is known with certainty. For example, if ell and e2, are known with cer- tainty, the equation (3.4) then becomes P(ell I el{) P(e2J I e2j) = P(el,, e2s I eli, e2j) Both the left hand side and the right hand side of the equation above are zeros for all values of i and j except when i=l and j=3 in which case both sides are one. Therefore, the equality holds. It is also straightforward to prove Theorem 1 without (3.4) assuming that evidence is known with certainty. PROSPECTOR and Heckerman’s CF model made similar assumptions in the combining formula: P(E1’, Ez’l h) P(E1’I h) P(&‘I h) P(E,‘, Ez’ I K) =--. P(E,‘I h) P(&‘I h) Hence, we are not TOR and MYCIN. adding any assumptions to those of PROSPEC- D. An Example Suppose h,, h,, hs, and h4 are mutually exclusive and exhaustive hypotheses, Thus, they constitute a hypothesis space 8. The prior probabilities of the hypotheses are P(h,) = 0.1, P(h,) = 0.4 ,P(h,) = 0.25, and P(h,) = 0.25. Two pieces of evidence col- lected are e, and e2. E, strongly supports the hypothesis group {h,, h2}, with the following probability values: P({h,, h-2) I el) = 0.9, P((h3, h4} I e,) = 0.1. E2 supports h, with the following tion gives no information: probability values while its nega- P(h, I e2) = 0.67, P({h,, ha, h4} I e,) = 0.33, and P(9 I e2 = 1 Suppose that e, is known with certainty, and e2 is likely to be present with probability 0.3 (i.e., P(e, I El’) = 1, P(e2 I El) = 0.3, where E,’ and E2’ denote background evidential sources for e, and e2 respectively), Although we have not assumed the prior proba- bility of e2, it is easy to check that a consistent prior for e2 must be less then 0.14925. Therefore, e2 with a posterior probability of 0.3 is still a piece of supportive evidence for h,. The effect of e, on the belief in the hypotheses is represented by the following mass distribution: m({hI, h2} I E,‘) = 0.9 m({h,! h4) I El’) = 0.1 and m is zero for all other subsets of 8. The certainty assignment (bca) is corresponding basic C({h,, h,} I El’) = 0.9, C({h,, h4} I E,‘) = 0.1. Similarly, the effect of e2 on the belief in the hypotheses is represented by the following mass distribution: m({hI} I Ez’) = 0.201 m({h,}’ I E2’) = 0.099 m(0 I E2’) = 0.7 and m is zero for all other subsets transformed to the following bca: of 0. The distribution is c({h,} I E;) = 0.7128 C({h,}’ 1 E2’) = o-039 ~(9 I Ez’) = 0.2482 128 / SCIENCE Using Dempster’s rule to combine the two bca’s, we get the follow- ing combined bca: C({h,} I E,‘E2’) = 0.6907 C({h,} I El/E;) = 0.0378 C({h,, h2} I El’&‘) = 0.2406 C({h,, h4} I El’Ei) = 0.0309. From the combined bca, we obtain the following combined bpa: rn({hl} I E,‘E2’) = 0.314 m ({h,} I El’E2’) = 0.069 m({h,, h2) I E#*‘) = 0.547 m ({ha, h4} I E1’E2’) = 0 07 and m is zero for all other subsets of 0. The belief intervals of the hypotheses are hi: (0.314, O.SSl], h,: (0.069, 0.6161, h,: (0, 0.071, and h,: (0, 0.071. Th ese intervals determine the following partial ordering: h, is more likely than h3 and h4, and h2 is incomparable with h,, h,, and h,. 
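Since every probability in this example is stated explicitly, the combination can be replayed mechanically. The sketch below is an illustration only, not the implementation behind the paper: it applies transformation (3.2) to each mass distribution, combines the resulting basic certainty assignments with Dempster's rule, maps the result back to a basic probability assignment with (3.6), and recovers the belief intervals quoted above.

from itertools import product

PRIOR = {'h1': 0.1, 'h2': 0.4, 'h3': 0.25, 'h4': 0.25}

def prior_of(A):
    return sum(PRIOR[h] for h in A)

def to_bca(m):
    """Equation (3.2): divide each mass by its prior probability, then renormalize."""
    raw = {A: v / prior_of(A) for A, v in m.items()}
    z = sum(raw.values())
    return {A: v / z for A, v in raw.items()}

def dempster(c1, c2):
    """Dempster's rule applied to basic certainty assignments."""
    out, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(c1.items(), c2.items()):
        C = A & B
        if C:
            out[C] = out.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in out.items()}

def to_bpa(c):
    """Equation (3.6): multiply by prior probabilities and renormalize."""
    raw = {A: v * prior_of(A) for A, v in c.items()}
    z = sum(raw.values())
    return {A: v / z for A, v in raw.items()}

def interval(m, h):
    bel = sum(v for A, v in m.items() if A <= frozenset({h}))
    pls = sum(v for A, v in m.items() if h in A)
    return round(bel, 3), round(pls, 3)

# Mass distributions induced by e1 (known with certainty) and e2 (probability 0.3).
m1 = {frozenset({'h1', 'h2'}): 0.9, frozenset({'h3', 'h4'}): 0.1}
m2 = {frozenset({'h1'}): 0.201,
      frozenset({'h2', 'h3', 'h4'}): 0.099,
      frozenset(PRIOR): 0.7}

combined = to_bpa(dempster(to_bca(m1), to_bca(m2)))
for h in ('h1', 'h2', 'h3', 'h4'):
    print(h, interval(combined, h))
# Reproduces the figures above: h1 [0.314, 0.861], h2 [0.069, 0.616], h3 and h4 [0, 0.07].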
However, the Bayesian approach (Pearl, 1986) yields a different result: Bel(h,) = 0.424, Bel(h,) = 0.506, Bel(h,) = 0.035, and Bel(h,) = 0.035. The posterior probability of h, is higher than that of h, because majority of the mass assigned to the hypothesis group {h,, h2} is allocated to h, for its relatively high prior probability. IV COMPARISONS A. Relationship to Bayes’ Theorem The result of our model is consistent with Bayes’ theorem under conditional independence assumption. To show this, we con- sider n evidential sources E,, E2, . . . , E, bearing on a hypothesis space 8 = {h,, h, . . . , h,}. The values of each evidential sources are known with certainty to be e,, e2 * * . e, respectively. Also, the granules for every evidential sources are all singletons. It then follows from Lemma 2 that m({hi} I e1,e2, . . * e,) = Bel({hi} I e1,e2,. . . e,) (44 = P(hi I e1,e2, . . . e,) The basic probability assignment due to the evidential source Ej is m({hi} I ej) = P(hi I ej), i = 1, . . . , m. The corresponding basic certainty assignment is P(hi I ej) C({hi} I ej) = PO P(ej I hi) P(hk 1 ej) = ‘CP(ej’ i = ” ’ . ’ ’ m’ c- k P(hk) k Combining the basic certainty assignments from n evidential sources, we get C({hi} I q,e2, . . . e,) = P(e, I hi)P(e2 I hi) . . . P(e, I hi) xP(e, I hi)P(ez I hi) . . * P(e, I hi) Through the transformation (3.6), we obtain the combined basic probability assignment: m({hi} I el,eQ * * . e,) = 4el I h,)P(e2 1 4) . . . r(% I hi)4ns) LXelI hiPf%I 4) " ' 4e, 1 hi)4ni) (44 From the equations (4.1) and (4.2) we get Bayes’ theorem under the assumption that el, e2, . . . on each hypothesis in 8. , e, are conditionally independent B. Mapping Basic Certainty Assignment to CF The probabilistic interpretations of certainty factor (CF) given by Heckerman (Heckerman, 1985) are functions of basic cer- tainty values. One of Heckerman’s formulations for the CF of a hypothesis h given a piece of evidence e is CF(h,e) = G where X is the likelihood ratio defined to be x - PC” 1 h_, P(e 111) In our model, when the frame of discernment contains only two hypotheses (i.e., 8 = {h, I?} ), and a piece of evidence e is known with certainty, the basic certainty assignment of 0 is: C({h} I e) = & C({l} I e) = L. x+1 Therefore, one of Heckerman’s probabilistic interpretations of CF is the difference of C( {h } I e) and C( { z} 1 e) in this case. Moreover, the relationship can be comprehended as follows: (1) If the basic certainty values of the hypothesis h and its nega- tion are both 0.5, no belief update occurs. Hence, the cer- tainty factor CF(h,e) is zero. (2) On the other hand, if basic certainty value of the hypothesis is greater than that of its negation, degree of belief in h is increased upon the observation of the evidence. Thus the certainty factor CF(h,e) is positive. In general, the probabilistic interpretations of CF are functions of WV4 I4 A similar mapping between Heckerman’s CF and a “belief measure” B to which Dempster’s rule applies was found by Grosof (Grosof, 1985). In fact, Grosof’s belief measure B(h,e) is equivalent to basic certinty value C({h} le) in this special case. However, the distinction between belief update and absolute belief was not made in Grosof’s paper. Thus, our approach Grosof’s work but also distinguishes basic from mass distributions in a clear way. not only certainty generalized assignments V CONCLUSIONS By extending the D-S theory, we have developed a reasoning model that is consistent with Bayes’ theorem with conditional independence assumptions. 
The D-S theory is extended to handle the uncertainty associated with rules. In addition, the Dempster rule is used to combine belief update rather than absolute belief, and the combined belief and plausibility are lower probability and upper probability respectively under two conditional independence assumptions. The major advantage of our model over the Bayesian approach (Pearl, 1986) is the representation of ignorance. In our model, the amount of belief directly committed to a set of hypotheses is not distributed among its constituents until further evidence is gathered to narrow the hypothesis set. Therefore, degree of ignorance can be expressed and updated coherently as the degree of belief does. In the Bayesian approach, the amount of belief committed to a hypothesis group is always distributed among its constituents. Directions for future research are mechanism to perform chains of reasoning, computational complexity of the model, and decision making using belief intervals. In chaining, Definition 2 is Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 129 no longer valid because the probability distribution of an evidence space may not be known exactly. Although the extension of the Definition can be straight forward, a justification similar to Theorem 1 is difficult to establish. The computational complexity of our model is dominated by that of Dempster’s rule, so any efficient implementations of the rule greatly reduce the complexity of our model. Interval-based decision making has been discussed in (Loui, 1985) and elsewhere, yet the problem is not completely solved and needs further research. from the definition of plausibility and Definition 2, we The proposed reasoning model is ideal for the expert system applications that (1) contain mutually exclusive and exhaustive hypotheses, (2) provide the required probability judgements, and (3) satisfy the two conditional independence assumptions. The model is currently implemented in a medical expert system that diagnoses rheumatic diseases. ACKNOWLEDGEMENTS The author is indebted to Professor Zadeh for his encourage- ment and support. The author would also like to thank Dr. Peter Adlassnig for valuable discussions and his comments on the paper Appendix Lemma 1: If we assume that P(A I ef, E’) = P(A I ei) for any evidential element ei and its granule A, then for an arbitrary sub- set B of the hypothesis space, we have BeZ(B I E’) 5 P(B I E’) < Pla(B I E’). Proof: Let us consider the conditional probability of B, an arbi- trary subset of the hypothesis space, given the evidence et. The conditional probabilities of ei’s granules contribute to P(B I ei) depending on their set relationships with B: (1) conditional probability If the granule is included must be assigned to P(B I in B, all its ei). (2) (3) If the granule has non-empty intersection with B, but is not included in B, its conditional assigned to P(B I 4 probability may or may not be If the granule has no intersection with bability can not be assigned to P(B I ei B, its conditional pro- Since ei’s granules are disjoint, the sum of the conditional probabil- ities of the first type granules is the lower bound of P(B I ei). Similarly, the sum of the conditional probabilities of the first type granules and the second type granules is the upper bound of P(B 1 ei). Thus, we get C P(Aj I ei) 5 P(B I ei) < C P(Ak I ei). 
A,&B A&G(e,) Ak&#$ &=(‘A) Since P(ei I E’) is positive and the equation above holds for any evidential element ei, we have C C P(Aj I ei)P(ei I E’) 5 xP(B I ei)P(ei I E’) i . A,:B i A++4 64.2) 5 C C P(Ak I ei)P(ei I E’) . i Ak$ib AkEG From the definition of belief function and Definition 2, we have Bel(BIE’)= Cm(AjlE’) Aj:B = C C P(Aj I ei)P(ei I E’) A&3 A+&) (A.3) Similarly, get Pla(Z3 I E’) = C m(AjI E’) 4&W = C C P(A, I ei)P(ei I E’) A,&#@ A,&) Also, from the assumption that P(A I ei,E’) = P(A I e,-), we get P(B I E’) = CP(B I e,-)P(ei I E’) (A4 It thus follows from (A.2), (A.3), (A.4), and (A.5) that BeZ(B I E’) < P(B I E’) 5 Pla(B I E’) . 8 Lemma 2: If G(ei) = G (ej) f or all ei, ej E E, then for any granule A, we have m(A I E’) = Bel(A I E’) = Pla(A I E’) = P(A I E’). Proof: Part 1 Assume m(A I E’) # Bel(A I E’). (A4 From the definition of belief function and of the basic probability values, we have the nonnegativity m(A I E’) < BeZ(A I E’). Hence, there exists a subset B such that B C A,B#A,andm(BIE’)>O. (A? From the definition 2, we know B is in a granule set, denoted as G(e,). Since A is also a granule, we denote its granule set as G(e,). Since G(e,) = G(e,,) according to the assumption of this Lemma, A and B are in the same granule set. From the Definition 1 it follows that A and B are disjoint, which contradicts (A.7). Therefore, the assumption (A.6) fails, and we have proved by contradiction that m(A I E’) = Bel(A I E’) From the definition of plausibility function, we know m(A I E’) < Pla(A I E’) Hence, there exists a subset C such that C n A # Qll C # A, and m(C I E) > 0. (A.9) Using the arguments of Part 1, C and A must be in the same granule set, therefore they are disjoint, which contradicts (A.9). Therefore, the assumption (A.8) fails, and we have proved by contradiction that m(A I E’) = Pla(A I E’). 130 / SCIENCE REFERENCES PI PI I31 14 I51 161 PI PI PI ilO1 Adlassnig, K.-P. “Present State of the Medical Expert System CADIAG-Z”, Methoda of Information in Medicine, 24 (1985) 13-20. Adlassnig, K.-P. “CADIAG: Approaches to Computer-Assisted Medical Diagnosis”, Comput. Eiol. Med., 15:5 (1985) 315-335. Dempster, A. P. “Upper and Lower Probabilities Induced By A Multivalued Mapping”, Annals of Mathematical Statistics, 38 (1967) 325-339. Duda, R. O., P. E. Hart, and N. J. Nilsson. “Subjective Baye- Sian Methods for Rule-Based Inference Systems”, Proceedings 1976 National Computer Conlerence, AFIPS, 45 (1976) 1075- 1082. Gordon J. and E. H. Shortliffe. “A Method for Managing Evi- dential Reasoning in a Hierarchical Hypothesis Space”, Artificial Intelligence, 26 (1985) 323-357. Grosof, B. N. “Evidential Confirmation as Transformed Pro- bability”, In Proceedings of the AAAI/IEEE Workshop on Uncertainty and Probability in Artificial Intelligence, 1985, pp. 185-192. Heckerman, D. “A Probabilistic Interpretation for MYClN’s Certainty Factors”, In Proceedings of the AAAI/IEEE Workshop on Uncertainty and Probability in Artificial Intelli- gence, 1985, pp. 420. Heckerman, D. “A Rational Measure of Confirmation”, MEMO KSL-86-16, Department of Medicine and Computer Science, Stanford University School of Medicine, February 1986. Konolige, K. “Bayesian Methods for Updating Probabilities” Appendix D of “A computer-Based Consultant for Mineral Exploration”, Final Report of Project 6415, SRI International, Menlo Park, California, 1982. Loui, R., J. Feldman, H. Kyburg. 
“Interval-Based Decisions for Reasoning Systems”, In Proceedings of the AAAI/IEEE Workshop on Uncertainty and Probability in Artificial Intelli- gence, 1985, pp. 193-200. [ll] Pearl, J. “On Evidential Reasoning in a Hierarchy of Hypothesis”, Artificial Intelligence Journal, 28:l (1986) 9-16. 1121 Shafer, G. “Mathematical Theory of Evidence”, Princeton University Press, Princeton, N. J., 1976. 1131 Shafer, G. and R. Logan. “Implementing Dempster’s Rule For Hierarchical Evidence”, Working Paper of School of Business, University of Kansas, 1985. [14] Shortlifie E. H. and B. G. Buchanan. “A Model of Inexact Rea- soning in Medicine”, jI!uthematical Biosciences, 23 (1975) 351-379. [15] Yen, J. “A Model of Evidential Reasoning in a Hierarchical Hypothesis Space”, Report No. UCB/CSD 86/277, Computer Science Division (EECS), University of California, Berkeley, December 1985. [16] Zadeh, L. A. “Fuzzy Sets and Information Granularity”, In Advances in Fuzzy Set Theory and Applications, 1979, pp. 3- 18. Qualitative Reasoning and Diagnosis: AUTOMATED REASONING / 13 1
An Integration of Resolution and Natural Deduction Theorem Proving

Dale Miller and Amy Felty
Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104

Abstract: We present a high-level approach to the integration of such different theorem proving technologies as resolution and natural deduction. This system represents natural deduction proofs as λ-terms and resolution refutations as the types of such λ-terms. These type structures, called expansion trees, are essentially formulas in which substitution terms are attached to quantifiers. As such, this approach to proofs and their types extends the formulas-as-type notion found in proof theory. The LCF notion of tactics and tacticals can also be extended to incorporate proofs as typed λ-terms. Such extended tacticals can be used to program different interactive and automatic natural deduction theorem provers. Explicit representation of proofs as typed values within a programming language provides several capabilities not generally found in other theorem proving systems. For example, it is possible to write a tactic which can take the type specified by a resolution refutation and automatically construct a complete natural deduction proof. Such a capability can be of use in the development of user oriented explanation facilities.

This work has been supported by NSF grants MCS-82-19196 CER, MCS-82-07294, and DARPA N00014-85-K-0018.

1. Introduction

Theorem provers built on resolution and natural deduction have very different characteristics. For example, a search for a resolution refutation starts by taking a proposed theorem and putting its negation into Skolem normal and conjunctive normal form. As a result of using such normal forms, the search space of refutations is very homogeneous, and automatic theorem provers using this paradigm are rather easy to build. On the other hand, since the search in such a theorem prover is carried out in a space which is rather remote from a user's original input, it is difficult to get the user to interact with the search process. On these accounts, natural deduction theorem proving is just the opposite. For example, no normal forms are generally used and only subformulas or instances of subformulas of the proposed theorem are used during the search for a proof. As a result, it is very easy to involve a user in the search for a proof since the state of the search at any moment is easily understood. On the other hand, natural deduction often leaves too many unimportant features in the search space which the preprocessing done by normal forms would have removed. Thus, resolution is often the core of automatic theorem provers while natural deduction is often the core of interactive theorem provers.

Clearly it is desirable to find some way to smoothly integrate these two very different paradigms. In this paper, we propose just such an integration. This integration is not a merging of the two different search spaces. It is, instead, an integration of the two kinds of proofs. We shall present a system which explicitly represents proofs in both systems and is capable of translating between them. In order to achieve this goal, we have designed a programming language which permits proof structures as values and types. This approach builds on and extends the LCF approach to natural deduction theorem provers by replacing the LCF notion of a validation with explicit term representation of proofs.
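Before the formal development, the idea of "proof structures as values and types" can be made tangible with a purely illustrative sketch: sequents act as the types of proof values, and applying a partial proof checks that each supplied subproof proves the sequent expected of it. The classes, the string encoding of formulas, and the or_l constructor below are hypothetical conveniences of this sketch and do not reflect the authors' implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sequent:
    """Gamma --> Theta, with formulas kept as plain strings here."""
    left: frozenset
    right: frozenset

@dataclass(frozen=True)
class Proof:
    """A completed proof paired with the sequent (its 'type') that it proves."""
    conclusion: Sequent
    rule: str
    premises: tuple = ()

@dataclass(frozen=True)
class PartialProof:
    """A proof with holes: a function from subproofs to a larger proof.

    `holes` lists the sequents the missing subproofs must prove, so every
    application can be type-checked before it is accepted.
    """
    holes: tuple
    conclusion: Sequent
    rule: str

    def apply(self, *subproofs):
        if len(subproofs) != len(self.holes):
            raise TypeError('wrong number of subproofs')
        for hole, sub in zip(self.holes, subproofs):
            if sub.conclusion != hole:
                raise TypeError('subproof proves %s, but %s is needed' % (sub.conclusion, hole))
        return Proof(self.conclusion, self.rule, subproofs)

def axiom(f):
    return Proof(Sequent(frozenset({f}), frozenset({f})), 'axiom')

def or_l(a, c, gamma, theta):
    """or-l: to prove  A v C, Gamma --> Theta,  prove it once from A and once from C."""
    return PartialProof(
        holes=(Sequent(frozenset({a}) | gamma, theta),
               Sequent(frozenset({c}) | gamma, theta)),
        conclusion=Sequent(frozenset({a + ' v ' + c}) | gamma, theta),
        rule='or-l')

# A tiny complete proof of  p v p --> p : both holes are the axiom p --> p.
step = or_l('p', 'p', frozenset(), frozenset({'p'}))
proof = step.apply(axiom('p'), axiom('p'))
print(proof.conclusion)
# Applying a subproof of any other sequent raises TypeError, which is the
# typing restriction on composing partial proofs that the paper builds on.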
The terms which represent proofs are given types which generalize the formulas-as-type notion found in proof theory [Howard, 19691. Resolution refutations are seen aa specifying the type of a natural deduction proofs. This high level view of proofs as typed terms can be easily combined with more standard aspects of LCF to yield the integration for which we are looking. In Section 2 we describe a representation of natural deduc- tion proofs as X-terms, and in Section 3 we show how the LCF notion of tactics and tacticals can be used to specify an inter- active theorem prover based on such a term representation of natural deduction proofs. In Section 4 we describe how resolu- tion refutations can be converted to generalized type structures called expansion trees. In Section 5 we show how tactics can make use of the information stored in these generalized types. Also in Section 5, we present a program in the language of tat- tics which is capable of automatically converting a resolution refutation to a natural deduction proof. 2. Natural Deduction Proofs Although much of what we describe here is applicable to most forms of natural deduction, the form we present in this pa- per is essentially the sequent system LK presented in [Gentzen, 19351 but without the cut rule. More modern presentations of similar systems can be found in [Gallier, 19861 and [Prawitz, 19651. Proofs in the LK system are finite, ordered trees in which nodes are labeled with sequents. A sequent, written as l? --) 8, will represent the proposition “from all the formulas in the set P, some formula in the set 0 can be proved.” Notice that the proposition connected to the sequent A -----* A is triv- ially true. Sequents of this simple kind are called axioms. The non-terminal nodes of an LK proof are called inference rules and are listed below. I’ - Q,A r - e,c and-r A,C,I‘ - e r - Q,Ar\C and-l AAC,l? - 0 A,r - 8 c,r - 8 or-l AvC,r - 8 I’ - @,A,C or-r r - Q,AvC r - 8,A not -1 A,r - 0 -A,lY - 8 not-r r - Q,-A r - Q,A C,A - A imp- 1 A,l? - Q,C A> C,r,A - Q,A imp-r I’ - Q,A>C lc)s / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. WIPJ - 63 all-l r - QMYIP all-r vxp,r - 8 r - Q,Vz P WYW - 43 some-l r - 0, [x/tlP 803110-r jxp,r - 8 r - e,jx P r-43 thin- 1 r-8 thin-r A,r - 8 r - O,A All but the last two rules are introduction rules and are responsi- ble for introducing into sequents the variouslogical connectives. The proviso that the variable y is not free in any formula of the lower sequent must be added to the rules all-r and some-l. A derivation tree is an LK proof of A if the root of the tree is the sequent - A and its leaves are axioms. Example 1. Figure 1 is an LK proof of the formula [P(4 v ml A vx [P(X) 1 c&N 1 3x d4. These proof trees can be represented more manageably as term structures. For example, let axiom(A) represent the proof tree which contains just the sequent A - A. The inference rules can be represented by function symbols of 1 or 2 argu- ments. For example, if 2’1 and 2’2 are proofs of l?,A + 8 and I’, B - 9, respectively, we would write or-l(Ti, Ti) to represent the proof Tl 7’2 or-l I’,AvB - 8 where T: and Ti are the terms representing the proofs Tl and Tz, respectively. Many inference rules require more information (- k(a) v d~)l AV’z b(4 1 !+>I ’ SC q(4)- Remember that the typed X-calculus has the following restric- tion on application: a term g can be applied to a term h if and only if the type of g is of the form Q + ,8 and the type of h is of the form cy. 
This restriction thus enforces the restriction of combining partial proofs with completed subproofs. We shall assume that the reader is familiar with the basic properties of X-conversion. some-r PC4 - PM q(a) - 3x dz) imp-l q(b) - m some-r These free variables are also abstracted with X-bindings. Thus a partial proof is represented as a function from subproofs to a completed proof. Example 3. A partial proof of the formula in Example 1 is given in Figure 2 and by the term lambda X lambda Y. imp-r(and-l(or-l(X,thin-l(Y)))). In order for the mechanism of X-conversion to correctly represent the operation of supplying a partial proof with a sub- proof, we must type these X-terms. For example, XxXy T(x, y) represents a partial proof of some sequent in which two sub- proofs must be supplied. However, before this term can be applied to some actual proof, say S, one must check that the abstracted variable x is a place holder for proofs of the sequent for which S is a proof. Thus, we should make sequents and functions among sequents be the types of X-terms. For exam- ple, if x and y are place holders for proofs of the sequents 01 and 02, respectively, and if XxXy T (2, y) is a partial proof of the sequent cr, then we attach to this X-term the type 01 + 02 + cr. The type of the X-term representing the partial proof in Figure 2 is, therefore, l+),P@) ’ Cd4 - 32 44 all-l q(b) - 32 44 thin-l p(a), v's Ip(x) ' q(x)1 - 32 q(x) q(b),VJz [P(Z) ’ +)I - ckc d4 or-l p(a) ’ q(b),Vx [Pb) ’ q(‘)l - 3x ‘(‘) and-l [p(a) V q(b)] A v’z b(x) ’ q(x)] - 32 dz> imp-r - [p(a) V q(b)] A ‘4~ [P(X) 3 Q(x)] ’ 32 q(X) Figure 1. than just subproofs in order to put those subproofs together into larger proofs. For example, a term representing a proof which contains any of the quantifier introduction rules must contain the substitution term used to instantiate the quantifiers. Although such information is necessary, we avoid presenting it in this paper to simplify the presentation of examples. Example 2. The (simplified) term which represents the proof in Example 1 is written as: imp-r(and-l(or-l(all-l(imp-l(axiom(p(a)), some-r(axiom(q(a>>)), thin-l(some-r(axiom(q(b)))))))). 3. Using Tactics to Build Proof Trees The LCF system [Gorden, Milner, and Wadsworth, 19791 of tactics and tacticals can be easily extended to use the notion of proofs as typed X-terms. In particular, tactics are functions which, when given a sequent (i.e. a type), either returns a par- tial proof for that sequent or fails. The main extension to LCF is that explicit representations of partial proofs are maintained through the use of X-terms. In LCF, proofs and partial proofs are discarded as they are discovered. Tactics are either primitive or compound. Primitive tactics attempt to prove a given sequent by using a particular inference rule. For example, the or-l-tat attempts to prove a sequent To build interactive proof systems, it is important to rep- by using the or-l inference rule. A primitive tactic will fail if resent not only completed proofs but also incomplete or partial its own special inference rule is not applicable. If it succeeds, proofs. We represent these by introducing into proof terms free it returns a X-term representing the partial proof which simply variables which act as place holders for the actual subproofs. encodes that inference rule. 
If the tactic or-1-tat is applied TheoremProving: AUTOMATEDREASONING / 199 to a sequent of the return the X-term thin- 1 p(a),Vx &) 1 &)I - 32 44 q@),Vz [p(z) 2 q(z)1 - 3x d4 or-l p(g v q@),Vx [P(X) ’ d41 - 32 q(x) and- 1 [p(a) v q(b)] /f vx [p(x) 1 q(x)1 - SC q(z) . - [P(a) v q(b)] A vz [p(x) 3 q(x)] 1 3x q(x) Imp-= Figure 2. formA,Av c,r - Q it will succeed and lambda X lambda Y. or-l(X,Y). This term is typed as (A,AJ - e) --) (A,C,r - e) -+ (A,AVC,~ -0). This X-term is a partial proof that stores a description of one step of the proof and represents the function which when given proofs of the types A,A,I’ - 8 and A,C,I’ - 8, would return a proof of the type A, A v C, I’ - 8. Compound tactics are built from primitive tactics by us- ing tacticals. As in LCF, the then tactical is responsible for combining partial proofs. If we have a partial proof of type 61 --+ 62 4 . ’ * 4 CT, + 00, we need to compute proofs for each of Ql,.. . ,bn in order to have a complete proof of 00. Suppose we have decided to find a proof of the type ai, and some tac- tic or combination of tactics returns a partial proof of the type r1 + ‘*a ---, 7, --) ui. This is also a partial proof with m missing subproofs. The then tactical combines these two partial proofs into a single one of type which is a more refined partial proof of ~0, Example 4. Suppose that some combination of tactics returns the following partial proof: P(4 - da) q(a) - 3% q(x) P@),P(4 ’ d4 - 3J Q(4 imp-l p(a),Vz [p(x) 1 q(x)] - 32 q(x) a11-1 where the term representing this partial proof is lambda Z. all-l(imp-l(axiom(p(a)),Z)). When this term is combined with the partial proof in Example 2, the combined proof can be written as lambda Z lambda Y. imp-r(and-l(or-l(all-l(imp-l(axiom(p(a)),Z)), thin-l(Y)))) and is of type (q(a) -j+ 32 P(4) + (q(b) - 3x q(4) + - [p(a) v q(b)] A vx [P(x) 1 q(x)1 ’ 3x q(4). ber Although the number of abstracted variables (i.e. the num- of subproofs) may grow in size as we combine partial proofs, the amount of the proof that still must be completed gener- ally decreases because as each rule is applied, the resulting se- quent(s) generally contain fewer connectives. The number of subproofs decreases when one of them is recognized as an ax- iom. In general, there are many terms (proofs) of a given type (sequent). Thus many choices can be made at each step in building a proof, and different choices can result in different proofs. These choices fall into two categories. The first choice at any given point in processing a partial proof is which abstracted variable (i.e. which subproof) should be analyzed. The second choice is which tactic to use in filling in this subproof. Tacti- cals allow the programmer to specify the order in which tac- tics are tried, i.e. control the order in which proof rules within the natural deduction system are attempted. For example, if we want to prove a set of theorems that we know all have the form - (Al AA2 A . . . A As) 3 B we may want to automate the part of the proof tree that breaks these connectives to get Al,Az,- - -,A, - B, then apply all non-branching proposi- tional rules before continuing in interactive mode. A procedure to do this can be written as follows: (then (then imp-r-tat (repeat and-1-tat)) (repeat (orelse imp-r-tat neg-r-tat neg-1-tat and-1-tat or-r-tat))) where then, repeat, and orelse are names of high level tacti- cals similar to those found in LCF. By writing compound tac- tics, the programmer is directly involved in how choices are made during the search for a proof. 
The ability to express proof strategies as small programs allows great flexibility in customizing proof search and building proof heuristics. Tactics look at the top-level connectives in sequents to de- termine which inference rules can be applied. We, however, have not discussed what happens when a top-level quantifier is encountered. The sequent itself does not have enough informa- tion to describe how that quantifer is to be introduced. Substi- tution information is required at this point. This information is not given by the sequent, and so the sequent by itself does not contain enough type information to adequately specify a proof. This type information is much harder to determine, and here we turn to an automatic theorem prover, such as resolution, for help. 4. Expansion Trees and Resolution Refutations The substitution information which is lacking for this proof building process to continue could be supplied in a couple of ways. The search process could stop and ask the user for a substitution term. A more interesting possiblity, however, is 200 / SCIENCE to use a resolution theorem prover to supply this information since one of their strengths is the computation of substitutions via unification. The main question is how this information can be captured and used in the natural deduction setting. The problem of relating the substitution information in refutations to the building of natural deduction proofs is described in this section. For concreteness, we first present a definition of resolution refutations. If B is a formula, let p denote its skolem normal form, i.e. essentially existential quantifiers are instantiated with Skolem terms and all essentially universal variables are deleted. We shall use cnf(B) to denote the set of sets of literals which comprises the conjunctive normal form of B. Let Ug denote the set of first-order terms which are composed only of functions and constants of B, plus an additional constant added to ensure that ?ig is non-empty. A re8oZution refutation of B is a list of clauses (i.e. a set of literals) Cl,. . . ,C,, such that C, is the empty clause and for each i = 1,. . . ,m, one of the following is true: (a) Ci E cnf((wB)*), or (b) there are positive integersj, k: less than i and sets of literals Sr and S2 such that Ci = Sr U S2, Cj = Sr U {A}, and Ck = S1 u {-A}, for some atomic formula A, or (c) there is a substitution p built using only terms in U, and a positive integer j < i such that Ci = PCj. Example 5. The following is a refutation of the formula in Example 1. P(4 9 q(b) da) -q(a) by (b) from 5 and 6 by (c) from 3 by (b) from 7 and 8 Notice that refutations use substitutions in a very distributed fashion. Given a quantifier in the theorem it is not obvious what substitution terms were substituted for it. In contrast to refutations, we present another proof structure called expansion tree8 which store substitution information much more locally. The tactics, as we described above, could not process a universal quantifier on the left or an existential quantifier on the right. This is because the sequents did not specify the instance of these expressions that should be used. We solve this problem by simply attaching to quantifiers in a formula what substitution instances are to be used during a proof. To - - this end, we define expansion trees and dual expansion trees in the following fashion. (1) Let B be a formula. Then B is both an expansion tree and dual expansion tree for B. 
(2) If Qr and 92 are expansion trees and Qa is a dual expan- sion tree for B1, B2, Bs, respectively, then the following are expansion trees for B1 A B2, B1 V B2, Bs > B2, and -BS, respectively: QrAQ2,QrVQ2,Qe 3 Q2, and -Qe. This statement remains true if the role of expansion trees and dual expansion trees is switched. (3) If y is a variable and Q is an expansion tree for [x/y] B then (Vx 4 (Y,Q)) is an expansion tree for Vx B. If Q is a dual expansion tree for [z/y] B then (3x B, (y, Q)) is a dual expansion tree for 3x B. (4) Iftl,-*-, n t are first-order terms and for i = 1, . . . , n, Qi is an expansion tree for [x/ti]B, then (3% B,(tl,Ql),...,(tn,Qn)) is an expansion tree for 3x B. If, however, for i = 1 >“‘, n, Qi is a dual expansion tree for [x/ti]B, then (V~B,(tl,Ql),...,(tn,Qn)) is a dual expansion tree for Vx B. Example 6. An expansion tree for the formula in Example 1 1s [p(a) V q(b)] A (vx (P(X) ’ q(x))> (%P(a) ’ q(a))) ’ (3x !?(4 6% 464, (h q(W). There are, of course, many other expansion trees for this for- mula. We classify certaih expansion trees as fzpansion tree proof8 or simply ET-proof8 if they satisfy two properties one requires that a certain relation on the substitution terms in the tree be acyclic and the other requires that the “deep formula” rep- resented by the tree be tautologous. For the definitions, the reader is referred to [Miller, 19841. Roughly speaking, the deep formula of an expansion tree is simply a formula whose subfor- mulas are taken from the terminal nodes of the tree. The deep formula of the expansion tree in Example 6 is [p(a) V q(b)] A b(a) ’ +)I ’ k?(a) V q(b)]. . Since this is tautologous, this expansion tree is in fact an ET- proof. We now introduce a more general notion of type. A gen- eralized sequent, written P - 4, contains a set of dual ex- pansion trees P and a set of expansion trees 4, such that the single expansion tree [API > [vQ] is an ET-proof. The significance of expansion trees in the integration of natural deduction and resolution comes from the following two facts. First, a resolution refutation of a formula B can be con- verted to an ET-proof of B. Such an algorithm is presented in [Pfenning, 19841. Here, skolem terms introduced by the refutation process must be removed. This, however, is very straightforward [Miller, 19831. The refutation in Example 5 is converted by this procedure to the ET-proof in Example 6. Second, as shown in the next section, a generalized sequent can automatically be converted by a compound tactical to a natural deduction proof of the given type. There are, in general, many possible natural deduction proofs which could have this gener- alized sequent as their type, so such a conversion involves some searching. Here the search is concerned not with the existence of a proof but with the preeentation of the proof. The search for different presentations can also be governed by compound tactics. 5. Expansions Trees-as-Types Expansion trees can be viewed as types of LK proofs in the following sense. The sense in which formulas were types still applies since expansion trees generalize formulas. In addition, the substitution information in an expansion tree indicates how quantifiers within an LK proof get instantiated. For example, let Q be an ET-proof for A. An LK proof 2’ of A is of type Q if the quantifier occurrences in T are introduced using the substitution terms attached to them in Q. Theorem Proving: AUTOMATED REASONING / 20 1 We now return to the description of building natural de- duction proofs. 
Remember that tactics work by examining a type and suggesting a part of the proof which would build an element of that type. We have now introduced a more informa- tive type structure. Hence, when theall-l-tat is called, there would be substitution terms attached to universal quantifiers on the left of the sequent. Such terms can, therefore, be used to do the required universal instantiation. The same is true for the other three quantifier rules. Hence, this new notion of type contains enough information to completely specify how to build a complete natural deduction proof. The following compound tactic performs exactly that operation. (repeat (orelse (then thin-to-axiom axiomatize) and-1-tat imp-r-tat some-1-tat all-r-tat neg-1-tat neg-r-tat or-1-tat and-r-tat imp-1-tat or-r-tat some-r-tat all-1-tat)) There are many other compound tactics for building LK proofs of a generalized sequent. If this one is applied to the generalized sequent - Q where Q is the expansion tree in Example 6 it would yield the natural deduction proof in Example 1, except that the thin-l rule would be swapped with the some-r rule. It is possible to pair with expansion trees even more in- formation to make for a richer type structure. For example, a type can be the triple P - Q; M, where P - Q is a generalized sequent, and M is a mating for the deep formula of [AP] 3 [VQ]. H ere, a mating is a graph of the literals of this for- mula which shows how various literals in it are connected. See [Andrews, 19801, [Andrews, 19811, [Bibel, 19811, and [Miller, 19841 for more on matings. By using matings, it is possible to make various tactics smarter. For example, it is possible to write a thinning tactic which can look “ahead” using the mat- ing to determine that a certain formula in a sequent will never be needed in a certain subproof. The ability to throw away such formulas is very important for building coherent proofs. See [Miller, 19841 f or more on using matings in this fashion. 6. Conclusions Explicit representations of proofs provides this system with some capabilities not generally found in other theorem proving systems. Expansion trees can be used to store complete proofs in a very compact form. Proofs stored in such a form are also very flexible since they only represent a type of a natural de- duction proof. Hence, when one wants to browse through or use such a proof in natural deduction form, there are many different presentations of it that can be made. Representing partial proofs as first-class values provides the ability to stop at any point in the proof process, and resume at a later time. The calculus of X-conversion describes how partial proofs can be composed and the typing system is all that is needed for such compositions to be done soundly. This representation of proofs should also make it possible to implement many different kinds of algorithms on proofs which have been studied in proof the- ory. For example, one particularly exciting item to implement is the automatic conversion of proofs of a certain (constructive) kind to executable programs, such as is done in the PRL system [Bates and Constable, 19851. There is very little about the LK proof system that is cen- tral to the development of this system. In fact, many different and less formal notions of natural deduction, such as natural language oriented explanations (see [Webber, Joshi, Mays, and McKeown, 19831) could also be supported in many of the same ways we have discussed here. 
Our current implementation of this system is built in a combination of Common Lisp and Prolog code. Besides being strongly related to LCF, much of the spirit of this implementa- tion derives from the TPS system described in [Miller, Cohen, and Andrews, 19821. 7. References PI PI PI bl Fl PI PI PI PI Peter B. Andrews, “Transforming Matings into Natural Deduction Proofs,” Fifth Conference on Automated Deduc- tion, Le8 Arc8, France, edited by W. Bibel and R. Kowal- ski, Lecture Notes in Computer Science, No. 87, Springer- Verlag, 1980, 281 - 292. Peter B. Andrews, “Theorem Proving Via General Mat- ings,” Journal of the Association for Computing Machinery 28 (1981), 193 - 214. Joseph L. Bates and Robert L. Constable, “Proofs as Pro grams,” ACM Transactions on Programming Language8 and Systeme, Vol. 7, No. 1 (January 1985) 113 - 136. Wolfgang Bibel, “Matrices with Connections,” Journal of the Association of Computing Machinery 28 (1981), 633 - 645. Jean H. Gallier, Logic for Computer Science: Foundation8 of Automatic Theorem Proving, Harper & Row, 1986. Gerhard Gentzen, Investigations into Logical Deduction8 in The Collected Paper8 of Gerhard Gentzen edited by M. E. Szabo, North-Holland Publishing Co., Amsterdam, 1969, 68 - 131. Michael J. Gorden, Arthur J. Milner, and Christopher P. Wadsworth, ‘Edinburgh LCF,” Lecture Notes in Computer Science, No. 78, Springer-Verlag, 1979. W. A. Howard, “The formulae-as-type notion of construc- tion,” 1969. Published in J. P. Seldin and R. Hindley, ed. To H. B. Curry: Eseaye in Combinatory Logic, Lambda Calculus, and Formal&m, 479 - 490, Academic Press, New York, 1980. Dale A. Miller, Eve Longini Cohen, and Peter B. Andrews, “A Look at TPS,” 6th Conference on Automated Deduc- tion, New York, edited by Donald W. Loveland, Lecture Notes in Computer Science, No. 138, Springer-Verlag, 1982, 50 - 69. (lo] Dale A. Miller, “Proofs in Higher-order Logic,” Ph. D. Dissertation, Carnegie-Mellon University, August 1983. [ll] Dale A. Miller, “Expansion Trees and Their Conversion to Natural Deduction Proofs,” 7th Conference on Auto- mated Deduction, Napa CA, edited by R. E. Shostak, Lec- ture Notes in Computer Science, No. 170, Springer-Verlag, 1984, 375 - 393. 121 Frank Pfenning, ‘Analytic and Non-analytic Proofs,” 7th Conference on Automated Deduction, Napa CA, edited by R. E. Shostak, Lecture Notes in Computer Science, No. 170, Springer-Verlag, 1984, 394 - 413. 131 Dag Prawitz, Natural Deduction, Almqvist & Wiksell, Up psala, 1965. [14] Bonnie Webber, Aravind Joshi, Eric Mays, and Kathleen McKeown, “Extended Natural Language Data Base Inter- actions,” Computers and Mathematic with Applications 9 (1983), 233 - 244. 201 / SCIENCE
1986
115
379
Multi-valued logics Matthew T,. Ginsberg* THE LOGIC GROUP KNOWLEDGE SYSTEMS LABORATORY Department of Computer Science Stanford University Stanford, California 94305 ABSTRACT A great deal of recent theoretical work in inference has in- volved extending classical logic in some way. I argue that these extensions share two properties: firstly, the formal addition of truth values encoding intermediate levels of validity between true (i.e., valid) and false (i.e., invalid) and, secondly, the ad- dition of truth values encoding intermediate levels of certainty between true or false on the one hand (complete information) and unknown (no information) on the other. Each of these prop- erties can be described by associating lattice structures to the collection of truth values involved; this observation lead us to describe a general framework of which both truth maintenance systems are special cases. default logics and 1 Introduction There has been increasing interest in AI generally in infer- ence methods which are extensions of the description provided by first order logic. Circumscription [9], default logic [lo] and probabilistic inference schemes such as that discussed in [7] are examples. Research in truth maintenance systems [4] has involved recording information concerning not only the truth or falsity of a given conclusion, but also justifications for that truth or falsity. This is useful in providing explanations, and also in the revision of inferences drawn using non-monotonic inference rules. Assumption-based truth maintenance systems [3] provide an interesting extension of this idea, taking the truth value of a given proposition to be the set of contexts in which it will hold. My intention in this paper is to show that these different ap- proaches can be subsumed under a uniform framework. Hope- fully, such a framework will lead to a greater understanding of the natures of the individual approaches. In addition, an imple- mentation of the general approach should facilitate the imple- mentation of any of the individual approaches mentioned earlier, in addition to combinations of them (such as probabilistic truth maintenance systems) or new ones yet to be devised. The ideas presented in this paper should not be taken as supporting any specific multi-valued logic, but as supporting a multi-valued approach to inference generally. The specific logic selected in any given application can be expected to depend upon the domain being explored. 2 A motivating example Let me motivate the approach I am proposing with a rather tired example. Suppose that Tweety is a bird, and that birds fly by default. *This work supported by the Office of Naval Research Any of the standard formalizations of default reasoning (such as [9] or [lo]) will allow us to conclude that Tweety can fly; suppose that we do so, adding this conclusion to our knowledge base. Only now do we learn that Tweety is in fact a penguin. The difficulty is that this new fact is in contradiction with the information just added to our knowledge base. Having in- corporated the fact the Tweety can fly into this knowledge base, we are unable to withdraw it gracefully. Truth maintenance systems [4] provide a way around this difficulty. The idea is to mark a statement not as merely “true” or “false”, but as true or false for a reason. Thus Tweety’s flying may depend on Tweety’s being a penguin not being in the knowledge base; having recorded this, it is straightforward to adjust our knowledge base to record the consequences of the new information. 
The truth maintenance approach, however, provides us with a great deal more power than is needed to solve this particular problem. We drew a default conclusion which was subsequently overturned by the arrival of new information. Surely we should be able to deal with this without recording the justification for the inference involved; it should be necessary merely to record the fact that the conclusion never achieved more than default status. In this particular example, we would like to be able to la- bel the conclusion that Tweety can fly not as true, but as true by default. The default value explicitly admits to the possibil- ity of new information overturning the tentative conclusion it represents. 3 Truth values 3.1 Lattices This approach is not a new one. There is an extensive lit- erature discussing the ramifications of choosing the truth value assigned to a given statement from a continuum of possibilities instead of simply the two-point set {t, f}. Typical examples are a suggestion of Scott’s in 1982 [l2] and one of Sandewall’s in 1985 [ll]. Scott notices that we can partially order statements by their truth or falsity, and looks at this as corresponding to an assign- ment to these statements of truth values chosen from some set L which is partially ordered by some relation <t (the reason for the subscript will be apparent shortly). He goes on to note that if we can associate to the partial order st greatest lower bound and least upper bound operations, the set L is what is known to mathematicians as a /nitice (81. Es- sentially, a lattice is a triple {L, A, V} where A and V are binary operations from L x L to L which are idempotent, commutative and associative: aAa=aVa=c Uncertainty and Expert Systems: AUTOMATED REASONING / 243 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. a b Figure 1: A lattice Figure 3: The smallest non-trivial bilattice l t indicating that the probability of the statement in question is known to lie somewhere in the associated probability interval. This proposal also appears in [‘i’] and [5]. Figure 2: The two-point lattice ahb=bAa; aVb=bVa (aAb)Ac=aA(bAc); (aVb)Vc=aV(bVc). In terms of the partial order mentioned earlier, we have a A b = glb(a, b) and a V b = lub(a, b). A is called the meet operation of the lattice; V is called the join. We also require that if a < b, then a A b = glb(a, b) = a and a V b = lub(a, b) = b. This is captured by the absorption identities: aA(aVb)=a; aV(aAb)=a. Lattices can be represented graphically. Given such a repre- sentation, we will take the view that p St q if a path can be drawn on the graph from p to q which moves uniformly from left to right on the page. In the lattice in figure 1, f is the minimal element of the lattice, and t is the maximal element. We also have a <t b; a and c are incomparable since there is no unidirectional path connecting them. Up to isomorphism, there is a unique two-point lattice, shown in figure 2. The truth values in first order logic are chosen from this lattice; all we are saying here is that f St t; “true” is more true than “false”. 3.2 Uncertainty Sandewall’s proposal, although also based on lattices, is a dif- ferent one. Instead of ordering truth values based on truth or falsity, he orders them based on the completeness of the infor- mation they represent. Specifically, Sandewall suggests that the truth values be subsets of the unit interval [O,l], the truth value The lattice operation used is that of set inclusion. 
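As a concrete instance of these definitions, truth values in Sandewall's style can be represented as probability intervals ordered by inclusion, with the full interval [0,1] as the minimal element. The sketch below is illustrative only (the representation and function names are assumptions): the meet of two intervals is the narrowest interval containing both, and the join is their intersection when one exists.

def meet(iv1, iv2):
    # Greatest lower bound: the narrowest interval that contains both arguments.
    return (min(iv1[0], iv2[0]), max(iv1[1], iv2[1]))

def join(iv1, iv2):
    # Least upper bound: the intersection, or None when the two conflict.
    lo, hi = max(iv1[0], iv2[0]), min(iv1[1], iv2[1])
    return (lo, hi) if lo <= hi else None

TRUE, FALSE, UNKNOWN = (1.0, 1.0), (0.0, 0.0), (0.0, 1.0)
print(meet(TRUE, FALSE))              # (0.0, 1.0): only the vacuous interval lies below both
print(join((0.2, 0.7), (0.5, 0.9)))   # (0.5, 0.7): the two pieces of evidence combined
print(join(TRUE, FALSE))              # None: no interval refines both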
Thus true, corresponding to the singleton set {l}, is incomparable to false, which corresponds to the singleton (0). (And each is in turn incomparable with any other point probability, such as (0.4}.) Instead, the inclusion of one truth value in another relates to our acquiring more information about the statement in ques- tion. The minimal element of the lattice is the full unit interval [0, 11; the fact that the probability of some statement lies in this interval contains no real information at all. This is in sharp contrast with knowing, for example, that the probability of the statement in question is .5. If the probability of a coin’s coming up heads is .5, the coin is fair; if nothing is known about the probability, it may well not be. It is clear that the partial order corresponding to Sandewall’s notion is conceptually separate from that in Scott’s construction. To capture it, we introduce a second partial order <k onto our lattice of truth values, interpreting p Sk q to mean~oosely that the evidence underlying an assignment of the truth value p is subsumed by the evidence underlying an assignment of the truth value q. Informally, more is known about a statement whose truth value is q than is known about one whose truth value is p. Since f and t in the two-point lattice corresponding to first order logic should be incomparable with respect to this second partial order, there is no way to introduce this second lattice structure onto the lattice in figure 2. Instead, we need to in- troduce two additional truth values glb,(t, f) and lubk(t, f), as shown in figure 3. Just as p <t q if p is to the left of q in a graph- ical representation, we will adopt the convention that p $ q if p is below q on the page. The two new values are given by u (unknown) and I (contra- dictory). The latter indicates a truth value subsuming both true and false; this truth value will be assigned to a given statement just in case it is possible to prove it true using one method and false using another. We will denote the two lattice operations corresponding to <k by + (hbk) and . (glb,) respectively. In general, we define a bilattice to be a quintuple (23, A, V,., +) such that: 244 / SCIENCE 1. 2. Figure 4: D, the bilattice for default logic (B, A, V) and (B, ., +) are both lattices, and Each operation respects the lattice relations in the alternate lattice. For example, we require that if p <k q and T Sk s, then p h T Sk q /\ S. Equivalently, A must be a lattice homomorphism from the product lattice (B x B, ., +) into (B,+,+) (and similarly for V, - and +). Just as figure 2 depicts the smallest non-trivial lattice, figure 3 depicts the smallest bilattice which is non-trivial in each lattice direction. Belnap [1,2] has considered the possibility of selecting truth values from this bilattice. Another bilattice is shown in figure 4; this is the bilattice of truth values in default logic. In addition to the old values of t, f , u and I, a sentence can also be labelled as dt (true by default) or df (false by default). The additional value * = di + df labels statements which are both true and false by default. This is of course distinct from u (indicating that no information at all is available) or I (indicating the presence of a proven contradiction). We will discuss this bilattice in greater detail in a subsequent section. Before proceeding, however, note that this bilattice shares the elements t, f, I and u with the previous one. 
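Those four shared elements are exactly the smallest non-trivial bilattice of figure 3, and its four operations can be computed directly from the two orders. The sketch below is illustrative; encoding each value as a pair (evidence for, evidence against), with t = (1,0), f = (0,1), u = (0,0) and the contradictory value written c here, is an assumption of the sketch rather than something the text prescribes.

VALS = {'t': (1, 0), 'f': (0, 1), 'u': (0, 0), 'c': (1, 1)}

def leq_t(x, y):
    # Truth order: y has at least as much evidence for, and no more against.
    return VALS[x][0] <= VALS[y][0] and VALS[x][1] >= VALS[y][1]

def leq_k(x, y):
    # Knowledge order: y has at least as much evidence of each kind.
    return VALS[x][0] <= VALS[y][0] and VALS[x][1] <= VALS[y][1]

def glb(order, x, y):
    cands = [z for z in VALS if order(z, x) and order(z, y)]
    return max(cands, key=lambda z: sum(order(w, z) for w in cands))

def lub(order, x, y):
    cands = [z for z in VALS if order(x, z) and order(y, z)]
    return max(cands, key=lambda z: sum(order(z, w) for w in cands))

AND  = lambda x, y: glb(leq_t, x, y)    # the meet ^ in the truth order
OR   = lambda x, y: lub(leq_t, x, y)    # the join v in the truth order
DOT  = lambda x, y: glb(leq_k, x, y)    # . , the glb in the knowledge order
PLUS = lambda x, y: lub(leq_k, x, y)    # + , the lub in the knowledge order

print(PLUS('t', 'f'), DOT('t', 'f'))    # c u
print(AND('u', 'c'), OR('u', 'c'))      # f t : bounds exist even for incomparable values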
In fact, any bilattice will have four distinguished elements, corresponding to the maximal and minimal elements under the two partial or- ders. We will denote these distinguished elements in this fashion throughout the paper. 4 Logical operations In order to apply these ideas, it is insufficient merely to give a framework in which to describe the truth values associated to the sentences of our language. We must also be able to perform inference using these truth values. We now turn to the issue of describing logical operations in a multi-valued setting. 4.1 Extensions and logical connect ives Let L be the set of all well-formed we will define a truth function to be corresponding to an assignment lattice B to each formula in L. d:L+B, formulae in our any mapping of some truth value in the bi- language. In first order logic, consistency is defined for truth functions 4 that are models, so that for each well-formed formulap, d(p) = t or b(p) = f. W e will continue to use this definition in the case of multi-valued logics, calling 4 a model if 4 maps L into the two-point set {t, j}. If 4 and II, are two truth functions with 4(p) $ $(p) for all p f L, we will write C# Sk $J and say that $J is an ettension of 4. If the inequality is strict for at least one p E L, we will write @I <k II, and say that the extension is proper. If $J is a model, we will say that it is a complete extension of 4. Lnformally, an extension of a truth function is what is ob- tained upon the acquisition of more information about some sentence or sentences in L. The extension will be proper if and only if the new information was not already implicit in the ex- isting truth values. The usual logical operators of negation, conjunction, disjunc- tion and implication can be described in terms of natural opera- tions on the bilattice structure of our truth values. Conjunction and disjunction are the most easily described, since they are es- sentially captured by the lattice operators A and V. In order for a model to be consistent, we therefore require: dP A e) = 4(p) A 4(q) (1) +(P v 4 = 4(P) v 4(q). (2) Negation is rather different. Clearly we want to have in gen- eral that 4(-p) <t 4(-q) if and only if 4(p) It 4(q). Somewhat less transparent% that we should have 4(-p) Sk 4(-q) if and only if 4(P) Sk d(q): f i we know less about p than about q, we also know less about the negation of p than about that of q. Additionally, we require that 4(--p) = d(p). This leads us to define negation in terms of a map 7 from B to itself such that: 1. 1 is a bilattice isomorphism from (B,A,V, .,-I-) to (%%A,-,+), and 2. -3 = 1. Note that in the first condition, we have reversed the order of A and V between the two bilattices while retaining the order of . and +. This corresponds to the observations of the previous paragraph. For a model, we require: 4(-p) = -4(P)- (3) We handle implication by retaining the usual identification (P --) 4) f (‘PW). This gives us d(P --+ 4 = -d(P) v 4(q)- (4) We deal with quantification by noting that Vx.p -+ p:, where t is substitutable for x in p and p: is the result of replacing some (but not necessarily all) of the occurrences of x in p with t. This leads us to assume: W’Z-P) = db, -@(p:) It is substitutable for 3: in p). (5) The existential operator is similar: 4(3x.p) = lubt{ddp;)lt is substitutable for T in p}. (6) In general, we will call a truth function r#~ consistent if it has a complete extension satisfying (l)-(6). 
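Conditions (1) through (4) give a direct recursive evaluator for quantifier-free formulas over the four-valued bilattice; clauses (5) and (6) would handle quantifiers by taking the corresponding bounds over substitution instances. The sketch below is illustrative (the pair encoding of the four values and the formula representation are assumptions): negation swaps the two coordinates, as the isomorphism condition requires, while conjunction and disjunction are the truth-order meet and join.

PAIR = {'t': (1, 0), 'f': (0, 1), 'u': (0, 0), 'c': (1, 1)}
FROM_PAIR = {v: k for k, v in PAIR.items()}

def t_and(x, y):
    # Meet in the truth order on the (for, against) encoding.
    return FROM_PAIR[(min(PAIR[x][0], PAIR[y][0]), max(PAIR[x][1], PAIR[y][1]))]

def t_or(x, y):
    # Join in the truth order.
    return FROM_PAIR[(max(PAIR[x][0], PAIR[y][0]), min(PAIR[x][1], PAIR[y][1]))]

def t_not(x):
    # The negation isomorphism: swap evidence for and evidence against.
    return FROM_PAIR[(PAIR[x][1], PAIR[x][0])]

def phi(formula, assignment):
    # Evaluate ('and', p, q), ('or', p, q), ('not', p), ('imp', p, q) or an atom.
    if isinstance(formula, str):
        return assignment.get(formula, 'u')
    op = formula[0]
    if op == 'and':
        return t_and(phi(formula[1], assignment), phi(formula[2], assignment))
    if op == 'or':
        return t_or(phi(formula[1], assignment), phi(formula[2], assignment))
    if op == 'not':
        return t_not(phi(formula[1], assignment))
    if op == 'imp':
        # p -> q is identified with -p v q, as in condition (4).
        return t_or(t_not(phi(formula[1], assignment)), phi(formula[2], assignment))
    raise ValueError(op)

# With A true and B unknown, A ^ B is unknown while A v B is already true.
print(phi(('and', 'A', 'B'), {'A': 't'}), phi(('or', 'A', 'B'), {'A': 't'}))   # u t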
Uncertainty and Expert Systems: AUTOMATED REASONING / 245 Here are some predicate calculus examples: zi %w f B t(orf) 21 ? AAB f f u f t t In the first two cases, $J is a consistent complete extension of 4. Since # in the third case has no consistent complete extension, it is itself inconsistent. Suppose that + is consistent, and let {+,} be the set of its consistent complete extensions. We define $ to be the greatest lower bound of the +i: 3 = db&‘bb is a consistent complete extension of $}. In the following two examples, 4 has two consistent complete extensions given by 41 and 42, and 3 is the greatest lower bound of these. - 4(x) 41cg 42(x) d+) A t t t t B I u AvB u t f 21 t t I t - e4 41(4 42(x) 4(x) A U t f u Td I U f t U Av-A u t t t The above construction is closely related to the usual notion of logical inference. In fact, if we denote by &, the truth function given by we have: Theorem 4.1 p b q if and only if G >k dq. Proof. All proofs can be found in [6]. If p is consistent, &, is the k-minimal truth function in which p is true; the point of the theorem is that q will be true in G if and only if p b q. 4.2 Closure It might seem that 3 is a natural choice for the closure of a Lruth function in general, but it suffers the drawback of having 4(P) >k E&{t,f} for all P. As our bilattice of truth values becomes more complex, such a closure will be insensitive to some of the information c_ontained in 4. In the default bilattice D, for example, we have $(p) >k *. Contrast this with theorem 5.1, where the closure of 4 can also take the values df, df or U. The general construction is somewhat more involved; the reader is referred to [6] for details. If we denote the closure of some truth function 4 by cl(4), the key features of the con- struction are the following: 1. It can be described completely in terms of the bilattice struc- ture of the truth values. 2. Logical inference always “adds” information to a truth func- tion, so that 4 <k cl($) in all cases. 3. The construction is non-monotonic, so that it is possible to have 4 <k $ without cl(4) 5 k cl($). An example of this is given in the next section. The final remark above refers only to a portion of what is generally referred to as “non-monotonic” behavior. Consider a truth function with 4(p) = dl but cl(4)(p) = f, for example; here inference is behaving “non-monotonically” in the sense that 4(p) >t u but Cl(d)(P) <t u. It is behaving monotonically, how- ever, in that 4(p) <k cl(&)(p). lt turns out that the computa- tional difficulties which plague non-monotonic inference systems arise principally as a result of the potential non-monotonicity in the Ic sense; loosely speaking, k-monotonicity is enough to guar- antee that we can maintain our knowledge base using updates. There are therefore substantial practical advantages to be gained by recognizing situations where it can be demonstrated that the closure operation is k-monotonic. Details are in [6]. 5 Examples Let me end by very briefly describing Reiter’s default reason- ing and truth maintenance in terms of this sort of construction. The second of these is extremely straightforward, essentially re- quiring us merely to identify those statements that support some fixed one. Default reasoning is a bit more intricate, since the philosophy underlying Reiter’s approach is very different from that of the one we have been presenting. 
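Before looking at those two cases, the hat construction behind Theorem 4.1 can be made concrete for the propositional fragment. In the sketch below, which is illustrative and assumes the initial truth function constrains atoms only, the consistent complete extensions are the classical models lying k-above the given atom values, and the closure value of a formula is the k-greatest lower bound of its value over those models.

from itertools import product

def models(atom_values):
    # All classical models (atom -> 't'/'f') extending a partial assignment;
    # atoms marked 'u' are unconstrained.
    atoms = sorted(atom_values)
    for bits in product(['t', 'f'], repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        if all(v == 'u' or model[a] == v for a, v in atom_values.items()):
            yield model

def value(formula, model):
    # Two-valued evaluation of ('not', p), ('and', p, q), ('or', p, q) or an atom.
    if isinstance(formula, str):
        return model[formula]
    op = formula[0]
    if op == 'not':
        return 't' if value(formula[1], model) == 'f' else 'f'
    vals = {value(formula[1], model), value(formula[2], model)}
    if op == 'and':
        return 'f' if 'f' in vals else 't'
    return 't' if 't' in vals else 'f'

def hat(formula, atom_values):
    # The k-glb over all consistent complete extensions: t, f, or u.
    vals = {value(formula, m) for m in models(atom_values)}
    return vals.pop() if len(vals) == 1 else 'u'

# With A completely unknown, A itself stays unknown while A v -A closes to t,
# as in the second table above.
print(hat('A', {'A': 'u'}), hat(('or', 'A', ('not', 'A')), {'A': 'u'}))   # u t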
5.1 Default logic Reiter defines default reasoning in terms of a default theory (R, T) where T is a collection of first order sentences, and R is a collection of defaults, each of the form indicating that if cy holds and all of the p’s holds. If every default rule is of the form CV:W 9 W are possible, then w so that we infer UJ from cr in the absence of information contrary, the default theory is called normal. to the Reiter goes on to define an eztenaion of a default theory (R,T), and shows that these extensions correspond to the col- lections of facts derivable from such a theory. Since there may be conflicting default rules, it is possible that a given default theory have more than one extension. The bilattice for default logic appeared in figure 4. where i indexes the elements in R. Associate to (R,T) a truth junction 4 given by ifpisct;+w; jorsomei butpeT; Then: Cl(dJ)(P) = ’ t, CUT I= P; f, 47 T I= 1~; *, ifl p is true in some extensions of (R,T) and false in others; dt, ifl T pt p but p is true in some of the extensions and false in none; dj, iff T k -p but p is false in some of the extensions and true in none; L uv ifl p is undecided in all extensions of (R,T). 216 / SCIENCE Theorem 5.2 monotonic. The closure operalion is potentially non- Suppose that we have two default rules, one of which indicates that birds can fly, and the other that flying things are stupid. Consider the following two truth functions: Theorem 5.3 Let pl, . . . , p, be possible assumptions in our knowledge base; and suppose that 4 is given by 4(q) = { \!{P*H * nil 17 ythqe;w;;efo7- S0me i; Then ifql,... , qm form a subset of the pi ‘s and x is an arbitrary sentence, 1 ({91, - -. , b)) . nil 1 Sk cl(cb)(x) if and only if the ql’s form a justification of x. 6 Future work Clearly 4 <k +. But in light of the previous theorem, the closures of 4 and ?I, are given by: P bird(Tweety) penguin(Tweety) flies(Tweety) dumb(Tweety) It is painfully clear that the work presented in this paper only scratches the surface of the approach being discussed. Both theoretical and engineering issues need to be explored, There are many other non-standard approaches to inference; can they be captured in this framework? Circumscription and probabilistic schemes seem especially important candidates. Equally important is an implementation of the ideas we have We do not have cl(d) 5 k cl($). The point is that the fact discussed. Ideally, a general-purpose inference engine can be that Tweety is now known not to fly keeps the default rule about constructed which accepts as input four functions giving the stupidity from firing. 0 two glb and two lub operations in the bilattice, and which then performs suitable multi-valued inference. The key issue is the 5.2 Truth maintenance determination of what price must be paid in terms of efficiency for the increased generality of the approach we are proposing. In a truth maintenance system, the truth values assigned to Work in each of these areas is currently under way at Stan- propositions contain information concerning the reasons for their ford. truth or falsity. We can capture this using a multi-valued logic in which the truth values consist of pairs [ a . b ] where a and b are respectively justifications for the truth and falsity of the REFERENCES statement in question. We can assume that these justifications are themselves in disjunctive normal form, consisting of a list of parallel conjunctive justifications. An example will make this clearer. Suppose that p is the statement q V (T A s). 
Then if q, T and s are all in the knowledge base, the truth value of p will be Either q or the {T, s} pair provides independent justification for p, and there is no justification for up. We will assume in general that if the truth value of p is [ a . b 1, either a or b is empty; in other words, that either p or lp is unjustified. Given two justifications j, and j, expressed in disjunctive normal form, we write j, 5 j, if every conjunctive subclause in j, contains some subclause in j, as a subset: (a1 . . . a,) 5 (bl . . . bm) if for each a;, there is some bj with b, C a;. It is not hard to see that the empty justification (containing no information) is a minimal element under this partial order, while the justificac tion (0) consisting of a single empty conjunct (a justification needing no premises) is maximal. If j, 5 j,, we now define: [ j, . nil ] St [ j, . nil ] [jl .nil]<k [j2 .nil] (7) and [l] N. D. Belnap. H ow a computer should think. In Proceed- ings of the Oxjord International Symposium on Contem- porary Aspects of Philosophy, pages 30-56, 1975. [2] N. D. Belnap. A useful four-valued logic. In G. Epstein and J. Dumm, editors, hiiodern Uses of hfulliple-valued Logic, pages 8-37, D. Reidel Publishing Company, Boston, 1977. [3] J. de Kleer. An assumption-based truth maintenance sys- tem. Artificial Intelligence, 28:127-162, 1986. [4] J. Doyle. A truth maintenance system. Artificial Inlelli- gence, 12:231-272, 1979. (51 h/I. L. Ginsberg. Analyzing Incomplete Informalion. Tech- nical Report 84-17, KSL, Stanford University, 1984. [6] M. L. Ginsberg. h4ulti-valued Logics. Technical Re- port 86-29, KSL, Stanford University, 1986. [7] M. L. Ginsberg. N on-monotonic reasoning using Demp- ster’s rule. In Proceedings of the American Association for Artificial Intelligence, pages 126-129, 1984. [8] G. Gr8tzer. General Lattice Theory. BirkhZuser Verlag, Basel, 1978. [9] J. McCarthy. Applications of circumscription to formaliz- ing common sense knowledge. Arlificial In2el/igence, 89- 116, 1956. [lo] R. Reiter. A logic for defalllt reasoning. Artificial Intel/i- gence, 13:81-132, 1980. [ll] E. Sandewall. A functional approach to non-monotonic logic. In Proceedings of Ihe Ninth International Joint Con- [ nil . jl ] >t [ nil . j2 ] [ ni1 . jl 1 Sk [ ni1 . j2 1 t8) ference on Artificial Intelligence, pages 100-106, 1985. L121 D. s. Scott . Some ordered sets in computer science. In The k-join t . f is of course I as usual. Note the sense of the I. Rival, editor, Ordered Sets, pages 677-718, D. Reidel first inequality in (8). Publishing Company, Boston, 1982. The analog to theorem 5.1 is now: Uncertainty and Expert Systems: AUTOMATED REASONING / 247
1986
116
380
IMPLEMENTATION OF AND EXPERIMENTS WITH A VARIABLE PRECISION LOGIC INFERENCE SYSTEM* Peter Haddawy Intelligent Systems Group Department of Computer Science University of Illinois Urbana, Illinois 61801 ABSTRACT A system capable of performing approximate inferences under time constraints is presented. Censored production rules are used to represent both domain and control informa- tion. These are given a probabilistic semantics and reasoning is performed using a scheme based on Dempster-Shafer theory. Examples show the naturalness of the representation and the flexibility of the system. Suggestions for further research are offered. I INTRODUCTION It is a Sunday afternoon and your fully autonomous car is taking you for a drive. Suddenly a truck pulls out into the road ahead of you. Your car has 5 seconds to decide what to do. If your car were powered by current rea- soning technology, chances are it would never reach a decision because while trying to deter- mine the best course of action it would hit the truck, destroying both itself and you. In this situation any decision, even a rough guess, is better than indecision. What is needed is a sys- tem capable of producing the best decision possi- ble within a given time limit. In any practical reasoning process there are extra-logical cost constraints such as time and resource limitations which must be taken into account. The tradeoff between the cost, certainty, and specificity of inferences can be used to flexibly adjust to these constraints. * Th’ IS work was supported in part by the Defense Advanced Research Project Agency under grant N00014-K-85-0878, in part by the National Science Foundation under grant NSF DCR 84- 06801, and in part by the Office of Naval Research under grant N00014-82-K-0186. Certainty refers to the degree of belief in a statement, while specificity refers to the degree of detail of a description. The idea of a logic in which the certainty of an inference could be varied to conform to cost constraints was presented by Michalski and Winston [1985]. This Variable Precision Logic (VPL) used cen- sored production rules to encode both domain and control information. The rules take the form P->D[C read If P then D unless C. The unless part of the rule is called the censor. Censors represent exception conditions and as such are considered to be false most of the time. Therefore, the determination of their truth values is given a lower priority than that of rule antecedents. Whereas unlimited resources are devoted to checking the antecedents, only a lim- ited amount of resources is devoted to checking the censors in time critical situations. The unless symbol is logically interpreted as an exclusive-or operator between the censor and the consequent. Thus, given the rule Sunday -> Go to the park 1 Weather is bad, we can conclude that if it is Sunday and the weather is good, I will go to the park; and if it is Sunday and the weather is bad, I will not go to the park. Formally, P,AP,A * - * +D [C,vC,v - - . is logically interpreted as P,AP,A - - - A-( C,vC,v...)--+D and P,AP,A ’ ’ - A(C,VC,V...)--D 238 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. In order to make the exceptions quantitative, numerical parameters are associated with each rule, representing the strength of inference when the truth value of the censor is known and when the value is unknown. These values allow the precision of inferences to be varied. 
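As a small illustration of this reading (an assumption about encoding rather than the authors' implementation), a censored rule can be evaluated purely logically once the truth values of its antecedent and censor are known: P together with a false censor yields D, and P together with a true censor yields not-D.

def censored_rule(p, c):
    # p and c are True, False, or None (unknown); returns the deduced value
    # of D, or None when nothing follows.
    if p is not True or c is None:
        return None          # antecedent not established, or censor unchecked
    return not c             # P ^ ~C -> D   and   P ^ C -> ~D

# Sunday -> go-to-the-park | weather-is-bad
print(censored_rule(True, False))   # True: Sunday and good weather
print(censored_rule(True, True))    # False: Sunday but bad weather
print(censored_rule(True, None))    # None: censor left unchecked under time pressure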
This paper presents a formalization of the notion of uncertainty in censored produc- tion rules and an implementation of an inference system capable of varying the certainty of infer- ences to conform to given time limits. II FORMALISM AND THEORY Uncertain inference in the VPL sys- tem is performed using a scheme based on Dempster-Shafer theory [Shafer 19761. Domain information is represented in the form of rules and facts. A fact is assigned a certainty represented by a Shafer interval, [s p]. The s value indicates the support for a proposition, while the p value indicates its plausibility. The intervals for A & 1A are related by p(A) = 1 - $(-IA). The s and p values can also be thought of as the minimum and maximum probabilities of the proposition. The amount of uncertainty in a proposition is defined as the difference of the values. A value of ‘unknown’ is represented as the interval [0 l]. Th e value of a conjunction or disjunction of facts is calculated by applying the formulas for probabilistic product or sum respectively to the support and plausibility values separately. Rules are interpreted as expressing conditional probabilities. Beliefs are propogated across the rules using an approach similar to that employed by Ginsberg [1984] and derived by Dubois & Prade [1985]. Suppose we have the rule A -> B, where prob(l3~A)~[S,P,] and prob(A)EISAPA]. Th en it can be shown that prob(B) E [S,S,, I--s,+S,P,]. Now if prob(d3 / A)E[S,P,] th en P, = l-S, from which it follows that prob(B)EISAS,, l-S,,%]. To use this scheme, the certainty of a rule is represented by four values: Q p 7 S, where a=S,forPAlC->D p=S;forPAC->lD 7= S, for P -> D s=S;forP-> YD These values are constrained by the following restrictions: When the value of the censor is known, the Shafer interval for the conclusion D is computed according to s(D) = @‘)[l - p(C)Io p(D) = 1 - s(P)s(C)@ When the censor value is unknown, the formulas are s(D) = s(P)a p(D) = 1 - s(P)/3. Evidence for multiply argued conclu- sions is combined using Dempster’s orthogonal sum rule. This requires the assumption that the evidence events are conditionally independent. The formula used to combine two Shafer inter- vals is similar to that in [Ginsberg 19841: -- [u b]@[c d] = l- “_” bd I-(ad+bc) l-(ad+bc) 1 The above discussion has presented an approximate inference scheme for proposi- tional logic, but the VPL system uses a typed predicate logic representation. The type infor- mation enumerates the elements of a finite domain for each predicate argument. In this representation, terms containing only ground instances are equivalent to propositional logic and thus present no additional problems. How- ever, a semantics for expressions with free vari- ables is needed. Rules of the form A(x,y) -> B(x), with an associated certainty [s p] are inter- preted as Vx,y p(B(x)lA(x,y)) = [s p]. This is essentially a short-hand for listing rules over the entire domain of x and y. Similarly, a fact A(x) with certainty [s p] is interpreted as Vx p(A(x)) = Is PI. Uncertainty and Expert Systems: AUTOMATED REASONING / 239 III SYSTEM OVERVIEW The VPL system consists of six main components: the user interface, the parser, the knowledge base, the unifier, the inference engine, and the rule-base analyzer. The system is implemented in Common Lisp and runs on a Symbolics 3640. The system is designed to be fully interactive for incremental rule base develop- ment. The user may assert or retract rules and facts, define new types and predicates, and make queries. 
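Before describing the analyzer, the Section II arithmetic can be made concrete. The fragment below is an illustrative sketch (the function names are assumptions); it takes the censor-unknown branch to use the gamma and delta strengths, which is what the worked runs in Section IV imply.

def propagate(sp_p, sp_c, alpha, beta, gamma, delta, censor_known=True):
    # Shafer interval [s, p] for the conclusion D of one censored rule.
    s_p, p_p = sp_p
    if censor_known:
        s_c, p_c = sp_c
        return (s_p * (1.0 - p_c) * alpha, 1.0 - s_p * s_c * beta)
    return (s_p * gamma, 1.0 - s_p * delta)

def dempster(iv1, iv2):
    # Orthogonal sum of two intervals [a, b] and [c, d] over the frame {D, not-D}.
    a, b = iv1
    c, d = iv2
    k = 1.0 - (a * (1.0 - d) + (1.0 - b) * c)    # one minus the conflicting mass
    return (1.0 - (1.0 - a) * (1.0 - c) / k, b * d / k)

# The flying-bird rule has (alpha, beta, gamma, delta) = (1.0, 1.0, 0.9, 0.05);
# a certain bird whose censors cannot be checked gets the weaker interval.
print(propagate((1.0, 1.0), (0.0, 1.0), 1.0, 1.0, 0.9, 0.05, censor_known=False))
# (0.9, 0.95)
print(dempster((0.7, 0.9), (0.6, 0.8)))    # approximately (0.85, 0.9)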
Once a rule base is complete, the user may perform an analysis of inference times. The rule-base analyzer determines for each pos- sible query the inference time required for all uniform depths of censor chaining. A censor chain is a rule chain in the search tree leading from a censor. A list of times with associated depths for each query is stored in the time data base. A user query may have an optional time limit associated with it, in which case the system searches the time data base to determine the maximum censor chaining depth which will guarantee a response in the requried time. If no time limit is specified, chaining depth is unlim- ited. The system performs backward chaining inference, with possibly limited search depth on censors. Inference is performed in two stages: search and calculation. The search strategy is breadth first and exhaustive. The exhaustive search is achieved by generating all consistent ground instances of any free variables after unification. To satisfy a goal, the system searches for a fact which unifies with the goal. If none is found, it tries to find rules which unify with the goal. If both of these attempts fail, the query is given a value of [0 11. During the search process, instructions for performing the certainty calculations are put on a calculation stack. When the search terminates, the entries on the calculation stack are evaluated and put on a value list. The computation on the bottom of the stack corresponds to the user query. I-V EXAMPLES This section presents two simple examples to demonstrate the system’s capabili- ties. The first is intended to highlight the approximate inference methods. The idea is that a bird can fly unless it is a special kind of bird such as a penguin or is in an unusual condi- tion such as dead. The input file shows domain type declarations, followed by predicate declara- tions, followed by rules. Following the input file is a log of some example runs. After the rules are loaded, the system is told that tweety is a dead bird, from which it concludes that tweety cannot fly. Next, changing our certainty in tweety’s death changes the certainty in his ability to fly propor- tionately. When the system is told that tweety may be a kiwi, this information combines with the possibility of his death to further decrease our belief in his ability to fly. Finally, if tweety is neither in an unusual condition nor a special bird, he is able to fly. 
type (animal (spot rover jane tweety road-runner)) (bird (tweety road-runner)) pred (is-bird (animal)) (flies (animal)) (is-special-bird (bird)) (is-in-unusual-condition (animal)) (is-penguin (bird)) (is-ostrich (bird)) (is-emu (bird)) (is-kiwi (bird)) (is-domestic-turkey (bird)) (is-dead (animal)) (is-sick (animal)) (has-broken-wing (bird)) assert ((is-bird 8x) => (flies 8x) 1 (is-special-bird 8x) (is-in-unusual-condition 8x) 1.0 1.0 .9 .05) ((is-dead $x) = > (is-in-unusual-condition $x) 1.0 0.0) ((is-sick $x) = > (is-in-unusual-condition 8x) 0.9 0.06) ((has-broken-wing $x) = > (is-in-unusual-condition 8x) 1.0 0.0) 240 / SCIENCE ((is-penguin 8x) => (is-special-bird 8x) 1.0 0) ((is-ostrich $x) => (is-special-bird $x) 1.0 0) ((is-emu $x) = > (is-special-bird 8x) 1.0 0) ((is-kiwi $x) => (is-special-bird tf;x) 1.0 0) ((is-domestic-turkey $x) = > (is-special-bird $x) 1.0 0) Example Runs -- Tweety is a dead bird --- ENTER Command > assert ENTER Command or assertion > ((is-bird tweety) 1 1) ENTER Command or assertion > ((is-dead tweety) 1 1) ENTER Command or assertion 0.027635619 seconds elapsed time <RESULT> [l.OO 1.001 The next example is the one described in the introduction. It shows the abil- ity of the system to vary the depth of censor chaining in response to time limits. The input file shows rules for determining if a car can stop in time to avoid an obstacle based on the condi- tion of its brakes and the road. A log of the sample run shows the effect of varying the time limit. With a censor chaining depth of 1 or less the system cannot determine the truth values of the road-condition censor and thus uses the more approximate version of the rule. ENTER Command or make query of system > (flies tweety) using censor chaining depth of UNLIMITED 0.103377685 seconds elapsed time <RESULT> [O.OO 0.001 -- reduce certainty in Tweety’s death - ENTER Command or make query of system > (assert ((is-dead tweety) .7 .8)) ENTER Command or assertion > (? (flies tweety)) using censor chaining depth of UNLIMITED 0.1013306 seconds elapsed time <RESULT> [O.OO 0.3Oj --- suspect that Tweety is a kiwi --- ENTER Command or make query of system > (assert ((is-kiwi tweety) .3 .5)) ENTER Command or assertion > (? (flies tweety)) using censor chaining depth of UNLIMITED 0.10849539 seconds elapsed time <RESULT> [O.OO 0.211 --- Tweety is healthy and normal -- ENTER Command or make query of system > (assert (( is in - - unusual-condition tweety) 0 0)) ENTER Command or assertion > ((I is-special-bird tweety) 1 1) ENTER Command or assertion > (? 
(flies tweety)) using censor chaining depth of UNLIMITED We (level (low medium high)) (rating (good fair poor)) (substance (gravel ice)) (place (road ground)) (temp-type (below-freezing moderate hot)) (looks (shiny rough)) pred (speed-distance-ratio (level)) (can-stop-in-time()) (road-condition (rating)) (brake-condition (rating)) (on (substance place)) (temperature (temp-type)) (road-appearance (looks)) (construction-site 0) (sound-of-pebbles-hitting-underside-of-car 0) assert ; rules ( (- speed-distance-ratio high) => (can-stop-in-time) 1 (road-condition poor) (brake-condition poor) 1.0 1.0 .85 .l) ( (on ice road) = > (road-condition poor) 1.0 0) ( (on gravel road) => (road-condition poor) .9 .l) ( (temperature below-freezing) (road-appearance shiny) = > (on ice road) .9 .I) ( (construction-site) (sound-of-nebbles-hitting-underside-of-car1 Uncertainty and Expert Systems: AUTOMATED REASONING / 24 1 => (on gravel road) .9 .l) ; facts ((speed-distance-ratio high) .05 .15) ((temperature below-freezing) .2 .3) ((road-appearance shiny) 0 0) ((construction-site) 1.0 1.0) ((sound-of-pebbles-hitting-underside-of-car) .8 .85) Example Runs --- time limit of 1 second -- ENTER Command or input file > (? ((can-stop-in-time) 1)) using censor chaining depth of 2 0.08085977 seconds elapsed time <RESULT > [O.OO 0.451 --- time = .05 second --- ENTER Command or make query of system > (? ((can-stop-in-time) .05)) using censor chaining depth of 1 0.031729784 seconds elapsed time <RESULT > [0.72 0.921 - time = .03 second --- ENTER Command or make query of system > (? ((can-stop-in-time) .03)) using censor chaining depth of 0 0.017470836 seconds elapsed time <RESULT> [0.72 0.921 --- time limit too low --- ENTER Command or make query of system > (? ((can-stop-in-time) .Ol)) Cannot perform inference in requested time. Minimum guaranteed time is 0.018423745 set V CONCLUSIONS It has been shown that the formal- ism of censored production rules when given a probabilistic semantics allows a system to adjust the certainty of inferences to conform to time constraints. Such a system has numerous appli- cations in situations where decisions must be made in real time and with uncertain informa- tion. Examples range from medical expert systems for operating rooms to domestic robots. In the current system, if the value of a rule’s censor is known, the a! and /3 certainty values are used. If the value is unknown, the 7 and 6 values are used. A better approach would be to look at the degree to which the censor value is known and use this to interpolate between the o & p and 7 & S rule certainty values. Much world knowledge is best expressed in the form of taxonomies. Taxo- nomies carry more information than simple col- lections of rules. To make the system more effective, I am working on incorporating special- ized inference rules for reasoning in taxonomies. This paper has only investigated the trade-off between inference time and certainty. The trade-offs between cost and specificity and certainty and specificity need yet to be explored. This is a direction in which the results of machine learning research hold much promise. ACKNOWLEDGEMENTS I would like to thank Prof. Michalski for his guidance, Lisa Wolf for comments on a draft of this paper, and John Wiegand and Jim Kelly for help on an earlier implementation. REFERENCES Dubois, D., Prade, H. “Combination and Propo- gation of Uncertainty with Belief Fuctions” In Proc. IJCAI-85. Los Angeles, California, August, 1985, pp. 111-113. Ginsberg, M.L. 
“Non-monotonic Reasoning Using Dempster’s Rule” In Proc. AAAI-84. Austin, Texas, August, 1984, pp. 126-129. Michalski, R.S., Winston,P.H., “Variable Preci- sion Logic” MIT AI Memo 857, Artificial Intelligence Laboratory, MIT, August, 1985 (accepted for AI Journal, 1986). Shafer, G. A Mathematical Theory of Evidence. Princeton University Press, Princeton, New Jersey, 1976. 242 / SCIENCE
1986
117
381
BAYESIAN INFERENCE WITHOUT POINT ESTIMATES Paul Snow Hawthorne College 59 Maple Ave #107 Keene, NH 03431 ABSTRACT It is conventional to apply Bayes' formula only to point estimates of the prior probabilities. This convention is unnecessarily restrictive. The analyst may prefer to estimate that the priors be- long to some set of probability vectors. Set esti- mates allow the non-paradoxical expression of ig- norance and support rigorous inference on such everyday assertions as "one event is more likely than another" or that an event "usually" occurs. Bayes' formula can revise set estimates, often at little computational cost beyond that needed for point priors. Set estimates can also inform statis- tical decisions, although disagreement exists about what decision methods are best. I INTRODUCTION Probabilistic information often comes in forms other than point estimates. “It is more likely to rain to day than not" is an intelligible statement about a probability even though it gives no speci- fic value for the chance of rain. The statement is also useful as it stands; it helps us decide what to wear outdoors. A point estimate, e.g. "The chance of rain to- day is seventy percent", might be more useful. If our weather source doesn't know the precise proba- bility, however, then we'd surely rather have the "more than fifty percent" estimate than nothing at all. We might even be grateful that our source did not pretend to have more precise information than was actually warranted. Such modesty wins no applause from conven- tional Bayesians, especially those who work in the tradition of Savage. From their vantage, every statement about probabilities ought to assert point estimates for the events of interest. Researchers in artificial intelligence who use Bayesian inference have largely adopted the point estimate restriction as given. Further, it appears that some researchers reject probability methods in favor of non-additive belief measures partly be- cause they attribute certain shortcomings of point estimates to probability estimates in general. Freed of the restriction to points, probabil- ity estimates can be as expressive as any fuzzy possibility. The liberalization of probability comes at what is often a modest cost in computa- tional effort, and at no cost at all in statistical rigor. Bayes' formula still works, the intuitively meaningful "relative frequency" interpretation of probabilities still holds, and non-point estimates retain considerable power to guide decisions under uncertainty. II QUALITATIVE ASSERTIONS One obvious difficulty with point estimates is that the analyst simply may not know the probabil- ities of the interesting events with much precision. Zadeh (1985) cites the commonness of such imprecise probability knowledge as the key factor motivating a "fuzzy probability". If the analyst is not res- tricted to point estimates, however, imprecision poses little problem for (crisp) statistical inference. A dramatic instance of imprecise knowledge oc- curs when the analyst is totally ignorant of the event probabilities. The conventional, point-bound, representation of utter ignorance is to assign equal probabilities to each possible "state of the world". It is well-known that it's difficult to ex- press ignorance consistently by this method when there are three or more mutually exclusive states. For definiteness, suppose there are three such states. Each state is assigned a probability of one third. 
The disjunctive probability of any two states (the sum of the two states' probabilities, or 2/3) is strictly greater than the probability of the third state (i.e. l/3). If the analyst is truly ignorant, how does one know that any state is less likely than the disjuction of the other two? Such problems have led some workers to embrace cardinal measures of belief that are point-valued, but not additive (Shackle, 1949; Prade, 1985). Another answer is to allow the analyst to say that the vector of correct state probabilities belongs to some set. For ignorance, that is the "vacuous set", the set of all probability vectors with the right number of states. In the general case, where the analyst's know- ledge is imprecise, but not so completely imprecise as ignorance, the analyst might choose any set that is thought to contain the correct vector. We do not assume that the analyst has an opinion about which member of the set is the right one, only that the correct vector is not to be found outside the cho- sen set. If the analyst's imprecise knowledge happens Uncertainty and Expert Systems: AUTOMATED REASONING / 233 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. to involve linear equalities or inequalities among the state probabilities, then the resulting esti- mate set has a simple and convenient geometry. The linear relations define hyperplanes in the proba- bility vector space, and the estimate set is the intersection of half-spaces bounded by these hyper- planes. The resulting figure is a polytope: a con- vex set with a finite number of vertices located where the hyperplanes intersect. To construct the estimate set, the analyst simply enumerates the vertices. For instance, the analyst may know minimum values for the various state probabilities (at least some of the minima being positive in the non-trivial case). If there are n states, then the analyst's know1 edge can be expressed as the n in- equalities Pl 3 Ll, P2 3 L2, . . . , Pn 3 Ln. Tf T is one minus the sum of the minima (T > 0), then it is simple to show that the n vertices are: (L1+TiL:2, . . . , Ln), (11, L2+T, . . . , Ln), . . . . , . . . . Ln+T) Another common kind of estimate is an ordering of state probabilities, that is, the n-fold linear inequality Pl 3 P2 3 . . . 3 Pn. The seT representing this assertion also has fi vertices, which are (l/n, l/n, . . . l/n), (l/(n-l), .I. , l/(n-l), 0), . . . . (1, 0, l ** 3 0) Not all simple probability statements that assert linear relationships among the probabilities give rise to a small (i.e., comparable to the number of states) number of vertices. The number of vertices needed to represent probability maxima is subject to combinatorial explosion in bad cases. E.g., if there are n states and each probability is no more than 2/n, t?ien each vertex has rJ2 elements equal to 2/n and n/2 elements equal to zero. There are C(n, n/T) sucTi linearly independent vectors. The information possessed by the analyst might vary from state to state; perhaps a point estimate for one, a range for another, an ordering among others and a bound on the disjunction of still others (that is, a bound on the sum of probabili- ties, also a linear inequality). The basic proce- dure of defining the estimate set by enumerating the vertices is the same (and one hopes the number of maxima is small, or the maxima are well- behaved). 
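For reference, the two vertex sets just described are easy to generate mechanically. The sketch below is illustrative (plain Python, names assumed): one routine builds the n vertices for componentwise minima L1, ..., Ln, the other the n vertices for the ordering P1 >= P2 >= ... >= Pn.

def minima_vertices(minima):
    # Vertices of { P : P_i >= L_i for all i, sum(P) = 1 }, assuming sum(L) <= 1.
    slack = 1.0 - sum(minima)                 # the quantity T in the text
    return [[m + (slack if j == i else 0.0) for j, m in enumerate(minima)]
            for i in range(len(minima))]

def ordering_vertices(n):
    # Vertices of { P : P_1 >= P_2 >= ... >= P_n, sum(P) = 1 }.
    return [[1.0 / k] * k + [0.0] * (n - k) for k in range(n, 0, -1)]

print(minima_vertices([0.2, 0.1, 0.1]))
# three vertices: (0.8, 0.1, 0.1), (0.2, 0.7, 0.1), (0.2, 0.1, 0.7), up to rounding
print(ordering_vertices(3))
# the uniform vector, then (1/2, 1/2, 0), then (1, 0, 0)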
Although linear relationships are "special cases" of the possible probability knowledge, it is remarkable how easily they mesh with many common qualitative descriptions of the state probabili- ties. Nilsson (1986) discusses the construction of polytopes from linear relations that arise from certain formal logical statements about probabili- ties. maxima. The breakpoints for such representations may be arbitrary (does "almost always" mean P > .8? P> .9?), but not obviously more so than the esti- mates of membership grades used with fuzzy set methods. Freed of the point restriction, probability estimates are evidently more useful in the face of imprecise qualitative statistical descriptions than some workers have believed. III BAYESIAN INFERENCE WITH SET PRIORS --- Suppose the analyst has chosen a set represen- tation for the probability information available before observing any evidence. It would be helpful if there were some way to revise the estimate later, when some evidence has been observed. If the analyst knows the conditional probabi- lity of seeing the evidence given each of the pos- sible states, then the analyst can apply Bayes' formula point-by-point to the prior set, making a posterior set in the process. If the correct prior belongs to the original estimate set, then clearly the correct posterior vector is in the revised set. That much is self-evident. Point-by-point Bayesian revision works, but it is apt to be pro- hibitively cumbersome for large prior sets. We can lower the computational burden quite a bit if the estimate set has a congenial geometry for revision. In the discussion to follow, we assume that the conditionals are available to us as point estimates. We could allow the conditionals to be set estimates, but that would obscure the present argument and add unilluminating complication. As luck would have it, our old friend the poly- tope, the hero of the last section, has a congenial geometry for Bayesian revision. It turns out that if the prior set is a polytope, then the posterior set will also be a polytope. The vertices of the posterior polytope are the Bayes' formula posterior values of the prior set's vertices. For proof, see Levi (1980). To apply Bayes' formula to a polytope, there- fore, one need only find the Bayes' posteriors of the prior vertices. As long as the number of ver- tices is small, polytope revision is simple and cheap. Given that the polytope is also an expres- sive geometry, this is a heartening result. Polytopes are so gifted that a word of caution is in order. Polytopes are not the only convenient geometry for Bayesian revision, nor are they the only kind of set estimate that can occur in easily imagined circumstances. Levi goes too far when he offers convex sets as the only defensible geometry. A set of discrete points, for example, is not con- vex. It isn't hard to imagine cases where the ana- lyst knows that the true probability vector is either V or W, and no value "in between". Bayesian revision of this estimate set is quite efficient. Natural language, too, seems rich in lineari- Polytopes are emphasized here because they are ties. For example, "Sl is the typical outcome" versatile and convenient, but they are not obliga- suggests an ordering in which Pl > Pj for all j. Words like "often" "usually" or "almost alway?" tory. Restricting the geometry of estimate sets to polytopes would be as artificially confining as the suggest minima; "rirely" and "almost never" connote point restriction has been. 
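The vertex-by-vertex revision itself is a one-line computation per vertex. The sketch below is illustrative (names assumed) and, as in the text, takes the conditional probabilities of the evidence given each state as point values.

def bayes(prior, likelihood):
    # Posterior state probabilities for a single prior vector.
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

def revise_polytope(vertices, likelihood):
    # Bayes' posteriors of the prior vertices; their hull is the posterior set.
    return [bayes(v, likelihood) for v in vertices]

# Prior: the ordering polytope on three states; the observed evidence is twice
# as likely under the second state as under the other two.
prior_vertices = [[1/3, 1/3, 1/3], [0.5, 0.5, 0.0], [1.0, 0.0, 0.0]]
print(revise_polytope(prior_vertices, [0.1, 0.2, 0.1]))
# roughly [[0.25, 0.5, 0.25], [1/3, 2/3, 0], [1, 0, 0]]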
234 / SCIENCE IV ZERO-FREE VERTICES AND CONVERGENCE If the prior set contains only vectors that have no zero components (for polytopes, if the ver- tices are zero-free), then as conditionally inde- pendent evidence accumulates, the posterior set will converge toward a single point. The asymp- totic limit vector has probability one in the cor- rect state and zeros elsewhere. This follows from a standard result about the ultimate insensitivity of Bayesian inference to different zero-free priors (see, for example, Jeffrey, 1983). The limiting performance of Bayesian updating for set priors, then, is comparable to that for point priors. Convergence will generally fail to occur if the estimate set does contain vectors with zero elements. The Bayes' posteriors for such vectors will always contain zeros, in the same components as the priors' zeros. If the vector is a polytope vertex, this will distort the posterior set by "tying down" the vertex even if the evidence comes to overwhelmingly support one of its zero-valued states as true. The worst case occurs when the analyst ex- presses prior ignorance as the vacuous set, a poly- tope whose vertices each have zeros in all com- ponents except one. Bayes' inference is fruitless in such a case. No amount of evidence (short of certain revelation of the true state) ever van- quishes initial ignorance. The posterior set re- mains vacuous. At first glance, this seems to be troublesome. Realistically, however, total ignorance about the states is rare. We can devise artificial instances readily enough, but in the real world, the analyst usually knows something about the states. Just to name the states typically rules out their having a priori zero probabilities, and so eliminates vectors with zero components. As a practical mat- ter, the analyst is probably willing to assert some miniscule positive floor under each state proba- bility (Jeffrey makes a similar remark about point estimates). As has already been shown, the willingness to assert positive minima gives rise to a convex set whose vertices are zero-free. However modest the departure from strict prior ignorance, conditional evidence revises the prior set, and asymptotic convergence can occur. Assertion of small minima also suppresses zeros in less drastic circumstances. The vertices of an exhaustive probability ordering also have zero components, as shown earlier. Even though it is no part of the analysts' intention to say that some state may be impossible, the zeros will re- sist revision as tenaciously as those that arise from prior ignorance. The solution is for the ana- lyst to assert minima Ll, . . . . Ln in addition to the ordering. If each of the minima is less than l/n-, then tedious but simple algebra shows that the vertices for the combined assertion of an ordering and the minima are ( l-TLi, L2, . . . . Ln), ( (1-nCLi)/2, (1-CnLi)/2, L3, . . . . Ln), ( t/n, l/n, .!., l/n) In general, it's a good idea to suppress any zeros that occur in the estimate set, in order to avoid the persistent distortion of posterior esti- mates that zeros cause. Asserting minima is often the simplest way to do this, and since minima are linear relations, they can usually be combined with other information fairly readily. V IMPLEMENTATION The essential AI device for dealing with set estimates characterized by a reasonable number of points is already in place. It is the ordinary Bayesian inference network first proposed for PROSPECTOR by Duda, et al. 
(1976), and developed further by many others, notably Pearl (1982). Existing networks have two or more exclusive events' (point) probabilities attached to each node. The links are the conditional probabilities relating the events at higher nodes to those at the lower (evidence) nodes. By convention and practical necessity, the pot- ential evidence is resolved into groups of exclusive events in such a way that observations from differ- ent groups are independent of one another, given the states at higher nodes. The geometry of the prior estimates at the higher nodes appears to raise no new issues for this treatment of the evidence. The alterations to the network needed to ac- comodate set estimates are straightforward. Where the higher nodes now contain a single probability vector, in the new scheme they would contain sever- al. The amount of calculation needed to update the network to reflect any given evidence configuration increases linearly in the total number of vectors to be updated. The extra work can be reduced by the efficient handling of intermediate nodes. These nodes contain neither the events of ultimate interest nor the observed evidence. Rather, their role in the network is to aid in its initial construction and to provide explanations of the network's "reasoning" as the evidence is revealed. These nodes do not contribute to the inference itself, and they can be compiled out of the network before run time, to be replaced by conditional probability links directly connec- ting evidence and conclusions (Snow, 1985). The explanation function of these nodes can be recovered on demand by attaching them distally to the ultimate event nodes, where they wait inertly until asked a question. These comments apply only to the sort of Bayes' network that traffics in traditional probability estimates. They do not apply to the "influence net- works" recently proposed by Pearl (1985). In these networks, the structure of the intermediate nodes is crucial to the interpretation of the networks' outputs. The spirit behind Pearl's proposal seems Uncertainty and Expert Systems: AUTOMATED REASONING / 235 to be the same as what animates this paper: reten- tion of probability as the basis of uncertain infer- ence while avoiding the limitations inherent in point estimates. In any case, set estimates can be manipulated by essentially the same techniques that have already been widely proposed for point estimates. Provided that the number of points needed to repre- sent the set is small, the additional cost entailed in using sets instead of points can be modest. VI DECISIONS There are several methods for using set esti- mates to inform decisionmaking. The very diversity is a hint, however, that no one technique has uni- versal acceptance. The simplest method is to select a single point from the estimate set and to base the deci- sion on that single point. Typically, the point selected will be the vector that displays the most entropy or else the centroid of the estimate set. The chosen point is then used in an expected value or expected utility analysis to determine the best act, or what would be the best act if the chosen vector were the right one. This step would usually be followed by a "sensitivity analysis" to find out whether the choice of an act depends a great deal on which probability point is chosen. If sensitivity analysis reveals that the final decision is pretty much the same regardless of the point chosen, then all is well. 
If not, then selecting an arbitrary point and acting according to its counsel defeats the purpose of working with set estimates in the first place. The simplicity of the method, however, makes it suitable for "quick and dirty" analysis of choices other than the final act, e.g. deciding which of several possible experiments ought to be performed first.

Other decision approaches involve looking at the expected utility of each act for every vector in the estimate set. By our earlier assumption, the analyst doesn't know which vector is the correct one, and so is ignorant about which of the expected utilities is the real pay-off for each act. The choice among the acts, therefore, can be made using any of the popular rules for decisions under pure uncertainty.

Once again, the computational task is simpler if the estimate set has a congenial geometry. An especially convenient set occurs when the vertices of the convex hull of the estimate set are themselves members of the estimate set. This family includes not only polytopes, but also discrete points and polytopes with all or part of their interiors removed. Several standard decision rules consider only the utility values at the hull vertices in such cases. The best known decision rule of this kind is the linear programming and expected utility criterion called "mixed strategy maximin".

If the estimate set has this nice geometry, and we adopt a decision rule that considers only the hull vertices, then we need Bayesian updates only for the vertices (the proof is a specialization of the polytope result discussed earlier). If the estimate itself is not a solid polytope, then we lose information about how much of the interior of the posterior set is included in the estimate. This won't affect the final decision, and considerable information about the precision of the estimate is retained.

Although the maximin rule has a following, its acceptance is far from unanimous, as discussed by Luce and Raiffa (1957). Methods for decisionmaking under pure uncertainty remain an open research topic, and with them, methods for choosing an act informed by a set probability estimate.

VII THE SAVAGE AXIOMS

The exact nature of the "best" decision rule is controversial, but it seems likely that whatever rule does emerge will involve some expected utility calculation. The "choose a point" and maximin rules of the last section both do.

Savage (1972) has proposed axioms that support the conventional point estimate restriction, which also appear to tie that restriction to the commonest motivation for the adoption of expected utility rules. If rationality (in the sense that expected utility rules are rational) demands point estimates, and we apply "rational" utility rules to "irrational" set estimates, then we court logical contradiction. Even if this were not the case, Savage's axioms are closely reasoned, widely discussed and solidly in the Bayesian mainstream. The case for set estimates must include some explanation of why Savage's prescription is to be ignored.

Savage's first axiom, the complete ordering assumption, is the crucial one for the point restriction (as noted by Smith, 1961). Complete ordering holds that the analyst assigns a specific value to each act, even when the analyst doesn't know the state probabilities that govern which outcome the act will yield.
So, for example, if the analyst knows that act A offers either $5, $10 or $20 depending on whether S1, S2 or S3 is true, then the analyst is assumed to assign act A a specific dollar value, perhaps $8. The first axiom asserts only that an amount like $8 exists; it does not say how the assignment is made (why $8 and not $9). In sum, the first axiom restricts the analyst to point estimates of value.

Clearly, this is not the only possible attitude if the analyst hasn't a clue whether S1, S2 or S3 is the true state. The analyst presumably would be willing to make an interval estimate of A's value (between $5 and $20 inclusive). Absent further information about the states, however, the analyst might balk at making any stronger, more specific assertion about the value of A.

If the analyst happens to be willing to make point value estimates, then the other Savage axioms allow us to infer point-valued "judgmental probabilities" from the analyst's choices. If the analyst subscribes to all the axioms, then any claim that non-point estimates guide the analyst's choices would result in contradiction.

On the other hand, if the analyst doesn't subscribe to all the axioms (and we have discussed why complete ordering might be denied), then the inference about point estimates is unfounded. No logical difficulty arises, and the axioms are moot.

It is worth noting that Savage resorts to axioms for a reason. A strong restriction (the analyst can make only point probability estimates) is to be justified by its derivation from other, supposedly less restrictive assumptions. In fact, the complete ordering axiom (the analyst can make only point estimates of value) is on its face as strong and as restrictive as the proposition it is called upon to justify.

VIII CONCLUSIONS

Many practical problems, e.g. diagnosis, are fruitfully viewed as probability inference tasks. Here, "probability" means the relative frequency with which some event or condition occurs. Although the probability estimates may reflect the personal opinion of some expert, the goal is typically to match as closely as possible the true relative frequency that prevails in the real world. The loose application of the loaded terms "objective" and "subjective" sometimes obscures this point.

The full exploitation of probability methods has been hindered by the convention that point estimates are the only way to express probability information. Licensing set estimates is not a new idea. Objectivist interval estimation, for instance, has been in the statistician's tool kit for a long time. What may be new is realizing how much well-chosen set representations can overcome the supposed shortcomings of probability estimates. Happily enough, set estimates comport well with common AI techniques, particularly those based on another venerable statistical tool, Bayes' formula.

Using set estimates to inform decisions remains a weak spot. The problem of decision informed by sets is closely related to decisions under ignorance. Progress on set-informed decisions is thus linked to either the invention of new decision rules for ignorance, or the elevation of some existing rule to preeminence. In the meantime, there is no shortage of plausible rules catering to a variety of tastes.

REFERENCES

Duda, R. O., P. E. Hart and N. J. Nilsson, "Subjective Bayesian methods for rule-based inference systems", Proc. Natl. Comp. Conf., 1976, pp. 1075-1082.

Jeffrey, R. C., The Logic of Decision, Chicago: U. of Chicago Press, 1983, chap. 12.
Levi, I., The Enterprise of Knowledge, Cambridge, MA: MIT Press, 1980, chap. 9.

Luce, R. D. and H. Raiffa, Games and Decisions, New York: Wiley, 1957.

Nilsson, N. J., "Probabilistic logic", Artif. Intell. 28 (1986, forthcoming).

Pearl, J., "Reverend Bayes on inference engines: a distributed hierarchical approach", Proc. AAAI Conf. Artif. Intell., 1982, pp. 133-136.

Pearl, J., "How to do with probabilities what people say you can't", Proc. IEEE Conf. Artif. Intell. Appl., 1985, pp. 6 ff.

Prade, H., "A computational approach to approximate and plausible reasoning with applications to expert systems", IEEE Trans. Patt. Anal. & Mach. Intell. 7:3 (1985), pp. 260-283.

Savage, L. J., The Foundations of Statistics, New York: Dover, 1972.

Shackle, G. L. S., Expectation in Economics, Cambridge, UK: Cambridge U. Press.

Smith, C. A. B., "Consistency in statistical inference and decision", J. Roy. Statist. Soc. B 23:1 (1961), pp. 1-37.

Snow, P., "Tatting inference nets with Bayes' theorem", Proc. IEEE Conf. Artif. Intell. Appl., 1985, pp. 63 ff.

Zadeh, L., "Decision analysis and fuzzy mathematics", in M. D. Cohen, et al., "Research needs and the phenomena of decisionmaking and operations", IEEE Trans. Sys. Man & Cyber. 15:6 (1985), pp. 765-767.
ADVANCES IN RETE PATTERN MATCHING Marshall 1. Schor, Timothy P. Daly, Ho Soo Lee, Beth R. Tibbitts IBM T. J. Watson Research Center P.O. Box 218, Yorktown Heights, NY 10598 USA Abstract A central algorithm in production systems is the pattern match among rule predicates and current data. Systems like OPS5 and its various derivatives use the RETE algorithm for this function. This paper de- scribes and analyses several augmentations of the basic RETE algo- rithm that are incorporated into an experimental production system, YES/OPS, which achieve significant improvement in efficiency and rule clarity. Introduction Rule based systems often spend a large fraction of their execution time matching rule patterns with data. The production system OPS5 [FOR11 and many other systems (e.g. [ART11 [YAP11 [FOR3]), each use the OPS5 pattern match algorithm known as RETE. This paper describes four augmentations of the basic RETE algorithm that achieve much improved performance and rule clarity. As we describe each augmentation, we give an analysis of its effects, and some ex- amples of its use. These ideas are implemented in an experimental production system language, YES/OPS, running on LISP/VM in IBM Yorktown Research. We presume some familiarity with production systems, and the RETE algorithm. The reader is referred to the book, Programming Expert Systems in OPSS [BROl], and the AI Journal article on the RETE algorithm [FOR21 for background information. The first augmentation involves handling changes to existing data. In OPS5, three operations affect the data being matched with the rule patterns: make, which adds new data, remove, which removes data previously added, and modify, which modifies data previously added. However, modlf y is implemented in OPS5 as a remove of the previously existent data, followed by the creation of new data that is a copy of the previous data, except for the attributes that were changed. This new data is then added, which causes a new match cycle to occur. We change this to support modify as an update-in- place operation, and change how the rules are (re-)triggered, for greater clarity. The second augmentation allows the user to group rule patterns (called condition elements) together, in an arbitrary fashion. This enables specifying negated joins of patterns, not just individual con- dition elements, and plays an important role in specifying when to do maximize and minimize operations (which follows). The grouping can also be used to increase pattern match result sharing among the rules, for efficiency. The third augmentation supports the specification of sorted orderings among sets of data, in a much more efficient and syntactically clear manner. The final augmentation is the ability to do the pattern matching on demand, incrementally. This supports both the incremental addition of new rules, such that the new rule does match the existing data (not possible in OPS5), and the matching of particular patterns as part of an action done w-hen a rule fires, not when the data changes. This aspect eliminates the (OPS5) requirement that data to be manipulated in the action part of a rule must be matched by a condition element pattern in the rule’s tests (its Left Hand Side). This allows many practical rule sets to achieve orders of magnitude performance im- provement, by reducing the pattern matching part that part which needs to be data-change sensitive. of the rules to just All examples of rules are written using the YES/OPS syntax. 
This is similar to OPS5 syntax, except: 1) attributes are not preceded by an “f” character, but are followed instead by a colon “:‘I; 2) the rule form is: (P rule-name WHEN pattern matching specifications THEN actions to be done) MODIFY as update-in-place, new triggering conditions OPS5’s implementation of modify as a remove of the old value, and a re-make of it with the modified attributes causes excessive re- triggering of rules. Two commonly occurring instances of unwanted re-triggering are modification of attributes not tested in a rule and modification of an attribute to a value that still passes the same rule patterns as before. Example: Don’t-care slots re-triggering Suppose the user structures his working memory elements for a prob- lem involving genealogy research, as follows: Classname: PERSON Attributes: Name : Father: Mother: Gender: Native-language: Native-country: Language: Marital-status: Spouse : Now suppose some rules infer about ancestry, and other rules infer about languages spoken. If the ancestry rules have fired, and now, some new information about language causes the person’s lan- guage : attribute to be changed, in OPS5, the ancestry rules would fire again, even though they had taken all the actions appropriate for their matches to the existing data, and that data had not changed in the attributes of interest. The solution to this behavior in OPS5 is to separate attributes whose change should not re-trigger other rules, into different working mem- ory elements. This is often not the natural partition of the knowledge, and is less efficient, because the RETE must now do run-time joins of the split-apart attributes. Example: Tests true once, true again after modifying, re-triggering In OPS5, whenever a rule’s action part modifies a working memory element such that it still satisfies the rule’s tests, that rule loops. Users are told to “get around” this problem by coding extra control infor- mation in the working memory element and set flags that prevent looping. An example from the book Programming Expert Systems in OPS5 [BROl] is the problem of adding one to a set of items. The natural formulation (the one inexperienced users tend to write) looks like : (p add-l-to-items when (goal name: add-l-to-items) ;the goal to do it <i> (item value: <v>) ;an item, whose value is <v> then (modify <i> ;modify the item value: (<v> + 1) >) ;setting the value ; to <v’ + 1 226 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. This works in YES/OPS, when modify is update-in-place, but loops in OPS5. The suggested rule formulation to get around this problem in OPS5 is to add an extra attribute to Item, called status, and set it from nil to marked when doing the adding. After adding to all the items, the goal is advanced to unmark, and another rule fires re- peatedly, once per item, to change the status attribute back to nil. This clearly is more rule firings, and also, more testing (the value of the status attribute must be tested). The example also is now cluttered up with control information, unrelated to the task of adding 1 to a set of items, which makes these OPSS-style rules less readable: (p add-l-to-items when (goal name: add-l-to-items) ;the goal to do it <i> (item value: <v> status: nil) then (modify <i> value: (compute <v> + 1) status: MARKED)) ----------________--------------------------- (p change-task when ;this rule fires after ;prev. 
rule because it <g> (goal name: ;tests fewer things add-l-to-items) then (modify <g> name: UNMARK)) -----______---------------------------------- (p unmark when (goal name: UNMARK) <i> (item status: MARKED) then (modify <i> status: NIL)) Having to code this kind of status information makes the rules less clear. Without implementing the new modify definition, the natural rule an expert often writes would need to be “fixed” to eliminate the unwanted triggering. The efficiency also suffers, in that the fixes re- quire more pattern matching tests. New modify definition remom re-tri&gering problems We define modify as an atomic update-in-place operation, rather than as a remove followed by a make of the modified working mem- ory element. The triggering rules are changed so that an existing instantiation that continues to exist after the modify, does NOT cause re-triggering. In addition to improving performance by eliminating extra control flags and their testing and maintenance, modify done as update-in- place reuses existing working memory data structure and RETE memory nodes. This improves the performance by reducing the ac- tivity involved with maintaining these structures. Triggering on any change The new modify semantics normally trigger a rule when a rule instantiation that was not previously present gets created. This means that a modify operation does not re-trigger a rule, if it does not result in a new instantiation. Sometimes, however, triggering on any change is desirable. An ex- ample might be a rule that counted how many times a person’s marital status changed. Here, we want the rule to re-trigger, no matter what the status changed to. To provide for this case, we extend the syntax to allow specifying re-triggering on any change of one or more se- lected attributes, by preceding the attribute name by an exclamation point (!). In addition, to specify re-triggering on the change of any attribute in the class, an exclamation point may be placed in front of the class name. This gives behavior like OPS5. For example: (p count-marital-status-changes when (person ! marital-stat:) cc> (counter type: *I retriggers on change Marital-stat-chg value: <v>> then (modify cc> value: (<v> + 1))) New algorithm for Modify in RETE Beta Join nod&s Tokens passed down the RETE have the operation ADD, REMOVE, or MODIFY associated with them (ADD corresponds to make). For modify operations, if at some point in the processing, the test result of the previous value of the modified working memory element differs from that of the current value, the modify operation is converted to a remove or add operation: CASE 1 CASE 2 Previous value: tests fail tests OK Current value: tests OK tests fail New operation: ADD REMOVE When a token arrives at the bottom of the RETE, if the operation is add or remove, then the rule instantiation in the production node is either inserted to or removed from the conflict set, according to the operation; if the operation is modify, then nothing is done. This prevents re-triggering. For modify operations, specification of re-triggering attributes causes an exception. If one or more of the attributes was preceded by an “!” to indicate that re-triggering is wanted on any change of that attribute, the attributes so designated are compared with those that were modified; if one or more match, then the rule is reinserted into the conflict set even if it has already fired. 
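The effect of this ordering can be seen in a small sketch (Python is used for exposition only; the real structures are RETE memory and join nodes, and the names below are invented):

memory = []                 # contents of the shared memory node feeding both legs
instantiations = []

def left_activation(wme):   # new token on the LEFT input: join against the memory
    for other in memory:
        instantiations.append((wme, other))

def right_activation(wme):  # new token on the RIGHT input: join against the memory
    for other in memory:
        instantiations.append((other, wme))

def add_wme_naive(wme):     # update the memory first, then notify both legs:
    memory.append(wme)      # the self-pair (wme, wme) is generated twice
    left_activation(wme)
    right_activation(wme)

def add_wme_ops5(wme):      # OPS5 order: left successors first, then update
    left_activation(wme)
    memory.append(wme)
    right_activation(wme)   # the self-pair (wme, wme) is generated exactly once

add_wme_ops5("person-with-own-needed-skill")
print(instantiations)       # -> a single self-pair, no double instantiation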
Join nodes where left and right predecessors are identical Special case handling is required where the left and right inputs to a join node are identical. This arises in rules like: (p find-skilled-persons when (person name: <s> skill: <sl>) (person name: <n> needs-skilled-service: <sl>) then (say <s> can help <n> with service <sl>)) This yields the RETE structure: A problem can occur when a new working memory element is added which matches with itself; in this example, this could happen if the person needs the skilled-service which he himself has. The problem happens because the RETE algorithm sends the result of any changes in a node to all of its successors. In particular, a change token arriving at the previous node would be sent down both the right and left legs to the same Beta Join node. If no special consideration is taken, what can happen is that the change token on each path causes an instantiation to be added to the conflict set, resulting in double instantiations. OPS5 handles this case by first sending the token to all successors having left inputs, before updating the memory node by adding or re- moving (depending on the operation being done) the token to/from the memory. Thus, a change only sees itself on one leg (the right leg for add, the left leg for remove). Uncertainty and Expert Systems: AUTOMATED REASONING / 227 With modify implemented as an update-in-place, the token in ques- tion is a/ready in the memory node. The new RETE code removes the particular element temporarily, before sending the modify operation to the left-successors. This prevents it from seeing itself during this phase. Then it puts it back into the list before sending it to the right successors. Running backwud Normal forward running of production systems repeats a cycle of matching rules with data, picking a rule to fire, and executing the picked rule’s actions, which may change the data being matched. A very useful debugging tool is the ability to run backwards, that is, re- store the state of the system to that which existed in previous cycles. OPS5 implements the back function for this; we have extended this function to handle modify as update-in-place. The utility of back requires that the user be able to make top-level changes as well; otherwise, when forward running resumes, the sys- tem would merely repeat what it had already done. Two kinds of changes are possible: changing data, and changing rules. In OPS5, incrementally added (or changed) rules do not match the existing data, which means that adding or changing rules dynamically is not practical. Extensions we have implemented for procedural matching support matching new or changed rules with existing data, making incremental rule editing a powerful debugging technique, us- able with back. Back requires that a history of changes to working memory and rule refractions (the firing of a rule instantiation) be kept during forward running. This history record is used to incrementally undo rule firing effects and restore the system to a previous state. Modify operations record the previous (unmodified) value, together with a pointer to the current working memory element in this history, so that the previous values can be restored when backing up. Generalization of OPSS validity test for reinserting refracted rules When a rule fires, a record is made; when backing up, that rule is re- inserted into the conflict set, re-enabling it to fire, unless something was done (at top level) that prevents it from being true anymore. 
In OPS5, the test done was to verify that all the working memory ele- ments, which matched positive condition elements of a rule being backed up, were still present. This test is inadequate in the general case. Consider the following example: (p back-bug ’ when (a) iT~i’“~~‘~e~~sPe~~~s~~t, ;in order to fire. -(b) then ( . . . . 1) Now suppose we do the following top level actions: 1. (make a) ; this will insert the “back-bug” rule into the conflict set. 2. (run 1) ; fires the rule, running forward 3. (make b) ; add (b ) to the working memory 4. (back 1) ; backs up 1 rule If step 3 had not been done at top level, we would expect to see the “back-bug” rule reinserted into the conflict set. However, because (b) now exists, that instantiation is no longer valid. YES/OPS verifies that a reinserted instantiation actually exists, be- fore reinserting it into the conflict set. To do this, we keep a RETE memory with each rule representing its current instantiations, given the current data in working memory. Before reinserting a rule when backing up, the instantiation is looked up in this memory. If it is present, then the rule instantiation is reinserted into the conflict set. If it is not present, then some top-level action changed working memory in such a manner to preclude this instantiation being true. In this case, the instantiation is not reinserted. Arbitrary grouping of pattern condition elements Rule condition element patterns of rules in OPS5 are grouped in a left-associative manner. For example, the joining of condition ele- ments of the rule (p rule1 when (a> (b) cc> (d) then . ..> results in a RETE join tree: Memory nodes at the bottom of the Alpha part of the RETE 0 D d 3 RETE Beta Join Nodes +47 Rule1 We have augmented the basic RETE to allow arbitrary groupings, in addition to the default left-to-right linear associative grouping. Sharing pattern matching work among sewral rules Part of the RETE algorithm efficiency comes from sharing pattern matching tests which are identical among all the rules that have the tests. However, the OPS5 RETE shares results of join tests only if the patterns are the same starting from the first one. For example, con- sider the three rules: (p rule1 (p rule2 (p rule3 when when when (a) (a) (cl (b) lb) Cd) (cl (f) (e) Id) then . . then . . then.. The join for (a) and (b) are shared between rule1 and rule2, but the join of ( c 1 and ( d) in rule1 and rule3 are not shared, because of the top-to-bottom associativity of the joins. By grouping as follows, one can get the benefits of shared tests: (p rule1 (p rule2 (p rule3 when when when (a) (a) (cl (b) (b) (d) ( (c) (f) (e) Cd) 1 then . . then . . then.. The join part of the RETE would look like this: Memory nodes at the bottom of the Alpha part of the RETE T Rule2 T Rule1 T Rule3 One of the major factors in the run-time performance in OPS5 is the number of beta nodes (two-input join nodes). That is due to the fact 228 / SCIENCE that testing beta nodes involves time-consuming tasks proportional to the size of the memory nodes, e.g., checking bound variables for pos- sible join, evaluation of predicates, subsequent update of beta memo- ries, etc. Reducing the number of beta nodes, by sharing RETE structures, increases the run-time performance. Negating joined groupr One of the constructs supported by RETE is the negated condition element. Our grouping extension allows the negation of arbitrary combinations of condition elements. 
For example, a rule that verifies that no men and women pairs in a group share the same birthday: (p no-same-birthday when (goal type: check-shared-birthdays) -((person gender: male (person gender: birthday: <bd>) female birthday: <bd>)) then (say No man and woman share the same birthday)) The above rule could not have been expressed in OPS5 without cre- ating new working memory elements containing all the attributes to be negated, because the negated conditions have joins among them- selves, and the test is for whether or not the join result is empty. In OPS5, because only single condition elements could be negated, the knowledge programmer would have to rearrange the working memory data structures such that any test for non-existence would involve only single condition elements, never joins of multiple ones. Grouping gives the knowledge programmer the freedom to design working memory elements in a way that best suits the problem, without having to be concerned with support for negated conditions. Maximize/Minimize Many problems require sorting and selection of “best” or perhaps, “top two,” for example, finding the maximum, finding the best two financial alternatives, etc. The OPS5 technique for specifying these patterns is somewhat obscure: (p top-student when (student grade: <top> name: <name> > -(student grade: gt <top>) then . ..> This rule logically means “find a student having a grade <top> such that no other student has a grade which is greater than <top>". This is semantically equivalent to finding the student (or students in case of a tie) who have the best grades. We have augmented the syntax and RETE algorithm to support a clearer and more efficient expression of this kind. The same rule in the new syntax is: (p top-student when (student grade: maximize name : <name> > then . ..> The implementation is done by keeping the normal partial match memory nodes maintained during the RETE algorithm in sorted order, and adding a new kind of RETE node to do the selection of the max- imum, or top two or minimum, etc. Anulp& of sorting efficiency A simple binary tree search to insert a new element into a sorted list takes O(log n) comparisons, where n is the number of elements in the list. The average complexity to create a sorted list of n elements using the binary tree search, and pick the maximum is O(n log n). When n elements are added to the working memory in the OPS5 for- mulation, the RETE does O(n2) comparisons. The situation gets worse if the top two students are requested: The OPS5 formulation is: (p select-best-two when (student grade: <topl> name : <nl>) (student grade: <top2> & le <topl> name : <n2> & ne <nl>) -(&uden; grade: gt <top2> name: ne <nl>) 9 . . The first two condition elements cause a join involving O(n*) com- parisons, and this is joined with the third (negated) condition element, yielding a complexity of O(n)). When the top k values are wanted, O(n**k+ 1) complexity ensues. Such shortcomings can be avoided by keeping memory nodes sorted, if rule patterns include sorting operators. Once memory nodes are sorted, selection of the top, or the top 2 or 3, etc., elements is fast. Sekction opemtom The syntax supports selection of both maximum and minimum sorting sequences, and the selection of the top “n” elements, assuming there are that many. For example: (person age: minimize select 2 to 4) selects persons whose ages, when ranked in ascending order, are the second, third, and fourth in the ranking. 
This selection ignores the fact that some of the items may have the same sort value. Alterna- tively, one may instead pick all items having the second thru fourth unique values, using the following variation: (person age: minimize select-values 2 to 4) Sort& owr arbitmty expresrions The sorts described so far sort on the value of one attribute of one working memory element. In general, the sort can be done on an ex- pression involving multiple attributes from multiple working memory elements. Consider the following example where prodigy-score is a Lisp function: (person age: <a> piano-skill-level: <p> & maximize (prodigy-score <a> <p>>) This would pick the top person by some combination of skill and early age. Placement of selection in the RETE The following examples illustrate the importance of placing the sorting and selection operators at the proper point in the RETE. Grouping of condition elements is required to achieve correct placement. Con- sider the following two rules: (p same-age-wonder-kids1 when (person skill: piano-player age: <x>) minimize <x> (person skill: ice-skater age : <x>) then . ..> (p same-age-wonder-kids2 when ((person skill: piano-player age: <x>) (person skill: ice-skater age : cx>)) minimize <x> then . ..> These build the following RETE fragments: Uncertainty and Expert Systems: AUTOMATED REASONING / 229 same age v p-G&~1 same age v Select i? Youngest The first case picks the youngest piano-player, who, let us suppose, is 4 years old. If there are no ice-skaters who are 4 years old, then the join in the first case is empty, because the ages do not match. The second case first forms pairs of same-aged piano-players and ice- skaters, and then, from that set, picks the youngest. The grouping construct described earlier is required to give the correct meaning to the sorting constructs. Sorting owr subsets of 4 memofy node In many cases of picking the maximum, we want to find the maximum over subsets of a memory node. For example, suppose we wanted to know the oldest speaker of each language: OPS5 method: (p oldest-speaker when (person language: <I> age: <a>) -(person language: <l> age: gt <a>) then . ..> YES/OPS method: (p oldest-speaker when (person language: <l> age: FOR-UNIQUE <l> maximize) then . ..I Without the FOR-UNIQUE clause, the maximize would merely find the oldest person. Relational database query languages, for example, SQL [DATl], support this same notion of determining subsets over which to apply group operations, like maximum. The subset classi- fication is done on the basis of unique values for attributes, or for some expressions involving one or more attributes. Sorting extemions being conside& The select operation for sorted memory nodes can be extended to select the top half, etc. The goal is to eventually specify a fixed interface for selection to enable the user to use his own particular notion. Sorting is only one of many operations that can be done on subsets of a memory node. Other examples we are investigating are the common operations available from relational database, such as counting the number in the subset, computing the average, selecting the item closest to the mean, etc. The eventual goal is to provide the tools to allow the user to write his own group operations as needed to augment the ones supplied by the system. Procedural match augments data-driven match In OPS5, in order to reference any working memory attribute value, the working memory has to be matched by a condition element in the rule’s pattern. 
This invokes all the same RETE machinery that make the rule sensitive to changes in data matching that pattern. Often, this causes unwanted triggering, and is not the way the rule writer initially conceives of the knowledge. Consider the following example rule to print lists of language translators: (p translators1 (goal type: print-translators) (language from: <from-lang> to: <to-lang>) (person translate-from: <from-lang> translate-to: <to-lang> name: <n>) then (say <t-r> can translate <from-language> to <to-language>)) Some of the characteristics of this knowledge representation for printing translators are: The goal working memory element can’t be removed by this rule when the task is completed; a “cleanup” rule must also be written that fires when all the instantiations of the translators 1 rule have fired, presumably by being less specific than this rule. To print “headings” for the list, another rule must be written that will fire before this one to print the headers. Allowing matching in the action part of a rule alleviates these prob- lems. The rule writer can choose whether to make a match be a trig- gering condition or not. The following example does a procedural match, iterating over all matches of the language-persons combina- tion. (p translators2 <g> (goal type: print-translators) ; This is the trigger condition ; print heading once (say Source-language Target-language Person) (for-all-matches-of (language from: <from-lan (person translate-from: < g> to from- <to- anq> lang>) translate-to: <toylang>- name: <n>) ,d?say <from-lang> <to-lang> <n>) iremove <g>)) ;one rule fires, goal removed The pattern matching work to find all languages and persons and compute their join is not done until the rule has fired. Impiementation of procedural matching A mini-RETE is created for the match expression. For efficiency reasons, the compilation of the mini-RETE is delayed until the first time the match is called for. This mini-RETE is then built in such a way as to reuse, wherever possible, partial matches already present in the main RETE. It is temporarily grafted onto the main RETE, and the partial matches present at the graft points form the starting point for computing the match. This section of the added RETE is “turned off” after the procedural match execution takes place, and only “turned on” again when the rule fires again. In this manner the pro- cedural matching isn’t done again until (and unless) the rule fires again. New rules matching existing working memory data In OPSS, if one compiles a large set of rules, then does many makes, then starts to run the production system and discovers a bug in one 230 / SCIENCE of the rules, one is prohibited from simply fixing the rule and recom- piling it, since it would not match against existing working memory. The same problem pertains when writing a “debugging” rule in the middle of a run to try and determine the cause of some bug. The de- bugging rule doesn’t match existing working memory and is therefore not of much help at finding problems with existing data. The procedural matching ability in YES/OPS allows rules to be added after working memory has been defined, and these added rules match the existing working memory elements. For example, the following rule could be added after the production system had started running, to “catch” the rule that changes one spouse to be divorced but not the other one, assuming that it wasn’t obvious by inspection. 
(p catch-unfinished-divorces when (person name: <sl> marital-stat: divorced ) (person name: <s2> marital-stat: ne divorced spouse: <sl>) then priority 100 ;a high rule priority (say the culprit has been found!) (back 1) ;run back to the previous state (halt)) ;and stop Without having the rule match existing data, the knowledge of the spouse-spouse join would be missing, if it existed before the rule was added. The priority specification causes this rule to fire earlier than other rules in the conflict set, assuming we want to be notified of the condition as soon as it appears. Building new rules as a rule action An interesting consequence of this feature is that rules can be added, or existing rules changed, while running, by the action part of some other rule, and they will match existing data. This feature can be used in constructing self-modifying rule systems (a form of learning), al- though we have not yet experimented with this. Implementation of incremental rule addition The incremental rule addition handles its matching in a similar way to the procedural match discussed above. A mini-RETE is created for the new rule, sharing existing RETE structures if previously compiled rules contain matching patterns that can be reused by the new rule. This new RETE is then grafted onto the existing RETE; in this case, the new addition to the RETE is permanent; the new nodes are not “turned off” when the match is complete. Existing memory nodes at the points where the new mini-RETE is added are pushed down through the new part of RETE, thus matching the new rule’s patterns with existing working memory elements. Performance of YES/OPS Using YES/OPS, small projects done so far have exhibited orders of magnitude improvement in certain cases, even when the new exten- sions are minimally used. A subset of the rules of a large OPS5 system was converted to YES/OPS, without being rewritten to take advan- tage of the new mod if y technology. It ran approximately 20 CPU seconds in OPS5, but only 2 CPU seconds in YES/OPS. Further- more, a slight expansion of the problem (more working memory ele- ments) increased the OPS5 time by 30%, while the YES/OPS time increased only about 5 %. The performance comparison can be made arbitrarily good by increasing the size of the problem. The performance improvements come from five factors: The modify as update-in-place substantially reduces the flags that must be set and tested to control rule re-triggering. The grouping construct allows more sharing of pattern tests in the RETE. The sorted memory nodes trade algorithms of complexity O(n log n) for O(n ** k+ l), for the operations of selecting the best k elements from a set of alternatives, an often used function. The procedural matching, done on demand instead of included in the RETE match and updated at every change of the data, reduces the number of patterns that are active to just those that are required to trigger the actions. And, finally, the internal structure of the RETE representation and the algorithms were timed and tuned carefully. Summary These ideas have been implemented in an experimental production system language, YES/OPS [SCHl], built using LISP/VM [IBMl]. 
The guiding principles in the design of YES/OPS include 0 the development of clean semantics, designed for data-driven production system applications, l full integration with the underlying procedural language(s) (e.g., LISP/VM), including communication with other languages and environments (for example, GDDM (Graphical Data Display Manager) and the XEDIT editor), l generality in rule expression, and 0 efficiency of space and time, especially for large production sys- tems. Other features of YES/OPS include 0 When-no-longer-true, which triggers actions when an instantiation, having once matched working memory, later ceases to match. This is useful for catching conditions that have no other explicit means to determine when they happen. a Rule priorities, which allow ordering of rules to fire, in addition to conflict resolution. Rule priorities can be numeric, or ex- pressions involving working memory attribute values in the instantiation being considered in conflict resolution. Some of these ideas have also been incorporated into another exper- imental production system language extension on top of PL/ 1, YES/L1 [MILl]. Many people at the IBM Yorktown Research Center participated in the discussions that evolved into these extensions. The ideas, support, and encouragement of Dr. Se June Hong are gratefully acknowledged. References ART1 Bruce Clayton ART Programming Tutorial Inference Corporation, March 15, 1985 BROl Lee Brownston, Robert Farrell, Elaine Kant, and Nancy Martin Programming Expert Systems in OPS5: An Introduction to Rule-Based Programming Addison-Wesley, 1985 DATl C. J. Date An Introduction to Database Systems Second Edition, Addison-Wesley, 1977 FOR1 Charles Forgy OPS5 User’s Manual Department of Computer Science, Carnegie-Mellon University, 1981 FOR2 Charles Forgy “RETE: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem” Artificial Intelligence, Volume 19, pp. 17-37, 1982 FOR3 Charles Forgy “The OPS83 Report” Technical Report CMU-CS-84- 133, Department of Computer Science, Carnegie-Mellon University May 1984 Uncertainty and Expert Systems: AUTOMATED REASONING / 23 1 IBM1 Cyril Alberga, Martin Mikelson and Mark Wegman LISP/VM User’s Guide IBM SH20-6477, October 1985 MILl K.R. Milliken, A.V. Cruise, R.L. Ermis, J.L. Hellerstein, M.J. Masullo, M. Rosenbloom, and H.M. Van Woerkom, YES/L1 : A Language for Implementing Real-Time Expert Systems, Technical Report RC-11500, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, 1986 SCHl Marshall I. Schor, Timothy P. Daly, Ho Soo Lee, and Beth R. Tibbitts “YES/OPS Extensions to OPS5: Language and Environment” Technical Report RC-11900, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, 1986 YAP1 L. Allen “YAPS: Yet Another Production System” Technical Report TR-1146, Department of Computer Science, University of Maryland, Feb. 1982 232 / SCIENCE
Knowledge Engineering Issues in VLSI Synthesis w. H. wolf T. J. Kowalski hf. C. McFarland, S.J. AT&T Bell Laboratories Murray Hill, New Jersey 07974 ABSTRACT This paper explores VLSI synthesis and the role that traditional AI methods can play in solving this problem. VLSI synthesis is hard because interactions among decisions at different levels of abstraction make design choices difficult to identify and evalu- ate. Our knowledge engineering strategy tackles this problem by organizing knowledge to encourage reasoning about the design through multiple levels of abstraction. We divide design knowledge into three categories: knowledge about modules used to design chips; knowledge used to distinguish and select modules; and knowledge about how to compose new designs from modules. We discuss the uses of procedural and declara- tive knowledge in each type of knowledge, the types of knowledge useful in each category, and efficient representations for them. 1. INTRODUCTION The VLSI design domain1 is well-suited to the exploration of design because of the large body of work on the computer representation and manipulation of VLSI designs. In this paper we present and justify one approach to the knowledge engineer- ing problem for VLSI. We base our views about VLSI knowledge engineering on our experience with VLSI synthesis programs, notably Fred, a chip planning database,2 the Design Automation Assistant a knowledge-based synthesis program,3 and BUD, an intelligent partitioner for ISPS descriptions.4 Our goal is the automatic design of large (100,000 transistor) systems whose quality as measured by performance and cost is competitive with human- produced designs. We view the design problem as one of succes- sive refinement of an algorithmic description of a processor guided by user-supplied constraints on cost and performance. The synthesis procedure implements the algorithm’s data and control flow as a structure built of modules and wires, and finds a layout that implements that structure. Doubtless the synthesis of high-quality designs is difficult-VLSI design is a composition of a large number of subproblems, many of which are NP-hard. Further, synthesis is in some important respects fundamentally different from the diagnosis problems to which rule-based expert systems are typically applied. Diagnos- tic systems try to infer behavior of a system from a partial description of its behavior and/or structure; synthesis systems try to build a good implementation from a specification, a process that usually requires search. In this respect the problem more closely resembles the problem attacked by Dendra15 -finding candidate molecular structures for organic compounds. VLSI synthesis is particularly complex because decisions about architecture, logic design, circuit design, and layout cannot be fully decoupled. Lacking perfect foresight, a synthesis system must be able to reason across multiple levels of abstraction, through deduction and search, to predict or estimate the results of bottom-up implementations. A synthesis system’s ability to make tradeoffs based on bottom- up design information requires not only specific pieces of knowledge, like the size of a particular design, but an organiza- tion of knowledge that allows the system to extract and manipu- late that knowledge. 
As in any design system, we judge the value of our knowledge engineering scheme by two criteria: effectiveness, or whether the scheme expresses what synthesis needs to know; and efficiency, or how much it costs to compute the knowledge. The relative importance of effectiveness and efficiency will vary for different tasks; decisions that require the examination of a large number of candidate designs may be satisfied with simple, quickly computable information about the designs, while other decisions are made by detailed examination of a few designs. In the rest of the paper we develop a knowledge engineering scheme and judge it by these two cri- teria. 2. HORIZONTAL AND VERTICAL REPRESENTATIONS The partitioning of the digital system design process into levels of abstraction goes back at least to Amdahl, et ~2.~ and, more concretely, to Bell and NewelL7 who divided digital system design into four levels of abstraction: processors, programs, logic, and circuits. Bell and Newell emphasized that their tax- onomy was dependent on the existing technology and general understanding of computer science, and was likely to change with time, as it did in Siewiorek, Bell, and Newell.8 A simplified form of their taxonomy was reflected in the SCALD CAD system used to design the S-l processor.g The Carnegie-Mellon Design Automation Project advocated a similar top-down, successive refinement approach for automatic designlo More recently, Stefik et ul.ll have updated the Bell and Newell paradigm for the VLSI domain. Gajski and Kuhn have proposed a more comprehensive model for understanding design methodologies. 1 2 They divide the universe into representations-structural, functional, and geometrical-each of which includes several levels of abstrac- tion. Walker and Thomas have expanded this model to detail 866 / ENGINEERING From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. the various levels of abstraction in each representation.13 We characterize the levels of abstraction model as horizontal: a description level categorizes all the knowledge about a particular phase of design, but the complete description of any particular design requires reasoning at several different levels. Using levels of abstraction as an organizing principle, as in Palladio,14 limits one’s ability to consider bottom-up knowledge. We have organ- ized our knowledge into three groups, with knowledge about modules organized vertically-knowledge about a module at all levels of abstraction is contained in the module description. Our methodology is more akin to that of the Caltech Silicon Struc- tures Project,15 which advanced the “tall thin man” paradigm as an embodiment of the simultaneous consideration of problems at multiple levels of abstraction. We believe that a vertical classification scheme has some distinct advantages. First, a vertical categorization enhances one’s ability to analyze tradeoffs. One radical example of the effect of low-level knowledge on high-level decisions is the relation between pinout and architecture. Fabrication, bonding, and power dissipation limitations set a maximum number of input/output pads avail- able on a chip; the resulting upper bound on the amount of com- munication between the chip and the world is a strong constraint on many architectures. A more subtle example is the relative cost of barrel shifters in nMOS and CMOS-the shifter’s higher cost in CMOS may force a different architectural implementa- tion for some algorithms. 
We must be able to make design deci- sions by looking deeply into the details of the available imple- mentation choices. Second, simplified models to describe a particular level of abstraction exclude useful and important designs. One example in what Stefik et al. call the CZocked Primitive Switches level is the precharged bus (where the parasitic capacitance of a bus temporarily stores a value that is picked up during a later clock phase). This circuit design technique violates a fundamental precept of strict clocking methodologies-that a wire is memoryless-but, when applied with the proper precautions, works. Further, precharged busses are commonly used and are often the only way to improve chip performance to an accept- able level. A strict clocking methodology that has been extended to include precharging is described by Noice et aZ.,16 but the extensions require explicit verification of the propriety of the precharging circuit, complicating this once simple methodol- ogy. Methodologies, like those used to guarantee clocking correctness, simplify a problem enough to allow quick solutions of a wide variety of problems. But to produce high-quality designs, a synthesis system must be aware of the limitations of its methodologies and be able to collect and analyze knowledge to circumvent its limitations. Figure 1 shows that we partition our design knowledge accord- ing to the tasks of synthesis. Each category also uses a distinct knowledge representation scheme. We divide knowledge into three categories: knowledge about particular modules that can be used in a design, which we represent procedurally; knowledge used to distinguish implementations of a module, which we represent declaratively; and knowledge about the composition of designs from modules, which we represent both procedurally and declaratively. In the next three sections we will describe the knowledge in each of these categories and efficient level components PMS processors, memories, switches, links program memories, instructions, operators, controls horizontal categories (from Bell and Newell) module knowledge vertical categorization of module knowledge Figure 1. TWO CATEGORIZATIONS KNOWLEDGE representations for it. Our enumeration of useful knowledge is not meant to be complete or final, but our experience tells us that this taxonomy is useful. 3. KNOWLEDGE ABOUT MODULES We divide knowledge about modules into two distinct topics: module designs themselves and methods for evaluating modules. (A module may be a type of component or a class of com- ponents, like the class of adders of width n). The design of a module, or of an algorithm for designing a class of modules, is a form of expert knowledge. The ability to compute certain important properties of a module’s design is an orthogonal type APPLICATIONS / 867 of knowledge. The Fred database takes advantage of this ortho- gonality by using an object-oriented description for modules, as does Palladio. We build a general-purpose set of measurement methods to answer fundamental queries, and build on top of these utilities procedural descriptions of specific modules. The objects that describe these modules are kept in a database that can be searched using selection functions. Measurement of a module’s properties is the best understood topic in VLSI design. Algorithms exist to measure almost every conceivable property of interest. (There is little point in recast- ing these algorithms in declarative form.) 
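A schematic rendering of such a module object might look as follows (Python is used purely for exposition, and the cell dimensions and delay coefficients are invented for illustration; Fred's own representation differs):

class RippleAdder:
    def __init__(self, width):
        self.width = width                 # parameterized bit width

    def function(self):
        return ("add", self.width)

    def shape(self):
        # one full-adder cell per bit, stacked vertically (microns, assumed)
        return (40.0, 55.0 * self.width)   # bounding box (width, height)

    def area(self):
        w, h = self.shape()
        return w * h

    def delay(self, load_pf=0.1):
        # worst-case carry chain: a per-bit term plus a load-dependent term (ns)
        return 1.8 * self.width + 4.0 * load_pf

adder = RippleAdder(16)
print(adder.area(), adder.delay(load_pf=0.5))

A selection function can then rank candidate modules by calling these methods, rather than by consulting a fixed table of precomputed values.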
Unfortunately, most synthesis systems have used simple look-up tables or crude built-in approximations to measure candidate designs. Tables are insufficient to describe parameterized module designs; built- in approximations make it difficult to justify decisions to later stages of designs, and inconsistencies may result if different design procedures use different approximations. We have had good success with evaluating candidate designs based on the answers to a few fundamental queries: l Physical properties- Measurements of the values of the electrical elements in the circuit. Values for transistors (length and width) are easy to measure. Parasitic values associated with layout elements (transistors, contact cuts, wires) require more effort. l Speed-Delay is the real time required to propagate logic signals through networks. The details of delay calculation differ among circuit technologies, but all require measure- ment of the circuit element values and calculation of delays based on those values. Methods for calculating delay for MOS technologies are described by Osterhout.17 l Clocking-A related, but different type of knowledge describes the clocking behavior of the module, particularly, the clock phases on which the inputs and outputs are valid and the delay in clock cycles from an input to an output. Once clock signals are declared and the clocking behavior of primitive components is known, standard longest path algo- rithms can be used to compute the clocking delay from inputs to outputs. l 8 l Shape-If the module’s physical extent is modeled as a set of rectangles, a request for the shape of a module can be used to derive measures such as area, aspect ratio, and minimum required spacing. Fred uses a simple form of com- pactionlg to estimate the shape of a module from its consti- tuent components and wires. Compaction also tells us about the locations of the input/output ports for the module. These queries are usually enough to derive the required knowledge about a module; in a few cases it may be necessary to supply special-purpose methods for calculating some parameter either for performance reasons or, occasionally, because the approximations used in the standard methods are inadequate for the peculiarities of a particular module. The complementary component of module knowledge is the design of the module itself. We describe module designs pro- cedurally rather than declaratively. There are many design tasks that can be done algorithmically: layout compaction,20 transistor sizing,21 and clocking22 are examples. As with meas- urement procedures, there is little point in reformulating these algorithms declaratively. The most mundane part of the module design, the basic structural description of the components, wires, and layout elements that implement the module, could be described declaratively, but we choose to use a procedural representation for consistency and ease of use with existing design procedures. Another pragmatic reason for preferring procedural description of a module’s structure is that most designers know procedural languages but are unfamiliar with strongly declarative languages. 4. KNOWLEDGE ABOUT MODULE SELECTION Some information about a module is easily changed with changes in its parameters; other data is static across versions of the module. Often, we can use static information to make an initial selection of modules, and look at the dynamic information (which generally takes longer to compute) only when making final, detailed design decisions. 
Example of simple questions that greatly prune the search space of modules are “Does the module implement the function I am interested in?“, “Is the module implementable in a technology compatible with my design?“, and “Is the floor plan of this module compatible with my current physical design?” The distinction between static and dynamic data is not always clear-cut, but we can use it to our advantage to speed the initial search of the module design space. Fred segregates static, discriminatory knowledge about modules into an associative database to select candidate modules for an implementation. An associative database that supports deduc- tion is powerful enough to support queries used in module selec- tion but simple enough to run quickly. The user and author of the database contents must come to an agreement on the mean- ing of the predicates in the database. We have found these categories useful in initial module selection: l Functionality-A description of functionality includes a statement of the gross function of the module (adder, shifter, etc.) and an enumeration of particular operating characteris- tics of the module. Synthesis often requires functional infor- mation like “Does this latch have a reset signal?” or “What are the feasible bit widths of this shifter?” Such knowledge describes how a module deviates from the ideal behavior for a module that implements the pure function or how it is cus- tomized for a particular task. l Signal characteristics- Modules must be compatible in the way they represent logical signals as electrical signals. The important parameters of a signal are: - signal level (voltages for logic 0 and 1) ; - signal polarity (active high or low); - signal duality (whether the circuit requires/produces both true and complement signals). l Technology families- The most common technological deci- sions concern fabrication technology and circuit family. The description should allow the synthesis system to distinguish both particular technologies and families of technologies; a module generator may, for instance, be able to produce modules for a number of CMOS technologies. Examples of CMOS circuit families are full complementary, pseudo- nMOS, domino,23 and zipper.24 The database should also describe the compatibility of families; for example, domino CMOS circuits may be used to drive fully complementary circuits, but not the reverse. All this information can be derived from the module descriptions 868 / ENGINEERING before design starts and stored in the database. Once facts about the modules have been coded as patterns, such as (tech- nology adder-l cmos2.5), the database can be searched using standard pattern matching techniques. A pattern like (and (function ?x add) (technology ?x cmos)) will return the modules that can do an addition and are implemented in CMOS in the bindings of the variable ?x to the names of the candidate modules. The associative database mechanism makes it easy to support two useful forms of record-keeping for design decisions. Both methods rely on having the database apply a standard pattern set that is used along with the current pattern specified by the designer. First, a designer can add patterns that express design decisions like fabrication technology. Modules not meeting the criteria will be filtered out by the standard patterns. Similarly, the synthesis program can load the standard pattern set with a design style description that will enforce a set of externally determined choices-circuit family, layout style, etc. 
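A toy version of the mechanism illustrates how a standard pattern set composes with the designer's query (the facts, matcher, and patterns below are invented for exposition; the actual database also supports deduction and search-ordering heuristics):

facts = {("function", "adder-1", "add"),
         ("technology", "adder-1", "cmos2.5"),
         ("function", "adder-2", "add"),
         ("technology", "adder-2", "nmos")}

def matches(pattern, fact, bindings):
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if new.setdefault(p, f) != f:   # variable already bound to something else
                return None
        elif p != f:
            return None
    return new

def query(patterns, bindings=None):
    bindings = bindings or {}
    if not patterns:
        return [bindings]
    results = []
    for fact in facts:
        b = matches(patterns[0], fact, bindings)
        if b is not None:
            results.extend(query(patterns[1:], b))
    return results

# standard pattern set (a design-style decision) plus the designer's own query
standard = [("technology", "?x", "cmos2.5")]
print(query(standard + [("function", "?x", "add")]))   # -> [{'?x': 'adder-1'}]

Whether a standard pattern comes from the designer or from a design style description, it is simply prepended to every query.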
In both cases the history of changes to the standard pattern set can be used to trace design choices. Most of the standard techniques described in the literature can be used to speed up the pattern matching search. Because the categories of knowledge, and therefore the first names in the database patterns, are static, they can be organized into rela- tions to speed the search. For efficiency reasons the database should also include ordering criteria to order the search for max- imum efficiency; often a few standard categories will greatly res- trict the search space. 5. KNOWLEDGE ABOUT MODULE COMPOSITION The previous sections have described knowledge about particular modules; we also need knowledge about how to put together modules to build new designs. We categorize knowledge about module design into three fields: general composition rules, which describe the basic operations that are used to build a module from components; optimization transformations, which transform one design into another, presumably better design; and search rules, which help the synthesis program search the space of can- didate designs. Each of these types of knowledge is used differently, and so requires a different representation. Examples of general composition rules are that a wire be con- nected to at least one input port and one output port, or that wires of incompatible clock phases not be connected. Simple composition operations are easy to specify and frequently exe- cuted to build and rebuild test designs. For these reasons we choose to represent them as compiled functions. We use the composition functions to build more complex transformations on the design. Optimization transformations are more intricate. They must recognize a subset of the design that meets some criteria and then transform it into another implementation with the same functionality that is at least as good. The recognition criteria for optimizations are often structural (remove a multiplexer with all its inputs tied to the same signal) but may look at other pro- perties of the design (if a logic gate with minimum-size output transistors is driving a wire with a capacitance of at least 10 pF, replace the gate with a high-power logical equivalent). Experi- ence with the DAA has shown that pattern matching algorithms like those found in production systems such as the OPS family25 are a good engine for driving transformations. Optimizations stored as patterns are easy to describe and to change; further, optimizations specific to a particular technology or design style can easily be loaded into the system. The representation of search heuristics is a more complicated issue. Some heuristics cannot easily be formulated as rules; an example is the cost function used by the DAA to evaluate the effect of coalescing functions into a module. Although the result of the cost analysis can be used to drive a rule, writing the cost function itself as rules is both cumbersome and expensive in computation time. As a result, the DAA evaluates the cost func- tion procedurally and uses the result to control rule firing. In general, most predicates that can be used as indicators in guid- ing search are sufficiently complicated that they should be calcu- lated with procedures- efficiency in calculating their values is of particular concern because of the large size of the search space. However, predicates can be evaluated by rules that decide how to modify the candidate design. 
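As a small illustration of the division of labor just described, the sketch below removes a multiplexer whose data inputs are all tied to the same signal; the netlist representation and names are hypothetical, not the DAA's data structures or an actual OPS production.

```python
from dataclasses import dataclass

@dataclass
class Mux:
    """A hypothetical netlist node: an output wire, a select wire, and data inputs."""
    output: str
    select: str
    inputs: list

def is_degenerate(mux):
    """Recognition criterion: every data input is tied to the same signal."""
    return len(set(mux.inputs)) == 1

def remove_degenerate_muxes(netlist, aliases):
    """Rewrite: drop the mux and short its output to the common input signal."""
    kept = []
    for node in netlist:
        if isinstance(node, Mux) and is_degenerate(node):
            aliases[node.output] = node.inputs[0]   # functionality is preserved
        else:
            kept.append(node)
    return kept, aliases

netlist = [Mux(output="w3", select="phi1", inputs=["w1", "w1"])]
print(remove_degenerate_muxes(netlist, {}))   # ([], {'w3': 'w1'}): the mux is gone
```

Whether the recognition criterion is purely structural, as here, or a procedurally computed predicate such as an estimated cost or load capacitance, the final decision to apply the rewrite can still be expressed as a rule.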
Implementing the final decision-making process as rules gives the standard advantages of rule-based systems: rules can easily be changed during experi- mentation, and special-purpose rules can be added dynamically to customize the search. 6. SYNTHESIS VERSUS ANALYSIS Hardware synthesis is different from the diagnosis and debug- ging problems explored by several investigators. Analysis uses knowledge to infer the functionality and performance of a cir- cuit, while synthesis uses knowledge to gauge the quality of an implementation decision. Exploration of the differences between the two problems helps to illustrate the limitations of rule-based systems in synthesis. Examples of analysis systems are the circuit understanding pro- gram of Stallman and Sussman ;26 hardware error diagnosis pro- grams described by Davis and Shrobe,27 and Genesereth;28 and a hardware design debugger described by Kelly.2g Analysis most closely resembles local design optimization, in that an existing design must be analyzed by looking for particular traits. Both concentrate on local analysis of the design, which can be easily implemented as rules. Synthesis, on the other hand, requires global knowledge of the search space, and several factors limit the utility of rule-based systems for global search. Figure 2 shows the design space for a floating-point arithmetic algorithm as generated by BUD, using an area * time” objective function for several values of n. The search space is unpredictable; decisions on how to change the design cannot be made based on simple, local criteria. Two fac- tors argue against using production systems to drive searches through such a space. One is efficiency; the size of the search space for an interesting design is extremely large, and the space may change with design decisions. Another is the difficulty of expressing synthesis decisions as patterns-consider the relative difficulties of explaining how to travel from Murray Hill NJ to New York using procedures (“go from the south exit, turning APPLICATIONS / 869 Normalized AT-N 0 5 10 15 Step Figure 2. THE SEARCH SPACE OF A SIMPLE DESIGN left at the light, continue until you find the entrance to I-78 North.,.“) and rules that describe what to do at each intermedi- ate state along the path. Although local transformations may be carried out by rules, the global nature of the search required argues for procedural control of the search strategy, either by rule-based systems that allow control of the search process or by direct coding of the procedures. 7. PLANNING-AN OPEN PROBLEM One important topic with which we lack direct experience is planning and control of design. Planning is important because many implementation decisions are deferred: later design pro- cedures must know the goals and rationale of the earlier pro- cedures; the assumptions and estimates made during initial design must be verified; and if the design is found to be unsatis- factory, some plan must be formed to correct the problem. We see two important problems in planning for synthesis. The first is to identify a minimal set of knowledge about design deci- sions required to detect errors and establish criteria for correct- ing them. The second problem is how to control the procedures used to solve design subproblems. Not all synthesis algorithms are well-suited to explaining their results or how to change the design with minimal impact on its other properties. 
The control of synthesis algorithms will probably require expert knowledge about those algorithms encapsulated in rule-based systems. 8. CONCLUSIONS We have discussed our partitioning of concerns in VLSI design. We believe that it is important to encourage examination of design decisions deeply, particularly because the problem is so poorly understood. So we prefer a vertical organization of knowledge that emphasizes complete descriptions of modules that can be used in the design of a chip. In general, attacks on individual subproblems encountered dur- ing synthesis are best made by well-known algorithms. Tradi- tional AI methods are best suited to the local control of the com- position of modules and to diagnosing problems encountered during synthesis. The daunting problem of VLSI synthesis lies in balancing declarative and procedural techniques to converge on a quality design. REFERENCES I1 1 Carver Mead and Lynn Conway, Introduction to VLSZ Sys- tems, Addison-Wesley, Reading, Massachusetts (1980). [21 Wayne Wolf, “An Object-Oriented, Procedural Database for VLSI Chip Planning,” Proceedings, 23rd Design Auto- mation Conference, ACM/IEEE, (June, 1986). 131 Thaddeus J. Kowalski, An Artificial Intelligence Approach to VLSI Design, Kluwer Academic Publishers, Hingham MA (1985). 141 Michael McFarland, “Using Bottom-Up Design Techniques in the Synthesis of Digital Hardware from Abstract Behavioral Descriptions,” Proceedings, 23rd Design Au to - mation Conference, ACM/IEEE, (June, 1986). 151 Bruce G. Buchanan and Edward A. Feigenbaum, “Dendral and Meta-Dendral: Their Applications Dimension,” Artificial Intelligence 11(1,2) pp. 5-24 (1978). 161 G. M. Amdahl , G. A. Blaauw , and F. P. Brooks, Jr., “Architecture of the IBM System/360,” IBM Journal of Research and Development 8(2) pp. 87- 101 (April, 1964). 171 C. Gordon Bell and Allen Newell, Computer Structures: Readings and Examples, McGraw-Hill, New York (197 1). [81 Daniel P. Siewiorek, C. Gordon Bell , and Allen Newell, Computer Structures: Principles and Examples, McGraw- Hill, New York (1982). [91 Thomas M. McWilliams and Lawrence C. Widdoes, Jr., “SCALD: Structured Computer-Aided Logic Design,” 25th Design Automation Conference, pp. 271-277 IEEE Com- puter Society Press, (1977). [lOI Stephen W. Director , Alice C. Parker , Daniel P. Siewiorek , and Donald E. Thomas, Jr., “A Design Metho- dology and Computer Aids for Digital VLSI Systems,” IEEE Transactions on Circuits and Systems CAS-28(7) pp. 634- 645 (July, 1981). [ 111 Mark Stefik, Daniel G. Bobrow, Alan Bell, Harold Brown, Lynn Conway, and Christoper Tong, “The Partitioning of Concerns in Digital System Design,” Proceedings, 1982 Conference on Advanced Research in VLSI, MIT, pp. 43-52 Artech House, (January, 1983). [12] Daniel D. Gajski and Robert H. Kuhn, “Guest Editor’s Introduction: New VLSI Tools,” Computer 16(12) pp. 11-14 (December, 1983). 870 / ENGINEERING 1131 Robert A. Walker and Donald E. Thomas, “A Model of Design Representation and Synthesis,” 22nd ACM/IEEE Design Automation Conference, pp. 453-459 IEEE Com- puter Society Press, (June, 1985). 1141 Harold Brown, Christopher Tong, and Gordon Foyster, “Palladio: An Exploratory Environment for Circuit Design,” Computer, pp. 41-56 IEEE Computer Society, (December, 1983). 1151 Stephen Trimberger , James A. Rowson , Charles R. Lang , and John P. Gray, “A Structured Design Methodology and Associated Software Tools,” IEEE Transactions on Circuits and Systems CAS-28(7) pp. 618-634 (July, 1981). 
[ 161 David Noice, Rob Mathews, and John Newkirk, “A Clock- ing Discipline for Two-Phase Digital Systems,” Proceed- ings, International Conference on Circuits and Computers, pp. 108-l 11 IEEE Computer Society, (1982). 1171 John K. Osterhout, “Crystal: A Timing Analyzer for nMOS VLSI Circuits,” Proceedings, Third Caltech Conference on VLSI, pp. 57-69 Rockville MD, (1983). 1181 Kurt Mehlhorn, Data Structures and Algorithms 2: Graph Algorithms and NP-Completeness, Springer-Verlag, Berlin (1984). [191 M. W. Bales, Layout Rule Spacing of Symbolic Integrated Circuit Artwork, Masters thesis, University of California, Berkeley (May 4, 1982). 1201 Wayne Wolf, Two-Dimensional Compaction Strategies, PhD thesis, Stanford University (March 1984). 1211 J. P. Fishburn and A. E. Dunlop, “TILOS: A Posynomial Programming Approach to Transistor Sizing,” Proceedings, ICCAD-85, pp. 326-328 IEEE Computer Society, (November, 1985). [221 Nohbyung Park and Alice Parker, “Synthesis of Optimal Clocking Schemes,” Proceedings, 22nd Design Automation Conference, pp. 489-495 IEEE Computer Society, (June, 1985). 1231 R. H. Krambeck, C. M. Lee, and H. F. S. Law, “High- Speed Compact Circuits with CMOS," IEEE Journal of Solid-State Circuits SC-17(3) pp. 614-619 IEEE Circuits and Systems Society, (June, 1982). 1241 Charles M. Lee and Ellen W. Szeto, “Zipper CMOS," IEEE Circuits and Devices Magazine 2(3) pp. lo-17 (May, 1986). 1251 Lee Brownston, Robert Farrell, Elaine Kant, and Nancy Martin, Programming Expert Systems in OPSS, Addison- Wesley, Reading, Massachusetts (1985). 1271 Randall Davis and Howard Shrobe, “Representing Struc- ture and Behavior of Digital Hardware,” Computer 16(10) pp. 75-82 (October, 1983). [281 Michael R. Genesereth, “The Use of Design Descriptions in Automated Diagnosis,” pp. 41 l-436 in Qualitative Reason- ing About Physical Systems, ed. Daniel G. Bobrow, MIT Press, Cambridge MA (1985). 1291 Van E. Kelly, The Critter System -An Artificial Intelli- gence Approach to Digital Circuit Design Critiquing, PhD thesis, Rutgers University (January, 1985). [261 Richard M. Stallman and Gerald J. Sussman, “Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis,” Artificial Intelligence 9(2) pp. 135-196 (October, 1977). APPLICATIONS / 871
Causal and Plausible Reasoning in Expert Systems Gerald Shao-Hung Liuf Advanced Products, VHSIC Test Systems, SentryBchlumberger 1725 Technology Dr., San Jose, CA 95 115 ABSTRACT This study sets out to establish a unified framework for causal and plausible reasoning. We identify a primitive set of causal roles which a condition may play in the inference. We also extend Dempster-Shafer theory to compose the belief in conclusion by the belief in rules and the belief in conditions. The combined framework permits us to express and propagate a scale of belief certainties in the context of individual roles. Both the causation aspect and the certainty aspect of an inference are now accounted for in a coherent way. I INTRODUCTION Inference rules, as a primitive for reasoning in expert systems, contain two orthogonal components: the inference nature (i.e. what merits the conclu- sion and how is it warranted?) and the inference strength (i.e. how much is the conclusion supported - almost for certain, or weakly so?). The two inference components have largely received separate attention. To account for inference strengths some researchers resort exclusively to various ‘likel- ihood calculi* ’ without causal provisions. Others, aiming to explain the inference nature, endorse symbolic rules without any likelihood mechanism (e.g. [Cohen 831). There are also some other researchers who employ a hybrid approach (see [Szolovits 781). The problems with these rule representations are as follows: symbolic rules without likelihood cannot represent inference strength; likelihood rules without a causal account cannot distinguish inference nature; and hybrid representations to date are either piecewise (using separately one of the two methods in each rule) or ad hoc (lacking a sound theoretical ground for the likelihood calculus). In brief, the non-numerical approach errs on the weak side, whereas an exclusive likelihood calculus suffers from superficiality. The goal of this research is therefore to combine causal and plausible rea- soning in a coherent way. There are two aspects to this goal: identifying a primitive set of causal categories named roles, and extending plausible rea- soning under these qualitatively different roles. II RELATED WORK A. Non-likelihood Symbolic Approaches Endorsements are the explicit construction of records that a particular kind of inference has taken place (e.g. the imprecisely defined supportive condi- tion may be too specific for the conclusion [Cohen 83, ~1331). There are many different kinds of endorsement, corresponding to different kinds of evidence for and against a proposition, However, elaborate heuristics do not overcome the general problem with pure symbolic reasoning: they err on the weak side after all. Categorical inferences are “ones made without significant reservations” [Szolovits 78, ~1161: IF <condition> THEN commit <decision>. A strong causal inference, in our term, is just a categorical one with an explicit tne a&or is also with the Computer Sciences Division, EECS, UC Berkeley, where this work was supported in part by NASA Grant NCC-2-275 and NSF Grant ECS-820%79. causal account: e.g. <condition> IS-SUFFICIENT-FOR <decision>, <con- dition, IS-NECESSARY-FOR <decision>, <condition> EXCLUDES <decision>, etc. Being simple to make, such categorical decisions usually depend on relatively few facts [ibid., ~1171. Unfortunately, for reasons all too obvious, reasoning exclusively by (strong causal) categories finds lim- ited applications only. B. 
Numerical Likelihood Approaches The Certainty Factor (CF) model [Shortliffe 761 attaches to each inference rule a CF representing the change in belief about the concluding hypothesis given the premised evidence. The actual formulae in the CF model are immaterial to our discussion, for they share the same following problems: these formulae derive from no where, and the CF model in itself does not deal with partial evidence bearing on multiple hypotheses. Bayes nehvorks (a term used in [pearl 851) refer to directed acyclic graphs in which the nodes signify propositions (or variables), and the strengths on the linking arcs represent the (Bayesian) conditional probabilities. Bayes networks include the “inference network” in PROSPECTOR [Duda, Hart, et al 761 as an important variation. These networks largely employ (varia- tions of) Bayes’ rule as the inference mechanism, therefore those usual issues in Bayesian theories [Chamiak 831 are raised: the excessive number of conditional probabilities, the assumption of pairwise conditional independence as a device to escape from the preceding problem, and how to deal with partial evidence bearing on multiple hypotheses. 1. Dempster-Shafer Theory There are two distinguishing advantages of Dempster-Shafer theory as a ‘likelihood calculus’ over Certainty Factors and Bayes Networks: it is able to model the narrowing of the hypothesis set with the accumulation of evi- dence; it permits us to reserve part of our belief tti the ‘don’t-know’ choice (a degree of ignorance). Suppose probability judgements are required for possible answers to a par- ticular question. These possible answers form a set namedframe of discern- ment. To provide supporting evidence, a ‘related’ question may be asked so that the established probabilities of the answers to this related question will shed light on those to the original. The set of answers to the related ques- tion forms a backgroundframe of discernment; correspondingly the original frame may be referred to as the foregroundframe. The ‘inter-relatedness’ between the two questions is manifested by that not every answer in the background frame is compatible (i.e. logically consistent) with all the answers in the foreground frame. Furthermore, commitment of belief to an answer in the foreground frame can be counted as reason to believe it by the sum of probabilities of all the compatible answers in the background frame. Example 1. (Originated from [Zadeh 84, p81]) Suppose Country X believes that a submarine, S, belonging to Country Y is hiding in X’s territorial waters. The Ministry of Defense of X wants to evaluate the possible loca- * ‘Likelihood’ in this context refers in general to such formalisms as Certainty Factors, Probability, and Belief Functions (Plausible Reasoning). 220 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. tions of S. A group of Navy experts, E I,..., EM, are summoned; each of them indicates an area which he believes S is in. Let A 1,..., A, denote the areas indicated by the experts E r,..., EM individually (I I M). Assume that there are also certain experts who, being ignorant in this case, cannot indicate any specific area. Now suppose the Ministry of Defense aggregates the experts opinion by averaging: the vote of E,,, is multiplied by a number w,,OIw,,,Il,suchthatw,+ ..a + w, = 1. 
Then the reason to believe in an area Ai is counted by a so-called basic probability assignment (bpa) to Aj:m(A,)= x w,, where E,,,:+A, &notes that the expert E, votes for E-:-d, the area Ai. Similarly the amount of ignorance is measured by a bpu to the I I entire territorial waters UAi : m LJAi = C w,,, . i=l 1 1 i=l I Em:+gA, Stated formally, let eb and @3, be the background and the foreground frame of discernment respectively. Between &. and e,, the element-subset compatibility relation is denoted by ‘:+‘. More specifically, b :+F denotes that b is compatible with all the elements in F, and there is no other super- set of F being such, where b E 9, is an item of supporting evidence in the background and Fcef is the ‘maximum supported subset’ in the fore- ground. Such an F is called afocal element. Then the commitment of belief to F, namely a basic probability assignment (bpu) to F, is counted (as rea- son to believe) by m(F)= x P(b) b :-PF (1) where P(b) is the background probability judgement over b E @,. It is easy to see that (1) m(0)=0, and (2) C m(F)= 1. In addition, rn(ef) >O FM, represents the degree of partial ignorance. When all the focal elements (as supported subsets) are singletons, the basic probability assignment m reduces to a Bayesian probability. III BELIEF CERTAINTY IN FACTS AND RULES This section discusses how to represent belief certainty in the knowledge base. The knowledge base is first divided into (unconditional) facts and (inference) rules. They will be attached with basic probability assignments as commitment of beliefs. The role system, to be introduced later, can then be viewed as additional causal structures imposed on generic inference rules. A. Factual Certainty To begin with, we represent an unconditional fact by its canonical form: X is F. For instance, Cur01 has a young daughter is represented by AGE(DAUGHTER(Curo1)) is YOUNG. The belief in “X is F” can usually be represented by an interval [vt,vJ. v1 expresses the extent to which we confirm “X is F” by the available evidence, whereas va express the extent to which we disconfIrm it. There are two advantages of using an interval rather than a single qualifier. First, information incompleteness (or partial ignorance, measured by I-(v,+v,)) is separated from uncertainty (expressed by v1 or v2 alone). Second, information absence (indicated by [0, 01) is represented differently than negation (expressed by [0, 11). If several pieces of facts are related by mutual exclusion, a frame of dis- cernment can be formed. Then the degree of confirmation in each proposi- tion is simply its basic probability assignment. In view of this frame of dis- cernment, any independent proposition and its negation are included in a frame by themselves. Such frames are called dichotomous frames. Then the belief interval amounts to a concise representation for the (&fault) dichoto- mous frame. B. Rule Certainty available for the conclusion. A central issue in evidential reasoning is how to represent uncertain rules. Bayesian probability expresses uncertain rules by the conditional probability Prob (h I e ), then concludes the hypothesis from uncertain evi- dence: P (h I E’)=v (h 1 ei)P (ei I E ‘). In the original Dempster-Shafer theory, however, the counterpart procedure is missing (in spite of the “con- ditional belief function” Be1 (h I e ) defined in [Shafer 761). To remedy the problem, this study follows [Ginsberg 841, [Baldwin 851 and [Yen 851 to extend the original Dempster-Shafer theory, but a different approach is taken. 
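Before comparing this extension with the earlier proposals in detail, equation (1) itself can be made concrete with a short sketch. The expert weights, areas, and compatibility relation below are invented for illustration and do not come from any real assessment.

```python
# Background frame: experts E1..E4 with voting weights (summing to 1); the
# weights play the role of the background probability judgement P(b).
P = {"E1": 0.4, "E2": 0.3, "E3": 0.2, "E4": 0.1}

# Compatibility relation b :-> F, mapping each background element to the
# maximal supported subset (focal element) of the foreground frame.
FRAME = frozenset({"A1", "A2", "A3"})
compatible = {
    "E1": frozenset({"A1"}),
    "E2": frozenset({"A1", "A2"}),   # E2 cannot distinguish A1 from A2
    "E3": frozenset({"A2"}),
    "E4": FRAME,                     # a totally ignorant expert supports the whole frame
}

def bpa(P, compatible):
    """Equation (1): m(F) = sum of P(b) over all b whose maximal support is exactly F."""
    m = {}
    for b, F in compatible.items():
        m[F] = m.get(F, 0.0) + P[b]
    return m

for F, mass in bpa(P, compatible).items():
    print(sorted(F), round(mass, 2))
# m({A1}) = 0.4, m({A1,A2}) = 0.3, m({A2}) = 0.2, m(frame) = 0.1
```

The mass assigned to the entire frame is exactly the degree of partial ignorance, and when every focal element is a singleton the assignment reduces to an ordinary Bayesian probability distribution, as noted above.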
In practice, our approach differs from [Ginsberg 841 and [Baldwin 851 in that it takes into account those frames of discerment more general than the dichotomous ones (those which include only two proposi- tions); our approach differs from [Yen 851 in that it is based on associa- tional strengths more general than the ‘partition-based conditional probabil- ities’. Additionally, in methodology our approach differs from all previous ones in that it relates to the fundamental compatibility relation with the background frames. As a result, we can derive, not define, the extension theory. Examples comparing these approaches are given below. The following two paragraphs define the terms needed for the extension. Their mathematical relations are then expressed below. An inference rule of the form Ai3C, in general may have either the antecedent Ai or the consequent Cj as sets (rather than singletons) of some elements. *This is especially true for hierarchical narrowing of the hypothesi; 6 set, e.g. evidence $DISEASE is HEPATITIS, DISEASE is CIRRHOSISJ. Therefore we assume in general Ai={uk I u,EAiJre, and C,={c, I c/E Cj} ~$3,. e, is the antecedent frame of discerment containing all possible uk’s, and 8, is the consequent frame of discerment containing all possible cl ‘s. In addition, there is a condirionul frame of discernment 0 c1A Containing all possible pairs of cl given Ai’S. An inference rule Aj +Ci can then be viewed as a subset, {cl given Ai I cI E CjJ, of e,, A. To provide a basis for the basic probability assignments over e,, e,, and 8 clA, certain background frames as the supporting evidence have to be assumed. Let @‘=={a’,} be the background of &={a~, 6’=={c’J be the background of &={cJ, and 8’ c~A={~ ‘,, given AiJ be the background of OcJA={cI given AiJ. To prevent confusion, 8’, is called the background antecedent frame, and 8, the foreground untecedeti frame. Similarly @3’, is referred to as the background consequenrfiume; 8, the foreground con- sequent frame. And @,,A and eclA the background conditional and the foreground conditional frames respectively. According to (l), a basic probability assignment to the foreground antecedent Ai measures the ‘reason to believe’ Ai by compatible evidence in the background: m(AilE’)= C P(a’,,,lE’) where Ai s&, and u ‘,,, E 8’, (3) a’.:+A, where E’ denotes the source of observation. In analogy, a conditional bps can be defined to measure the ‘reason to believe’ the conditional proposi- tion Ci given Ai (corresponding to an inference Ai~C,): m(Cj IAi)‘Im(jclJ IAi)tm(jcl given AiJ) = c P(C’nIAiI where c’,E@‘~ , c~E~,c@,,A,G~~ c I. : -c,={c,} (4) Actually, background probabilities P (c ‘, ) E ‘)‘s can also determine directly the bpu to a foreground consequent Cj: m(CjIE’)= C P(c’,,lE’) C’A-C, where C,&,, and C’“E @3’, (5) Butthegoalistoexpressm(CjIE3intermsofm(CiIA,)andm(A,IE’). The rationale is similar to the Bayesian conditioning procedure P (h I E ‘)=Cp (h ] ei)P (ei ( E ‘). That is, there may not be direct probabilities Example 2 (Continuedfrom Example I): Suppose the Ministry of Defense of Country X attempts to conjecture the intention of S based on its locations. Uncertainty and Expert Systems: AUTOMATED REASONING / 22 1 Assume that these conjectures are made in the form of inference rules: Ai+C’i, signifying that S cruising in Area At suggests Conspiracy Cj of Country Y. 
Suppose the Ministry relies on those Navy experts as in Exam- ple 1 to evaluate the possible locations of S, but calls the Intelligence Agency for a confirming history of activities, c’t, . . . . or c’,v, in each of the areas A 1, . . . . A,. Assume furthermore that each of these local history reports, c’, within At, is weighted by vti such that Cvti = 1 for each i. Then the bpu to a foreground antecedent, m(Ai ) E ‘L is x w, as in Em:+.4 Example 1, whereas a ~~nditiod bpu, m(Cj IA;), is determined by m(CjIAt)= z Vi . In above, the weight vti approximates the condi- c’.:-+c, tional probability P (c’” \A,), and c ‘n:+Cj signifies that the activity history c ‘, confirms the conjecture Cj . Recall that the goal of the Ministry was to evaluate the reason to believe each conspiracy Cj, which is measured by m (Cj 1 E ‘) = z P (c’,, I E ‘). I . However, short of a direct history on P (c’, I E’)‘s, thec6G& seeks to express m(Cj\E’) in terms of m(CjIAi, and m(AiIE’). TO this end Theorem 1 provides an answer below. Two lemmas are first established. Lemma 1. For each u ‘,,,E 9’a and At~9, such that u ‘,,, :~A,, the follow- ing property holds: P(Ai lu’mE’)= 1 Lemma 2. For each a ‘,,,E 8’, and Ai&, such that a ‘m:+Ai, if P (c’,, IAt) = P(c’, IAiUbE’) for certain c’,,E~‘~, then the following pro- perty holds: Theorem 1. (Propagation of Beliefs) In an inference rule Ai+Cj, if for each background antecedent a ‘,,, that SUPERS At, and each background consequent C’” that supports Ci, the equality P(c’,IAi)=P(c’,IAiU’,E? holds,thenm(CiIE> z m(CjIAt)m(AtIE’). ProoT: the theorem fol$z from (3), (4), (S), and Lemmas 1 and 2. It should be noted (I) that this theorem was implicitly assumed in [Ginsberg 84, ~1261 and [Baldwin 85, ~121; (2) that not only can beliefs in conse- quents be composed of beliefs in antecedents and the rules, but also the consequent ignorance (i.e. m(e, I E ‘)) can be composed of antecedent ignorance (i.e. m@,, 1 E ‘)) and rule ignorance (i.e. rn(ec IAi)). This is expressed in the following corollary: Corollary 1. (Increasing Propagation @Ignorance) If m (e, I e,>=i (that is, without knowing the foreground antecedent, we cannot conclude any foreground consequent except for the frame itself), then m(e, IE?= Z m(% IAdm(4 lE?+m@, IE’MV% IE’). e.4Ge. Example 3 (Correspondence to Buyesiun Beliefs): If there is no partial ignorance involved whatsoever, and if all the antecedents and the conse- quents are singletons (that is, if the beliefs are all classical Bayesian proba- bilities [Shafer 76, P451h then Theorem 1 m(Cj I E’)= C m (Cj I At)m(Ai I E ‘) reduces to the posterior probability: A,Ce. P (h ] E ‘)= x P (h I et)P (ei I E ‘). In this case, a foreground frame becomes &E e. identical to its background counterpart. Example 4 (Correspondence to Partition-Bused Probabilities [Yen 85, ~81): If there is no partial ignorance involved whatsoever, and if both the antecedents and the consequents (as focal elements) form a partition in the respective foreground frames, then Theorem 1 becomes Yen’s extension using ‘partition-based conditional probabilities’: P tcj IE’F 2 P(Cj IAiY’(Ai IE? (6) Are n, for each CjE II,, where I&* and II, are partitions of e, and 8, respec- tively. Examples 3 and 4 have different meanings. The partitioning approach by Yen allows hierarchical narrowing of the hypothesis set, although it doesn’t account for ignorances, whereas the classical Bayesian approach requires a probability assignment to every single element in the beginning. 
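A short sketch makes the propagation of Theorem 1, and the ignorance propagation of Corollary 1, concrete. The frames and masses below are invented for illustration rather than taken from the examples above.

```python
# Propagation of beliefs (Theorem 1):  m(Cj | E') = sum_i m(Cj | Ai) * m(Ai | E').
# Foreground antecedent frame {A1, A2}; consequent frame {C1, C2}; illustrative numbers.

# Belief in the antecedents given the observations E'.
m_A = {"A1": 0.6, "A2": 0.3, "frame_A": 0.1}          # 0.1 = antecedent ignorance

# Conditional bpa's m(C | Ai) attached to the rules Ai -> C.
m_C_given = {
    "A1":      {"C1": 0.8, "frame_C": 0.2},
    "A2":      {"C1": 0.3, "C2": 0.5, "frame_C": 0.2},
    "frame_A": {"frame_C": 1.0},                       # knowing nothing about A yields nothing about C
}

def propagate(m_A, m_C_given):
    """Compose belief in the consequents from rule beliefs and antecedent beliefs."""
    m_C = {}
    for A, mass_A in m_A.items():
        for C, mass_C in m_C_given[A].items():
            m_C[C] = m_C.get(C, 0.0) + mass_C * mass_A
    return m_C

result = propagate(m_A, m_C_given)
print({C: round(v, 2) for C, v in result.items()})
# -> {'C1': 0.57, 'frame_C': 0.28, 'C2': 0.15}
# The mass on frame_C (0.28) is the composed consequent ignorance of Corollary 1.
```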
Example 5 (Correspondence to ~$otomous Frames [Ginsberg 84, ~1261, [Baldwin 85, ~121) Denote by A ‘4 C an inference rule with dichotomous consequents. u is the extent to which we believe C given A is true, and b is the extent we believe c given the same A. Then Ginsberg’s and Baldwin’s work - which really dealt with singleton C’s only - can be summarized by [cdl E’+A Additionally the consequent ignorance, 1-(cu +cb ) be identically obtained by m (e, ( E ‘) in cor011ary 1. in their calculation, can IV THE ROLE SYSTEM In an inference rule, the relations between the condition and the conclusion are multi-dimensional. They can be causal, or-more often they are associa- tional. In some cases the condition-conclusion relationship would be affected by other auxiliary conditions. These relationships are all qualita- tively different; they need to be treated accordingly. Therefore a primitive causal category, namely the role system IJiu 851, is established to account for these distinct relations. The role system divides the condition into six possible roles (which the condition may play in the inference): ussocia- tionul, supportive, udverse, suficient, necessary, and contrary roles. A. Associational Role A great deal of the surface-level empirical knowledge belongs to ational role. Such an inference rule in general takes the form of the associ- AU~WCl[m11,C2[mzl, . . . &[mo,l. (7) where 8, is the consequent frame of discernment, Ci’s L 8, are focal ele- ments as the alternative consequents, and mi ‘s are corresponding condi- tional bpu’s given that A is true. That is, mi=m (Ci (A ) as defined in (4). Most Often the consequent frame is dichotomized so that C & is the only focal element other than Cr and ec itself. In this case (7) may be abbrevi- ated as in Example 5: A As-W C r where m r and m 2 are the extent to which we believe C and C when A is true. The inference making with an uncertain antecedent in (7) is a straightfor- ward application of Theorem 1. Examples will be given along with the fol- lowing supportive roles. B. Supportive and Adverse Roles m-- Supportive and adverse roles may take place with an associational role. However, they are of secondary importance in the inference rule. That is, when a supportive (or adverse) condition is confirmed in addition to the pri- mary associational role, the conclusion will be better (or worse ) warranted - but the supportive or adverse role by themselves do not make a meaning- ful rule. Example 6 (Supportive Role from @ich 83, ~3491) ‘Close to half’ (40%) of the animals use camouflage as the defense mechan- ism. But those animals with colors similar to the environment are ‘much more liable’ (e.g. 80% of them) to defend themselves by camouflage. Example 7 (Continued from Example 6. An Adverse Role) 222 / SCIENCE Those animals with colors di$erent from the environment tend not to (e.g. only 10% of them will) defend themselves by camouflage. 1. General Form The general form of an inference rule with a supportive role is A~~~Cl[m,l,Cz[m2],...9~[me.l where mi=m(CjlA) (8) Supported by: A’~s~~Cl[m’lll,Cz[m’~ll. - - * ~,[~8,11 where m’jl*(Ci IAA’,) ~‘mS~~~~~~‘~ml,~~~~‘~ml,~ - * %[m~J whf32 m’i,a(Ci IAA’,) where A ‘j 'S are alternative focal elements over the supportive frame of dis- cemment 9,#. Suppose E +A [m (A 1 E)] from previous inferences. 
If in addition it is known that E’~A’,[m,;l~“,[m,;l, * * - %[me,,l where m,, Si=m (A ‘j I E ‘), then for each EE ‘,;;t,Ci the bpa m (Ci 1 EE ‘) may be cal- culated from Theorem 1 and (8) as follows: m(CiIEE’)= C m(CiIAA’j)m(AA’jIEE’) (9) A’,s’% =m(CiIA)m(AIE)[l- C m(A’j IE’II A’,c {A’,,..A’J + c m(CiIAA’j)m(AIE)m(A’jIE’) A’,E{A’,,.A’J 2. Examples Reformulated Rule 1 (Ref;~~t$ing Example 6): (animal ?x) ijW (defense-by ?x camouflage) Supported by: [O.S, 0.11 (color x ?c) A (habitation x ?y) A (color ?y ?c’) A (similar ?C ?C') s-+P (defense-by x camouflage) Rule 1’ (MF;$,i;8 Examples 6 and 7): (animal ?x) M-X (defense-by ?x camouflage) Supported by: [O.S. OS] (color x ?c) A (habitation x ?y) fi (color ?y ?c’) A (similar ?C ?C') & (defense-by x camouflage) [O.l, 0.81 (color x ?c) A (habitation x ?y) A (color ?y ?c’) A (different ?c ?c’) + SUPP (defense-by x camouflage) For illustration, consider the situation in which (animal x) is matched. If any of the supporting properties (color x c), (habitation x y), (color y c’) or (similar c c’) is unknown, then all that can be concluded is (defense-by x camouflage) with [0.4,0.6] by virtue of the generic unsupported rule. How- ever, if (animal x) A (color x c) A (habitation x y) A (color y c’) is known, and furthermore m (similar c c’)=O.7 and m(different c c’)=O.2, then more specific conclusion can be made. According to (9), Bel(defense-by x camouflage) = m(defense-by x camouflage) in Rule 1’ can be calculated by m[(defense x camouflage)lEE 7 = m[(defense x camouflage)](animal x)] * (1 - ml(similar c c’)lE ‘I - m[(different c c’)lE ‘I) + m[(defense x camouflage)l(animal x)..(sinular c c’)] . m[(similar c c’)lE ‘I + m[(defense x camouflage)l(animal x)..(different c c’)] . m[(different c c’)lE’l = 0.4 . (1 - 0.7 - 0.2) + 0.8 . 0.7 + 0.1.0.2 = 0.62 By the same token, m[(defense x camouflage)lEE 1 in Rule 1 is 0.4 . (1 - 0.7) + 0.8 * 0.7 = 0.68. Similarly Bel[NOT (defense-by x camouflage)] = m[NOT (defense-by x camouflage)] can be obtained 0.25 in Rule 1 and 0.29 in Rule 1’. C. Sufficient Role -- A condition plays a sufficient role if the confirmation of the condition alone warrants the conclusion. The typical usage of such sufficient roles is to facilitate the inference process of Modus Ponens. In the knowledge base a sufficient role may take place at a deep causal level: Example 8 lpatil81, ~8941 traintes tin al fluid. Diarrhea causes the excessive loss of lower gas- Alternatively, a sufficient role may take place on a surface, empirical basis. Consider in assessing the future market of a computer product, the execu- tive might have this rule of thumb: Example 9 If IBM commits itself to a five-year purchase contract totalling multi-million in revenue, then we should go for making the product. h 01 The general form of a rule with sufficient conditions takes the form AssFC where m as the bpa of C conditioned on A must be close w 1. The infer- ence making of such sufficient roles under uncertainty is simply a special form of Example 5 (which follows from Theorem 1): “,C k 41 E’+A [cm ,Ol E 3 Note that when belief in the antecedent is severely discounted role will effectively become a different associational role. the sufficient Rule 2 (Examples 9 reformulated): (has-contract-with ?target) A (is ?target ibm) A (contract-worth multi- r$$n) A (contract-span about-or-at-least-s-years) A (contract-for ?product) s-$ (support ?product) D. 
Necessary Role A condition plays a necessary role if the disconfirmation (or lack of confirmation, depending on cases) of the condition enables us to refute the conclusion. In classical logic a necessary role would facilitate the inference rule of Modus Tollens. In semantic-rich domains, however, there are two types of necessary roles: the strong necessary role and the weak one. The strong one refers to those conditions whose lack of confirmation suffices to refute the conclusion (the condition doesn’t have to be directly disproved), and the weak necessary role refers to those conditions which must be disproved in order to disprove the conclusion. Example 10 (Strong iVecessury Role) A suspect claiming an alibi needs to have a wimess. In this case, a witness is the strong necessary condition for claiming an alibi. This is because, short of a witness taking the stand (lack of proof), the suspect cannot effectively hold his claim (the claim being refuted). Stated formally, the prosecutor has: (has-witness ?suspect) IS- STRONG-NECESSARY-FOR (has-alibi ?suspect), and he will conclude (NOT (has-alibi ?suspect)) on the basis of (NO (has-witness ?suspect)). (Note that ‘NO’ implies lack of evidence, whereas ‘NOT’ implies a nega- tion.) Example 11 (Weak Necessary Role) The employer may require its employ- ees to demonstrate a job competence in order for them to continue to be employed. Then we have: (competent ?employee) IS-WEAK- NECESSARY-FOR (continue-to-be-employed ?employee). This is because the employer must confirm (NOT (competent ?employee)) in order to determine (NOT (continue-to-be-employed ?employee)); it is not Uncertainty and Expert Systems: AUTOMATED REASONING / 223 sufficient to just have (NO (competent ?employee)). The general form of the rule with a strong necessary condition is: b,Ol- (NO A)s+cC whereas the weak necessary counterpart takes the form: Y m.Ol- (NOT A )wzcC where m as the bpa of F again must be close to 1. In addi- tion, to su m(N0 A)=l-m(A) 2 m(NOT A)!m(A), and m(NOT (NO A))L’=m(A). !i port NO’s and NOTs as difkrent forms of negation we define To support uncertain inference making, we rewrite Example 5 to obtain [c,dl E’+A b $4 - (NO A lrn&C Nl-cb,Ol- E’ + C SNEC Rule 7: PENGUIN(x) v OSTRICH(x) + NOT FLY (x ) coTol Rule 8: OIL -COVERED (x ) v DEAD (x )co~~N0T FLY (x ) [0.7,0.2] and [cdl E’-+A [a,01 - (NOT A )--+cC [&PO1 - E’w&c 1. DUCK(x) then “see Rule 10” 2. “to be added as encountered” Rule 9: FOWL(x) As-x NOT FLY(x) Unless: Rule 3 (Reformulating Exam iltl 10): P. (NO (has-witness ?suspect)) s-+c (NOT (has-alibi ?suspect)) Rule 4 (Reformulating Example 1lJ: 1 &Ol (NOT (competent ?employee)) W-+c (NOT (continued-to-be-employed ?employee)) E. Contrary Role A contrary role is an excluding condition. In other words, the contirmation of this condition will exclude the conclusion. In many cases a contrary con- dition is just the complimentary view of a necessary condition. (The choice is largely a semantic one.) For instance, in Example 11 we could have esta- blished: Rule 5 (See Rule 4): P9,Ol (incompetent ?employee)) co+m (NOT (continued-to-be-employed ?employee)) V THE INCLUSION OF EXCEPTION ROLE’S Inference rules as empirically acquired are often times defeusible (vulner- able) when exceptional situations present themselves. A cliche example is birds can j?y (a &feasible rule) but ostriches cannot (an exception that defeats the rule). These exception conditions may be included as exception roles in the role system. 
Belief functions can then be used to account for plausible exceptions. Defeasible rules have been the focal subject in ‘non-monotonic reasoning,’ e.g. FIcDermott 801 and [Reiter 801. However, none of the non-monotonic logics based on classical Predicate Calculus can express the rule defeasibil- ity as a natural matter of degree (e.g. how likely the rule is to be valid). To remedy the problem, [Rich 831 and [Ginsberg 841 employed likelihood for- malisms to express the belief tendency, but Rich’s Certainty-Factor basis was ad hoc in itself, and Ginsberg seemed to have diffused the tight rule- exception association when he shielded rules from exceptions and represented the latter as retracting meta rules. Also this meta-rule approach appeared to be ad hoc at partial retraction of earlier conclusions. For exam- ple, what is precisely meant by partial retraction? The focus in this study will not be global issues of logic, but the local representation of defeasible rules. To this end, we propose to include an UNLESS clause as the exception role in an inference rule. Then the antecedent infers the consequent in the absence of underlying ‘unless’ clauses. If one of the ‘unless’ condition becomes satisfied (i.e. an excep- tional situation takes place), the default rule is defeated and a new rule will be in place. [0.9,0.021 Rule 6: BIRD (x )mU~xmFLY (x ) Unless: 1. PENGUIN(x) then “see Rule 7” 2.OSTRICH{x) then “see Rule 7” 3.OIL-COVERED(x) then “see Rule 8” 4. DEAD(x) then “see Rule 8” 5. FOWL(x) then “see Rule 9” 6. “to be added as encountered w% 01 For illustrative purpose, suppose BIRD(Slinky) and EDIBLE(Slinky). Sup- pose also that it is not known directly whether ~~~kV$[Slinky) or not, but the following inference can be made: EDIBLE (x ) iji FOWL (x ) Then what can be said about FLY(Slinky), considering the exception predicate FOWL? First, infer Slinky’s liability to fly from Rule 6. Second, infer Slinky’s ina- bility to fly from Rule 9. Third, combine the previous two results and reach the overall conclusion, which [Be1 (FLY (Slinky)), Be1 (NOT FLY (Slinky))] = [0.55, 0.361. The actus calculation goes as follows: Be1 [FLY (Slinky ) I E ‘E “j = Be1 [FLY(Slinky ) I BIRD (Slinky )n(NO FOWL (x))]*Bel [BIRD (x) 1 E’]. Be1 [NO FOWL (Slinky ) I EDIBLE (Slinky )]*Bel [EDIBLE (Slinky ) I E “j + Be1 [FLY (Slinky ) I FOWL (Slinky )] -&l [FOWL (Slinky ) I EDIBLE (Slinky )I.Bel [EDIBLE (Slinky ) I E ‘1 = 0.9.1*(1 - 0.5).1 + 0.2.0.5.1 = 0.55 Similarly, Bel[NOT FLY(Slinky) I E’E”J = O.(X?.l.(l- 0.5j.1 + 0.7.0.5.1 = 0.36 Although exception roles are useful for inference making, including them in the role system is more complicated than other categories of roles. This is partly because the fundamental theory is still being developed (e.g. [Moore 851). Also the dependency-directed backtracking during conclusion retrac- tions presents a complex efficiency issue by itself. VI CONCLUSION The role system manifests the qualitative difference in causations that is often overlooked in numerical likelihood representations. In particular, the auxiliary nature in supportive roles and the overruling nature in exception roles are explicitly represented now. On the other hand, with an extended Dempster-Shafer theory, the scale of belief certainties as well as ignorance can be expressed and propagated* uniformly in the context of individual roles. Study on further usage of the role information during reasoning is underway. 
‘The parallel combination of concluding beliefs represents a different issue, which is not .covered in this paper. See [Yen 851 for alternatives to the independence assumption in the original Dempster’s combining rule. 224 / SCIENCE ACKNOWLEDGEMENT The author is indebted to Professor L. A. Zadeh of UCB for his continuous encouragement. The author also thanks Professor Alice Agogino, Dr. Peter Adlassnig and John Yen of UCB, Dr. Enrique Ruspini and Dr. John Lowrance of SRI for their comments and discussions. Dr. Chris Talbot of Sentry/SchIumberger has helped to prepare this paper. REFERENCES [Baldwin 851 Baldwin, J. F., “Support Logic Programming,” Technical Report No. 65, Information Technology Research Center and Engineering Mathematics Dept., University of Bristol, 1985. [Chamiak 831 Charniak, Eugene, “The Bayesian Basis of Common Sense Medical Diagnosis,” Proc. National Cogerence on Artificial Intelligence, Aug. 83, pp. 70-73. [Cohen 831 Cohen, Paul, Heuristic Reasoning a&out Uncertainty: An Artijiciat Intelligence Approach, PhD dissertation, Dept. of Computer Sci- ence, Stanford University, 1983. [Dempster 671 Dempster, Arthur P., “Upper and Lower Probabilities Induced by a Multivalued Mapping,” Annals of Mathematical Statistics, Vol. 38 (1967), pp. 325339. Duda, Hart, et al 761 Duda, Richard O., Hart, Peter E., and Nilsson, Nils J., “Subjective Bayesian Methods for Rule-Based Inference Systems,” Proc. 1976 National Computer Conference (APIPS Cogerence Proc.), Vol. 45 (1976), pp. 1075-1082. [Ginsberg 841 Ginsberg, Matthew L., “Non-Monotonic Reasoning Using Dempster’s Rule,” Proc. National Conference on Artificial Intelligence, 1984, pp. 126-129. biu 851 Liu, Shao-Hung Gerald, “Knowledge Structures and Evidential Reasoning in Decision Analysis,” Proc. AAAI Workshop on Uncertainty and Probability in Artificial Intelligence, Aug. 1985, pp. 273-282. FrcDermott 803 McDermott, Drew, and Doyle, Jon, “Non-Monotonic Logic I,” Artificial Intelligence, Vol. 13 (1980), pp. 41-72. lJ$ore 851 Moore, Robert C., “Semantical Considerations on Nonmono- tonic Logic,” Artij’kial Intelligence, Vol. 25 (1985), pp. 75-94. [Patil 811 Patil, Ramesh, Szolovits, Peter, and Schwartz, William, “Causal Understanding of Patient Illness in Medical Diagnosis,” Proc. 7th IJCAI, 1981, pp. 893-899. [pearl 851 Pearl, Judea, “A Constraint-Propagation Approach to Probabilis- tic Reasoning,” Proc. AAAI Workshop on Uncertainty and Probability in Artificial intelligence, Aug. 1985, pp. 3142. meiter 801 Reiter, Raymond, “A Logic for Default Reasoning,” Artificial Intelligence, Vol. 13 (1980), pp. 8 1-132. [Rich 831 Rich, E., “Default Reasoning as Likelihood Reasoning,” Proc. National Conference on Artijicial Intelligence, 1983, pp. 348-351. [Shafer 761 Shafer, Glenn, A Mathematical Theory of Evidence (Princeton University Press, Princeton, 1976). [Shortliffe 761 Shortliffe, E.H., Computer Based Medical Consultations: MYCIN (American Elsevier, New York, 1976) [Szolovits 781 Szolovits, Peter, and Pauker, Stephen G., “Categorical and Probabilistic Reasoning in Medical Diagnosis,” Artificial Intelligence, Vol. 11 (1978), pp. 115-144. [Yen 851 Yen, J., “A Model of Evidential Reasoning in a Hierarchical Hypothesis Space,” Tech. Report UCBKSD 86/277, UC Berkeley, 1985. [Zadeh 841 Zadeh, L. A., “Review of Books,” AI Magazine, Fall 1984, pp. 81-83. Uncertainty and Expert Systems: AUTOMATED REASONING / 225
Using Decision Theory to Justify Heuristics Curtis P. Langlotz, Edward H. Shortliffe, Lawrence M. Fagan Medical Computer Science Group Knowledge Systems Laboratory Medical School Office Building Stanford University Medical Center Stanford, California 94305 ABSTRACT We present a method for using decision theory to evaluate the merit of individual situation -> action heuristics. The design of a decision-theoretic approach to the analysis of heuristics is illustrated in the context of a rule from the MYCIN system. Using calculations and plots generated by an automated decision making tool, decision-theoretic insights are shown that are of practical use to the knowledge engineer. The relevance of this approach to previous discussions of heuristics is dlscussed. We suggest that a synthesis of artificial intelligence and decision theory will enhance the ability of expert systems to provide justifications for their decisions, and may increase the problem solving domains in which expert systems can be used. I INTRODUCTION The rule-based expert system [l], [2] is an established and widely used artificial intelligence paradigm. The rules in such expert systems are often described as heuristics, which encode the experiential knowledge of experts for use in decision support systems. The capability of some expert systems has been shown to be comparable to experts (see for example [3], [4], [S]). However, the task of building expert systems, or knowledge engineering, has yet to be characterized in terms that allow the assessment of the merits of individual heuristics. Nevertheless, previous attempts to characterize heuristic rules have led to insights intended to help knowledge engineers craft heuristics that lead to high performance. For example, Clancey [6] asked the question: What kinds of arguments justify rules and what is their relation to a mechanistic model of the domain? By analyzing a rule used in MYCIN [7], he demonstrated that a heuristic can be broken into smaller and smaller inference steps that support it. Lenat [S] also investigated the nature of heuristics, and asked: What is the source of the power of heuristics? He hypothesized that heuristics derive some of their power from regularity and continuity in the world. To illustrate this point, he provided qualitative plots of the power or utility of a heuristic against characteristics of the task dotnain. Smith [9] describes an expert system that explicitly represents justifications for heuristic rules and uses those justifications to guide knowledge base refinement. However, this system makes its decisions about the causes of system errors based on rule type (e.g., definitional, theoretical, statistical, or default), not based on the measures of certainty associated with individual rules. Gaschnig [lo] quantitatively assessed the performance of heuristics used in search, but only by observing repeated trial executions of a search program with different heuristics. *Suppurl for this work was provided b) the hatlurial I,lbrary of Medlc~ne under Grants l-M-04136 and I,M-04316, lhe Kat~onal Science Foundation under Grant IST83-12148 and the Dlvislon of Research Resources under Grant RR-01636. Computing facllilles were provided by the SUMFX-AIM resource under lilH Graul RR-00785, by Ihe Xerox Corporallon, and bj Corning Medical. Dr. Shcrrtliffr is a Henry J. Kaiser Fam~l) Foundation Fxult> Scholar III General Internal Mcd~c~ne. 
When a rule is placed in the knowledge base, often no formal analysis is made to ascertain the power of the rule and the magnitude of its effect on system performance. Until a blinded evaluation study is completed, the system builder must assume that the heuristic suggested by the domain expert is appropriate for most or all cases the system will encounter. But heuristics almost always represent significant tradeoffs between possible costs and benefits, and the appropriateness of a heuristic may therefore often be argued. In order to make a reasoned decision about a heuristic that recommends an action, it is important to explicitly consider both the likelihood and the desirability of the consequences of the action. Consequently, we argue for an analysis of heuristics based on the synthesis of artificial intelligence and decision theory. Decision theory can be used to combine explicitly expressed probabilities (likelihoods) and utilities (desirabilities) to decide between competing plans of action. It is an axiomatized method for making decisions which recommends the course of action that maximizes expected utility. The expected utility of a given plan is expressed as follows: Expected Utility = Li ~(0,) x U(Oi) where ~(0,) is the probability of the ith outcome of executing thl plan, and U(Oi) is the utility of the ith outcome. This concept has been promoted by Savage [ll], who defends subjective probabilities to represent uncertainty, and a utility function to represent preferences. Raiffa [ 121 and Howard [13] both provide a thorough introduction to decision theory. Decision theory has been suggested as an adjunct to planning systems. Jacobs [14] and Coles [ 151 described robot planning systems that used Al techniques to generate plans, coupled with decision-theoretic techniques to compare plans based on costs and risks associated with planning operators. Feldman [16] described a similar framework that was used to solve a more realistic version of the “monkey and bananas” problem. Slagle [17] describes an interactive planning system that uses a predictive model of military damages to rank competing plans for allocation of military resources. We have also described a medical problem that could not be solved without explicit quantification of the uncertainties and tradeoffs involved [18]. We believe that precise definitions of both the application area and the notion of heuristic power or utility can provide important information to the knowledge engineer. For example: 1. How often will a heuristic be incorrect? 2. HOW does system performance change when a heuristic is added? 3. What serves as appropriate support or justification for a heuristic? In an attempt to answer these questions, we first define our notion of a heuristic. Then we show how information generated by a decision analysis tool developed on a Xerox Uncertainty and Expert Systems: AUTOMATED REASONING / 2 1 j From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. 1100-series LISP machine can be used to analyze a particular . What if the infecting organism were resistant to heuristic. Next, we show how our analysis relates to earlier ail drugs except tetracycline? analyses of heuristics. Finally, we discuss the implications of our analysis for expert systems. II THE FORMAL ANALYSIS OF A HEURISTIC . What if the only undesirable bodily change that tetracycline caused was minor intestinal distress? . What if the probability of staining due to tetracycline was only 1 in lOO? 1 in lOOO? 
We will adopt the definition of a heuristic proposed by Lenat [8]. He defines a heuristic as “a piece of knowledge 111 PROBLEM FORMULATION capable of suggesting plausible actions to follow or implausible ones to avoid.” We have chosen to concentrate our discussion on a frequently cited heuristic rule from the MYCIN system [7], shown in Fig. 1. This section describes a decision-theoretic method for representing the tradeoffs that underlq the rule in Fig. 1. When formulating a problem in decision-theoretic terms, three questions must be answered: If: 1) The therapy under consideration is tetracycline 2) The age (in years) of the patient is less than 8 Then: There is strongly suggestive evidence (.8) that tetracycline is not an appropriate therapy for use against the organism Figure I: The MYCIN tetracycline heuristic, slightly simplified for illustration purposes. Fig. 2 shows one possible collection of support know/edge that Clanceq proposed as justification for this heuristic. Let’s analyze this chain of four support rules in more detail. The first three inferences indicate how each event influences the occurrence of the next. But the final inference suggests a decision for action that is based on the previous inferences. No matter how fine the granularity of the reasoning, one rule in the chain will always recommend action based on the situation. That rule represents a compiled plan for action that will have wide ranging consequences. For example, avoiding the administration of tetracycline has the advantage of essentially eliminating the possibility of stained teeth, but it has the disadvantage of creating the need for another drug which may have a weaker therapeutic effect and other undesirable side effects. In fact, a widely used physician’s reference book [19] states that tetracycline should not be used in children under age 8 unless other drugs are not likely to be effective or are contraindicated. In other words, there is a tradeoff between the undesirability of possible the desirability of increased effectiveness. staining and tetracycline in youngster => chelation of the drug in growing bones => teeth discoloration => undesirable body change => don’t administer tetracycline Figure 2: A justification for the tetracycline heuristic in MYCIN from [6]. While these tradeoffs are important when deciding whether or not to recommend tetracycline, they have relevance in other settings. For example, they are essential to an intuitive justification of the conclusion not to recommend tetracycline. A justification for such a decision might be: “Although tetracycline is more likely to cure this infection, that is outweighed by the fact that tetracycline is likely to cause substantial dental staining.” These tradeoffs are important when deciding whether or not to include the tetracycline rule in an expert system. Although the addition of a certainty factor (in this case 0.8) is designed to allow other heuristics to override the recommendations of this one, it does not explicitly represent the circumstances under which the heuristic should be invalidated. Because the tradeoffs are not represented explicitly, the rule cannot recognize the characteristics of an -unusual decision situation and sometimes select tetracycline in spite of possible cosmetic problems, just as an expert would. For example consider the following cases, for which the value of the tetracycline heuristic might be questioned: 1. What alternative plans are available? 2. What might occur if each of those actions were carried out? 
We will examine a specific case in which the tetracycline rule might apply, and show how the results of the analysis can be generalized for use in building expert systems. Our problem will be constrained by considering only two alternative plans, and only a few possible outcomes of those plans. Although not shown here, the process of finding a small number of candidate plans can be automated [20].

The case concerns an 8 year old male who has a urethral discharge (an indication of possible urethral infection) but in whom cultures have shown no evidence of bacterial infection. In such cases the urethritis may be caused by organisms that cannot be cultured easily (non-specific urethritis, or NSU) or it might be related to a non-infectious process causing urethral inflammation. In adults with such symptoms, it is common to treat with tetracycline since it is usually effective in NSU and can help assure relief from discomfort. In a child, however, the risk of tetracycline, as summarized above, cannot be totally ignored. The specific question that must be decided is: Should this young patient be treated with tetracycline, or with the second-choice drug, erythromycin? Erythromycin, unlike tetracycline, has no significant side effects except occasional nausea, but has the disadvantage that it is slightly less likely to cure the NSU.

To formulate a decision-theoretic representation of the problem, first the available actions must be enumerated: in this case, to administer tetracycline or to administer erythromycin. Then the consequences of each action must be explored. In this case, if either action is performed, there are two possible scenarios to consider: the patient either has NSU or has a non-infectious urethritis. If the urethritis is infectious, then tetracycline will be more likely to cure the infection than erythromycin. If it is non-infectious, then the drugs will have no therapeutic effect (except for a small placebo effect that is the same for both drugs). Finally, the undesirable side effects of tetracycline must be considered. Regardless of the outcome of tetracycline therapy, there is a definite chance that dental staining will occur. In summary, there are four pertinent outcomes that should be considered in delineating treatment options: CURE/NO STAINING, NO CURE/STAINING, CURE/STAINING, NO CURE/NO STAINING.

Once the decision options and their possible consequences have been enumerated, decision analysts conventionally represent the problem as a decision tree*. In Fig. 3 we see the tree that represents the decision problem described above. Each path through the decision tree represents one possible combination of actions and consequences that might occur. For example, the top branch represents the following chain of events: the patient had an infectious urethritis, was given tetracycline, which cured the disease, but dental staining resulted.

*Although decision trees are still the predominant representation convention, some members of the decision analytic community are increasingly attracted to an alternative representation called influence diagrams [21]. The intuitive, modular characteristics of influence diagrams are similar to the AI representation techniques from which they are derived [22].
Figure 3: A decision tree that represents the decision between tetracycline and erythromycin for treatment of possible NSU. Square nodes are decision nodes. Branches emanating from decision nodes represent actions among which a choice must be made. The remaining nodes are chance nodes, whose branches represent all of the possible outcomes that might occur. The tree is labeled with the probabilities and utilities as assessed from a physician. TCN = tetracycline, ERYTHRO = erythromycin, NSU = non-specific urethritis.

IV PROBABILITY AND UTILITY ASSESSMENT

For many tradeoffs, there is a point at which a small chance of a highly undesirable outcome will be equally preferred to a high likelihood of a mildly undesirable outcome. The point where this equivalence occurs may be dependent on precise expert assessments of probability and utility. Although these assessments may be subject to some inaccuracies and biases [23], we will see in the next section that we need not utilize the precise values of these numbers to justify a decision. We need only show that large variations from the assessed value will not affect the decision. To assess the relevant probabilities, the following questions will be asked in the context of the particular patient:

1. What is the probability that tetracycline will cure non-specific urethritis (NSU)?
2. What is the probability that erythromycin will cure NSU?
3. What is the probability that dental staining will occur if tetracycline is administered to this patient?
4. What is the probability that this patient has an infectious NSU?
5. What is the probability that either drug will cure a non-infectious urethritis (through a placebo effect)?

To assess the utility of each of the four outcomes, explicit quantitative comparisons must be made among them*. The standard gamble is used to assess the utility of outcomes by converting a utility question into a probability question. Since utilities are relative quantities, it is conventional to assign the worst outcome a utility of 0.0 and the best outcome a utility of 1.0. Outcomes whose utilities are intermediate are assessed by asking the expert what gamble between a bad outcome and a good one would be equally preferable to the certainty of the intermediate outcome. The response to this question uniquely determines the relative desirability of the intermediate outcome. For example, if the expert were indifferent between guaranteed NO CURE/NO STAINING and a gamble with 1 chance in 4 of NO CURE/STAINING (utility = 0.0) and 3 chances in 4 of CURE/NO STAINING (utility = 1.0), then NO CURE/NO STAINING can be assigned a utility of 0.75. An analogous standard gamble question can be devised to find the utility of the CURE/STAINING outcome.

Fig. 3 shows the values of the parameters of the model as assessed from a physician. For this decision tree, the expected utility of administering tetracycline is 0.63, and the expected utility of administering erythromycin is 0.83**. Therefore, it would seem that in this case erythromycin is "better" than tetracycline, consistent with the original heuristic statement shown in Fig. 1. But how certain should we be of this conclusion? What does a difference of 0.2 utility units mean?
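To make the tree evaluation concrete, the short sketch below folds the chance nodes of Fig. 3 into an expected utility for each plan. The numeric labels of Fig. 3 are only partly legible in this transcription, so the parameter values and the CURE/STAINING utility used here are assumptions, chosen so that they reproduce the 0.63 and 0.83 quoted in the text; the code and names are ours, not the paper's.

# Illustrative sketch (assumed parameters): expected utilities for the
# tetracycline vs. erythromycin decision tree of Fig. 3.
P_NSU = 0.25          # P(the urethritis is infectious)
P_CURE_TCN = 0.90     # P(cure | NSU, tetracycline)
P_CURE_ERY = 0.60     # P(cure | NSU, erythromycin)
P_PLACEBO = 0.23      # P(cure | non-infectious, either drug)
P_STAIN = 0.30        # P(dental staining | tetracycline)

UTILITY = {            # standard-gamble utilities of the four outcomes
    ("cure", "no stain"): 1.00,
    ("no cure", "no stain"): 0.75,
    ("cure", "stain"): 0.30,   # assumed; not stated explicitly in the text
    ("no cure", "stain"): 0.00,
}

def expected_utility(p_cure_given_nsu, p_stain):
    """Fold the chance nodes of one treatment branch into an expected utility."""
    p_cure = P_NSU * p_cure_given_nsu + (1 - P_NSU) * P_PLACEBO
    total = 0.0
    for cure, p_c in (("cure", p_cure), ("no cure", 1 - p_cure)):
        for stain, p_s in (("stain", p_stain), ("no stain", 1 - p_stain)):
            total += p_c * p_s * UTILITY[(cure, stain)]
    return total

print(round(expected_utility(P_CURE_TCN, P_STAIN), 2))   # 0.63 for tetracycline
print(round(expected_utility(P_CURE_ERY, 0.0), 2))       # 0.83 for erythromycin

Under these assumed values the 0.2 utility-unit gap quoted above falls out directly.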
Since there is uncertainty about the values of the probability and utility parameters even when considering an individual patient, many object that probability assessments require of the expert a level of certainty that cannot, in reality, be obtained. Additional uncertainty is introduced when generalizing to an entire set of cases to which an expert system will be exposed. To address these concerns, decision-theoretic techniques have been devised to answer the following question: if the value were different than the one provided by the expert, how likely would it be to affect the decision? The principal tool for this purpose, sensitivity analysis, is described in the next section.

V SENSITIVITY ANALYSIS

Identifying the variables to which a heuristic is sensitive can help determine the merit of the heuristic, can help provide an adequate justification for the heuristic, and can help direct ongoing knowledge acquisition efforts to those areas where further investigation is needed. To quantitatively assess the effect of changes in a variable, one-way sensitivity analysis is frequently employed. It determines how much one parameter in the decision model must vary before the optimal decision changes. Consider, for example, how the utility of administering each drug might change with changes in the probability of dental staining. A plot generated by such an analysis is shown in Fig. 4. The point at which the utilities of the plans are the same is called the threshold value. In this case, the threshold occurs when the probability of dental staining is equal to 0.025, quite a distance from the original assessed value of 0.3. If the threshold value were nearly equal to the assessed value, further analysis or data collection may be necessary to reach a decision. The frequency with which a particular decision will be optimal depends in part on the chance that such a parameter will vary beyond the threshold. If the parameter was not known with great certainty, or if it varied considerably from case to case, the second choice might be the optimal choice in a substantial minority of instances. It is for this reason that decision analysts assess the approximate probability distributions of all sensitive parameters.

Figure 4: The results of a one-way sensitivity analysis of the tree shown in Fig. 3. The expected utility of each decision option (on the vertical axis) is plotted against the likelihood of dental staining due to tetracycline. T = tetracycline, E = erythromycin.

Figure 5: The results of a Monte Carlo simulation of the difference in expected utility between erythromycin and tetracycline (horizontal axis). The frequency with which each expected utility value occurred is plotted on the vertical axis.

*There are two exceptions to this statement. First, in some simple problems such as the one we consider here, the dominance of one alternative can be proven solely from qualitative assertions about the relative utilities of the outcomes [24]. Second, since the number of outcomes that must be assessed grows rapidly with the size of the problem, not all these assessments are actually made in problems more complex than the one we consider here. Instead, decision analysts look for independent measures of utility that can be combined in an additive utility model, and make assessments of the parameters of that model.

**Note that these expected utility values are not certainty factors.
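The threshold behind Fig. 4 can be located numerically. The sketch below is illustrative only; it reuses the assumed parameter values of the previous sketch (including the assumed CURE/STAINING utility) and simply sweeps the staining probability until the two plans tie. Since erythromycin's expected utility does not depend on staining, the threshold is where the tetracycline curve crosses that constant.

# Illustrative sketch (assumed values, as in the expected-utility example):
# one-way sensitivity analysis on P(dental staining | tetracycline).
P_CURE_TCN_OVERALL = 0.25 * 0.90 + 0.75 * 0.23   # overall cure prob., tetracycline
P_CURE_ERY_OVERALL = 0.25 * 0.60 + 0.75 * 0.23   # overall cure prob., erythromycin
U_CURE_NOSTAIN, U_NOCURE_NOSTAIN = 1.00, 0.75
U_CURE_STAIN, U_NOCURE_STAIN = 0.30, 0.00        # CURE/STAINING utility is assumed

def eu_tetracycline(p_stain):
    """Expected utility of tetracycline as a function of the staining probability."""
    return (P_CURE_TCN_OVERALL * (p_stain * U_CURE_STAIN + (1 - p_stain) * U_CURE_NOSTAIN)
            + (1 - P_CURE_TCN_OVERALL) * (p_stain * U_NOCURE_STAIN + (1 - p_stain) * U_NOCURE_NOSTAIN))

EU_ERYTHROMYCIN = (P_CURE_ERY_OVERALL * U_CURE_NOSTAIN
                   + (1 - P_CURE_ERY_OVERALL) * U_NOCURE_NOSTAIN)   # no staining branch

# Sweep the staining probability and report where the two plans tie.
threshold = min((i / 10000 for i in range(10001)),
                key=lambda p: abs(eu_tetracycline(p) - EU_ERYTHROMYCIN))
print(round(threshold, 3))   # about 0.026, close to the 0.025 reported in the text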
For example, the expert can be asked to specify a range in which the value can be expected to fall half the time. This specifies an approximate distribution for the parameter [25]. This distribution represents the state of the expert's knowledge about the parameter. Once such an assessment has been made, it is straightforward to find the probability that the value will fall beyond the threshold (by integrating the distribution up to that point). According to the assessed distribution of the probability of dental staining*, a value beyond the threshold occurs in less than 1 in 100,000 patients. However, this value is only a lower bound on the probability of error, since a one-way sensitivity analysis assumes that only one variable at a time deviates from its mean value. It is possible that interactions between variables could cause substantially greater errors that would remain undetected by one-way sensitivity analyses. To address these concerns, more comprehensive sensitivity analyses have been developed, such as multi-way and Monte Carlo sensitivity analyses [26]. The Monte Carlo sensitivity analysis, in particular, provides an important metric for the evaluation of a heuristic. In Monte Carlo analysis, a value is randomly selected from the distribution of each relevant parameter, and the expected utility of the decision is computed for that random set of parameter values. This process is repeated many times to obtain an estimate of the distribution of the result. Fig. 5 shows the results of a Monte Carlo simulation of the difference between the two competing alternatives. From this distribution, a number of useful quantities can be obtained. Since the figure shows the distribution of the difference between erythromycin and tetracycline, any negative value represents a set of parameter values for which tetracycline would be optimal (in direct contradiction to the original heuristic). The proportion of negative values represents the error rate of the heuristic and can serve as a useful indicator of the power of the heuristic.

*The distribution is not shown here. The knowledge was represented by a beta distribution with parameters R = 6 and N = 20. There are important theoretical reasons for selecting beta distributions, but these will not be presented here.

VI IMPLICATIONS FOR EXPERT SYSTEMS

Although assessing the quantities for a decision-theoretic analysis requires extra effort (in this case, seven quantities must be assessed), the required effort yields substantial advantages. For example, both the knowledge engineer and the domain expert are forced to be explicit about the population of cases for which the system is designed. This allows the identification of those cases for which the heuristic may not be useful, and the quantification of the expected change in system performance. We have shown that it might indeed be appropriate in some cases to administer tetracycline to a young child. However, Clancey's analysis leaves to intuition the notion that the undesirable bodily changes caused by tetracycline are sufficiently severe to outweigh the increased effectiveness of tetracycline. He makes explicit the causal chain of reasoning that indicates an undesirable bodily change may take place, but does not explicitly represent the tradeoffs between that undesirable change and the possibility of the poor consequences of not being treated with tetracycline. As we discussed in section II, there are several possible scenarios in which the chain of rules in Fig. 2 might not justify the heuristic.
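A Monte Carlo sensitivity analysis of this sort is straightforward to sketch. In the code below, only the beta distribution for the staining probability comes from the footnote above (read as 6 "successes" in 20 trials, i.e., Beta(6, 14), which is itself an interpretation); the distributions for every other parameter, and the CURE/STAINING utility, are illustrative assumptions centered on the point values used earlier, not values from the paper.

import random

# Illustrative sketch: Monte Carlo distribution of EU(erythromycin) - EU(tetracycline).
# Only the Beta(6, 14) for staining is taken from the paper's footnote; every other
# distribution below is an assumption centered on the point values used earlier.
U = {("cure", False): 1.00, ("no cure", False): 0.75,
     ("cure", True): 0.30, ("no cure", True): 0.00}

def eu(p_cure, p_stain):
    return sum(p * q * U[(c, s)]
               for c, p in (("cure", p_cure), ("no cure", 1 - p_cure))
               for s, q in ((True, p_stain), (False, 1 - p_stain)))

def one_sample(rng):
    p_stain = rng.betavariate(6, 14)        # from the footnote: R = 6, N = 20
    p_nsu = rng.betavariate(5, 15)          # assumed, mean 0.25
    p_cure_tcn = rng.betavariate(18, 2)     # assumed, mean 0.90
    p_cure_ery = rng.betavariate(12, 8)     # assumed, mean 0.60
    p_placebo = rng.betavariate(4.6, 15.4)  # assumed, mean 0.23
    cure_tcn = p_nsu * p_cure_tcn + (1 - p_nsu) * p_placebo
    cure_ery = p_nsu * p_cure_ery + (1 - p_nsu) * p_placebo
    return eu(cure_ery, 0.0) - eu(cure_tcn, p_stain)   # > 0 favors erythromycin

rng = random.Random(0)
diffs = [one_sample(rng) for _ in range(100_000)]
error_rate = sum(d < 0 for d in diffs) / len(diffs)    # fraction favoring tetracycline
print(round(sum(diffs) / len(diffs), 3), round(error_rate, 4))

The fraction of negative differences is the heuristic's error rate in the sense described above; a histogram of diffs corresponds to Fig. 5.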
Why, then, was MYCIN so successful? For a case similar to the ones addressed by MYCIN, the results of the Monte Carlo analysis indicate it is highly unlikely that a given patient would be better off with tetracycline. Furthermore, MYCIN was evaluated by comparing it to experts who also may choose to use the tetracycline heuristic for decision making, even though it does not always lead to the optimal decision. In any case, since other drugs are often as effective, any possible error would not be serious. These features may not be present in other, less forgiving problem-solving settings (e.g., aminoglycoside antibiotics are frequently used to treat an infection with the organism pseudomonas, despite a high chance of nephrotoxicity).

VII CONCLUSION

We have demonstrated a decision-theoretic approach to the analysis of heuristics. The informational needs of this analysis technique can be provided through a process similar to conventional knowledge engineering. The concise fashion in which the problem is stated, together with the extra information obtained in the knowledge acquisition process, supplies tools for analyzing the performance of an individual heuristic. This decision-theoretic approach may help to augment the capabilities of expert systems.

We recognize that for complex problems, a decision-theoretic analysis may be expensive and difficult. But when uncertainties and tradeoffs are dominant features of a decision problem, they cannot be captured in a single heuristic, nor can they be captured explicitly by multiple heuristics (with associated measures of certainty). In combining evidence from rules as if they are modular entities that do not affect the performance of the remaining rules, the implicit assumption is made that these rules are probabilistically independent [27]. Because decision analysis makes explicit the variables on which the success of each heuristic depends, it indicates whether assumptions of modularity are being met. Violating the modularity assumption may have serious implications for system performance [28].

We envision a system where each situation -> action heuristic is justified by decision-theoretic knowledge. This will allow the knowledge engineer to estimate the expected gain in system performance when a complete decision analysis is used in place of a simple heuristic. An informed decision can be made between the benefits of the computational economy of heuristics and the possible costs of their computational inaccuracies. Decision theory represents an important tool that should be considered by expert system builders. Used in conjunction with heuristic techniques, the decision-theoretic approach not only provides a sound basis on which to base knowledge engineering decisions, but also may enhance the ability of a system to explain its reasoning and to solve problems in which the explicit consideration of tradeoffs is essential.

ACKNOWLEDGEMENTS

David Heckerman, Eric Horvitz, and David Wilkins gave helpful comments on an earlier draft of this paper.

REFERENCES

1. Buchanan, B. G., and Shortliffe, E. H., eds. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, Reading, Mass., 1984. 2. Hayes-Roth, F., Waterman, D. and Lenat, D. (eds.). Building Expert Systems. Addison-Wesley, New York, 1983. 3. Yu, V. L., Fagan, L. M., Wraith, S. M., et al. Antimicrobial Selection By A Computer: A Blinded Evaluation By Infectious Disease Experts.
Journal of the American Medical Association 242, 12 (19791, 1279-1282. 4. Miller, R. A., Pople, H. E., and Myers, J. D. INTERNIST-l, An Experimental Computer-Based Diagnostic Consul tan t for General In ternal Medicine. New England Journal of Medicine 307, 8 (1982), 468-476. 5. Hickam, D. H., Shortliffe, E. H., Bischoff, M. B., Scott, A. C., Jacobs, C. D. A Study of the Treatment Advice of a Computer-based Cancer Chemotherapy Protocol Advisor. Annals of Jnternai Medicine IO3 (1985), 928-936. 6. Clancey, W. J. The Epistemology of a Rule-Based Expert Sys tern -- A Framework for Explanation. Artificial Intelligence 20, 3 (1983), 215-251. 7. Shortliffe, E. H.. Computer-Based Medical Consultations: MYC/N. Elsevier/North Holland, New York, 1976. 8. Lenat, D. B. The Nature of Heuristics. Artificial Intelligence /9 (1982), 189-2 19. 9. Smith, R. G., Winston, H. A., Mitchell, T. M., Buchanan, B. G. Representation and Use of Explicit Justifications for Knowledge Base Refinement. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, IJCAI-85, 1985, pp. 673-680. 10. Gaschnig, J. Exactly How Good Are Heuristics?: Toward a Realistic Predictive Theory of Best-First Search. Proceedings of IJCAI-77, International Joint Conference on Artificial Intelligence, 1977, pp. 434-441. 11. Savage, L. J.. The Foundations of Statistics. Dover, New York, 1972. 12. Raiffa, H.. Decision Analysis: Introductory Lectures on Choice Under Uncertainty. Addison- Wesley, Reading, Mass., 1968. 13. Howard, R. A., Matheson, J. E.. Readings on the Principles and Applications of Decision Analysis. Strategic Decisions Group, Menlo Park, CA, 1984. 2nd Edition. 14. Jacobs, W. and Keifer, M. Robot Decisions Based on Maximizing Utility. Proceedings of the Third International Joint Conference on Artificial Intelligence, IJCAI-73, 1973, pp. 402-411. 15. Coles, L. S., Robb, A. M., Sinclair, P. L., Smith, M. H., Sobek, R. R. Decision Analysis for an Experimental Robot with Unreliable Sensors. Proceedings of the Fourth International Joint Conference on Artificial Intelligence, IJCAI-75, 1975, pp. 749-757. 16. Feldman, J. A., Sproull, R. F. Decision Theory and Artificial Intelligence II: The Hungrey Monkey. Cognitive Science I (1975), 158-192. 17. Slagle, J. R. and Hamburger, H. An Expert System for a Resource Allocation Problem. Communications of the Association for Computing Machinery 28, 9 (1985), 994-1004. 18. Langlotz, C. P., Fagan, L. M., Shortliffe, E. H. Overcoming Limitations of Artificial Intelligence Planning Techniques. Proceedings of the American Association for Medical Systems and Informatics Congress 1986, Anahiem, California, 1986, pp. 92-96. Also Technical Report KSL-85-25, Knowledge Sys terns Laboratory, Stanford University 19. Huff, Barbara B. (ed.). Physicians Desk Reference. Medical Economics Company, Inc., Oradell, New Jersey, 1985. 20. Langlotz, C., Fagan, L., Tu, S., Williams, J., Sikic, B. ONYX: An Architecture for Planning in Uncertain Environments. Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI-85, Los Angeles, Aug., 1985. 21. Howard, R. A., Matheson, J. E. Influence Diagrams. In Readings on the Principles and Applications of Decision Analysis, Howard, R. A., Matheson, J. E., Eds., Strategic Decisions Group, Menlo Park, CA, 1981, ch. 37, , pp. 721-762. 22. Duda, R., Hart, P., and Nilsson, N. Subjective Bayesian Methods for Rule-based Inference Systems. Proceedings 1976 National Computer Conference, AFIPS, 1976, pp. 1075-1082. 23. 
Tversky, A., Kahneman, D. Judgment Under Uncertainty: Heuristics and Biases. Science 185 (1974), 1124-1131. 24. Wellman, M. P. Reasoning About Preference Models. Tech. Rept. MIT/LCS/TR-340, Laboratory for Computer Science, Massachusetts Institute of Technology, May, 1985. 25. Pratt, J. W., Raiffa, H., and Schlaifer, R. Introduction to Statistical Decision Theory (Preliminary Edition). McGraw-Hill, New York, 1965. 26. Doubilet, P., Begg, C. B., Weinstein, M. C., Braun, P., and McNeil, B. J. Probabilistic Sensitivity Analysis Using Monte Carlo Simulation. Medical Decision Making 5, 2 (1985), 157-177. 27. Heckerman, D.E. Probabilistic Interpretations for MYCIN's Certainty Factors. In Uncertainty in Artificial Intelligence, North Holland, New York, 1986. 28. Horvitz, E. J., and Heckerman, D. E. The Inconsistent Use of Measures of Certainty in Artificial Intelligence Research. In Uncertainty in Artificial Intelligence, North Holland, New York, 1986.
A Framework for Comparing Alternative Formalisms for Plausible Reasoning Eric J. Horvitz, David E. Heckerman, Curtis P. Langlotz Medical Computer Science Group Knowledge Systems Laboratory Departments of Medicine and Computer Science Stanford University Stanford, California 94305 ABSTRACT We present a logical relationship between a small number of intuitive properties for measures of belief and the axioms of probability theory. The relationship was first demonstrated several decades ago but has remained obscure. We introduce the proof and discuss its relevance to research on reasoning under uncertainty in artificial intelligence. In particular, we demonstrate that the logical relationship can facilitate the identification of differences among alternative plausible reasoning methodologies. Finally, we make use of the relationship to examine popular non-probabilistic strategies. I INTRODUCTION As artificial intelligence research has extended beyond deterministic problems, methodologies for reasoning under uncertainty or plausible reasoning have become increasingly ten tral. Several competing approaches to reasoning in complex and uncertain settings have been formulated. These include probability [I], fuzzy logic [2], Dempster-Shafer theory [3], certainty factors [4], and multi-valued logics [S]. There has been debate on the theoretical and pragmatic benefits and disadvantages of these alternative strategies. A particular focus of discussion has centered around the adequacy of probability theory for handling reasoning under uncertainty [6]. While probabilists have defended the use of probability, others have cited benefits achieved through the use of non-probabilistic formalisms [2, 3, 7, 41. Such discussion has been heightened in recent years as the demand has grown for applicable methodologies for reasoning under uncertainty. In this paper, we discuss the ramifications of a proof showing that the axioms of probability theory follow logically from a set of simple properties. We shall reformulate the work of R.T. Cox, a physicist interested in reasoning under uncertainty. Cox demonstrated, over forty years ago, that the axioms of probability theory are a necessary consequence of intuitive properties of measures of belief [S]. That is, if a set of simple properties are assumed, the axioms of probability theory must be accepted. Even though others, including Jaynes [9] and Tribus [lo] have since demonstrated similar proofs, the work has remained obscure. We think it is important that the artificial intelligence community become familiar with Cox’s result. After clarifying the focus of this paper, we will present fundamental properties that Cox and Jaynes have asserted as necessary for any measure of belief. We will then discuss the relevance of the proof to current discussions within the artificial intelligence community on the use of alternative formalisms for plausible reasoning. Finally, we will describe how the proof can serve as a framework for analyzing and communicating differences about alternative methodologies for plausible reasoning. We will use the framework to critique *This work was suppoltrd in part by the Josiah Macy. Jr. Foundation, the Henry J. Kaiser Family Foundation, and the Ford Aerospace Corporation. Computing facilities were provided by the SUMFX-AIM resource under NIH grant RR-00785. the non-probabilistic methods of fuzzy logic, the Dempster- Shafer theory of belief functions, and the MYCIN certainty factor model. 
II THE LIMITS OF BELIEF ENTAILMENT We intend to present a useful perspective on methodologies for the entailment of belief. We use the phrase belief entailment to refer to the consistent assignment of measures of belief to propositions, in the context of established belief. Belief entailment schemes, such as the MYCIN certainty factor model, fuzzy logic, and probability theory dictate the belief in Boolean combinations of propositions given measures of belief in component propositions. Entailment schemes also provide a mechanism whereby beliefs can be updated as new information becomes available. Some have rightly pointed out that theories of belief entailment do not capture the rich semantics of plausible reasoning [7]. We stress that such methods are, indeed, only intended for the relatively simple task of the consistent assignment of measures of belief. We believe that belief entailment should be distinguished from the more encompassing task of plausible reasoning. It is useful to decompose the problem of reasoning under uncertainty in to three distinct components: problem formulation, initial belief assignment, and belief entailment. We use the term problem formulation to refer to the task of constructing the plausible reasoning problem. This consists of the process of enumerating important propositions as well as relations among propositions. The initial assignment of belief requires the direct assessment of belief or some procedure for constructing belief. Belief entailment occurs after a problem is formulated and an initial assignment of belief is completed. Belief entailment methodologies are relatively well- developed. For example, there are a number of different axiomatic schemes to choose from. In contrast, aspects of problem formulation and belief construction currently pose significant challenges for artificial intelligence research. Problem formulation has proven to be particularly difficult; there has been continuing debate as to whether or not an axiomatic theory for problem formulation is possible at all [ll, 12, 131. From this point on, we shall explicitly distill away the problems and issues concerning problem formulation in our discussion of alternative methods for reasoning under uncertainty. III FUNDAMENTAL PROPERTIES OF BELIEF We now turn to the intuitrve basis for probability theory. We shall assert a set of fundamental properties for measures of belief. The intuitive basis is a reformulation of the properties asserted by Cox, Jaynes, and Tribus as being essential for any measure of belief that could vary between truth and falsehood. We have attempted to make explicit all the properties used in the classic proof, including those that were not emphasized in the original work. We will enumerate 210 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. seven properties for measures of belief and name each for presented in Tribus [lo]. reference. Like many artificial intel ligence researchers, Cox reasoned about the nroblem of uncertaintv from a deductive perspective. He sought essential properties required of a measure that remesen ted degree of belief in the truth of a Another assertion is that belief in the negation of a proposition Q, denoted -Q, should be determined by the belief in the proposition itself. Formally, there should be a continuous monotonically decreasing function G such that -Qle = G(Qle). Boolean proposition, or proUposi tions combination rules of Boolean algebra. 
created through the We refer to this as the complementarity property. The first nronertv in our formulation asserts that A final property focuses on logically equivalent propositions. Consider two propositions, Q and R where it is possible to show that Q logically implies R and vice-versa. In this case, we assert that Qle q Rle for any prior information e. In other words, if two propositions have the same truth value, an individual should believe each of them with equal conviction. We term this the consistency property. In summary, we have presented seven fundamental properties for continuous measures of belief in the truth of propositions. We have termed these properties: 1. Clarity: Propositions should be well-defined. propositions td w’hic6 belief can be assigned must be well- defined. That is, propositions must be defined precisely enough so that it would be theoretically possible to determine whether a proposition is indeed true or false. The intention of this property is captured by the notion that a proposition should be defined clearly enough so that a n omniscient being (often referred to in the decision analysis literature as a clairvoyant) could determine its truth or falsehood. We shall refer tb this requirement as the clarity property. A second assertion is that measures of degree of belief in the truth of a uroDosition should be able to vary continuously between values’of ‘certain truth and certain falsehood and that the continuum of belief can be represented by a single real 2. Scalar continuity: A single real number is both necessary and sufficient for representing a degree of belief. number. We refer to the use of’ a single real number to represent continuous measures of belief as the scalar continuity property. A third assertion in our formulation is that it is oossible to assign a degree of belief to any proposition which is precisely defined. We refer to this property as the completeness 3. Completeness: A degree of belief can be assigned to any well-defined proposition. 4. Context dependency: The belief assigned to a proposition can depend on the belief in other property. UDon what might a degree of belief depend? An propositions. Hypothetical conditioning: There exists some function that allows the belief in a conjunction of propositions to be calculated from the belief in one proposition and the belief in the other proposition given that the first proposition is true. Complementarity: The belief in the negation of a proposition is a monotonically decreasing function of the belief in the proposition itself. indcvidual’s or computer Dcogram’s degree of -belief in a proposition should. - of c&r&. depend on the particular . . ProPosition under consideration. I6 addition, the degree of bel ikf in a particular proposition can depend upon knowledge about other nrouosi tions. We refer to this as the context dependency piop’erty. We shall use the term Qle to represent the degree of belief in proposition Q by an individual with background knowledge e. The background knowledge e refers to information relevant to the belief in Q that is assumed or believed to be true. In exploring the dependency of belief in one proposition on another, Cox specifically focused on the belief in the e . Consistency: There will be equal belief in propositions that have the same truth value. conjunction of two propositions given belief in each proposition. He asserted that the belief in the proposition QR should be related to the belief in Q alone as well as to the belief in R given that Q is true. 
That is, the belief in an event of interest should depend on one’s belief in the event given the truth of some conditioning event as well as the degree of belief in the conditioning event itself. Formally, we asiert that measures of belief shduld have the property that there exists some function F such that IV A LOGICAL MAPPING As mentioned above, Cox and others have demonstrated that the above properties logically necessitate the axioms of probability theory. According to the proof, if one accepts the above intuitive properties, one must, accept the axioms of probability. More precisely, it can be shown that if the intuitive properties of belief are assumed, there exists a continuous monotonic function (,I such that QRle = F(Qle, RIQe). (1) 0 5 b(Qle) 15 1 The function is asserted to be continuous and monotonically increasing in both arguments when the other is held constant. o(TRUEle) = 1 We refer to the -above property as the hypothetical conditioning property. often refefred to as hypothetical reasoning. This property is related to what is Individuals o(Qle) + o(-Qle) = 1 commonly assign belief to events conditioned on the truth of other events. This property may be viewed as a specialization of context dependency. p(QRle) = o(Qle) . @lQe) These relations are the axioms of probability theory as they are commonly formulated. That is, u(Qle) satisfies the axioms of probability. Given the above fundamental properties, the onfy measure of belief in the truth of proposition Q in light of evidence E must be the probability of Q given E, written p(QIE) or some monotonic transformation of this quantity. Bayes’ theorem follows directly from the last axiom above. Although the hypothetical conditioning property was stated bv Cox. it can actually be Droved from a weaker assumption about ;he relationshid of belief in the conjunction of- two propositions to belief in the individual component propositions. The proof considers functions of belief in two propositions that could generate a measure of belief in the conjunction of the propositions. Alternative arguments are eliminated based on contradiction and symmetry, leaving only the hypothetical conditioning form. This work is elegantly The proof of the necessary relationship between the intuiti ve properties and the axioms of probabil ity theory is Uncertainty and Expert Systems: AUTOMATED REASONING / 2 11 based on an analysis of solutions for the functional forms [14] implied by the intuitive properties. We recommend the referenced versions of the proof to the reader. V RELEVANCE OF THE MAPPING TO AI RESEARCH The logical mapping relating intuitive properties to the axioms of probability has important implications for artificial intelligence research. In the context of the mapping, if one subscribes to the simple intuitive properties of belief in systems that reason under uncertainty, one thereby agrees that the axioms of probability are theoretically sound for capturing all aspects of belief entailment; arguments for alternative entailment schemes based on theoretical or pragmatic considerations must involve the violation or modification of one or more of the enumerated properties. In addition to serving as a proof of logical necessity between the intuitive properties and the axioms of probability theory, the Cox result can provide a useful perspective on the differences between alternative entailment methodologies. 
Investigation of differences between competing methods may be hampered when the formalisms are defined with axiomatic systems that are difficult to compare. As an example, the most common axiomatization of Dempster’s evidence [15] is in a theory of form not particularly suited for comparison with the axioms of probability theory. As we will see in section VII, moving discussion into the realm of intuitive properties can highlight the fundamental differences between alternative belief entailment schemes. The mapping can be especially helpful in identifying the basis of possible dissatisfaction with probability theory. An individual, harboring ill-defined dissatisfaction with probability theory, might be able to identify the sources of his uneasiness at the level of the intuitive properties. VI A FRAMEWORK FOR COMPARING ALTERNATIVES Cox’s proof of a mapping between a set of intuitive properties and probability theory can serve as an integrative framework for identifying differences among alternative belief entailment schemes. We believe that the set of intuitive properties are so basic as to be relevant to any belief entailment scheme; the properties, or close analogs of them, were undoubtedly addressed in the creation of the methodologies. Ascertaining the status of each of the fundamental properties in a non-probabilistic methodology is usually straightforward. How can we critique alternatives of probability in terms of the intuitive properties ? It is useful to carefully identify the status of the seven intuitive properties in each entailment methodology. In most cases, the spirit of a non-probabilistic methodology can be captured by identifying a fundamental dissimilarity with one or two of the intuitive properties of probability theory. Although such a difference will often have the side effect of invalidating other intuitive properties, it may still be useful to focus on the primary property violation that best captures the rationale behind the creation of the method. The identification of a primary property violation can focus debate on well-defined fundamental principles. Such a focus can be especially useful in discussions of perceived theoretical advantages of alternative belief entailment schemes. When the selection of a scheme is based on the pragmatics of computation or belief assessment, identifying a ten tral property violation can be useful in characterizing problems that may arise in practice. It may also be useful to categorize the differences between probability and alternative belief entailment methods. Such a categorization scheme can summarize agreement of any methodology with the intuitive properties. If we examine the status of the seven intuitive properties of belief, non- probabilistic strategies can be viewed to fall into one of the following categories: 2 12 / SCIENCE 1. Generalization: The elimination or weakening of particular intuitive properties. 2. Specialization: The addition of new fundamental properties or the strengthening of existing properties. 3. Self-inconsistency: The addition or strengthening of properties such that a logical inconsistency arises in the set of fundamental properties; the set of properties become self-inconsistent. 4. Substitution: The substitution of one or more properties for another such that the set does not fall into one of the above categories. Armed with this intuitive framework, we will now explore specific examples of popular belief entailment methodologies that are often viewed as competing with probability theory. 
In particular we will examine fuzzy logic, the Dempster- Shafer theory of belief functions, and the MYCIN certainty factor model. VII EXAMINATION OF ALTERNATIVE METHODS A. Fuzzy Logic There are currently at least two distinct forms of fuzzy logic used to manage uncertainty. Each deviates from the intuitive properties in a different way. One form of fuzzy reasoning applied to managing uncertainty was introduced by Zadeh [2]. Fuzzy logicians using this methodology do not object to the use of probability theory when events are precisely defined. However, they argue that it is desirable to reason with imprecision in the definition of events in addition to uncertainty about their occurrence. They allow beliefs to be assigned to imprecise events as well as precise ones. This version of fuzzy logic theory includes fuzzy versions of Bayes’ theorem [16]. Zadeh attempts to demonstrate the need to assign belief to fuzzy propositions in’ the following challenge: An urn contains approximately n balls of various sizes, of which several are large. What is the probability that a ball drawn at random IS large [16]? Returning to our intuitive properties, it appears that the central dissimilarity of this kind of fuzzy logic with probability theory occurs with the clarity property. This methodology weakens the clarity property in that it is assumed that events to which belief may be assigned remain ill-defined. We would classify this school of fuzzy logic as being a generalization of probability theory. The identification that a central difference between this form of fuzzy logic and probabilrty theory occurs at the level of the clarity property defines a particular focus for discussion about the benefits or rationale of the methodology. Analysis of the advantages and disadvantages of fuzzy logic should center on the rationale and ramifications of weakening the clarity property. Many probabilists have argued against the weakening of the clarity property by pointing out that imprecision in the specification of a proposition could always be converted to uncertainty about the occurrence of a related precise event that had similar or identical semantic content. It has also been proposed that probability distributions over variables of interest can capture the essence of fuzziness within the framework of probability [17]. It has also been argued that the use of imprecise propositions is inappropriate in making important decisions. The penalty for reasoning with fuzzy events is often obscured by the examples used in presentations of fuzzy set theory. Typical examples tend to center on reasoning about events with small potential losses and gains. For example, it is generally not very important whether or not a person of height 4’ 10” is called “short.” However, problems with using fuzzy events may be more apparent when large potential utility changes are associated with events. The cost for relying on a fuzzy entailment calculus is highlighted by the following example of a high-stakes situation: Stan finally received news about the growth on his chin. His physician, who was quite fond of fuzzy logic, reported to his nervous patient, “The test results usually mean that it is sorneti,hut likely that you have cancer. As the tumor is quite large and probably dangerous, I will operate. You shouldn’t worry; my patients usua(/y survive such operations. A decision theorist might argue that, in general, lack of clarity as in the above problem will lead to lower expected utility of outcome. 
That is, a cost is incurred by foregoing the use of the clarity property. The comparison of fuzzy and precise versions of a problem would allow an actual penalty associated with loss of information to be calculated. Decision theorists might argue that imprecision may not be tolerable in certain high stakes situations. We move next to an alternative fuzzy logic methodology. In this methodology [S], the degree of membership of a proposition P in the set of true propositions, denoted ,r-p(P), is interpreted as the degree of belief in the hypothesis. That is, /In = Pie. (2) We should note that a logical equivalency between this brand of fuzzy reasoning and forms of multi-valued logic has been demonstrated [S]. In this approach, it is assumed that ,+B) = MIWTV’) v I@) 1. Given the correspondence (2), we see that this brand of fuzzy methodology is not consistent with the hypolhetical conditioning property. Therefore, this form of fuzzy reasoning falls into the subsritution category abole. Probabilists who accept I/~(A), i’(B),-, and /lT(AB) as measures of belief would object to the violation of the hypothetical conditioning relation. They might argue that the final belief in the conjunction /fT(AB) is not necessarily dependent solely on /tT(A) and I/~(B). The violation of hypothetica! conditioning in this case is tantamount lo imposing independence or uniform conditional dependence (equivalent dependency among all propositions) where such a relationship may not exist. R. Dempster-Shafer In the Dempster-Shafer (DS) theory [3], two separate measures of belief can be assigned to each proposition P. These measures are referred to as the “belief” and “plausibility” in P, denoted Bel(P) and PInus respectively. Also, Bel(P) is not directly related to Bel(-P); instead, Bel(P) = 1 - Plaus(-P). Similarly, Plaus(P) q 1 - Bel(-P). Thus, the theory appears to differ from probability theory with respect to the scalar continuity property as well as the complementarity property. However, an examination of the original motivation for the theory reveals a more fundamental difference; the DS theory allows for the existence of well- defined hypotheses to which degrees of belief cannot be assigned. Thus, it seems that the central issue behind the development of the DS theory is the weakening of the completeness property. The fact that two numbers can be attached to the belief in any hypothesis is a consequence of this more fundamental difference between the two theories. To illustrate this, consider the following problem taken from Shafer [ 181: Is Fred, who is about to speak to me, going to speak truthfully, or is he, as he sometimes does, going to speak carelessly, saying something that cotnes to his mind, paying no attention to whether it is true or not? Let S denote the possible answers to this questlon; S q (truthful, careless). Suppose I know from experience that Fred’s announcements are truthful reports on what he knows about 80% of the time and are careless statements the other 20% of the time. Then I have a probability measure p over S: p(truthful} = .8, p(careless} = .2. Are the streets outside slippery? Let T denote the possible answers to this question; T = (yes, no}. 
And suppose Fred’s answer to this question turns out to be, “The streets outside are slippery.” Taking account of this, 1 have a compatibility relation between S and T; “truthful” IS comuatible with “qes” but not with “no,” while compatible with both “yes” and “no.” “careless” is If one wanted to use probability theory to determine thd belief in the hypothesis that the streets outside are slippery given Fred’s report, additional information would be needed. In particular, one’s prior belief that the streets outside are slippery and the conditional belief that Fred WIII be correct ni&n that he is careless will be required. If r is the needed zrior belief that the streets outside are slippery and if s is the conditional belief that Fred will be correct given that he is careless, the belief of interest can be calculated using Bayes’ theorem: .8r + .2rs p(slipperylreport) = ___------------_--------- .8r + .2rs + .2(1-r)(l-s) In DS theorq, one is allowed to assert that r and s cannot be assessed. To make up for this lack of information, the theory uses the “compatibility relation” described above in order to define beliefs relevant to the problem. The DS “belief” and “plausibility” that the roads are slippery (“yes”) are given b> q p(“truthful”) = .8 Plaus( (“yes”}) q xulXc) ) {“y&‘) () P(X) = p(“truthful”)+p(“careless”) q 1. where XC) means that x in S and y 111 T are compatible. Thus, the violation of the scalar continuity and complemeniurity properties arises from a weakening of the completeness propert). Based in the weakening of this property, DS can be considered a generalization of probability theory. Many have objected to the weakening of the comp/ereness property. For example, most dectsion analysts would insist that a personal measure of belief can be assigned to any well- defined proposttion when placed in the context of a decision. There has been research in the decision analysis community focusing on the pragmatics of assessing belief in an) well- defined proposition. C. Certainty factors We now turn to the MYCIN certainty factor model used for belief entailment in a number of rule-based systems. The MYCIN certainty factor model [4] can be shown to be self- inconsistent [ 19, 201. Thus, the original certainty factor model falls into the third category above. There are several ways to demonstrate inconsistency in the model. We will outline one of these approaches here. The model is an augmentation to the rule-based representation paradigm. Knowledge is represented as rules of the form IF <evidence> THEN <hypothesis>. To each rule IS attached a certuinty factor, denoted CF(H,F), which is intended to represent the chlrnge in belief in hypothesis H given that evidence E Uncertainty and Expert Systems: AUTOMATED REASONING / 2 1.3 becomes known. The definition of CF(h,E) is gtven in the ACKNOWLEDGEMENTS original work: P(HIE) - P(H) We thank Greg Cooper, Arthur Dempster, Larry Fagan, Benjamin Grosof, Judea Pearl, Glenn Shafer, Peter Szolovits, _---____----- P(HIE) > P(H) 1 - P(H) Ted Shortliffe, Michael Wellman, and Lotfi Zadeh for useful discussions. CF(H,E) = (3) REFERENCES \ P(HIE) - P(H) ------------- P(H) > P(HIE) P(H) where p(H) is the prior Ijrobability of H and p(HIE) is the posterior probability of H given E. One component of the model involves a prescription for combining certainty factors. For example, suppose two pieces of evidence E, and E, bear on hypothesis H. 
In the model, the two certainty factors CF(H,E,) and CF(H,E,) are combined to give an effective certainty factor, CF(H,E,r\E,), for the rule IF E,r\E, THEN H with the following function: x,y > 0 x’y < 0 x + y + xy x,y < 0 where x q CF(H,E& Y = CF(H,E2), and z q CF(H,E,AE,). (4) An inconsistency follows from the two relations above. From (4), it follows that CF(H,E,r\E,) = CF(H,E,r\E,). That is, the combination of evidence is commutative. However, it can be shown that the definition of certainty factors, (3), prescribes non-commutative combination of evidence. Recent work has focused on removing inconsistencies in the certainty factor model [19]. The consistent refortnulation of the MYCIN certainty factor model falls into category 2 above; it can be shown that the certainty factor model is a specialization of probability in that assumptions of conditional independence are imposed by the methodology. For example, it can be shown that (4) is consistent with Bayes’ theorem only if E, and E, are conditionally independent given H and its negation. Although the certainty factor model is computationally efficient, many probabilists would feel the methodology was still unjustified because of its imposition of potentially invalid independence assumptions. They might seek a method whereby the tradeoff between computational efficiency and correctness can be controlled. Indeed, methods in which it is possible to selectively ignore dependencies that are not worth the computational effort to consider are currently being investigated [21]. VI I I CONCLUSION 1. Pearl, J. Fusion, propagation, and structuring in Bayesian networks. Presented at the Symposium on Complexity of Approximately Solved Problems, Colutnbia University, 1985 2. Zadeh, L.A. The role of fuzzy logic in the management of uncertainty in expert systems. Fuzzy Sets and Systems 11 (1983), 199-227. 3. Shafer, G.. A Mathematical Theory of Evidence. Princeton University Press, 1976. 4. Shortliffe, E. H. and Buchanan, B. G. A model of inexact reasoning in medici tie. Mathematical Biosciences 23 (1975), 351-379. 5. Gaines, B.R. Fuzzy and probability uncertainty logics. Information and Control 38 (1978). 154-169. 6. Cheeseman, P. In defense of probability. Proceedings of the Ninth international Joint Conference on Artificial Intelligence, IJCAI-85, 1985. 7. Cohen, P. R. Heuristic Reasoning About Uncertainty: An Artificial intelligence Approach. Ph.D. Th., Computer Science Department, Stanford University, Aug. 1983. 8. Cox, R. Probability, frequency and reasonable expectation. American Journal of Physics 14, 1 (January-February 1946), l-13. 9. Jaynes, E.T. course notes Plausible reasoning. Chapter in unpublished IO. Tribus, M. What do we mean by rational? In Rational Descriptions, Decisions and Designs, Pergamon Press, New York, 1969. 11. Buchanan, B.G. Logics of Scientific Discovery. Ph.D. Th., University of Michigan, Ann Arbor, 1966. Of Discovery. Cam bridge 12. Hanson, N. University Press, .R.. Patterns Cambridge, 1958. 13. Simon, H.A. Does scientific discovery have a logic? Philosophy of Science 40 (1973), 471-480. 14. Aczel, J.. Lectures on Functional Eyuatiotls and Thei, Applicntinn.7. A cademic Press, New York, 1966. 15 Dempster, A. by a multivalued 325-339. P. Uppet and lower probabilities induced mapping. Ann. Math. Statistics 38 (1967), 16. Zadeh, L.A. Fuzzy probabilities and their role in decision analysis. In Proceedirlgs of the Fourth MIT/ONR Workshop on Distributed I~?f~ormation and Decision Systems, MIT, 1982. 17. Cheeseman, P. 
Probabilistic versus fuzzy reasontng. In Uncertuinty and Probability in Artificial intelligence, North- Holland, New York, 1986. We have presented a logical mapping between several intuitive properties and the axioms of probability theory, and have given examples of how the mapping can be useful in identifying the fundamental differences between probability theory and non-probabilistic methodologies. We believe that the framework can help clarify discussions about alternative belief entailment schemes, whether they currently exist or result from future research. We recommend that investigators who seek a method for reasoning under uncertainty review the fundamental properties of measures of belief to gain an intuitive perspective on the nature of probability theory and on the relationship of non-probabilistic alternatives to probability. I& Shafer, G. Probability judgment in artifictal intelltgence. In Uncerfainty in Artificlcrl Intelligence, North-Holland, 1986. 19. Heckerman, D.E. Probabiltsttc Interpretations for MYCIN’s Certainty Factors. In Uncertainty in Artificial intelligence, North Holland, New York, 1986. 20. Horvitz, E. J., and Heckerman, D. E. The Inconsistent Use of Measures of Certainty in Artificial Intelligence Research. In Uncertainty in Artificial Intelligence, North Holland, New York, 1986. 21. Howard, R. A., Matheson, J. E. Influence Diagrams. In Readings on the Principles and Applications of Decision Analysis, Howard, R. A., Matheson, J. E., Eds., Strategic Decisions Group, Menlo Park, CA, 1981, ch. 37, , pp. 721-762. 214 / SCIENCE
Parallel Logical Inference and Energy Minimization

Dana H. Ballard
Computer Science Department
The University of Rochester
Rochester, NY 14627

Abstract

The inference capabilities of humans suggest that they might be using algorithms with high degrees of parallelism. This paper develops a completely parallel connectionist inference mechanism. The mechanism handles obvious inferences, where each clause is only used once, but may be extendable to harder cases. The main contribution of this paper is to show formally that some inference can be reduced to an energy minimization problem in a way that is potentially useful.

1. Motivation

This paper explores the possibility that a restricted class of inferences in first order logic can be made with a very large knowledge base using only a parallel relaxation algorithm. The main restriction is on the infrastructure of the logical formulae, but not on the number of such formulae. The relaxation algorithm requires that problems be formulated as the intersection of (possibly huge) numbers of local constraints represented in networks. The formulation of the algorithm is in terms of a connectionist network [Feldman and Ballard, 1982].

Recently a class of algorithms for solving problems has emerged that has particularly economical formulations in terms of massively parallel architectures that use large networks of interconnected processors [Kirkpatrick et al., 1983; Hopfield, 1984; Hopfield and Tank, 1985; Hinton and Sejnowski, 1983]. For a survey, see [Feldman, 1985]. By "massively parallel," we mean that the number of processors is on the order of the size of the knowledge base. This class of algorithms has been described as "energy minimization" owing to analogies between the algorithms and models of physical processes. The key contribution of this paper is to show that some theorem proving can be described in terms of this formalism. Formally, there is an algorithm to minimize the "energy" functional E given by

E = - Σ_i Σ_j w_ij s_i s_j + Σ_i θ_i s_i     (2)

where s_i is the binary state of a unit, either on (1) or off (0), w_ij is a real number that describes a particular constraint, θ_i is a threshold (also a real number) [Hopfield, 1982], and the weights are symmetric, i.e., w_ij = w_ji. The energy functional has a related constraint network where there is a node for each state, and the weights are associated with the ends of arcs in the network and the thresholds are associated with each state. The technical status of algorithms for minimizing E is discussed in [Ballard, 1986]. This paper shows that the weights and thresholds can be chosen to encode theorem-proving problems.

A controversial aspect of our formulation is that it is not guaranteed to work in every case. For many scientific applications, an inference mechanism that handles only the simpler cases, and fails in many cases, might not be useful. In particular, this is true for research directed toward the development of mechanical theorem provers that handle cases that are difficult for humans. However, for models of human inference mechanisms, this may not be the case. Our conjecture is that facts that humans can infer in a few hundred milliseconds have an efficient time solution on a parallel machine. The key assumption we are willing to make is that the kind of inferences that humans can do quickly may be restricted cases of a general inference mechanism. One reason for this assumption is that the human inference mechanism can be viewed as one component of several in a perception-action process. For example, in our model, if the inference mechanism fails to identify a visual object, one of the options available is to move closer and gather more data. Thus our goal is to develop an inference mechanism that allows many inferences to be made in parallel, with the understanding that it may also fail in many cases.

A general method of theorem proving is refutation. In other words, to prove S ⊢ W, where S and W are sets of clauses, one attempts to show that S ∪ ¬W is unsatisfiable. One way of doing this is to use resolution [Robinson, 1965]. Our approach uses the unit resolution paradigm but has three important restrictions: (1) clauses may be used only once in each proof; (2) the knowledge base must be logically consistent; and (3) the method uses a large network that must be preconnected.
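Before turning to the network construction, it may help to see what minimizing E means operationally. The sketch below is generic illustrative code, not the paper's algorithm: it evaluates the energy functional for binary unit states with symmetric weights and applies a simple greedy asynchronous update, in the style of Hopfield networks, that flips one unit at a time and never increases E.

import random

# Minimal sketch (not from the paper): the energy functional
#   E(s) = - sum_{i<j} w_ij s_i s_j + sum_i theta_i s_i
# over binary states s_i in {0, 1}, with symmetric weights, plus a greedy
# asynchronous update that flips a unit only when the flip lowers E.

def energy(w, theta, s):
    n = len(s)
    pair_term = sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    return -pair_term + sum(theta[i] * s[i] for i in range(n))

def settle(w, theta, s, sweeps=100, rng=random.Random(0)):
    """Repeatedly visit units in random order, flipping any unit whose flip lowers E."""
    n = len(s)
    s = list(s)
    for _ in range(sweeps):
        changed = False
        for i in rng.sample(range(n), n):
            flipped = s[:i] + [1 - s[i]] + s[i + 1:]
            if energy(w, theta, flipped) < energy(w, theta, s):
                s, changed = flipped, True
        if not changed:        # no single flip helps: a local minimum of E
            break
    return s

# Toy network: two mutually supporting units with slightly negative thresholds.
w = [[0.0, 1.0], [1.0, 0.0]]
theta = [-0.5, -0.5]
s = settle(w, theta, [0, 0])
print(s, energy(w, theta, s))   # [1, 1] with E = -2.0

In the paper's construction the weights and thresholds are not hand-set as in this toy example; they are derived from the constraints summarized in Table 1.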
For example, in our model, if the inference mechanism fails to identify a visual object, one of the options available is to move closer and gather more data. Thus our goal is to develop an inference mechanism that allows many inferences to be made in parallel with the understanding that it may also fail in many cases. A general method of theorem proving is refutation. In other words, to prove S I- W where S and W are sets of clauses, one attempts to show that S u 1 W is unsatisfiable. One way of doing this is to use resolution [Robinson, 19651. Our approach uses the unit resolution paradigm but has three important restrictions: (1) clauses may be used only once in each proof; (2) the knowledge base must be logically consistent; and (3) the method uses a large network that must bepreconnected. Theorem Proving: AUTOMATED REASONING / 20.3 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. The overall organization of our parallel inference is shown in Figure 1. The process has three overall phases that are carried out sequentially: Step 0: Logical Consistency Constraints. The first part has the goal of activating a logically consistent set of constraints. This is the focus of other research, and we assume that the enterprise is successful. Step 1: Filter Constraints. Constraints derived from the clause structures [Sickel, 1976; Kowalski, 19751 deactivate parts of the network that are inconsistent. Step 2: Resolution. The last part of the algorithm uses a second filtering technique based on unit resolution. In this phase, parts of the network are deactivated if they correspond to pairs of clauses that would resolve where one of the pair is a unit clause. If the entire network can be deactivated in this way, a proof has been found; otherwise, the result is inconclusive. start r\I; Parallel Filter Figure 1. 2. The Constraint Network The constraint network has five sets of nodes: (1) C, the set of clause nodes; (2) L, the set of predicate letters and their complements; (3) F, the set of clause fragments; (4) U, the set of unifications between fragments; and (5) B, the set of substitutions. In any set of clauses there will be one clause node, c c C for each clause in the set, There will be one clause fragment node f c F for each predicate letter and its complement that are mentioned in different clauses. There will be a separate unification node u c U for each possible resolution between complementary literals in different clauses. Finally there will be a substitution node b c B for each possible substitution involving a unification. For example, in the following set {S U 1 W} = {cl: P(v), cz: -P(hy)h c = {Cl, c2) L = {P, -P} F = {(Cl, PI, (c2, -+I) u = {((Cl, P) (cz, -P))} B = {xb, ya} (1) There are six different kinds of constraints: (1) a predicate letter constraint; (2) a clause-predicate substitution constraint; (3) a clause constraint; (4) unification constraints; (5) a substitution constraint; and (6) a unit clause constraint. The first five capture constraints implied by the clause syntax and unification. The sixth is an additional constraint which anticipates the unit resolution proof procedure. Table 1 summarizes these constraints, which are described in detail below. All of the constraints can be obtained directly from the clause syntax. The Clause Constraint, The clause constraint captures the notion that a clause can only be part of the solution if all of its fragments have viable bindings. 
Thus the fragments must be connected to the node in a way that exhibits conjunctive behavior. Table la shows an example of a clause with n fragments. The Clause-Predicate-Substitution Constraint (or Clause Fragment Constraint). This constraint is derived from the clauses in a straightforward way. Each clause may be decomposed into triples consisting of: (clause symbol, predicate letter, substitution). For example, cl: P(x)&(a) may be decomposed into (cl, P, ~1) and (cl, Q, ~2) where ul and u2 are appropriate substitutions (these will be discussed further as part of the substitution constraints). In the filter network, there are a set of clause fragment nodes F, one for each triple. A clause fragment node f is connected to each node in the triple with positively weighted connections as shown in Table lb. Unification Constraints. Complementary literals in different clauses that can unify constrain the network in two important ways. These can be captured by positively weighted links to unification nodes. Any two clause fragment nodes that are connected to complementary literals are linked to a unique unification node. That node also has links to substitution nodes for each of the substitutions that result from the unification. Thus in the example given by Equation (2), one unification node was linked to the two appropriate fragment nodes and the two appropriate substitution nodes. The Literal Constraint. The literal constraint is derived from propositional logic. If in the set of clauses, a literal appears without its complement or vice versa, then that clause can be pruned from the solution. In terms of the filter network, this constraint is easily expressed as a positively weighted arc between different nodes representing predicate letters, as shown in Table Id. 20-i / SCIENCE The Substitution Constraints. The substitution constraints limit possible bindings between terms. The clauses that can potentially resolve constrain possible substitutions, and these possible substitutions are realized by a set of substitution nodes S. Substitutions that are incompatible are connected by negatively weighted connections. For example, in the set of clauses -P(Q), P(x,y)Q(y,d, l&k&, +(a,& the possible substitutions are xa, yb, yc, and zd. Of these, Table 1: Summary of Constraints - positive links negative - links between rival constant substitutions e. Constant Substitution f. Substitution Incompatibility compatible pairs are: (xa, yb), (xa, yc) and (yc, zd), and there is one incompatible pair: (yc, yd). This example is simple and does not capture all the constraints possible in unification. At least one other is necessary. This relates bindings between constants and variables. If a variable is substituted with a constant and another variable is substituted with a different constant, then the two variables cannot be substituted with each other. These constraints are summarized below: x, y : var ; c, d const (xc, xy, yd) are incompatible (xc, xd) are incompatible In the network there are potentially NU(Nc + NJ nodes where NC is the number of constants and IV, is the number of variables. Thus the above constraints are connected between all relevant groupings. Representative network fragments are shown in Table le and lf. These constraints can be extended to handle some function symbol constraints, but the development herein will assume only constants and variables. The substitution constraint can be easily implemented if we allow multiplicative effects. 
Multiplicative effects cause a node to be turned off if any one of the inputs becomes zero. A way of handling this problem that also adheres to the symmetric weight requirement needed for convergence is to use ternary nodes. Table If shows a multiplicative connection in terms of symmetric ternary connections, and Figure 2 shows the detailed connections. The final constraint to be added is a single use constraint. This constraint is not dictated by the clause syntax but anticipates a unit clause inference rule. The constraint is simply this: literals in different clauses that can resolve with the same literal in a given clause have mutually inhibitory connections. To clarify this, consider the example cl:P(x), ~2: 1 P(a), cg:lP( b). Either cg or c3 could resolve with cl. However, to force the network to “choose” one or the other, a negatively weighted arc is introduced between the corresponding fragments (~2, -P) and (~3, -P). Figure 2: The substitution network showin only consistency connections for constants {a, b and 3 variables {x, y, z}. Theorem Proving: AUTOMATED REASONING / 205 3. Choosing the Weights In the previous sections it was shown that the formulae of first order predicate calculus and the inference rules of a proof producer (viz. resolution) can be uniquely expressed in terms of a network. Such a network has a particularly simple form, consisting only of undirected links between nodes. To relate this network to Equation (2), we add real-valued weights at the ends of each arc and real-valued thresholds to each node. Owing to the various constraints, some unification nodes or substitution nodes may be forced off. Under this circumstance, just the clause structure that depends on these should be turned off. Building the networks out of AND, OR, and AND-OR nodes guarantees that this happens, since the energy efficiency of the desired state can be shown to be optimal by direct calculation [Ballard, 19861. Thus parts that are consistent according to the logical syntax form local energy minima. T , able 2: (a) Weights and (b) thresholds for filtering stage. (a = (l/2)(0 + e); Owing to space limitations, we will omit the proof m = number of instances of a predicate P; n = number of instances of a complementary predicate ,P; NC = number of literals in the clause.) that the weights and thresholds that we specify guarantee the desired behavior. This can be found in [Ballard, 19861. Instead, the basic ideas will be outlined using Figure 3. The figure shows a hypothetical plot of the energy of the network as a function of the states. The two most important constraints are those that guarantee that: (1) each literal has exactly one complement; and (2) the substitutions are consistent. These are to be weighted so that violating them incurs very large penalties. Remaining states that satisfy them are defined to be admissible states. These are weighted so that the more of the network structure that can be turned on, the better. The global minimum of the network is simply the admissible state with the most clause structure. E A (4 &v LL EH &v -k* -k* * between appropriate pairs and triples ww -k* -k’ states b states admissible state with most clause structure Figure 3: Hypothetical plot of energy vs. states showing desirable network propertIes. The complete summary of weights is shown in Table 2. From this table, it is not intuitive how these weights function. To help overcome this problem we classify the nodes into three types, AND, OR, and AND/OR, as shown in Figure 4. 
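As a cross-check on the counts just listed, the following sketch enumerates the node sets C, L, F, U, and B of Section 2 for the example {P(a), P(b), P(c), ¬P(x), ¬P(y)}. The tuple encoding of clauses (name, sign, predicate, argument) is my own and covers only one-literal clauses with a single argument, which is all this example needs.

```python
from itertools import product

# One literal per clause: (clause name, sign, predicate, argument).
clauses = [("c1", "+", "P", "a"), ("c2", "+", "P", "b"), ("c3", "+", "P", "c"),
           ("c4", "-", "P", "x"), ("c5", "-", "P", "y")]

def is_var(t):
    return t in ("u", "v", "w", "x", "y", "z")

C = [c for c, *_ in clauses]                                           # clause nodes
L = sorted({("" if s == "+" else "-") + p for _, s, p, _ in clauses})  # literal nodes
F = [(c, ("" if s == "+" else "-") + p) for c, s, p, _ in clauses]     # fragment nodes

# One unification node per complementary pair of fragments in different
# clauses, plus a substitution node for each variable/constant binding.
U, B = [], set()
pos = [cl for cl in clauses if cl[1] == "+"]
neg = [cl for cl in clauses if cl[1] == "-"]
for (c1, _, p1, t1), (c2, _, p2, t2) in product(pos, neg):
    if p1 != p2 or c1 == c2:
        continue
    U.append(((c1, p1), (c2, "-" + p2)))
    for var, const in ((t1, t2), (t2, t1)):
        if is_var(var) and not is_var(const):
            B.add(var + const)

m = sum(1 for _, s, p, _ in clauses if s == "+" and p == "P")   # instances of P
n = sum(1 for _, s, p, _ in clauses if s == "-" and p == "P")   # instances of -P
print(len(C), len(F), len(U), sorted(B))   # 5 clause, 5 fragment, 6 unification nodes
print("OR connections: P node", m, "| -P node", n,
      "| each P fragment", n, "| each -P fragment", m)
```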
All of the node types will have negative thresholds: AND nodes will have a threshold that must be less (in absolute value) than the sum of all the arc weights but greater than any subset of arc weights; OR nodes will have a threshold that is less than any of the arc weights; an AND-OR node may be constructed if the sum of the OR arcs is equal to the value of the weights on the AND arcs, assuming that the latter are all identical. 4. How It Works Consider the set of clauses {P(a), P(b), P(c), lP(x), lP(y)}. The network for this example is shown in Figure 5, with the weights and thresholds chosen according to Table 2 with 8 = 1. To understand the example, note that if there are m instances of a literal P and n of its complement in different clauses, then: 1) for the P node there will be m OR connections; 2) for the -P node there will be n OR connections; 3) for each fragment node related to P there will be n OR connections; 4) for each fragment node related to 1 P there will be m OR connections. 206 / SCIENCE At the beginning of stage two the filtering process has pruned the network so that only portions with consistent substitutions are left in the on state. Therefore in the resolution process there is no need to recheck the substitutions since they are known to be consistent. For this reason, the substitution network may be ignored. It is removed from the computation and the threshold on the unification node is adjusted accordingly. The thresholds on the clauses are now lowered (remember that they are negative) to the point where they are each greater in absolute value than the weights from the clause fragment link. This means that it is now profitable to turn the singleton clause nodes off. Ideally, this should cause other nodes to be turned off as well. If the entire network can be turned off, a proof by unit resolution exists. [Ballard, 19861 elaborates on this point. The one case where the network cannot be turned off is where there is a loop, e.g., clfufcgfufclfufcl. The energy of a loop is negative, whereas the energy of all nodes in the off state is zero, so it is never profitable to turn off the nodes in a loop. The main change to the weights is to leave out the substitution network and make each clause node an OR node with threshold W2. 5. Summary and Conclusions The implementation of the first order logic constraints results in two coupled networks: (1) a clause network that represents the clause syntax; and (2) a binding network that represents the relationships .5(1 +e) AND .5(1 +c) .25(1 +E) OR .5(1 +E) AND-OR Figure 4: AND, OR, and AND/OR nodes. Numbers inside tokens are thresholds (that appear next to the tokens in Figure 5). Epsilon is a small positive number required for correctness [Ballard, 19861. between terms in different clauses. The method for resolving bindings, unification, can be as complex as the entire inference mechanism. Thus for the purposes of computing eficiently, we would expect the actual bindings in the knowledge base to have a simple structure. At the outset, the possibility of reusing clauses was ruled out, but there are some limited cases that can be handled. To see the necessity of reusing clauses, consider {S U ‘W) = {dW, wP( b), c3: -P(x)&(x), ~4:7Q(a)7Q(b)}. This can be handled by resolution in a straightforward way. The resolution tree is: ((cl, cg), ((~2, c3), ~4)). However, note that c3 appears twice. 
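The resolution phase just described can be read, at the propositional level, as repeatedly letting a unit clause cancel one complementary literal occurrence and then retire itself; everything is off exactly when a unit-resolution refutation that uses each clause at most once exists. The sketch below is an illustration of that reading (ground literals only, greedy choice of partners), not the network computation itself.

```python
def deactivate_by_unit_resolution(clauses):
    # Each clause is a list of ground literals; negation is a leading '-'.
    # A unit clause resolves away one complementary occurrence in another
    # clause and is then used up (the single-use restriction); emptied
    # clauses are off.  Returns True when every clause can be turned off.
    clauses = [list(c) for c in clauses]
    comp = lambda lit: lit[1:] if lit.startswith("-") else "-" + lit
    progress = True
    while progress and clauses:
        progress = False
        for i, c in enumerate(clauses):
            if len(c) != 1:
                continue
            for d in clauses:
                if d is not c and comp(c[0]) in d:
                    d.remove(comp(c[0]))              # resolve one occurrence
                    clauses.pop(i)                    # the unit clause is used up
                    clauses = [e for e in clauses if e]
                    progress = True
                    break
            if progress:
                break
    return not clauses

print(deactivate_by_unit_resolution([["p"], ["-p", "q"], ["-q"]]))   # True: proof found
print(deactivate_by_unit_resolution([["p"], ["-p", "q"]]))           # False: inconclusive
```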
The consequence of this is that since the unification constraints do not allow xa and xb simultaneously, the network will not pass the filter test. To handle this case we note that both possibilities for cg involve constant bindings. Thus we can resolve this by making two copies of c3: -p(a)Q(a) and -, P( b)Q( b). Once this is done, the inference mechanism will find the proof. However, this is not a very elegant strategy. As noted by Josh Tenenberg, if cg were -P(X)&(Y) one would need four copies, +(a)&(~),~ lP(a)Q(b), ~P(b)Q(a), Tp( b)Q( b), and in general a clause with k literal% each with a different variable, would generate kNC possibilities, where NC is the number of constants. The main intent of this paper has been to provide a new look at formal inference mechanisms from the standpoint of performance. Our contention is that models that do not have a parallel implementation are unlikely candidates for models of human inference. -.66 Figure 5: Network for {P(a), P(b), P(c), -P(x), -P(y)}. Epsilons omitted. Theorem Proving: AUTOMATED REASONING / 20’ This realization may prove catalytic for approaches that try to unify the complementary goals of competence and performance. The technical contribution of this paper is in the detailed specification of a network inference mechanism. The network runs in parallel and can handle obvious inferences in first order logic. We have described how the problem of proving theorems or making inferences of the form S b W can be reduced to two sequential network optimization problems. The first checks the formulae for the constraints defined in Section 2 and settles in a state where each literal instance has a unique complement. The second minimization is equivalent to a unit resolution proof. If a proof by unit resolution exists, it will be manifested as a global energy minimum. While no computer simulations have been done, the proofs provided in [Ballard, 19861 show that the problem reduction works. The stable states of the two optimization problems are just those desired. The reduction of theorem proving to energy minimization is an important step, but much additional work needs to be done. At present, the one convergence proof available [Geman and Geman, 19841 does not provide an encouraging estimate on the running time of such algorithms, and simulations that have been done give varying results for different problems. Algorithms that require global minima are still comparable to conventional approximation techniques [Johnson et al., 19841. However, studies of the Traveling Salesman problem using analog processing units have shown that good solutions can be found quickly [Hopfield and Tank, 19851. These encouraging results are a source for some optimism: perhaps in the case of inferences, if a measure of good, average performance is used instead of the classical best-, worst-case performance, these algorithms will exhibit behavior closer to the Traveling Salesman result. Acknowledgements Pat Hayes was enormously helpful when these ideas were in formative stages. The artful figures and carefully formatted text are the work of Peggy Meeker. Peggy also edited many earlier drafts of this report. I am grateful to John Mellor-Crummey for pointing out several areas that needed improvement in the original design. Jerry Feldman, Josh Tenenberg, Leo Hartman, and Jay Weber each made several helpful suggestions on earlier drafts. This work was supported in part by the National Science Foundation under Grant DCR- 8405720. 
References
Ballard, D.H., "Parallel logical inference and energy minimization," TR 142, Computer Science Dept., U. Rochester, March 1986.
Davis, M., "Obvious logical inferences," Courant Institute, 1983.
Fahlman, S.E., D.S. Touretzky, and W. van Roggen, "Cancellation in a parallel semantic network," Proc., 7th Intl. Joint Conf. on Artificial Intelligence, Vancouver, BC, Canada, August 1981.
Feldman, J.A., "Energy and the behavior of connectionist models," TR 155, Computer Science Dept., U. Rochester, November 1985.
Feldman, J.A. and D.H. Ballard, "Connectionist models and their properties," Cognitive Science 6, 205-254, 1982.
Freuder, E.C., "Synthesizing constraint expressions," CACM 21, 11, 958-965, November 1978.
Garey, M.R. and D.S. Johnson. Computers and Intractability. W.H. Freeman, 1979.
Geman, S. and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Trans. PAMI 6, 6, 721-741, November 1984.
Henschen, L.J., "A tutorial on resolution," IEEE Trans. Computers C-25, 8, 770-772, August 1976.
Hinton, G.E. and T.J. Sejnowski, "Optimal perceptual inference," Proc., IEEE Computer Vision and Pattern Recognition Conf., 448-453, Washington, DC, 1983.
Hopfield, J.J., "Neural networks and physical systems with emergent collective computational abilities," Proc., National Academy of Sciences USA 79, 2554-2558, 1982.
Hopfield, J.J., "Neurons with graded response have collective computational properties like those of two-state neurons," Proc., Natl. Acad. Sci. 81, 3088-3092, May 1984.
Hopfield, J.J. and D.W. Tank, "'Neural' computation of decisions in optimization problems," to appear, Biological Cybernetics, 1985.
Johnson et al., Lecture Notes, seminar presentation at Yale, 1984.
Kirkpatrick, S., C.D. Gelatt, and M.P. Vecchi, "Optimization by simulated annealing," Science 220, 4598, 671-680, 1983.
Kowalski, R., "A proof procedure using connection graphs," JACM 22, 4, 572-595, 1975.
Nilsson, N.J. Principles of Artificial Intelligence. Palo Alto, CA: Tioga Pub. Co., 1980.
Nilsson, N.J. Problem-Solving Methods in Artificial Intelligence. New York: McGraw-Hill Book Co., 1971.
Robinson, J.A., "A machine-oriented logic based on the resolution principle," JACM 12, 1, 23-41, January 1965.
Sickel, S., "A search technique for clause interconnectivity graphs," IEEE Trans. Computers C-25, 8, 823-835, August 1976.
A Unified Theory of Heuristic Evaluation Functions and its Application to Learning Jcns Christensen Computer Science Department, Stanford University, Stanford, Ca. 94305 Richard E. Korf Computer Science Department, University of California, Los Angclcs, Ca. 90024 Abstract WC prcscnt a characterization of heuristic evaluation functions Hhich unities their trcatmcnt in single-agent problems and two- person games. ‘l‘hc central result is that a useful heuristic function is one which dctcrmincs the outcome of a search and is invariant along a solution path. ‘I‘his local chnractcrization of heuristics can hc used to predict the cffcctivcncss of given heuristics and to automatically learn useful heuristic functions for problems. In one cxpcrimcnt, a set of rclntivc weights for the different chess pieces was automatically learned. 1. Int reduction Consider the following anomaly. ‘I’hc Manhattan distance heuristic for the Fifteen PuzAo is computed by monsuring the distance along the two-dimensional grid of each tilt from its current position to its goal position, and summing thtic values for each tile. Manhattan distance is a very cffccticc heuristic function for solving the Fifteen l~uz7lc 141. A complctcly analogous heuristic can bc dcfincd in three dimensions for Rubik’s Cube: for each individual movable piccc of the cube. count the nun;hcr of twists rcquircd to bring it to its goal position and orientation. and sum thcsc vducs for each component. ‘I’hrcc dimcnsionzl Manhattan dizlancc. howcvcr, is cffcctivcly worthless as a heuristic function for liubik‘s Cube 151. l’vcn though Rubik’~ Cube is similar to the t-‘iltccn Puy/.lc, the two heuristics arc virtually idcnIical, and i11 both ci\scs the goal is ilchicvcd when the value of th: heuristic is minimized, the hcurislic is very cffcctivc in one USC and usclcss in the other. As another anomalous cxamplc, consider the games of chcckcrs illld OthCll0 with ITliltCriill c0Ullt ilS iIll evaluation filllcti0n. OlhCll0 is a game played on an eight by tight square grid with picccs which arc white on one side and black on the other. t:,ach player altcrniltcly places picccs with his color showing on empty squares. Wliciicvcr ;I phycr rrli\C’CS his picccs i\t IX)th ClldS Of il lint Of his o~~pcmcnt’s picccs. the opponent’s picccs ;IIY flipped over i\nd hcccmc lllc property ol‘thc Wigillill plityCl+. ‘I’hc winner is the player whocc COIOI~ shows on the majority of the picccs iit the end of the gatnc. Material count is ;LII cv,lluation function which sums the number of picccs hclonging to OIIC player and subtracts the total material of the other player. It turns out that lTliltCl3;ll count is a f,tirly successful evaluation function for chcckcrs but rclativcly incffcctivc for OthCll0, CVCn lllougll tllC winiicr is tllC PlilyCr tllilt maximi/.cs his nliltcrial in both GISCS.* 110 111OIC lcgnl till hih picccs A challenge for any theory of heuristic evaluation fimctions is to explain these anomalies. An additional challcngc is to present a consistent intcrprctation of heuristic functions in single-agent problems and two-player games. Surprisingly, the trcatmcnt in the litcraturc of heuristic starch in thcsc two diffcrcnt domains has little in common. In single-agent scarchcs, a heuristic evaluation function is vicwcd as an cstimatc of the cost of the rcmaindcr of the solution path. In two-person pmcs. howcvcr, a heuristic function is vaguely charactcrizcd as a measure of the “strength” of a board position for one player versus the other. 2. 
A Unified Theory of Heuristic Evaluation Functions One criterion which distinguishes the successful heuristics from the unsuccessful ones above is that in the successful casts, primitive moves in the problem space make only small changes in the value of the heuristic function. In the cast of Manhattan distance for the Fificcn Puzzle. a single move clli~ngcs the Manhattan distance by a single unit whcrcas for Rubik’s Cube a single twist can change the Vanhattan distance by as much as eight units (eight picccs move at once). Similarly, the material count in chcckcrs r‘lrcly changes by more than a single piccc during mc move. but in Othello it can change by a lnrgc numhcr of pieces (up to 18 in one case). ‘I-his SuggcSts a theory hilt CVillLl~lti~~ll functions which arc rclativcly invariant over single moves i1rC more cffcctivc. A closely related idea was suggested by I.cnat in the more gcncral context of hcurislic producliotl rules [(I]. A production rule has a Icft-hand side that specifics a situation whcrc it is applicable. and 3 right-hillId side that dctcrnlincs the action to IX trtkcll in that situation. I ,cnilt argues that the power of heuristic production rules is dcrivcd from the fact that the appropriatcncss of ;I situation- action pair is ;I continuous function of both the situation and the action. It1 other words, if a particular action is appropriate in a particular SilUiltiOll. lhcn 1) iI similar action is likely to bc ilppl’O~~l’i;ltC iii tllc S;IllIC aitu;ltion. , .II\~ 2) the SitIllC ;Icli(jn is likely to bc ;q>propri;itc iI1 il simil,ir situ;ltion. Il’wc’ bro;ldcn lhc dclinilion 01 ilCtiOl1 to include evaluation. and ,IllOW the situation VilliilhlC t0 range over different states in the siilnc p~~hlcrn sp~c, then our notion of rclativc invariance o\‘cr single moves bccomcs il special cast of I.cnnt’s continuity idea. Of course. invariance over single iiiovcs is not enough to assiirc a LISC~UI cvalt~iltion lilnction, since this CalI bc trivially achicvcd by assigning ill1 StatCS tllC SiIIllC COllSt~lllt VillllC. ‘I’llC hciiris~ic VillLlCS must bc tied to actual payoffs in the game, in particular to the values of the goal st;rtcs. This suggests that when ;I heuristic function is ;Ipplicd to a goit1 stale. it should return the exact value of tllnt StiltC. I-# / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Informally. we claim that an idcal heuristic evaluation firnction has two propertics: 1) when applied to a goal state, it returns the outcome of the starch; and 2) the value of the function is invariant along an optimal solution path. ‘I’akcn togcthcr, these two propcrtics cnsurc a function which is a pcrfcct predictor of the outcome of pursuing any given path in the problem space. Thcreforc, a heuristic search algorithm using such a tinction should always make optimal moves. Furthermore, WC claim that any successful evaluation function will satisfy thcsc propcrtics to some cxtcnt. For cxamplc, the evaluation function for the A* algorithm [3] is Jln)=g(r$+- f/(,(n) whcrc g(tz) is the cost of the best path from the initial state to the node n and II(U) is an estimate of the cost of the best path from node II to a goal state. Typically the !z term is called the heuristic in this function, but for our purposes WC will refer to the entire function fas the heuristic evaluation function. When this function is applied to a goal node. 
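The claim that a single Fifteen Puzzle move changes Manhattan distance by exactly one unit is easy to verify mechanically. The sketch below does so for one random position; the tile encoding (0 for the blank, goal order 1 through 15) is my own choice, and any legal move simply relocates one tile by one grid step.

```python
import random

def manhattan(state, n=4):
    # Sum over tiles of grid distance from current position to goal position.
    total = 0
    for pos, tile in enumerate(state):
        if tile == 0:                      # the blank does not count
            continue
        goal = tile - 1
        total += abs(pos // n - goal // n) + abs(pos % n - goal % n)
    return total

def neighbors(state, n=4):
    # States reachable by sliding one adjacent tile into the blank.
    b = state.index(0)
    r, c = divmod(b, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < n and 0 <= nc < n:
            s = list(state)
            s[b], s[nr * n + nc] = s[nr * n + nc], s[b]
            yield tuple(s)

random.seed(1)
state = tuple(random.sample(range(16), 16))
deltas = {abs(manhattan(s) - manhattan(state)) for s in neighbors(state)}
print(deltas)   # {1}: every single move changes Manhattan distance by one unit
```

The analogous check for the three-dimensional heuristic on Rubik's Cube would show changes of up to eight units per twist, which is the contrast the argument above turns on.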
the h term is zero, the g term rcprcscnts the cost of reaching the goal from the initial state, and hcnccfrcturnt; the cost of the path or the outcome of the starch. If h is a pcrfcct estimator. then as WC move along an optimal path to a goal state. each move incrcascs g by the cost of the move and dccrcascs h by the same value. Thus, the value of f remains invariant alon; an optimal path. If h is not a pcrfcct cstimator,fwill vary somcwha! dcpcnding upon the amount of error in h. ‘I’hus, a good evaluation function for an algorithm such as A* will dctcrminc the outcome of the search and is rclativcly invariant over si nglc moves. Now consider a two-person game using minimax starch and a st;ltic evaluation filnction. ‘I’hc static evaluation rcflccts the strength of a given board position, When applied to a state whcrc the game is over, the function detcrmincs the outcome of the game, or which player won. ‘I’his is often added as a special cast to an evaluation function. typkillly returning positive and ncgativc infinity for winning positions for MAX and MIN. rcspcctivcly. When applied to a non-goal node, the function is supposed to return a value which predicts what the ultimate outcome of the game will be. ‘1’0 the cxtcnt that the evaluation is a11 accurate predictor, its value should not change as the anticipated moves arc made. ‘Ihu~. a good cvaluiltion function should bc invariant over the actual scqucncc of moves made in the ganlc. ‘I’hcrcl’orc, in both cxamplcs WC scc that a good evaluation function should have the propcrtics of 1) dctcrniining Lulconic and 2) iiivariancc over single moves. 2.1. Formal Description of the Theory In this section WC will dcfinc the propcrtics of outcome dctcrmination and move invariance and show that thcsc conditions arc sufficient for pcrfcct play by a heuristic search algorithm. A heuristic function is said to tk/mtritrc tltc ou/mttrc of a starch if when ilpplid to any terminal or goal state, it rclurns the figure of merit for the task. ‘I‘his is the criterion against which success is mcasurcd. I;or: cxi~mplc, in a single-person starch whcrc the task is to find ii IOWCSt COSt piItI1 tO a goal SIiltC, tIlC OUtCOlTlC WOtlId hC tIlC ;ICtlliil COSl 01’ LllC solution pitti li)und. Ii1 ;I two-person gi\lTlC’. 1hC outcome might bc cithcr win, lose, or a number indicating a score. or draw for a particular plwr, An opfitnal ntove from a given state is one which leads to a best outcome in the worst case. For cxamplc, in a single-person problem an optimal move is a move along a lowest cost path from the given state to a goal state. For a two-person game, an optimal move is dctormined by expanding the cntirc game tree co terminal values, minimaxing the terminal values back up the tree, and picking a move which forces a win if 011c exists, or forces a draw if no wins exist. If all moves result in a forced loss, all moves are optimal. Note that the optimal move is the best move given the current state of the problem or game. It is dcfincd for all states, not just those on a globally optimal path from the initial state. An algorithm optimal move. exhibits petj+ci if for all states. 
it makes an A heuristic hnction is said to bc move invariflnf if the value it returns for any given state is equal to the value returned for the immcdiatc successor which results from an optimal move A heuristic senrch algodhnz is 011c which makes its decisions about what move to make next solely based on the minimum and/or maximum values of the heuristic evaluation f%nction of the successors of the current state. Note that such an algorithm may or may not include lookahcad. I,ookahcad is included by allowing it as part of the heuristic evaluation of a state. ‘ITis definition cncompasscs all the standard heuristic starch algorithms for one- and two-player games. Our main thcorctical result is the following: Ourcorne d~lclertttitrnliori plus tttove ittvm%mce arc suf$cietil cotrdilioiu for a hcuris/ic cvnluntiott Jtttction IO guarutttcr pcrfeccl play lty n hmrisfic sccrrch rflgori/httt. Its proof is as follows: Move invariance rcquircs that the heuristic value of any state and its successor resulting from an optimal move bc the same. Since an optimal solution path is just a scqucncc of optimal moves, move invariance implies that the heuristic evaluations of all states along an optimal solution path from any given state arc the snmc. Outcome dctcrmination cnsurcs that the heuristic value of the goal ;It tllC Clltl Of SIICll iI p;llll CqtlillS its CXilCt V;lltlC. ‘Ilicrcli~rc. 1~0th propcrlics logcthcr gllill2llLCC lIlil1 (IlC heuristic VillllC 01’ ilIly given state is i\ pcrftict predictor of [he CvCntuill outcome of that state given pcrfcct play. Thus, a heuristic starch algorithm need only gcncratc all successors of the current state, cvaluatc them, and choose the minimum or maximum value as appropriate to cnsurc optimal moves from cvcry state. While outcome dctcrmination and move invariance arc sufficient conditions for pcrfcct play, strictly speaking they arc not ncccssary conditions. ‘I’hc reason is that a heuristic function with thcsc propcrtics could bc composed with any function which prcscrvcs the maximum or minimum of iI set without changing the moves that would hc made. If’ WC ignore such order-prcscrving functions, Search: AUTOMATED REASONING / 149 however, outcome determination necessary for perfect play. and move invariance become Since outcome determination plus move invariance is equivalent to pcrfcct prediction, one way of intcrprcting the above result is that pcrfcct heuristics arc necessary and sufficient for pcrfcct play. On the surface, this stems somewhat contrary to well-known results such as the optimality of A* with inexact. but admissible, heuristics. Note, howcvcr. that A* doesn’t commit itself to making any moves until it has searched the optimal solution path all the way to the goal, and hence it knows the exact outcome of the best move bcforc it chooses that move. 2.2. Predicting Heuristic Performance The decomposition of pcrfcct prediction into the indcpcndent conditions of outcome dctcrmination and move invariance is useful for predicting heuristic pcrformancc qualitatively. For example, Manhattan Distance satisfies outcome dctcrmination in both the Fifteen Puzzle and Kubik’s Cube, as dots material count in both chcckcrs and Othello. Both heuristics, however, differ markedly in move invariance in their two rcspcctivc problems. Thus, our theory successfully distinguishes the useful from the usclcss heuristic in both casts. Furthermore, it provides a single, uniform intcrprctation of heuristic evaluation functions over both single- person and two-player garncs. 
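A small numerical check of the two properties, using the A* form f = g + h: on the toy weighted graph below (my own example), h is taken to be the exact remaining cost to the goal, and f is then constant along the optimal path and equal to the outcome at the goal.

```python
import heapq

# A small made-up weighted graph; 'G' is the goal.
edges = {"S": {"A": 2, "B": 5}, "A": {"B": 2, "C": 4}, "B": {"G": 3}, "C": {"G": 6}}

def dijkstra(source, graph):
    # Cheapest cost from source to every reachable node.
    dist, frontier = {source: 0}, [(0, source)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(frontier, (d + w, v))
    return dist

reverse = {}
for u, nbrs in edges.items():
    for v, w in nbrs.items():
        reverse.setdefault(v, {})[u] = w

g = dijkstra("S", edges)          # cost from the start to each node
h_star = dijkstra("G", reverse)   # perfect heuristic: exact cost to the goal
path = ["S", "A", "B", "G"]       # the optimal path in this graph (2 + 2 + 3 = 7)
print([g[n] + h_star[n] for n in path])   # [7, 7, 7, 7]: f is move invariant,
                                          # and at G it determines the outcome
```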
3. Learning Evaluation Functions In addition to unifying tic theory of heuristic tinctions. and making qualitative predictions about the pcrformancc of given evaluation functions for given problems. our theory can bc used as the basis of a method for learning heuristic functions. ‘I’hc main contribution of the theory to this problem is that it dccomposcs the global property of pcrfcct prediction into the two loci\1 propcrtics 01’ outcome dctcrminiltion and ITIOVC invariance. ‘I’hus. WC CM\ search for heuristics that satisfy OIIC of thcsc propcrtics, and then test to what cxtcnt the other is sittisficd as well. ‘I’hc basic idea is that since part of the charactcri~ation of a successful CValUiltioll function is in terms of invariimcc over single moves, candidate evaluation functions can bc optimized based on kll ill li~l3TliltiOll ill ;I plWhkl?l SplCC. Ill p;lr’~iW!ilr. OJlC Call SCilrCil li)r II lilnctioii which is illVariilll1 OVCI IllOvCS ;IlOllg ;I solution ~liltll. ‘I’his tcchniquc WiIS implicitly used by SillIJ~lCl’S [IO] pioneering cxpcrimcnts on Icarning chcckcrs evaluation functions, and by I<cndcll’s [9] more recent work on Icarning heuristics for the l:iftccn Pu7.71~. 13~10~ WC dcscribc some cxpcrimcnts which rcplacc Samuel’s ad hoc tcchniqucs with the well-understood method of linear rcgrcssion, and cxtcnd the method to the domilin of chess. 3.1. Description of the Method WC adopt the standard game-playing model of mini-max starch with static CVa~UiltiOJl at the ScarCh frontier [l I]. While other learning cxpcrimcnts have ftxzuscd on openings or cndgamcs [2,7.8]. we have addrcsscd the mid-game. Samuel [lo] observed that the most cffcctivc way of improving mid-game performance is to modify the evaluation function. The first game program to improve its pcrformancc by learning was Samuel’s checkers program [lo]. Although it also employed other learning techniques, it is mostly known for learning the cocfficicnts in its polynomial evaluation fimction. Samuel’s idea was that the diffcrcncc bctwccn the static evaluation of a board position and the backed-up value dcrivcd from a mini-max starch could bc used to modify the evaluation function. This is based on the assumption that for a given evaluation function, values based on looking ahead are more accurate thatJ purely static evaluations. Samuel’s program altcrcd the coefficients of the function at every move whcrc thcrc was a significant diffcrcnce between the value calculated by the evaluation function and that returned by the mini-max starch. The idea is to alter UK evaluation function so that it can calculate the backed-up value at the original state without having to look ahcad. An ideal evaluation function climinatcs the need for a mini-max starch since it always calculates the correct value for any state. The main diffcrcncc bctwccn our approach and Samuel’s is in how the value rcturncd by the mini-max starch is used to modify the evaluation function. Samuel cmploycd an ad-hoc technique based on correlation cocfficicnts and somewhat arbitrary correction factors. Our method is based on the well-understood tcchniquc of linear rcgrcssion. In addition, while his investigation focused on checkers. our cxperimcnts have been carried out in the more complex game of chess. 3.2. Coefficient Modification by Regression For pedagogical reasons. WC will explain the tcchniquc using the simple Cxamplc of a chcckcrs cvaluiltion function based only on the numbCrs of sin& picccs and kings. 
In other words, WC want to dctcrmine the rclativc value of the kings and picccs in an evaluation function of the form C’,E;+(>P; whcrc F, and FP arc the numbclx of picccs and kings, rcspcctivcly. Of COLI~SC. thcrc woufd also bc terms for the oppotlcnt’s material. but WC assume that the cocfficicnts have the s;lmc magnitude and opposite signs. WC StiJrt with :rn iniIial CStimiJtc of thC cocfficicnts, c.g. both N~\l;il IO OIIC. (iivcn il ~~;llIiCUlill~ I~O;ll~cf posilion. WC Cilll plllg in ValUCS IiJr /f; illld F’r ‘I’h~n. WC pcrliJrm iI Iook-iJllCild starch to sonic depth. CviJluatc the nodes at the frontier using the initial cstilnatc of the cocfficicnts. and back-up thcsc values using the mini-Jnax algorithm, resulting in a numerical value for the original position. This information can bc rcprcscntcd as an equation of the form (‘,/I;+ (‘>/I;= I<, whcrc the (; arc the paramctcrs of the C~lliltiOll. 1llC /‘) ;II’C tllC factors Of tllC CVi~llliltiOll function ‘or dcpcndcnt vilriablcs, and the I< is the backed-up mini-max value. One Can then perform a linc;Jr rcgrcssion on this ditta to dctcnninc the best-fitting values for the paramctcrs of the equation, thus in cffcct establishing the cocffioicnts of the factors in the evaluation function. 150 / SCIENCE Unfortunately, the result of the regression is not the best choice of cocfficicnts but rather a bcttcr cstimatc. The reason is that the right-hand sides of the equations are not exact but approximate values since they arc based on the same estimated cocfficicnts. Thus, the cntirc process must be rcpcatcd using the new coefficients derived from the rcgrcssion. These iterations are continued until the values converge. This iterative algorithm can bc viewed as hill-climbing in the space of cocfficicnts, with potentially all the normally associated problems of hill-climbing. In particular, thcrc may exist values which are locally stable but not globally optimal. No effcctivc way exists to detect such local stabilities except by drastically altering some of the cocfficicnts in the regression analysis to see if diffcrcnt maxima are encountered. If that is the cast, then these different evaluation functions can be played against each other to see which one is indeed the best. This learning method can be applied to any game which can be implcmcntcd using mini-max starch with static evaluation. Note that the learning is accomplished simply by the program playing games against itself, without any outside input. Our method was first explored in the simple game of 4x4x4 tic- tat-toe, and pcrformcd remarkably well. WC used six factors in the evaluation function. namely the number of rows, columns, and diagonals in which a side could win with cithcr one, two, or three picccs of the same color already in place. Not only did it order the factors of the evaluation function in increasing order of the number of picccs in place, but it also quickly rccognizcd which cocfficicnts had incorrect signs and rcvcrscd them. 3.3. Experiments with Chess As a serious test, WC chose Ihc game of chess and a simple evaluation function consisting only of material advantage. ‘I’hc cxpcrimcnt was to see if the Icarning program woultl approximate the classically acccptcd weights for the picccs: 9 for the queen, 5 for the rook, 3 for the bishop, 3 for the knight, and 1 for the pawn. ‘I’hc chess program was implcmcntcd using a two-ply (one full move) mini-max starch with alpha-beta pruning and quicsccncc. 1400 half-moves wcrc made bctwccn each rcgrcssion. 
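A minimal sketch of that regression step, using NumPy least squares on invented data: each row pairs the feature counts (single pieces, kings) of one position with the value a shallow minimax search backed up for it under the current coefficients. In the full method this refit is repeated with the new coefficients until they stop changing.

```python
import numpy as np

def refit(features, backed_up_values):
    # One regression step: find coefficients c minimizing ||F c - R||^2,
    # where each row of F holds one position's feature counts and R holds
    # the minimax value backed up under the current coefficients.
    F = np.asarray(features, dtype=float)
    R = np.asarray(backed_up_values, dtype=float)
    coeffs, *_ = np.linalg.lstsq(F, R, rcond=None)
    return coeffs

# Made-up data: (piece difference, king difference) per position and the
# value returned for that position by a shallow lookahead.
positions = [(3, 0), (1, 1), (0, 2), (2, 1), (-1, 2)]
backed_up = [3.1, 2.4, 2.9, 3.6, 1.8]
print(refit(positions, backed_up))   # roughly [1.04, 1.44]: kings come out
                                     # worth more than single pieces
```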
If ncithcr side won during it game it was stopped afkr 100 half-moves and a IlCW g;llllC WilS SlillICd. I:Or pUlJJOSCS OI’ LhC cxpcrimcnl, ;I win W;lS assigned one more than the total initial material value. and the individual piccc vh.x wcrc rounded off to the ncarcst 0.5. ‘I’hc picccs stabilized at: Queen, 8.0; rook, 4.0; bishop, 4.0; knight, 3.0: pawn, 2.0. The above results wcrc based on a starch of only two ply, plus quicsccncc. ‘I’his 111C;I11S tllilt LllC CllCSS plVglalI1 was playing a tactical game. trying to maximize material in the short run rather than to achicvc chcckmatc. Since the equations correspond to moves from cvcry phase of the game, the final values arc avcragc weights from the opening, midgamc. and cndgamc. I1crlincr has obscrvcd, howcvcr, that the optimal evaluation function is in general a function of the stage of the game [l]. Because of the weakness in the end game caused by the lack of planning the chess program could not take advantage of the rook’s incrcascd strength during the end game. Other picccs might suffer from similar effects. When we played the derived ’ function against the classical function in one hundred games, the derived hnction won scvcntccn games and lost sixteen. The rest wcrc draws. This does not mean that our dcrivcd function is optimal, only that it is as good as the classical one in the context in which it was learned, namely two ply starch using only a material evaluation tinction. 4. Conclusions WC have presented a theory which unifies the treatment of heuristic evaluation functions in single-person problems and two- person games. The theory characterizes a useful heuristic function as one which determines the outcome of a starch when applied to a terminal position, and is invariant over optimal moves. We have shown that these two propcrtics arc sufficient for pcrfcct play by a heuristic search algorithm. This local characterization is useful for making qualitative predictions about the pcrformancc of given heuristics, and foi the automatic learning of heuristic flmctions. In one cxpcrimcnt, our program was able to automatically learn a set of rclativc weights for the diffcrcnt chess pieces that arc as good as the classical values in the context in which they were lcarncd. 5. Acknowledgments This rcscarch has bcncfitted from discussions with Bruce Abramson and Judca Pearl. This rcscarch was supported by the National Scicncc Foundation under grant IST-8515302, and by an IliM Faculty Dcvclopmcnt Award. 6. References Dl Bcrlincr, Hans. On the construction of evaluation functions for large domains. In hceedirrgs of IK’A1-79, pages 53-55. lntcmltional Joint Confcrcnccs on Artificial Intclligcncc. Tokyo, Japan, A11g11st. 1979. PI 13crlincr. Hans, and Murray Campbell. Using chunking to solve chess pawn cndgamcs. Ar/ijkial Infelligence 23(1):97- 120, 1984. [3] Hart, P.E., N.J. Nilsson, and 11. Kaphacl. A formal basis for the heuristic dctcrmination of minimum cost paths. 11:‘l;l:’ Transactions OH System Scicttce aud C’ybcme~ics 4(2):100-107, 1968. Search: AUTOMATED REASONING / 1 j 1 PI Korf, R.E. Depth-first itcrativc-decpcning: An optimal admissible tree search. Artificial Intelligence 27:97-109, 1985. PI Korf, R.E. Macro-operators: A weak method for learning. Art$cial Inlelligence 26:35-77, 1985. El Lcnat, Douglas% The Nature of Heuristics. ArtifiCiul Inlelligence 19:189-249,1982. [71 Min ton, Steven. Constraint-based gcncralization, Learning game-playing plans from single examples. In Anrl/-84, pages 251-254. 
American Association for Artificial Intelligence, Austin, Texas, August 1984.
[8] Quinlan, J. Ross. Learning efficient classification procedures and their application to chess end games. In Michalski, R.S., J.G. Carbonell, and T.M. Mitchell (editors), Machine Learning, pages 463-482. Tioga, Palo Alto, Ca., 1983.
[9] Rendell, L. A new basis for state-space learning systems and a successful implementation. Artificial Intelligence 20:369-392, 1983.
[10] Samuel, A.L. Some studies in machine learning using the game of checkers. In Feigenbaum, E.A. and J. Feldman (editors), Computers and Thought. McGraw-Hill, N.Y., 1963.
[11] Shannon, Claude E. Programming a computer for playing chess. Philosophical Magazine (Series 7) 41:256-275, 1950.
INDEFINITE AND GCWA INFERENCE IN INDEFINITE DEDUCTIVE DATABASES Lawrence J. Henschen and Hyung-Sik Park Northwestern University Department of EECS Evanston, Illinois 60201 ABSTRACT This paper presents several basic results on compiling indefinite and GCWA(Generalized Closed World Assumption) inference in IDDB(Indefinite Dedutive Databases). We do not allow function symbols, but do allow non-Horn clauses. Further, although the GCWA is used to derive negative assumptions, we do also allow negative clauses to occur explicitly. We show a fundamental relationship between indefiniteness and indefinite inference. We consider three representation alternatives to separate the CDB(Clausa1 D W from the RDB(Relationa1 DB) . We present the basic ideas for compiling indefinite and GCWA inference on CDB and evaluating it through the RDB. Finally, we introduce decomposition theorems to evaluate disjunctive and conjunctive queries. I INTRODUCTION The reader is assumed to be familiar with the logic approach to databases, especially with the concept of guery compilation relative to an intensional database (IDB) . This paper presents some basic results on compiling indefinite and GCWA inference, i.e. generating queries that will correctly answer questions like, "Is a ground formula q indefinite?" and "Can we assume a ground atom q to be false?", in an IDDB (Indefinite Deductive Database) under the GCWA(Generalized Closed World Assumption) [Minker, 19821 . The notion of indefinite and GCWA inference can be defined by using the semantics of minimal model: A ground formula q is indefinite with respect to IDDB iff it is true in some minimal model of IDDB and false in some minimal model of IDDB. Such a q is false with respect to IDDB under the GCWA iff it is false in every minimal model. An IDDB is a deductive database which does not allow function symbols, but does allow negative and non-Horn clauses in addition to Horn clauses. Since the volume of negative facts may be too huge to be explicitly represented, deductive databases have traditionally treated negative information implicitly. While the negation of a ground atom can be assumed to be true straightforwardly by negation as (finite) failure[Clark, 1978][Reiter, 1978-b] in a Horn database, a generalized metarule [Bossu and Siegel, 1985][Minker, 19821 must be used in a DB with non-Horn clauses. These metarules are much more difficult to compute. We introduce a compiling technique to help overcome the computational problems and also to separate the deduction from the data retrieval. Three typical methods for dealing with the GCWA have been recently reported, First, Grant and Minker(GM) [Minker and Grant, 19811 developed an algebraic method which can answer a negative query in a generative database under the GCWA. Second, Yahya and Henschen(YH) [Yahya and Henschen, 19851 developed a deductive method which can answer a negative query a non-Horn database under the extended gWA. Third, Bossu and Siegel(BS) [Bossu and Siegel, 19851 developed a deductive method which can answer a guerY by subimplication. Subimplication is a generalization of GCWA that handles databases having no minimal model. It reduces to GCWA if the database has no function symbols. However, those methods have the following weak points: GM's method requires the system to generate all data base models. YH's method requires the query to be decomposed into several subqueries which must all be proved at guery time. 
BS's requires many subsumption tests in the computation of characteristic clauses and characteristic formulas[Bossu and Siegel, 19851. None of these methods seems practical enough for application to large databases. A major difficulty is that ordinary resolution applied to an IDDB cannot distinguish between ground atoms that are indefinite and those that can be assumed false under GCWA. This will be illustrated by an example in section 2. In order to overcome this problem we will investigate the relationships between indefiniteness and indefinite inference in Theorem Proving: AUTOMATED REASONING / 19 1 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. an IDDB. We will also discuss certain tradeoffs among three schemes for representing explicit negative data. We will then develop indefinite and GCWA inference engines by introducing a compilation technique for IDDBs similar in spirit to compilation for Horn databases[Chang, 19811 penschen and Naqvi, 1984][Reiter, 1978-a]. II INDEFINITE DEDUCTIVE DATABASES AND THEGCWA A deductive database is an extension of the proof theoretic relational DB[Reiter, 19841 in which new facts may be derived from the set of explicit facts, called the EDB(Extensiona1 DB), by using the deductive general laws, called the IDB(Intensiona1 DB) . There are two kinds of deductive databases relevant to our study, DDDB (Definite Deductive Databases) and IDDB(Indefinite Deductive Databases), and their properties are quite different. A DDDB allows only function-free Horn definite clauses, while an IDDB allows function-free indefinite (non-Horn) clauses as well. For an extensive survey of deductive databases, refer to [Gallaire, Minker, and Nicolas, 19841. In this paper, we use the notations "DB / - q" for "q can be derived from DB", "DB IX q" for "q cannot be derived from DB", "DB I-GCWA -q" for 'l-q can be assyed by GCWA", and "DB IXGCWA -q" for -q cannot be assumed by GCWA". A clause is written as a list of literals without commas. Since a typical database will have vast amounts of negative information, such information should be implicitly represented. To this end we distinguish two parts to a database - those formulas represented explicitly, e.g., IDDB or DDDB, and those parts not represented explicitly, for example, the negative facts that are to be assumed. Reiterpeiter, 1978-b] developed the closed world assumotion &WA) , for implicit negative information in a DDDB. CWA says that a negative ground unit clause, -p, can be assumed to be true if p cannot be derived from DDDB. However, CWA leads to inconsistencies when used with an IDDB. For example, let IDDB = Cp q). IDDB IX p, Hz;zrIDDB I-CWA -p, Similarly for q. IDDB + (-p, -q} I- nil. Minker[Minker, ‘19821 suggested the semantic . * * and gvntactlc deflnltlons of the GCWA, which can be used to handle negative information implicitly in an IDDB, and showed that they are equivalent. It is based on the concept of minimal model. An interpretation is specified by listing the ground atoms that are to be true. A . . w model is a model of a database such that no proper subset of the true atoms still satisfies the database. . . . Semantic Ihz.hnltlon QfGcwA: -+ (.G> can be assumed to be true with respect to IDDB iff P(s) is not in any minimal model of IDDB. Svnt . . . -P (GP tjc Definltlon QfGcwA: can be assumed to be true with respect to IDDB iff P(c) v C is not provable from IDDB for any C in S, where S is a set of all purely positive@ossibly empty) clauses not provable. 
Let DB = (p q, r, -s). Then, there are two minimal models of DB: Ml = Cp, r) and M2 = -h r3- Consider the following queries. Ql = r. This query is true. in DB. Semantic justification: r is in Ml and M2. Syntactic justification: DB + (-r) I- nil, and DB is consistent. That is, DB I- r. 42 = p. This query is -finite in DB. Semantic justification: p is in Ml, but not in M2. Syntactic justification: DB + (-p) IX nil, and DB + (p) IX nil. That is, DB IX p, DB IX -p, and DB IXGCWA -p. 43 = s. This query is provablv false in DB. Semantic justification: s is not in Ml or in M2. Syntactic justification: DB + (-s) IX nil, but DB + s I- nil. That is, DB I- -s. 44=t.Thisquerycanbemfalse in DB by GCWA. Semantic justification: t is not in Ml or in M2. Syntactic justification: DB + (-t) IX nil, andDB+t IXnil. DB IX t C for any positive or empty C. That is, DBlX t and DB IX -t, but DB I-GCWA -t. As shown in example 1, one difficulty in evaluating a query in an IDDB under the GCWA is that there is no difference between the indefinite and false cases when the ordinary approach (of trying to prove the query or its negation) is applied, as in Q2 and 44. The difference arises only when the query literal is considered in conjunction with additional positive parts. Hence, we need to develop specialized inference engines for indefinite and GCWA answers. III INDEFINITENESS AND INDEFINITE INFERENCE 192 / SCIENCE We introduce the basic notions for analyzing indefiniteness as follows: PIGC the set of minimal positive indefinite ;ound clauses implied by IDDB, where the notion of "minimal set" means that clauses in PIGC cannot be properly subsumed by any positive ground clause derivable from IDDB. If q is a ground atom, then PIGC[q] consists of members of PIGC that contain q- Recall that q is indefinite if it is true in some minimal model of DB and false in some minimal model of DB. We introduce three auxilliary functions. . . . inition True[q] = t if DB I- q f otherwise Indef[q] = t if q is indefinite in DB f otherwise Definition GCWA[q] = t if True[q] = f and Indef[q] = f f otherwise Notice that the above makes no distinction between provably false and false by assumption. Our main purpose is to determine which of the three values q has, true, false or indefinite. If it was important for a user to distinguish between the two false cases, additional tests would have to be made. -2 Let DBl = (p, p q), and DB2 = Cp q, r s). In DBl, PIGc LPI = (3, IndefCp] = f and GCWA[r] = t. In DB2, PIKCp] = (p q), Indef[p] = t and GCWACp-J = f. Lemma 1 (Minker's Lemma[Minker, 19821) Every minimal model of C is a minimal model of CP, where C = CP union CNP, CP denotes a set of all positive clauses provable from DB, and CNP denotes a set of clauses provable from DB each of which includes at least one negative literal. The lemma 1 says that Mcp = MC rather than Mcp < MC, where Mcp and MC mean sets of minimal models of CP and C, respectively. -2 C = q v C' is in PIGC iff q is indefinite in DB. Theorem 1 (Indefiniteness Theorem) Indef[q] = t if PIGC[q] is not empty f otherwise Corollawu GCWA[q] = t if True[q] = f and PIGC[q] is empty f otherwise The Indefiniteness Theorem says that PIGC characterizes the indefiniteness of an IDDB. It provides the theoretical basis for developing the indefinite inference. That is, it seems unavoidable to consider PIGC in some form or other for answering Indef[q] . 
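At the scale of Example 1, the classification can be read straight off the minimal models. The brute-force sketch below (propositional, with my own encoding of the clauses as literal sets) reproduces the answers given for Q1 through Q4; as noted above, it does not distinguish provably false from false by assumption.

```python
from itertools import combinations

# Example 1: DB = {p v q, r, -s}; t is an atom the DB never mentions.
atoms = ["p", "q", "r", "s", "t"]
clauses = [{"p", "q"}, {"r"}, {"-s"}]

def satisfies(interp, clause):
    # interp is the set of atoms taken to be true.
    return any(lit in interp if not lit.startswith("-") else lit[1:] not in interp
               for lit in clause)

models = [set(c) for k in range(len(atoms) + 1) for c in combinations(atoms, k)
          if all(satisfies(set(c), cl) for cl in clauses)]
minimal = [m for m in models if not any(n < m for n in models)]

def classify(q):
    holds = [q in m for m in minimal]
    if all(holds):
        return "true"          # in every minimal model; for ground atoms this
                               # coincides with provability from the DB
    if any(holds):
        return "indefinite"    # true in some minimal model, false in another
    return "false"             # provably false or assumed false under GCWA

print([sorted(m) for m in minimal])        # [['p', 'r'], ['q', 'r']]
print({q: classify(q) for q in "prst"})    # r true, p indefinite, s and t false
```

The enumeration of all interpretations only makes sense at this toy scale, which is exactly why the rest of the paper works with PIGC[q] rather than with the models themselves.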
However, since it is obviously unfeasible to derive PIGC in general, we should develop an appropriate mechanism to handle only PIGC[q], that is, the portion of PIGC relevant to the query at hand. IV COMPILATION AND REPRESENTATION ALTERNATIVES The goal of compiling is to separate the deductive process from the data retrieval process. For the problem at hand, namely determining PIGC[q] for a generic q, we need to find which resolvents of IDB clauses could lead to positive ground clauses containing q when data from the database is taken into consideration. Further, such a positive ground clause must not be subsumed by another positive clause. A simple example will illustrate the basic approach. Suppose the database contained the clauses -p (xl -0 (4 -Q(Y) -S(z) “Lki~.z) T(~JJ) 8 s 64 u o-4 * -M (9 T(v,lO,u) U(u) 8 where P, Q, M and 0 are simple relations stored in the EDB. Suppose we had the guery. "Is R(JOHN,lO,TOYS) indefinite or false?" The resolvent, -p (4 -0 (4 R(x,y,z) T(x,y,z) U(z) ' 2% produce a positive ground clause containing R(JOHN,lO,TOYS) if the appropriate data were in the relations P, Q and 0. On the other hand, if JOHN were in M, the third clause would derive a positive ground clause subsumming the one containing R(JOHN,lO,TOYS); that * R(JOHN,lO,TOYS) T(JOHN,lO,TOYS) U(TO;:j would not be in PIGC after all. Thus, we may answer false or indefinite after retrieving the appropriate data from P, Q, 0 and M and testing the resulting clauses for subsumption. Notice that if the third clause had contained T(v,25,u) instead, there would be no possibility for subsumption, and R(JOHN,lO,TOYS) T(JOHN,lO,TOYS) U(TOYS) would definitely be in PIGC. As with regular TV-=-Y compilation, the above kinds of analyses can be carried out on the basis of generic values for the attributes of R, and the deductive analysis separated from the data retrieval. In order to carry out the above deductive analysis, we identify certain sets of clauses. consists of The IIDB(Indefinite IDB) indefinite general clauses. Theorem Proving: AUTOMATED REASONING / 193 The DIDB(Definite IDB) consists of definite general clauses. The IEDB(Indefinite EDB) consists of non-Horn ground clauses. The DEDB(Definite EDB) consists of positive unit ground clauses. As will be seen below, the precise details of compiling will depend on whether negative clauses and clauses in IEDB are used at compile time or are to be handled at retrieval time. Therefore, we call CDB (Clausal Database) the set of clauses that are to be used in the compile phase. Then, IGI[ql is the set of minimal non-Horn clauses which contain a positive occurrence .of predicate q and are derivable from the CDB. A clause Cl is said to gotentiallv pubs- another clause C2 iff there is a positive subclause of Cl obtained by deleting the negative EDB literals and the positive literals for which there are corresponding negative EDB relations that subsumes a ground instance of a positive subclause obtained similarly from C2. PSUR[nhi] denotes a set of clauses which potentially subsume a clause, nhi, in NH[q] and are derivable from CDB. The sets NH[q] and PSUB[nhi] can be used to generate PIGC[q] if the negative data is used properly. To see why the negative data plays a crucial role, we consider three representation alternatives. 
We assume that #DEDB >> #-p > #-C > #IEDB > #DIDB >> #IIDB, where #X denotes the number of clauses or relational tuples in each database X, #-p the number of negative ground unit clauses explicitly occuring in DB, and #-C the number of negative nonunit clauses explicitly occuring in DB. -1 (a) CDB = IIDB + DIDB + negative nonunit clauses (b) RDB = IEDB + DEDB + negative unit ground facts RE RE ENTATI (a: CgB = II:: 4 DIDB + IEDB + negative nonunit clauses (b) RDB = DEDB + negative unit ground facts REPRE ENT TI N (a) CgB "II:, 2 DIDB + IEDB + negative clauses (b) RDB = DEDB fzF=-r = ( -P(x) Q(X), -U(x) S(x) V(x) a -T(x) P(x) R(x) S(x), -Q(a), -S(a), T(a), u (a> 3 - In representation 2, CDB = =C-P W Q (xl , -U (x) S (x) V(x) , -T(x) P(x) R(x) S(x)) and mB = (-Q(a), -S(a), T(a) , U(a) 3. Then NH[P] = ( nhl: -T(x) P (x) R (4 S (x> 3 and PSUB[nhl] = (-U(x) S(x) V(x), -T(x) t?El R (4 S (x> 3 - NH[P(cl)] = {nhl: -T(cl) P(c1) R(c1) s (cl) 3 and PSUB[nhl] ={-U(c1) S(c1) v (cl) , -T(cl) Q(c1) R(c1) S(cl)), where cl denotes a generic constant. Hence, PIGC[P(a)] is empty. In representation 3, -S(a)3 and RDB = Then, performing resolution on the CDB yields the following resolvents : -T (4 R (x> S (x) Q(x) , -P (a> , -U (a> V(a) I and -T(a) R(a). Hence, NH[P(cl)] = {nhl: -T(cl) P(c1) R (~1) S (~1) 3 and PSUB[nhl] = (-T(a) R(a)). Hence, PIGCp(a)] is empty. In example 3, notice that the clause -U(x) S(x) V(x) should be in PSUB[nhl] for representation 2, since the literal V may be resolved with the negative data in RDB if there is a negative table for V, and these resolutions are made at query time, not at compile time. In representation 3, any negative information about V would have to be in the CDB and would therefore be resolved at compile time. meorem 2 (Representation Theorem) 1. In representation-l. PIGCrul of DB is not equivalent to PIGC[qj of R%+ NH[qj + PSUB[nhi] . 2. In representation-2, PIGC[qj of DB is equivalent to PIGc[q] of RDB+NH[q-j + PSUB[nhi] . 3. In representation-3, PIGC[q] of DB is equivalent to PIGC[q] of RDB+NH[q-J + PSUB[nhi] . The representatian Zheorem indicates that representation schemes 2 and 3 enable us to compile the CDB with respect to NH[q] and PSUB[nhi] before query time. In order to avoid extra overhead in the deduction at compile time, we may prefer representation 1. However, we have some difficulties with the algebraic manipulation of the IEDB in representation 1: First, the ordinary relational table is not adequate for storing indefinite clauses due to the variable length of these clauses. Second, the representation theorem indicates that it is very difficult to develop the interface for generating PIGC[q] between CDB and RDB. In order to avoid some combinatorial explosion due to indefinite ground clauses at gu--Y* time, we may prefer resentatlon 2 and 3. Representation 2 needs an additional RDB operation for handling the negative unit ground clauses, while representation 3 needs additional resolutions on the CDB. When the negative ground facts are updated into the IDDB, 19-k / SCIENCE some modifications to the compiled program are needed in representation 3, but not in representation 2. However, assuming that the volume of explicit negative ground facts is not very large and updates to them are not frequent, representation 3 may be preferred, since it reduces the size of NH [sl and PSUB[nhi] and it incorporates the traditional relational DB as the RDB. 
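Whichever representation is chosen, the evaluation phase (Section V below) ultimately has to test the candidate positive ground clauses obtained from NH[q] against the clauses obtained from PSUB[nhi]; for positive ground clauses this subsumption test is simply inclusion of atom sets. The following sketch is our own illustration of that final filtering step (the encoding of atoms as strings and all names are ours), not the paper's implementation.

    # A positive ground clause is a frozenset of ground atoms, e.g.
    # BP(g,A) v BP(g,O)  ->  frozenset({"BP(g,A)", "BP(g,O)"}).

    def subsumes(c, d):
        """A positive ground clause c subsumes d iff every atom of c occurs in d."""
        return c <= d

    def surviving_candidates(candidates, subsumers):
        """Drop every candidate that is properly subsumed by some derivable clause.

        candidates: positive ground clauses instantiated from NH[q] with EDB data;
        subsumers:  positive ground clauses instantiated from PSUB[nh_i] (and any
                    derivable positive units); survivors are the members of PIGC[q].
        """
        return [cand for cand in candidates
                if not any(subsumes(s, cand) and s != cand for s in subsumers)]

    # Mirrors the BP(g,A) case of Example 4 below: BP(g,A) v BP(g,O) is knocked
    # out because the unit clause BP(g,A) is derivable.
    print(surviving_candidates([frozenset({"BP(g,A)", "BP(g,O)"})],
                               [frozenset({"BP(g,A)"})]))          # -> []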
V COMPILING INDEFINITE INFFRENCE IN A NON-RECURSIVE IDDB em Z,&?orem and its tell us that for answering Indef[q] and GCWA[q], we must calculate True Es1 and PIGC[q]. The representation theorem tells us that we may have the following scenario for developing more practical indefinite & GCWA inference enaines which consists of three major procedures, namely Compile[q], EvaI [q] , and Modify[C], in the following manner: (1) At m desiw time, the procedure Compile[q] compiles generic queries with respect to the CDB. (4 At cge~~~ time, the procedure Eval[q] evaluates True[q], k&f [ql , and GCWA[q] for a closed query, q, by evaluating the compiled program through the RDB. (3) At update time, the procedure Modify[C] modifies the compiled programs with respect to the update of a clause C into CDB. Updates in RDB require no program modification. The compilation of the CDB may be performed by various techniques such as linear resolution[Chang and Lee, 19731, connection g-raph[Kowalski, 1975][McKay and Shapiro, 1981][Sickel, 19761, a generalized version of the system graph[Lozinskii, 19851, etc., of which the effectiveness will depend upon the structure of IDDB. In this paper, we show a simple saturated resolution technique for compiling a non-recursive IDDB with a small CDB. However, even though the CDB of an IDDB consists of only a few clauses, a large volume of resolvents may be generated by simple saturation. We present a more effective compiling technique in penschen and Park, 1985][Park, 19851 by introducing NH-reduction theorems. Furthermore, we present a basic idea on compiling queries in a recursive IDDB in [Park, 19851. Comnjlation Phase For True[q], perform resolution on the CDB until saturation occurs, i.e. no more resolutions are possible. Construct a set of Horn clauses, called PTRUE[q], of which the positive literal unifies with q and the negative part consists of only base relations. For PIGC[q], perform resolution on the CDB until saturation occurs, and construct NH[q] and PSUB[q] defined in the previous section. F. aluation Phase Fzr True[q] evaluate the negative part of each Horn clause in PTRUE[q] by performing join operations through the RDB, until either True[q] = t or PTRUE[q] has been exhausted. For PIGC[q], evaluate the negative part of each clause in NH[q] and PSUB[nhi] and compute PIGC[q] and its potential subsuming clauses. Perform subsumotion tests on each clause in PIGC[~~, say pigc, by its potential subsuming clauses and relations in RDB relevant to pigc. Example 4 illustrates the compilation and evaluation of Indef[q] and GCWA[q] in a non-recursive IDDB by resolution, using representation scheme 3. The given IDDB partially describes the blood tYPe relationship between parents and children. -4 Base Relations: P(person, father, mother) B(person, blood-type) Virtual Relations: FBCperson, father-blood-type) MB(person, mother-blood-type) BPCperson, possible-blood-type) CDB: P(xl,x2,x3) & B(x2,x5) --' FB(xl,x5) P(xl,x2,x3) & B(x3,x5) --' MB(xl,x5) FB(xl,A) & MB(xl,O) --> BP(xl,A) V BP(xl,O) B(x4,x5) --> BP(x4,x5) -BP (g,O> RDB: P (a, i, j) P (b,m,n) P(e,a,b) P (f, a/b) P(g,a,b) B(a,A) B(b,O) B(e,A) . . S;ompihtlon. PTRUE[BP(cl,c2)] = {hl: B(x4,x5) --> BP(x4,x5) 3 NH[BP(cl,c2)] = (nhl: P(xl,x2,x3) & B(x2,A) & B(x3,O) --> BP(xl,A) V BP(xl,O)} PSUB[nhl] = (B(x4,x5) --> BP(x4,x5), P(g,x2,x3) & B(x2,A) & B(x3,O) --' BP (g,A) 3 - . Evaluation For the query BP(e,A), PTRUE[BP(e,A)] = {hl:B(e,A) --> BP(e,A)) True[BP(e,A)] = t by resolving hl in PTRUE and B(e,A) in RDB. 
Hence, Indef[BP(e,A)] = f and GCWA[BP(e,A)] = f.

For the query BP(f,A),
NH[BP(f,A)] = {nh1: P(f,x2,x3) & B(x2,A) & B(x3,O) --> BP(f,A) V BP(f,O)}
PSUB[nh1] = {B(f,A) --> BP(f,A)}
PIGC[BP(f,A)] = {BP(f,A) V BP(f,O)}
Since True[BP(f,A)] = f and PIGC[BP(f,A)] is not empty, Indef[BP(f,A)] = t and GCWA[BP(f,A)] = f.

For the query BP(g,A), PIGC[BP(g,A)] is empty, since the candidate clause BP(g,A) V BP(g,O) is subsumed by BP(g,A), which is itself derivable. That is, True[BP(g,A)] = t. Hence, Indef[BP(g,A)] = f and GCWA[BP(g,A)] = f.

This example is relatively simple because the indefinite predicate, BP, does not occur as a hypothesis of any rule. We point out that the complexity of the simple saturation method grows very fast as more indefinite predicates occur as hypotheses and as the length of resolution chains stemming from an indefinite hypothesis grows.

VI QUERY DECOMPOSITION

We introduce the following decomposition theorem to evaluate disjunctive and conjunctive queries from their unit subqueries.

Theorem 3 (Decomposition Theorem). Let CL1 and CL2 be different clauses. "*" denotes "don't care" and "x" denotes "t, f, or i (indefinite)".

1. Disjunctive decomposition
   CL1   CL2   CL1 V CL2
   t     *     t
   f     x     x
   i     i     i or t

2. Conjunctive decomposition
   CL1   CL2   CL1 & CL2
   f     *     f
   t     x     x
   i     i     i or f

Notice that the decomposition theorem shows a duality between disjunctive and conjunctive decomposition. In disjunctive decomposition, if all ground literals appearing in CL1 and CL2 are indefinite, CL1 V CL2 may be either indefinite or true (a sketch of this three-valued combination rule is given below, following the conclusion).

Let DB = {p V r, q V s, -p V -q}. Then the minimal models are M1 = {r, s}, M2 = {p, s}, and M3 = {q, r}. Let DB' = {p V r, q V s}. Then M1' = {p, q}, M2' = {p, s}, M3' = {r, q}, and M4' = {r, s} are the minimal models. Let CL1 = -p and CL2 = -q. Then both CL1 and CL2 are indefinite with respect to DB and DB'. However, CL1 V CL2 = -p V -q is true with respect to DB, while it is indefinite with respect to DB'.

Disjunctive queries can be evaluated as follows. Let Q = L1 V L2 V ... V Ln. Then determine the value of each literal Li by utilizing the compiled program for it, and evaluate Q by using the disjunctive decomposition theorem. In case all Li are indefinite, there are two ways to proceed. First, we may look for a straightforward refutation of DB & -Q to infer the value of Q. If nil is derived, Q is true; otherwise, Q is indefinite. Second, Q may be evaluated by utilizing the compiled programs of the unit queries as follows. Generate PIGC[Li] for each Li, and let pigc be a clause in PIGC[Li]. If Q is a positive clause and there is a pigc consisting only of ground atoms of Q, then Q is true with respect to DB; otherwise it is indefinite. For example, let DB = {p V r, q V s} and Q = p V q V r. All of p, q, and r are indefinite. Since we can generate p V r, which consists only of the ground atoms p and r appearing in Q, Q is true. Notice that Q may be compiled. Evaluation theorems for more complex queries, including conjunctive queries, are presented in [Park, 1985].

VII CONCLUSION

Our goal is to develop effective inference engines for indefinite databases. We have shown that PIGC is the key to determining when a positive ground literal is indefinite or can be assumed false under the GCWA. Further, we have shown which sets of resolvents must be generated in a compile phase in order to separate deduction from data retrieval. We have shown that two of the three obvious representation schemes allow such clause sets to be generated in a separate compile phase. We have shown how conjunction and disjunction can be handled.
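As noted in Section VI, the decomposition theorem is a pair of three-valued truth tables: 't' dominates disjunction, 'f' dominates conjunction, and the all-indefinite case is left open. The sketch below is our own direct transcription of those tables; it only reports the open case, which in practice is settled by the refutation or PIGC-based tests described in Section VI.

    def or3(v1, v2):
        """Disjunctive decomposition over the values 't', 'f', 'i' (indefinite)."""
        if "t" in (v1, v2):
            return "t"            # a true disjunct makes the disjunction true
        if v1 == "f":
            return v2             # a false disjunct is neutral
        if v2 == "f":
            return v1
        return "i or t"           # both indefinite: needs the extra test of Section VI

    def and3(v1, v2):
        """Conjunctive decomposition: the dual table."""
        if "f" in (v1, v2):
            return "f"            # a false conjunct makes the conjunction false
        if v1 == "t":
            return v2             # a true conjunct is neutral
        if v2 == "t":
            return v1
        return "i or f"           # both indefinite: needs further analysis

    print(or3("t", "i"))          # -> 't'
    print(and3("i", "f"))         # -> 'f'
    print(or3("i", "i"))          # -> 'i or t', e.g. -p V -q over DB vs. DB' above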
Work beyond that described in [Henschen and Park, 1985][Park, 19851 is needed to improve the actual compilation, in particular the generation of just the right resolvents in an effective way. This is especially true for recursive IDDBs. REFERENCES El1 PI [31 Bossu, G. and P. Siegel "Saturation, nonmonotonic reasoning and the . . . closed-world assumption." Artlflc1a.l telliam. 25 (1985) pp.13-63. Chang, C.L. "On evaluation of queries containing derived relations." In Advances in Data Base Theory Vol. 1. H. Gallaire, J. Minker, and J.M. Nicolas, Eds. Plenum Press, New York. (1981) pp. 235-260. Chang, C.L. and R.C.T. Lee. Svmbol~ JitQgic ZmdMechanicalTheoremProviw. Academic Press, New York, 1973. 196 / SCIENCE [41 [51 161 PI PI PI 1101 II111 1121 II131 El41 L-1 P61 Clark, K.L. "Negation as Failure." In !?g?%*%-* H. Gallaire and Plenum Press, York. (197A),Eg;: 293 -324. New P-7 Gallaire, H., J. Minker, and J. Nicolas. "Logic and Databases: a deductive approach." u !&nol~tlng Surveys. 10:2 (1984) pp. 153-185. Henschen, L.J. and S. Naqvi. "on compiling queries in recursive first-order databases." J.ACM 31:l (1984) pp. 47-85. Henschen, L.J. and H.S. Park. "Compiling the GCWA in indefinite deductive databases." in preparation for MaLand Work ~13 Deductive Databases and Li2g.k P=s=xrmw . Maryland, August, 1986. [181 1191 Kowalski, R. "A proof procedure using connection graphs." J.AcM 22:4 (1975) pp. 572-595. Lozinskii, E.L. "Evaluating queries in deductive database by generating." In Proc.IJCAI-85. PP. 173-177. Maier, D. Pelatlonal Database. ThS2 ?c%ce Computer Press I Maryland, 1983. McKay, D. and S. Shapiro. "Using active connection graphs for reasoning with recursive rules." In Proc. IJCAI-81. pp. 24-28. Minker, J. "On indefinite database and the closed world assumption." In Lecture Notes in .tZmwbx Science 138 Springer Verlag, 1982, pp. 292-308. Miriker, J. and J. Grant, J. "Answering queries in indefinite databases and the null value problems." University of Maryland, College Park, Maryland, July, 1981. Park, H.S. "Compiling queries in indefinite deductive databases under the generalized closed-world assumption." in preparation for Ph.D. dissertation, Department of EECS, Northwestern University, August, 1986. Reiter, R. "Deductive question answering on relational databases." In Logic, & D&a Bases. H. Gallaire and J. Minker, Eds. Plenum Press, New York, 1978-a, pp. 149-177. Reiter, R. "on closed world databases." In Logic & Databases. H. Gallaire and J. Minker, Eds. Plenum Press, 1978-b, pp 55-76. Reiter, R. "Towards a logical reconstruction of relational database theory." . M.L. In Eo5E==Y Mylopoulos, and J.W. Schmit: Eds: Springer-Verlag, New York, 1984, PP. 163-189. Sickel, S. "A search technique for clause interconnectivity graphs." C-25:8 (1976) pp. 823-8% Comx>uter* YaQm A. and L.J. Henschen. "Deduction in non-Horn databases." a Automated Beasoning. 1:2 Journal (1985) pp. 141-160. Theorem Proving: AUTOMATED REASONING / 197
QUERY ANSUERING IN CIRCUBSCRIPTIVE AND CLOSED-YORLD TBEORIES Teodor C. Przymusinski Department of Mathematical Sciences University of Texas, El Paso, TX 79968 <[email protected]> ABSTRACT. Among various approaches to hand1 ing incomplete and negative information in knowledge representation systems based on predicate logic, McCarthy’s circumscription appears to be the most powerful. In this paper we describe a decidable algorithm to answer queries in circumscriptive theories. The algorithm is based on a modif ied version of ordered linear resolution, which constitutes a sound and complete procedure to determine whether there exists a minimal model satisfying a given formula. The Closed-World Assumption and its generalizations, GCWA and ECWA, can be considered as a special form of circumscription. Consequently, our method also applies to answering queries in theories using the Closed-World Assumption or its generalizations. For the sake of clarity, we restrict our attention to theories consisting of ground clauses. Our algorithm, however, has a natural extension to theories consisting of arbitrary clauses. 1. Introduction We describe a decidable algorithm to answer queries in indefinite theories with the proper treatment of incomplete information and, in particular, with the correct representation of negative information. The need for such algorithms has been recently stressed in the literature (cf. [GMN], [RZ], [Mi]). Our algorithm is based on McCarthy’s theory of circumscription (see [M],[M2],[L],[L2],[L3]), which appears to be the most powerful among various approaches to handling incomplete and negative information in knowledge representation sys terns based on predicate logic. For the sake of clarity, in this paper we restrict our attention to theories consisting of ground clauses. Under natural conditions, explained in Section 5, our algorithm has a straighforward decidable extension to theories consieting of arbitrary clauses. Suppose that T is a fi rst order clausal form and F is a sen tence. We theory in develop a i¶Inimal model Linear Ordered resolution (MILO-resolution) which constitutes a sound and complete method to determine whether there exists a minimal model M of T satisfying the formula F. Since a circumscriptive theory CIRC(T) implies a formula H if and only if there are no minimal models M of T satisfying the negation of H, MILO-resolution gives rise to an algorithm for answering queries in circumscriptive theories. Our method also applies to answering queries in theories using Rei ter’s Closed-World Assumption (CWA; see [RI) or its generalizations. It has been shown in [L2], that under the assumptions of unique names, domain closure and finitely many terms, CWA (applicable only to definite theories) is equivalent to circumscription. A generalization GCWA of the CWA for indefinite theories has been proposed by Minker [Mi] (see also [WI). In [GPP], an extension, ECWA, of GCWA for non-unit clauses has been described and proven (under the 8 ame assumptions) to be equivalent to circumscription. Since the above mentioned assumptions are routinely made when applying the CWA; it can be argued that CWA and its generalizations constitute a special case of circumscription. Finally, we wish to point out that our algorithm will naturally suffer from all the inherent inefficiencies present in a general theorem prover. In fact, being more complex, it will be even more inefficient. 
Therefore, we see its main importance as an analytical tool to study theorem proving methods in general closed-world theories, which - when restricted to a suitable domain - becomes a sound and complete inference engine. It is fairly clear that if efficient implementation of a closed-world inference engine is the main objective, then strong syntactical restrictions have to be imposed on the theory involved. The so-called stratifiable databases (see [ABW] and [P]) provide a case in point.

2. Parallel Circumscription

From now on we assume that T is a first order theory consisting of finitely many ground clauses over the language L. We also assume that the Unique Names Assumption is satisfied for L, i.e., that t1 ≠ t2 for any two different terms t1 and t2.

Parallel circumscription was introduced by J. McCarthy [M],[M2]. Suppose that P = {P1,...,Pn} is a list of some predicate symbols from T that we intend to minimize and Q = {Q1,...,Qm} is the list of the remaining predicates, the so-called parameters. The process of circumscribing (or minimizing) the predicates P in T transforms T into a stronger second order theory CIRC(T;P), as defined below.

Definition 2.1. The circumscription of P in T is the following sentence:
CIRC(T;P): T(P) & (for all P') [ (T(P') & (P' -> P)) -> (P' = P) ],
where P' -> P stands for (for all x)(P'(x) -> P(x)).

This formula states that the predicates from P have a minimal possible extension under the condition T(P) (cf. [L],[L2],[L3]).

Remark. For the sake of simplicity, in this paper we do not consider variable predicates Z (see [M2]). At the cost of becoming more complex, the procedure can be generalized to handle variable predicates.

To clarify the notion of circumscription we reformulate it in model-theoretic terms.

Definition 2.2. For any two models M and N of T we write M <= N (mod P) if M and N differ only in how they interpret the predicates in P, and if the extension of every predicate in M is a subset of its extension in N.

This relation is a partial order and hence we can talk about minimal models M w.r.t. <= in the class S of all models of T. Such models are called P-minimal models of T. The following result is fundamental:

Theorem 2.3. [L] A structure M is a model of CIRC(T;P) iff M is a P-minimal model of T. In other words, for any formula F we have CIRC(T;P) |= F iff M |= F for every P-minimal model M of T.

Our algorithm will be based on the following important characterization of circumscription obtained in [GPP], stating, in effect, that circumscription is equivalent to the so-called Extended Closed-World Assumption (below, |- denotes derivability and |/- non-derivability):

Theorem 2.4. [GPP] Suppose that F is a ground formula. Then CIRC(T;P) |= F if and only if there is no clause C such that:
(i) C does not contain literals from P-;
(ii) T |- -F v C, but T |/- C.

Theorems 2.3 and 2.4 yield the following:

Corollary 2.5. [GPP] Suppose that K is a ground formula. There exists a P-minimal model M of T satisfying K if and only if there exists a clause C such that:
(i) C does not contain literals from P-;
(ii) T |- K v C, but T |/- C.

The purpose of the MILO-resolution defined in the next section is to determine the existence of such a clause C and thus the existence of a P-minimal model.

3. Minimal Model Resolution

In this section we describe a Minimal model Linear Ordered resolution (MILO-resolution) which constitutes a sound and complete method to determine whether there exists a P-minimal model M of a theory T satisfying a given formula F.
Since a circumscriptive theory CIRC(T;P) implies a formula H if and only if there are no P-minimal models M of T satisfying the negation of H, MILO-resolution leads to an algorithm for answering queries in circumscriptive theories. MILO-resolution is a modification of the ordered linear resolution (OL-resolution; see [CL]). We denote by P- the set of all negative literals, whose predicate symbols are in P and we consider every clause as an ordered list of literal8 {ll,...,lm]. By an extended clause we mean an ordered list of literals, some of which may be framed. A framed literal k is denoted by [kl. Framed literal8 are merely used for recording those literals that have been resolved upon ; they do not participate in the resolution. An extended clause is a tautology if it contains a pair of unframed complementary literals. An extended clause C subsures an extended clause D if the set of unframed literals of C is contained in D. Now we are ready to define ?lILD-deduction. For readers unfamiliar with the OL-deduction, we have indicated in bold case those parts of the definition that have to be removed to obtain standard OL-deduction. Definition 3.1. Given a theory T and a clause cO’ a i¶ILD-deduction of a clause C from T + C is any sequence of extended clause! C ,C CO in which C. is generated from C the follow!:; rules: i ac!or&fng”tZ (i> First, an extended clause D i+l is constructed, which is the ordered resolvent of ci=Ill’...‘lm] and some clause B={kl,...,ks} from T upon the first literal 1 j in. Ci that belongs to p-9 i.e. D i+l = ll,.-,~j_l~kl'"~ku_l~ku+l".'ks~[ljl'lj+l lm I l l , where k =ll U j (framed literal [l.] is used to J record the performed operation); (ii) The clause C is obtained from by performing the folfziing D. reductions in GA order specified: (a) deleting any unframed literal8 k in for which there exists a framed literal [lk] %+bi+l ; (b) merging any identical literal8 in D i+l to the right; Theorem Proving: AUTOMATED REASONING i 1 tJ7 (c) removing any framed literala in D. that are not preceded by unframed literal8 f++o)l P-. (iii) The clause Ci+ and it cannot be subsume a cannot be a tautology by any of the previous clauses. I As indicated above, MILO-deduction differs from the standard ordered 1 inear deduct ion (OL-deduction; see [CL]) only by: (1) restricting the resolution to literal8 from P- and (2) removing all those framed literals in D +l # that are not preceded by unframed li terals rom P-, rather than just removing those framed literal6 which are not preceded by any unframed literals. Definition 3.2. A IlILO-deduction of a clause C fromT+CD f?omT+Co, if is called a HILO-derivation of Cn The process of finding called a HILO-resol .ution.l a MILO-derivation is Let us explain the meaning of this definition. First of all, if C does not contain any literal8 from P- then: according to Definition 3.1, no further deduction can be performed from it, and therefore C is a terminal clauee. Secondly, it is not diffikrlt to show, that if K is a conjunction of literals and if a clause C is MILO-derivable from lK, then T k K v C”, but T tf C Corollary” 2.5. shows “ihat which in view of there exists a P-minimal model of T satisfying K. This establishes the easy part (soundness) of the following fundamental result: Theorem 3.3. (Soundness and completeness of the HILO-resolution) Suppose that K is a conjunction of literals. There exists a P-minimal model of T such that M k K iff there exists a MILO-derivation from T + 1K .I Example 3.4. 
Suppose that T consists of the following clauses: (1) s(C) V -IS(B) (2) s(A) v s(B) v is(C) (3) s(A) v s(B) v s(C) and suppose that P = (8). The following deduction is a MILO-derivation from T + l(s(B)hs(C)) (literal8 resolved upon are underlined and side clauses are given in parentheses): ls(B~Vls(C) I (2) s(A)vxI(C)V[~S(B)]V~S(C) 1 (reduction) s(A)vls(C) I (1) s(A)vls(B1V[ls(C)] I (3) ~(A)~~(C)V[~~(B)JV[~~(C)] 1 (reduction) s(A) because, it is easy to verify, using e.g. standard OL-resolution, that TVs(A). Therefore, Theorem 3.3 implies that there exists a P-minimal model of T in Example 3.4 satisfying s(B)hs(C).I Although the notions of HILO-deduction and OL-deduction are similar, the proof of Theorem 3.3 is considerably more involved than the proof of the soundness and completeness of the standard OL-resolution. This is due to the special treatment of literals from P-. Suppose now that F is any formula. We can obviously assume that F is represented in normal disjunctive form, i.e. F=KlV....vK,, where Ki’s are conjunctions of literale. Corollary 3.5. For a formula F the following conditions are equivalent: (i) there exists a P-minimal model tl of T such that H I= F ; (ii) there exists a P-minimal model kl of T such that M I= Ki, for some i; (iii) there exists a MILO-derivation from T+,Ki , for some i.@ From the description of the MILO-resolution, it is clear that its role is to reduce the original problem of the existence of P-minimal models M of T satisfying a given formula F to the problem of establishing whether a Riven clause C is derivable from T. The last problem can bg handled by a standard theorem prover. Obviously, if the theory T is not decidable, we will not be always able to establish that C is not derivable from T. This dependence on the aecidability of T is not surprising: after all our query concern6 the existence of specific models of T. 4. Query answering in circumscriptive and closedjworld theories Suppose that F is any formula. We can obviously assume that F is represented in normal conjunctive form, i.e. F = G A...AG are clauses. From Theorem 2. 3 , where G ‘8 and torollary 5.5 we easily obtain: Theorem 4.1, For a formula F the following conditions are equivalent: (i> CIRC(T;P) Lt F ; (ii) there is a P-minimal model for -IF; (iii) there is a P-minimal model for lGi, for some i ; (vi) there is a MILO-derivation for some i .I from T+Gi, Corollary 4.2. For conditions are equival (1) CIRC(T;P) k (ii) there is no (iii) for every i for 1G.; (vi) for every i from T+Gi .I a formula F the following ent: F i P-minimal model for 1F; there is no P-minimal model there is no tlILO-derivation According to Theorem 4.1, 188 / SCIENCE CIRC(T;P) does not imply is(B) v is(C), where T is the theory described in Example 3.4.1 The following corollary is a special case of results established in [EMR] and [GPP] and shows that as long as F does not contain any literal8 from P-, F is implied by CIRC(T;P) iff it is derivable from T, which further explains the reduction process described in the previous section. Corollary 4.4. ([EMRJ,[GPP]) Suppose that F is a formula which does not contain any negative occurrences of predicates from P. Then: T + F iff CIRC(T;P) t= F .I Corollary 4.2 leads to the following decidable algorithm for query answering in circumecriptive theorier based on tlILO-resolution. Theorem 4.5. (Decidable Query Answering Algorithm). 
The following procedure constitutes a decidable algorithm for determining whether a given formula F is implied by a circumscriptive theory CIRC(T;P): Step 1. Represent F in normal conjunctive form, i.e. clauses. let F = Glh...hGm, where Gi’s are Step2. For all j=l,...,m use the depth-first search on the HILO-resolution tree with the top clause G with cla se8 H that do not contain literal8 d to find all MILO-deductions terminating from P-. Step 3. If no such terminal clauses H are found for any j=l,...,m, then CIRC(T;P) b F. Step 4. Else, for any terminal clause H found use any decidable standard theorem prover to determine whether T t- H. Step 5. If there is an H such that T HH, then CIRC(T;P) PC F, else CIRC(T;P) I= F.I The decidability of the above algorithm follows from the fact that, due to the subsumption check in the definition of HILO-deduction, the search tree for MILO-resolution is finite. The following example illustrates the above algorithm. In order to show that the algorithm is not limited to ground clauses, we apply it to clauses that contain variables. Example 4 .6.(c f.[B S]) Suppose that T is given by the foll owing clauses: our theory (1) learns (x, Latin)vlearns(x, Greek) (- senior(x) (2) learns(x, French)Vlearns(x,Spanish) <- junior(x) (3) senior(x)Vjunior(x) (4) senior(Ann) (5) learns(Ann,Latin). Suppose that P={learns,senior,junior} and that we want to find out whether CIRC(T;P) I= llearns(Ann,Greek). As shown below (using obvious abbreviations), all MILO-deductions from T + llearns(Ann,Greek) terminate with clauses implied by T ( because T+learns(Ann,Latin) ), thus showing that CIRC(T;P) I= llearns(Ann,Greek). ll(A,Gl I (1) l(A,L)v~s(A>v[~l(A,G>l I (4) \ (3) I \ l(A,L) l(A,L)Vjunior(A) Similarly, we can show that: CIRC(T;P) b llearns(Ann,French)Allearns(Ann,Spanish). On the other hand, if P={learns,senior) then, as shown below, there exists a MILO-derivation from T + llearns(Ann,French), thus CIRC(T;P) Pt llearns(Ann,French). Similarly, CIRC(T;P) Pt llearns(Ann,Spanish). ll(A,F) I (2) l(A,S)Vlj(A)V[ll(A,F)] I (reduction) l(A,S)Vlj(A) It is easy to verify that THl(A,S)Vlj(A).I Beaark. Under the assumptions mentioned in the introduction, which are routinely made when CWA is applied, the Extended Closed-World Assumption [GPP] is exactly equivalent to circumscription, i.e. for any formula F we have: ECWA(T;P)l-F iff CIRC(T;P)l=F. In particular, for any unit clause F, GCWA(T;P)kF iff CIRC(T;P)t=F, where GCWA stands for the Generalized Closed-World Assumption of J. Minker [IYi]. Moreover, for Horn clauses, all the four approaches - CWA,GCWA,ECWA and circumscription - coincide. This shows that our methods apply to answering queries in closed-world the0ries.l 5, Concluding remarks For the sake of simplicity, we have presented our results under the assumption that all clauses are ground, i.e. in the propositional case. This assumption is not necessary. Without significant changes, our results can be generalized to the following caee: (1) the theory T consists of any, not necessarily ground, clauses; (2) the query F in Section 4 is universal: (3) the Unique Names Axiom is assumed. In particular, there is no need to replace the theory T by the set of ground instances of its clauses, because the algorithm described in Section 4 (with natural modifications) works properly with variables. Moreover, if the language L does not contain function symbols, then the algorithm remains decidable. 
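For small ground theories like those in Examples 3.4 and 4.6 (after grounding), the answers produced by this algorithm can be cross-checked by brute force: by Theorem 2.3 (equivalently, Corollary 4.2), CIRC(T;P) |= F exactly when F holds in every P-minimal model of T. The sketch below is such a reference check under our own encoding; it is not the MILO-based procedure of Theorem 4.5, and all names in it are ours.

    from itertools import combinations

    def pred(atom):
        """Predicate symbol of a ground atom written as 'pred(args)'."""
        return atom.split("(")[0]

    def satisfies(model, clause):          # clause = (positive atoms, negative atoms)
        pos, neg = clause
        return bool(pos & model) or bool(neg - model)

    def models(theory, atoms):
        atoms = sorted(atoms)
        return [frozenset(sub) for k in range(len(atoms) + 1)
                               for sub in combinations(atoms, k)
                               if all(satisfies(frozenset(sub), c) for c in theory)]

    def p_minimal_models(theory, atoms, p_preds):
        """Models with no model that agrees on the parameters and is strictly
        smaller on the minimized predicates (Definition 2.2)."""
        ms = models(theory, atoms)
        fix = lambda m: frozenset(a for a in m if pred(a) not in p_preds)
        var = lambda m: frozenset(a for a in m if pred(a) in p_preds)
        return [m for m in ms
                if not any(fix(n) == fix(m) and var(n) < var(m) for n in ms)]

    def circ_entails(theory, atoms, p_preds, query):
        """CIRC(T;P) |= query iff query holds in every P-minimal model."""
        return all(query(m) for m in p_minimal_models(theory, atoms, p_preds))

    # Example 3.4: T = {s(C) v -s(B), s(A) v s(B) v -s(C), s(A) v s(B) v s(C)}, P = {s}.
    T = [({"s(C)"}, {"s(B)"}),
         ({"s(A)", "s(B)"}, {"s(C)"}),
         ({"s(A)", "s(B)", "s(C)"}, set())]
    query = lambda m: not ("s(B)" in m and "s(C)" in m)      # -s(B) v -s(C)
    print(circ_entails(T, {"s(A)", "s(B)", "s(C)"}, {"s"}, query))   # -> False

The False answer matches the earlier observation that CIRC(T;P) does not imply -s(B) v -s(C) for this theory: the P-minimal model {s(B), s(C)} satisfies s(B) & s(C).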
Theorem Proving: AUTOMATED REASONING / 189 In [BS], Bossu and Siegel describe an entirely different theorem proving procedure to answer queries in circumscriptive theories. Their procedure, however, is restricted to the so called ‘groundable clauses’ and applies only to the case when all predicates are minimized. Moreover, it seems that in many instances it can be grossly inefficient as it always deals with the entire set of clauses, not just with those that pertain to a particular query. Other methods of answering queries in closed-world theories can be found in [GM], [YHI and [GP]. Grant and Minker’s algorithm [GM] is restricted to ground positive clauses. Yahya and Henshen’s algorithm also assumes that all clauses are ground and seems too inefficient to be implemented in practice. Gelf ond and Przymusinska’s approach [GP] is sound but often far from completeness. Finally, Clark’s QEP-procedure [Cl, essentially equivalent to the one implemented in PROLOG, correctly evaluates queries under CWA only for Horn clauses. ACKNOYLEDGMENTS Full description of the MILO-resolution for non-ground clauses including proofs and variable predicates will appear elsewhere. The author is grateful to Michael Gelfond and Halina Przymusinska for introducing him to the subject and for suggesting the problem. The author is also obliged to David Etherington and Vladimir Lifschitz for helpful comments. REFERENCES [ABW] Apt, K., Blair, H. and Walker, A., “Towards a Theory of Declarative Knowledge”, preprint 1986. CBS] Bossu, G. and Siegel, P., “Saturation, Nonmonotonic Reasoning and the Closed World Assumption”, Artificial Intelligence 25(1985), 13-63. [CL] Chang, C., Lee R.C., Symbolic Logic and Mechanical Theorem Proving, Academic Press, New York 1973. [Cl Clark, K.L., “Negation as Failure”, in: Logic and Data Bases (H.Gallaire and J.Minker, Eds.), Plenum Press, New York 1978, 293-322. [EMR] Etherington, D., Mercer, R. and Reiter, R., “On the Adequacy of Predicate Circumscription for Closed-World Reasoning”, Computational Intelligence l(1985). [GMN] Gallaire,H., Minker, J. and Nicolas, J., “Logic and Databases : A Deductive Approach”, Computing Surveys 16(1984), 153-185. [GP] Gelfond, Il. and Przymusinska, H., *‘Negation as Failure: Careful Closure Procedure”, Artificial Intelligence, to appear. [GPP] Gelfond, M., Przymusinska, H. and CGMI [Ll [L21 [L31 [Ml [HZ1 [nil [PI CR1 [=!I CYHI Przymusinski, T., “The Extended Closed World Assumption and its Relationship to Parallel Circumscription”, Proceedings ACM SIGACT-SIGMOD Symposium on Principles of Database Systems, Cambridge, Mass. 1986, 133-139. Grant, J. and Hinker, J., “Answering Queries in Indefinite Databases and the Null Value Problem”, preprint. Lifschitz, V., “Computing Circumscription’*, Proceedings IJCAI-85, Los Angeles 1985, 121-127. Lifschitz, V., “Closed World Data Bases and Circumscription”, Artificial Intelligence, 27(1985), 229-235. Lifschitz, V., ‘Pointwise Circumscription”, preprint. McCarthy, J., “Circumscription - a Form of Non-Monotonic Reasoning”, Artificial Intelligence 13(1980), 27-39. McCarthy, J “Applications of Circumscription’;o Formalizing Common Sense Knowledge”, AAAI Workshop on Non-Monotonic Reasoning 1984, 295-323. Minker, J., ‘On Indefinite Data Bases and the Closed World Assumption”, Proc. 6-th Conference on Automated Deduction, Springer Verlag, 292-308. Przymusinski, T., “On the Semantics of Stratified Deductive Databases”, to appear. 
Reiter, R., “On Closed-World Data Bases”, in: Logic and Data Bases (H.Gallaire and J.tlinker, Eds.), Plenum Press, New York 1978, 55-76. Reiter’, R., ‘*Towards a Logical Reconstruction of Relational Database Theory”, in: On Conceptual Modeling (M.Brodie et al., Eds.), Springer Verlag. Yahya, A. and Henschen, L., “Deduct ion in Non-Horn Databases”,Journal of Automated Reasoning 1(2)(1985),141-160. lc)O / SCIENCE
LEARNING WHILE SEARCHING IN CONSTRAINT-SATISFACTION-PROBLEMS* Rina Dechter Artificial Intelligence Center Hughes Aircraft Company, Calabasas, CA 91302 and Cognitive Systems Laboratory, Computer Science Department University of California, Los Angeles, CA 90024 ABSTRACT The popular use of backtracking as a control strategy for theorem proving in PROLOG and in Truth-Maintenance- Systems (TMS) led to increased interest in various schemes for enhancing the efficiency of backtrack search. Researchers have referred to these enhancement schemes by the names ‘ ‘Intelligent Backtracking’ ’ (in PROLOG), ‘ ‘Dependency- directed-backtracking” (in TMS) and others. Those improve- ments center on the issue of “jumping-back” to the source of the problem in front of dead-end situations. This paper examines another issue (much less explored) which arises in dead-ends. Specifically, we concen- trate on the idea of constraint recording, namely, analyzing and storing the reasons for the dead-ends, and using them to guide future decisions, so that the same conflicts will not arise again. We view constraint recording as a process of learning, and examine several possible learning schemes studying the tradeoffs between the amount of learning and the improve- ment in search efficiency. I. INTRODUCTION The subject of improving search efficiency has been on the agenda of researchers in the area of Constraint-Satisfaction- Problems (CSPs) for quite some time [Montanari 1974, Mackworth 1977, Mackworth 1984, Gaschnig 1979, Haralick 1980, Dechter 19851. A recent increase of interest in this sub- ject, concentrating on the backtrack search, can be attributed to its use as the control strategy in PROLOG [Matwin 1985, Bruynooghe 1984, Cox 19841, and in Truth Maintenance Sys- tems [Doyle 1979, De-Kleer 1983, Martins 19861. The terms “intelligent backtracking”, “selective backtracking”, and “dependency-directed backtracking” describe various efforts for producing improved dialects of backtrack search in these systems. The various enhancements to Backtrack suggested for both the CSP model and its extensions can be classified as fol- lowed: 1. Look-ahead schemes: affecting the decision of what value to assign to the next variable among all the con- sistent choices available [Haralick 1980, Dechter 19851. *This work was supported in part by the National Science Foundation, Grant #DCR 85-01234 2. Look-back schemes: affecting the decision of where and how to go in case of a a dead-end situation. Look-back schemes are centered around two funda- mental ideas: a. Go-back to source of failure: an attempt is made to detect and change previous decisions that caused the dead-end without changing decisions which are irrelevant to the dead-end. b. Constraint recording: the reasons for the dead-end are recorded so that the same conflicts will not arise again in the continuation of the search. All recent work in PROLOG and truth-maintenance system, and much of the work in the traditional CSP model is concerned with look-back schemes, particularly on the go- back idea. Examples are Gaschnig’s “Backmark” and “Backjump” algorithms for the CSP model [Gaschnig 19791 and the work on Intelligent-Backtracking for Prolog [Bruynooghe 1984, Cox 1984, Matwin 19851. The possibility of recording constraints when dead-ends occur is mentioned by Bruynooghe [Bruynooghe 19841. In truth-maintenance systems both ideas are implemented to a certain extent. 
How- ever, the complexity of PROLOG and of TMS makes it difficult to describe (and understand) the various enhance- ments proposed for the backtrack search and, more impor- tantly, to test them in an effort to assess their merits. The general CSP model, on the other hand, is considerably simpler, yet it is close enough to share the basic problematic search issues involved and, therefore, provides a convenient framework for describing and testing such enhancements. Constraint-recording in look-back schemes can be viewed as a process of learning, as it has some of the proper- ties that norrnally characterize learning in problem solving: 1. The system has a learning module which is indepen- dent of the problem-representation scheme and the algorithm for solving problem instances represented in this scheme. 2. The learning module works by observing the perfor- mance of the algorithm on any given input and record- ing some relevant information explicated during the search. 3. The overall performance of the algorithm is improved when it is used in conjunction with the learning module. 178 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. 4. When the algorithm terminates, the information accu- mulated by the learning module is part of a new, more knowledgeable, representation of the same problem. That is, if the algorithm is executed once again on the same input, it will have a better performance. Learning has been a central topic in problem solving. The task of learning is to record in a useful way some infor- mation which is explicated during the search and use it both at the same problem instance and across instances of the same domain. One of the first applications of this notion involved the creation of macro-operators from sequences and sub- sequences of atomic operators that have proven useful as solu- tions to earlier problem instances from the domain. This idea was exploited in STRIPS with MACROPS [Fikes 19711. A different approach for learning macros was more recently offered by [Korf 19821. Other recent examples of learning in problem solving are: the work on analogical problem solving [Carbonell 19831, learning heuristic problem-solving stra- tegies through experience as described in the program LEX [Mitchel 19831 and developing a general problem solver (SOAR) that learns about aspects of its behavior using chunk- ing [Laird 19841. In this paper we examine several learning schemes as they apply to solving general CSPs. The use of the CSP model allows us to state our approach in a clear and formal way, provide a parameterized learning scheme based on the time-space trade-offs, and analyze the trade-offs involved theoretically. We evaluated this approach experimentally on two problems with different levels of difficulty. II. THE CSP MODEL AND ITS SEARCH-SPACE A constraint satisfaction problem involves a set of n variables Xl , . . . . X,, each represented by its domain values, R 1, . . . , R, and a set of constraints. A constraint Ci(Xil, * * * ,Xij) is a subset of the Cartesian product Ri, x * * * xRij which specifies which values of the variables are compatible with each other. A solution is an assignment of values to all the variables which satisfy all the constraints and the task is to find one or all solutions. A constraint is usually represented by the set of all tuples permitted by it. A Binary CSP is one in which all the constraints are binary, i.e., they involve only pairs of vari- ables. 
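Such a binary CSP has a direct computational representation: each domain as a set of values and each constraint as the set of value pairs it permits. The sketch below is our own illustration on a small hypothetical problem (it is not the Figure 1 example discussed next); it shows the consistency test that backtracking applies whenever it tries to append a new instantiation to a partial assignment.

    # Domains as sets, constraints C(X,Y) as the sets of permitted value pairs.
    domains = {"X1": {"a", "b"}, "X2": {"a", "b", "c"}, "X3": {"a", "b"}}
    constraints = {
        ("X1", "X3"): {("a", "b"), ("b", "a")},               # hypothetical constraints
        ("X2", "X3"): {("a", "a"), ("b", "b"), ("c", "b")},
    }

    def consistent(assignment, var, value):
        """Does var=value satisfy every constraint applicable to the assignment?"""
        for (x, y), allowed in constraints.items():
            if x == var and y in assignment and (value, assignment[y]) not in allowed:
                return False
            if y == var and x in assignment and (assignment[x], value) not in allowed:
                return False
        return True

    def backtrack(assignment, order):
        """Naive backtracking over a fixed variable ordering; returns one solution."""
        if len(assignment) == len(order):
            return dict(assignment)
        var = order[len(assignment)]
        for value in sorted(domains[var]):
            if consistent(assignment, var, value):
                assignment[var] = value
                solution = backtrack(assignment, order)
                if solution is not None:
                    return solution
                del assignment[var]
        return None          # dead-end: no value of var is consistent with the state

    print(backtrack({}, ["X1", "X2", "X3"]))    # -> {'X1': 'a', 'X2': 'b', 'X3': 'b'}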
A binary CSP can be associated with a constraint- graph in which nodes represent variables and arcs connects pairs of constrained variables. Consider for instance the CSP presented in figure 1 (from [Mackworth 19771 ). Each node represent a variable whose values are explicitly indicated, and the constraint between connected variables is a strict lexico- graphic order along the arrows. Xl x2 Figure 1: An example CSP Backtracking works by provisionally assigning con- sistent values to a subset of variables and attempting to append to it a new instantiation such that the whole set is con- sistent. An assignment of values to subset of the variables is consistent if it satisfies all the constraints applicable to this subset. A constraint is applicable to a set of variables if it is defined over a subset of them. The order by which variables get instantiated may have a profound effect on the efficiency the algorithm [Freuder 19821 since each ordering determine a different search space with different size. The ordering can be pre- determined, or could vary dynamically, in which case the search space is a graph whose states are unordered subsets of consistently instantiated variables. The methods suggested in this paper are not dependent on the particular ordering scheme chosen, and we assume, without loss of generality, that the ordering is given as part of the problem input. Moreover, in Section 5 we generate different instances of a problem, for our numerical experiments, by simply changing the ordering of the variables of the same problem. Another issue that have influence on the size of the search space is the CSP’s input representation, i.e. a set of variables, their domains and the set of explicit constraints. It defines a relation among the variables, consisting of those tuples satisfying all the constraints, or the set of all solutions. There may be numerous equivalent CSP representations for the same set of solutions and some may be better then others since they yield a smaller search space. One way of improv- ing the representation is by inducing, or propagating con- straints [Montanari 1974, Mackworth 19771. For example, the constraints C(X,Y) and C(Y,Z) induce a constraint C(X,Z) as follows: A pair (x,2) is allowed by C(X,Z) if there is at least one value y in the domain of Y such that (x,y) is allowed by C(X,Y) and (y,z) is allowed by C(Y,Z). For instance, for the problem in figure 1, a constraint between X 1 and X2 can be induced from the binary constraints C(X 1 ,X3) and C(X3,Xz) to yield a constraint C(X 1 ,X2) that disallow (among other pairs) the pair (a,a). The definition of induced constraints can be extended in a natural way to non-binary constraints. Several schemes for improving the search efficiency by pre-processing the problem’s representation have been pro- posed [Montanari 1974, Mackworth 1984, Dechter 19851. These pre-processing schemes can be viewed as a mode of learning since they result in modified data structure and improved performance. However, inducing all possible con- straints may involve a procedure which is exponential both in time and space [Freuder 19781. III. LEARNING WHILE SEARCHING The process of learning constraints need not be performed as a pre-processing exercise, but can rather be incorporated into the backtrack search. An opportunity to learn new constraints is presented each time the algorithm encounters a dead-end situation, i.e. whenever the current state S = (Xl =X1, s a a ,Xi-1 = xi-1 ) cannot be extended by any value of the variable Xi. 
In such a case we say that S is in conflict with Xi or, in short, that S is a conflict-set. An obvious con- straint that can be induced at that point is one that prohibits the set S. Recording this constraint, however, is of no help since under the backtrack control strategy this state will never reoccur. If, on the other hand, the set S contains one or more subsets which are also in conflict with Xi, then recording this information in the form of new explicit constraints might Search: AUTOMATED REASONING / 179 prove useful in future search. One way of discovering such a subset is by removing from S all the instantiations which are irrelevant to Xi. A pair consisting of a variable and one of its value (X,x) in S is said to be irrelevant to Xi if it is consistent with all values of Xi w.r.t the constraints applicable to S. We denote by Conf(S,Xi), or Conf-set in short, the conflict-set resulting by removing all irrelevant pairs from S. The Conf-set may still contain one or more subsets which are in conflict with Xi. Some of these subsets are Minimal conflict sets [Bruynooghe 19811, that is, they do not contain any proper conflict-sets and, so, can be regarded as the sets of instantiations that “caused” the conflict. Since a set which contains a conflict-set is also in conflict, it is enough to explicitly discover all the minimal conflict-sets i.e., the set of smallest conflict-sets. Consider again the problem in figure 1. Suppose that the backtrack algorithm is currently at State (X 1 = b ,X2 = b ,X3 = a,Xd = b). This state cannot be extended by any value of X5 since none of its values is con- sistent with all the previous instantiations. This means, of course, that the tuple (Xl = b ,X2 = b ,X3 = a,X4 = b) should not have been allowed in this problem. As pointed out above, however, there is no point recording this fact as a constraint among the four variables involved. A closer look reveals that the instantiation X1 = b and X2 = b are both irrelevant in this conflict simply because there is no explicit constraint between X1 andX5 or betweenX2 andX5. NeitherX3 = a norX4 = b can be shown to be irrelevant and, therefore, the Conf-set is (X3 = a,X4 = b). This could be recorded by eliminating the pair (a,b) from the set of pairs permitted by C (X3,X4). This Conf-set is not minimal, however, since the instantiation X4 = b is, by itself, in conflict with X5. Therefore, it would be sufficient to record this information only, by eliminating the value b from the domain of X4. Finding the conflict-sets can assist backtrack not only in avoiding future dead-ends but also by backjumping to the appropriate relevant state rather then to the chronologically most recent instantiation. If only the Conf-set is identified the algorithm should go back to the most recent variable (i.e. the deepest variable) in this set. If the minimal conflict-sets mlm2,. . . , ml are identified, and if d&i) is the depth of the deepest variable in mi then the algorithm should jump back to the shallowest among those deep variables, i.e. to. Min {d(ntj)] (1) Discovering all minimal conflict-sets amounts to acquiring all the possible information out of a dead-end. Yet, such deep learning may require considerable amount of work. While the number of minimal conflict-sets is less then 2r, where r is the cardinality of the Conf-set, we can envision a worst case where all subsets of Conf(S,Xi) having f ele- ments are in conflict with Xi. 
conflict-sets should then satisfy The number of Lnimal r1 #m&conflict-sets = L 3 2’ , II z (2) which is still exponential in the size of the Conf-set. If the size of this Conf-set is small it may still be reasonable to recognize all minimal conflict-sets. Most researchers in the area of truth-maintenance- systems have adopted the approach that all the constraints realized during the search should be recorded (recording no- good sets or restriction sets), e.g., [Doyle 1979, De-Kleer 1983, Martins 19861. However, learning all constraints may amount to recording almost all the search space explored. Every dead-end contains a new induced constraint. The number of dead-ends may be exponential in the worst case, i.e., O(P) when n is the number of variables and k is the number of values for each variable, which presents both a storage problem and a processing problem. It seems reason- able, therefore, to restrict the information learned to items which can be stored compactly and still have a gc& chance for being reused. In the next section we discuss several possi- bilities for accomplishing these criteria. N. CONTROLLED LEARNING Identifying the Conf-set is the first step in the discovery of other subsets in conflict and, by itself, it can be considered a form of shallow learning. It is easy to show that the Conf-set satisfies Conf = UT(Xij) , (3) xij where xi. is the jfh subset o f value in the domain of Xi and T(xij) is a S which contains all instantiations in S that are not consistent with the assignment Xi =x+.. Let C be the set of relevant constraints on SU{Xij which‘involve Xi, and let I be the size of C. The identification of a specific T-set requires testing all these constraints. An algorithm for identifying the Conf-set may work by identifying T-sets for all the values of Xi and unionize them and its complexity is 0 (k-l) when k is the number of values for Xi. An approximation of the Conf-set may be obtained by removing from the set S only those variables that are not asso- ciated with any constraint involving Xi. The resulting conflict set, which contains the Conf-set, may be used as a surrogate for it. The complexity of this algorithm is just O([) but it may fail to delete an irrelevant pair which appears in some con- straint but did not participate in any violation. For example, in the example CSP the state {Xl = a,Xz =c} is at dead-end since it cannot be extended by any value of X3. The approxi- mate Conf-set in this case is the whole state since both X 1 and X2 have constraints with X3 however a careful look reveals that X2 = c is irrelevant to X3 and the real Conf-set is VI = al. Independently of the depth of learning chosen, one may restrict the size of the constraints actually recorded. Constraints involving only a small number of variables require less storage and have a better chance for being reused (to limit the search) than constraints with many variables. For example, we may decide to record only conflict-sets consist- ing of a single instantiation. this is done by simply eliminat- ing the value from the domain of the variable. We will refer to this type of learning as first-order learning which amounts to making a subset of the arcs arc-consistent [Mackworth 19771. It does not result in global arc-consistency because it only make consistent those arcs that are encountered during the search. First-order learning does not increase the storage of the problem beyond the size of the input and it prunes the search each time the deleted value is a candidate for assign- ment. 
For example, if we deleted a value from a veable at depth j we may prune the search in as much as kl-’ other states. 180 / SCIENCE Second-order learning is performed by recording only conflict-sets involving either one or two variables. Since not all pairs of variables appear in constraints in the initial representation (e.g. when all pair of values are permitted noth- ing is written), second-order learn: f 5 nml increase the size of the problem. There are at most * binary constraints, each having at most k2 pairs of values, the increase in storage is still reasonably bounded and may be compensated by sav- ing in search. Second-order learning performs partial path- consistency [Montanari 19741 since it only adds and modify constraints emanating from paths discovered during the search. When deep learning is used in conjunction with res- tricting the level of learning we get deep first-order learning (identifying minimal conflict sets of size 1) and deep second- order learning (i.e. identifying minimal conflict-sets of sizes 1 and 2). The complexity of deep first-order learning is 0 (kd) when r is the size of the Conf-set since each instan- tiation is tested against all values of Xi. The omplex’ of deep second-order learning can rise to 0( I r F .k.l) since in this case each pair of instantiations shou d be checked against each value of Xi. In a similar manner we can define and execute higher degrees of learning in backtrack. In general, an irh-order learning algorithm will record every constraint involving i or less variables. Obviously, as i increases storage increases. The additional storage required for higher order learning can be avoided, however, by further restricting the algorithm to only modify existing constraint without creating new ones. This approach does not change the structure of the constraint- graph associated with the problem, a property which is some- times desirable [Dechter 19851. V. EXPERIMENTAL EVALUATION The backtrack-with-learning algorithm has been tested on two classes of problems of different degrees of difficulty. The first is the class problem, a data-base type problem adapted* from [Bruynooghe 19841. The problem statement is given in Appendix 1. The second, and more difficult, problem is known as the Zebra problem. The problem’s statement is given in Appendix 2. It can be represented as a binary CSP by defining 25 variables each having five possible values denot- ing the identities of the different houses. Several instances of each problem have been generated by randomly varying the order of variables’ instantiation. As explained in Section 2, each ordering results in a different search space for the problem and, therefore, can be considered as a different instance. The mode of learning used in the experiments was controlled by two parameters: the depth of learning (i.e., shal- low or deep), and the level of learning (i.e., first order or second order). This results in four modes of learning: shallow-first-order, shallow-second-order, deep-first-order, and deep-second-order. The information obtained by the learning module was utilized also for backjumping as dis- cussed in Section 3. *Our problem is an app roximation of the original problem where only binary constraints are used. Each problem instance was solved by six search stra- tegies: naive backtrack, backtrack with backjump (no learn- ing), and backtrack with backjump coupled with each of the four possible modes of learning. 
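Before turning to the measurements, the learning step itself can be made concrete. The sketch below is our own simplification of Sections III and IV, using the same representation as the earlier CSP sketch: at a dead-end on a variable X it computes the Conf-set by discarding the instantiations that are consistent with every value of X, and then records the result as a first-order constraint (a value deletion) when a single instantiation remains, or as a second-order constraint (a forbidden pair) when two remain.

    def violates(constraints, inst_a, inst_b):
        """Do the two instantiations (var, value) violate a recorded constraint?"""
        (xa, va), (xb, vb) = inst_a, inst_b
        allowed = constraints.get((xa, xb))
        if allowed is not None and (va, vb) not in allowed:
            return True
        allowed = constraints.get((xb, xa))
        return allowed is not None and (vb, va) not in allowed

    def conf_set(S, X, domain_X, constraints):
        """Keep only the instantiations of S that conflict with some value of X."""
        return {(Y, y) for Y, y in S.items()
                if any(violates(constraints, (Y, y), (X, v)) for v in domain_X)}

    def record(conf, domains, constraints):
        """First-order learning: delete a value; second-order: forbid a pair."""
        conf = sorted(conf)
        if len(conf) == 1:
            (Y, y), = conf
            domains[Y] = domains[Y] - {y}
        elif len(conf) == 2:
            (Y, y), (Z, z) = conf
            allowed = constraints.setdefault(
                (Y, Z), {(a, b) for a in domains[Y] for b in domains[Z]})
            constraints[(Y, Z)] = allowed - {(y, z)}
        # larger conflict sets are not recorded under this restricted scheme

    domains = {"X3": {"a", "b"}, "X4": {"a", "b"}, "X5": {"a", "b"}}
    constraints = {("X4", "X5"): {("a", "a"), ("a", "b")}}    # X4=b conflicts with X5
    S = {"X3": "a", "X4": "b"}                                # dead-end state on X5
    conflict = conf_set(S, "X5", domains["X5"], constraints)
    print(conflict)                    # {('X4', 'b')}: X3=a is irrelevant to X5
    record(conflict, domains, constraints)
    print(domains["X4"])               # {'a'}: the value b has been deleted

The example mirrors the spirit of the dead-end analysis of Section III: only the instantiations that actually participate in a violation are retained, and the single remaining instantiation is recorded by a value deletion.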
The results for six problem instances of the class problem are presented in table 1, and for six problem instances of the zebra problem in table 2. The following abbreviations are used: NB = naive backtrack, BJ = Backjump, SF = Shallow-First-Order, SS = Shallow-Second- Order, DF = Deep-First-Order, DS = Deep-Second-order. # NB BJ SF SS DF DS ’ 1 219 219 218 221 218 194 25 25 25 25 25 22 (;1243) (44) !if? (44) . 2 123 123 133 155 I 12 I 12 I 12 I 12 I 12 I 12 I (43) (43) (42) (42) 3 266 266 266 267 260 125 24 24 24 24 20 7 (140) (140) (140) (51) 4 407 407 406 409 423 509 42 42 42 42 40 39 Table 1: The Class Problem Table 2: The Zebra Problem Each of the problem instances was solved twice by the same strategy; the second run using a new representation that included alI the constraints recorded in the first run. This was done to check the effectiveness of these strategies in finding a better problem representation. Search: AUTOMATED REASONING / 18 1 Each entry in the table records three numbers: the first is the number of consistency checks performed by the algo- rithm, the second is the number of backtrackings, and the Parenthesized number gives the number of consistency checks in the second run. Our experiments (implemented in LISP on a Symbol- its LISP Machine) show that in most cases both performance measures improve as we move from shallow learning to deep learning and from first-order to second-order. The class prob- lem turned out to be very easy, and is solved efficiently even by naive backtrack. The effects of backjumping and learning are, therefore, minimal except for deep-second-order learning where gains are sometimes evident. In some instances there is some deterioration due to unnecessary learning. In all these cases, the second run gave a backtrack-free performance. The zebra problem, on the other hand, is much more difficult, and in some cases could not be solved by naive back- track in a reasonable amount of time (in these cases the number reported are the counts recorded at time the run was stopped (*)). The enhanced backtrack schemes show dramatic improvements in two stages. First, the introduction of backjump by itself improved the performance substantially, with only moderate additional improvements due to the intro- duction of first-order or shallow second-order learning. Second-order-deep learning caused a second leap in perfor- mance, with gains over no-learning-backjump by a factor of 5 to 10. The experimental results for the zebra problem are depicted graphically in Figure 2 (for the number of con- sistency checks) and in Figure 3 (for the number of bakctrack- ings) . 250,000 - 200,000 NB = naive backtrack BJ = backjump SF = shallow first SS = shallow second DF = deep first DS = deep second 50,000 I L i 25,000 . NB BJ SF SS DF DS STRENGTH OF LEARNING Figure 2: Number of Consistency Checks for the Zebra Problem NB = naive backtrack STRENGTH OF LEARNING Figure 3: Numebr of Backtrackings for the Zebra Problem VI. CONCLUSIONS Our experiments demonstrate that learning might be very beneficial in solving CSPs. Most improvement was achieved by the strongest form of learning we have tested: deep- second-order learning. It remains to be tested whether higher degrees of learning perform even better or whether storage considerations and the amount of work invested in such learning outweigh the reduction in search. 
It was also shown that the more “knowledgeable” problem representation, achieved upon termination of backtrack-with-learning, is significantly better that the original one. This feature is beneficial when a CSP model is viewed as a world representing an initial set of constraints on which many different queries can be posed. Each query assumes a world that satisfies all these static constraints and some of the additional query constraints. Recording all the solutions for the initial set of constraints may be too costly and may not be efficiently used when new queries arrive. In such cases it may be worthwhile to keep the world model in the form of a set of constraints enriched by those learned during past searches. Another issue for further research is the comparison of first and second-order learning with the pre-processing approach of performing full arc and path-consistency prior to search. The pre-processing approach yield a representation which is usually better then that of second-order-learning, but the question is at what cost? Theoretical considerations reveal that pre-processing may be too costly and may perform unnecessary work. For instance, the Path-consistency algo- rithm is known to have a lower bound on its performance of 0 (n3k3) on every problem instance. For the zebra problem this number 1s 1,953,125 consistency checks, which is far worse the performance of deep-second- order learning on all problem instances presented. 182 / SCIENCE APPENDIX I: THE CLASS PROBLEM Several students take classes from several professors in dif- ferent days and rooms according to the following constraints: Student(Robert,Prolog) Student(John,Music) Stu&nt(John,Prolog) Student(John,surf) Student(Mary,Science) Student(mary,Art) Student(Mary, Physics) Professor(Luis ,Prolog) Professor(Luis,SurfJ Course(Prolog,Monday,Rooml) Course(Prolog,Friday,Rooml) Course(surf,Sunday,Beach) Course(Math,Tuesday,Rooml) Course(Math,Friday,Room2) Course(Science,Thurseday,Rooml) Course(Science,Friday,Room2) Course(Art,Tuesday,Rooml) Course(Physics,Thurseday,Room3) Course(Physics,Saturday,Room2) Professor(Maurice,Prolog) Professor(Eureka,Music) Professor(Eureka,Art) Professor(Eureka,Science) Professor(Eureka,Physics) The query is: find Student(stud,coursel) and Cou.rse(coursel ,day 1 ,room) and Professor(prof,course 1) and Student(stud,course2) and Course(course2,day2,room) and noteq(course 1 ,course2) APPENDIX II: THE ZEBRA PROBLEM There are five houses of different colors, inhabited by dif- ferent nationals, with different pets, drinks, and cigarettes: 1. i* 4: 5. 4: f * lb. 11. 12. 13. 14. The Englishman lives in the red house The Spaniard owns a dog. Coffee is drunk in the green house. The Ukranian drinks tea The green house is to the right of the ivory house. The old-gold smoker owns snails Kools are being smoked in the yellow house. Milk is drunk in thye middle house. The Norwegian lives in the first house on the left. The chesterfield smoker lives next to the fox owner. Kools are smoked next to the house with the horse. The Lucky-Strike smoker drinks orange juice. The Japanese smoke Parliament The Norwegian lives next to the blue house. The question is: Who drinks water? and who owns the Zebra? REFERENCES [ l]Bruynooghe, Maurice, ‘ ‘Solving combinatorial search problems by intelligent backtracking,” Information Process- ing Letters, Vol. 12, No. 1, 1981. [2]Bruynooghe, Maurice and Luis M. Pereira, “Deduction Revision by Intelligent backtracking,” in Implementation of Prolog, J.A. Campbell, Ed. 
Ellis Harwood, 1984, pp. 194- 215. [3]Carbonell, J.G., “Learning by analogy: Formulation and generating plan from past experience,” in Machine Learning, Michalski, Carbonell and Mitchell, Ed. Palo Alto, California: Tioga Press, 1983. [4]Cox, P.T., “Finding backtrack points for intelligent back- tracking,’ ’ in Implementation of Prolog, J.A. Campbell, Ed. Ellis Harwood, 1984, pp. 216-233. [S]Dechter, R. and J. Pearl, “The anatomy of easy problems: a constraint-satisfaction formulation,” in Proceedings Ninth International Conference on Artificial Intelligence, Los Angeles, Cal: 1985, pp. 1066-1072. [6]De-Kleer, Johan, “Choices without backtracking,” in Proceedings AAAZ, Washington D.C.: 1983, pp. 79-85. [7]Doyle, Jon, “A truth maintenance system,” Artijcial Intel- ligence, Vol. 12, 1979, pp. 231-272. [8]Fikes, R.E. and N.J. Nilsson, “STRIPS: a new approach to the application of theorem to problem solving.,” Artificial Intelligence, Vol. 2, 1971. [9]Freuder, EC., ‘ ‘Synthesizing constraint expression, ’ ’ Com- munication of the ACM, Vol. 21, No. 11, 1978, pp. 958-965. [ lO]Freuder, E.C., ‘ ‘A sufficient condition of backtrack-free search.,’ ’ Journal of the ACM, Vol. 29, No. 1, 1982, pp. 24- 32. [ 1 l]Gaschnig, J., “A problem similarity approach to devising heuristics: first results,” in Proceedings 6th international joint co& on Artificial Intelligence., Tokyo, Jappan: 1979, pp. 301-307. [12]Haralick, R. M. and G.L. Elliot, “Increasing tree search efficiency for cconstraint satisfaction problems,” AZ Journal, Vol. 14, 1980, pp. 263-313. [ 13]Korf, R.E., “A program that learns how to solve rubic‘s cube.,’ ’ in Proceedings AAAI Conference, Pittsburg, Pa: 1982, pp. 164-167. [ 14]Laird, J. E., P. S. Rosenbloom, and A. Newell, “Towards chunking as a general learning mechanism,” in Proceedings National Conference on Artijcial Intelligence, Austin, Texas: 1984. [ 15]Mackworth, A.K., “Consistency in networks of rela- tions,’ ’ Artifficial intelligence, Vol. 8, No. 1, 1977, pp. 99- 118. [16]Mackworth, A.K. and EC. Freuder, “The complexity of some polynomial network consistancy algorithms for con- straint satisfaction problems,” Artificial Intelligence , Vol. 25, No. 1, 1984. [17]Martins, Joao P. and Stuart C. Shapiro, “Theoretical Foundations for belief revision,” in Proceedings Theoretical aspects of Reasoning about knowledge, 1986. [ 18]Matwin, Stanislaw and Tomasz Pietrzykowski, “Intelli- gent backtracking in plan-based deduction,” IEEE Transac- tion on Pattern Analysis and Machine Intelligence, Vol. PAMJ-7, No. 6, 1985, pp. 682-692. [19]Mitchel, T., P.E. Utgoff, and R. Banerji, “Learning by experimentation; acquiring and refining problem solving heuristics.,” in Machine learning, Michalski, R.S., Carbonel, J.R., Mitchel, T.M., Ed. Palo Alto, California: Tioga publish- ing company, 1983. [20]Montanari, U., “Networks of constraints :fundamental properties and applications to picture processing,” Informa- tion Science, Vol. 7, 1974, pp. 95-132. Search: AUTOMATED REASONING / 18.3
Joint and LPA * : COMBINATION OF APPROXIMATION AND SEARCH Daniel Ratner and Ira Pohl Computer & Information Sciences University of California Santa Cruz Santa Cruz, CA 95064 ABSTRACT This paper describes two new algorithms, Joint and LPA*, which can be used to solve difficult combinatorial problems heuristically. The algorithms find reasonably short solution paths and are very fast. The algorithms work in polynomial time in the length of the solution. The algorithms have been benchmarked on the 15-puzzle, whose generalization has recently been shown to be NP hard, and outperform other known methods within this context. I. INTRODUCTION In this paper we describe two new algorithms, Joint and LPA *, which can be used to solve difficult combinatorial problems heuristically. The algorithms find reasonably short solution paths and are fast. The main idea behind these algorithms is to combine a fast approximation algorithm with a search method. This idea was first suggested by S. Lin (Lin, 1965; Lin, 1975), when he used it to find an effective algorithm for the Traveling-Salesman problem (TSP). His approximation techniques were strongly related to the TSP. Our goals are to develop a problem independent approximation method and combine it with search. An advantage of approximation algorithms is that they execute in a polynomial time, where many other algorithms have no such upper bound. Examples where there is no polynomial upper bound can be found for various models of error in tree spaces (Pohl 1977); or under worst case conditions, where the error in the heuristic function is proportional to the distance between the nodes, the number of nodes expanded by A * is exponential in the length of the shortest path (Gasching 1979). In the following sections we state conditions that assure that the new algorithms will finish in polynomial time. Later we describe the algorithms and give some empirical results. Our test domain is the 15-puzzle and the approximation algorithm is the Macro-Operator (Korf, 1985a). The need for an approximation algorithm in the case of the 15-puzzle has been demonstrated in (Ratner, 1986) by a proof that finding a shortest path in the (n 2-l)-puzzle is NP-hard. The empirical results, which come from test on a standard set of 50 problems (Politowski and Pohl, 1984), show that the algorithms outperform other published methods within stated time limits. Empirical results in two recent reports are related but reflect different goals. The Iterative- Deepening-A * method found optimal solutions to randomly generated 15-puzzles, but it generated on average nearly 50 million nodes (Korf 1985b). In (Politowski 1986) excellent search results are achieved by an improved heuristic found through a learning algorithm. II. GENERAL CONCEPTS Let G,(V,,E,) be a family of undirected graphs, where n is the length of the description of G,. Suppose there is an approximation algorithm that finds a path in a graph G, between an arbitrary x EV, (start node) and an arbitrary y EV,, (goal node) and runs in a polynomial in n time. Since the algorithm is polynomial, the length of the path is also polynomial. Once we have a path, we can make local searches around segments of the path in order to shorten it. If each local search is guaranteed to terminate in constant time (or in the worst case in polynomial time) and the number of searches are polynomial in n, then the complete algorithm will run in polynomial time. 
In order to bound the effort of local search by a constant, each local search will have the start and goal nodes reside on the path, with the distance between them bounded by&,, a constant independent of n and the nodes. Then we will apply A * with admissible heuristics to find a shortest path between the two nodes. The above two conditions generally guarantee that each A * requires less than some constant time. More precisely, if the branching degrees of all the nodes in G, are bounded by a constant c which is independent of n then A * will generate at most c (c -l)dmX-’ nodes. Theoretically c (c -l)d”x-l is a constant, but it may be a very large number. Nevertheless, most heuristics prune most of the nodes (Pearl 1984). The fact that not many nodes are generated, is supported by our experiments reported in the result section. The goal of the local search is to find a new path between two nodes which is shorter than the existing subpath on the original path. Hence, if there is a shorter path its new length will be at most d max-l. These two paths create a cycle of length 2*d,,- 1 at most. Thus if the length of the smallest (non-trivial) cycle in G, is CL, we want d max 2(CL+1)/2 . This means that CL has to be a constant, independent of n. Moreover, we expect that cycles of length CL (or a bit larger) exist throughout the graph. This is the case for many combinatorial and deductive problems. Search: AUTOMATED REASONING / 173 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Once we know that there is such a constant CL, we pick d max that satisfies the condition d mm 2(CL +1)/2 in G . We would like to select segments of length d max on the solution path given by the approximation algorithm and to try to shorten these segments by the local searches. There are many of ways to pick the segments. The algorithms that we suggest, LPA * and Joint, pick a segment in a way that is based on our experiments and motivated by the following two facts: a. Assume that node y is on the path between x and z nodes. Then if we cannot shorten the path between x and y and we cannot shorten the path between y and z, it does not mean that we cannot shorten the path between x and z (see Figure 1.). b. Replacing a path by another path can later yield a shortening (see Figure 1.). of the same length X Y Z I Kghes i Y replace Z Figure 1. Examples of possibilities to shorten the Path(x ,z )_ Assume that the Global-Path consists of the following three consecutive segments 11 , I2 , I3 and we try to shorten 12. Note that there is no mutual influence among the searches and the segments. Thus when we replace the segment 12 by a new one, I,, (shorter or not), the beginning of I, may be exactly as the end of 11 but in the opposite direction (see Figure 2.) and the end of I, may be exactly at the beginning of 2 3 but in the opposite direction. Hence after replacing one segment by the other, we check whether there are such trivial cycles and cancel them. The process of erasing these cycles within the Global.Path is called Squeeze. The Squeeze nrocess saves some local searches, and was shown, for both I algorithms, to be useful in reducing execution time. Path- 1: A-B-C-D-E-F Path-2: F-E-D-G-H-I Figure 2. Two paths that partially cancel each other. III. The LPA * algorithm In this section we define the algorithm LPA * (Local Path A * ). First the algorithm finds a path by some approximation algorithm. Then it starts searching for an improvement from the global.start.node (x E V,). 
If the local search fails to shorten the current subpath, we advance the start node (anchor.node) along the path by a small increment, called sancbr. Then we try again to shorten the subpath starting at anchor.node and repeat this process until we succeed. The reason we advance the start node by only a small increment is motivated by fact (a) of the previous section. Once we succeed in shortening the subpath, we divide the remaining subpath (between the anchor.node and the global.goal.node 0) EVA)) into consecutive segments of length d,, . Then for each segment, we make a local search and replace it by the result of the search. The result of the local search is never longer than the original segment. The replacement is done, whether a shortening occurs or not, to increase the randomness in attempted improvements. This is a standard method in search to avoid repeating minima, which is given in fact (b) of the previous section. Upon finishing the replacement, the algorithm returns to the anchor.node as if it is the global.start.node and repeats the process. In the following we present the LPA * algorithm, using the following notations: d (x ,y ,P ) is the distance between the nodes x and y along P ; G.P = GlobalPath; L.P = LoacalPath; g.s.n = globalstartnode; g.g.n = global.goal.node; i.s.n = localstartnode; 1.g.n = local.goal.node; a.n = anchornode; The LPA * algorithm. G.P t Approximation(G , g.s.n , g.g.n) ; a.n t g.s.n ; while d (a.n , g.g.n , G.P) 2 d max begin L.P t A * (G , a.n , 1.g.n ) ; 1.g.n is the node s.t. d (a.n , 1.g.n , G.P) = dmm; replace the segment from a.n to Z.g.n in G.P by L.P if length of L.P = d max then a.n t the node with distance sanchor from a.n along G.P ; else begin 1.s.n t l.g.n ; while d ( Z.g.n , g.g.n , G.P) 2 d,,, begin 1.g.n is the node s.t. d (1.s.n , 1.g.n , G.P) = dmax; L.P t A *(G , 1.s.n , 1.g.n) ; replace the segment from Z.s.n to f.g.n in G.P by L.P ; 1.s.n t 1.g.n ; end; end; end; L.P t A * (G , a.n , g.g.n); if length of L.P < d max then replace the segment from a.n to Z.g.n in G.P by L.P ; 174 / SCIENCE In order to show that LPA * is polynomial time in n, we have to show that the number of times we use A * is polynomial in n. Since the length of the approximation’s solution is polynomial in n, it is enough to show that the number of times we call A * is polynomial in the length of solution. Let Lapp be the length of the path generated by the approximation algorithm and let L,, be the length of a shortest path between the global start and goal nodes. Then the number of times LPA * calls A * is not more than izp, I& 1 + [e 1 (Ratner, 1986), which is quadratic in L,,, . Practically, for the U-puzzle, the number of searches is much less than the worst case. According to our experiments the number of calls is about + F, as reported in the result section. max IV. The Joint algorithm In this section we present the Joint algorithm. The main ideas behind the Joint algorithm are: Starting with a solution path found by the approximation algorithm, we divide it into segments of length dm,x. Then we shorten each segment by a local search and replace the segment with the path found by the search. As a result, the new GlobalPath is composed from optimal subpaths. Since each segment is optimal, the most promising place to look for shortening is around the nodes that connect these segments. We name these nodes “joints”. The algorithm always try to shorten the path around the first joint. 
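The segment-improvement loop of LPA* can be restated in executable form; the Python sketch below is an illustration, not the implementation used in the experiments. It assumes a path is a list of nodes, that approx_path is the path produced by the approximation algorithm, that astar(graph, s, t) returns a shortest s-t path found with an admissible heuristic, and that s_anchor is smaller than d_max; all of these helpers and names are supplied by the caller. The Squeeze step that cancels trivial back-and-forth cycles at segment boundaries is omitted for brevity.

    def lpa_star(graph, approx_path, astar, d_max, s_anchor):
        path = list(approx_path)
        anchor = 0                                    # index of anchor.node in path
        while len(path) - 1 - anchor >= d_max:
            goal = anchor + d_max                     # local goal d_max edges ahead
            local = astar(graph, path[anchor], path[goal])
            path[anchor:goal + 1] = local             # splice in the local result
            if len(local) - 1 == d_max:
                anchor += s_anchor                    # no gain: slide the anchor a bit
            else:
                # A shortening occurred: re-optimize the remainder of the path
                # in consecutive segments of length d_max, then return to the
                # anchor node and repeat the outer loop from there.
                start = anchor + len(local) - 1
                while len(path) - 1 - start >= d_max:
                    goal = start + d_max
                    local = astar(graph, path[start], path[goal])
                    path[start:goal + 1] = local
                    start += len(local) - 1
        # Final segment between the anchor and the global goal node.
        local = astar(graph, path[anchor], path[-1])
        if len(local) - 1 < d_max:
            path[anchor:] = local
        return path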
The local start and goal nodes are picked as symmetrically as possible around the first joint and a new local search takes place. The path found by the local search replaces the corresponding segment on the Global.Path, even when no improvement is made. This is done to increase the randomness in attempted improvements. We found that it is worthwhile to define a parameter, called SjoiM, that depends on the problem. The algorithm will erase all the joints along the segment except those which are located in the last Si,,, nodes on the segment. If a shorter path was found the local start and goal nodes are added as new joints. In the next column we present the Joint algorithm. We use the same notations we have used in the LPA * algorithm. The number of times Joint calls A * is not more than 2( (J&p -Lopt) + * I i ) (Ratner, 1986), which is linear in max L qp and therefore polynomial in n. Practically, for the 15-puzzle, the number of searches is much less than the worst case. According to our experiments the number of L calls is about p + 2sjoiti + d max 4 ) as reported in the max result section. The Joint algorithm Initialization: list of joints is empty ; G.P t Approximation(G , g.s.n , g.g.n) ; 1.s.n t g.s.n ; while d(Z.s.n , g.g.n , G.P) 2 d,,, begin Z.g.n is the node s.t. d(f.s.n , Z.g.n , G.P) = d,,, ; append Z.g.n to the list of joints; L.P t A * (G , 1.s.n , 1.g.n) ; replace the segment from 1.s.n to Z.g.n in G.P by L.P 1.s.n t 1.g.n ; end; while there are more joints do begin 1.s.n is thenode s.t. d(1.s.n , first.joint , G.P) =dmax/2; 2.g.n is thenodes.t.d(f.s.n ,l.g.n ,G.P)=d,,; remove all the joints that are on G.P and satisfy d(joint, 1.g.n) > sj,,, ; L.P t A *(G , 1.s.n , 1.g.n ) ; replace the segment from Z.s.n to Z.g.n in G.P by L.P ; if length of L.P < d,,, then prepend 1.s.n and 1.g.n to the list of joints ; end; V. Macro -Operator as an approximation algorithm For our algorithm to be time efficient, we need to choose a fast approximation algorithm, that gives a “reasonable” solution path. The Macro -Operator Algorithm is such an algorithm. It runs in linear time in the length of the path it produces. The path generated by the Macro-Operator algorithm is a sequence of segments. In many cases each segment is optimal, which is the goal of the first loop in the Joint algorithm. The idea behind the Macro-Operator is to predefine a set of subgoals such that any instance of finding a path in the graph can be viewed as a sequence of some of the predefined subgoals. For each of the subgoals there is a known macro (a subpath) that solves it. There is a restriction on each macro, namely, if a macro was used to solve a subgoal, then it must leave the previously solved subgoals intact. Finding a path in a graph induced by permutations on n-tuples is an example of using a macro-operator. A graph induced by a permutations on n -tuples is a graph where each node represents a distinct permutation, and the edges are defined by some rules that relate the permutations. For example the 15puzzle can be viewed as graph induced by permutations on 1Btuple. If we rename the right lower comer as 0 and give the blank tile the value 0 the standard goal node in this game is the permutation (0,1,2,.......15) and the start node will be some other permutation (i o,i l,iz,....,i 15). The meaning of this permutation is that the tile with value ij is in location j. An edge between two nodes exists iff by a single sliding of the blank tile one can move from one permutation to the other. 
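The permutation view of the (n*n - 1)-puzzle is easy to make concrete. In the sketch below (illustrative only), a state is a tuple whose j-th entry is the tile at location j, with 0 standing for the blank; two states are adjacent exactly when one slide of the blank transforms one into the other.

    def neighbors(state, n):
        """Yield all states reachable from 'state' by one slide of the blank."""
        blank = state.index(0)
        row, col = divmod(blank, n)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = row + dr, col + dc
            if 0 <= r < n and 0 <= c < n:
                j = r * n + c
                nxt = list(state)
                nxt[blank], nxt[j] = nxt[j], nxt[blank]   # slide tile j into the blank
                yield tuple(nxt)

    # Example: a goal permutation of the 15-puzzle with the blank named 0.
    goal = tuple(range(16))
    assert len(list(neighbors(goal, 4))) == 2   # a corner blank has two legal moves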
Search: AUTOMATED REASONING ! 1’5 In this case the subgoals can be defined as follows: substartnode: (0,1,2 ,... j-1;s ,... ;x,j;c ,... +x) sub.goal.node: (0,1,2 ,.... j-1,j ,X ,X.X) where x is don’t care. Only 119 such macros are required for the B-puzzle, although about ten trillion different problems (start configurations) exist. As explained above, the Macro -Operator can be chosen as the algorithm that initially approximates the solution, and then the first loop of Joint is redundant. In the Joint algorithm, after decomposing the solution to subpaths, each of them optimal, the Squeeze process is executed. The Squeeze is linear time in the length of the Global.Path, and generally the more squeezing the fewer searches will be later executed. Hence we would like to predefine the macros, a shortest path that is a solution to the subgoal, with some mutual influence such that the squeezing will be maximal . Looking for the most appropriate shortest subpath is a meaningful target since in general there is more than one shortest path. Especially, in this case of graphs induced by permutations where the subpath (macro) is a path between a set of start nodes and a set of goal nodes there are many shortest paths. We have no general scheme for how to find the macros that will guarantee maximum squeezing on the average. Yet we know how to do it in our test case, the 1% puzzle. A Macro-Operator with the macros picked randomly among the candidates give an average solution length of 149 moves, which is reduced by Squeeze to 139. If the macros were generated according to maximum squeezing we can achieve an average solution length of 124 after squeezing. Naturally after the squeezing process we will continue with the Joint algorithm for reducing the length of the resulting path. VI. Experimental results In this section, we report the results from using the LPA * and Joint algorithms with Macro -Operator as an approximation algorithm. We compare our results with some of the well known search methods that generate thousands of nodes. We selected the 15puzzle as the domain for testing the methods because of the following three reasons. We wished to have a domain where, in theory, finding a shortest path is computationally infeasible. The second reason is that there is a lot of available data about this puzzle. The third reason is that the generalization of this puzzle satisfies the condition that the length of the smallest non-trivial cycles (CL) in the search graphs is small ( < 30 ). These cycles are spread uniformly over the graph space. The average solution length for the test data, using only Macro -Operator, with the macro designed for maximum squeeze by itself, is 143 moves before the squeeze. After applying the squeeze the average solution length is reduced by 26.2 moves to 116.8 moves. For the Joint algorithm, with d,, = 24 and GjoiM = 6 , the average solution length is 86.7 moves, where 5950 nodes were expanded and 9600 nodes were generated on the average. For the LPA * algorithm, that expands and generates about the same number of nodes as the Joint algorithm, the average solution length is 86.3 moves. This was achieved with d max = 24 and ljancbr = 9 , where 6580 nodes were expanded and 10240 nodes were generated on the average. We tested both algorithms with the following two well known admissible heuristics: h l(a ,b ) = the sum of the Manhattan distance of the non- blank tiles between the start (a) and goal (b) nodes. 
hz(a,b)= hl(a,b)+2S(a,b) R (a ,b) is the number of reversals in a with respect to b. A reversal means that two tiles exist in the same row (or column) in a and b , but in an opposite order. The following four tables correspond to the two heuristics and two algorithms. They show the reduction achieved by the local searches, the number of nodes that were generated and expanded and the number of searches as a function of d max and SjoiM (or sancbr). All the data is the average for the 50 problems. Table 2. LPA * algorithm with heuristics h 2 . ~1 14 I 103.1 I 1370 I 2020 I 13.9 1 4 I 90.1 I 5250 I 7770 1 28.9 1 9 1 93.8 I 3460 I 5350 I 17.7 I 1’6 / SCIENCE 24 1 6 1 86.5 11790 18350 15.5 REFERENCES From the tables we can verify the following results. 1. Both algorithms generated only thousands of nodes. 2. There is no significant difference between the methods. 3. The bigger d,,, the shorter the solution length. 4. The bigger sjoid the shorter the solution length in the Joint algorithm 5. The smaller GancbT the shorter the solution length in the LPA * algorithm. 6. Since h 2 is more informed than h 1 the number of nodes expanded (or generated) by h2 is about half of the number of nodes expanded (or generated) by h 1. In (Politowski and Pohl, 1984) there is a comparison between the performances of four methods using the same test data. The methods are: a. The Heuristic Path Algorithm (HPA) (Pohl, 1971)- Unidirectional search with weighting. b. The Heuristic Path Algorithm (HPA) (Pohl, 1971) - Bidirectional search with weighting. c. The Bidirectional Heuristic Front to Front Algorithm (BHFFA) (De Champeux and Sint, 1977). d. The D-node Algorithm (Politowski and Pohl, 1984). Comparing the results obtained by the four methods and the results presented here we can conclude: 1. The other methods using “unsophisticated’ heuristics cannot find a path at all or a “reasonable path”, in contrast to our algorithms that always find a “reasonable” path. 2. Keeping running time the same, LPA * and Joint algorithms yield a shorter solution than the other methods even when using “sophisticated’ heuristics. VII. CONCLUSION The results in this paper demonstrate the effectiveness of using LPA * or Joint. When applicable, these algorithms achieve a good solution with small execution time. These methods require an approximation algorithm as a starting point. Typically, when one has a heuristic function, one has adequate knowledge about the problem to be able to construct an approximation algorithm. Therefore, these methods should be preferred in most cases to earlier heuristic search algorithms . HI VI II31 E4 1 PI PI [71 PI PI De Champeaux, B. and Sint, L., “An improved bi- directional search algorithm,” JACM, vol. 24, pp. 177- 191,1977. Gaschnig, J., “Performance measurement and analysis of certain search algorithms,” Ph.D thesis, Department of Computer Science, Carnegi-Melon University, May 1979 Korf, R. E., Learning to solve problems by searching for Macro-Operators. Research Notes in Arti’cial Intelligence 5, Pitman Advanced Publishing Program, 1985. Korf, R. E., “Iterative-Deepening-A * : An Optimal Admissible Tree Search,” Proceedings of the Ninth International Joint Conference on Artijcial Intelligence, Vol. 2, pp. 1034-1035, 1985. Lin, S., “Computer Solutions of the Traveling-Salesman Problem,” BSTJ, Vol. 44, pp. 2245-2269, December 1965 Lin, S., “Heuristic Programming as an Aid to Network Design,” J Networks, Vol. 5, pp. 33-43, 1975. Pearl, J., Heuristics. 
Intelligent search strategies for computer problem solving, Addison-Wesley Publishing Company, 1984. Pohl, I., “Bi-directional search,” in Bernard Meltzer and Donald Michie (editors) ,Machine Intelligence 6, pp. 127- 140, American Elsevier, New York, 197 1. Pohl, I., “Practical and theoretical considerations in heuristic search algorithms,” in Bernard Meltzer and Donald Michie (editors), Machine Intelligence 8, pp. 55-72, American Elsevier, New York, 1977. [lo] Politowski, G., “On Construction of Heuristic Functions,” Ph.D thesis, University of California Santa Cruz, June 1986. [l l] Politowski, G. and Pohl, I., “D-Node Retargeting in Bidirectional Heuristic Search,” Proc. of the AAAI-84, pp. 274-277, 1984. [ 121 Ratner, D., “Issues in Theoretical and Practical Complexity for Heuristic Search Algorithms,” Ph.D thesis, Department of Computer Science, University of California Santa Cruz, June 1986. Search: AUTOMATED REASONING / 1”
FINDING A SHORTEST SOLUTION FOR THE N xN EXTENSION OF THE H-PUZZLE IS INTRACTABLE Daniel Ratner and Manfred Warrnuth Computer & Information Sciences University of California Santa Cruz Santa Cruz, CA 95064 ABSTRACT The g-puzzle and the 15puzzle have been used for many years as a domain for testing heuristic search techniques. From experience it is known that these puzzles are “difficult” and therefore useful for testing search techniques. In this paper we give strong evidence that these puzzles are indeed good test problems. We extend the 8- puzzle and the Epuzzle to a nxn board and show that finding a shortest solution for the extended puzzle is NP-hard and thus computationally infeasible. We also present an approximation algorithm for transforming boards that is guaranteed to use no more than c%(V) moves, where L(SP) is the length of the shortest solution and c is a constant which is independent of the given boards and their size n . I. INTRODUCTION For over two decades the g-puzzle and the 15-puzzle have been a laboratory for testing search methods. Michie and Doran used these games in their general problem-solving program, called Graph Traverser [DM66]. Pohl used the 15-puzzle in his research on bi-directional search and dynamic weighting Ipo77]. Recently Korf used these puzzles as examples for the Macro-Operators [K85a] and for IDA * [K85b]. Judea Pearl used the g-puzzle throughout the first half of his Heuristics book as one of the main examples [Pe84]. Also, these puzzles were used for testing the performance of some learning algorithms me83]. The main reasons for selecting these problems as workbench models for measuring the performance of searching methods are: 1) There is no known algorithm that finds a shortest solution for these problems efficiently. 2) The problems are simple and easy to manipulate. 3) The problems are good representatives for a class of problems with the goal of finding a relative short path between two given vertices in an undirected graph. 4) The size of the search graph is exponential in n even though the input configurations can be described easily ( 0 ( n*>>. 5) The search graph can be specified by a few simple rules. Certainly, if there existed simple efficient algorithms for finding a shortest solution for these problems, then heuristic approaches would become superfluous. Thus we need to give a convincing argument that no such algorithm exists. This is accomplished by using complexity theory. We show that finding the shortest solution for a natural extension of the 8- puzzle and the 15-puzzle is NP-hard. Thus unless P=NP, which is considered to be extremely unlikely, there is no polynomial algorithms for finding a shortest solution. Of course, since the number of distinct configurations in the 8- puzzle and the 15-puzzle are finite, theoretically (and practically for the g-puzzle) one can find shortest solutions for all the possible inputs by analyzing the whole search graph. To get problems of unbounded size we extend the problem to the n xn board ( (n 2-l)-puzzle ). The aim of the (n 2-l)-puzzle is to find a sequence of moves which will transfer a given initial configuration of an nxn board to a final (standard) configuration. A move consists of sliding a tile onto the empty square (blank tile ) from an orthogonally adjacent square. We will show that the following decision problem (nPU2) is NP-complete: Instance: two n xn boards and a bound k . 
th Question: is ere a solution for transforming the first board into the second board requiring less than k moves? The pebble games of [KMS84] can be viewed as a direct generalization of the nPUZ problem. Rather than moving tiles in the planar grid, they allow general graphs with an arbitrary number of empty spaces. They address the question of reachability, i.e. whether a final configuration is reachable from an initial configuration by moving pebbles to adjacent empty spaces. It was shown that the general reachability problem can be decided in polynomial time. The nPUZ problem is case where reachability is easy. We address the complexity of reaching the final configuration from the initial configuration in a small number of moves. In the nPUZ problem we relocate tiles. The relocation task, even without the specific rigid rules of the game, is the essence of the intractability. In the nPUZ problem we have additional restrictions that makes its proof of NP- completeness very difficult. Therefore we first show the intractability of a relocation problem. This problem, the REL problem, captures the hardness of nPUZ and is less restrictive and easier to prove NP-complete. The REL problem is specified as follows: Instance: A planar directed graph G (V,E) where each e E E has capacity 0 or 2, a set X of elements, and an initial and final configuration. A configuration specifies the location of each element of X at the vertices of V. Question: Is there a relocation procedure that ships the elements of X from their initial configuration to their final configuration such that the procedure moves along each e E E exactly once and along each edge it never ships more elements than allowed by its capacity? 168 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. The nPUZ and REL problems can be viewed as robotics problems: A robot needs to efficiently relocate objects in the plane. The NP-completeness proof of nPUZ will simulate the simpler proof for REL. The graph is mapped onto the board of the puzzle problem. The vertices and edges will correspond to certain areas of board. The elements and the capacities are encoded by the arrangements of tiles in these areas in the start and final configuration. Since finding the shortest solution is NP-hard we would like to know how close the shortest solution can be approximated. We show that finding a solution that is an additive constant away from the optimum is also NP-hard. However we have positive results for approximating the optimal solution by a multiplicative constant. We only give a proof for a high multiplicative constant. However we suspect that the algorithm outputs a solution that is not more than twice the optimal. It is an open problem to find the algorithm with the best possible constant. Note that this type of algorithm finds reasonable solutions even if n is large (around 100). The research based on search methods only addresses the cases of n14. The best solutions will be produced by a combined approach, as suggested in [RP86]: use an approximation algorithm and then do local optimization of the approximate solution employing search methods. We suggest to apply our approach to other puzzles, like the Rubik’s cube, which are studied extensively in the AI- literature. Is the problem of finding a shortest solution for the n -dimensional Rubik’s cube NP-hard? Are there polynomial time approximation algorithms for this puzzle that approximate the optimal solution by a multiplicative constant? II. 
THE NP-COMPLETENESS OF REL In this section we prove that REL problem is NP- complete, i.e. relocating elements that reside in vertices of a planar graph via an Eulerian path is NP-complete. We prove that REL and nPUZ are NP-complete by reducing a special very symmetric version of the satisfiability problem to nPUZ . This version is called 2/2/4-SAT and is defined as follows: each clause contains four literals; each variable appears four times in the formula, twice negated and twice not negated; the questions is whether there is a truth setting for the formula such that in each clause there are exactly two true literals. In the complete paper we give a standard NP- completeness reduction for 2/2/4-SAT. Theorem 1: REL is NP-complete. Proof: Let U={U~,U~;*., u,} be a set of variables and C={q,c2;y c, } be a set of clauses defining an arbitrary instance of 2/2/4-SAT. From this instance we will construct an instance of REL. An instance of REL is a graph G (V,E) with capacities (0 or 2) for each e E E, a set X of elements, an initial configuration (called B t), and a final configuration (called B 2). First we start with the description of the graph and later we define the configurations. The graph (Figure 2) consists of 5m+2 vertices and 12m -3 edges. The vertices are divided to 4 groups. The first group is built up from m diamonds of 4 vertices each. The i -th diamond which is shown in Figure 1 corresponds to the variable Ui. This diamond contains the vertices: topi, nui, boti , and Yiiii . toPi 0 toPi Figure 1: The i -th diamond in B 1 and B 2. The second group is the single vertex TC (stands for truth collection). The third group is the single vertex FC (stands for false collection). The fourth group consists of m vertices. The i -th vertex of this group, called nci, corresponds to the i-th clause in the boolean formula of the 2/2/4-SAT instance. The directed edges connecting the vertices and the capacities of the edges are specified in Figure 2. Note that their is a special edge of capacity zero from nc, to top 1. J-+----- ----------- --1 nu1 -52 I I I I I I I I I I I I I I I I I I I I k- SW-----> edge with capacity 0 nc,------------ -- ------ -1 Figure 2: The graph of the REL instance. To complete the definition of the instance of REL we need to specify the elements and their initial and final locations in the graph. The set of elements X consists of 4m elements. Recall that in 2/2/4-SAT each variable occurs Search: AUTOMATED REASONING / 169 twice negated and twice unnegated. There is an element for each of the 4 occurrences: ni ,t and ni ,2 correspond to the two appearances of ui in C , and ni ,3 and ni,4 correspond to the two appearances of G in C . In B 1 the elements are located in the diamonds as specified in Figure 1. All the remaining vertices contain no elements. In B 2 all elements are in the vertices that correspond to the clauses. The 4 elements that are associated with the 4 literals of the i -th clause appear in vertex nci. This completes the definition of the instance of REL. The following two claims complete the proof of the theorem. Claim 1: If there is a truth assignment f : U +{T ,F } that satisfies the 2/2/4-SAT instance then there is a relocation procedure along an Eulerian path that shifts B 1 to B 2. Proof: The proof is constructive. First we ship all the ni,j elements that correspond to true literals from their vertices in B 1 to TC vertex. 
This collection is done by the following loop: for i := 1 to m do begin if f(Ui)=T then begin move along (tOpi,nUi) ; move along (nui,boti) with ni,l and ni,2; move along (boti ,TC ) with ni ,l and ni ,2; end else begin move along (tOpi ,nUi) ; move along (nUi ,boti) with ni ,3 and ni ,4; move along (boti ,TC) with ni,3 and ni,4; end { if }. if i # m then move along (TC ,topi+l) ; end. When the above loop is finished, then the vertex TC contains 2m elements. Each diamond contributes exactly two elements. The two elements from the i -th diamond are either ni,l and ni,2 (from nu;) or n;,3 and ni,4 (from nu;). The next step drops the 2m ni,j elements that are in TC into the nci vertices they belonged to in B 2. As mentioned above, these 2m nij elements correspond to the 2m true literals. Since there is a truth assignment for the 2/2/4-SAT instance, it follows that two ni,j elements, that appear in each clause vertex nci in B 2 are now in TC. These elements are dropped into their clause vertices by the following loop: for i :=l to m do begin move along (TC ,nci) with the two ni ,j elements that are in nci in B 2; if i f m then move along (nci ,TC ) ; As a result of the above segment, each nci vertex receives the two ni,j elements that correspond to the true literals. Now we move along (nc,,top 1) . From this point we repeat the two loops given above. In the first loop we collect all the ni,j elements that correspond to false literals into the FC vertex. This is done by traversing all the edges of the diamonds that have not been traversed in the first pass and by traversing all the edges that connect FC with the diamonds. Once the first loop in this second pass is completed, the 2m ni,j elements that correspond to the 2m false literals are in FC. In the second loop of the second pass, the algorithm drops the 2m ni,j elements from FC into their appropriate nci vertices in B 2. Once the second pass is completed the arrangement of the ni,j elements in the graph is as prescribed in B 2. Observe that each edge is traversed exactly once and the number of elements moved through each edge always equals the capacity of the edge. Claim 2: If there is a relocation procedure that ships the elements from B 1 to B 2 along an Eulerian path then there is a truth assignment f : U --+{T ,F } that satisfies the 2/2/4-SAT instance. Proof: We need to ship the four ni,j elements from their initial locations in the i -th diamond to the clause vertices (the rick) they belong to in B 2. The ni,j elements must pass through boti + There are only two edges (boti ,TC ) and (boti ,FC ) outgoing from boti. Both edges have capacity 2. This means that when the procedure moves along (boti,TC) and (boti,FC) it must carry 2 elements each time. Furthermore, the first time the procedure ships two elements to boti they must be either the pair (ni,l,ni,z) or the pair (ni,s,ni,d). NOW the procedure must continue to move along with these two elements. Thus, for each i, 1 I i I m , the procedure that relocates the elements ships the pair (ni,l,ni,z) along (boti,TC) or along (boti,FC ). Let us define the truth assignment f : U+{T ,F } as follows: f (Ui) = T if the procedure ships the pair (ni,l,ni,p) along (boti ,TC ). f (Ui ) = F if the procedure ships the pair (ni, 1 ,ni ,2) along (boti ,FC ). Note that if f (ui) = T (respectively F) then the procedure ships the pair (ni,3,ni,4) along (botiJ;C) (respectively (boti,TC)). We proceed to show that the above truth assignment satisfies the requirements of the 2/2/4-SAT instance. 
There are two ingoing edges to each nci vertex, each of capacity two. There is no way to ship elements from TC to FC or vise versa (see Figure 2). Thus the procedures ships from TC exactly two elements to each of the nci vertices. According to the definition of f these elements correspond to true literals. The other two elements that arrive at each nci vertices are from FC, which means that they correspond to false literals. This completes the proof of Claim 2 and Theorem 1. Cl III. THE NP-COMPLETENESS OF nPUZ In this section we will sketch a reduction of the 2/2/4-SAT problem to the nPUZ problem. Given an instance of 2/2/4-SAT we define a corresponding instance of nPUZ. This instance (and the whole reduction) is similar to the instance of REL used in the previous section. We will map the graph of Figure 2 onto the board. The instance of nPUZ consists of two n xn board configurations B 1 (the initial configuration), B2 (the final configuration) and an integer k which is an upper bound on the number of moves that can be used to transform B 1 to B 2. To simulate the graph of Figure 2 we have to capture the notions of vertices, edges, elements, relocation, moving along an edge and capacity of an edge. Each vertex in the graph of Figure 2 corresponds to a square of locations. Edges are identified as stripes (horizontal, vertical, or a pair of both) of locations that connect the 17’0 / SCIENCE vertices. Each element of X corresponds to a specific tile on the board. The tiles which correspond to the elements appear in different locations on board B 1 than on board B 2. As in the instance of REL the element tiles are in the diamonds on B 1 and in the squares of the clauses in B 2. Moving these tiles to their destination in B 2 corresponds to relocating the elements in the graph of the REL problem. Until now, the analog between the components in the REL problem and the corresponding components in the game are straight forward. An outline of how the graph is mapped onto the board is given in Figure 3. The main difference is a 45 degree counterclockwise rotation. Note that the lines of Figure 3 represent “thin” stripes of locations. The arrangements of the tiles outside the squares of the vertices and outside the stripes of the edges are the same on B 1 and B 2. Note that all the names of the tiles on the board are distinct. Thus the configurations B 1 and B 2 are equivalent w.r.t. renaming of tiles and only the relative location of equally named tiles on B 1 and B 2 is important. r k I I ~ I I I I I I L--. I I I r---------, 1 I 1 I 1 I I I I I I I I I I I I I I I I I I I I I I I I I I I J I 1 I I 1 L-------l I I -I edge with capacity 2 ---mm) edge with capacity 0 Figure 3: The locations in which B 1 and B 2 differ. I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I -A We still need to show the analog to “move along an edge”. In the game, the tiles can be shifted to any location, and they are not tied to specific squares and stripes of locations. How can we force the tiles that correspond to the elements of the X to move only along the stripes of the edges? How can we force these tiles to move along a stripe exactly once? To realize the notion of capacity we need to guarantee that exactly two element tiles (in addition to the blank tile) move along the stripes of capacity two, and zero element tiles, i.e. only the blank tile, move along the stripes of capacity zero. To overcome the above difficulties we carefully arrange the tiles within the edges. 
The vertices and edges are the only locations in which B 1 and B 2 differ. Edges either have capacity zero or capacity two. They are stripes following the outline of Figure 3. The edges of capacity zero are stripes of width 1 and the edges of capacity two are stripes of width 3. The tiles within the edges are arranged differently in B 1 and B 2. Recall that each edge has a direction. For the edges of capacity zero the tiles of B 2 are shifted one location backward relative to their location in B 1. This will guarantee that the blank tile has to move through this edge to achieve the rearrangement of the edge. The overall bound on the number of moves will assure that this can happen only once. The rearrangement of the edges of capacity two is given in Figure 4,5 and 6. The figures show how to move two tiles (x and y ) together with the blank tile through a stripe edge of width 3 and length 6 (the edge is the portion between the double bars). Figure 4 shows the arrangement of the tiles on the edge in B 1 and Figure 6 the same for B 2. Figure 4: The arrangement of an edge with capacity 2 in B 1. Figure 5 (below) is produced from Figure 4 by advancing x and y one location to the right. This is accomplished by moving the blank tile: left, left, up, right, right, right, down, left, left, down, right, right, right, up. Figure 5: The arrangement after advancing x and y once. If we apply the above sequence 9 + 3 times then we end up with the following arrangement. Figure 6: The arrangement of an edge with capacity 2 in B 2. For all edges of capacity 2 the tiles on B 2 are rearranged as if two pieces and the blank tile moved through the edge on B 1 in the prescribed fashion. Note that the procedure specified in the three figures uses the minimal number of moves, i.e. there is no procedure that moves x and y through the edges using a smaller number of moves. Also the procedure uses no more moves than the manhattan distance between the arrangement of e on B 1 and B2. This rearrangement can not be accomplished efficiently by any other shifting procedure, i.e. by moving twice from the beginning to the end of the edge or by moving through the edge with less or more than 2 non-blank tiles. The number of moves required by any other shifting procedure exceeds the manhattan distance and the number of additional moves required is proportional to the length of the edge. Since the bound on the overall number of moves will be tight, B 1 must be rearranged efficiently to achieve B 2, i.e. each edge of capacity 2 is traversed exactly once while shipping exactly 2 non-blank tiles along the edge. A detailed proof is given in the complete paper. We haven’t specified the rearrangements of the vertices and the comers of the edges. We define k to be equal to the number Search: AUTOMATED REASONING i I’ 1 of moves required by traversing each edge exactly once with the specified number of non-blank tiles plus enough freedom to achieve the rearrangements in the comers of the edges and in the vertices. This freedom is much smaller than the number of moves required to travel along an edge with only the blank tile. It might seem that B 2 uniquely determines the rearrangement procedure. However note that the rearrangement of the edges (Figure 4 and 6) is independent of the element tiles moved through the edge. The element tiles are located in the same vertices as in the instance of REL. They are to be moved from the diamonds to the clauses. 
Half of the element tiles are gathered in TC and these tiles correspond to the true literals. The other half is gathered in FC . The reduction is identical to the reduction of REL . IV. AN APPROXIMATION ALGORITHM Since finding a shortest solution is NP-hard we would like to know how close a shortest solution for the (n2-1)-puzzle can be approximated. We can prove that finding a solution that is an additive constant larger than the optimum is also NP-hard. We simply use the reduction of the previous’ section except that we enlarge the length of each edge by four times the additive constant. In this section we sketch the main ideas of a polynomial approximation algorithm that approximates the optimal solution by a multiplicative constant. We chose to present a simplified version of our algorithm. Therefore the multiplicative constant is not as low as possible. The main point is that such an approximation algorithm exists. Also for simplicity we assume that the blank tile resides in the same location (n ,n) in both configurations. Let us denote B 1 as a permutation of B 2 and then decompose the permutation into disjoint cyclic permutations. The decomposition can be produced in time which is linear in the size of the permutation. Assume that there is only one cycle, (Zc,Z 1, . . . , I,-I), denoting the fact that the tile located at Zi in B 1 is located at 1 (i+l)mod c in B 2 * There are two simple lower bounds on the length of the optimal solution: d ((n ,n ),Za) and ‘- d (li,li+l), z I= where d (I ,I’) is the manhattan distance between the locations 1 and 1’. The following procedure requires at most 2d ((n ,n ),lo) + 20. ‘- d (li,Zi+l) moves, which is at most 22 x 1= times the length of the optimal solution: the blank tile moves from (n ,n ) to lo; the pieces at locations lo,1 1, ’ * * ,lc-2 are shifted one at a time to the locations 11,12, * . * 1 , c-17 respectively; this shifting process has the side effect that a large number of tiles on the path between the locations are shifted one or two places from their origin (see figures 4, 5 and 6); now the tile at 1,-l is moved along the path Ldc-2, . . * ,I0 to its destination at lo; while relocating the tile of 1,-l the side effects of the shifting process are undone; finally, the blank tile moves from 10 to (n ,n ). In the case where there is more than one cycle, each cycle c contributes its sum of distances around the cycle (denoted by S (C )) to the lower bound, Thus the second lower bound becomes 3 S(C). The first lower bound all cyc es C d ((n ,n ),ZO) is replaced by the cost of the minimum spanning tree given below. We can view the 10 ‘s of each cycle and (n,n> ‘as nodes in a complete graph, where the cost of each edge is the manhattan distance between the corresponding locations. Clearly the cost of the minimum spanning tree is a lower bound for the length of the optimal solution. (Note that the minimum spanning tree can be constructed in 0 (n4) time.) In the case where there is only one cycle, the vertices (n ,n ) and IO, and the path between them represent the minimum spanning tree. If we have more than one cycle we connect the lo’s of all the cycles according to the edges of the minimum spanning tree. 
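The cycle decomposition and the two lower bounds can be sketched as follows. The sketch assumes that boards are given as dictionaries mapping a location (row, column) to the tile at that location, with the blank (tile 0) at the same location on both boards; the names are illustrative and the shifting procedure itself is not shown.

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def cycles(b1, b2):
        """Decompose b1, viewed as a permutation of b2, into nontrivial cycles."""
        where2 = {tile: loc for loc, tile in b2.items()}      # tile -> location in b2
        succ = {loc: where2[tile] for loc, tile in b1.items() if tile != 0}
        seen, result = set(), []
        for start in succ:
            if start in seen or succ[start] == start:         # skip fixed tiles
                continue
            cyc, loc = [], start
            while loc not in seen:
                seen.add(loc)
                cyc.append(loc)
                loc = succ[loc]
            result.append(cyc)
        return result

    def lower_bounds(b1, b2):
        """Return the two lower bounds: the sum of Manhattan distances around all
        cycles, and the cost of a minimum spanning tree connecting the blank's
        location with one entry point per cycle."""
        cycs = cycles(b1, b2)
        around = sum(manhattan(c[i], c[(i + 1) % len(c)])
                     for c in cycs for i in range(len(c)))
        blank = next(loc for loc, tile in b1.items() if tile == 0)
        nodes = [blank] + [c[0] for c in cycs]
        in_tree, tree_cost = {nodes[0]}, 0
        while len(in_tree) < len(nodes):                      # Prim's algorithm
            dist, nxt = min((manhattan(u, w), w) for u in in_tree
                            for w in nodes if w not in in_tree)
            in_tree.add(nxt)
            tree_cost += dist
        return around, tree_cost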
The procedure for the general case: the blank tile moves along the locations that correspond to edges of the spanning tree as if it traverses the tree; whenever it reaches an 10 location the shifting process in a cycle is executed; backtracking from a child to a parent in the traversal corresponds to undoing the changes made on the edge that connects the child with its parent. There are two difficulties in the above description. The first one is that if the number of locations in a cycle is even we can’t completely fix the cycle because of the group properties of nPUZ. The second difficulty is that when we shift tiles along a path we might increase the manhattan distance between the tiles of some cycle that crosses the path, and then the second lower bound is improper. In the complete paper we show how we overcome these difficulties without additional moves. REFERENCES [DM66] Doran, J. and Michie, D., “T Experiments with the graph traverser program”’ Proc. of the Royal Society (A), No. 294, pp.235-259, 1966. [K85a] Korf, R. E., Learning to solve problems by searching for Macro-Operators. Research Notes in Artificial Intelligence 5, Pitman Advanced Publishing Program, 1985. [K85b] Korf, R. E., “Iterative-Deepening-A * : An Optimal Admissible Tree Search,” Proceedings of the Ninth International Joint Conference on Artijcial Intelligence, Vol. 2, pp. 1034-1035, 1985. [KMS84] Kornhauser, D., Miller, G. and Spirakis, P., “Coordinating Pebble Motion on Graphs, The Diameter of Permutation Groups, and Applications,” 25th FOCS, pp. 241-250, 1984. [Pe84] Pearl, J., Heuristics. Intelligent search strategies for computer problem solving, Addison-Wesley Publishing Company, 1984. [Po77] Pohl, I., “Practical and theoretical considerations in heuristic search algorithms,” in Bernard Meltzer and Donald Michie (editors) ,Machine Intelligence 8, pp. 55-72, American Elsevier, New York, 1977. [Re83] Rendell, L. A., “A new basis for state-space learning systems and a successful implementation,” Artificial Intelligence, Vol. 20, pp. 369-392, 1983. [RP86] Ratner, D. and Pohl, I., “Joint and LPA *: Combination of Approximation and Search,” to appear in the Proceedings of AAAI-86, 1986. 1’2 / SCIENCE
A Knowledge-Based Framework for Design Sanjay Mittal and Agustin Araya Knowledge Systems Area Intelligent Systems Laboratory Xerox Palo Alto Research Center 3333 Coyote Hill Rd. Palo Alto, CA. 94304 Abstract: Many design problems can be formulated as a process of searching a “well-defined” space of artifacts with similar functionality. The dimensions of such spaces are largely known and are constrained by relations obtained from the implicit functionality of the designed artifact. After identifying the kinds of knowledge that mediate the search for acceptable designs, a computational framework is presented that organizes the required knowledge as design plans. A problem solver is described that executes these plans. The problem solver extends the notion of dependency-directed backtracking with an advice mechanism. This mechanism allows information from a constraint failure to be used as advice in modifying a partial design. An expert system for designing paper transports inside copiers has been successfully built based on this framework. 1. Introduction Increasing attention is being paid to the development of knowledge-based systems for design, especially of mechanical systems [Dym 1985, Gero 19851. The expectation is that these computer systems can improve the quality of designs and shorten the time required to find satisfactory designs. Some of the major stages in designing a complex system are: i) a definition stage where precise functional specifications are developed from the requirements; ii) a generation stage where many satisfactory designs may be created; and iii) an evaluation stage where these different designs are compared or optimized by some criteria. These stages are not necessarily sequential because the latter stages can provide feedback to earlier ones. In this paper we shall be primarily concerned with the middle stage, i.e., the generation of designs that satisfy some functional specification. The general problem of designing artifacts that satisfy some arbitrary functionality is not well understood [Mostow 851. However, there seem to be many design problems where the search space has been largely defined by the expert designers (or can be obtained from them). This means that the kinds of dimensions of the design space are by and large known, i.e., the kinds of design parameters are known. Furthermore, the design parameters of the search space are constrained to produce artifacts which have the “same” functionality. We shall call problems with these two properties as being well-defined. In this paper we present a framework for building computer programs that can assist in the design of systems that have well-defined search spaces. The framework rests on the key observation that given such spaces, the process of generating alternative designs is largely a process of searching these spaces. This is not to suggest that the space is small, or that it does not vary in details, or that substantial reasoning may not be needed for finding satisfactory designs. On the contrary, the search process is guided by knowledge about how to define partial designs in this space and knowledge about how to modify a partial design when the constraints are violated. Furthermore, the search may be ordered by heuristic knowledge obtained from experience. The proposed framework organizes these different kinds of knowledge into design plans. These plans are carried out by a problem solver that can engage in exhaustive search if the knowledge is insufficient. 
The problem solver extends the notions of dependency-directed backtracking with an advice mechanism. This mechanism allows advice based on a failed constraint to reorder the generators at a prior decision point allowing rapid convergence in many cases. 856 / ENGINEERING From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Based on this framework we have successfully built an expert system called PRIDE [Mittal et. al. 19861 for the design of paper transports inside copiers. In this paper we shall focus on the ideas behind the design framework and not the expert system itself. We start by describing an example of an artifact with a well-defined design space. The next section makes our notion of design-as-search more precise. The subsequent three sections describe the framework itself. We conclude with a discussion of some of the questions raised by our work. 2. Knowledge about the Artifact Being Designed We begin with a simplified example taken from the domain of paper handling systems inside copiers and duplicators. An example of an artifact A paper handling system in a copier is used to transport paper from an input to an output location, avoiding certain obstructions. One kind of paper transports are built from the pinch-roll technology. In this technology, a “baffle” is used to guide the paper along a certain path and “roll stations” are placed along this path to move the paper (see Figure 1). Roll stations consist of one or more pairs of rolls mounted in corresponding shafts. Each pair, in turn, consists of a driver roll, which is powered, and an idle roll, which spins freely. A typical design problem specifies the velocity and angle of the paper at the input and output locations of the transport, maximum acceptable skew of the paper while being transported, characteristics of the papers that will be transported (e.g., length, weight, etc), and so on. The problem is to determine the shape of the baffle, the number, position and kinds of roll stations, the properties of drivers and idlers, and many other properties of these and other components. Different kinds of knowledge There might be several kinds of artifacts, based on different technologies, that can exhibit the “same” functionality. For instance, paper transports can also be built from belt-transport technology. For each technology, it is necessary to know the kinds, and numbers, of parts (or components) and how those parts compose or interact to form the artifact. Parts might be further decomposed into other parts. Certain parts might have alternative decompositions into subparts, and it is necessary to know the conditions under which each alternative is more suitable. Parts have “relevant” properties, i.e., properties that can affect the functionality of the artifact. (e.g. width and diameter of a driver roll, which may affect the velocity with which the paper moves while passing through the station). When parts interact with other parts of an artifact, they can exhibit certain relevant behaviors (e.g., velocity of a driver, skew of the paper), which depend on properties and behaviors of these or other parts. Corresponding to each property, one needs to know what the plausible values are for that property, e.g., the different known diameters of a drive roll may be 10,20, 40 mm; the width of a driver can be between 5mm and 50mm in increments of lmm; the baffle gap can be between 2 and 1Omm in increments of 0.5 mm; etc. 
Certain properties of parts can only take values from a pre-existing set of values. This is the case when it is desirable to select parts from existing ones. For other properties it might be known how to design them taking into account the given specifications and the properties and behaviors of other parts. 3. Design as Knowledge-guided search The process of designing such an artifact can be usefully viewed as a search of a multi-dimensional space of possible designs. The dimensions of such a space are the parameters of the artifact, i.e., the structural relationships between the parts and the properties of the individual parts. For example, in the case of a paper transport, some of the dimensions would be “input velocity of the paper coming into the transport “, “lengths and widths of the different kinds of paper”, “length of the paper path”, physical characteristics of each of the driver and idler roll at each station such as diameter, width, material, and velocity, etc. Typically such design spaces are very large and searching for suitable designs can be very time consuming. Two major factors contribute to this. First, significant computation may be involved in defining a point in the space, i.e., assigning values to the different parameters. Because the space is quite sparse, in that there are far fewer acceptable designs than the ones ultimately rejected, most of the search effort may be expended in finding solutions that will be rejected later on. One approach to mitigate this problem is to analyze partial designs as early as possible, instead of waiting APPLICATIONS / 857 for the complete design. This brings us to the second cost, i.e., the computation in evaluating a design for suitability. Many of the analysis techniques are time-consuming and a design may pass one analysis only to be rejected later by another one. By appropriately ordering the generation of the design and its evaluation for suitability, some of the wasteful computation may be avoided. Given this complexity, experienced designers use knowledge of various kinds to direct their search. As discussed in the previous section, one obviously needs to have a great deal of knowledge about the artifact itself. Here we will discuss some of the knowledge used in exploring the space and directing the search. Ordering Knowledge. A simple, yet powerful piece of knowledge is information that creates an order in which decisions get made. Use of such ordering information is quite prevalent [Mostow 851. However, the characteristics of the search space which create such order are not well understood. The ordering knowledge may be simply based on the dependencies between decisions. For example, in our example problem, decisions about roll station placement depend so intrinsically on the length of the paper path that they have to be made later. A different kind of order is created by structuring the space hierarchically. By this we mean that instead of having the complete space explicitly defined, decisions along some dimension open up sub-spaces. Thus, different choices at some level could lead to very different sub-spaces being opened up for design. A simple example from paper transport domain involves choice of technology. Depending on the technology chosen such as rolls or belts, very different design spaces are opened up for further exploration. Constraints between parameters. The parameters of the design artifact are not independent. Often, they are constrained by relations. 
Some of these constraints may be derived from the explicit specifications of the particular design problem. For example, the locations and angles of the input and output of the paper transport constrain the shape of the paper path. A different set of constraints is derived from the intrinsic properties of the structure and behavior of the artifact being designed. All paper transports must satisfy some basic constraints on velocities, frictions, and forces acting on a moving paper, otherwise they will fail in their essential functionality. For example, the distance between two consecutive roll stations must be less than the smallest paper that will be transported by the paper handling system, otherwise for certain sections of the path the paper will no longer be under the control of any station. Both kinds of constraints determine the suitability of a design in terms of providing the desired functionality. The way these constraints are used is crucial in determining how efficiently the design process operates. It is well known that a generate and test model in which the constraints are primarily used to test the generated solutions will be quite inefficient. More powerful problem solvers such as dependency-directed backtracking [deKleer et al. 791 also have some well-known deficiencies. Some of these deficiencies can be compensated by using appropriate knowledge, in terms of “ordering” information based on how the variables are constrained . We have found it useful to make a distinction between tight and loose coupling between a set of variables. In the case of tightly coupled variables, a search procedure that tries to assign a value to one of these variables and then propagate it over the constraints may have to back up many times before finding a consistent solution. However, in the case of loosely coupled variables, it is often possible to find a partial order in which the variables are decided which will work with relatively small amounts of backtracking. Advice for Modification. A major piece of knowledge that expert designers seem to use when the design fails some acceptability condition (constraint) is how to modify the design. Consider a dependency-directed backtracking problem solver in constrast. It knows enough to back up to a relevant decision point but does not have any way of deciding how to modify its decision. Good designers, on the other hand, not only know where the relevant prior decision points are but also analyse the failure to decide how to modify their past decisions. Being able to advise a prior decision point (and a problem solver in general) is crucial in reducing the search. In the best case, the advice would enable a previous decision to be modified in exactly the way needed to fix the current constraint failure. In general, the advice may only help partially. In the framework we have developed, and described in the rest of the paper, this ability to advise plays a central role in problem solving and is an important 858 / ENGINEERING advance over most of the earlier approaches. 4. Structuring Design Knowledge as Plans In the previous section we identified four major kinds of knowledge that are needed during the design process: defining the dimensions of the design space; choices along each dimension; constraints on these choices; and advice for modifying some design choice. In addition, there were heuristics on ordering the decisions, structuring the space, and ordering the choices for some dimension that aid in making the design process be more effective. 
These different pieces of knowledge can be effectively integrated into knowledge structures that we shall call design plans. In this section we introduce the different plan elements and describe their structure. The next section discusses how they are used in problem solving. Goals. Plans are organized around goals for making design decisions about a set of design parameters. Each goal is responsible for a few of these parameters, i.e., it represents one or more decision points from a problem solving viewpoint. A goal also defines some of the dimensions of the design space. By this we mean that only by scheduling a goal does the design sub-space defmed by that goal become ready for exploration. In our paper transport domain, some typical goals would be “Design Paper Transport”, “Design Paper Path”, “Design Driver Roll”, and “Design Driver Width”. The first of these is a top-level goal, which can recursively expand into a tree of sub-goals (Figure 2). Each of these goals defines a space of partial designs. As we move down the goal tree fewer dimensions are considered. Thus, the goal “Design Driver Width” is concerned with only one design parameter, whereas the goal “Design Driver Roll” is concerned with all parameters of a driver roll. The former is a sub-goal of the latter. Each goal explicitly specifies the design parameters it is responsible for. Goals also specify the design parameters on which they depend. For example, the goal “Decide number and location of roll stations” specifies that it depends on knowing the paper path length. The dependency information may be either statically described or dynamically determined from the particular design method that is being tried or both. Design Methods. Design goals have different design methods associated with them, which specify alternate ways to make decisions about the design parameters of the goal. These methods capture the knowledge about the possible values of properties of components, as well as knowledge about the behavior of components. The role of the design methods is then to generate partial designs. The knowledge about carrying out a goal may be available in many different forms. This diversity is reflected by the different kinds of methods that exist in our representation. One kind of methods are generators which specify a set, or range of values to be generated. They can also encode heuristics about ordering the values, initial guesses, etc. For example, a generator method for driver width is shown in figure 3. It shows both the range of values as well as the initial choice heuristic. Another kind of methods are calculations which apply some mathematical function over a set of previously decided parameters. A calculation may be viewed as a combination of a generator and an equality constraint. This method always produces the same value for the same set of its input parameter values. Some of the other kinds of methods are procedures (which embed arbitrary computations) and constrained generators (which can look ahead to the constraints on the goal to generate values). There is another set of method types which primarily provide control knowledge on the use of other methods. A simple example are conditional methods (also called rules) which allow some conditions to be specified on the suitability of applying a method. The action part of a rule must be a method. Other examples of such control methods are rule groups and conjunctive methods. 
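To make the generator and calculation method types concrete, the sketch below renders them in Python. The class and field names are illustrative choices of ours rather than the actual PRIDE representation; the idler-width range and initial-guess heuristic follow the generator shown in Figure 3.

```python
# Minimal sketch (not the PRIDE code) of two design-method types described in
# the text: a generator that enumerates plausible values for one parameter,
# and a calculation that derives a parameter from previously decided ones.

class Generator:
    """Enumerates plausible values for one design parameter."""

    def __init__(self, parameter, min_value, max_value, step, initial_guess=None):
        self.parameter = parameter
        self.values = [min_value + i * step
                       for i in range(int((max_value - min_value) / step) + 1)]
        self.initial_guess = initial_guess   # heuristic: design -> guessed value
        self.tried = set()                   # values already proposed in this context

    def propose(self, design, advice=None):
        """Return the next untried value, honouring the initial-guess heuristic
        and, if given, a 'modify parameter' advice predicate on the value."""
        candidates = list(self.values)
        guess = self.initial_guess(design) if self.initial_guess else None
        if guess in candidates:              # try the heuristic guess first
            candidates.remove(guess)
            candidates.insert(0, guess)
        for v in candidates:
            if v in self.tried:
                continue
            if advice is not None and not advice(v):
                continue
            self.tried.add(v)
            return v
        return None                          # generator exhausted: the method fails


class Calculation:
    """Computes a parameter from previously decided parameters."""

    def __init__(self, parameter, inputs, function):
        self.parameter = parameter
        self.inputs = inputs
        self.function = function

    def propose(self, design, advice=None):
        args = [design[p] for p in self.inputs]
        return self.function(*args)          # same inputs always give the same value


# Illustration, following the idler-width generator of Figure 3:
idler_width = Generator(
    "idler width", min_value=10, max_value=100, step=1,
    initial_guess=lambda d: 2 * d["driver width"] if "driver width" in d else 40)

design = {"driver width": 30}
print(idler_width.propose(design))                           # -> 60 (heuristic guess)
print(idler_width.propose(design, advice=lambda v: v > 70))  # -> 71
```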
An important property of control methods is that they make explicit the separation between two kinds of knowledge: one for making design choices and the other for selecting a suitable set of choices or ordering the different sets of choices. Sub plans. Another kind of control method is called a subplan. These methods specify a set of goals that must be carried out in order to satisfy, the higher level goal. The actual order in which the goals are carried out is specified by the input and output dependency descriptions attached to a goal. The subplan method is the only mechanism for creating goal trees. This has some important consequences. First, alternate plans for decomposing a goal into sub-goals can be easily represented. For example, very different sub-plans exist for a goal if different technologies are available for the implementation of the goal’s specifications. Second, APPLICATIONS / 859 given that a subplan method is like any other method, it can be embedded inside control methods. This allows, for example, plan selection knowledge to be represented inside control methods. Finally, subplan methods and other more direct methods can be simultaneously specified for the same goal. In other words, a goal may be achieved in different ways. One way may be to decompose it into smaller sub-problems. Another way might be to use previously designed pre-packaged solutions. For example, the goal for “Design driver roll” may have one method which decomposes the goal into sub-goals: “design diameter”, “design width”, “decide tolerances”, “decide material”, etc. A driver designed in this way may need to be manufactured from raw stock. Another method may be a generator which selects from some standard off-the-shelf driver rolls. Typically, this latter method would be tried before the more general subplan and be so specified. Statically no distinction can be made between goals which have sub-goals and those which have direct methods. During the execution of the plan, however, some differences arise. The primary difference arises from the fact that a sub-goal is responsible for a subset of the specifications of its super-goal. In such cases, the most specific goal is held responsible for the shared design parameter during problem-solving, which is described in the next section. In addition to the method types described above, we also specify an abstract problem solving protocol that must be followed by a method. Thus, new method types can be created. In fact, the current set has evolved over the course of representing the knowIedge about paper transports. Design Constraints. The third major element of a plan are constraints on the design parameters. These constraints are attached to some goal. Typically, they would be associated with the goal for the less constrained variable, as heuristically determined by experts. However, they can be as well attached on separate goals which then depend on the goals for the constrained parameters. Notice, that much of the ordering in the plan arises from where the constraints are attached. This is because the parameters in a constraint are also used to order the goal during run-time scheduling. As we discussed in the previous section, this is very appropriate because much of the ordering seems to come from the constraints on a parameter. We view a constraint as an object which basically specifies a relation between a set of design parameters. 
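One plausible, deliberately simplified reading of "constraint as an object" is sketched below. The field names and the way advice is derived are assumptions for illustration, not the system's constraint protocol; the spacing constraint and its advice rule are guessed from the roll-station example discussed earlier.

```python
# Minimal sketch, not the PRIDE code: a constraint names the design parameters
# it relates, can be tested against a partial design, and, on failure, yields
# "modify parameter" advice for the problem solver to try.

class Constraint:
    def __init__(self, name, parameters, relation, advice_rules=()):
        self.name = name
        self.parameters = parameters      # parameters the constraint relates
        self.relation = relation          # predicate over those parameters
        self.advice_rules = advice_rules  # heuristic or derived advice, in order

    def holds(self, design):
        return self.relation(*[design[p] for p in self.parameters])

    def advice(self, design):
        """On failure, propose pieces of advice (parameter, predicate) to try."""
        if self.holds(design):
            return []
        return [rule(design) for rule in self.advice_rules]


# Example: the roll-station spacing constraint mentioned in the text; the
# separation between neighbouring stations must be less than the smallest
# paper length.  The advice rule here is an illustrative guess.
spacing = Constraint(
    "station spacing",
    parameters=("station separation", "smallest paper length"),
    relation=lambda sep, paper: sep < paper,
    advice_rules=[lambda d: ("number of roll stations",
                             lambda n: n > d["number of roll stations"])])

design = {"station separation": 250, "smallest paper length": 210,
          "number of roll stations": 3}
print(spacing.holds(design))          # -> False
print(len(spacing.advice(design)))    # -> 1 piece of advice: increase the station count
```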
These relationships may reflect the conditions on the underlying structure or behavior of the artifact, or they may be derived from the specifications of an individual problem. In the next section we elaborate on how constraints are used.

Advice for modification. The last major element of a design plan is advice to the problem solver. We have identified the need for many different kinds of advice. In this paper we will focus on only one kind, namely, modify parameter advice. This is the advice attached to constraints and activated when constraints fail. These advice descriptions can be obtained in two ways. For certain kinds of constraints one can analyze the expression and determine which parameters must be modified and how to satisfy the constraint. In many other cases, the experts know from experience which parameter may be more easily modifiable and the system can determine how much to change the parameters in order to satisfy the constraint. In our framework we can represent both kinds of advice. This implies that part of the constraint protocol is being able to automatically analyze the failure. Once a piece of advice is created, no difference is made between the heuristic (produced by the expert) and direct (produced by the system) advice. Some of the other kinds of advice we have found useful are processing advice, which advises the problem solver itself to give up or suspend a particular exploration path; selection advice, which causes a particular plan to be aborted in favor of another; and modify specification advice, which advises the user (or another system) to change some problem specification.

5. Problem Solving using the Plans

We start by describing the basic problem solver that tries to carry out these design plans. Later we will briefly describe the more extended version, which supports a more comprehensive design process. The basic problem solver comprises three major parts: i) a goal scheduler which uses an agenda to post goals, try them out, suspend them if needed, and revise them; ii) a dependency net which is created dynamically (this data structure associates a designed parameter with the goal which designed it and the goals which directly depend on it); and iii) a set of protocols which each of the plan elements is expected to follow. The protocols can be viewed as falling in two groups: initial design and revision.

Initial Design Protocol. Before a goal is run, its preconditions are checked. These are computed both from the input parameter dependencies as well as from direct dependency on other goals. The latter is a heuristic way of ordering goals which reflects processing considerations. The activated goal tries methods from its list of design methods to find the first that runs successfully. A method could cause a goal to suspend by surfacing some new dependencies. Most methods fail or succeed right away. Subplan methods, on the other hand, post new goals and suspend the higher goal. If all methods fail, then the goal fails. Notice that if the goal was embedded in a subplan method, and all but the top goal are, this failure propagates to the method and up.
Once a method succeeds, the constraints are tried. If all constraints are satisfied, the goal succeeds. If a constraint fails, however, the problem solver (often working with the user) will either relax the constraint or try to satisfy it by revising the partial design.

Revise Design Protocol. In order to revise the design the problem solver has to: i) determine what design parameter(s) to modify, ii) determine which goal to backtrack to, and iii) try to effect the change. The first piece of information comes from the advice attached to constraints. Given the advice, the dependency net is examined to determine the goal which can handle the advice. This goal is then activated in a "revise" state. The revised goal adds the advice as a new constraint. It then asks the previously executed method to revise itself if it can. Different methods handle advice differently. A generator tries to generate a different value which conforms to the advice. A calculation, on the other hand, can revise itself only by creating a new piece of advice, which may cause the problem solver to back up further. If the original method fails, then the goal searches among its other methods for the first method that succeeds. If none of the methods succeed then the advice has failed and control returns to the original point of failure. Often there are other pieces of advice that can be tried. If a method does succeed in producing a value then the constraints are checked again. If the constraints are satisfied then the advice has succeeded and design will proceed, eventually reaching the goal which originally failed and continuing beyond if the advice was appropriate. Notice that at the revised goal, some constraints which originally succeeded may now fail. This can create new advice, causing the problem solver to back up further. Also, some new constraints may have been added which can fail. In fact the calculation methods effectively propagate the advice backwards by this mechanism.

Illustration of the advice mechanism. We shall illustrate how the advice mechanism works with the help of a simple example. Consider the following two constraints on three variables x, y, and z:

x + y + z > 10    (C1)
x + y + z < 20    (C2)

Furthermore, let us assume that independent of these constraints, we also know the sets from which each of the three variables can take values:

x: {1, 3, 5}    (4)
y: {2, 4, 6, 8}    (5)
z: {1 .. 100}    (6)

One way to represent this problem in our framework is to have separate goals for x, y, and z. Let us call them Gx, Gy, and Gz. Each of these goals will have a single method, which is a generator incorporating the choice sets in (4)-(6) respectively. Let us name the methods Mx, My, and Mz. Also assume that there is no knowledge about initial guesses for these variables in the generators. Constraints C1 and C2 can be either attached to one of these goals or to a fourth one. Let us say we adopt the latter representation and call the goal with the constraints Gc. [A discussion of the differences between the two choices is beyond the scope of this paper.] In the initial design phase, the goals Gx, Gy, and Gz will be trivially satisfied (because no constraints are attached to them) by making the following choices:

x = 1; y = 2; and z = 1

However, goal Gc will fail because while C2 is satisfied, C1 is not. Constraint C1 can generate many different pieces of advice for modification:

x↑, > 7    (A1)
y↑, > 8    (A2)
z↑, > 7    (A3)
x↑ & y↑    (A4), etc.

The advice A1 means "increase x such that it is greater than 7". In this example, we will only consider advice that tries to change one variable at a time. The advice A1, when sent to the problem solver, will cause goal Gx to try to revise itself. However, the method Mx at Gx cannot find a value for x that is greater than 7, so this advice will fail. Goal Gc will then send advice A2, which also fails.
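The behaviour in this example can be reproduced with a few lines of toy code. The sketch below, with invented helper names and none of the actual problem solver's machinery, builds the initial design from the choice sets, detects the failure of C1, and tries one-variable-at-a-time advice in the order A1, A2, A3.

```python
# Toy sketch of the x, y, z example: generators for each variable, the two
# constraints C1 and C2 on a separate goal, and one-variable-at-a-time advice
# derived from the failed constraint.  Names and control flow are assumptions.

choices = {"x": [1, 3, 5], "y": [2, 4, 6, 8], "z": list(range(1, 101))}

def initial_design():
    # No constraints are attached to Gx, Gy, Gz, so each goal simply takes
    # the first value its generator produces.
    return {v: choices[v][0] for v in ("x", "y", "z")}

def constraints_hold(d):
    s = d["x"] + d["y"] + d["z"]
    return 10 < s < 20                      # C1: s > 10, C2: s < 20

def revise(d):
    # Goal Gc fails; generate advice "increase one variable beyond the value
    # needed to satisfy C1" and send it to the corresponding goal in turn.
    for var in ("x", "y", "z"):             # A1, A2, A3 in the text's order
        bound = 10 - sum(d[v] for v in d if v != var)
        candidates = [c for c in choices[var] if c > bound]
        if candidates:                      # the goal's generator can comply
            d = dict(d, **{var: candidates[0]})
            if constraints_hold(d):
                return d
    return None

design = initial_design()                   # {'x': 1, 'y': 2, 'z': 1}
if not constraints_hold(design):            # C2 holds but C1 fails
    design = revise(design)
print(design)                               # advice on x and y fail; z is revised to 8
```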
Next A3 is tried which succeeds in modifying z to 8 and now the constraints are satisfied. Notice that the revision of z will cause all goals dependent on z to be “undone” and retried. Also, even though we started with arbitrary values for the three variables, we were able to quickly find a solution. The generators keep track of the choices they have made, so the same value will not be generated again in the same context (see section 6 for more on the context mechanism). Suppose we were to impose a new constraint on z at this point: z>lO (C3) This constraint will fail creating an advice, z?, >lO (A5) This advice will cause the value of z to change to 11. The change in z will undo goal Gc which will recheck its constraints. The constraints Cl and C2 are still satisfied, so this new solution will be accepted. Notice, that if wanted to preserve the previous solution, this new constraint would be imposed in a subcontext, allowing both solutions to be explored further. Example of design revision from Pride Let us consider another example which is drawn from the paper transport domain. After the shape of the path to be followed by the paper has been defined, it is necessary to determine the number of roll stations and their locations. The placement of the stations has to satisfy various kinds of constraints [Mittal and Stefik 861. In the design phase, a heuristic is used to propose the number of stations. Using this information, a method is applied which determines ranges of placements of stations such that the relevant constraints are satisfied. If it turns out that no such placement exist because for any placements there are constraints that are not satisfied, then a redesign episode takes place. A piece of advice is generated indicating, for instance, that the number of roll stations should be increased. This requires undoing the previous decision (and all the decisions that depended on it) and making a new decision using the advice. This is illustrated in figure 4. Discussion. Some important properties of our problem solver are novel and crucial to its success. Our problem solver augments a weak-method, i.e., dependency-directed backtracking, with an advice mechanism. In other words, the dependencies between design parameters are used in determining a relevant decision point to back up to. Furthermore, the failed constraint(s) is analyzed to determine a piece of advice for the revised decision. Thus the problem solver is not only capable of searching its entire design space but still does so intelligently and directed by advice from failures. Moreover, this general search method is integrated in a framework which is knowledge-rich. This means that if knowledge exists about ordering goals or making plausible choices, it can be profitably used. Recourse is made to the general method only where sufficient knowledge does not exist or is incomplete. Finally, notice that our approach avoids another typical shortcoming of purely knowledge-based approaches which rely on heuristically determined order between goals. In our scheme even if two goals were ordered the wrong way, the advice mechanism would produce the correct result in one round of revision. This is because the advice mechanism allows constraints imposed later in design to be propagated back as advice. The same mechanism can also be used to do a rough design followed by a more precise design. Limitation. Even though the problem solver we have described can perform arbitrary search, it will clearly be too inefficient in some cases. 
One such situation arises in cases of tightly coupled variables. That is, if there is a set of variables which are so inter-constrained that no local propagation of values or advice will suffice to efficiently find a consistent solution, then one might want to look for other problem solving methods for that subproblem. For example, in the paper transport design, the roll placement problem has this property. It is important to emphasize that these special problem solvers can still be embedded in our overall framework by embedding them inside design methods. The example discussed earlier illustrated this point. This implies that the overal problem solving may still proceed as a process of solving loosely-coupled sub-problems with some backtracking, with the tightly -coupled decisions localized as a single decision-point, but still capable of being revised from the outside. 862 / ENGINEERING 6. Extended Problem Solver We briefly describe two other components of the problem solver that play a major role in supporting the overall design process but are not essential in understanding how the problem solver works. Multiple design contexts. We provide a facility for maintaining multiple design contexts [Mittal et. al. 19861. A design context contains a complete description of the artifact being designed, a complete description of the state of the design plan corresponding to that design, and the state of the problem solver. The advising mechanism makes use of the multiple contexts mechanism. Specifically, when the design problem solver processes an advice, it can do so in a separate context. This ensures that if a specific advice fails to revise the design satisfactorily, the system can back up to the context in which the advice was originated and continue with a different advice. The ability to create multiple partial designs and keep them distinct is crucial in exploring different choices simultaneously. For example, at certain choice points, one can explore the different choices simultaneously by creating a sub-context for each choice. We have chosen not to do so because of the size of the design space, i.e., the number of choice points and choices at each point are far too many. Ultimately, some incorporation of ATMS [deKleer 19861 ideas may be worthwhile. User control of the search. Pragmatically, the user and the automated problem solver have to work together. This is because of the complementary nature of their strengths. Most automated problem solvers can tirelessly search a design space, manage the dependencies, selectively undo parts of the design, and consistently check the constraints. However, they rarely have enough knowledge to avoid unnecessary work. Human problem solvers, including experts, are rarely systematic in the above activities, but often have knowledge that lets them avoid or minimize the search. It seems natural, therefore, that there be a way for the human user to steer the problem solver in more suitable regions of the search space. We provide many entry points for a user to interact with the problem solver. The advice mechanism turns out be quite suitable for many such interactions. Thus, a user can easily enter a piece of advice. This means that the user can choose to advise arbitrary goals and thereby affect the course of design. Another natural place is in the selection of advice. A failed constraint typically has alternative advices on how to satisfy it. However, it is often hard for the system to decide which advice is more likely to succeed. 
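The sketch below gives one very small reading of a design context as a copyable record, so that a piece of advice can be tried in a child context and discarded if it fails; the names are ours and the actual context machinery in PRIDE is certainly richer than this.

```python
# Minimal sketch, assuming a context can be represented as a copyable record of
# the partial design plus problem-solver state.  This illustrates the idea of
# exploring advice in a sub-context; it is not the PRIDE multiple-context code.
import copy

class DesignContext:
    def __init__(self, design, plan_state=None):
        self.design = design              # parameter -> value decided so far
        self.plan_state = plan_state or {}

    def child(self):
        """Open a sub-context in which one piece of advice can be explored."""
        return copy.deepcopy(self)

def process_advice(context, advice, constraints_hold):
    """Try the advice in a child context; keep the child only if it succeeds."""
    trial = context.child()
    advice(trial.design)                  # mutate the trial design only
    if constraints_hold(trial.design):
        return trial                      # adopt the successfully revised design
    return context                        # otherwise fall back and try other advice

# Usage with the x, y, z example of the previous section:
ctx = DesignContext({"x": 1, "y": 2, "z": 1})
ok = lambda d: 10 < d["x"] + d["y"] + d["z"] < 20
ctx = process_advice(ctx, lambda d: d.update(z=8), ok)
print(ctx.design)                         # -> {'x': 1, 'y': 2, 'z': 8}
```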
We allow the user to not only change the order of the advice but also change its content in some cases. There are many situations where the design methods are incomplete in their description of the design space. In such situations, it is natural for the user to be able to make a design decision and let the system do the rest. In fact it is possible for the user to not only make the decision but also handle the ensuing advice from a constraint failure at some subsequent goal. On a very pragmatic basis, these ‘hooks’, along with the multiple context facility, allow a user to work with the system in exploring a design space and looking at alternatives quite rapidly. 7. Discussion and Conclusions The framework described in this paper has been successfully used to build a knowledge-based system, called Pride, for designing paper transports inside copiers and duplicators [Mittal et. al. 19861. A prototype version of Pride has been ready and in field test for over a year now. It has been tested on real design problems from previous and ongoing copier projects. It has been successful in not only producing acceptable designs but also in analyzing designs produced by engineers and identifying shortcomings in their designs. The notion of plans for representing design knowledge was independently developed by Brown and Chandrasekaran [Brown and Chandrasekaran 19851. Our framework, however, is more general in many ways. First, we impose fewer restrictions on the kinds of artifacts we can handle. Second, we provide a problem solver that can search the design space more thoroughly. Finally, our multiple contexts mechanism allows different design alternatives to be simultaneously explored. Many interesting research issues are still unresolved in the work we have presented. For example, we have not explored the limitations of the advice mechanism. In particular, we have not looked at the general case where many constraints can simultaneously fail and the problem caused by conflicting advice. Another area of investigation is a categorization of constraint types and the constraint satisfaction methods that may be most suitable for each APPLICATIONS / 863 type. Another interesting issue we are investigating is the relationship between the structure and function of the artifact on one hand and the design plans on the other. This seems to be important both from the point of view of acquiring additional knowledge as well as generating the design plans more automatically. As was indicated in the introduction, the proposed framework supports the “generation of alternative designs” stage of the overall design process. We are trying to extend the framework to cover the other stages also. In particular, we want to study the processes involved in the comparison of designs according to a set of criteria. Also, we want to extend the advice mechanism to support the feedback processes between the different stages. Acknowledgements Mittal, S., C. L. Dym, and M. Morjaria. “PRIDE: An Expert System for the Design of Paper Handling Systems.” To appear in Computer (Spl. Issue on Expert Systems for Engineering Applications). July, 1986. Mittal, S., and M. J. Stefik. “Constraint Compaction: Managing Computational Resources for Efficient Search.” Technical memo, Xerox Palo Alto Research Center, Palo Alto, CA, April, 1986. Mittal, S., D. G. Bobrow, and K. Kahn. “Virtual Copies: At the boundary between classes and instances.” To appear in Proc. ACM Conf. on Object-Oriented Programming Languages, Systems and Applications (OOPSLA). 
Portland, Oregon, September, 1986.

The Pride project is a joint effort between Xerox PARC and Xerox RBG (Reprographics Business Group). Mahesh Morjaria, George Roller and many other engineers at RBG have collaborated on this project from the start. Felix Frayman has contributed many ideas and programming effort to the project. Mark Stefik has supported the work both as the manager of Knowledge Systems Area at PARC and as a research colleague. Daniel Bobrow, Felix Frayman, Ken Kahn, Mark Stefik, and the referees provided invaluable feedback on earlier drafts of the paper.

References

Brown, D. C., and B. Chandrasekaran. "Expert Systems for a Class of Mechanical Design Activity". In J. Gero, ed., Knowledge Engineering in Computer-Aided Design. North Holland, Amsterdam, 1985.

Dym, C. L., (ed.). Applications of Knowledge-Based Systems to Engineering Analysis and Design. ASME, New York, 1985.

Gero, J., (ed.). Knowledge Engineering in Computer-Aided Design. North Holland, Amsterdam, 1985.

de Kleer, J., J. Doyle, G. L. Steele, and G. J. Sussman. "Explicit Control of Reasoning". In P. H. Winston and R. H. Brown, eds., Artificial Intelligence: An MIT Perspective, MIT Press, Cambridge, 1979.

de Kleer, J. "An Assumption-based TMS". Artificial Intelligence 28:2 (1986) 127-162.

Mostow, J. "Towards Better Models of the Design Process." AI Magazine, Spring 1985.

Fig. 1: Views of a Paper Handling System (a: side view of paper path and roll stations; b: front view of a roll station, showing driver, shaft, and idler).

Fig. 2: Part of the goal tree for Paper Handling Systems, showing goal-subgoal relations and dependencies among "Design Paper Transport", "Design Paper Path", "Decide Number & Location of Stations", "Design Station", and, for each station, "Decide Number of Stations", "Generate Range of Locations", "Generate Location", "Design Driver", and "Design Idler".

Fig. 3: Generator for Idler Width (parameter: idler width; min value: 10 mm; max value: 100 mm; step: 1 mm; initial value: if driver width is known then 2 * driver width, else 40 mm).

Fig. 4: Advice example (goal: decide number of roll stations, method: divide length of path by smallest paper length; goal: generate range of locations that satisfy constraints, method: constraint compaction algorithm; constraint: maximum separation between neighboring rolls < smallest paper length - K; advice: increase the number of roll stations by one).
MAKING BEST USE OF AVAILABLE MEMORY WHEN SEARCHING GAME TREES Subir Bhattacharya and Amitava Bagchi Indian Institute of Management Calcutta P.O. Box.16757, Calcutta-700027, INDIA ABSTRACT When searching game trees, Algorithm SSS* examines fewer terminal nodes than the alphabiata procedure, but has the disadvantage that the storage space required by it is much greater. ITERSSS* is a modified version of SSS* that does not suffer from this limitation. The memory M that is available for use by the OPEN list can be fed as a parameter to ITERSSS* at run time. For successful operation M must lie above a threshold value MO . But MO is small in magnitude and is of the same order as the memory requirement of the alphabeta procedure. The number of terminal nodes of the game tree examined by ITERSSS* is a func- tion of M, but is never greater than the number of terminals examined by the alphabeta procedure. For large enough M, ITERSSS* is identical in operation to SSS*. 1. Introduction : The alphabeta procedure is the best known of game tree search algorithms. Generally formu- lated as a recursive procedure, it is quite fast in execution and uses little memory. In the pro- cess of computing the minimax value at the root of the game tree it makes a left-to-right scan of the terminal nodes: it does not examine all the termi- nals, but looks only at those that to it appear capable of influencing the root value. Detailed expositions can be found in Knuth and Moore /-2 7 and Pearl L-3 7. - - - In 1979, Stockman 14_7 announced a new game tree search algorithm called SSS* quite different in nature from alphabeta. SSS* does not examine terminal nodes of the game tree in a left-to-right manner. It views a game tree as a union of its constituent solution trees, and at each step dur- ing the search, selects for inspection the most promising among the contending solution trees. The terminal nodes of this solution tree are examined in a left-to-right manner; if this not the best solution tree in the game tree, then a time comes when a more promising solution tree is found and that is then taken up for inspection. One node from each solution tree is kept in a list called OPEN. Nodes in OPEN have associated heuristic values, and OPEN is maintained as a priority queue with nodes having higher heuristic values at higher levels. At each iteration of the algorithm the node at the root of the priority queue is selected for examination. For a uniform game tree of depth d and branching factor b, SSS* requires O(bdi2) cells of storage for the OPEN list. In contrast, the total storage required by alphabeta is O(d). SSS* has an advantage over alphabeta in that it examines only a subset of the terminal nodes examined by alphabeta. Since the running time of a game-tree search algorithm is primarily determined by the number of terminal nodes it examines, SSS* should run faster than alphabeta. According to.Pearl [3, p. 310_7, however, on the average alphabeta examines at most three times as many terminals as SSS*, and the much greater memory and bookkeeping requirements of SSS* tend to weigh the scale in favour of alphabeta. In this paper we present a modified version of SSS* which we call ITERSSS*. The memory M that is available for the list OPEN is fed as a para- meter to ITERSSS* at execution time, which then runs essentially like SSS* but not using more memory than M. For a uniform game tree of depth d and branching factor b, the minimum allowable value of M is MO = p/21 . (b -l)+lwhen the root is a MAX node. 
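Taking the threshold above to be M0 = ⌈d/2⌉(b-1) + 1 and the full SSS* requirement to be b^⌈d/2⌉ cells, a few lines of Python (ours, for illustration only) compare the two quantities for the (b, d) pairs used later in the experiments of Section 3; the reading of the threshold formula is an assumption, though it agrees with the M values reported in Table 1.

```python
# Sketch comparing the OPEN-list storage needed by SSS* with the minimum
# allowable memory M0 for ITERSSS*, assuming M0 = ceil(d/2)*(b-1) + 1 and the
# SSS* requirement b**ceil(d/2) for a uniform tree whose root is a MAX node.
from math import ceil

def sss_star_memory(b, d):
    return b ** ceil(d / 2)

def iterss_threshold(b, d):
    return ceil(d / 2) * (b - 1) + 1

# The (b, d) pairs are those used in the experiments of Section 3.
for b, d in [(2, 15), (3, 10), (5, 6), (9, 5)]:
    print(f"b={b}, d={d}: SSS* needs {sss_star_memory(b, d)} cells; "
          f"ITERSSS* can run with M as small as {iterss_threshold(b, d)}")
# b=2, d=15: SSS* needs 256 cells; ITERSSS* can run with M as small as 9
# b=3, d=10: SSS* needs 243 cells; ITERSSS* can run with M as small as 11
# b=5, d=6: SSS* needs 125 cells; ITERSSS* can run with M as small as 13
# b=9, d=5: SSS* needs 729 cells; ITERSSS* can run with M as small as 25
```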
So long as M is greater than this threshold value, ITERSSS* runs smoothly and outputs the minimax value at the root of the game tree. ITERSSS* examines more terminals than SSS* but fewer terminals than alphabeta, the number of terminals examined being a function of M. When M = b^⌈d/2⌉, ITERSSS* is identical in operation to SSS*. When M = M0, ITERSSS* examines no more terminals than alphabeta, and uses the same order of memory as alphabeta. From a programming point of view, ITERSSS* is of the same level of complexity as SSS*, and the flexibility it provides with regard to use of memory should give it an edge over both SSS* and alphabeta.

In section 2 of this paper we give formulations of SSS* and ITERSSS*, and in section 3 we describe some experimental results. Section 4 presents a few formal properties of game tree search algorithms, and Section 5 summarizes the paper and lists some open problems.

2. Algorithms SSS* and ITERSSS* :

We assume the root s of the game tree T to be a MAX (i.e. OR) node. The sons of s are then MIN (i.e. AND) nodes. The game tree T is also assumed to have a finite minimax value. A Dewey radix-b code is used for representing the nodes in T. We suppose that T is a uniform tree of depth d and branching factor b. Then i) the root s is represented by the empty sequence; ii) the sons of nonterminal node x are represented as x.j, 1 ≤ j ≤ b. The maximum possible length of the Dewey code is d digits, and terminal nodes in T have d-digit codes. The definition can be readily generalized to non-uniform trees. The nodes of T get linearly ordered by the lexicographic ordering of their Dewey codes. We also note that each terminal node x in T has a static evaluation score v(x).
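The Dewey coding has a very direct representation. In the sketch below (our choice of representation, not the paper's) a node is a tuple of branch indices, and Python's built-in tuple ordering supplies the lexicographic order used to break ties among nodes.

```python
# Minimal sketch of Dewey-coded nodes for a uniform tree of branching factor b:
# the root is the empty sequence, and the sons of node x are x.1 ... x.b.
# Tuple comparison gives the lexicographic order SSS* uses to resolve ties.

def sons(x, b):
    return [x + (j,) for j in range(1, b + 1)]

def is_terminal(x, d):
    return len(x) == d                 # terminal nodes carry d-digit codes

root = ()                              # the empty sequence
b, d = 3, 4                            # the tree of Example 2.1
print(sons(root, b))                   # [(1,), (2,), (3,)]
print(sorted([(2, 1), (1, 3), (1, 2)]))   # lexicographic: [(1, 2), (1, 3), (2, 1)]
print(is_terminal((1, 2, 3, 1), d))    # True
```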
1 has b = 3, d = 4, and 81 terminal nodes. A terminal node x is said to be examined only when its assi- aned value v(x) is computed. Alphabeta needs to do this for 41 of the terminal nodes, while SSS* needs to do this for only 28 of them. Algorithm ITERSSS* is very similar to SSS*. Here too a list OPEN is maintained, but the size of OPEN is constrained by the availability of storage. A node in OPEN, in addition to h and STATUS fields, also has a TYPE field. The TYPE of a node can be either ACTIVE or INACTIVE. The pro- cedure is invoked by calling ITERSSS* (s,M) where s is the root of the game tree and M the amount of storage that OPEN can use. It is assumed that M),b- As in SSS*, OPEN is initially empty, and ties for selection from OPEN are resolved in favour of lexicographically smaller nodes. Procedure ITERSSS* (s, M) begin SPACE := M ; OPEN := {s }; STATUS(s):= LIVE: h(s):=ob; TYPE(s) := INACTIVE ; SPACE := SPACE - 1 ; FLAG := INACTIVE : repeat X := first (OPEN, FLAG) ; case : x is a terminal node and STATUS(x) = LIVE : h(x) := min(h(x), v(x)); STATUS(X):= SOLVED; TYPE(x) := ACTIVE ; : x is a nonterminal MIN node and STATUS(x) = LIVE : remove x from OPEN ; insert x.1 in OPEN with h(x.l):= h(x STATUS (x.1) := LIVE, TYPE (x.1) := FLAG: : x is a nonterminal MAX node and STATUS (x) = LIVE : if SPACE >/b - 1 then - )I 13 begin remove x from OPEN ; insert x.j in OPEN with h(x.j):= h(x), STATUS(x.j):= LIVE, TYPE(x.j):=FLAG for 1s j < b; SPACE :=SPACE-b+l; end else begin TYPE(x) := INACTIVE ; FLAG := ACTIVE ; end: : x Zx'.j is a MAX node and x # s and - STATUS (x) = SOLVED : remove x from OPEN ; if j = b then - insert x' in OPEN with h(x'):=h(x), STATUS := SOLVED, TYPE(x') := ACTIVE else (* l< j < b*) insert x' .j+l in OPEN with h(x'.j+l) := h(x), 164 / SCIENCE STATUS(x'.j + 1) := LIVE, TYPE(x'.j + 1) := ACTIVE; : x = x' .j is a MIN node and STATUS (x) = SOLVED : for each node y # x in OPEN such that y is a successor of x' do if h(y),< h(x) then beain Y : if - remove y from OPEN ; SPACE := SPACE + 1 ; end; = inactivesucc (OPEN, X) y = null then begin remove x from OPEN ; insert x' OPEN with h(x STATUS (x') := SOLVED TYPE(x') := ACTIVE ; ‘1: =h( end else TYPE(y) := ACTIVE ; : x = null : FLAG := ACTIVE ; Until x = s and STATUS (x) = SOLVED output h(x) ; end : FLAG is an indicator that takes one of two values : INACTIVE or ACTIVE. Initially FLAG is INACTIVE, and once it becomes ACTIVE it remains ACTIVE. A LIVE node in OPEN is either INACTIVE or ACTIVE, while a SOLVED node in OPEN is always ACTIVE. An INACTIVE node can be thought of as a node that cannot be expanded because of lack of 59023 AAA memory space, and it is our intention to confine selections from OPEN to ACTIVE nodes only. An exception occurs at the beginning of the execu- tion of the algorithm when there are no ACTIVE nodes at all, and we must expand INACTIVE nodes and fill up the available memory. Thereafter only ACTIVE nodes get selected from OPEN. The func- tion first (OPEN, FLAG) returns that node x from OPEN whose current h-value is highest among all nodes (if any) in OPEN with TYPE = FLAG, ties begin resolved in favour of lexicographically smaller nodes as usual. If there is no node in OPEN with TYPE= FLAG then null is returned. The function inactivesucc (OPEN, x) returns an INACTIVE successor z of x' (where x=x'.j) such that z is at the greatest depth among all INACTIVE successors of x1 in OPEN: if no such z can be found it returns null. ITERSSS* begins with FLAG set to INACTIVE. 
So long as FLAG = INACTIVE, only INACTIVE nodes get expanded and the ACTIVE nodes, if any, in OPEN are all terminal nodes. Since M ≥ M0, the algorithm ensures that at least b terminal nodes are brought to the ACTIVE condition before storage runs out. Once FLAG = ACTIVE, INACTIVE nodes in OPEN do not participate in selection and remain in "suspended animation" until some nodes get purged from OPEN and storage is released. When storage becomes available the TYPE of only one node is changed from INACTIVE to ACTIVE; this ensures that the algorithm never gets "stuck" because of insufficient storage.

Fig. 1: The uniform game tree used in Examples 2.1 and 2.2 (b = 3, d = 4, 81 terminal nodes), with each terminal marked to show whether it is visited by alphabeta (A), by ITERSSS* with M = 5 (I), and by SSS* (S).
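The selection rule first(OPEN, FLAG) described above can be mimicked in a few lines. The sketch below assumes OPEN is a plain list of records and ignores the priority-queue bookkeeping an efficient implementation would need.

```python
# Sketch of the selection rule used by ITERSSS*: among OPEN nodes whose TYPE
# matches FLAG, return one with the maximum h-value, breaking ties in favour
# of the lexicographically smaller Dewey code; return None if no node matches.
# OPEN is assumed here to be a list of dicts, not the paper's data structure.

def first(open_list, flag):
    candidates = [n for n in open_list if n["type"] == flag]
    if not candidates:
        return None
    return min(candidates, key=lambda n: (-n["h"], n["code"]))

OPEN = [
    {"code": (1, 2), "h": 7, "type": "ACTIVE",   "status": "LIVE"},
    {"code": (1, 1), "h": 7, "type": "ACTIVE",   "status": "LIVE"},
    {"code": (2,),   "h": 9, "type": "INACTIVE", "status": "LIVE"},
]
print(first(OPEN, "ACTIVE")["code"])    # (1, 1): highest ACTIVE h, smaller code
print(first(OPEN, "INACTIVE")["code"])  # (2,)
```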
The value v is defined rf; of the solution tree T' VT' = min{v(x)l x is a terminal node in T' I A solution tree TA in T is said to be optimal if VT, , vTl for every solution tree T' in T.O / For any nonterminal node x in T, let tx be the minimax value of the subtree rooted at x. When x is a terminal node, let t = v(x). Let vx denote the minimax value of the game T ree T, i.e. VT= max i f VT' T' is a solution tree in T TABLE 1 : Number of terminals examined by alphabeta, SSS* and ITERSSS* -------- ------- ------------------------=-------------------------------------------------------------------------------- __--_--------__~------~---~----~~~-~----~~--~~--~~~~~~~~-~~~~-~~-- Serial Total number No. b d of terminal average number of terminals examined (expressed as a percentage of the total number of terminals) nodes = bd alphabeta sss* Available ITERSSS* 1 2 15 32768 storage M 12.47 8.50 9 12.40 3 10 59049 10.44 6.44 5 6 15625 16.51 11.48 9 5 59049 13.67 10.24 64 10.36 128 9.49 192 9.37 256 8.50 11 10.11 61 8.31 122 7.75 183 7.65 243 6.44 13 15.49 32 13.46 63 12.68 95 12.60 125 11.48 25 13.67 183 12.34 365 11.87 548 11.27 729 10.24 l(,(, / SCIENCE Remark : We note that vT = ts = v T;' Definition 4.2 : Let a game tree T be given. i) Let x and y be two nodes in T. We write x + y if the Dewey code for x is strictly smaller lexicographically then the Dewey code for Y- ii) Let x be any node in T. Let L(x) ={ z 1 z is a terminal node in T, z + x, and there is no solution tree in T to which both x and z belong } . Z~)tZZ 'ii both x and z is a terminal node in T, x 4 z, no solution tree in T to which belong . ) iii) Let T' be a solution tree in T, and let node x be in T'. Let left (x, T') = = i 00 if there is no terminal node z<T'such min'v(z) ' that z 4 x z is a terminal node in T', ZQX ) otherwise Now define B(x) = max left (x, T') . x E T' { 1 Note that B(s) = 00 . iv) Let T' be a solution tree in T, and let x be a node in T that does not belong to T! Let Lin (x,T') = -@if i L(x) r\ T' = + min (v(z)1 2 is a terminal node and z 4 L(x) A T' ) otherwise Rin (x, T') = -@if t R(x) n T'= + min (v(z) ( z is a terminal node and z E R(X) n T' -1 otherwise v) We now define AL (x) = max { Lin (x, T' ) 1 I AR (x) = x t$ T' max Rin (x, T' ) I x 4 T' 1 1 Again note that AL (s) = AR (s) = - 00 . With these definitions we are in a position to state some lemmas and theorems. Proofs are omitted. Related analyses can be found in Baudet ClI and Pearl [3]. Lemma 4.1 : Let node x be a successor of node z in a game tree T. Then AL (x) >/ AL (z) and B(x) < B(z). Definition 4.3 : Let the alphabeta procedure be run on a game tree T. Let x be a node in T. We say that the node x is pruned by alphabeta if no call is made to alphabemh x as an argument, i.e. if none of the terminal nodes in the subtree rooted at x are examined by alphabeta. Theorem 4.1 : When the alphabeta procedure is run on a game tree T, a node x in T is pruned iff AL (x) >/ B(x). This is the standard pruning condition for alphabeta (see 131). We have just reformulated the basic definitions in terms of solution trees. Lemma 4.1 would be needed in the proof of Theorem 4.1. Definition 4.4 : Let T be a game tree, and let SSS* (or ITERSSS*) be run on T. i) Each call to the function "first" is regarded as a distinct instant of execution. By the kth instant we mean the moment of time imme- diately following the kth time "first" returns a value. 
ii) A node x in T is examined if at some inst- ant during the execution of SSS* (or ITERSSS*), x is returned by the function "first" and x is LIVE. Lemma 4.2 : At each instant during the execution of SSS* on a qame tree T, OPEN contains exactly one node from each solution tree in T. Theorem 4.2 : Algorithm SSS* when run on any game tree T terminates successfully, i.e. it finally selects s from OPEN in the SOLVED state. At termi- nation, h(s) = vT. Theorem 4.3 : Let T be a game tree. When run on T, a node x in T is not examined. CAL(x) >/ B(x) or AR(X) > sss* iff is Remark : Lemma 4.2 clearly does not hold for ITERSSS* if we consider only the ACTIVE nodes in OPEN. However, Theorem 4.2 is still true. The foll- owing modified form of Theorem 4.3 also holds. Theorem 4.4 : Let T be a game tree. When ITERSSS* is run on T, a node x in T is not examined if AL(x) >/ B(x). - If follows that terminal nodes examined by ITERSSS* are also examined by alphabeta. 5. Conclusion : SSS* examines fewer terminals in a game tree than alphabeta but takes an inordinate amount of storage. An additional overhead is incurred in maintaining OPEN as a priority queue. These limi- tations of SSS* were noticed by Stockman [4], who suggested the use of a hybrid alphabeta-SSS* procedure when storage was in limited supply. ITERSSS* is not such a hybrid procedure, however: it is not related to alphabeta at all and can be viewed as a modification of SSS*. Its most desir- able feature is that the amount of storage M avail- able for OPEN can be supplied to it as a parameter at run time. Experiments indicate that it performs as per expectations. What would be of great inter- est is an average case analysis of the dependence on M of the number of terminal nodes examined. More extensive experimental studies are also needed to find out whether ITERSSS* outperforms alphabeta in practical situations. REFERENCES (II Baudet, G.M. "On the Branching Factor of the Alpha-Beta Pruninq Algorithm." Artificial Intelligence lO(2) (1978) 173-199. [2] Knuth, D. E. and Moore, R.W. "An Analysis of Alpha-Beta Prunning," Artificial Intelligence 6(4) (1975) 293-326. [3] Pearl, J. "Heuristics: Intelligent Search Strategies for Computer Problem Solving" Addison-Wesley 1984. 01 Stockman, G. "A Minimax Algorithm Better than Alpha-Beta?" Artificial Intelligence 12(2) (1979) 179-196. Search: AUTOMATED REASONING / 16’
AN ALGORITHMIC SOLUTION OF N-PERSON GAMES Carol A. Luckhardt and Keki B. Irani Electrical Engineering and Computer Science University of Michigan Ann Arbor, Michigan 48104 ABSTRACT function is a function which estimates what resulting value Two-person, perfect information, constant sum games have been studied in Artificial Intelligence. This paper opens up the issue of playing n-person games and proposes a pro- cedure for constant sum or non-constant sum games. It is proved that a procedure, max”, locates an equilibrium point given the entire game tree. The minimax procedure for 2- person games using look ahead finds a saddle point of approximations, while maxn finds an equilibrium point of the values of the evaluation function for n-person games using look ahead. Maz” is further analyzed with respect to some pruning schemes. I INTRODUCTION Game playing is one of the first areas studied in Artificial Intelligence (AI) [Ric83]. Most of the work has been done with games that are 2-person, finite, constant sum (and therefore non-cooperative), perfect information and without a random process involved. For example, chess and checkers involve two neoole. have a finite number of stra- tegies available to each’ player, pay the same total amount at the end of the game, each player knows the other player’s moves, and there is no chance involved. The most famous game programs are the chess players such as the Cray-Blitz, Chaos and Belle [Ne184]. This paper addresses n-person games, that is, games with more than two players, and describes a method of computer play for non-cooperative, non-constant sum games, and for cooperative games given a coalition structure. The approach has been to bring game theoretic results into the more pragmatic AI domain. the game should have when given a terminal node of a par- tial game tree. Then by the look ahead procedure, values are backed up from the terminal nodes to each node of the tree according to the minimaz searching method [Ric83]: (1) at the program’s move, the node gets the maximum value of its children, (2) at the opponent’s move, the node gets the minimum value of its children. The value that is backed up to the root node is the value of the game, and the move taken should be to a node that has that value as its backed up value. If the whole tree is avail- able to be analyzed, there is a theorem from game theory called the minimaz theorem [LuR57] that applies. It is for Z-person zero sum games. Zero sum means that the payoff values for each player add up to zero for any payoff vector. The theorem says there is a strategy that exists for each player that will guarantee that one gets at most v while the other loses at most v and the value of the game is v. This set of strategies, one for each player, is called a saddle point. For example, in the game of 2-2 Nim, initially there are 2 piles of 2 tokens. Players A and B alternate turns. Each player selects a pile and removes any number of tokens from that pile, taking at least one. The loser is the one who takes the last token. Jqll Ill-1 II BACKGROUND Trees are often used as models of decision making in AI and in game theory. From the rules or definition of a game, the game tree representation can be specified for an n-person game by a tree where [Jon80]: (1) the root node represents the initial state of the (2) (3) game, a node is a state of the with the player whose move it is attached to it, transitions represent possible moves a player An1 Aal Figure 1. 2-2 Nim . . 
(4 make to the next possible states, outcomes are the payoff assignments associated with each terminal node, which are n-tuples where the ith entry is paid to player i. Because most games of interest have combinatorially explo- sive game trees, AI programs tend to analyze partial game trees in order to determine a best move. An evaluation The terminal node value of 1 corresponds to the vector (1,O) and -1 corresponds to (0,l). S ince this is a 2-person zero sum game, the outcomes can be represented by one number. The value of 2-2 Nim is -1 which means that no matter what A does, B can always make a move that will lead to a win for B. 158 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. A technique from AI called alpha-beta pruning [Ric83] reduces the number of nodes that have to be visited when calculating the minimax values. For example, in the above game tree orderin , when doing a depth first search and backing up to B ] II , frl the left most child needs to be evaluated to get a -1 and then it is not necessary to look any further since this is the best that B can do. If a game tree has depth d and branching factor b, then in the best case of this pruning procedure, 2bd/ 2 nodes are evaluated rather than the complete b d nodes [Win77]. III N-PERSON GAMES’ Considering games with more than two players, one value will no longer suffice in representing the outcome. A vector is required for both constant and non-constant sum games. A constant sum game is one where the sum of the entries in an outcome vector is the same value for any termi- nal node. It no longer makes sense to evaluate the game based on any one player’s payoff values. Game theory solutions to non-cooperative games are usually a set of strategies for each player that are in some sense optimal, where the player can expect the best outcome given the constraints of the game and assuming the other players are attempting to maximize their own payoffs. A solution for an n-person, perfect information game is a vec- tor which consists of a strategy for each player, tslr * - * J Sn)* A strategy defines for the player what move to make for any possible game state for the player. Call the set of possible strategies for player i, Pi, and the payoff to player i, Vi. Vi is a real valued function on a set of stra- tegies, one for each player. The set {Pl, * * - YPn;UIJ * * * J un} is called the normal form of a game [Jon80]. {PI, * * - ,Pn;U1, * * . ,L$ equilibrium point for is a strategy n- tuple (Sl, - * * ,s,), such that for all i=l,..., n and si,si’ EP,, U&l, . . . ,s;’ , . . . , Sn)< U;(s1, . . . , s;, . . . , s,). The si’s are called equilibrium strategies. For example, in the game represented by, Pl P2 a1 r (-4,-4) (h-9) 1 a2 1 Pv) (-1,-l) J 1: Theorem A finite n-person non-cooperative game which has perfect information possesses an equilibrium point in pure strategies (proof in [Jon80], page 63). A pure strategy is a single (Y; or pi, as we have seen so far. The theorem just states the existence of an equilibrium point, not how to find one. IV MAXN If we have rational players who are trying to maximize their own payoffs, the backed up values should be the max- imum for each player at each player’s turn. We call this procedure Max n. The maxn procedure, maxn(node), is recursively defined as follows: (1) For a terminal node, maxn(node) = payoff vector for node (2) Given node is a move for player i, and (V lJ? ’ ’ -, maxn( jth V~j) is child of node), then maxn(node) = (vf , . . . 
, vi), which is the vector where vi’=maxvij . i Calling the procedure with the root node finds the maxn value for the game and determines a strategy for each player, including a move for the first player. This procedure can be used with a look ahead where a terminal node in the definition above becomes a terminal node in the look ahead. For example, given the payoff vectors on the bottom row, by the procedure, A should take the move represented here by the right child: (3,3,3) (2,2,5) (2,5,2) (0,4,4) (5,2,2) (4,i),4) (4,4,0) (l,i,l) Figure 3. maxn example Figure 2. 2X2 game Note that this procedure does not require that there be an where player A’s strategies are the LY,’ s and B’s are the pi’ s, and (a,b) means pay a to the first player and b to the second player, (or,&) which corresponds to (-4,-4) is an equilibrium point. The equilibrium point has the property that no player can improve his or her expected payoff by changing his or her own choice of strategy if the other stra- tegies are held fixed. A saddle point is an equilibrium point, while an equilibrium point may not be a saddle point. These non-cooperative games with perfect information are always solvable in this sense according to the following theorem. order in the moves of the example, B could follow C. players going down the tree. For The next theorem shows that maxn finds an equili- brium point. There may be more than one equilibrium point. When a tie occurs in the back up, each possible choice will lead to an equilibrium point, so it does not matter which move is selected. Theorem 2: Given an n-person, non-cooperative, perfect information game {PI ,..., P,;U, ,... U,}, in tree form, maxn finds an equilibrium point for the game. Search: AUTOMATED REASONING / 159 Proof: Backing up values in the tree by applying the maxn pro- cedure, with some tie breaker, determines a strategy for each player which gives a strategy set S=(S~,...,S,), siEPi, i=l,...,n. So, at each node for each player i, the strategy si gives the arc or move choice which maximizes the backed up value of Ui of the children nodes. In order to have an equilibrium point, we need to show that for all i, uiCs ly...ySiy...,Sn)> Ui(Sly...ySi’ ,...,Sn), for all Si' EPi* Suppose that there is some sJ’ EP,, s,’ #s, where this is not true. That is, uj(s l,m*e,Sj f*se,Sn)< UJ (Sl,em*,S~ ’ ,.**,S,J. The strategy set S’= {sr, . . . , sj’ , . . . , sn} differs from S={q, . . . , SJ, . . . f sn} in the tree only at the nodes where it is j’s turn. As we work from the terminal nodes up the tree on the path defined by maxn, Sj’ must change this path or the payoff would be the same. Let us con- sider the place where sj and sj’ first differ: , j Csj) A ('j' ) V V’ Figure 4. where the strategies differ where v=(vr, . . . ,vn) and v’=(vr’, . . . ,v,‘) are the maxn backed up values which are the payoffs for the stra- ten sets S and S’, respectively, and Vi=Ui(SIJ e * e ~Siy a e a rSn)y Vi’=Ui(S1, . . . , Si’ , . . . , Sn). From our assumption vi < vj’ but by the maxn procedure vi 2vj’ . This con- tradiction proves the theorem. • I An equilibrium point exists according to theorem 1, and it is the best a player can do if the opponents are rational, which means taking the maximum of the utilities available to them. This procedure seems a likely candidate for playing n- person, non-cooperative, perfect information games in the AI domain, that is, games to be played intelligently by a com- puter. 
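As a concrete illustration of the back-up rule just described, here is a minimal Python sketch of maxn (the tree representation, names, and sample payoffs are assumptions chosen for illustration, not the authors' implementation). A terminal node is a payoff tuple with one entry per player; an internal node is a pair [player to move, list of children].

```python
def maxn(node):
    """Return the maxn value (a payoff vector) of the game tree rooted at node."""
    if isinstance(node, tuple):              # terminal: its payoff vector
        return node
    player, children = node                  # internal node: [player to move, children]
    best = None
    for child in children:
        value = maxn(child)
        if best is None or value[player] > best[player]:
            best = value                     # keep the vector maximizing this player's entry
    return best

if __name__ == "__main__":
    # Hypothetical 3-player tree (players A=0, B=1, C=2 move in that order).
    tree = [0, [
        [1, [[2, [(3, 3, 3), (2, 2, 5)]], [2, [(2, 5, 2), (0, 4, 4)]]]],
        [1, [[2, [(5, 2, 2), (4, 0, 4)]], [2, [(4, 4, 0), (1, 1, 1)]]]],
    ]]
    print(maxn(tree))    # -> (1, 1, 1); A's best opening move is the second child
```

Any tie-breaking rule may be used when two children give the moving player the same entry; as noted above, each choice leads to some equilibrium point.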
Just as the minimax procedure with an evaluation function approximates a saddle point in two-person, perfect information games, if we use maxn with a good evaluation function, we can approximate an equilibrium point. Actually we would be finding an equilibrium point of the approximations given by the heuristic function. It is also possible to check each point and analyze it to see if it might be an equilibrium point. Maxn gives a quick result on which to base a move choice.

An estimated payoff calculation for a node does not need to be for the whole vector. The value needed immediately is the estimated payoff for the entry of the player of the parent node, in order to make a comparison to decide which value to back up. We will consider types of possible pruning related to this.

V SHALLOW PRUNING

Since in searching for the maxn value a maximum is always sought after, pruning of subtrees as in alpha-beta is not possible. However, some pruning of individual entries within the vector is possible if the payoff entries are calculated separately. A simple pruning would be to calculate the entire vector only for the best child of the terminal nodes. Only one entry from the other payoff vectors is needed. First, evaluate the payoff entry for the parent node in each of the children and find the maximum entry. Then back up the entire vector of that child. If a game tree has a constant branching factor b and we look ahead m levels, which would usually be a multiple of n, then the number of evaluations is n * b^m without any pruning. With this simple shallow pruning, rather than evaluating all n * b^m numbers, only one vector entry for each of the b^m terminals plus the rest of the vector for the best child of each of the b^(m-1) parents is calculated. Thus, the number of evaluations is

    b^m + (n-1) * b^(m-1) = b^(m-1) * (b + n - 1).

The percentage of entries evaluated is:

    (b^m + n*b^(m-1) - b^(m-1)) / (n * b^m) = 1/n + 1/b - 1/(n*b).

Note that this does not depend on the number of levels being searched.

A further improvement on this is to calculate a value only when it is needed for the next comparison. Instead of only for terminal nodes as in the simple shallow pruning, do this for all levels of nodes. Each time a child's values are backed up, the next value to the left in the vector of payoffs needs to be calculated. That is, the payoff for the player a level above needs to be calculated from the terminal node from which the backed up value came. Call this shallow pruning for n-person games. The number of evaluations out of n * b^m done with this type of pruning is:

    b^m + b^(m-1) + ... + b^(m-(n-1))
      = (b^m + ... + b + 1) - (b^(m-n) + b^(m-(n+1)) + ... + b + 1)
      = (b^(m+1) - 1)/(b - 1) - (b^(m-n+1) - 1)/(b - 1)
      = (b^(m+1) - b^(m-n+1)) / (b - 1).

The following procedure returns the payoff entry of the maxn vector and determines the strategy for the player of node as a side effect. The maxn algorithm with shallow pruning is:

    pmaxn(node);  /* returns maxn value */
    BEGIN
      IF node is terminal in the look ahead
        THEN evaluate and return the parent's payoff
        ELSE BEGIN
          FOR each child of node DO BEGIN
            v := pmaxn(child)
            IF v is the best value of the children
              THEN back up the value and child pointer
          END
          calculate the value for the grandparent of the best child and back it up also
          RETURN v
        END
    END

The algorithm is illustrated in the example in the next section.
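Following the pmaxn pseudocode above, the sketch below is one possible runnable reconstruction of shallow pruning (the representation and names are assumptions, and the tree is the same hypothetical 3-player example used earlier). A counter records how many payoff entries are actually computed, which can be compared with the unpruned n * b^m figure.

```python
n_evaluations = 0

def evaluate(terminal, player):
    """Compute (and count) one entry of a terminal node's payoff vector."""
    global n_evaluations
    n_evaluations += 1
    return terminal[player]

def pmaxn(node, player_above):
    """Back up the entry needed by player_above (the player moving at the parent).

    Returns (entry for player_above, terminal it came from, entries computed so far).
    A terminal is a payoff tuple; an internal node is [player to move, children].
    """
    if isinstance(node, tuple):                        # terminal in the look-ahead
        value = evaluate(node, player_above)
        return value, node, {player_above: value}
    player, children = node
    best = None
    for child in children:
        entry, terminal, known = pmaxn(child, player)  # child's entry for this player
        if best is None or entry > best[0]:
            best = (entry, terminal, known)
    _, terminal, known = best
    if player_above not in known:                      # value for the level above, taken
        known[player_above] = evaluate(terminal, player_above)  # from the same terminal
    return known[player_above], terminal, known

if __name__ == "__main__":
    tree = [0, [                                       # same hypothetical 3-player tree
        [1, [[2, [(3, 3, 3), (2, 2, 5)]], [2, [(2, 5, 2), (0, 4, 4)]]]],
        [1, [[2, [(5, 2, 2), (4, 0, 4)]], [2, [(4, 4, 0), (1, 1, 1)]]]],
    ]]
    player, children = tree
    entries = [pmaxn(child, player)[0] for child in children]
    print(entries.index(max(entries)), n_evaluations)  # move 1 chosen; 14 of n*b**m = 24 entries
```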
The number of comparisons done of payoff values with any of these searches is the same. At the lowest level where the terminal nodes are, for each of the b^(m-1) sets of children, there are b-1 comparisons made, and b-1 for each of the b^(m-2) groups of b nodes above, and so on, to the final b-1 comparisons at the root node. So, the total number of comparisons is:

    b^(m-1)*(b-1) + b^(m-2)*(b-1) + ... + b*(b-1) + (b-1)
      = ((b^m - 1)/(b - 1)) * (b - 1)
      = b^m - 1.

VI EXAMPLE

As an example of shallow pruning, see figure 5 for the first three moves of the 2-2-2 Nim game for three players.

[Figure 5. Three-person 2-2-2 Nim with three levels of look ahead; asterisks mark the payoff entries evaluated in shallow pruning.]

The game is played just like the other Nim games. Players alternate turns taking one or more pieces from any one group. The goal can be varied to give a different evaluation and strategy for playing. The goal in this case is to have the player before you take the last piece. The player who achieves the goal gets 1 unit of reward and the other two get nothing. The evaluation function used for the look-ahead estimate is the percentage of the possible numbers of moves left in the game which lead to a win for the player. To calculate that, first find the minimum number of moves left in the game, which is equal to the number of groups, say a for example. Then find the maximum number of moves left, which is equal to the total number of pieces left, b for example. The possible number of moves in the game ranges from a to b, or the possibilities are a, a+1, a+2, ..., b-1, b. The estimated payoff in the look ahead for player A is the number of these that are divisible by 3 (= 0 mod 3), divided by |{a, a+1, ..., b-1, b}| = b - a + 1. The estimated payoff for B is the percentage of the numbers that are equal to 1 mod 3, and for C it is the percentage equal to 2 mod 3. For example, with one group of one piece and one group of two pieces we have a = 2 and b = 3. The estimated payoffs for A, B, and C are 1/2, 0/2 = 0, 1/2, respectively. In the example given, an exhaustive search would require 39 evaluations while the shallow pruning requires 24 evaluations, or 62% of an exhaustive search. The back up procedure suggests that A should take the lower child in the representation for its first move. Ties are handled by backing up the average of the payoffs for each player, which is a possible variation with which to play. Note that when this is done, more evaluations may be needed than the stated formula suggests.

VII DEEP PRUNING

The pruning described here could be correlated to a deep cutoff, which was made distinct from a shallow cutoff by Pearl [Pea84]. A deep cutoff uses information from great grandparent nodes. When a value is backed up, the entry for the player of the grandparent node must also be sent for the comparison at the next level up. A deep pruning procedure for n-person games is:

    (1) evaluate the far left, lowest level children for the last player's payoff,
        find the best of the components, evaluate the best vector (v1, ..., vn),
        and back it up one level
    (2) IF at the root node THEN return the vector
        ELSE BEGIN
          back the vector up one level to player i
          FOR each unvisited terminal node below DO BEGIN
            IF vi < the payoff to player i at the terminal node
              THEN back up the best vector by shallow pruning
          END
        END
    (3) REPEAT (2) with the backed up node

Applying deep pruning to the example used for shallow pruning requires seven more evaluations than the shallow pruning. The game tree in figure 3 requires 16 evaluations in simple shallow, 14 in shallow and 19 in deep pruning.
Figure 6 is an example which benefits from deep pruning. The second set of payoffs shows which entries are evaluated. There are 10 evaluations with deep pruning versus 14 with shallow.

[Figure 6. Deep pruning: a game tree with its payoff vectors, and a second copy of the vectors showing which entries are evaluated.]

The best case, shown above, would evaluate b^m + n - 1 values in the general n-person game tree with constant branching factor b, m levels of look ahead, and n players. This is better than the case for shallow pruning. Deep pruning would be very useful if some predictable order of terminal nodes were available. In the worst case, at each check going down the tree, the comparison would call for a different vector value to be backed up. In that case, below each of the b^(m-1) nodes in the level next to the bottom, the number of evaluations required is: n values for the vector + b-1 values to find the best child + b-1 values of the deep pruning check, which would fail on the last node checked in the worst case. Adding this up for the b^(m-1) nodes, and subtracting (b-1) since the first set of children is only evaluated to find the best child and not for a deep pruning check, we get:

    b^(m-1) * (n + 2(b-1)) - (b-1)
      = 2*b^m + (n-2)*b^(m-1) - b + 1
      = (b^m + n*b^(m-1) - b^(m-1)) + (b^m - b^(m-1) - b + 1).

This last expression is the number of evaluations in simple shallow pruning plus (b^m - b^(m-1) - b + 1) = (b^(m-1) - 1)(b - 1).

VIII COOPERATIVE GAMES

A cooperative game is one in which communication and coalition formation is allowed between players. A coalition is a subset of the n players such that a binding agreement exists between the players. The coalition can be treated as one player with a strategy which is collectively determined. When it is the turn of a player who is in the coalition, it is the coalition's move. A coalition structure on an n-person game {P1, ..., Pn; U1, ..., Un} is a partition of {1, ..., n}. Call the partition S = {S1, ..., Sm}, where Si = {q_i1, ..., q_iz_i}, q_ij ∈ {1, ..., n}, Si ∩ Sj = ∅ for all i ≠ j, and S1 ∪ ... ∪ Sm = {1, ..., n}. We can now use maxn for cooperative games by the following theorem.

Theorem 3: For any coalition structure {S1, ..., Sm}, Sj = {q1, ..., q_zj}, on a cooperative n-person game {P1, ..., Pn; U1, ..., Un}, maxn finds an equilibrium point for the m-person non-cooperative game {R1, ..., Rm; W1, ..., Wm}, where Rj = P_q1 x ... x P_qzj and Wj(r1, ..., rm) is the sum of Ui(s1, ..., sn) over the players i in Sj.

Proof: Apply theorem 2 to the non-cooperative game {R1, ..., Rm; W1, ..., Wm}. In the tree form, it is Ri's turn whenever it is the turn of a player who is in the coalition Ri.

Assuming a coalition structure has been determined and will remain constant for a cooperative game, maxn can be applied to the resulting non-cooperative game with a meaningful result. Maxn can be used in determining a move for a computer in n-person games under these conditions.

IX CONCLUSIONS

As an answer to how a computer should play n-person, non-cooperative games, maxn with pruning is a satisfactory approach given a good evaluation function. In the best case situation, deep pruning does the least number of evaluations, but in the worst case for deep pruning, it does worse than even the simple shallow pruning. Shallow pruning does fewer evaluations than simple shallow pruning; however, more traveling by pointers in the tree is required. For cooperative games with a given coalition structure, maxn will find an equilibrium point as a possible solution of the game and determine a strategy for a coalition.
Using this approach, we are looking at the question of what are the best coalitions to be formed. The maxn algorithm might also be applied to imperfect information games or games with chance involved.

REFERENCES

[Jon80] Jones, A. J. Game Theory: Mathematical Models of Conflict. West Sussex, England: Ellis Horwood, 1980.
[LuR57] Luce, R., and Raiffa, H. Games and Decisions. New York: John Wiley & Sons, 1957.
[Nel84] Nelson, Harry. "How we won the Computer Chess World's Championship." In Lawrence Livermore National Laboratories Tentacle (excerpt from DAS Computer Science Colloquium). LLNL, Livermore, CA, January 1984.
[Pea84] Pearl, Judea. Heuristics. Massachusetts: Addison-Wesley, 1984.
[Ric83] Rich, Elaine. Artificial Intelligence. USA: McGraw-Hill, 1983.
[Win77] Winston, Patrick H. Artificial Intelligence. Reading, Mass.: Addison-Wesley, 1977.
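A minimal sketch of the reduction used in Theorem 3 (the representation, names, and sample payoffs are assumptions for illustration, not the authors' code): each coalition is folded into a single "player" whose payoff is the sum of its members' payoffs, after which maxn applies unchanged to the reduced game.

```python
def maxn(node):
    """maxn back-up: terminals are payoff tuples, internal nodes are [player, children]."""
    if isinstance(node, tuple):
        return node
    player, children = node
    return max((maxn(c) for c in children), key=lambda v: v[player])

def reduce_to_coalitions(node, structure):
    """Map an n-player tree to an m-player tree, one player per coalition S_j."""
    coalition_of = {p: j for j, coalition in enumerate(structure) for p in coalition}
    if isinstance(node, tuple):                       # payoff vector -> per-coalition sums
        return tuple(sum(node[p] for p in coalition) for coalition in structure)
    player, children = node
    return [coalition_of[player], [reduce_to_coalitions(c, structure) for c in children]]

if __name__ == "__main__":
    # Hypothetical 3-player tree; players 0 and 2 form a coalition against player 1.
    tree = [0, [[1, [(2, 0, 2), (0, 3, 0)]],
                [1, [(1, 1, 1), (4, 0, 0)]]]]
    structure = [{0, 2}, {1}]                         # S_1 = {A, C}, S_2 = {B}
    print(maxn(reduce_to_coalitions(tree, structure)))   # -> (2, 1) for the reduced game
```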
CHOOSING DIRECTIONS FOR RULES Richard Treitel and Michael R. Genesereth* Logic Group, Knowledge Sytems Laboratory, Computer Science Dept., Stanford University ABSTRACT In “expert systems” and other applications of logic pro- gramming, the issue arises of whether to use rules for for- ward or backward inference, i.e. whether deduction should be driven by the facts available to the program or the ques- tions that are put to it. Often some mixture of the two is cheaper than using either mode exclusively. We show that, under two restrictive assumptions, optimal choices of directions for the rules can be made in time polynomial in the number of rules in a recursion-free logic program. If we abandon either of these restrictions, the optimal choice is NP-complete. A broad range of cost measures can be used, and can be combined with bounds on some element of the total cost. I INTRODUCTION In logic programming, decisions about which inference direction to use, based on rough estimates of the compu- tational costs of each direction, are frequently taken by users, We would like to automate the choice between for- ward and backward inference, at least with respect to the cost of computation. Other optimisations, such as rule or- dering and ordering of terms within rules, will be ignored, as will other considerations that could affect the choice of inference direction. The conflicts between forward and backward computation can be outlined as follows: solving a goal backwards may be much cheaper than doing the corresponding forwards deduction, because more variable bindings are available to constrain the computation. Or it may be more expensive, because several rules are applied, only one of which has enough facts to solve the goal. A fact that has been deduced forward and stored can be re- used many times, or it may never be used at all. We will show how to attach numerical estimates to these factors and optimise the trade-offs. A. Statement of problem We consider a system whose inputs are a set F of facts (ground atomic formulae), a set R of rules (sentences hav- ing implicational force), and a set G of goals (which may be conjunctions). Rules and goals may contain variables, *This work was supported by the Office of Naval Research, the National Institute of Health, and Martin-Marietta under contracts N00014-81-K-0004, NIH 5P41 RR 00785, and GH3-116803 which are assumed to be universally quantified if in rules and existentially quantified if in goals. No function sym- bols appear in R or G. The purpose of the system is to solve each goal from G using the facts F and the rules R, and, in the case of goals containing variables, to find all the sets of variable bindings which make the goal true. The deductive mechanism used for both forward and backward inference will be a restricted form of resolution, and the members of F, R, and G are assumed to be in conjunctive normal form. We impose the restriction that clauses in R be Horn clauses, i.e. have exactly one positive literal. This literal is the consequent of the rule, the others being its antecedents. The problem we consider is that of choosing an optimal subset Rf of R to be used forwards. Optimality is defined with respect to the sum of the times taken by all the deduc- tions. For a program whose database of deduced clauses was kept from one run to the next, the daily cost of renting disk space would have to be added to the cost of the CPU cycles consumed per day. 
The set of facts for which stor- age costs are incurred can be changed, as discussed below. Bounds may be imposed on the space available for stor- ing facts, or on the time taken by forwards inference or backwards inference or both. B. Notation We define a directed graph, called the rule graph, whose nodes are the members of R and which has an arc from a rule r to a rule s iff r’s consequent is unifiable with one of s’s antecedents. We say that s is a successor of r, and r a predecessor of s. The rule graph will be required to be acyclic, since the work we have done to date does not include techniques for estimating the costs of using recursive rules. We may add F and G to the rule graph, in the obvious places: a fact from F is the predecessor of those rules whose antecedents unify with it, and a goal from G is the successor of those rules whose consequents unify with it. We do not represent individual members of F and G in the rule graph, but sets of facts or goals that match some pattern; these patterns, rather than the individual facts or goals, are used to construct arcs in the rule graph. In Figure 1, we have put the facts at the bottom and the goals at the top. For a node r in the rule graph, representing a rule from R, let ef(r) be the cost of using it forwards, and eb(r) the cost of using it backwards, assuming in each case that Search: AUTOMATED REASONING / 153 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Figure 1: A fairly tangled rule graph its antecedent facts are available in the database. Below we describe how to estimate these costs. We also define an indicator variable z)(r) for each r, with a value of 1 to denote using the rule forwards, and 0 for backwards. A complete set of values of v(r) for all rules 7‘ will be called a strategy. Et(r) always depends on the structure of R and on the numbers of facts in that part of F from which T’S inputs are obtained, and eb(r) can depend both on these and on that part of G from which subgoals that invoke T are obtained. Moreover, eb(r) will depend on whether any of T’S succes- sors are used forwards. We can eliminate this dependence by simply insisting that all successors of a backwards rule be used backwards themselves, or equivalently, that RI is closed under the operation of taking predecessors. We shall call this coherence, i.e. a coherent strategy will be one in which no rule is used forwards unless all its predecessors are. Some deductive systems do in fact enforce coherence, and others have a bias towards it. II THE OPTIMAL COHERENT Namely, the total cost of a strategy (represented as a set of values for the V(Z)) is c 4+v(x> + (1 - G9)44 x and the coherence constraint turns into inequalities v(y) 2 V( 2) for each predecessor y of x. If this expression for the cost is minimised subject to these inequalities, the resulting values of the variables V(X) give the optimal solution to the original problem. This integer program is in fact a linear program, and hence solvable in time polynomial in the number of con- straints, which is polynomial in the total number of literals in R. This is important, since integer programming in gen- eral is NP-complete. To see that we really have a linear program here, we must prove that if the above constraints were augmented by the inequalities 0 < V(X) 5 1 for all x and the whole system solved as a linear program, the so- lution obtained would in fact give integer values to all the V(X). 
This is done by showing that a solution in fractional values would not be a vertex of the simplex defined by the constraints. It then follows that if the Simplex Algorithm is applied to the above equations and these constraints, it will find a solution in integers, which will be the solu- tion to the integer program, corresponding to the optimal strategy. [4] has considered a similar problem, different from this one mainly in the imposition of an upper bound on the total amount of storage available for the facts deduced (it is hard to tell whether he requires his strategies to be co- herent). Such a bound can be added to this problem very simply, for if e,(x) is the estimated amount of space taken up by the facts deduced by rule x, and A is the total space available, we just add the inequality STRATEGY to the linear program. The same would apply to a time bound or to a bound on any expression linear in the V(X), In this paper we examine only the case where all strate- when combined with any cost function linear in the u(i). gies are required to be coherent. If additionally no rules Roussopoulos also expects that (translating into our ter- generate duplicate answers (the same answer deduced from minology) the only facts stored permanently are those that different sets of facts), then the optimal set R, can be are inputs to some backwards rule. Until now we have found by any linear programming method. The optimisa- lumped space costs with time costs, thus assuming that tion problem is NP-complete if there are such rules. By every fact is stored. We can change this by re-defining the treating them separately from others, we can find the op- cost of a strategy as timal set RJ in time bounded by a polynomial in the to- tal number of rules times an exponential in the number c v(X)e&) + (1 - +&b(x) + ++%(x> X of “bad” rules (and much more quickly than this in most cases). A. No Duplicate Answers where e,(x) is the estimated cost of storing the facts de- duced by rule x. The new variable V’(X) is made to be 1 if x is a forward rule with a backward rule among its successors, and 0 otherwise, by the constraints Under the two restrictions mentioned above (coherence of strategy and no duplicate answers), the cost estimates for all successors y of x. If x has no successors except goals q(X) and eb(x) can be made to have only F, R, and G then V’(Z) is made identical to V(X). The rule graph must as implicit arguments. In particular, they do not depend also be extended to include nodes for the facts in F, each on the directions of rules other than x. Then the problem of which has zero values for et(z) and eb(z) and 1 for V(X). of finding the set of rules to be used forwards in a least These changes roughly double the numbers of variables and total cost strategy is just an integer programming problem. constraints in the linear programming problem. 154 / SCIENCE B. Duplicate Answers Present III THE DEDUCTIVE Some rules can generate duplicate answers to a goal, METHOD corresponding to different values of some variable which appears in the rule’s antecedents but not in its consequent. In order to describe the estimation of the computational For example, given the rule costs of using rules, we must specify precisely the version of resolution which we assume to be used for deductions. if facts A(l, 2), B(2,3), A(l, 4), and B(4,3) were available, then C( 1,3) could be deduced twice, once with y = 2 and once with y = 4. 
If the rule’s conclusions are being stored in the database, the duplicates will disappear, but if the rule is being used in backwards inference and its conclu- sions forgotten as soon as they are used, then duplicates will not even be detected. If some rule ri has a predecessor r2 which generates du- plicates, the number of inputs supplied to rr will depend on whether r2 is used forwards or backwards, and this clearly affects the cost of using ~1 backwards. The cost of using other predecessors of rl can also be affected by this, be- cause the number of clauses resolving against their positive literal can change. So if rs is such a rule, the cost of us- ing rs backwards will depend on Z)(Q). Only the backward costs can be affected, for a rule used forwards in a coherent strategy receives no duplicates from its predecessors. This makes it impossible to express the cost of r3 as a linear function of I and v(r2), since the value of v(r2) affects it only when v(rs) = 0; terms containing the product of the two indicator variables would be required to express this. Thus the linear program cannot be used. In fact, the task of optimally choosing the u(z) under these conditions is NP-complete [6], so that it cannot be solved by linear programming unless P=NP. But this is not as discouraging as it may seem to be. Considerable care was required to construct the logic program used in the proof, and we anticipate that most programs encountered in practice would not display the features that make it necessary to explore many strategies. We therefore expect that a heuristic-guided A* search could solve most such optimisation problems quite fast. The appropriate heuristic for this search is a lower bound on the cost of any strategy containing a given partial strut- egy (a set of values for some of the V(Z)). Such a lower bound can be obtained by assuming that all duplicate an- swers are magically eliminated. This allows us to make estimates of the costs of using all rules in each direction, which will certainly be no higher than the true costs. Feed- ing these estimates, and the values for those V(X) that are included in the partial strategy, into a linear programming algorithm, we get a cost which cannot be higher than that of any complete strategy that extends the partial strategy. As soon as the rules which generate the most duplicates (relative to their number of unique answers) are in the par- tial strategy, the lower bound will be fairly accurate, so the search will be well focussed towards good strategies if these rules are among the first ones added. And once a partial strategy includes directions for all rules that can generate duplicates, the optimal complete strategy containing it is returned immediately by the linear program. Binary resolution is sound and complete but inefficient. We impose on it the restriction that the complementary lit- erals which are resolved away must each be the first in their respective clauses. This is similar to lock resolution [l]. Thus the number of possible resolutions on a given clause set is substantially reduced, but in general completeness is sacrificed. With some more effort it can be restored. The implementation that we shall consider includes a stack or agenda of clauses to be resolved against. To use a clause, we add it to the agenda. We then repeatedly pop the top clause off the agenda, store it in the database per- haps, resolve it against all possible clauses in the database, and add the resolvents to the agenda. 
The literals from the clause that was found in the database come before those from the clause taken off the agenda (this causes subgoals to be solved before work on their parent goal is resumed), and the order of literals within each parent clause is car- ried over unchanged. When the agenda is empty, we have deduced all the consequences of the new clause. When the clauses entered by the user are all Horn, and the non-unit clauses have their positive literal at one end or the other, this kind of resolution begins to look very much like traditional forward or backward chaining. Consider a rule A(x, y) & B(y, z) + C(z, z), which can be written in two ways: With the first of these, facts like A(1,2) and B(2,3) will resolve, in that order, giving C(l, 3), which is just what would emerge from forward inference. The second form of the rule can resolve with a goal like C(V, w) to give a clause lA(v, y)lB(y, w), which is just a conjunction of subgoals whose answers would give the answer to the goal, and this looks like a backward chaining step. For forward inference to be complete, either the facts must be presented in the same order as that in which the negative literals of the rule appear (in the above example, the rule could not resolve with B(2,3) unless A( 1,2) had already been taken off the agenda) or else all facts, includ- ing those deduced by forwards rules, must be kept in the database for as long as there is any rule that might wish to resolve against them. If B(2,3) was in fact taken off the agenda first, it would have to be stored until A( 1,2) came along and generated -Q3(2, z)C( 1, z), which would then re- solve against it. In general it would also be necessary to store the non-unit resolvents. An alternative to this would be to have several versions of a forward rule, namely one beginning with each input literal. The version that began with the literal corresponding to the last of a set of facts to be presented would resolve against this fact and then against all the others, if they had been stored; there would be no need to keep intermediate resolvents. This would Search: AUTOMATED REASONING / 155 lead to extra deductive cost due to abortive use of versions of a rule when not enough facts were there for it to succeed. IV ESTIMATING COSTS OF DEDUCTION We estimate the costs of running our form of resolution on a set of rules, facts, and goals by means of a simulation, in which we represent each set of similar clauses that will arise during the computation by a clause, called the set’s pattern, and a number, namely the expected number of instances of that pattern that will be generated. The sets F and G are represented this way, since we do not expect to know exactly what facts or goals will be in them. If we regard a clause in R as having itself as pattern and the number 1.0, then we see that the basic step needed for estimating costs is to simulate resolutions between pairs of such clause sets and estimate their costs. The simulator accepts the sets of patterns from F, R, and G as input, and obtains descriptions (in terms of pat- tern and set size) of all the sets of clauses that will be generated. It has an agenda like the one used for the reso- lution, so that the effects of putting clauses on the agenda in different orders can be simulated. We can combine the output of the simulator with knowledge about how long each elementary operation (unification, substitution, and so on) will take, to arrive at actual time estimates. 
It is then necessary to decompose the simulated cost into a sum of rule costs. A. The Number of Resolvents Here we describe how to estimate the number of resol- vents generated at each node in the rule graph, given esti- mates for the numbers of propositions matching each pat- tern in F and G. We also need to know, for each variable in any clause of R, the size of the domain of values over which that variable will range. This is important for two reasons. First, some of the equations involved (which have been omitted for space reasons) are couched in terms of the probability of a typical instance of some clause pat- tern being generated, so that in order to derive a cost esti- mate, we need to know the number of potential instances of this pattern. Domain sizes also affect the probability that two variables will have been bound to different val- ues, which in turn affects the chance that a unification will be successful; however, this probability may be known in- dependently. Note that “domain” here refers to the set of values expected to be encountered during a particular run of the program, rather than to a set of theoretically possible values. 1. Simulated unification It is useful to distinguish the set of variables in a pat- tern which will have had constants substituted for them at the time when unification is attempted. We call these “bound variables” of the pattern, meaning that the sim- ulation must know that they will be bound at run- time to constants whose values are not known yet. The other variables in the pattern will be referred to as its “free vari- ables”, or “variables” if there is no ambiguity. Now clearly the pattern of a set of resolvents will be just the result of resolving the patterns of the two parent sets. How- ever, when a pattern that has bound variables is unified with another pattern, this represents some unifications at run-time in which constants will have been substituted for these bound variables, and the unification may fail if two unequal constants have appeared. So, when the simulator is unifying two patterns, it must take special note of their bound variables. In the absence of specific information, we can estimate the probability of successful unification between a bound variable and a constant or another bound variable by as- suming that all values in the domain of the bound vari- able are equally likely to occur. This is called the equal frequency assumption: no value appears more often than another of the same type. The probability of a unification succeeding is then the reciprocal of the number of possible values in the domain. If the distribution of actual constants in the facts and goals does not conform to the equal frequency assump- tion, the estimated numbers of resolvents may be arbitrar- ily badly wrong. Two safety mechanisms are possible for this. The first is to specify that some value is going to be over- or under-represented relative to the average; this could be done for several values. The second is to allow the user to give the probability of successful unification di- rectly. For example, in a Computer Science Department where all the students have ages between 12 and 50, the probability of a random student being the same age (in years) as another may actually be 0.2 or so. The simulator can simply use this value instead of subtracting 12 from 50 and taking the reciprocal. 2. 
Estimating set sizes The estimated number of clauses in the set of resolvents is the probability of successful unification times the num- ber of attempted unifications (which is just the product of the estimated numbers of clauses for the parent sets). In general, two literals being unified by the simulator may contain several pairs of constants or bound variables that must be equal for unification to succeed. We make an ur- gument independence assumption, under which the event of one pair being equal is independent of other pairs, so the probabilities can be multiplied. However, some of the bound variables in the parent pat- terns may not correspond to anything in the resolvent pat- tern. It may happen that some pairs of parent clauses will differ only in the binding of such a variable, so that du- plicates of some instances of the resolvent can occur, as was indicated above. If th e resolvents are stored in the database, these duplicates will presumably be detected and eliminated, reducing the number of clauses that are avail- able to subsequent resolutions. The appropriate changes to the estimated set sizes have been given in [5]. Given directions for each rule in R, and the patterns and 156 / SCIENCE estimated sizes for sets of terms in F and G, we can now iteratively obtain descriptions of all the sets of resolvents generated. This approach is clearly not adequate for deal- ing with recursive rules in R, which correspond to cycles in the rule graph. Techniques for dealing with recursive rules are being investigated by many researchers [3,2,7]. B. Breaking down the costs Since a clause pattern can be derived via a sequence of resolutions involving several rules from R, we need some way of assigning the costs associated with a set of clauses to one and only one rule, or perhaps to a goal, so that the total cost of a strategy is equal to the sum of costs over all the rules and goals, and so that the cost numbers for each direction of each rule accurately reflect the consequences of using that rule in that direction. We make this assignment by considering the first literal of the clause set’s pattern, which must have been obtained by a.pplying some number (possibly zero) of substitutions to a literal of a rule or goal from R or G. We charge the costs associated with the set against this rule or goal. Minor adjustments must be made to this even in the coherent case, since the number of database lookup op- erations done by a rule depends on which, if any, of its predecessors are used forwards. It turns out to be possible to remove this variability in the cost of a rule by assuming that it looks up all its inputs in the database, and then ad- justing the costs of backward rules to reflect the fact that their answers do not get looked up and so do not contribute to lookup costs. In the incoherent case, it is impossible to define the cost of using the rule backwards independently of the rest of the strategy it is used in. V CONCLUSIONS We have shown how a certain optimisation on logic pro- grams can be performed cheaply under a fairly commonly encountered set of conditions. It is difficult to quantify the benefits available from this optimisation, since prob- lems can easily be conceived which would take arbitrarily long to solve if only one of forward and backward inference were used, but are soluble in modest amounts of time by an appropriate combination of the two. 
Human program- mers, confronted with such problems, will usually make sensible choices; the claimed advantages for this procedure are that it gives the precisely optimal strategy, and that it can easily be tailored to the performance of any inference engine by adjusting the calculations of ej(z) and es(z). Note that although the cost estimation methods fail on recursive sets of rules, the optimisation algorithms do not. If estimates ef and eb were available for such rules, the coherence condition requires that any set of mutually re- cursive rules be used in the same direction as each other, so for the purposes of optimisa.tion they could be treated like one rule, and the linear pr0gra.m or the search algorithm could be used. The problem of finding the optimal incoherent strategy, under the assumptions used here, is discussed in [6]. The obvious next extension to this work will be the study of how to optimise the ordering of negative literals within clauses together with the directions in which the clauses are used. Another important direction for future research will be the investigation of “adaptive” or “mixed” methods, which use information gathered at run-time to change or control a generic strategy devised at compile-time. PI PI PI WI PI 161 PI REFERENCES Robert S. Boyer. Locking: A Restriction of Resolu- tion. PhD thesis, University of Texas at Austin, Au- gust 1971. L.J. Henschen and S.A. Naqvi. Compiling queries in recursive first order databases. Journal of the ACM, 31( 1):47-85, January 1984. D. P. McKay and S. Shapiro. Using active connec- tion graphs for reasoning with recursive rules. In Pro- ceedings of the Seventh IJCAI, pages 368-374, August 1981. Nicholas Roussopoulos. Indexing views in a relational database. ACM Transactions on Database Systems, 7(2):258-290, June 1982. D. E. Smith. Controlling Inference. PhD thesis, Stan- ford University, July 1985. R.J. Treitel. Sequentialising Logic Programs. PhD the- sis, Stanford University, 1986. Jeffrey D. Ullman. Implementation of Logical Query Languages for Databases. Technical Report STAN-CS- 84-1000, Stanford University, May 1984. Search: AUTOMATED REASONING / 157
EDITORIAL COMPREHENSION IN QpEd THROUGH ARGUMENT UNITS* Sergio J. Alvarado Michael G. Dyer Margot Flowers Artificial Intelligence Laboratory Computer Science Department 353 1 Boelter Hall University of California Los Angeles, CA 90024 ABSTRACT This paper presents a theory of reasoning and argument comprehension currently implemented in OpEd, a computer system that reads short politico-economic editorials and answers questions about the editorial contents. We believe that all arguments are com- posed of a fixed number of abstract argument structures, which we call Argument Units (AUs). Thus, argument comprehension is viewed in OpEd fundamentally as the process of recognizing, instantiating, and applying argument units. Here we discuss: (a) the knowledge and processes necessary to understand opinions, arguments, and issues which arise in politico-economic editorials; and (b) the relation of this research to previous work in natural language understanding. A description of OpEd and examples of its current input/output behavior are also presented in this paper. I. INTRODUCTION An intelligent computer program must be able to understand people’s opinions and reasoning. This requires a theory of the processes and knowledge sources used during reasoning and argu- ment comprehension. To develop such a theory, we have studied the problems that arise in understanding newspaper and magazine editori- als which convey writers’ opinions on politico-economic issues. This theory has been implemented in OpEd (Opinions to/from the Editor), a computer program that currently reads two short politico-editorial segments and answers questions about the editorial contents. Thus, OpEd also includes a theory of memory search and retrieval for rea- soning and argument comprehension. What are the computational issues currently addressed in OpEd? To illustrate the nature of the issues involved, consider the following editorial segment by Milton Friedman (1982): ED-JOBS Recent protectionist measures by the Reagan administration have . . . disappointed . . . us . . . [Voluntary] limits on Japanese . . . automobiles . . . [and] . . . [voluntary] limit[s] on steel . . . by the Com- mon Market . . . are . . . bad for the nation . . . They do .., [not] . . . pro- mote the long-run health of the industries affected . . . The . . . prob- lem of the auto[mobile] and steel industries is . . . in both industries, average wage rates are twice as high as the average . . . Far from saving jobs, the limitations on imports will cost jobs. If we import less, foreign countries will earn fewer dollars. They will have less to spend on [American] exports . . . The result will be fewer jobs in export industries. Understanding ED-JOBS requires: (1) having a large amount of domain-specific knowledge, (2) recognizing beliefs and belief rela- tionships, (3) following reasoning about plans and goals, (4) having abstract knowledge of argumentation, (5) mapping text into concep- tual representation, and (6) indexing recognized concepts for later retrieval during question answering. (1) Domain-Specific Knowledge: OpEd has a computational model of general politico-economic knowledge which helps it make sense of the discussion about import restrictions. OpEd knows about nations, consumers, workers, jobs, wage rates, imports, and exports. OpEd is also be able to handle references to politico-economic goals, plans, events, and states, such as: saving jobs, protectionist policies, importing goods, and drops in earnings/spending. 
* Tlus work was supported in part by a grant from the Keck Founda- tion. The first author was also supported in part by an IJCAI-85 Doc- toral Fellowship and the second author by an IBM Faculty Develop- ment Award. (2) Recognizing Beliefs and Belief Relationships: A basic problem in editorial comprehension is identifying the writer’s explicit and implicit beliefs and how they support one another. For example, after reading the first sentence of ED-JOBS, OpEd infers that Fried- man is against the Reagan administration’s protectionist policies, although this opinion is not explicitly stated. OpEd is also able to recognize other individuals’ beliefs and how they are supported or attacked by the writer’s beliefs. For instance, OpEd understands that in the sentence “[These import restrictions] do not promote the long- run health of the industries affected,” Friedman attacks the implicit belief of the Reagan administration that the limitations will help the American automobile and steel industries. (3) Reasoning about Plans and Goals: OpEd identifies and keeps track of chains of reasoning which support beliefs about goals and-plans. This requires: (1) recognizing explicit and implicit cause- effect relationships and (2) applying OpEd’s politico-economic knowledge to aid the recognition process. For example, when pro- cessing ED-JOBS, OpEd realizes that Friedman’s belief that import restrictions will cost jobs is supported by a cause-effect chain on how reductions in imports to the U.S. cause reductions in exports by the U.S. and, consequently, reductions in jobs in U.S. export industries. (4) Abstract Knowledge of Argumentation: OpEd has abstract knowledge of argument structure which is independent of domain- specific knowledge, i.e., knowledge fundamental to understanding and generating arguments in any domain. This abstract knowledge of argumentation is organized by memory structures called Argument Units (AUs) (Alvarado et al., 1985a, 1985b). For example, in ED- JOBS, Friedman uses the following argument unit: AU-OPPOSITE-EFFECT Although OPPONENT believes that his PLAN P achieves GOAL G, SELF does not believe that P achieves G because SELF believes that P thwarts G. Therefore, SELF believes that P is BALI. Thus, Friedman argues that he is against limitations on imports because they will not save but cost jobs. During editorial comprehen- sion, OpEd recognizes and applies this argument unit to understand Friedman’s attack on the Reagan administration’s policies. (5) Mapping Text into Conceptual Representations: OpEd keeps track of the conceptual contents of the editorial by building and maintaining an internal conceptual model of all recognized beliefs, belief relationships, reasoning chains, and argument units. This con- ceptual model, known as an argument graph (Flowers et al., 1982), represents explicitly beliefs supports and attacks as well as relation- ships among politico-economic plans, goals, events, and states. To build the argument graph, OpEd parses words or phrases into concep- tual structures and integrates these structures into the graph. This is not a trivial process, since mapping editorial text into conceptual representations involves handling numerous tasks including: (a) disambiguating words; (b) resolving pronoun references; (c) recogniz- ing, instantiating and applying conceptual structures; and (d) inferring implicit information by applying appropriate knowledge sources. 
For example, OpEd understands that in the phrase “the health of the [American automobile and steel] industries,” the word “health” does not refer to the physical state of the industries but rather to their economic well-being. . 250 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. (6) Question Answering: During question answering, OpEd accesses, retrieves, and generates into English beliefs, belief relation- ships, and argument units. OpEd has search and retrieval processes capable of gaining initial access to the argument graph. Initial entry is provided by indexing structures created during editorial comprehension. These structures index instances of domain-specific objects, plans, goals, events, states, causal relationships, reasoning chains, beliefs, belief relationships, and argument units. The theory of question comprehension, indexing, and retrieval implemented in OpEd was originally developed by Lehnert (1978) and extended in (Dyer and Lehnert, 1982) and (Alvarado et al., 1985a). Consider the following question posed to OpEd after reading ED-JOBS: Q: What is the result of the limitations on imports? A: MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION WILL THWART THE PRESERVATION OF JOBS FOR U.S.. MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION DO NOT LEAD TO THE ACHIEVEMENTOF NOR- MAL PROFITS OF THE STEEL INDUSTRY AND THE AUTOMOBILE INDUS- TRY. THE REAGAN ADMINISTRATION BELIEVES THAT PROTECTIONIST POLI- CIES BY THE REAGAN ADMINISTRATION LEAD TO THE ACHIEVEMENT OF NORMAL PROFITS OF THE STEEL INDUSTRY AND THE AUTOMOBILE INDUSTRY. THE REAGAN ADMINISTRATION BELIEVES THAT PROTECTIONIST POLI- CIES BY THE REAGAN ADMINISTRATION ACHIEVE THE PRESERVATION OF JOBS FOR U.S.. To answer this question OpEd uses: (a) indexing structures from ECONOMIC-PROTECTION-PLANS to their instantiations and access links between these instances and their associated BELIEFS; and (b) retrieval functions that take PLANS as input and retrieve appropriate BELIEFS about the PLANS’ effects. Editorial understanding is a natural next step and logical chal- lenge for research in natural language understanding. Current narra- tive understanding programs are capable of reading stories involving stereotypic situations, goal and planning situations, and complex human interactions (Cullingford, 1981; DeJong, 1982; Dyer, 1983; Lebowitz, 1983; Wilensky, 1983). However, those programs lack the ability to understand editorial text since this requires knowledge of argumentation and reasoning in addition to the sources of knowledge used for comprehension of narratives. In contrast, OpEd builds upon knowledge constructs and processing strategies resulting from previ- ous work in narratives. OpEd’s process model involves combining the following: 1) Knowledge representation constructs used in conceptual analysis of narratives, including events (Schank, 1975; Schank and Carbonell, 1979); goals and plans (Schank and Abelson, 1977; Carbonell, 1981; Wilensky, 1983); reason- ing scripts (Dyer, Cullingford, and Alvarado, in press; Flowers and Dyer, 1984); and MOPS (Schank, 1982). 2) Techniques for modeling argument dialogues; 3) A taxonomy of beliefs and Argument Units; 4) Techniques for integrated in-depth parsing of narratives; 5) Search and retrieval techniques to model the process of question answering. Here, we focus on the first four **. 
The use of these constructs in editorial comprehension will be illustrated by means of examples using excerpts from ED-JOBS and ED-RESTRICTIONS, another segment handled by OpEd and taken from an editorial by Lance Morrow (1983):

ED-RESTRICTIONS
. . . The American machine-tool industry . . . [is] seeking protection from foreign competition. The industry has been . . . hurt by . . . cheaper . . . machine tools from Japan . . . [T]he toolmakers argue that . . . restrictions . . . [on imports] . . . must be imposed so . . . [the] industry can survive . . . It is a . . . wrongheaded argument . . . [R]estrictions on [imports] . . . would mean that . . . [American] manufacturers . . . would have to make do with more expensive . . . American machine tools. Inevitably those American manufacturers would produce more expensive . . . products . . . They would lose sales . . . Then those manufacturers would . . . demand protection against . . . foreign competition.

** OpEd's question answering model is described in (Alvarado et al., 1985a).

II. REASONING COMPREHENSION

Editorial arguments involve complex reasoning chains which justify beliefs about plans and goals. These chains show: (1) why plans should/shouldn't be selected, implemented or terminated; or (2) why goals should/shouldn't be pursued. Thus, knowledge of goals and plans is essential to follow and keep track of reasoning chains. For instance, OpEd realizes the following goal and planning situations in order to comprehend ED-RESTRICTIONS: (a) American machine-tool manufacturers have an active PRESERVE-FINANCES goal since their finances are being threatened by Japanese imports; and (b) to protect their finances, American machine-tool manufacturers are PETITIONing that the American government implement ECONOMIC-PROTECTION-PLANS against the market COMPETITION by the Japanese machine-tool industry.

In OpEd, reasoning scripts (Flowers and Dyer, 1984) are used to organize prespecified reasoning chains involving cause-effect relationships among politico-economic goals, plans, events, and states (Dyer, Cullingford, and Alvarado, in press). OpEd recognizes and instantiates these reasoning scripts when following belief justifications which contain structural gaps, i.e., justifications involving causal chains with implicit cause-effect relationships. Consider how OpEd processes the following fragment of ED-JOBS:

Recent protectionist measures by the Reagan administration have disappointed us . . . Far from saving jobs, the limitations on imports will cost jobs. If we import less, foreign countries will earn fewer dollars. They will have less to spend on American exports. The result will be fewer jobs in export industries.

Q: Why does Milton Friedman believe that the limitations on imports will cost jobs?
A: MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION WILL THWART THE PRESERVATION OF JOBS FOR U.S. BECAUSE MILTON FRIEDMAN BELIEVES THAT AS A CONSEQUENCE OF PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION, U.S. IMPORTS FEWER PRODUCTS; IF U.S.
IMPORTS FEWER PRODUCTS, THEN THERE IS A DECREASE IN PROFITS OF FOREIGN COUNTRIES; IF THERE IS A DECREASE IN PROFITS OF FOREIGN COUNTRIES, THEN FOREIGN COUNTRIES BUY FEWER AMERICAN EXPORTS; IF FOREIGN COUNTRIES BUY FEWER AMERI- CAN EXPORTS, THEN THERE IS A DECREASE IN PROFITS OF EXPORT INDUSTRIES; IF THERE IS A DECREASE IN PROFITS OF EXPORT INDUSTRIES, THEN THERE IS A DECREASE IN JOBS IN EXPORT INDUSTRIES; A DECREASE IN JOBS IN EXPORT INDUSTRIES THWARTS THE PRESERVATION OF JOBS FOR U.S.. In order to understand Friedman’s complex reasoning chain, which justifies his belief that the limitations will cost jobs, OpEd applies the following reasoning script: $R-DROP-FOREIGN-SPENDING-->DROP-JOBS IF COUNTRY Cl spends less on PRODUCT P produced by PRODUCER PI from COUNTRY C2, THEN there is a decrease on the EARNINGS of PRODUCER PI. AND IF there is a decrease on the EARNINGS of PRODUCER PI, THEN there is a decrease in the number of OCCUPATIONS in PRODUCER PI. During instantiation, Cl is bound to “foreign countries,” C2 to “U.S.,” and Pl to “U.S. export industries.” As a result, OpEd infers that a decrease in U.S. exports causes a decrease in jobs in U.S. export industries. Thus, the use of reasoning scripts allows OpEd to infer missing steps in incomplete chains of reasoning in editorial text. III. BUILDING ARGUMENT GRAPHS Flowers et al. (1982) have presented a theory of the reasoning processes used when engaging in adversary arguments, i.e., argu- ments in which the participants do not expect to convince one another or to be convinced. Flowers et al. represent an adversary argument in terms of an argument graph, which contains all propositions used by the argument participants. Propositions are connected by links that indicate how they support or attack one another. The argument graph aids understanding because the role of every new proposition is deter- mined by establishing how the proposition can be integrated into the graph by using attack or support links. In OpEd, argument graphs are used to keep track of all beliefs and belief supports/attacks implicitly or explicitly mentioned in edi- torial arguments. For example, OpEd recognizes and integrates into an argument graph the following attack and support relationships present in ED-RESTRICTIONS: COGNITIVE MODELLING AND EDUCATION / 25 1 Support Relationship between Beliefs: Morrow’s general belief that import restrictions on Japanese machine tools are bad is supported by his specific belief that restrictions will cause a drop in earnings of American manufacturers. Supporting Cause-Effect Chain: Morrows’s specific belief is supported by the cause-effect chain on how a reduction in imports causes a reduction in earnings of American manufactur- ers. Attack Relationship between Beliefs: Morrow’s specific belief attacks the American machine-tool industry’s belief that the limitations will help it recover from losses caused by foreign competition. In general, support relationships are themselves supported by warrants, i.e., more basic beliefs which state that conclusions can be drawn from supporting evidences (Flowers et al., 1982; Toulmin et al., 1979). Since warrants are also beliefs, they can themselves be attacked. For example, the support relationship between Morrow’s general belief that import restrictions are bad and his specific beliefs that import restrictions cause drops in earnings is based on the follow- ing principle: IF a PLAN P thwarts a GOAL G2 as important as the GOAL Gl which intended PLAN P, THEN PLAN P is BAD. 
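As a rough illustration of how such a warrant can be treated as an applicable rule rather than prose, here is a hedged sketch (Python rather than OpEd's T/Lisp; the class names, the numeric "importance" stand-in for "as important as," and the example fillers are all invented). It checks whether a plan thwarts a goal at least as important as the plan's intending goal, which is the condition under which the warrant licenses the support link.

# Illustrative sketch only: a warrant represented as a rule that licenses the
# support link between a specific belief and a general evaluative belief.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    importance: int          # toy stand-in for "as important as"

@dataclass
class Plan:
    name: str
    intended_goal: Goal      # the goal that motivated the plan
    thwarted_goals: list     # goals the believer claims the plan thwarts

def warrant_thwarts_equally_important_goal(plan: Plan) -> bool:
    """IF plan P thwarts a goal G2 at least as important as the goal G1
    which intended P, THEN P is BAD (i.e., the support link is licensed)."""
    g1 = plan.intended_goal
    return any(g2.importance >= g1.importance for g2 in plan.thwarted_goals)

preserve_machine_tool_finances = Goal("PRESERVE-FINANCES(machine-tool industry)", 5)
preserve_manufacturer_finances = Goal("PRESERVE-FINANCES(American manufacturers)", 5)

import_restrictions = Plan(
    "ECONOMIC-PROTECTION-PLAN(import restrictions)",
    intended_goal=preserve_machine_tool_finances,
    thwarted_goals=[preserve_manufacturer_finances],
)

# Morrow's specific belief supports "import restrictions are BAD"
print(warrant_thwarts_equally_important_goal(import_restrictions))  # True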
In the warrant above, BAD is an evaluative place holder (much like the act DO in CD Theory (Schank, 1975)) for negative outcomes, such as goal violations and expectation failures. (For more details on BAD, see Alvarado et al., 1985a.) In Morrow's editorial, the goal being thwarted is PRESERVE-FINANCES of American manufacturers. Thus, Morrow can argue against the restrictions on Japanese exports of machine tools because he shows that they will cause a violation of a preservation goal. Similarly, the support relationship between Morrow's specific belief and his cause-effect chain on how a reduction in imports produces a reduction in earnings of American manufacturers is based on the warrant:

IF C causes E1 AND E1 causes E2 AND . . . En causes E, THEN C causes E.

Thus, Morrow can support his specific belief if he can coherently expand it into a cause-effect chain. OpEd uses warrants to generate expectations about possible belief justifications. For example, after reading the sentence "[The belief of the American machine-tool industry] is wrongheaded," OpEd not only recognizes that Morrow is against import restrictions, but also expects to hear one of the following justifications:
* Restrictions do not achieve the goal that intended them, namely PRESERVE-FINANCES;
* Restrictions thwart their intending goal;
* Restrictions thwart other goals more important than or equivalent to their intending goal.

In ED-RESTRICTIONS, the third expectation is fulfilled and OpEd integrates this justification into the argument graph. OpEd retrieves this justification when answering the following question:

Q: Why does Lance Morrow believe that restrictions on imports is bad?
A: LANCE MORROW BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT IS BAD BECAUSE LANCE MORROW BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT MOTIVATES THE PRESERVATION OF NORMAL PROFITS OF AMERICAN INDUSTRIES. LANCE MORROW BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT IS BAD BECAUSE LANCE MORROW BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT MOTIVATES THE PRESERVATION OF NORMAL PROFITS OF AMERICAN INDUSTRIES; AND THE PRESERVATION OF NORMAL PROFITS OF AMERICAN INDUSTRIES INTENDS PERSUASION PLAN BY AMERICAN INDUSTRIES ABOUT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT.

IV. BELIEFS AND ARGUMENT UNITS

Beliefs can be directly recognized if they are explicitly mentioned using phrases such as "X believes SITUATION." For example, the sentence "The current administration believes that unilateral disarmament is bad for the U.S." explicitly indicates the current administration's belief with respect to unilateral disarmament. However, editorial writers seldom state their beliefs explicitly. As a result, their beliefs must be inferred from other explicit standpoints, from affective reactions, and from various argument units.

A. Recognizing Beliefs from Standpoints and Affective Reactions

Beliefs can be inferred from explicitly stated support and opposition standpoints. For instance, in the following excerpt from (Friedman, 1982), "Those of us who have opposed export quotas on grain, ..., have defended [the] administration opposition to the pipeline deal," we infer that Friedman believes that the export quotas are bad and that both Friedman and the administration believe that the pipeline deal is a bad idea. These inferences rely on the application of the following rules:
* IF X opposes SITUATION S, THEN infer that X believes that S is BAD.
* IF X supports Ys attack of SITUATION S, THEN infer that X believes that S is BAD. where SITUATION S corresponds to a goal/planning situation. These inference rules are part of a larger set of belief inference rules described in (Alvarado, et al., 1985b). Beliefs can also be signaled by explicit emotional reactions (Dyer, 1983) often stated in arguments. The belief inference rules organized by affective reactions are as follows: * IF a SITUATIONS produces a negative affective reaction for X (due to X experiencing a goal or expectation failure), THEN infer that X believes that S is BAD. * IF a SITUATIONS produces a positive aflective reaction for X (due to X experiencing a goal or expectation achievement), THEN infer that X believes that S is GOOD. where, as in the case of BAD, GOOD is an evaluative place holder for positive outcomes. For example, in the first sentence of ED-JOBS, Friedman’s disappointment indicates to OpEd his belief that the Reagan administration’s protectionist policies are BAD, i.e., they cause (or will cause) goal violations or expectations failures. These violations are confirmed later when OpEd reads that the limitations (1) will not help the auto and steel industries and (2) will cost jobs. OpEd retrieves the reason for Friedman’s disappointment when answering the following question: Q: Why have the limitations on imports disappointed Milton Fried- man? A: MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION WILL THWART THE PRESERVATION OF JOBS FOR U.S. MILTON FRIEDMAN BELIEVES THAT PROTECIIONIST POLICIES BY TlIE REAGAN ADMINISTRATION DO NOT LEAD TO THE ACHIEVEMENT OF NOR- MAL PROFITS OF THE STEEL INDUSTRY AND THE AUTOMOBILE INDUS- TRY. B. Argument Unit Taxonomy Argument units (Alvarado et al., 1985a, 1985b) are abstract argument structures which package patterns of belief support/attack relationships and chains of reasoning. When combined with domain- specific knowledge, these abstract argument structures can be used to argue about issues involving plans, goals, and beliefs in the particular domain. Thus, argument comprehension is viewed in OpEd funda- mentally as the process of recognizing, instantiating, and applying argument units. The abstract relationships embodied by AUs fall within one of following categories: 1) Support/attack relationships on why plans should or shouldn’t be selected, implemented or terminated; 2) Support/attack relationships on why goals should or shouldn’t be pursued; or 3) Support/attack relationships on why beliefs do or don’t hold within ideological contexts. Here, we focus on the first category. In particular, we describe four AUs used in ED-JOBS and ED-RESTRICTIONS, namely: AU- ACTUAL-CAUSE, AU-OPPOSITE-EFFEO, AU-EQUIVALENCE, and AU-SPIRAL-EFFECT. 252 / SCIENCE 1. AU-ACTUAL-CAUSE AU-ACTUAL-CAUSE embodies the following reasoning chain: Although OPPONENT believes that his PLAN P achieves GOAL G, SELF does not believe that P achieves G because SELF believes that: (1) it is SITUATIONS which thwarts G, and (2) P does not afJkct S. Therefore, SELF believes that executing P is BAD planning. This argument unit is depicted in figure 1. Friedman uses AU- ACTUAL-CAUSE in ED-JOBS to argue that restrictions on imports do not help the American automobile and steel industries because their economic problems are caused by high wage rates. Here, P refers to ECONOMIC-PROTECTION-PLANS, G to PRESERVE- FINANCES of the auto and steel industries, and S to EARNINGS of workers in these industries. 
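To make the notion of instantiating an argument unit concrete, the following is a minimal, hypothetical sketch (Python rather than OpEd's T/Lisp; the dictionary layout and the ?P/?G/?S placeholders are inventions for illustration). It treats AU-ACTUAL-CAUSE as a template over the roles P, G, and S and fills those roles with the ED-JOBS bindings just described.

# Hypothetical sketch: an argument unit as a role-and-belief template,
# instantiated by substituting domain-specific fillers for its role variables.

AU_ACTUAL_CAUSE = {
    "roles": ["?P", "?G", "?S"],   # plan, goal, actual thwarting situation
    "opponent_belief": "?P achieves ?G",
    "self_beliefs": ["?S thwarts ?G", "?P does not affect ?S", "?P is BAD"],
}

def instantiate_au(au, bindings):
    """Substitute fillers for the AU's role variables in each belief template."""
    def fill(statement):
        for role, filler in bindings.items():
            statement = statement.replace(role, filler)
        return statement
    return {
        "opponent_belief": fill(au["opponent_belief"]),
        "self_beliefs": [fill(b) for b in au["self_beliefs"]],
    }

ed_jobs = instantiate_au(AU_ACTUAL_CAUSE, {
    "?P": "ECONOMIC-PROTECTION-PLANS",
    "?G": "PRESERVE-FINANCES of the auto and steel industries",
    "?S": "EARNINGS of auto and steel workers higher than the norm",
})
for belief in ed_jobs["self_beliefs"]:
    print("SELF believes:", belief)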
In this case, recognition of AU- ACTUAL-CAUSE is top-down since: a) OpEd has inferred from Friedman’s disappointment his belief that ECONOMIC-PROTECTION-PLANS are BAD. b) OpEd knows that a plan is BAD if it does not achieve its intending goal. This expectation is confirmed when OpEd reads that ECONOMIC-PROTECTION-PLANS “do not promote the long-run health of the [automobile and steel] industries.” At this point OpEd expects to hear why the ECONOMIC-PROTECTION-PLANS do not help these industries. cl OpEd’s expectation is fulfilled as it reads that the economic problem of these industries is caused by high wage rates which, as OpEd knows, are not affected by import restrictions. This instantiation of AU-ACTUAL-CAUSE is retrieved when OpEd answers the following question: Q: What does Milton Friedman believe? A: MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION ARE BAD BECAUSE MILTON FRIEDMkI BELIEVES THAT PROTECI-IONIST POLICIES BY THE REAGAN ADMINISTRA- TION DO NOT LEAD TO THE ACHIEVEMENT OF NORMAL PROFITS OF THE STEELINDUSTRY ANDTHEAUTOMOBILEINDUSTRY.MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRA- TION DO NOT LEAD TO THE ACHIEVEMENT OF NORMAL PROFITS OF THE STEEL INDUSTRY AND THE AUTOMOBILE INDUSTRY BECAUSE MILTON FRIEDMAN BELIEVES THAT NORMAL SALARY IN THE STEEL INDUSTRY AND THE AUTOMOBILE INDUSTRY HIGHER THAN THE NORM THWARTS THE ACHIEVEMENT OF NORMAL PROFITS OF THE STEEL INDUSTRY AND THE AUTOMOBILE INDUSTRY. MILTON FRIEDMAN BELIEVES THAT THE REAGAN ADMINISTRATION IS WRONG BECkUSE THE REAGAN ADMINIS- TRATION BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION LEAD TO THE.ACHIEVEMENT OF NORMAL PROFITS OF THE STEEL INDUSTRY AND THE AUTOMOBILE INDUSTRY. 2. AU-OPPOSITE-EFFECT AU-OPPOSITE-EFFECT embodies the following reasoning chain: Although OPPONENT believes that his PLAN P achieves GOAL G, SELF does not believe that P achieves G because SELF believes that P thwarts G. Therefore, SELF believes that P is BAD. This argument unit is shown in figure 2. In ED-JOBS, Friedman uses AUaPPOSITE-EFFECT to argue that the limitations will cost jobs in the U.S.. In this case, P refers to ECONOMIC-PROTECTION- PLANS by the Reagan administration and G to PRESERVE-JOBS. In ED-JOBS, recognition of AU-OPPOSITE-EFFECT is bottom-up since OpEd infers it from the OPPOSITE RELATION between expected results of import restrictions, namely, saving jobs and cost- ing jobs. Notice that AU-OPPOSITE-EFFECT allows OpEd to infer that: (a) the Reagan administration believes that import restrictions will save jobs; and (b) this belief is attacked by Friedman. This instantiation of AU-OPPOSITE-EFFECT is also retrieved when OpEd answers the question: Q: What does Milton Friedman believe? A: MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRATION ARE BAD BECAUSE MILTON FRIEDMAN BELIEVES THAT PROTECTIONIST POLICIES BY THE REAGAN ADMINISTRA- TION WILL THWART THE PRESERVATION OF JOBS FOR U.S.. MILTON AU-ACTUAL-CAUSE +---------------------------------------------------------------------------------+ IBELIEF Believer SELF WARRANT1 . I I Content (BAD PLAN-P) +--------------------------------------+ IIF the execution of a PLAN-P does not 1 1 support<------------support-------- +achieve a GOAL-G which PLAN-P intended1 I 1 I I I to achieve, THEN PLAN-P is BAD. 
I +-----------------------------me-------+ IBELIEF2<-------------attack -------------->BELIEF-j I Believer SELF Believer OPPONENT I Content (PLAN-P -not-achieve-> GOAL-G) Content (PLAN-P -achieve-> GOAL-G) 6 I I +---------------------- --------------------+ 1 IIF the execution of a PLAN-P does not I I I supTort+- ----------support-------- +affect SITUATION-S which thwarts a GOAL-G I I lwhich PLAN-P intended to achieve, I I I ITHEN PLAN-P does not achieve GOAL-G I I +--------------------- ---------------------+ 1 IBELIEF Believer SELF WARRANT2 I I Content (SITUATION-S -thwart-> GOAL-G) I f---------------------------------------------------------------------------------+ Figure 1. AU-ACTUAL-CAUSE AU-OPPOSITE-EFFECT +---------------------------------------------------------------------------------+ IBELIEF Believer SELF WARRANT1 I L I I Content (BAD PLAN-P) +------------------------------------+ 1 IIF the execution of a PLAN-P thwarts1 I 1 support<------------support-------------- +a GOAL-G which PLAN-P intended to I 4 I achieve, THEN PLAN-P is BAD. I I IBELIAF2< +------------------------------------+ 1 -------------attack------------------>BELIEF3 ( Believer SELF Believer OPPONENT I I Content (PLAN-P -thwart-> GOAL-G) Content (PLAN-P -achieve-> GOAL-G) I I I I I +-------------------opposite-------------------+ I +-----------------me------ --------------------------------------------------------~ Figure 2. AU-OPPOSITE-EFFECT COGNITIVE MODELLING AND EDUCATION / 253 FRIEDMAN BELIEVES THATTHE REAGAN ADMINISTRATION IS WRONG BECAUSETHEREAGANADMINISTRATIONBELIEVES THATPROTECTION- IST POLICIES BY THE REAGAN ADMINISTRATION ACHIEVE THE PRESER- VATIONOFJOBSFORUS.. 3. AU-EQUIVALENCE AU-EQUIVALENCE embodies the following reasoning chain: Although OPPONENT believes that his PLAN P achieves GOAL GI, SELF believes that P thwarts GOAL G2 which is as impor- tant as GI. Therefore, SELF believes that P is BAD. AU-EQUIVALENCE is shown in figure 3. Notice that AU- OPPOSITE-EFFECT is a specialization of AU-EQUIVALENCE where GOAL Gl and GOAL G2 correspond to the same GOAL G. However, AU-OPPOSITE-EFFECT is triggered by an opposite rela- tionship rather than by an equivalence one, as in the case of AU- EQUIVALENCE. Morrow uses AU-EQUIVALENCE in ED- RESTRICTIONS to argue that restrictions on imports will cause losses to American manufacturers. In ED-RESTRICTIONS, P refers to ECONOMIC-PROTECTION-PLAN by the U.S. government, Gl to PRESERVE-FINANCES of American machine-tool industry, and G2 to PRESERVE-FINANCES of other American manufacturers. In this case, recognition of AU-EQUIVALENCE is top-down since: a> OpEd knows that Morrow is against import restrictions after reading “[The belief of the American machine-tool industry] is wrongheaded.” At this point, however, OpEd does not know why Morrow is against protectionist poli- cies. Yet, OpEd expects to hear that these policies (1) will have negative consequences (e.g., goal or expectation vio- lations) or (2) will not achieve their intending goal. b) While following Morrow’s cause-effect chain, OpEd real- izes that costing sales to other American manufacturers will thwart a PRESERVE-FINANCES goal for them. Thus, OpEd realizes that this goal is equivalent to the goal that intended the ECONOMIC-PROTECTION-PLAN in the first place (i.e., PRESERVE-FINANCES of American machine-tool industry). This instantiation of AU-EQUIVALENCE is retrieved when OpEd answers the following question: Q: What does the Lance Morrow believe? 
A: LANCEMORROWBELIEVESTHATPROTECTIONISTPOLICYBYTHEAMERI- CAN GOVERNMENT IS BAD BECAUSE LANCE MORROW BELIEVES THAT PROTECTIONISTPOLICY BYTHE AMERICANGOVERNMENTMOTIVATES THE PRESERVATION OF NORMAL PROFITS OF AMERICAN INDUSTRIES. LANCEMORROWBELIEVESTHATTHEAMERICANMACHINETOOLINDUS- TRY IS WRONG BECAUSE THE AMERICAN MACHINE TOOL INDUSTRY BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERN- MENTACHIEVESTHEPRESERVATIONOFNORMALPROFITSOFTHEAMER- ICANMACHINETOOLINDUSTRY. 4. AU-SPIRAL-EFFECT AU-SPIRAL-EFFECT embodies the following reasoning chain: Although OPPONENT believes that his PLAN P achieves GOAL GI, SELF believes that P thwarts a GOAL G2 which is as impor- tant as Gl. In addition, SELF believes that G2 will intend P’, another instance of P. Therefore, SELF believes that P is BAD. AU-SPIRAL-EFFECT is depicted in figure 4. Morrow uses AU- SPIRAL-EFFECT in ED-RESTRICTIONS to argue that restrictions on Japanese machine-tool imports will generate more petitions for import restrictions. In ED-RESTRICTIONS, P refers to ECONOMIC-PROTECTION-PLAN by the American government, Gl to PRESERVE-FINANCES of American machine-tool manufac- turers, G2 to PRESERVE-FINANCES of other American manufactur- ers, and P’ to the PERSUASION-PLAN of these manufacturers to get ECONOMIC-PROTECTION-PLANS implemented. In ED- RESTRICTIONS, recognition of AU-SPIRAL-EFFECT is top-down since (1) AU-EQUIVALENCE is active and (2) AU-SPIRAL- EFFECT can follow other AUs that embody arguments about plans’ consequences. From the instantiation of AU-EQUIVALENCE, OpEd already knows about the expected goal violation resulting from res- tricting Japanese exports of machine tools to the US.. OpEd knows that if this goal violation intends another instance of (or a PETITION for) the ECONOMIC-PROTECTION-PLAN, then AU-SPIRAL- EFFECT is being used. Consequently, the sentence “Then those manufacturers would demand protection against foreign competi- tion,” causes OpEd to activate AU-SPIRAL-EFFECT. This instantia- tion of AU-SPIRAL-EFFECT is also retrieved when OpEd answers the question: AU-EQUIVALENCE +-----------m--m---- --------------------------------------------------------------+ IBELIEF Believer SELF I I WARRANT1 . Content (BAD PLAN-P) I +------------------------------------+ 1 support<------------support-------------------- IIF the execution of a PLAN-P thwarts1 I I I +a GOAL-G2 which is as important as al 1 I I IGOAL-Gl which PLAN-P intended to /BELI;FZ< lachieve, THEN PLAN-P is BAD. I I +------------------------------------+ -------------attack----------------->BELIEF3 I Believer SELF I I Content (PLAN-P -thwart-> GOAL-G2) Believer OPPONENT I I I Content (PLAN-P -achieve-> GOAL-Gl)( I I I +---------------a--- +------------------equivalent----------------------~ I --------------------------------------------------------------+ Figure 3. AU-EQUIVALENCE AU-SPIRAL-EFFECT +---------------------------------------------------------------------------------+ IBELIEF Believer SELF WARRANT1 I 1 Content (BAD PLAN-P) +------------------------------~~----~~~~~~+ , I IIF the execution of a PLAN-P thwarts a IGOAL-G2 which is as important as a GOAL-Gl! i support<------------support --------+which PLAN-P intended to achieve AND I I IGOAL-G2 intends P', another instance of I I I IPLAN-P, THEN PLAN-P is BAD. 
I I I +------------------------------------------+ 1 attack--------------->BELIEF3 I I I Believer OPPONENT I IBELIEF2<---------------1 Content (PLAN-P -achieve-> GOAL-G11 ( I Believer SELF I Content (PLAN-P -thwart-> GOAL-G2 -intends-> PLAN-P') I I I +-----------------equivalent----------------+ +---------------------------------------------------------------------------------+ Figure 4. AU-SPIRAL-EFFECT 254 / SCIENCE Q: What does Lance Morrow believe? A: LANCE MORROW BELIEVES THAT PROTECTIONIST POLICY BY THE AMERI- CAN GOVERNMENT IS BAD BECAUSE LANCE MORROW BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT MOTIVATES THE PRESERVATION OF NORMAL PROFITS OF AMERICAN INDUSTRIES; AND THE PRESERVATION OF NORMAL PROFITS OF AMERICAN MDUS- TRIES INTENDS PERSUASION PLAN BY AMERICAN INDUSTRIES ABOUT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT. LANCE MOR- ROW BELIEVES THAT THE AMERICAN MACHINE TOOL INDUSTRY IS WRONG BECAUSE THE AMERICAN MACHINE TOOL INDUSTRY BELIEVES THAT PROTECTIONIST POLICY BY THE AMERICAN GOVERNMENT ACHIEVES THE PRESERVATION OF NORMAL PROFITS OF THE AMERICAN MACHINE TOOL INDUSTRY. V. THE OpEd SYSTEM OpEd has been designed as an in-depth understander of editorial text. OpEd can read short politico-economic editorial segments and demonstrate its comprehension by answering questions about the edi- torial contents. In OpEd, editorial comprehension and question answering are handled by the same conceptual parser; thus, OpEd is an integrated process model of comprehension, search, and retrieval. Input editorial segments are in English and contain the essential wording, issues, and arguments of the original editorials. During edi- torial comprehension, OpEd builds the argument graph which represents the conceptual contents of the editorial. When answering questions, it is the argument graph which is queried, since OpEd can- not remember the wording used in the editorial segment. Input ques- tions are in English and the answers retrieved are converted from memory representation to English by an English generator. A. OpEd’s Architecture OpEd consists of seven major interrelated components, as shown in figure 5. (I) Semantic Memory: OpEd’s semantic memory embodies: (1) a computational model of politico-economic knowledge; and (2) @Ed’s abstract knowledge of argumentation. Each knowledge struc- ture has attached processes called demons which perform knowledge application and knowledge interaction tasks, such as inferring belief and belief relationships, following reasoning about plans and goals, and inferring argument units. Each class of knowledge structure (i.e., goals, plans, beliefs, AUs, etc.) also has an associated generation pat- tern which is accessed by OpEd’s English generator (7). (2) Lexicon: OpEd has a lexicon where words, phrases, roots, and suffixes are declared in terms of knowledge structures in semantic memory (1). Each lexical item also has attached demons which per- form such functions as role binding, word disambiguation, and resolving pronoun references. (3) Demon-Based Parser: Input editorial text is parsed by an integrated demon-based parser based on the conceptual parser imple- mented in BORIS (Dyer, 1983), an in-depth understander of narra- tives. Each input sentence is read form left to right, on a word-by- word or phrase basis. When a lexical item is recognized, a copy of its associated conceptualization is placed into OpEd’s short-term memory or working memory (4). 
Copies of the lexical item's demons and its conceptualization's demons are placed into a demon agenda that contains all active demons. Then, the parser tests all active demons and executes those whose test conditions are satisfied. After demons are executed, they are removed from the agenda.

(4) Working Memory: When demons are executed, they bind together conceptualizations in working memory and, as a result, build the conceptual representation of the input sentence. Thus, working memory maintains the current context of the sentence being parsed.

(5) Argument Graph: Also resulting from demon execution, the conceptualizations created in working memory (4) get interactively integrated with instantiated knowledge structures indexed by semantic memory's uninstantiated structures (1). These instantiations compose the editorial's argument graph, which both maintains the current context and represents the portion of the editorial read so far. Thus, the argument graph can be viewed as OpEd's episodic memory (Tulving, 1972), as opposed to OpEd's semantic memory (1), which contains what OpEd knows before reading the editorial.

(6) Memory Search and Retrieval Processes: During question answering, the argument graph (5) also maintains the current context from which questions are understood. Input questions are parsed by the same demon-based parser (3) used for editorial comprehension, which, as before, builds the conceptual representations of the questions in the working memory (4). Question-answering demons attached to WH-words are activated whenever such words are encountered at the beginning of input questions. Aside from determining conceptual question categories (Lehnert, 1978), these demons activate appropriate search and retrieval demons which access the argument graph and return conceptual answers.

(7) English Generator: Once an answer is found, it is generated in English by OpEd's recursive-descent English generator. This generator produces English sentences in a left-to-right manner by traversing instantiated knowledge structures and using generation patterns associated with uninstantiated knowledge structures. For example, instantiations of AU-OPPOSITE-EFFECT are generated using the pattern:

<BELIEF1> "because" <BELIEF2> "." <SELF> "believe that" <OPPONENT> "be wrong because" <BELIEF3> "."

where: (1) SELF, OPPONENT, BELIEF1, BELIEF2, and BELIEF3 are components of AU-OPPOSITE-EFFECT, as indicated in figure 2; and (2) the verbs "to believe" and "to be" are conjugated according to the contents of SELF, OPPONENT, and BELIEF3.

Figure 5. Diagram of OpEd's Components, showing the seven components — semantic memory (1), lexicon (2), demon-based parser (3), working memory (4), argument graph (5), memory search and retrieval processes (6), and English generator (7) — and the indexing and control links among them.

B. Current Status

OpEd is written in T (Rees et al., 1984), a lexically-scoped Scheme-based dialect of Lisp running on Apollo Domain workstations. OpEd uses the knowledge representation system provided by GATE (Mueller and Zernik, 1984), an integrated set of graphical Artificial Intelligence development tools.

Currently, OpEd can handle two short editorial segments (i.e., ED-JOBS and ED-RESTRICTIONS) and various conceptual question categories. The first version of OpEd (Alvarado et al., 1985a) contained enough knowledge to handle a fragment of ED-JOBS. Later, the scope of OpEd was extended to read completely ED-JOBS and ED-RESTRICTIONS. This expansion did not require modifying OpEd's process model of reasoning and argument comprehension, but rather: (a) augmenting OpEd's lexicon, politico-economic knowledge, and argument units; and (b) specifying the demons attached to the lexical items and conceptual constructs added. In addition, OpEd's search and retrieval processes did not require any modifications to handle questions about ED-RESTRICTIONS. This follows from the fact that these processes do not depend on "key" lexical items or specific instantiations of conceptual constructs, but rather on general classes of conceptual constructs, such as goals, plans, beliefs, and AUs. Thus, OpEd's process model is not tailored to any specific editorial and can be viewed as a prototype of computer comprehension of editorial text. Our current goal in the OpEd project is to advance our fundamental understanding of the processes and knowledge structures involved in argument text comprehension, rather than to produce a robust editorial comprehension system.
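A hedged sketch of this kind of pattern-driven generation follows (in Python for illustration only; OpEd's actual generator is a recursive-descent program in T, and the conjugation of "to believe" and "to be" described above is omitted here). The pattern is a list of slot names and literal strings, traversed left to right; the example fillers paraphrase ED-JOBS and are not OpEd's literal output.

# Purely illustrative sketch of pattern-driven generation from an
# instantiated argument unit. Names and fillers are invented.

AU_OPPOSITE_EFFECT_PATTERN = [
    "BELIEF1", "because", "BELIEF2", ".",
    "SELF", "believes that", "OPPONENT", "is wrong because", "BELIEF3", ".",
]

def generate(pattern, instance):
    """Walk the pattern left to right, emitting literals and filling slots
    from the instantiated argument unit."""
    words = []
    for element in pattern:
        words.append(instance.get(element, element))
    return " ".join(words).replace(" .", ".")

friedman_au = {
    "SELF": "Milton Friedman",
    "OPPONENT": "the Reagan administration",
    "BELIEF1": "Milton Friedman believes that protectionist policies are bad",
    "BELIEF2": "he believes that they will thwart the preservation of jobs",
    "BELIEF3": "it believes that they achieve the preservation of jobs",
}
print(generate(AU_OPPOSITE_EFFECT_PATTERN, friedman_au))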
VI. FUTURE WORK

We believe that the theory implemented in OpEd constitutes the foundation for an integral theory of argument comprehension and argument generation. Such a theory should ultimately help explain not only how people's opinions are understood, but also:

Reasoning Intentionality: Whether the reasoning is intended to explain or to convince.
Reasoning Errors: Whether the reasoning is sound.
Agreement: The computational meaning of agreement and its relation to ideologies.
Efficacy of Reasoning and Argument Units: The computational meaning of persuasion and the use of argument units in persuasive arguments.
Long-term Memory Organization and Retrieval: How memory is organized and how retrieval is affected after similar editorials have been read and integrated into memory.
Learning Argument Units: How argument units and reasoning chains are learned.
Argument Generation: How argument units are used to generate arguments.

VII. CONCLUSIONS

We have presented a theory of reasoning and argument comprehension implemented in OpEd to understand short editorial segments. Four major points have been emphasized in this paper:
* Understanding arguments requires: (1) recognizing beliefs, belief support/attack relationships, and argument units; (2) following belief justifications; and (3) building argument graphs.
* Beliefs can be inferred from explicit standpoints, emotional reactions, and argument units.
* To follow belief justifications, it is necessary to: (1) trace the evolution of goal/plan situations; and (2) apply reasoning scripts to infer implicit cause-effect relationships.
* Instantiating argument units helps recognize and integrate into the argument graph implicit beliefs and belief support/attack relationships.

We believe that all arguments are composed of coherent configurations of argument units. Thus, argument comprehension is the process of recognizing, instantiating, and applying these units. We have designed OpEd to explore this process model in the domain of editorial text.

REFERENCES

Alvarado, S. J., Dyer, M. G., and Flowers, M. (1985a). Memory Representation and Retrieval for Editorial Comprehension. Proceedings of the Seventh Annual Conference of the Cognitive Science Society. University of California, Irvine, pp. 228-235.
Alvarado, S. J., Dyer, M. G., and Flowers, M. (1985b). Understanding Editorials. (Tech. Rep. UCLA-AI-85-3). Artificial Intelligence Laboratory, Comp. Sci. Dept., University of California, Los Angeles.
Carbonell, J. G. (1981). Subjective Understanding. Ann Arbor: UMI.
Cullingford, R. E. (1981). SAM. In R. C. Schank and C. K. Riesbeck (Eds.), Inside Computer Understanding. Hillsdale, NJ: Erlbaum.
DeJong II, G. F. (1982). An Overview of the FRUMP System. In W. G. Lehnert and M. H. Ringle (Eds.), Strategies for Natural Language Understanding. Hillsdale, NJ: Erlbaum.
Dyer, M. G. (1983). In-Depth Understanding. Cambridge, MA: MIT.
Dyer, M. G., Cullingford, R. E., and Alvarado, S. J. (in press). SCRIPTS: Representing and Applying Stereotypical Knowledge. In S. C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence. New York: Wiley.
Dyer, M. G. and Lehnert, W. G. (1982). Question Answering for Narrative Memory. In J. F. Le Ny and W. Kintsch (Eds.), Language and Comprehension. Amsterdam: North-Holland.
Flowers, M. and Dyer, M. G. (1984). Really Arguing with your Computer in Natural Language. Proceedings of the National Computer Conference. Las Vegas, Nevada, pp. 651-659.
Flowers, M., McGuire, R., and Birnbaum, L. (1982).
Adversary Arguments and the Logic of Personal Attacks. In W. G. Lehnert and M. G. Ringle (Eds.), Strategies for Natural Language Understanding. Hillsdale, NJ: Erlbaum.
Friedman, M. (1982, November 15). Protection That Hurts (Editorial). Newsweek, p. 90.
Lebowitz, M. (1983). Memory-Based Parsing. Artificial Intelligence, 21 (4), 363-404.
Lehnert, W. G. (1978). The Process of Question Answering: A Computer Simulation of Cognition. Hillsdale, NJ: Erlbaum.
Morrow, L. (1983, January 10). The Protectionist Temptation (Editorial). Time, p. 68.
Mueller, E. T. and Zernik, U. (1984). GATE Reference Manual (Tech. Rep. UCLA-AI-84-5). Artificial Intelligence Laboratory, Comp. Sci. Dept., University of California, Los Angeles.
Rees, J. A., Adams, N. I., and Meehan, J. R. (1984). The T Manual. Department of Computer Science, Yale University, New Haven, CT.
Schank, R. C. (Ed.) (1975). Conceptual Information Processing. Amsterdam: North-Holland.
Schank, R. C. (1982). Dynamic Memory. Cambridge: Cambridge University Press.
Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Erlbaum.
Schank, R. C. and Carbonell, J. G. (1979). Re: The Gettysburg Address, Representing Social and Political Acts. In N. Findler (Ed.), Associative Networks. New York: Academic Press.
Toulmin, S., Rieke, R., and Janik, A. (1979). An Introduction to Reasoning. New York: Macmillan.
Tulving, E. (1972). Episodic and Semantic Memory. In E. Tulving and W. Donaldson (Eds.), Organization of Memory. New York: Academic Press.
Wilensky, R. (1983). Planning and Understanding. Reading, MA: Addison-Wesley.
Uniform Parsing and Inferencing for Learning* Charles E. Martin and Christopher I<. Riesbeck Yale University New Haven, CT 06520 Abstract In previous papers we have argued for the complete in- tegration of natural language understanding with the rest of the cognitive system. Given a set of richly indexed memory structures, we have claimed that parsing is a general memory search process guided by predictive patterns of lexical and conceptual items which are a part of those memory struc- tures. In this paper, we demonstrate that our architecture for language understanding is capable of implementing the mem- ory search processes required to make complex inferences not directly associated with parsing. The uniform format of the knowledge representation and search process provide a foun- dation for learning research. 1 Introduction Research at the Yale Economics Learning Project is aimed at mod- elling knowledge reorganization and learning as a reasoner goes from being novice to expert in its domain. [Riesbeck 19831 has argued for expert reasoning as the result of gradual changes to novice reasoning in response to self-acknowledged failures in novice reasoning. The original learning system parsed texts such as “high interest rates limit growth, ’ “low growth raises prices,= and “large budget deficits cause higher interest rates” into separate meaning representations which were then pieced together to derive new economic arguments [Riesbeck 19811. We now believe that a much tighter connection must be made between natural language understanding and the rest of the cognitive system in order to make progress towards our goals for the learning project. The language understanding system must be able to take advantage of the knowledge present in memory to the same degree that any other memory process could, and other memory processes must be able to make full and immediate use of linguistic input without waiting for a final interpretation to be formed. This is the reflection of a re-orientation of the learning project in a much more promising direction. The system begins with a richly- indexed episodic memory of various arguments, including informa- tion such as who gave the argument, which other arguments it sup- ports or contradicts, and so on. Linguistic input is used by the system to recognize relevant prior arguments; differences between the input and prior memory structures give rise to failures in the recognition process, which are resolved by recognizing and applying reconciliation strategies. The common threads of this architecture are 1) a uniform rep- resentation of domain knowledge, failure structures, and reconcilia- tion strategies in the regular memory format and 2) a uniform view of memory processes, including language understanding, as search through a knowledge base controlled by the prior recognition of struc- tures in that knowledge base. *This report describes work done in the Department of Computer Science at Yale University. It was supported in part by the Air Force Office of Scientific Research under contract F49620-82-K-0010. In previous papers ([Riesbeck and Martin 1985]), we have argued for an approach to parsing which conforms to this view. The parsing algorithm is a process of lexically-guided memory search in which predictive patterns of words and concepts guide a general memory search process to recognize relevant memory structures. We call this direct memory access parsing (DMAP). 
Our memory structures are frame-like objects called Memory Organization Packets (MOPS), organized by the standard part-whole packaging and class-subclass abstraction hierarchies (Schank 19821. This approach is the reverse of that taken by past conceptual an- alyzers ([Riesbeck 197.51 [L e owitz 19801 [Dyer 19821 [Lytinen 19841) b that construct meaning representations from texts which may then be be connected to memory in a separate step; this is the “Build and Store” model of conceptual analysis. The proposed alternative is to find relevant structures in memory and record differences between the input and what exists already. We call this the “Recognize and Modify” model. We are now turning our attention back to the original goals of the learning project. When failures occur in the understanding process, we wish to trigger inference processes to record those failures and to implement strategies for resolving the anomalies. In this paper, we describe how our previous approach to integrating parsing with memory extends naturally to handle these inference mechanisms: failure episodes and reconciliation strategies are represented in the regular memory format of domain knowledge, and we are excited that a single, uniform memory search #recess appears capable of handling both parsing and memory-based inference in such a knowledge base. This paper examines the architecture we have evolved for our system. Section 2 reviews our original work on parsing, detailing the memory structures and the search process used for recognition. Section 3 explains how we have augmented this with failure and strategy structures to build new memory structures where neces- sary. Section 4 extends the failure and strategy concepts to handle inference which is only indirectly related to the parsing task. 2 Integrating Parsing with Memory We integrate parsing knowledge into memory by attaching linguis- tic templates to memory structures in a manner reminiscent of the Teachable Language Comprehender [Quillian 19691. These tem- plates, called concept sequences, are patterns of words and con- cepts. For example, attached to the memory structure MILTON- FRIEDMAN is the concept sequence {Milton Friedman}, repre- senting the linguistic phrase “Milton Friedman.” Attached to MTRANS-EVENT, our primitive marker for communications events (Schank and Ab 1 e son 19771, is the concept sequence {actor says mobject}, representing 1. the identification of another memory structure which is indexed from MTRANS-EVENT through the packaging hierarchy via the actor role, COGNITIVE MODELLING AND EDUCATION / 257 From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. 2. the linguistic item “says,” and 3. the identification of another memory structure which is indexed from MTRANS-EVENT through the packaging hierarchy via the mobject role (representing the content of the communicated information). Any memory structure can have one or more concept sequences; in addition, the abstraction hierarchy provides an inheritence mecha- nism through which any structure implicitly acquires the sequences attached at a more general level of abstraction. The dictionary in DMAP, which we call the concept lexicon, is simply a set of pointers from words and concepts to the concept sequences they appear in. The concept sequences encode the lex- ical and syntactic knowledge of the parser. 
This is a generaliza- tion of the “phrasal lexicon” approach to language understanding [Becker 19751 that includes not only actual phrases, but more con- ceptual combinations as well. The primary task of concept sequences is to quickly connect standardized patterns of language use to gen- eral memory structures of the system. To this end, the DMAP model depends on the use of parallel activation and intersection to re- solve the basic combinatorial explosion, as is presumed in a number of other recent models [Small et al. 19821 [Hahn and Reimer 19831 [Granger et al. 19841 [Waltz and Pollack 1 .984] [Charniak unpb]. In the process of recognizing conceptual elements of concept se- quences, the parser will identify more specific structures than the general concept sequence refers to. The parser uses these specific structures to recognize episodes in memory which are 1) consistent with the general structures predicted by the concept sequence, and 2) capable of adequately packaging the other structures recognized by the input. Because the parsing process attempts to recognize the most specific memory structures available, exactly which memory structures the parser settles on depends on which ones are already in memory. Figure 1 depicts a simplified portion of the memory structures used to recognize the communicative act of the following The New York Times, August 4, 1983. Milton Friedman: Interest rates will rise as an inevitable consequence of the monetary explosion we’ve experienced over the past year. If this claim of Friedman’s has been seen before, then seeing it again, as originally stated, or paraphrased, will guide the parser to the previously built memory structure MF:MTRANS-EVENT. hPuK SaYa Ihal mnhla) YTRAMSEVENT u .i, ~s.q".nc. (JO JO YONLY.SUPPLY-UP INTEREST-RATES-UP lmon*Uv l ⌧PloW) (intwd rat** rise) Figure 1: Simplified memory structures. 2.1 Marker passing The parser uses a marker-passing architecture to identify relevant memory structures from the input text and the expectations in mem- ory. Two kinds of markers are used in the system: activation mark- ers, which capture information about the input text and the cur- rent selection of relevant memory structures, and prediction markers, which indicate which memory structures may be expected to become relevant. 258 / SCIENCE DMAP is definitely not disambiguation with marker passing [Waltz and Pollack 19841. Rather than using marker passing as an appendage to a standard parser for finding the (shortest, strongest, whatever) path between two nodes in memory, the structures found through marker passing are the most relevant ones in memory and comprise themselves the result of the parse. The connectionist work is also currently focussing on the disam- biguation problem [Cottrell 19841, though here it is intended that eventually all aspects of parsing will be included in the same spread- ing activation framework. The connectionist project is much more difficult, since they are deliberately limiting the allowable set of mechanisms. They do not have access to the kinds of structured markers we are quite willing to invoke. 2.2 Concept activation Memory structures are activated by placing activation them. Activation markers are created in two situations. markers on l System input: when an input word is read by the parser, an ac- tivation marker is created and placed on the associated lexical item in memory. 
l Concept sequence recognition: when every element of a concept sequence has been activated, an activation marker is created and placed on the associated memory structure. Activation markers are passed up the class-subclass abstraction hi- erarchy from their associated structures. This is a recursive process; all structures which receive an activation marker continue to pass it on to their own abstractions. When a memory structure receives an activation marker, that structure is said to have been activated; the activation marker contains a pointer to the originally activated structure. For example, an activation marker associated with MONEY- SUPPLY-UP will be passed to ECON-EVENT, which in turn passes the marker to EVENT. All of these structures are activated, while the activation marker keeps a pointer to MONEY-SUPPLY-UP. 2.3 Concept prediction Prediction markers represent concept sequences which are in the pro- cess of being recognized. Whenever a memory structure is activated, prediction markers are created for all the concept sequences indexed by that memory structure through the concept lexicon. A predic- tion marker captures the intuition of the “focus of attention” of the parser. A shift of attention corresponds to passing the prediction marker to a new location in memory; this takes place in response to concept activation. When a memory structure is activated which intersects the current focus of a prediction via some packaging rela- tionship, the prediction is altered by two concurrent processes. . . 3 Concept refinement. Since the activation will generally supply more specific information about the current input than the prediction takes into account, the prediction marker can be passed down the abstraction hierarchy to a more specialized memory structure which better packages the activation. Sequence advancement. Intersection of an activation marker will complete the current element of the prediction marker’s concept sequence. If the sequence has not yet been completed, the prediction marker can be passed across the abstraction hierarchy to focus on the next element of the sequence. Simple Memory Modification Of course, it is not enough to recognize structures in memory; the parser must also be able to record “where it has been.” For example, if MS:IR:CAUSAL were not contained in the memory of Figure 1, then the parser would identify the more general ECON:CAUSAL. In this case, the parser can’t find a structure which is specific to the acti- vated memory structures it knows about, yet it has identified some general structures which serve to classify the input. We call this situation a specialization failure, and there exist structures in mem- ory which serve to index such situations. In turn, these structures index reconciliation strategy memory structures which can reconcile the anomalies. In this section, we describe how the most general of failure and reconciliation structures are recognized and activated. It is at this most general level that new memory structures are built; Section 4 describes how more specific failures cause the recognition process to search for reconciliations which may result in inference. Ultimately, all such search processes “bottom out” at the most general level of specialization failure, causing new structures to be created. 
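The following is a toy sketch of the activation/prediction machinery described above, under stated assumptions: it is written in Python rather than the T implementation, the abstraction hierarchy and concept sequence are invented miniatures, and refinement of the recognized structure toward a more specific one (e.g., from ECON:CAUSAL to MS:IR:CAUSAL) is only noted in a comment. It shows activation markers climbing the abstraction hierarchy and a single prediction marker advancing through a concept sequence.

# Toy sketch of DMAP-style marker passing; all structures are invented.

ABSTRACTIONS = {
    "MONEY-SUPPLY-UP": "ECON-EVENT",
    "INTEREST-RATES-UP": "ECON-EVENT",
    "ECON-EVENT": "EVENT",
}

# a concept sequence attached to a causal structure: {econ-event "causes" econ-event}
SEQUENCE = ["ECON-EVENT", "causes", "ECON-EVENT"]

def activated(source):
    """All structures reached by passing an activation marker up from source."""
    reached, node = set(), source
    while node is not None:
        reached.add(node)
        node = ABSTRACTIONS.get(node)
    return reached

def recognize(tokens, lexicon):
    position = 0                        # focus of the single prediction marker
    for token in tokens:
        concept = lexicon.get(token, token)
        if position < len(SEQUENCE) and SEQUENCE[position] in activated(concept):
            position += 1               # sequence advancement
            # a fuller model would also refine the predicted structure toward
            # the specific activations seen (concept refinement)
    return position == len(SEQUENCE)    # every element activated -> recognized

lexicon = {"monetary-explosion": "MONEY-SUPPLY-UP",
           "interest-rates-rise": "INTEREST-RATES-UP"}
print(recognize(["monetary-explosion", "causes", "interest-rates-rise"], lexicon))  # True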
3.1 Recognizing failures When the normal recognition process identifies a structure which is not suitably specialized, that process spawns a recognition process which is predictive of a specialization failure structure; although it is handled identically to the normal recognition process, it is initiated internally by the parser and not by a concept sequence. This is the only exception to the general recognition algorithm. Specialization failure structures, like other memory struc- tures, are organized by part-whole and class-subclass relation- ships. The most general specialization failure structure is MISSING- SPECIALIZATION. In the above example, if MS:IR:CAUSAL were missing from memory then an instance of MISSING-SPECIALIZATION would be recognized which packaged ECON:CAUSAL and the MONEY- SUPPLY-UP and INTEREST-RATES-UP activations. 3.2 Recognizing strategies Reconciliation strategies are similar in spirit to both the Excep- tion MOPS proposed in [Riesbeck 19811 and the explanation pat- terns (XPs) of [Schank unpb]. Reconciliations are also memory structures; they package a failure structure and other memory struc- tures that “explain away” the failure. By “explain away” we mean that if the memory had contained the explanatory structures in the first pla,-e, the recognition process would not have arrived at a fail- ure str:eture. A reconciliation is recognized by the system through the normal process of concept sequence completion; the most general reconciliation structure is ROTE-MEMORY. ROTE-MEMORY simply adds new memory structures at the appropriate point to resolve a MISSING-SPECIALIZATION. Since MISSING-SPECIALIZATION is packaged by ROTE-MEMORY via the failure relationship, recognition of the failure structure leads to recog- nition of the strategy, and ROTE-MEMORY builds a new memory structure. In the above example, ROTE-MEMORY would create a new memory structure which packaged INTEREST-RATES-UP and MONEY-SUPPLY-UP underneath ECON:CAUSAL intheabstractionhi- erarchy. 3.3 Invoking ROTE-MEMORY ROTE-MEMORY is invoked only in the simple situation where you know things of a certain type can occur, and one of them does. The input matches completely a general pattern and there is no more specific version of the pattern to compare the input with. ROTE- MEMORY creates new specializations of existing structures to package specific items. It is important to note that ROTE-MEMORY will also be invoked to create specific sub-structures for an identified memory structure. For example, if we have identified a generalized “restaurant” MOP [Schank 19821 f rom the input, ROTE-MEMORY fills out the unspeci- fied scenes according to the specific informat\on available. The dis- tinction between these two methods of invocation is only one of in- terpretation; in the implementation, an attempt is made to recognize sub-structures via the normal algorithm, which may or may not end up with the invocation of ROTE-MEMORY to create a new memory structure. 4 Failure-Driven Inferencing Consider again the memory structures depicted in Figure 1. Given an input such as “John Doe blames the large increase in the money supply for the rise in interest rates,” what structures should be recognized? When this is parsed, the parser is unable to special- izefrom ECON:MTRANS-EVENT to MF:MTRANS-EVENT because the more specific structure only partially matches the input-the actor of MF:MTRANS-EVENT does not match the actor of the input. 
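To picture the situation just described, here is a small, hypothetical sketch (Python with an invented data layout, not the actual marker-passing implementation) of a parser noticing that the only available specialization of ECON:MTRANS-EVENT clashes with the input on its actor role, and packaging that clash as a failure description instead of discarding it.

# Hedged sketch: detecting that a candidate specialization clashes with the
# input on one role and reporting the clash as a failure description.

SPECIALIZATIONS = {
    "ECON:MTRANS-EVENT": [
        {"name": "MF:MTRANS-EVENT",
         "roles": {"actor": "MILTON-FRIEDMAN", "mobject": "MS:IR:CAUSAL"}},
    ],
}

def try_to_specialize(general, input_roles):
    # first candidate only, for brevity
    for candidate in SPECIALIZATIONS.get(general, []):
        clashes = {role: (filler, input_roles[role])
                   for role, filler in candidate["roles"].items()
                   if input_roles.get(role) not in (None, filler)}
        if not clashes:
            return candidate["name"], None
        # partial match: report which part of which package clashed
        role, (old_part, new_part) = next(iter(clashes.items()))
        failure = {"type": role.upper() + ":EXCEPTION",
                   "old-package": candidate["name"],
                   "old-part": old_part, "new-part": new_part}
        return general, failure
    return general, {"type": "MISSING-SPECIALIZATION"}

structure, failure = try_to_specialize(
    "ECON:MTRANS-EVENT",
    {"actor": "JOHN-DOE", "mobject": "MS:IR:CAUSAL"})
print(structure, failure["type"])   # ECON:MTRANS-EVENT ACTOR:EXCEPTION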
This state of the parser is similar to that described above, with the exception that some prior knowledge structure has been recognized but deemed over-specialized due to the actor mismatch.

4.1 Failure and reconciliation structures

The additional information in this example serves to locate a more specific failure structure than MISSING-SPECIALIZATION. In this case, the failure structure identified is ACTOR:EXCEPTION. This structure packages: the new package that couldn't be specialized (John Doe's argument); the new part contained in the new package (John Doe); the old package (Milton Friedman's argument); and the old part (Milton Friedman).

The general situation of two people saying the same thing can be explained in many ways; since the parser attempts to recognize the most specific relevant structure in memory, it prefers to try domain-specific before more general strategies. A routine domain-specific explanation for why two economists say the same thing is "they belong to the same economic camp." This strategy for ACTOR:EXCEPTION is CREATE-CAMP; it packages
* the ACTOR:EXCEPTION failure structure,
* the economic camp which the actors belong to, and
* the camp argument which both arguments instantiate.

Figure 2 presents the actual definitions of these structures. Note the constraints placed on the sub-structures of CREATE-CAMP which reflect their mutual dependencies: the camp-mtrans structure is a specialization of ECON:MTRANS-EVENT whose actor is the camp of the strategy and which is in turn a generalization (isa-) of the old-package and new-package of the failure of the strategy.

(def actor:exception (isa: missing-specialization)
  (new-package (econ:mtrans-event))
  (old-package (econ:mtrans-event))
  (new-part (economist))
  (old-part (economist)))

(def create-camp (isa: reconciliation)
  (failure (actor:exception (new-package ?a)
                            (old-package ?b)
                            (new-part ?c)
                            (old-part ?d)))
  (camp (economist (isa- ?c ?d)))
  (camp-mtrans (econ:mtrans-event (actor (camp))
                                  (isa- ?a ?b))))

Figure 2: Failure and strategy memory structures.
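Read procedurally, the CREATE-CAMP definition in Figure 2 amounts to building two new structures and re-indexing the old ones beneath them. The sketch below is a loose Python paraphrase of that step (the memory layout, naming scheme, and function are invented; the real strategy is the declarative frame above, applied by the general recognition machinery).

# Hypothetical paraphrase of applying CREATE-CAMP to an ACTOR:EXCEPTION instance.

def apply_create_camp(memory, failure):
    a, b = failure["new-package"], failure["old-package"]   # ?a, ?b: the two arguments
    c, d = failure["new-part"], failure["old-part"]         # ?c, ?d: the two economists
    camp = f"CAMP({c},{d})"
    camp_mtrans = f"{camp}:MTRANS-EVENT"
    abstractions = memory["abstractions"]
    # the camp sits between the economists and ECONOMIST; the camp argument
    # sits between the two individual arguments and ECON:MTRANS-EVENT
    abstractions[camp] = "ECONOMIST"
    abstractions[camp_mtrans] = "ECON:MTRANS-EVENT"
    abstractions[c] = camp
    abstractions[d] = camp
    abstractions[a] = camp_mtrans
    abstractions[b] = camp_mtrans
    memory["roles"][camp_mtrans] = {"actor": camp}
    return camp, camp_mtrans

memory = {"abstractions": {"JOHN-DOE": "ECONOMIST", "MILTON-FRIEDMAN": "ECONOMIST",
                           "JD:MTRANS-EVENT": "ECON:MTRANS-EVENT",
                           "MF:MTRANS-EVENT": "ECON:MTRANS-EVENT"},
          "roles": {}}
failure = {"new-package": "JD:MTRANS-EVENT", "old-package": "MF:MTRANS-EVENT",
           "new-part": "JOHN-DOE", "old-part": "MILTON-FRIEDMAN"}
print(apply_create_camp(memory, failure))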
Activation of JD:MTRANS-EVENT provides the extra information needed for the failure recognition process to identify ACTOR:EXCEPTION. Since this general failure structure does not directly package the active JD:MTRANS-EVENT and MF:MTRANS-EVENT structures, another recognition failure process is spawned. This identifies MISSING-SPECIALIZATION, and ROTE-MEMORY builds ACTOR:EXCEPTION-1. Note that the normal recognition process works on these failure structures in exactly the way that it works on "domain" memory structures.

The activation of ACTOR:EXCEPTION-1 causes the recognition process to recognize CREATE-CAMP. Once again, a MISSING-SPECIALIZATION is recognized, and CREATE-CAMP-1 is built by ROTE-MEMORY. At this point, the memory appears as depicted in Figure 4. (The packaging links at the general level of Figure 3 have been omitted for clarity.)

Figure 4: Instantiating a reconciliation structure.

4.3 The result of parsing

At the conclusion of this example, the parser has built two memory structures which are not directly related to the input: ECON:CAMP-1 and CAMP-1:MTRANS-EVENT. These were built when the parser recognized two instances of MISSING-SPECIALIZATION while recognizing sub-structures of CREATE-CAMP-1. These new structures serve to better organize memory so that the same text will not create a failure if seen again; prior memory structures such as MF:MTRANS-EVENT have been automatically re-indexed in the correct relationships with the new structures.

A topic of future research is how the system might learn specific concept sequences to identify ECON:CAMP-1; e.g., that the structure refers to monetarists, with CAMP-1:MTRANS-EVENT referring to arguments commonly held by monetarists.

5 The Economic Learning Project

The previous section outlined an example in which the parser's inference revolves around its knowledge of argumentation and argument advocacy in the economics domain. The goal of the learning project is to model the reorganization and learning of knowledge as a reasoner progresses from novice to expert understanding of its domain. To this end, the system needs to have declarative representations of inference rules used in expert reasoning.

A common form of inference required to understand economic arguments is the construction of causal chains from individual causal structures. Consider the following expert text.

Lester C. Thurow, Newsweek, September 21, 1983: With the resulting structure of taxes and expenditures, the President is not going to be balancing the Federal budget in 1984 or any other year. With high growth choked off by high interest rates, budget deficits are going to be bigger, not smaller. The result: more demands for credit and higher interest rates.

This is a rather complex argument, involving an implicit feedback loop and causal chain through interest rates, investment, business growth, tax revenues, and the deficit. Consider the phrase "with high growth choked off by high interest rates." The system recognizes this as an instance of ECON:CAUSAL, but does not recognize this particular example. The inference rule required is familiar:

IF x0 causes x1, x1 causes x2, ..., and xn-1 causes xn, THEN x0 causes xn.

This unlimited chaining has been broken down into a two-step chaining structure in the implementation. The failure, strategy, and auxiliary structures are shown in Figure 5.
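Before turning to the memory-structure encoding of this rule in Figure 5, the chaining inference itself can be sketched directly. The code below is an illustrative Python rendering under the assumption that individual causal structures are available as antecedent/consequent pairs; the function name and representation are hypothetical, not the system's.

from collections import deque

def causal_chain(links, start, goal):
    """Given individual causal links as (antecedent, consequent) pairs, return
    a chain of events from `start` to `goal` if the chaining rule licenses
    'start causes goal', otherwise None. Simple breadth-first search."""
    graph = {}
    for ante, cnsq in links:
        graph.setdefault(ante, []).append(cnsq)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The Thurow argument's links, written as antecedent/consequent event names:
links = [("high-interest-rates", "low-investment"),   # CAUSAL-1
         ("low-investment", "low-growth")]            # CAUSAL-2
print(causal_chain(links, "high-interest-rates", "low-growth"))
# ['high-interest-rates', 'low-investment', 'low-growth']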
In this example, "high growth choked off by high interest rates" causes the parser to recognize CAUSAL-1, which invokes an ECON-CAUSAL:CONSEQUENT-EXCEPTION failure. The strategy indexed by this failure is USE:CAUSAL-CHAIN:FORWARD, which supports a causal argument by a causal chain. The binding constraints force the argument to be that presented in the text, while the causal chain begins with CAUSAL-1 and searches for a causal argument connecting this to the goal state. In this case, the system will find CAUSAL-2.

(def econ-causal:consequent-exception (isa: missing-specialization)
  (new-package (econ:causal))
  (old-package (econ:causal))
  (new-part (econ:event))
  (old-part (econ:event)))

(def use:causal-chain:forward (isa: use:causal-chain)
  (failure (econ-causal:consequent-exception (new-package ?a)
                                             (old-package ?b)
                                             (new-part ?c)
                                             (old-part ?d)))
  (argument ?a)
  (support (causal-chain (first ?b)
                         (second (ante ?d) (cnsq ?c)))))

(def causal-chain
  (first (econ:causal))
  (second (econ:causal)))

(def causal-1 (isa: econ:causal)
  (ante (high-interest-rates))
  (cnsq (low-investment)))

(def causal-2 (isa: econ:causal)
  (ante (low-investment))
  (cnsq (low-growth)))

Figure 5: Using the USE:CAUSAL-CHAIN heuristic.

At the conclusion of this example, the parser has built a new causal structure which is supported by a causal chain. This support structure records the use of the USE:CAUSAL-CHAIN:FORWARD strategy, and the constructed chain can now be recognized in subsequent parsing without repeating the original memory search.
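One plausible reading of this re-use is a simple indexing step: record the chain as the support of the newly built causal structure and add that structure to memory so the next parse finds it directly. The sketch below illustrates that idea with hypothetical names (memory, find_support, use_causal_chain_forward); it is an approximation of the two-step chain of Figure 5, not the actual implementation.

# Hypothetical sketch: once a chain from CAUSAL-1 to the goal is found, build
# a new causal structure supported by that chain and index it so later parses
# recognize it without repeating the memory search.

memory = {"causal-structures": []}   # stand-in for the abstraction hierarchy

def find_support(chain_links, first, goal):
    """Look for a stored causal structure whose antecedent is the consequent
    of `first` and whose consequent is `goal` (the two-step chain of Sec. 5)."""
    for s in chain_links:
        if s["ante"] == first["cnsq"] and s["cnsq"] == goal:
            return s
    return None

def use_causal_chain_forward(first, goal):
    second = find_support(memory["causal-structures"], first, goal)
    if second is None:
        return None
    new_causal = {"type": "ECON:CAUSAL", "ante": first["ante"], "cnsq": goal,
                  "support": {"type": "CAUSAL-CHAIN",
                              "first": first, "second": second}}
    memory["causal-structures"].append(new_causal)   # re-index for next time
    return new_causal

# Memory initially holds CAUSAL-2; the parse supplies CAUSAL-1 and the goal.
causal_2 = {"type": "ECON:CAUSAL", "ante": "low-investment", "cnsq": "low-growth"}
causal_1 = {"type": "ECON:CAUSAL", "ante": "high-interest-rates",
            "cnsq": "low-investment"}
memory["causal-structures"].append(causal_2)
built = use_causal_chain_forward(causal_1, "low-growth")
print(built["support"]["second"] is causal_2)   # True: supported by CAUSAL-2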
The identification of specific repair strategies and indexing them under the specific circumstances of their recognition is reminiscent of the work on learning to anticipate and avoid planning failures in [Hammond 1986]. Further research in the Economic Learning Project will require representing complex domain objects, including goals, plans, events, and argument structures ([Flowers et al. 1982], [Alvarado 1985]), and the inference rules used to understand and connect objects in memory. The inference rules of our system are represented declaratively and are processed identically to other memory objects; we think this is an essential step in building a learning system.

6 Conclusions

At this point we have demonstrated that our representation and process model is capable of handling 1) phrase-oriented parsing, and 2) automatic instantiation of inferential structures. What is particularly pleasing about this architecture is that these explanatory inference mechanisms are triggered when required by the current state of memory, and not through the artificial intercession of specific control procedures.

For example, the CREATE-CAMP reconciliation may or may not be applicable to a given ACTOR:EXCEPTION failure depending upon what the system already knows, i.e., what other memory structures it has that package the two arguments. Suppose that the concept of a monetarist is already represented in memory and the parser reads "Lester Thurow blames the rise in interest rates on the increased money supply." If we know enough about Lester Thurow to know that he has often made arguments against the monetarist position, then the failure is more specific than ACTOR:EXCEPTION and CREATE-CAMP will not be the appropriate resolution. Instead, we want to locate reconciliations which capture the explanation that "Thurow is leaning towards monetarism," "current economic conditions make the monetarist position generally acceptable," and (at the general level of MTRANS-EVENT specialization failures) "Thurow is lying," among others. The point is that more specific information available in memory guides the search process to appropriate failure and reconciliation structures.

Acknowledgements

The work described here is based on the joint efforts of the Direct Memory Access Parsing project, which consists of Charles E. Martin, Monique Barbancon, and Michael Factor.

References

Alvarado, S.J., Dyer, M.G., and Flowers, M. (1985). Memory Representation and Retrieval for Editorial Comprehension. In Proceedings of the Seventh Annual Conference of the Cognitive Science Society. Irvine, CA.

Becker, J.D. (1975). The phrasal lexicon. In Theoretical Issues in Natural Language Processing. Cambridge, MA.

Charniak, E. (Unpublished). A Single-Semantic-Process Theory of Parsing.

Cottrell, G.W. (1984). A model of lexical access of ambiguous words. In Proceedings of the AAAI-84. Austin, Texas.

Dyer, M.G. (1982, May). In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. Technical Report 219, Yale University Department of Computer Science.

Flowers, M., McGuire, R., and Birnbaum, L. (1982). Adversary Arguments and the Logic of Personal Attacks. In W.G. Lehnert and M.G. Ringle (Eds.), Strategies for Natural Language Understanding. Lawrence Erlbaum Associates.

Granger, R.H., Eiselt, K.P., and Holbrook, J.K. (1984). The parallel organization of lexical, syntactic, and pragmatic inference processes. In Proceedings of the First Annual Workshop on Theoretical Issues in Conceptual Information Processing. Atlanta, GA.

Hahn, U. and Reimer, U. (1983, November). Word expert parsing: An approach to text parsing with a distributed lexical grammar. Bericht TOPIC 6/83, Universitat Konstanz, Konstanz, West Germany.

Hammond, K.J. (1986). Case-based Planning: An integrated theory of planning, learning and memory. Ph.D. Thesis, Yale University. Forthcoming.

Lebowitz, M. (1980, October). Generalization and Memory in an Integrated Understanding System. Ph.D. Thesis, Yale University. Research Report #186.

Lytinen, L. (1984, November). The Organization of Knowledge in a Multilingual, Integrated Parser. Ph.D. Thesis, Yale University. Research Report #340.

Quillian, M.R. (1969). The Teachable Language Comprehender: A Simulation Program and Theory of Language. Communications of the ACM, 12(8).

Riesbeck, C.K. and Martin, C.E. (1985). Direct Memory Access Parsing. YALEU/DCS/RR 354, Yale University.

Riesbeck, C.K. (1975). Conceptual Analysis. In Schank, R.C. (Ed.), Conceptual Information Processing. Amsterdam: North Holland/American Elsevier.

Riesbeck, C.K. (1981). Failure-driven Reminding for Incremental Learning. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, B.C.

Riesbeck, C.K. (1983). Some problems for conceptual analyzers. In Sparck Jones, K. and Wilks, Y. (Eds.), Automatic Natural Language Parsing. Chichester: Ellis Horwood Limited.

Schank, R.C. and Abelson, R.P. (1977). Scripts, plans, goals, and understanding. Lawrence Erlbaum Associates.

Schank, R.C. (1982). Dynamic Memory: A Theory of Learning in Computers and People. Cambridge University Press.

Schank, R.C. (Unpublished). Explanation Patterns.

Small, S., Cottrell, G. and Shastri, L. (1982). Toward Connectionist Parsing. In Proceedings of the AAAI-82. Pittsburgh, PA.

Waltz, D.L. and Pollack, J.B. (1984). Phenomenologically plausible parsing. In Proceedings of the AAAI-84. Austin, Texas.