{ "paper_id": "W07-0312", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:37:57.201755Z" }, "title": "WIRE: A Wearable Spoken Language Understanding System for the Military", "authors": [ { "first": "Helen", "middle": [], "last": "Hastie", "suffix": "", "affiliation": { "laboratory": "Lockheed Martin Advanced Technology Laboratories", "institution": "", "location": { "addrLine": "3 Executive Campus Cherry Hill", "postCode": "08002", "region": "NJ" } }, "email": "hhastie@atl.lmco.com" }, { "first": "Patrick", "middle": [], "last": "Craven", "suffix": "", "affiliation": { "laboratory": "Lockheed Martin Advanced Technology Laboratories", "institution": "", "location": { "addrLine": "3 Executive Campus Cherry Hill", "postCode": "08002", "region": "NJ" } }, "email": "pcraven@atl.lmco.com" }, { "first": "Michael", "middle": [], "last": "Orr", "suffix": "", "affiliation": { "laboratory": "Lockheed Martin Advanced Technology Laboratories", "institution": "", "location": { "addrLine": "3 Executive Campus Cherry Hill", "postCode": "08002", "region": "NJ" } }, "email": "morr@atl.lmco.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present the WIRE system for human intelligence reporting and discuss challenges of deploying spoken language understanding systems for the military, particularly for dismounted warfighters. Using the PARADISE evaluation paradigm, we show that performance models derived using standard metrics can account for 68% of the variance of User Satisfaction. We discuss the implication of these results and how the evaluation paradigm may be modified for the military domain.", "pdf_parse": { "paper_id": "W07-0312", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present the WIRE system for human intelligence reporting and discuss challenges of deploying spoken language understanding systems for the military, particularly for dismounted warfighters. 
Using the PARADISE evaluation paradigm, we show that performance models derived using standard metrics can account for 68% of the variance of User Satisfaction. We discuss the implications of these results and how the evaluation paradigm may be modified for the military domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Operation Iraqi Freedom has demonstrated the need for improved communication, intelligence, and information capture by groups of dismounted warfighters (soldiers and Marines) at the company level and below. Current methods of collecting intelligence are cumbersome, inefficient and can endanger the safety of the collector. For example, a dismounted warfighter who is collecting intelligence may stop to take down notes, including his location and the time of report, or alternatively try to retain the information in memory. This information then has to be typed into a report on return to base. The authors have developed a unique, hands-free solution that captures intelligence through spoken language understanding technology, called WIRE or Wearable Intelligent Reporting Environment. Through WIRE, users simply speak what they see; WIRE understands the speech and automatically populates a report. The report format we have adopted is a SALUTE report, which stands for the information fields: Size, Activity, Location, Unit, Time and Equipment. The military user is used to giving information in a structured way; therefore, information entry is structured but the vocabulary is reasonably varied. An example report is \"Size is three insurgents, Activity is transporting weapons.\" These reports are tagged by WIRE with GPS position and time of filing. The report can be sent in real time over an 802.11 or radio link, or downloaded on return to base and viewed on a C2 Interface. 
WIRE will allow for increased amounts of digitized intelligence that can be correlated in space and time to predict adverse events. In addition, pre- and post-patrol briefings will be more efficient, accurate and complete. Additionally, if reports are transmitted in real time, they have the potential to improve situational awareness in the field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper discusses the challenges of taking spoken language understanding technology out of the laboratory and into the hands of dismounted warfighters. We also discuss usability tests and results from an initial test with Army Reservists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "WIRE is a spoken language understanding system that has a plug-and-play architecture (Figure 1 ) that allows for easy technology refresh of the different components. These components pass events to each other via an event bus. The speech is collected by an audio server and passed to the Automatic Speech Recognizer (ASR) server, which is responsible for converting the audio waveform into an N-best list. The Natural Language (NL) understanding component executes a named-entity tagger to tag and retain key text elements within each candidate N-best list element. The sets of tagged entities are then parsed using a bottom-up chart parser. The chart parser validates each named entity tag sequence and generates a syntactic parse tree. A heuristic is then applied to select the best parse tree from the N-best list as the representative spoken text. After a parse tree is selected, a semantic parser is used to prune the parse tree and produce a semantic frame, a data structure that represents the user's spoken text. 
The semantic frame is then passed through a rule-based filter that translates text as necessary for processing, e.g., converting text numbers to digits.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 94, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "The semantic frame is then passed to the Dialogue Manager, which decides what action to take based on the most recent utterance and its context. If the system is to speak a reply, the natural language generation component generates a string of text that is spoken by the Text-To-Speech engine (TTS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "The WIRE spoken language understanding system was fully developed by the authors, with the exception of the ASR, called Dynaspeak\u2122, which was developed by SRI International (Franco et al., 2002), and the TTS engine from Loquendo S.p.A. Grammars for the ASR and NL have to be written for each new domain and report type.", "cite_spans": [ { "start": 172, "end": 193, "text": "(Franco et al., 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "In order for the system to adapt to the user's environment, there are two modes of operation. Interactive mode explicitly confirms what the user says and allows the user to ask the system to read back certain fields or the whole report. Alternatively, in stealth mode, the user simply speaks the report and WIRE files it immediately. In both cases, audio is recorded as a back-up for report accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "The goal of WIRE is to provide a means of reporting using an interface that is conceptually easy to use through natural language. 
This is particularly challenging given the fluid nature of war and the constant emergence of new concepts such as different types of Improvised Explosive Devices (IEDs) or groups of insurgents. Another challenge is that each unit has its own idiosyncrasies, call signs and manner of speaking. Because WIRE is a limited-domain system and cannot incorporate all of this variability, we found training to be a key factor in user and system performance and acceptance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of Deployment to Dismounted Warfighters", "sec_num": "3" }, { "text": "A new challenge that phone-based or desktop systems have yet to face is the need for a mobile spoken language understanding system that can be worn by the user. From a software perspective, WIRE has to have a small footprint. From a hardware perspective, the system has to be lightweight, robust, and rugged, and it must integrate with existing equipment. Wearable computing is constantly evolving, and eventually WIRE will be able to run on a system as small as a button. We have also been working with various companies to create a USB noise-canceling microphone similar to what the military user is accustomed to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of Deployment to Dismounted Warfighters", "sec_num": "3" }, { "text": "Fifteen Army Reservists and three former Marines participated in WIRE usability tests in a laboratory environment. The Reservists predominantly provide drill-instructor support for Army basic training groups. The session began with a brief introduction to the WIRE system. Following that, participants reviewed a series of self-paced training slides. They then completed two sets of four scenarios, with one set completed in stealth mode and the other in interactive mode. A total of 523 utterances were collected. Participants were asked to complete five-question surveys at the end of each set of scenarios. 
For the regression model described below, we averaged User Satisfaction scores across the two interaction modes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4" }, { "text": "We adopted the PARADISE evaluation method (Walker et al., 1997). PARADISE is a \"decision-theoretic framework to specify the relative contribution of various factors to a system's overall performance.\" Figure 2 shows the PARADISE model, which defines system performance as a weighted function of task-based success measures and dialogue-based cost measures. Dialogue costs are further divided into dialogue efficiency measures and qualitative measures. Weights are calculated by correlating User Satisfaction with performance. Figure 2. PARADISE Model (Walker et al., 1997). The set of metrics that were collected are: \u2022 Q5: I would recommend that this system be fielded (Future Use). These questions are modified from the more traditional User Satisfaction questions (Walker et al., 2001) that include TTS Performance and Expected Behavior. TTS Performance was substituted because the voice is of such high quality that it sounds just like a human; therefore, the question is no longer relevant. Expected Behavior was substituted for this study because WIRE is mostly user-initiative for the reporting domain.
This was calculated by averaging the correctness of each field over the number of fields attempted. Field correctness was scored manually as either 1 or 0, depending on whether the report field was filled out completely correctly based on the user's intent. Partial credit was not given.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4" }, { "text": "Various platforms were used in the experiment, including laptops, tablet PCs and wearable computers. The Platform metric reflects processing power, with 0 denoting the most powerful machines and 1 the less powerful wearable computers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4" }, { "text": "We applied the PARADISE model using the metrics described above by performing multiple linear regression with a backward coefficient selection method that iteratively removes coefficients that do not help prediction. The best model accounts for 68% of the variance of User Satisfaction (p=.01). Table 1 gives the metrics in the model with their coefficients and p values. Note that the data set is quite small (N=18, df=17), which most likely affected the results. Results show an average User Satisfaction of 3.9, broken down into 4.09 for interactive mode and 3.73 for stealth mode. The lowest median user satisfaction score was for System Trust (3.5); the highest was for Task Ease (4.5).", "cite_spans": [], "ref_spans": [ { "start": 302, "end": 309, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "Speech recognition word accuracy is 79%; however, Report Accuracy, which is measured after the speech has been processed by the NL component, is 84%. Individual field correctness scores varied from 93% for Activity to 75% for Location. 
From previous tests, we have found that word accuracy increases to as much as 95% with user training and experience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "These initial results show that the User Turns metric is negatively predictive of User Satisfaction. This is intuitive: the more user turns it takes to complete a report, the less satisfied the user. (Walker et al., 2001) report similar findings for the Communicator data, where Task Duration is negatively predictive of User Satisfaction in their model (coefficient -0.15).", "cite_spans": [ { "start": 201, "end": 222, "text": "(Walker et al., 2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" }, { "text": "Secondly, Unit Field Correctness is predictive of User Satisfaction. Given this model and the limited data set, this metric may represent task completion better than overall Report Accuracy. During the test, the user can see the report before it is sent. If there are mistakes, this too will affect User Satisfaction. This is similar to findings by (Walker et al., 2001), who found that Task Completion was positively predictive of User Satisfaction (coefficient 0.45).", "cite_spans": [ { "start": 362, "end": 383, "text": "(Walker et al., 2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" }, { "text": "Finally, Platform is negatively predictive; in other words, the higher the processing power (scored 0), the higher the User Satisfaction, and the lower the processing power (scored 1), the lower the User Satisfaction. Not surprisingly, users prefer the system when it runs on a faster computer. This means that the success of the system is likely dependent on an advanced wearable computer. There have been recent advances in this field since this experiment. 
These systems are now available with faster Intel processors and acceptable form factors and battery life.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" }, { "text": "The User Satisfaction results show that areas of improvement include increasing the user's trust in the system (Q3). This challenge has been discussed previously for military applications in (Miksch et al., 2004) and may reflect the hesitancy of military personnel to accept new technology. Trust in the system can be improved by putting the system in \"interactive\" mode, which explicitly confirms each utterance and allows the user to have the system read back the report before sending it. A Wilcoxon signed-rank test (Z = 2.12, p < .05) indicated that scores for this question were significantly higher for interactive mode (M = 3.93) than stealth mode (M = 3.27).", "cite_spans": [ { "start": 182, "end": 203, "text": "(Miksch et al., 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" }, { "text": "Our current evaluation model uses User Satisfaction as a response variable in line with previous PARADISE evaluations (Walker et al., 2001). However, User Satisfaction may not be the most appropriate metric for military applications. Unlike commercial applications, the goal of a military system is not to please the user but rather to complete a mission in a highly effective and safe manner. Therefore, a metric such as mission effectiveness may be more appropriate. 
Similarly, (Forbes-Riley and Litman, 2006) use the domain-specific response variable of student learning in their evaluation model.", "cite_spans": [ { "start": 118, "end": 139, "text": "(Walker et al., 2001)", "ref_id": "BIBREF4" }, { "start": 481, "end": 512, "text": "(Forbes-Riley and Litman, 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" }, { "text": "An obvious extension to this study is to test in more realistic conditions, where users may be experiencing stress in noisy environments. Initial studies have been performed in which users were physically exerted; these studies did not show a degradation in performance. In addition, initial tests outdoors in noisy and windy environments emphasize the need for a high-quality noise-canceling microphone. Further, more extensive tests of this type are needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" }, { "text": "In summary, we have presented the WIRE spoken language understanding system for intelligence reporting, and we have discussed initial evaluations using the PARADISE method. 
Through advances in spoken language understanding, hardware and microphones, this technology will soon transition out of the laboratory and into the field to benefit warfighters and improve security in conflict regions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation and Discussion", "sec_num": "6" } ], "back_matter": [ { "text": "Thanks to the Army Reservist 1/417th Regt, 1st BDE 98th Div (IT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Modeling User Satisfaction and Student Learning in a Spoken Dialogue Tutoring System with Generic, Tutoring, and User Affect Parameters", "authors": [ { "first": "K", "middle": [], "last": "Forbes-Riley", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Forbes-Riley, K. and Litman, D.J. 
\"Modeling User Satisfaction and Student Learning in a Spoken Dialogue Tutoring System with Generic, Tutoring, and User Affect Parameters.\" HLT-NAACL, 2006.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dynaspeak\u2122: SRI International's scalable speech recognizer for embedded and mobile systems", "authors": [ { "first": "H", "middle": [], "last": "Franco", "suffix": "" }, { "first": "J", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "J", "middle": [], "last": "Butzberger", "suffix": "" }, { "first": "F", "middle": [], "last": "Cesari", "suffix": "" }, { "first": "M", "middle": [], "last": "Frandsen", "suffix": "" }, { "first": "J", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "R", "middle": [], "last": "Rao", "suffix": "" }, { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Abrash", "middle": [], "last": "", "suffix": "" }, { "first": "V", "middle": [], "last": "", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franco, H., Zheng, J., Butzberger, J., Cesari, F., Frandsen, M., Arnold, J., Rao, R., Stolcke, A., and Abrash, V. \"Dynaspeak\u2122: SRI International's scalable speech recognizer for embedded and mobile systems.\" HLT, 2002.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Establishing Trust in a Deployed Spoken Language System for Military Domains", "authors": [ { "first": "D", "middle": [], "last": "Miksch", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Daniels", "suffix": "" }, { "first": "H", "middle": [], "last": "Hastie", "suffix": "" } ], "year": 2004, "venue": "Proc. of AAAI Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miksch, D., Daniels, J.J., and Hastie, H. (2004). \"Establishing Trust in a Deployed Spoken Language System for Military Domains.\" In Proc. 
of AAAI Workshop, 2004.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "PARADISE: A Framework for Evaluating Spoken Dialogue Agents", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "D", "middle": [], "last": "Litman", "suffix": "" }, { "first": "C", "middle": [], "last": "Kamm", "suffix": "" }, { "first": "A", "middle": [], "last": "Abella", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walker, M.A., Litman, D., Kamm, C. and Abella, A. \"PARADISE: A Framework for Evaluating Spoken Dialogue Agents.\" ACL, 1997.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Quantitative and Qualitative Evaluation of DARPA Communicator Spoken Dialogue Systems", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "R", "middle": [], "last": "Passonneau", "suffix": "" }, { "first": "J", "middle": [ "E" ], "last": "Boland", "suffix": "" } ], "year": 2001, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walker, M.A., Passonneau, R., and Boland, J.E. \"Quantitative and Qualitative Evaluation of DARPA Communicator Spoken Dialogue Systems.\" ACL, 2001.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "WIRE System Architecture", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "text": "Predictive Power and Significance of Metrics", "html": null, "content": "
Metric | Standardized \u03b2 Coefficient | p value
User Turns | -0.633 | 0.01
Unit Field Correctness | 0.735 | 0.00
Platform | -0.24 | 0.141
", "type_str": "table", "num": null } } } }