{
"paper_id": "W02-0219",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:15:13.157462Z"
},
"title": "A New Taxonomy for the Quality of Telephone Services Based on Spoken Dialogue Systems",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "M\u00f6ller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ruhr-University",
"location": {
"postCode": "D-44780",
"settlement": "Bochum, Bochum",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This document proposes a new taxonomy for describing the quality of services which are based on spoken dialogue systems (SDSs), and operated via a telephone interface. It is used to classify instrumentally or expert-derived dialogue and system measures, as well as quality features perceived by the user of the service. A comparison is drawn to the quality of human-to-human telephone services, and implications for the development of evaluation frameworks such as PARADISE are discussed.",
"pdf_parse": {
"paper_id": "W02-0219",
"_pdf_hash": "",
"abstract": [
{
"text": "This document proposes a new taxonomy for describing the quality of services which are based on spoken dialogue systems (SDSs), and operated via a telephone interface. It is used to classify instrumentally or expert-derived dialogue and system measures, as well as quality features perceived by the user of the service. A comparison is drawn to the quality of human-to-human telephone services, and implications for the development of evaluation frameworks such as PARADISE are discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Telephone services which rely on spoken dialogue systems (SDSs) have now been introduced at a large scale. For the human user, when dialing the number it is often not completely clear that the agent on the other side will be a machine, and not a human operator. Because of this fact, and because the interaction with the SDS is performed through the same type of user interface (e.g. the handset telephone), comparisons will automatically be drawn to the quality of human-human communication over the same channel, and sometimes with the same purpose. Thus, while acknowledging the differences in behaviors from both -human and machine -sides, it seems justified to take the human telephone interaction (HHI) as one reference for telephone-based human-machine interaction (HMI).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The quality of interactions with spoken dialogue systems is difficult to determine. Whereas structured approaches have been documented on how to design spoken dialogue systems so that they adequately meet the requirements of their users (e.g. by Bernsen et al., 1998) , the quality which is perceived when interacting with SDSs is often addressed in an intuitive way. Hone and Graham (2001) describe efforts to determine the underlying dimensions in user quality judgments, by performing a multidimensional analysis on subjective ratings obtained on a large number of different scales. The problem obviously turned out to be multi-dimensional. Nevertheless, many other researchers still try to estimate \"overall system quality\", \"usability\" or \"user satisfaction\" by simply calculating the arithmetic mean over several user ratings on topics as different as perceived TTS quality, perceived system understanding, and expected future use of the system. The reason is the lack of an adequate description of quality dimensions, both with respect to the system design and to the perception of the user.",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "Bernsen et al., 1998)",
"ref_id": "BIBREF0"
},
{
"start": 368,
"end": 390,
"text": "Hone and Graham (2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, an attempt is made to close this gap. A taxonomy is developed which allows quality dimensions to be classified, and methods for their measurement to be developed. The starting point for this taxonomy was a similar one which has fruitfully been used for the description of human-to-human services in telecommunication networks (e.g. traditional telephony, mobile telephony, or voice over IP), see M\u00f6ller (2000) . Such a taxonomy can be helpful in three respects: (1) system elements which are in the hands of developers, and responsible for specific user perceptions, can be identified, (2) the dimensions underlying the overall impression of the user can be described, together with adequate (subjective) measurement methods, and (3) prediction models can be developed to estimate quality -as it would be perceived by the user -from purely instrumental measurements. While we are still far from the last point in HMI, examples will be presented of the first two issues.",
"cite_spans": [
{
"start": 411,
"end": 424,
"text": "M\u00f6ller (2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The next section will discuss what is understood by the term \"quality\", and will present the taxonomy for HMI. In Section 3, quality features underlying the aspects of the taxonomy are identified, and dialogue-and system-related measures for each aspect are presented in Section 4, based on measures which are commonly documented in literature. Section 5 shows the parallels to the original taxonomy for HHI. The outlook gives implications for the development of evaluation and prediction models, such as the PARADISE framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is obvious that quality is not an entity which could be measured in an easy way, e.g. using a technical instrument. The quality of a service results from the perceptions of its user, in relation to what they expect or desire from the service. In the following, it will thus be made use of the definition of quality developed by Jekosch (2000) :",
"cite_spans": [
{
"start": 331,
"end": 345,
"text": "Jekosch (2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "\"Quality is the result of the judgment of a perceived constitution of an entity with regard to its desired constitution. [...] The perceived constitution contains the totality of the features of an entity. For the perceiving person, it is a characteristic of the identity of the entity.\"",
"cite_spans": [
{
"start": 121,
"end": 126,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "The entity to be judged in our case is the service the user interacts with (through the telephone network), and which is based on a spoken dialogue system. Its quality is a compromise between what s/he expects or desires, and the characteristics s/he perceives while using the service.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "At this point, it is useful to differentiate between quality elements and quality features, as it was also proposed by Jekosch. Whereas the former are system or service characteristics which are in the hands of the designer (and thus can be optimized to reach high quality), the latter are perceptive dimensions forming the overall picture in the mind of the user. Generally, no stable relationship which would be valid for all types of services, users and situations can be established between the two. Evaluation frameworks such as PARADISE establish a temporary relationship, and try to reach some crossdomain validity. Due to the lack of quality elements which can really be manipulated in some way by the designer, however, the framework has to start mostly from dialogue and system measures which cannot be directly controlled. These measures will be listed in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "The quality of a service (QoS) is often addressed only from the designer side, e.g. in the definition used by the International Telecommunication Union for telephone services (ITU-T Rec. E.800, 1994). It includes service support, operability, security and serveability. Whereas these issues are necessary for a successful set-up of the service, they are not directly perceived by the user. In the following taxonomy, the focus is therefore put on the user side. The overall picture is presented in Figure 1 . It illustrates the categories (white boxes) which can be sub-divided into aspects (gray boxes), and their relationships (arrows). As the user is the decision point for each quality aspect, user factors have to be seen in a distributed way over the whole picture. This fact has tentatively been illustrated by the gray cans on the upper side of the taxonomy, but will not be further addressed in this paper. The remaining categories are discussed in the following. identified three factors which carry an influence on the performance of SDSs, and which therefore are thought to contribute to its quality perceived by the user: agent factors (mainly related to the dialogue and the system itself), task factors (related to how the SDS captures the task it has been developed for) and environmental factors (e.g. factors related to the acoustic environment and the transmission channel). Because the taxonomy refers to the service as a whole, a fourth point is added here, namely contextual factors such as costs, type of access, or the availability. All four types of factors subsume quality elements which can be expected to carry an influence on the quality perceived by the user. The agent factors carry an influence on three quality categories. On the speech level, input and output quality will have a major influence. Quality features for speech output have been largely investigated in the literature, and include e.g. intelligibility, naturalness, or listening-effort. 
They will depend on the whole system set-up, and on the situation and task the user is confronted with. Quality features related to the speech input from the user (and thus to the system's recognition and understanding capabilities) are far less obvious. They are, in addition, much more difficult to investigate, because the user only receives indirect feedback on the system's capabilities, namely from the system reactions, which are influenced by the dialogue as a whole. Both speech input and output are highly influenced by the environmental factors.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 506,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "On the language and dialogue level, dialogue cooperativity has been identified as a key requirement for high-quality services (Bernsen et al., 1998) . The classification of cooperativity into aspects which was proposed by Bernsen et al., and which is related to Grice's maxims (Grice, 1975) of cooperative behavior in HHI, is mainly adopted here, with one exception: we regard the partner asymmetry aspect under a separate category called dialogue symmetry, together with the aspects initiative and interaction control. Dialogue cooperativity will thus cover the aspects informativeness, truth and evidence, relevance, manner, the user's background knowledge, and meta-communication handling strategies.",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Bernsen et al., 1998)",
"ref_id": "BIBREF0"
},
{
"start": 277,
"end": 290,
"text": "(Grice, 1975)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "Adopting the notion of efficiency used by ETSI and ISO (ETSI Technical Report ETR 095, 1993), efficiency designates the effort and resources expanded in relation to the accuracy and completeness with which users can reach specified goals. It is proposed to differentiate three categories of efficiency. Communication efficiency relates to the efficiency of the dialogic interaction, and includesbesides the aspects speed and conciseness -also the smoothness of the dialogue (which is sometimes called \"dialogue quality\"). Note that this is a significant difference to many other notions of efficiency, which only address the efforts and resources, but not the accuracy and completeness of the goals to be reached. Task efficiency is related to the success of the system in accomplishing the task; it covers task success as well as task ease. Service efficiency is the adequacy of the service as a whole for the purpose defined by the user. It also includes the \"added value\" which is contributed to the service, e.g. in comparison to other means of information (comparable interfaces or human operators).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "In addition to efficiency aspects, other aspects exist which relate to the agent itself, as well as its perception by the user in the dialogic interaction. We subsume these aspects under the category \"comfort\", although other terms might exist which better describe the according perceptions of the user. Comfort covers the agent's \"social personality\" (perceived friendliness, politeness, etc.), as well as the cognitive demand required from the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "Depending on the area of interest, several notions of usability are common. Here, we define usability as the suitability of a system or service to fulfill the user's requirements. It considers mainly the ease of using the system and may result in user satisfaction. It does, however, not cover service efficiency or economical benefit, which carry an influence on the utility (usability in relation to the financial costs and to other contextual factors) of the service. also state that \"user satisfaction ratings [...] have frequently been used in the literature as an external indicator of the usability of an agent.\" As , we assume that user satisfaction is predictive of other system designer objectives, e.g. the willingness to use or pay for a service. Acceptability, which is commonly defined on this more or less \"economic\" level, can therefore be seen in a relationship to usability and utility. It is a multidimensional property of a service, describing how readily a customer will use the service. The acceptability of a service (AoS) can be represented as the ratio of potential users to the quantity of the target user group, see definitions on AoS adopted by EURESCOM (EURESCOM Project P.807 Deliverable 1, 1998).",
"cite_spans": [
{
"start": 514,
"end": 519,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "From the schematic, it can be seen that a large number of aspects contribute to what can be called communication efficiency, usability or user satisfaction. Several interrelations (and a certain degree of inevitable overlap) exist between the categories and aspects, which are marked by arrows. The interrelations will become more apparent by taking a closer look to the underlying quality features which can be associated with each aspect. They will be presented in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Service Taxonomy",
"sec_num": "2"
},
{
"text": "In Tables 1 and 2, an overview is given of the quality features underlying each aspect of the QoS taxonomy. For the aspects related to dialogue cooperativity, these aspects partly stem from the design guideline definitions given by Bernsen et al. (1998) . For the rest, quality features which have been used in experimental investigations on different types of dialogue systems have been classified. They do not solely refer to telephone-based services, but will be valid for a broader class of systems and services.",
"cite_spans": [
{
"start": 232,
"end": 253,
"text": "Bernsen et al. (1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Quality Features",
"sec_num": "3"
},
{
"text": "By definition, quality features are percepts of the users. They can consequently only be measured by asking users in realistic scenarios, in a subjective way. Several studies with this aim are reported in the literature. The author analyzed 12 such investigations and classified the questions which were asked to the users (as far as they have been reported) according to the quality features. For each aspect given in Tables 1 and 2, at least two questions could be identified which addressed this aspect. This classification cannot be reproduced here for space reasons. Additional features of the questionnaires directly address user satisfaction (e.g. perceived satisfaction, degree of enjoyment, user happiness, system likability, degree of frustration or irritation) and acceptability (perceived acceptability, willingness to use the system in the future).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Quality Features",
"sec_num": "3"
},
{
"text": "From the classification, it seems that the taxonomy adequately covers what researchers intuitively would include in questionnaires investigating usability, user satisfaction and acceptability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Quality Features",
"sec_num": "3"
},
{
"text": "Experiments with human subjects are still the only way to investigate quality percepts. They are, however, time-consuming and expensive to carry out. For the developers of SDSs, it is therefore interesting to identify quality elements which are in their hands, and which can be used for enhancing the quality for the user. Unfortunately, only few such elements are known, and their influence on service quality is only partly understood. Word accuracy or word error rate, which are common measures to describe the performance of speech recognizers, can be taken as an example. Although they can be measured partly instrumentally (provided that an agreed-upon corpus with reference transcriptions exists), and the system designer can tune the system to increase the word accuracy, it cannot be determined beforehand how this will affect system usability or user satisfaction. For filling this gap, dialogue-and system-related measures have been developed. They can be determined during the users' experimental interaction with the system or from log-files, either instrumentally (e.g. dialogue duration) or by an expert evaluator (e.g. contextual appropriateness). Although they provide useful information on the perceived quality of the service, there is no general relationship between one or several such measures, and specific quality features. The PARADISE framework produces such a relationship for a specific scenario, using multivariate linear regression. Some generalizablility can be reached, but the exact form of the relationship and its constituting input parameters have to be established for each system anew.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Dialogue and System Measures",
"sec_num": "4"
},
{
"text": "A generalization across systems and services might be easier if a categorization of dialogue and system measures can be reached. Tables 3 and 4 in the Appendix report on the classification of 37 different measures defined in literature into the QoS taxonomy. No measures have been found so far which directly relate to speech output quality, agent personality, service efficiency, usability, or user satisfaction. With the exception of the first aspect, it may however be assumed that they will be addressed by a combination of the measures related to the underlying aspects.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 143,
"text": "Tables 3 and 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification of Dialogue and System Measures",
"sec_num": "4"
},
{
"text": "It has been stated earlier that the QoS taxonomy for telephone-based spoken dialogue services has been derived from an earlier schematic addressing human-to-human telephone services (M\u00f6ller, 2000) . This schematic is depicted in Figure 2 , with slight modifications on the labels of single categories from the original version.",
"cite_spans": [
{
"start": 182,
"end": 196,
"text": "(M\u00f6ller, 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Comparison to Human-Human Services",
"sec_num": "5"
},
{
"text": "In the HHI case, the focus is placed on the categories of speech communication. This category (re- placing environmental and agent factors of the HMI case) is divided into a one-way voice transmission category, a conversational category (conversation effectiveness), and a user-related category (ease of communication; comparable to the category \"comfort\" in the HMI case). The task and service categories of the interaction with the SDS are replaced by the service categories of the HHI schematic. The rest of the schematic is congruent in both cases, although the single aspects which are covered by each category obviously differ. The taxonomy of Figure 2 has fruitfully been used to classify three types of entities: quality elements which are used for the set-up and planning of telephone networks (some of these elements are given in the gray boxes of Figure 2 ) assessment methods commonly used for measuring quality features in telecommunications quality prediction models which estimate single quality features from the results of instrumental measurements Although we seem to be far from reaching a comparable level in the assessment and prediction of HMI quality issues, it is hoped that the taxonomy of Figure 1 can be equally useful with respect to telephone services based on SDSs.",
"cite_spans": [],
"ref_spans": [
{
"start": 650,
"end": 658,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 858,
"end": 866,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1215,
"end": 1221,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Human-Human Services",
"sec_num": "5"
},
{
"text": "The new taxonomy was shown to be useful in classifying quality features (dimensions of human quality perception) as well as instrumentally or expertderived measures which are related to service quality, usability, and acceptability. Nonetheless, in both cases it has not been validated against experimental (empirical) data. Thus, one cannot guarantee that the space of quality dimensions is captured in an accurate and complete way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
},
{
"text": "There are a number of facts reported in literature, which make us confident that the taxonomy nevertheless captures general assumptions and trends. First of all, in his review of both subjective evaluations as well as dialogue-or system-related measures, the author didn't encounter items which would not be covered by the schematic. This literature review is still going on, and it is hoped that more detailed data can be presented in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
},
{
"text": "As stated above, the separation of environmental, agent and task factors was motivated by . The same categories appear in the characterization of spoken dialogue systems given by Fraser (1997) (plus an additional user factor, which obviously is nested in the quality aspects due to the fact that it is the user who decides on quality). The context factor is also recognized by Dybkjaer and Bernsen (2000) . Dialogue cooperativity is a category which is based on a relatively sophisticated theoretical as well as empirical background. It has proven useful especially in the system design and set-up phase, and first results in evaluation have also been reported (Bernsen et al., 1998) . The dialogue symmetry category captures the remaining partner asymmetry aspect, and has been designed separately to additionally cover initiative and interaction control aspects. To the authors knowledge, no similar category has been reported. The relationship between the different efficiency measures and usability, user satisfaction and utility was already discussed in Section 2.",
"cite_spans": [
{
"start": 377,
"end": 404,
"text": "Dybkjaer and Bernsen (2000)",
"ref_id": "BIBREF2"
},
{
"start": 661,
"end": 683,
"text": "(Bernsen et al., 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
},
{
"text": "In the PARADISE framework, user satisfaction is composed of maximal task success and minimal dialogue costs , -thus a type of efficiency in the way it was defined here. This concept is still congruent with the proposed taxonomy. On the other hand, the separation into \"efficiency measures\" and \"quality measures\" (same figure) does not seem to be fine-graded enough. It is proposed that the taxonomy could be used to classify different measures beforehand. Based on the categories, a multi-level prediction model could be envisaged, first summarizing similar measures (belonging to the same category) into intermediate indices, and then combining the contributions of different indices into an estimation of user satisfaction. The reference for user satisfaction, however, cannot be a simple arithmetic mean of the subjective ratings in different categories. Appropriate questionnaires still have to be developed, and they will take profit of multidimensional analyses as reported by Hone and Graham (2001) .",
"cite_spans": [
{
"start": 984,
"end": 1006,
"text": "Hone and Graham (2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "(K97) -UER: understanding error rate -# ASR rejections (W98) -IC: information content (SF93) -# system error messages (Pr92) Danieli and Gerbino (1995) ; F97: Fraser (1997); K97: P92: Polifroni et al. (1992) ; Pr92: Price et al. (1992) ; SF93: Simpson and Fraser (1993) ; S01: Strik et al. (2001) ; W98: .",
"cite_spans": [
{
"start": 125,
"end": 151,
"text": "Danieli and Gerbino (1995)",
"ref_id": "BIBREF1"
},
{
"start": 179,
"end": 207,
"text": "P92: Polifroni et al. (1992)",
"ref_id": null
},
{
"start": 216,
"end": 235,
"text": "Price et al. (1992)",
"ref_id": "BIBREF16"
},
{
"start": 244,
"end": 269,
"text": "Simpson and Fraser (1993)",
"ref_id": "BIBREF17"
},
{
"start": 277,
"end": 296,
"text": "Strik et al. (2001)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification of Dialogue and System Measures",
"sec_num": null
},
{
"text": "Dialogue / System Measure Communic. Speed -TD: turn duration (STD, UTD) (F97) Efficiency -SRD: system response delay (Pr92) -URD: user response delay (Pr92) -# timeout prompts (W98) -# barge-in attempts from the user (W98) Conciseness -DD: dialogue duration (F97, P92) -# turns (# system turns, # user turns) (W98) dialogue efficiency) Smoothness -# system error messages (Pr92) -# cancel attempts from the user (W98) dialogue quality) -# help requests (W98) -# ASR rejections (W98) -# barge-in attempts from the user (W98) -# timeout prompts ( ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Designing Interactive Speech Systems: From First Ideas to User Testing",
"authors": [
{
"first": "Niels",
"middle": [
"Ole"
],
"last": "Bernsen",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Dybkjaer",
"suffix": ""
},
{
"first": "Laila",
"middle": [],
"last": "Dybkjaer",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niels Ole Bernsen, Hans Dybkjaer, and Laila Dybkjaer. 1998. Designing Interactive Speech Systems: From First Ideas to User Testing. Springer, D-Berlin.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Metrics for evaluating dialogue strategies in a spoken language system",
"authors": [
{
"first": "Morena",
"middle": [],
"last": "Danieli",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Gerbino",
"suffix": ""
}
],
"year": 1995,
"venue": "Empirical Methods in Discourse Interpretation and Generation. Papers from the 1995 AAAI Symposium",
"volume": "",
"issue": "",
"pages": "34--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morena Danieli and Elisabetta Gerbino. 1995. Metrics for evaluating dialogue strategies in a spoken language system. In: Empirical Methods in Discourse Inter- pretation and Generation. Papers from the 1995 AAAI Symposium, USA-Stanford CA, pages 34-39, AAAI Press, USA-Menlo Park CA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Usability issues in spoken dialogue systems",
"authors": [
{
"first": "Laila",
"middle": [],
"last": "Dybkjaer",
"suffix": ""
},
{
"first": "Niels",
"middle": [
"Ole"
],
"last": "Bernsen",
"suffix": ""
}
],
"year": 2000,
"venue": "Natural Language Engineering",
"volume": "6",
"issue": "3-4",
"pages": "243--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laila Dybkjaer and Niels Ole Bernsen. 2000. Usability issues in spoken dialogue systems. Natural Language Engineering, 6(3-4):243-271.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Human Factors (HF)",
"authors": [],
"year": 1993,
"venue": "Guide for Usability Evaluations of Telecommunication Systems and Services. European Telecommunications Standards",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ETSI Technical Report ETR 095, 1993. Human Factors (HF); Guide for Usability Evaluations of Telecommu- nication Systems and Services. European Telecommu- nications Standards Institute, F-Sophia Antipolis.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Jupiter II -Usability, Performability and Interoperability Trials in Europe. European Institute for Research and Strategic Studies in Telecommunications",
"authors": [],
"year": 1998,
"venue": "EURESCOM Project P.807 Deliverable 1",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "EURESCOM Project P.807 Deliverable 1, 1998. Jupiter II -Usability, Performability and Interoperability Tri- als in Europe. European Institute for Research and Strategic Studies in Telecommunications, D- Heidelberg.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Assessment of Interactive Systems",
"authors": [
{
"first": "Norman",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 1997,
"venue": "Handbook on Standards and Resources for Spoken Language Systems",
"volume": "",
"issue": "",
"pages": "564--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norman Fraser. 1997. Assessment of Interactive Sys- tems. In: Handbook on Standards and Resources for Spoken Language Systems (D. Gibbon, R. Moore and R. Winski, eds.), pages 564-615, Mouton de Gruyter, D-Berlin.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Logic and Conversation",
"authors": [
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and Semantics",
"volume": "3",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Paul Grice, 1975. Logic and Conversation, pages 41- 58. Syntax and Semantics, Vol. 3: Speech Acts (P. Cole and J. L. Morgan, eds.). Academic Press, USA- New York (NY).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Subjective Assessment of Speech-System Interface Usability",
"authors": [
{
"first": "Kate",
"middle": [
"S"
],
"last": "Hone",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 7th Europ. Conf. on Speech Communication and Technology (EUROSPEECH 2001 -Scandinavia)",
"volume": "",
"issue": "",
"pages": "2083--2086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate S. Hone and Robert Graham. 2001. Subjective As- sessment of Speech-System Interface Usability. Proc. 7th Europ. Conf. on Speech Communication and Tech- nology (EUROSPEECH 2001 -Scandinavia), pages 2083-2086, DK-Aalborg.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Terms and Definitions Related to Quality of Service and Network Performance Including Dependability. International Telecommunication Union",
"authors": [
{
"first": ".",
"middle": [
"E"
],
"last": "Itu-T Rec",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "800",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ITU-T Rec. E.800, 1994. Terms and Definitions Related to Quality of Service and Network Performance In- cluding Dependability. International Telecommunica- tion Union, CH-Geneva, August.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sprache h\u00f6ren und beurteilen: Ein Ansatz zur Grundlegung der Sprachqualit\u00e4tsbeurteilung",
"authors": [
{
"first": "Ute",
"middle": [],
"last": "Jekosch",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ute Jekosch. 2000. Sprache h\u00f6ren und beurteilen: Ein Ansatz zur Grundlegung der Sprachqualit\u00e4tsbe- urteilung. Habilitation thesis (unpublished), Univer- sit\u00e4t/Gesamthochschule Essen, D-Essen.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Design and Evaluation of Spoken Dialogue Systems",
"authors": [
{
"first": "Candance",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. 1997 IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "14--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Candance A. Kamm and Marilyn A. Walker. 1997. Design and Evaluation of Spoken Dialogue Systems. Proc. 1997 IEEE Workshop on Automatic Speech Recognition and Understanding, USA-Santa Barbara (CA), pages 14-17.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating Spoken Dialogue Systems for Telecommunication Services",
"authors": [
{
"first": "Candance",
"middle": [],
"last": "Kamm",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Dutton",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Ritenour",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. 5th Europ. Conf. on Speech Communication and Technology (EUROSPEECH'97)",
"volume": "4",
"issue": "",
"pages": "2203--2206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Candance Kamm, Shrikanth Narayanan, Dawn Dutton, and Russell Ritenour. 1997. Evaluating Spoken Dialogue Systems for Telecommunication Services. Proc. 5th Europ. Conf. on Speech Communication and Technology (EUROSPEECH'97), 4:2203-2206, GR- Rhodes.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Evaluating Response Strategies in a Web-Based Spoken Dialogue Agent",
"authors": [
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "Shimei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diane J. Litman, Shimei Pan, and Marilyn A. Walker. 1998. Evaluating Response Strategies in a Web- Based Spoken Dialogue Agent. Proc. of the 36th",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ann. Meeting of the Assoc. for Computational Linguistics and 17th Int. Conf. on Computational Linguistics (COLING-ACL 98)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann. Meeting of the Assoc. for Computational Linguis- tics and 17th Int. Conf. on Computational Linguistics (COLING-ACL 98), CAN-Montreal.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Assessment and Prediction of Speech Quality in Telecommunications",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian M\u00f6ller. 2000. Assessment and Prediction of Speech Quality in Telecommunications. Kluwer Aca- demic Publ., USA-Boston.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stephanie Seneff, and Victor Zue. 1992. Experiments in Evaluating Interactive Spoken Language Systems",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": null,
"venue": "Proc. DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "28--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Polifroni, Lynette Hirschman, Stephanie Sen- eff, and Victor Zue. 1992. Experiments in Eval- uating Interactive Spoken Language Systems. In: Proc. DARPA Speech and Natural Language Work- shop, pages 28-33.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Subject-Based Evaluation Measures for Interactive Spoken Language Systems",
"authors": [
{
"first": "Patti",
"middle": [
"J"
],
"last": "Price",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Wade",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "34--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patti J. Price, Lynette Hirschman, Elizabeth Shriberg, and Elizabeth Wade. 1992. Subject-Based Evaluation Measures for Interactive Spoken Language Systems. In: Proc. DARPA Speech and Natural Language Work- shop, pages 34-39.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Black Box and Glass Box Evaluation of the SUNDIAL System",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Simpson",
"suffix": ""
},
{
"first": "Norman",
"middle": [
"M"
],
"last": "Fraser",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. 3rd Europ. Conf. on Speech Communication and Technology (EUROSPEECH'93)",
"volume": "2",
"issue": "",
"pages": "1423--1426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Simpson and Norman M. Fraser. 1993. Black Box and Glass Box Evaluation of the SUNDIAL Sys- tem. Proc. 3rd Europ. Conf. on Speech Communi- cation and Technology (EUROSPEECH'93), 2:1423- 1426, D-Berlin.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Comparing the Performance of Two CSRs: How to Determine the Significance Level of the Differences",
"authors": [
{
"first": "Helmer",
"middle": [],
"last": "Strik",
"suffix": ""
},
{
"first": "Catia",
"middle": [],
"last": "Cucchiarini",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"M"
],
"last": "Kessens",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 7th Europ. Conf. on Speech Communication and Technology (EUROSPEECH 2001 -Scandinavia)",
"volume": "",
"issue": "",
"pages": "2091--2094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmer Strik, Catia Cucchiarini, and Judith M. Kessens. 2001. Comparing the Performance of Two CSRs: How to Determine the Significance Level of the Dif- ferences. Proc. 7th Europ. Conf. on Speech Communi- cation and Technology (EUROSPEECH 2001 -Scan- dinavia), pages 2091-2094, DK-Aalborg.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "PARADISE: A Framework for Evaluating Spoken Dialogue Agents",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "Candance",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Abella",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the ACL/EACL 35th Ann. Meeting of the Assoc. for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A. Walker, Diane J. Litman, Candance A. Kamm, and Alicia Abella. 1997. PARADISE: A Framework for Evaluating Spoken Dialogue Agents. Proc. of the ACL/EACL 35th Ann. Meeting of the As- soc. for Computational Linguistics, pages 271-280.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Evaluating Spoken Dialogue Agents with PARADISE: Two Case Studies",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "Candace",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Abella",
"suffix": ""
}
],
"year": 1998,
"venue": "Computer Speech and Language",
"volume": "12",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A. Walker, Diane J. Litman, Candace A. Kamm, and Alicia Abella. 1998. Evaluating Spoken Dialogue Agents with PARADISE: Two Case Studies. Com- puter Speech and Language, 12(3).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "QoS schematic for task-oriented HCI. lower part of the picture.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "QoS schematic for human-to-human telephone services.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "The corresponding quality features are summarized into aspects and categories in the following",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>linguistic backgr.</td><td colspan=\"2\">attitude</td><td colspan=\"2\">emotions flexibility</td><td>user factors</td><td colspan=\"2\">experi-ence</td><td>task / domain knowledge</td><td>motivation, goals</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">quality</td><td/></tr><tr><td/><td/><td/><td/><td>of</td><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">service</td><td/></tr><tr><td colspan=\"3\">environmental factors</td><td/><td>agent factors</td><td/><td colspan=\"2\">task factors</td><td>contextual factors</td></tr><tr><td/><td colspan=\"3\">transm. channel backgr. noise room acoustics</td><td colspan=\"2\">system knowledge dialogue strategy dialogue flexibility</td><td/><td>task coverage domain cov. task difficulty task flexibility</td><td>costs availability access opening hours</td></tr><tr><td colspan=\"3\">speech i/o</td><td colspan=\"2\">dialogue</td><td>dialogue</td><td/></tr><tr><td colspan=\"2\">quality</td><td/><td colspan=\"2\">cooperativity</td><td colspan=\"2\">symmetry</td></tr><tr><td>intelligibility naturalness listening-effort</td><td/><td/><td colspan=\"2\">informativeness truth &amp; evidence relevance</td><td colspan=\"3\">initiative interaction control partner asymmetry</td></tr><tr><td>system underst.</td><td/><td/><td>manner</td><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"2\">backgr. know.</td><td/><td/></tr><tr><td/><td/><td/><td colspan=\"2\">meta-comm. 
handl.</td><td/><td/></tr><tr><td colspan=\"3\">communication efficiency</td><td/><td>comfort</td><td/><td colspan=\"2\">task efficiency</td><td>task success task ease</td></tr><tr><td>speed / pace</td><td/><td colspan=\"2\">personality</td><td/><td/><td/></tr><tr><td colspan=\"2\">dialogue conciseness</td><td colspan=\"2\">cognitive demand</td><td/><td/><td/></tr><tr><td colspan=\"2\">dialogue smoothness</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"2\">ease of use</td><td>usability</td><td/><td/><td>service efficiency</td><td>economical benefit</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">service adequacy</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">added value</td></tr><tr><td/><td/><td/><td>enjoyability</td><td colspan=\"2\">user satisfaction</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>valuability</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">utility</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">acceptability</td><td>future use</td></tr></table>"
},
"TABREF1": {
"text": "Dialogue-related quality features.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Aspect</td><td>Quality Features</td></tr><tr><td>Dialogue</td><td>Informativeness</td><td>-Accuracy / Specificity of Information</td></tr><tr><td>Cooperativity</td><td/><td>-Completeness of Information</td></tr><tr><td/><td/><td>-Clarity of Information</td></tr><tr><td/><td/><td>-Conciseness of Information</td></tr><tr><td/><td/><td>-System Feedback Adequacy</td></tr><tr><td/><td>Truth and</td><td>-Credibility of Information</td></tr><tr><td/><td>Evidence</td><td>-Consistency of Information</td></tr><tr><td/><td/><td>-Reliability of Information</td></tr><tr><td/><td/><td>-Perceived System Reasoning</td></tr><tr><td/><td>Relevance</td><td>-System Feedback Adequacy</td></tr><tr><td/><td/><td>-Perceived System Understanding</td></tr><tr><td/><td/><td>-Perceived System Reasoning</td></tr><tr><td/><td/><td>-Naturalness of Interaction</td></tr><tr><td/><td>Manner</td><td>-Clarity / Non-Ambiguity of Expression</td></tr><tr><td/><td/><td>-Consistency of Expression</td></tr><tr><td/><td/><td>-Conciseness of Expression</td></tr><tr><td/><td/><td>-Transparency of Interaction</td></tr><tr><td/><td/><td>-Order of Interaction</td></tr><tr><td/><td>Background</td><td>-Congruence with User's Task/Domain Knowl.</td></tr><tr><td/><td>Knowledge</td><td>-Congruence with User Experience</td></tr><tr><td/><td/><td>-Suitability of User Adaptation</td></tr><tr><td/><td/><td>-Inference Adequacy</td></tr><tr><td/><td/><td>-Interaction Guidance</td></tr><tr><td/><td>Meta-Comm.</td><td>-Repair Handling Adequacy</td></tr><tr><td/><td>Handling</td><td>-Clarification Handling Adequacy</td></tr><tr><td/><td/><td>-Help Capability</td></tr><tr><td/><td/><td>-Repetition Capability</td></tr><tr><td>Dialogue</td><td>Initiative</td><td>-Flexibility of Interaction</td></tr><tr><td>Symmetry</td><td/><td>-Interaction Guidance</td></tr><tr><td/><td/><td>-Naturalness of Interaction</td></tr><tr><td/><td>Interaction</td><td>-Perceived Control 
Capability</td></tr><tr><td/><td>Control</td><td>-Barge-In Capability</td></tr><tr><td/><td/><td>-Cancel Capability</td></tr><tr><td/><td>Partner</td><td>-Transparency of Interaction</td></tr><tr><td/><td>Asymmetry</td><td>-Transparency of Task / Domain Coverage</td></tr><tr><td/><td/><td>-Interaction Guidance</td></tr><tr><td/><td/><td>-Naturalness of Interaction</td></tr><tr><td/><td/><td>-Cognitive Demand Required from the User</td></tr><tr><td/><td/><td>-Respect of Natural Information Packages</td></tr><tr><td>Speech I/O</td><td>Speech Output</td><td>-Intelligibility</td></tr><tr><td>Quality</td><td>Quality</td><td>-Naturalness of Speech</td></tr><tr><td/><td/><td>-Listening-Effort Required from the User</td></tr><tr><td/><td>Speech Input</td><td>-Perceived System Understanding</td></tr><tr><td/><td>Quality</td><td>-Perceived System Reasoning</td></tr></table>"
},
"TABREF2": {
"text": "Communication-, task-and service-related quality features.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Aspect</td><td>Quality Features</td></tr><tr><td>Communic.</td><td>Speed</td><td>-Perceived Interaction Pace</td></tr><tr><td>Efficiency</td><td/><td>-Perceived Response Time</td></tr><tr><td/><td>Conciseness</td><td>-Perceived Interaction Length</td></tr><tr><td/><td/><td>-Perceived Interaction Duration</td></tr><tr><td/><td>Smoothness</td><td>-System Feedback Adequacy</td></tr><tr><td/><td/><td>-Perceived System Understanding</td></tr><tr><td/><td/><td>-Perceived System Reasoning</td></tr><tr><td/><td/><td>-Repair Handling Adequacy</td></tr><tr><td/><td/><td>-Clarification Handling Adequacy</td></tr><tr><td/><td/><td>-Naturalness of Interaction</td></tr><tr><td/><td/><td>-Interaction Guidance</td></tr><tr><td/><td/><td>-Transparency of Interaction</td></tr><tr><td/><td/><td>-Congruence with User Experience</td></tr><tr><td>Comfort</td><td>Agent</td><td>-Politeness</td></tr><tr><td/><td>Personality</td><td>-Friendliness</td></tr><tr><td/><td/><td>-Naturalness of Behavior</td></tr><tr><td/><td>Cognitive</td><td>-Ease of Communication</td></tr><tr><td/><td>Demand</td><td>-Concentration Required from the User</td></tr><tr><td/><td/><td>-Stress / Fluster</td></tr><tr><td>Task</td><td>Task Success</td><td>-Adequacy of Task / Domain Coverage</td></tr><tr><td>Efficiency</td><td/><td>-Validity of Task Results</td></tr><tr><td/><td/><td>-Precision of Task Results</td></tr><tr><td/><td/><td>-Reliability of Task Results</td></tr><tr><td/><td>Task Ease</td><td>-Perceived Helpfulness</td></tr><tr><td/><td/><td>-Task Guidance</td></tr><tr><td/><td/><td>-Transparency of Task / Domain Coverage</td></tr><tr><td>Service</td><td>Service</td><td>-Access Adequacy</td></tr><tr><td>Efficiency</td><td>Adequacy</td><td>-Availability</td></tr><tr><td/><td/><td>-Modality Adequacy</td></tr><tr><td/><td/><td>-Task Adequacy</td></tr><tr><td/><td/><td>-Perceived Service Functionality</td></tr><tr><td/><td/><td>-Perceived Usefulness</td></tr><tr><td/><td>Added 
Value</td><td>-Service Improvement</td></tr><tr><td/><td/><td>-Comparable Interface</td></tr><tr><td>Usability</td><td>Ease of Use</td><td>-Service Operability</td></tr><tr><td/><td/><td>-Service Understandability</td></tr><tr><td/><td/><td>-Service Learnability</td></tr></table>"
}
}
}
}