Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Diagnostic tests, like therapeutic procedures, require proper analysis prior to incorporation into clinical practice. In studying diagnostic tests, an evaluation should be made of the reproducibility, accuracy, variation among those without the disease, and variation among those with the disease. Both diseased and disease-free states should be identified using a gold standard, if available. Three main guidelines can be used in evaluating and applying the results of diagnostic tests: validity of the study, expression of the results, and assessment of the generalizability of the results. Validity requires an independent, blind comparison with a reference standard. Methodology should be fully explained. Results should include sensitivity, specificity, and a receiver operating characteristic plot. Several categories of results should be provided in the form of likelihood ratios. Management decisions can be made on the basis of the posttest probability of disease after including both the pretest probability and the likelihood ratio in the calculation. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> CONTEXT ::: The literature contains a large number of potential biases in the evaluation of diagnostic tests. Strict application of appropriate methodological criteria would invalidate the clinical application of most study results. ::: ::: ::: OBJECTIVE ::: To empirically determine the quantitative effect of study design shortcomings on estimates of diagnostic accuracy. ::: ::: ::: DESIGN AND SETTING ::: Observational study of the methodological features of 184 original studies evaluating 218 diagnostic tests. Meta-analyses on diagnostic tests were identified through a systematic search of the literature using MEDLINE, EMBASE, and DARE databases and the Cochrane Library (1996-1997). Associations between study characteristics and estimates of diagnostic accuracy were evaluated with a regression model. ::: ::: ::: MAIN OUTCOME MEASURES ::: Relative diagnostic odds ratio (RDOR), which compared the diagnostic odds ratios of studies of a given test that lacked a particular methodological feature with those without the corresponding shortcomings in design. ::: ::: ::: RESULTS ::: Fifteen (6.8%) of 218 evaluations met all 8 criteria; 64 (30%) met 6 or more. Studies evaluating tests in a diseased population and a separate control group overestimated the diagnostic performance compared with studies that used a clinical population (RDOR, 3.0; 95% confidence interval [CI], 2.0-4.5). Studies in which different reference tests were used for positive and negative results of the test under study overestimated the diagnostic performance compared with studies using a single reference test for all patients (RDOR, 2.2; 95% CI, 1.5-3.3). Diagnostic performance was also overestimated when the reference test was interpreted with knowledge of the test result (RDOR, 1.3; 95% CI, 1.0-1.9), when no criteria for the test were described (RDOR, 1.7; 95% CI, 1.1-2.5), and when no description of the population under study was provided (RDOR, 1.4; 95% CI, 1.1-1.7). 
::: ::: ::: CONCLUSION ::: These data provide empirical evidence that diagnostic studies with methodological shortcomings may overestimate the accuracy of a diagnostic test, particularly those including nonrepresentative patients or applying different reference standards. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> BACKGROUND ::: In the era of evidence based medicine, with systematic reviews as its cornerstone, adequate quality assessment tools should be available. There is currently a lack of a systematically developed and evaluated tool for the assessment of diagnostic accuracy studies. The aim of this project was to combine empirical evidence and expert opinion in a formal consensus method to develop a tool to be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy. ::: ::: ::: METHODS ::: We conducted a Delphi procedure to develop the quality assessment tool by refining an initial list of items. Members of the Delphi panel were experts in the area of diagnostic research. The results of three previously conducted reviews of the diagnostic literature were used to generate a list of potential items for inclusion in the tool and to provide an evidence base upon which to develop the tool. ::: ::: ::: RESULTS ::: A total of nine experts in the field of diagnostics took part in the Delphi procedure. The Delphi procedure consisted of four rounds, after which agreement was reached on the items to be included in the tool which we have called QUADAS. The initial list of 28 items was reduced to fourteen items in the final tool. Items included covered patient spectrum, reference standard, disease progression bias, verification bias, review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and indeterminate results. The QUADAS tool is presented together with guidelines for scoring each of the items included in the tool. ::: ::: ::: CONCLUSIONS ::: This project has produced an evidence based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies. Further work to determine the usability and validity of the tool continues. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> OBJECTIVES ::: To produce an easily understood and accessible tool for use by researchers in diagnostic studies. Diagnostic studies should have sample size calculations performed, but in practice, they are performed infrequently. This may be due to a reluctance on the part of researchers to use mathematical formulae. ::: ::: ::: METHODS ::: Using a spreadsheet, we derived nomograms for calculating the number of patients required to determine the precision of a test's sensitivity or specificity. ::: ::: ::: RESULTS ::: The nomograms could be easily used to determine the sensitivity and specificity of a test. ::: ::: ::: CONCLUSIONS ::: In addition to being easy to use, the nomogram allows deduction of a missing parameter (number of patients, confidence intervals, prevalence, or sensitivity/specificity) if the other three are known. The nomogram can also be used retrospectively by the reader of published research as a rough estimating tool for sample size calculations. 
<s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Now is an exciting time to be or become a diagnostician. More diagnostic tests, including portions of the medical interview and physical examination, are being studied rigorously for their accuracy, precision, and usefulness in practice,1,2 and this research is increasingly being systematically reviewed and synthesized.3,4 Diagnosticians are gaining increasing access to this research evidence, raising hope that this knowledge will inform their diagnostic decisions and improve their patients’ clinical outcomes.5 For patients to benefit fully from this accumulating knowledge, the diagnosticians serving them must be able to reason probabilistically, to understand how test results can revise disease probability to confirm or exclude disorders, and to integrate this reasoning with other types of knowledge and diagnostic thinking.6–8 ::: ::: Yet, clinicians encounter several barriers when trying to integrate research evidence into clinical diagnosis.9 Some barriers involve difficulties in understanding and using the quantitative measures of tests’ accuracy and discriminatory power, including sensitivity, specificity, and likelihood ratios (LRs).9,10 We have noticed that LRs are particularly troubling to many learners at first, and we have wondered if this is because of the way they have been taught. Stumbling blocks can arise in several places when learning LRs: the names and formulae themselves can be intimidating; the arithmetic functions can be mystifying when attempted all at once; if two levels of test results are taught first, learners can have difficulty ‘stretching’ to multiple levels; and if disease probability is framed in odds terms (to directly multiply the odds by the likelihood ratio), learners can misunderstand why and how this conversion is done. Other stumbling blocks may occur as well. ::: ::: Other authors have described various approaches to helping clinicians understand LRs.11–16 In this article, we describe two additional approaches to help clinical learners understand how LRs describe the discriminatory power of test results. Whereas we mention other concepts such as pretest and posttest probability, full treatment of those subjects is beyond the scope of this article. These approaches were developed by experienced teachers of evidence-based medicine (EBM) and were refined over years of teaching practice. These tips have also been field-tested to double-check the clarity and practicality of these descriptions, as explained in the introductory article of this series.17 ::: ::: To help the reader envision these teaching approaches, we present sequenced advice for teachers in plain text, coupled with sample words to speak, in italics. These scripts are meant to be interactive, which means that teachers should periodically check in with the learners for their understanding and that teachers should try other ways to explain the ideas if the words we have suggested do not “click.” We present them in order from shorter to longer; however, because these 2 scripts cover the same general content, we encourage teachers to use either or both in an order that best fits their setting and learners. 
<s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Background ::: Clinical prediction rules (CPR) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians’ diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> The leading function of the physician is the clinical reasoning, which involves appropriate investigation of the problems of the patient, formulation of a diagnostic suspect based on the patient's symptoms and signs, gathering of additional relevant information, to select necessary tests and administration of the most suitable therapy. The problems of the patient are expressed by symptoms or signs or abnormal test results, requested for a variety of reasons. The entire scientific, as well as diagnostic approach, is based on three steps: to stumble in a problem; to try a solution through a hypothesis; to disprove or to prove the hypothesis by a process of criticism. Clinicians use the information obtained from the history and physical examination to estimate initial (or pre-test) probability and then use the results from tests and other diagnostic procedures to modify this probability until the post-test probability is such that the suspected diagnosis is either confirmed or ruled out. When the pre-test probability of disease is high, tests characterized by high specificity will be preferred, in order to confirm the diagnostic suspect. When the pre-test probability of disease is low, a test with high sensitivity is advisable to exclude the hypothetical disease. The above mentioned process of decision making has been transferred to a problem oriented medical record that is currently employed in our Clinic. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> In the last decade, many new rapid diagnostic tests for infectious diseases have been developed. In general, these new tests are developed with the intent to optimize feasibility and population health, not accuracy alone. However, unlike drugs or vaccines, diagnostic tests are evaluated and licensed on the basis of accuracy, not health impact (eg, reduced morbidity or mortality). Thus, these tests are sometimes recommended or scaled up for purposes of improving population health without randomized evidence that they do so. We highlight the importance of randomized trials to evaluate the health impact of novel diagnostics and note that such trials raise distinctive ethical challenges of equipoise, equity, and informed consent. We discuss the distinction between equipoise for patient-important outcomes versus diagnostic accuracy, the equity implications of evaluating health impact of diagnostics under routine conditions, and the importance of offering reasonable choices for informed consent in diagnostic trials. 
<s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Much of clinical research is aimed at assessing causality. However, clinical research can also address the value of new medical tests, which will ultimately be used for screening for risk factors, to diagnose a disease, or to assess prognosis. In order to be able to construct research questions and designs involving these concepts, one must have a working knowledge of this field. In other words, although traditional clinical research designs can be used to assess some of these questions, most of the studies assessing the value of diagnostic testing are more akin to descriptive observational designs, but with the twist that these designs are not aimed to assess causality, but are rather aimed at determining whether a diagnostic test will be useful in clinical practice. This chapter will introduce the various ways of assessing the accuracy of diagnostic tests, which will include discussions of sensitivity, specificity, predictive value, likelihood ratio, and receiver operator characteristic curves. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Sample size calculation in diagnostic studies. Tables of required sample size in different scenarios. How sample size varies with accuracy index and effect size. Help to the clinician when designing ROC diagnostic studies. Objectives: This review provided a conceptual framework of sample size calculations in the studies of diagnostic test accuracy in various conditions and test outcomes. Methods: The formulae of sample size calculations for estimation of adequate sensitivity/specificity, likelihood ratio and AUC as an overall index of accuracy and also for testing in single modality and comparing two diagnostic tasks have been presented for desired confidence interval. Results: The required sample sizes were calculated and tabulated with different levels of accuracies and marginal errors with 95% confidence level for estimating and for various effect sizes with 80% power for purpose of testing as well. The results show how sample size is varied with accuracy index and effect size of interest. Conclusion: This would help the clinicians when designing diagnostic test studies that an adequate sample size is chosen based on statistical principles in order to guarantee the reliability of study. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Discrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions. An important question when setting up a DCE is the size of the sample needed to answer the research question of interest. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of the statistical power of hypothesis tests on the estimated coefficients.
The purpose of this paper is threefold: (1) to provide insight into whether and how researchers have dealt with sample size calculations for healthcare-related DCE studies; (2) to introduce and explain the required sample size for parameter estimates in DCEs; and (3) to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in health care. <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is to determine the sufficient sample sizes that are related with screening and diagnostic studies. Although the formula for sample size calculation is available but concerning majority of the researchers are not mathematicians or statisticians, hence, sample size calculation might not be easy for them. This review paper provides sample size tables with regards to sensitivity and specificity analysis. These tables were derived from formulation of sensitivity and specificity test using Power Analysis and Sample Size (PASS) software based on desired type I error, power and effect size. The approaches on how to use the tables were also discussed. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> The health care system needs to face new and advanced medical technologies that can improve the patients' quality of life by replacing lost or decreased functions. In stroke patients, the disabilities that follow cerebral lesions may impair the mandatory daily activities of an independent life. These activities are dependent mostly on the patient's upper limb function so that they can carry out most of the common activities associated with a normal life. Therefore, an upper limb exoskeleton device for stroke patients can contribute a real improvement of quality of their life. The ethical problems that need to be considered are linked to the correct adjustment of the upper limb skills in order to satisfy the patient's expectations, but within physiological limits. The debate regarding the medical devices dedicated to neurorehabilitation is focused on their ability to be beneficial to the patient's life, keeping away damages, injustice, and risks. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Wearable health monitoring systems have gained considerable interest in recent years owing to their tremendous promise for personal portable health watching and remote medical practices. The sensors with excellent flexibility and stretchability are crucial components that can provide health monitoring systems with the capability of continuously tracking physiological signals of human body without conspicuous uncomfortableness and invasiveness. The signals acquired by these sensors, such as body motion, heart rate, breath, skin temperature and metabolism parameter, are closely associated with personal health conditions. This review attempts to summarize the recent progress in flexible and stretchable sensors, concerning the detected health indicators, sensing mechanisms, functional materials, fabrication strategies, basic and desired features. The potential challenges and future perspectives of wearable health monitoring system are also briefly discussed. 
<s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> This Editorial comment refers to the article “Medical students’ attitude towards artificial intelligence: a multicenter survey,” Pinto Dos Santos D, et al Eur Radiol 2018. • Medical students are not well informed of the potential consequences of AI in radiology. ::: • The fundamental principles of AI—as well as its application in medicine—must be taught in medical schools. ::: • The radiologist specialty must actively reflect on how to validate, approve, and integrate AI algorithms into our clinical practices. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> OBJECTIVE ::: To assist clinicians to make adequate interpretation of scientific evidence from studies that evaluate diagnostic tests in order to allow their rational use in clinical practice. ::: ::: ::: METHODS ::: This is a narrative review focused on the main concepts, study designs, the adequate interpretation of the diagnostic accuracy data, and making inferences about the impact of diagnostic testing in clinical practice. ::: ::: ::: RESULTS ::: Most of the literature that evaluates the performance of diagnostic tests uses cross-sectional design. Randomized clinical trials, in which diagnostic strategies are compared, are scarce. Cross-sectional studies measure diagnostic accuracy outcomes that are considered indirect and insufficient to define the real benefit for patients. Among the accuracy outcomes, the positive and negative likelihood ratios are the most useful for clinical management. Variations in the study's cross-sectional design, which may add bias to the results, as well as other domains that contribute to decreasing the reliability of the findings, are discussed, as well as how to extrapolate such accuracy findings on impact and consequences considered important for the patient. Aspects of costs, time to obtain results, patients' preferences and values should preferably be considered in decision making. ::: ::: ::: CONCLUSION ::: Knowing the methodology of diagnostic accuracy studies is fundamental, but not sufficient, for the rational use of diagnostic tests. There is a need to balance the desirable and undesirable consequences of tests results for the patients in order to favor a rational decision-making approach about which tests should be recommended in clinical practice. <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Abstract The pathological diagnostics of cancer - based on the histological features - is today increasingly completed by molecular profiling at variable depth in an almost evident fashion. Predictive information should cover potential therapeutic targets and/or major resistance mechanisms the nature of which is subject of alteration during the course of the treatment. Mutational profiling recently became technically available by the analysis of circulating free DNA obtained following non-invasive peripheral blood or body fluid sampling. This „liquid biopsy” approach reflects the general status considering the actual tumor burden, irrespective of the intratumoral distribution and anatomical site. However, the dynamics of the liquid compartment relies on tissue-related processes reflected by histological variables. 
The amount and composition of free DNA seems to be influenced by many factors, including the stage and anatomical localization of the cancer, the relative mass of neoplastic subclones, the growth rate, the stromal and inflammatory component, the extent of tumor cell death and necrosis. The histopathological context should be considered also when analysis of cfDNA is about to replace repeated tumor sampling for molecular follow-up. <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Noble metal nanoparticle-based colorimetric sensors have become powerful tools for the detection of different targets with convenient readout. Among the many types of nanomaterials, noble metal nanoparticles exhibit extraordinary optical responses mainly due to their excellent localized surface plasmon resonance (LSPR) properties. The absorption spectrum of the noble metal nanoparticles was mostly in the visible range. This property enables the visual detection of various analytes with the naked eye. Among numerous color change modes, the way that different concentrations of targets represent vivid color changes has been brought to the forefront because the color distinction capability of normal human eyes is usually better than the intensity change capability. We review the state of the art in noble metal nanoparticle-based multicolor colorimetric strategies adopted for visual quantification by the naked eye. These multicolor strategies based on different means of morphology transformation are classified... <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Abstract Upconversion nanoparticle-based lateral flow assays (UCNP-LFAs) have attracted significant attention in point-of-care testing (POCT) applications, due to the long-term photostability and enhanced signal-to-background noise ratio. The existing UCNP-LFAs generally require peripheral equipment for exciting fluorescent signals and reading out fluorescence results, which are generally bulky and expensive. Herein, we developed a miniaturized and portable UCNP-LFA platform, which is composed of a LFA detection system, an UCNP-LFA reader and a smartphone-assisted UCNP-LFA analyzer. The LFA detection system is based on three types of UCNPs for multiplexed detection. The reader has a dimension of 24.0 cm × 9.4 cm × 5.4 cm (L × W × H) and weight of 0.9 kg. The analyzer based on the custom-designed software of a smartphone (termed as UCNP-LFA analyzer) can get the quantitative analysis results in a real-time manner. We demonstrated the universality of this platform by highly sensitive and quantitative detections of several kinds of targets, including small molecule (ochratoxin A, OTA), heavy metal ion (Hg2+), bacteria (salmonella, SE), nucleic acid (hepatitis B virus, HBV) and protein (growth stimulation expressed gene 2, ST-2). Our developed UCNP-LFA platform holds great promise for applications in disease diagnostics, environmental pollution monitoring and food safety at the point of care. <s> BIB019
The current paper presents neither the details of research methodology for diagnostic studies nor the critical appraisal of a paper reporting the performance of a diagnostic test, because these are beyond its aim. Extensive scientific literature exists on both the design of experiments for diagnostic studies [4, BIB003 BIB001 BIB009 ] and the critical evaluation of a diagnostic paper BIB002 [232] BIB006 BIB005 . As a consequence, neither the effect of sample size on the accuracy parameters, nor the a priori computation of the sample size needed to reach the level of significance for a specific research question, nor the a posteriori calculation of the power of the diagnostic test is discussed. Sample size calculation for diagnostic studies is covered in the scientific literature BIB004 BIB011 BIB012 BIB010 , but these approaches must be used with caution: the calculations are sensitive, and input data from one population are not a reliable solution for another population, so the input data for sample size calculation should preferably come from a pilot study. This paper also does not treat how to select a diagnostic test in clinical practice, a topic addressed by evidence-based medicine and clinical decision making BIB007 BIB016 . Health-care practice is a dynamic field and undergoes rapid change driven by the evolution of known diseases, the appearance of new pathologies, the life expectancy of the population, progress in information theory, communication and computer sciences, and the development of new materials and approaches as solutions for medical problems. The concept of personalized medicine changes the delivery of health care: the patient becomes the core of the decisional process, and the applied diagnostic methods and/or treatment closely fit the needs and particularities of the patient. Different diagnostic or monitoring devices such as wearable health monitoring systems BIB014 , liquid biopsy and associated approaches BIB017 BIB018 , wireless ultrasound transducers, and other point-of-care testing (POCT) methods BIB019 are being introduced and need proper analysis and validation. Furthermore, the availability of big data opens a new pathway for analyzing medical data, and artificial intelligence approaches will probably change the practice of imaging diagnosis and monitoring BIB015 . Ethical aspects must be considered BIB008 BIB013 , and valid and reliable methods for the assessment of both old and new diagnostic approaches are required. Space for methodological improvement exists, from the design of experiments to the analysis of experimental data, for both observational and interventional approaches.
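For illustration only (the detailed treatment is left to the cited references), the following minimal sketch shows the kind of precision-based sample size calculation tabulated in BIB010 and BIB012 , using the standard normal-approximation formula for a proportion, inflated by the expected prevalence. The function names and the numerical inputs (sensitivity 0.90, specificity 0.85, prevalence 0.20, precision of plus or minus 0.05) are hypothetical placeholders; as noted above, real inputs should come from a pilot study in the target population.

import math
from statistics import NormalDist

def n_for_sensitivity(sens, prevalence, precision=0.05, alpha=0.05):
    # Subjects needed so the (1 - alpha) CI of sensitivity has half-width `precision`;
    # the diseased-arm count is inflated by the expected disease prevalence.
    z = NormalDist().inv_cdf(1 - alpha / 2)              # 1.96 for alpha = 0.05
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return math.ceil(n_diseased / prevalence)

def n_for_specificity(spec, prevalence, precision=0.05, alpha=0.05):
    # Same calculation for the non-diseased arm, inflated by (1 - prevalence).
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_nondiseased = z ** 2 * spec * (1 - spec) / precision ** 2
    return math.ceil(n_nondiseased / (1 - prevalence))

# Hypothetical pilot estimates: sensitivity 0.90, specificity 0.85, prevalence 0.20
print(n_for_sensitivity(0.90, 0.20))    # about 692 subjects
print(n_for_specificity(0.85, 0.20))    # about 245 subjects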
A review of speech-based bimodal recognition <s> A. Motivation for Bimodal Recognition <s> Oral speech intelligibility tests were conducted with, and without, supplementary visual observation of the speaker's facial and lip movements. The difference between these two conditions was examined as a function of the speech‐to‐noise ratio and of the size of the vocabulary under test. The visual contribution to oral speech intelligibility (relative to its possible contribution) is, to a first approximation, independent of the speech‐to‐noise ratio under test. However, since there is a much greater opportunity for the visual contribution at low speech‐to‐noise ratios, its absolute contribution can be exploited most profitably under these conditions. <s> BIB001 </s> A review of speech-based bimodal recognition <s> A. Motivation for Bimodal Recognition <s> This paper reviews progress in understanding the psychology of lipreading and audio-visual speech perception. It considers four questions. What distinguishes better from poorer lipreaders? What are the effects of introducing a delay between the acoustical and optical speech signals? What have attempts to produce computer animations of talking faces contributed to our understanding of the visual cues that distinguish consonants and vowels? Finally, how should the process of audio-visual integration in speech perception be described; that is, how are the sights and sounds of talking faces represented at their conflux? <s> BIB002
Speech recognition can be used wherever speech-based man-machine communication is appropriate. Speaker recognition has potential application wherever the identity of a person needs to be determined (identification task) or an identity claim needs to be validated (identity verification task). Possible applications of bimodal recognition are: speech transcription; adaptive human-computer interfaces in multimedia computer environments; voice control of office or entertainment equipment; and access control for buildings, computer resources, or information sources. Bimodal recognition tries to emulate the multimodality of human perception. It is known that all sighted people rely, to a varying extent, on lipreading to enhance speech perception or to compensate for the deficiencies of audition BIB002 . Lipreading is particularly beneficial when the listener suffers from impaired hearing or when the acoustic signal is degraded BIB001 , . Sensitivity to speech variability, inadequate recognition accuracy for many potential applications, and susceptibility to impersonation are among the main technical hurdles preventing a widespread adoption of speech-based recognition systems. The rationale for bimodal recognition is to improve recognition performance in terms of accuracy and robustness against speech variability and impersonation. Compared to speech or speaker recognition that uses only one primary source, recognition based on information extracted from two primary sources can be made more robust to impersonation and to speech variability, which has a different effect on each modality. Automatic person recognition based on still two-dimensional (2-D) facial images is vulnerable to impersonation attempts using photographs or by professional mimics wearing appropriate disguise. In contrast to static personal characteristics, dynamic characteristics such as visual speech are difficult to mimic or reproduce artificially. Hence, dynamic characteristics offer a higher potential for protection against impersonation than static characteristics. Given the potential gains promised by the combination of modalities, multimodal systems have been identified, by many experts in spoken language systems , as a key area which requires basic research in order to catalyze a widespread deployment of spoken language systems in the "real world."
A review of speech-based bimodal recognition <s> B. Outline of the Review <s> Acoustic automatic speech recognition (ASR) systems tend to perform poorly with noisy speech. Unfortunately, most application environments contain noise from machines, vehicles, others talking, typing, television, sound systems, etc. In addition, system performance is highly dependent on the particular microphone type and its placement, but most people find head-mounted microphones uncomfortable for extended use and they are impractical in many situations. Fortunately, the use of visual speech (lipreading or, more properly, speechreading) information has been shown to improve the performance of acoustic ASR systems especially in noise. This paper outlines the history of automatic lipreading research and describes the authors current efforts. <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Outline of the Review <s> We give an overview of speechreading systems from the perspective of the face and gesture recognition community, paying particular attention to approaches to key design decisions and the benefits and drawbacks. We discuss the central issue of sensory integration how much processing of the acoustic and the visual information should go on before integration how should it be integrated. We describe several possible practical applications, and conclude with a list of important outstanding problems that seem amenable to attack using techniques developed in the face and gesture recognition community. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Outline of the Review <s> This paper reviews key attributes of neural processing essential to intelligent multimedia processing (IMP). The objective is to show why neural networks (NNs) are a core technology for the following multimedia functionalities: (1) efficient representations for audio/visual information, (2) detection and classification techniques, (3) fusion of multimodal signals, and (4) multimodal conversion and synchronization. It also demonstrates how the adaptive NN technology presents a unified solution to a broad spectrum of multimedia applications. As substantiating evidence, representative examples where NNs are successfully applied to IMP applications are highlighted. The examples cover a broad range, including image visualization, tracking of moving objects, image/video segmentation, texture classification, face-object detection/recognition, audio classification, multimodal recognition, and multimodal lip reading. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Outline of the Review <s> We review recent research that examines audio-visual integration in multimodal communication. The topics include bimodality in human speech, human and automated lip reading, facial animation, lip synchronization, joint audio-video coding, and bimodal speaker verification. We also study the enabling technologies for these research topics, including automatic facial-feature tracking and audio-to-visual mapping. Recent progress in audio-visual research shows that joint processing of audio and video provides advantages that are not available when the audio and video are processed independently. <s> BIB004
This review complements earlier surveys on related themes BIB004 , BIB003 , BIB001 , BIB002 . The history of automatic lipreading research is outlined in BIB001 , which does not cover audio processing, sensor fusion, or bimodal speaker recognition. The overview given in BIB002 covers speechreading (it pays particular attention to visual speech processing), but it does not cover audio processing and bimodal speaker recognition. Reference BIB003 centers on the main attributes of neural networks as a core technology for multimedia applications which require automatic extraction, recognition, interpretation, and interactions of multimedia signals. Reference BIB004 covers the wider topic of audio-visual integration in multimodal communication encompassing recognition, synthesis, and compression. This paper focuses on bimodal speech and speaker recognition. Given the multidisciplinary nature of bimodal recognition, the review is broad-based. It is intended to act as a shop-window for techniques that can be used in bimodal recognition. However, paper length restrictions preclude an exhaustive coverage of the field. Fig. 1 shows a simplified architecture commonly used for bimodal recognition. [Fig. 1 caption: In practice, similar speech processing techniques are used for speech recognition and speaker recognition. Front-end processing converts raw speech into a high-level representation, which ideally retains only essential information for pattern categorization. The latter is performed by a classifier, which often consists of models of pattern distribution, coupled to a decision procedure. The block generically labeled "constraints" typically represents domain knowledge, such as syntactic or semantic knowledge, which may be applied during the recognition. Sequential or tree configurations of modality-specific classifiers are possible alternatives to the decision fusion of parallel classifiers shown in (b). Audio-visual fusion can also occur at a level between feature and decision levels.] The structure of the review is a direct mapping from the building blocks of this architecture. The paper is organized as follows. First, the processing techniques are discussed. Thereafter, bimodal recognition performance is reviewed. Sample applications and avenues for further work are then suggested. Finally, concluding remarks are given.
A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The results of a study aimed at finding the importance of pitch for automatic speaker recognition are presented. Pitch contours were obtained for 60 utterances, each approximately 2‐sec in duration, of 10 female speakers. A data‐reduction procedure based on the Karhunen‐Loeve representation was found effective in representing the pitch information in each contour in a 20‐dimensional space. The data were divided into two portions; one part was used to design the speaker recognition system, while the other part was used to test the effectiveness of the design. The 20‐dimensional vectors representing the pitch contours of the design set were linearly transformed so that the ratio of interspeaker to intraspeaker variance in the transformed space was maximum. A reference utterance was formed for each speaker by averaging the transformed vectors of that speaker. The test utterance was assigned to the speaker corresponding to the reference utterance with the smallest Euclidean distance in the transformed space. ... <s> BIB001 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> An important problem in speech processing is to detect the presence of speech in a background of noise. This problem is often referred to as the endpoint location problem. By accurately detecting the beginning and end of an utterance, the amount of processing of speech data can be kept to a minimum. The algorithm proposed for locating the endpoints of an utterance is based on two measures of the signal, zero crossing rate and energy. The algorithm is inherently capable of performing correctly in any reasonable acoustic environment in which the signal-to-noise ratio is on the order of 30 dB or better. The algorithm has been tested over a variety of recording conditions and for a large number of speakers and has been found to perform well across all tested conditions. <s> BIB002 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> This paper presents several digital signal processing methods for representing speech. Included among the representations are simple waveform coding methods; time domain techniques; frequency domain representations; nonlinear or homomorphic methods; and finaIly linear predictive coding techniques. The advantages and disadvantages of each of these representations for various speech processing applications are discussed. <s> BIB003 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Several parametric representations of the acoustic signal were compared with regard to word recognition performance in a syllable-oriented continuous speech recognition system. The vocabulary included many phonetically similar monosyllabic words, therefore the emphasis was on the ability to retain phonetically significant acoustic information in the face of syntactic and duration variations. For each parameter set (based on a mel-frequency cepstrum, a linear frequency cepstrum, a linear prediction cepstrum, a linear prediction spectrum, or a set of reflection coefficients), word templates were generated using an efficient dynamic warping method, and test data were time registered with the templates. A set of ten mel-frequency cepstrum coefficients computed every 6.4 ms resulted in the best performance, namely 96.5 percent and 95.0 percent recognition with each of two speakers. 
The superior performance of the mel-frequency cepstrum coefficients may be attributed to the fact that they better represent the perceptually relevant aspects of the short-term speech spectrum. <s> BIB004 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Accurate location of the endpoints of spoken words and phrases is important for reliable and robust speech recognition. The endpoint detection problem is fairly straightforward for high-level speech signals in low-level stationary noise environments (e.g., signal-to-noise ratios greater than 30-dB rms). However, this problem becomes considerably more difficult when either the speech signals are too low in level (relative to the background noise), or when the background noise becomes highly nonstationary. Such conditions are often encountered in the switched telephone network when the limitation on using local dialed-up lines is removed. In such cases the background noise is often highly variable in both level and spectral content because of transmission line characteristics, transients and tones from the line and/or from signal generators, etc. Conventional speech endpoint detectors have been shown to perform very poorly (on the order of 50-percent word detection) under these conditions. In this paper we present an improved word-detection algorithm, which can incorporate both vocabulary (syntactic) and task (semantic) information, leading to word-detection accuracies close to 100 percent for isolated digit detection over a wide range of telephone transmission conditions. <s> BIB005 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> A novel speech analysis method which uses several established psychoacoustic concepts, the perceptually based linear predictive analysis (PLP), models the auditory spectrum by the spectrum of the low-order all-pole model. The auditory spectrum is derived from the speech waveform by critical-band filtering, equal-loudness curve pre-emphasis, and intensity-loudness root compression. We demonstrate through analysis of both synthetic and natural speech that psychoacoustic concepts of spectral auditory integration in vowel perception, namely the F1, F2' concept of Carlson and Fant and the 3.5 Bark auditory integration concept of Chistovich, are well modeled by the PLP method. A complete speech analysis-synthesis system based on the PLP method is also described in the paper. <s> BIB006 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The use of instantaneous and transitional spectral representations of spoken utterances for speaker recognition is investigated. Linear-predictive-coding (LPC)-derived cepstral coefficients are used to represent instantaneous spectral information, and best linear fits of each cepstral coefficient over a specified time window are used to represent transitional information. An evaluation has been carried out using a database of isolated digit utterances over dialed-up telephone lines by 10 talkers. Two vector quantization (VQ) codebooks, instantaneous and transitional, were constructed from each speaker's training utterances. The experimental results show that the instantaneous and transitional representations are relatively uncorrelated, thus providing complementary information for speaker recognition. A rectangular window of approximately 100 ms duration provides an effective estimate of the transitional spectral features for speaker recognition. 
Also, simple transmission channel variations are shown to affect both the instantaneous spectral representations and the corresponding recognition performance significantly, while the transitional representations and performance are relatively resistant. > <s> BIB007 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Two acoustic representations, integrated Mel-scale representation with LDA (IMELDA) and perceptual linear prediction-root power sums (PLP-RPS), both of which have given good results in speech recognition tests, are explored. IMELDA is examined in the context of some related representations. Results of speaker-dependent and independent tests with digits and the alphabet suggest that the optimum PLP order is high and that the effectiveness of PLP-RPS stems not from its modeling of perceptual properties but from its approximation to a desirable statistical property attained exactly by IMELDA. A combined PLP-IMELDA representation is found to be generally more effective than PLP-RPS, but an IMELDA representation derived directly from a filter-bank provides similar results to PLP-IMELDA at a lower computational cost. > <s> BIB008 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The paper describes a voice activity detector (VAD) that can operate reliably in SNRs down to 0 dB and detect most speech at −5 dB. The detector applies a least-squares periodicity estimator to the input signal, and triggers when a significant amount of periodicity is found. It does not aim to find the exact talkspurt boundaries and, consequently, is most suited to speech-logging applications where it is easy to include a small margin to allow for any missed speech. The paper discusses the problem of false triggering on nonspeech periodic signals and shows how robustness to these signals can be achieved with suitable preprocessing and postprocessing. <s> BIB009 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Two models, the temporal decomposition and the multivariate linear prediction, of the spectral evolution of speech signals capable of processing some aspects of the speech variability are presented. A series of acoustic-phonetic decoding experiments, characterized by the use of spectral targets of the temporal decomposition techniques and a speaker-dependent mode, gives good results compared to a reference system (i.e., 70% vs. 60% for the first choice). Using the original method developed by Laforia, a series of text-independent speaker recognition experiments, characterized by a long-term multivariate auto-regressive modelization, gives first-rate results (i.e., 98.4% recognition rate for 420 speakers) without using more than one sentence. Taking into account the interpretation of the models, these results show how interesting the cinematic models are for obtaining a reduced variability of the speech signal representation. > <s> BIB010 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> 1. Fundamentals of Speech Recognition. 2. The Speech Signal: Production, Perception, and Acoustic-Phonetic Characterization. 3. Signal Processing and Analysis Methods for Speech Recognition. 4. Pattern Comparison Techniques. 5. Speech Recognition System Design and Implementation Issues. 6. Theory and Implementation of Hidden Markov Models. 7. Speech Recognition Based on Connected Word Models. 8. Large Vocabulary Continuous Speech Recognition. 9. 
Task-Oriented Applications of Automatic Speech Recognition. <s> BIB011 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> This paper describes the results of experiments to investigate the integration of MLP (multilayer perceptron) and HMM (hidden Markov modeling) techniques in the task of fixed-text speaker verification. A large speech database collected over the telephone network was used to evaluate the algorithm. Speech data for each speaker was automatically segmented using a supervised HMM-Viterbi decoding scheme and an MLP was trained with this segmented data. The output scores of the MLP, after appropriate scaling were used as observation probabilities in a Viterbi realignment and scoring step. Intra-speaker and inter-speaker scores were generated by training the HMM-MLP system for each speaker and testing against speech data for the same speaker and against all other speakers, who shared utterances of identical text. Our results show that MLP classifiers combined with HMMs improve speaker discrimination by 20% over conventional HMM algorithms for speaker verification. > <s> BIB012 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> We describe current approaches to text-independent speaker identification based on probabilistic modeling techniques. The probabilistic approaches have largely supplanted methods based on comparisons of long-term feature averages. The probabilistic approaches have an important and basic dichotomy into nonparametric and parametric probability models. Nonparametric models have the advantage of being potentially more accurate models (though possibly more fragile) while parametric models that offer computational efficiencies and the ability to characterize the effects of the environment by the effects on the parameters. A robust speaker-identification system is presented that was able to deal with various forms of anomalies that are localized in time, such as spurious noise events and crosstalk. It is based on a segmental approach in which normalized segment scores formed the basic input for a variety of robust procedures. Experimental results are presented, illustrating the advantages and disadvantages of the different procedures. We show the role that cross-validation can play in determining how to weight the different sources of information when combining them into a single score. Finally we explore a Bayesian approach to measuring confidence in the decisions made, which enabled us to reject the consideration of certain tests in order to achieve an improved, predicted performance level on the tests that were retained. > <s> BIB013 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Various linear predictive (LP) analysis methods are studied and compared from the points of view of robustness to noise and of application to speaker identification. The key to the success of the LP techniques is in separating the vocal tract information from the pitch information present in a speech signal even under noisy conditions. In addition to considering the conventional, one-shot weighted least-squares methods, the authors propose three other approaches with the above point as a motivation. The first is an iterative approach that leads to the weighted least absolute value solution. The second is an extension of the one-shot least-squares approach and achieves an iterative update of the weights.
The update is a function of the residual and is based on minimizing a Mahalanobis distance. Third, the weighted total least-squares formulation is considered. A study of the deviations in the LP parameters is done when noise (white Gaussian and impulsive) is added to the speech. It is revealed that the most robust method depends on the type of noise. Closed-set speaker identification experiments with 20 speakers are conducted using a vector quantizer classifier trained on clean speech. The relative performance of the various LP approaches depends on the type of speech material used for testing. > <s> BIB014 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> This chapter overviews recent advances in speaker recognition technology. The first part of the chapter discusses general topics and issues. Speaker recognition can be divided in two ways: (a) speaker identification and verification, and (b) text-dependent and text-independent methods. The second part of the paper is devoted to discussion of more specific topics of recent interest which have led to interesting new approaches and techniques. They include parameter/distance normalization techniques, model adaptation techniques, VQ-/ergodic-HMM-based text-independent recognition methods, and a text-prompted recognition method. The chapter concludes with a short discussion assessing the current status and possibilities for the future. <s> BIB015 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> A tutorial on the design and development of automatic speaker-recognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person's claimed identity. Speech processing and the basic components of automatic speaker-recognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared. <s> BIB016 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques.
The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field. <s> BIB017
The common front-end processes for speech-based recognition are signal conditioning, segmentation, and feature extraction. Signal conditioning typically takes the form of noise removal. Segmentation is concerned with the demarcation of signal portions conveying relevant acoustic or visual speech. Feature extraction generally acts as a dimensionality reduction procedure which, ideally, retains information possessing high discrimination power, high stability, and also for speaker recognition, good resistance to mimicry. Dimensionality reduction may mitigate the "curse of dimensionality." The latter relates to the relation between the dimension of the input pattern space and the number of classifier parameters, which influences the amount of data required for classifier training BIB017 . To obtain reliable estimates of classifier parameters, training data volume should increase with the dimension of the input space. The reliability of parameter estimates may affect classification accuracy. Segmentation and feature extraction can have an adverse effect on recognition. They may retain unwanted information or inadvertently discard important information for recognition. Also, the extracted features may fail to match the assumptions incorporated in the classifier. For example, some classifiers minimize their parameter-estimation requirements by assuming that features are uncorrelated. Accurate segmentation and optimal feature extraction are challenging. A. Acoustic Speech Processing 1) Segmentation: Separation of speech from nonspeech material often employs energy thresholding BIB005 , zero-crossing rate, and periodicity measures BIB002 , BIB009 . Often, several information sources are used jointly , BIB002 . In addition to heuristic decision procedures, conventional pattern recognition techniques have also been used for speech segmentation. This is typified by classification of speech events, based on vector-quantization (VQ) or hidden Markov models (HMMs) BIB012 . 2) Feature Extraction: Many speech feature extraction techniques aim at obtaining a parametric representation of speech, based on models which often embed knowledge about speech production or perception by humans BIB011 . a) Speech production model: The human vocal apparatus is often modeled as a time-varying filter excited by a wide-band signal; this model is known as a source-filter or excitation-modulation model BIB003 . The time-varying filter represents the acoustic transmission characteristics of the vocal tract and nasal cavity, together with the spectral characteristics of glottal pulse shape and lip radiation. Most acoustic speech models assume that the excitation emanates from the lower end of the vocal tract. Such models may be unsuitable for speech sounds, such as fricatives, which result from excitation that occurs somewhere else in the vocal tract BIB016 . b) Basic features: Often, acoustic speech features are short-term spectral representations. For recognition tasks, parameterizations of the vocal tract transfer function are invariably preferred to excitation characteristics, such as pitch and intensity. However, these discarded excitation parameters may contain valuable information. Cepstral features are very widely used. The cepstrum is the discrete cosine transform (DCT) of the logarithm of the short-term spectrum. The DCT yields virtually uncorrelated features, and this may allow a reduction of the parameter count for the classifier. For example, diagonal covariance matrices may be used instead of full matrices. 
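For concreteness, the sketch below illustrates frame-level endpoint detection of the kind described above, combining short-time energy thresholding with a zero-crossing-rate check, in the spirit of BIB002 and BIB005 . It is a minimal illustration rather than a reference implementation: the function name detect_speech_frames, the 20 ms frame length, the 10 dB energy margin above the estimated noise floor, and the zero-crossing ceiling are all assumed values chosen for the example; practical detectors also apply hangover smoothing and adapt the thresholds to the channel.

import numpy as np

def detect_speech_frames(x, sr, frame_ms=20, energy_margin_db=10.0, zcr_max=0.25):
    # Return a boolean mask marking frames judged to contain speech.
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Short-time log energy per frame.
    energy_db = 10.0 * np.log10(np.sum(frames ** 2, axis=1) + 1e-12)

    # Zero-crossing rate per frame (fraction of adjacent-sample sign changes).
    signs = np.sign(frames)
    zcr = np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

    # Estimate the noise floor from the quietest frames, then threshold:
    # speech frames lie well above the noise floor and are not noise-like in ZCR.
    noise_floor = np.percentile(energy_db, 10)
    return (energy_db > noise_floor + energy_margin_db) & (zcr < zcr_max)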
The DCT also packs most of the acoustic information into the low-order features, hence allowing a reduction of the input space dimension. The cepstrum can be obtained through linear predictive coding (LPC) analysis BIB014 or Fourier transformation BIB013. Variants of the standard cepstrum include the popular mel-warped cepstrum, or mel frequency cepstral coefficients (MFCCs) BIB004 (see Fig. 2), and the perceptual linear predictive (PLP) cepstrum BIB006. In short-term spectral estimation, the speech signal is first divided into blocks of samples called frames. A windowing function, such as the Hamming window, is usually applied to each speech frame before the short-term log-power spectrum is computed. In the case of MFCCs, the spectrum is smoothed, typically by a bank of triangular filters whose passbands are laid out on a frequency scale known as the mel scale. The latter is approximately linear below 1 kHz and logarithmic above 1 kHz; the mel scale effectively reduces the contribution of higher frequencies to the recognition. Finally, a DCT yields the MFCCs. By removing the cepstral mean, MFCCs can be made fairly insensitive to time-invariant distortion introduced by the communication channel. In addition, given the low cross correlation of MFCCs, their covariance can be modeled with a diagonal matrix. MFCCs are notable for their good performance in both speech and speaker recognition.

c) Derived features: High-level features may be obtained from a temporal sequence of basic feature vectors or from the statistical distribution of the pattern space spanned by basic features. Such derived features are typified by first-order or higher order dynamic (also known as transitional) features, such as delta or delta-delta cepstral coefficients BIB007, and by statistical dynamic features such as the multivariate autoregressive features proposed in BIB010. The delta cepstrum is usually computed by applying a linear regression over the neighborhood of the current cepstral vector; the regression typically spans approximately 100 ms. The use of delta features mitigates the unsuitability of the assumption of temporal statistical independence often made in classifiers. Other high-level features are the long-term spectral mean, variance or standard deviation, and the covariance of basic features BIB015. Some high-level features aim at reducing dimensionality through a transformation that produces statistically orthogonal features and packs most of the variance into few features. Common transforms are based on principal component analysis (PCA), statistical discriminant analysis optimizing the F-ratio such as linear discriminant analysis (LDA) BIB001, and the integrated mel-scale representation with LDA (IMELDA) BIB008. The latter is LDA applied to static spectral information, possibly combined with dynamic spectral information, output by a mel-scale filter bank. Composite features are sometimes generated by simple concatenation of different types of features.
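As a concrete illustration of the MFCC pipeline and the delta-cepstrum regression just described, the following is a minimal sketch in Python (NumPy/SciPy). The frame length, filter count, number of retained coefficients, and function names are illustrative assumptions rather than values prescribed by any system reviewed here.

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sample_rate):
    """Triangular filters spaced on the mel scale (linear below ~1 kHz, log above)."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    return fbank

def mfcc(signal, sample_rate=16000, frame_len=400, frame_shift=160,
         n_filters=26, n_ceps=13):
    """Frame -> Hamming window -> power spectrum -> mel filterbank -> log -> DCT,
    followed by cepstral mean subtraction."""
    n_frames = 1 + (len(signal) - frame_len) // frame_shift
    window = np.hamming(frame_len)
    fbank = mel_filterbank(n_filters, frame_len, sample_rate)
    feats = []
    for t in range(n_frames):
        frame = signal[t * frame_shift: t * frame_shift + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        log_energies = np.log(fbank @ power + 1e-10)
        feats.append(dct(log_energies, type=2, norm='ortho')[:n_ceps])
    feats = np.array(feats)
    return feats - feats.mean(axis=0)          # cepstral mean subtraction

def delta(feats, half_span=5):
    """Delta coefficients: linear regression over a window spanning roughly 100 ms
    (here +/- 5 frames at a 10 ms frame shift)."""
    padded = np.pad(feats, ((half_span, half_span), (0, 0)), mode='edge')
    weights = np.arange(-half_span, half_span + 1)
    denom = np.sum(weights ** 2)
    return np.array([padded[t:t + 2 * half_span + 1].T @ weights / denom
                     for t in range(len(feats))])
```

A full front end would typically concatenate the static MFCCs with their delta (and delta-delta) coefficients frame by frame before passing them to the classifier.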
B. Visual Speech Processing
1) Segmentation: Visual speech requires both spatial and temporal segmentation. Temporal endpoints may be derived from the acoustic signal endpoints, or computed after spatial segmentation in the visual domain. Many spatial segmentation techniques impose restrictive assumptions, or rely on segmentation parameters tuned for a specific data set. As a result, robust location of the face or its constituents in unconstrained scenes is beyond the capability of most current techniques. At times, the spatial segmentation task is eased artificially through the use of lipstick or special reflective markers, or by discarding most facial information and capturing images of the mouth only.

Face segmentation relies on image attributes related to facial surface properties, such as brightness, texture, and color BIB015, BIB011, BIB018, possibly accompanied by their dynamic characteristics. Face segmentation techniques can be grouped into the broad categories of intra-image analysis, inter-image analysis, or a combination of the two. Intra-image approaches may be subdivided into conventional, connectionist, symbolic, and hybrid methods. Conventional methods include template-based techniques BIB024, signature-based techniques BIB007, edge or contour following BIB002, and symmetry detection BIB004. Connectionist methods are built around artificial neural networks such as radial basis functions, self-organizing neural networks, and (most frequently) multilayer perceptrons (MLPs). Symbolic methods are often based on a knowledge-based system BIB008. Hybrid methods combining the above techniques are also available BIB009. Conventional and symbolic methods tend to perform poorly in the presence of facial image variation. In comparison, when sufficient representative training data is available, connectionist methods may display superior robustness to changes in illumination and to geometric transformations; this is due to the ability of neural networks to learn without relying on explicit assumptions about underlying data models.

Face segmentation often exploits heuristics about facial shape, configuration, and photometric characteristics; implicit or explicit models of the head or facial components are generally used BIB005, BIB016. The mouth is commonly modeled by deformable templates, dynamic contour models BIB015, BIB019, BIB012, or statistical models of shape and brightness. The segmentation then takes the form of optimization of the fit between the model and the image, typically using numerical optimization techniques such as steepest descent, simulated annealing, or genetic algorithms. A downside of approaches based on iterative search is that speedy and accurate segmentation requires initialization of the model position and shape relatively close to the target mouth. In addition, segmentation based on such models is usually sensitive to facial hair, facial pose, illumination, and the visibility of the tongue or teeth. Some approaches for enhancing the robustness of lip tracking are proposed in BIB016 and BIB021. The approach described in BIB016 incorporates adaptive modeling of image characteristics, anchored on Gaussian mixture models (GMMs) of the color and geometry of the mouth and face. The method in BIB021 enhances the robustness of the Kanade-Lucas-Tomasi tracker by embedding heuristics about facial characteristics.
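As a rough illustration of color-based mouth segmentation in the spirit of the GMM color modeling mentioned above, the sketch below scores each pixel by a lip versus non-lip log-likelihood ratio over chromatic values. The choice of normalized red/green chromaticity, the use of scikit-learn for the GMMs, and all parameter and function names are assumptions made for illustration, not the method of any particular cited system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def chromaticity(pixels):
    """Normalized r/g chromaticity, which discards much of the brightness variation."""
    rgb = pixels.reshape(-1, 3).astype(float) + 1e-6
    return rgb[:, :2] / rgb.sum(axis=1, keepdims=True)

def fit_color_gmm(sample_pixels, n_components=3):
    """Fit a GMM to chromatic values sampled from labelled regions (lip or non-lip)."""
    return GaussianMixture(n_components=n_components,
                           covariance_type='full').fit(chromaticity(sample_pixels))

def lip_likelihood_map(image, lip_gmm, nonlip_gmm):
    """Per-pixel log-likelihood ratio of the lip versus non-lip colour models."""
    feats = chromaticity(image)
    ratio = lip_gmm.score_samples(feats) - nonlip_gmm.score_samples(feats)
    return ratio.reshape(image.shape[:2])

# A mouth region of interest can then be taken as, e.g., the largest connected
# component where the likelihood ratio exceeds a threshold, optionally refined
# by a contour or template model as discussed above.
```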
2) Feature Extraction: Although raw pixel data may be used directly by a classifier BIB003, feature extraction is often applied. Despite assertions that much of the lipreading information is conveyed dynamically, features relating to the static configuration of the visible articulators are fairly popular. Depending on the adjacency and area coverage of the pixels used during feature extraction, approaches for visual speech feature extraction may be grouped into mouth-window methods and landmark methods. These two approaches are sometimes used conjunctively BIB015.

a) Mouth-window methods: These methods extract features from all pixels in a window covering the mouth region. Examples of such methods are: binarization of image pixel intensities, aggregation of pixel intensity differences between image frames BIB001, computation of the mean pixel luminance of the oral area BIB010, 2-D Fourier analysis, DCT BIB022, the discrete wavelet transform (DWT) BIB022, PCA ("eigenlips"), LDA, and nonlinear image decomposition based on the "sieve" algorithm BIB013.

b) Landmark methods: In landmark methods, a group of key points is identified in the oral area. Features extracted from these key points may be grouped into three main subgroups: 1) kinematic features; 2) photometric features; and 3) geometric features. Examples of kinematic features are velocities of key points BIB006. Photometric features may take the form of the intensity and temporal intensity gradient of key points BIB006. Typical geometric features are the width, height, area, and perimeter of the oral cavity, and distances or angles between key points located on the lip margins, mouth corners, or jaw BIB014, BIB006. Spectral encoding of lip shape is used in BIB020, where it is shown to yield compact feature vectors which are fairly insensitive to a reduction of the video frame rate. Geometric and photometric features are sometimes used together BIB023, because, although shape features are less sensitive to lighting conditions than photometric features, they discard most of the information conveyed by the visibility of the tongue and teeth.

c) Evaluation of feature types: Reported investigations show no significant difference in speech classification accuracy obtained from raw pixel intensities, PCA, and LDA features. A study of mouth-window dynamic features for visual speech recognition found that optical-flow features are outperformed by the difference between successive image frames BIB017; it was also noted that local low-pass filtering of images yields better accuracy than PCA BIB017. In BIB022, no significant difference in recognition accuracy was observed between DCT, PCA, and DWT features. Compared to landmark features, mouth-window features may display higher sensitivity to changes of lighting and to the spatial or optical settings of the camera. Moreover, pixel-based features may be afflicted by the curse of dimensionality, and, although pixel intensities capture more information than contour-based features, pixel data may contain many irrelevant details.
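The mouth-window and landmark feature families described above can be illustrated with a short sketch: PCA over raw mouth-window pixels ("eigenlips") and a few simple geometric measurements of a lip contour. The sketch assumes pre-cropped, scale- and brightness-normalized mouth images and an already-extracted contour; scikit-learn and NumPy are assumed, and the dimensions and helper names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def eigenlip_features(mouth_windows, n_components=20):
    """Mouth-window ("eigenlips") features: PCA over raw pixel intensities.

    mouth_windows: array of shape (n_frames, height, width) of cropped,
    normalized mouth images. In a real system the PCA basis would be learned
    on training data and then reused for test frames.
    """
    X = mouth_windows.reshape(len(mouth_windows), -1).astype(float)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(X), pca      # low-dimensional features and the basis

def geometric_features(lip_contour):
    """Simple landmark-style shape features from a closed lip contour (N x 2 points)."""
    x, y = lip_contour[:, 0], lip_contour[:, 1]
    width = x.max() - x.min()
    height = y.max() - y.min()
    # Shoelace formula for the area enclosed by the contour.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: sum of segment lengths around the closed contour.
    closed_diffs = np.diff(lip_contour, axis=0, append=lip_contour[:1])
    perimeter = np.sum(np.linalg.norm(closed_diffs, axis=1))
    return np.array([width, height, area, perimeter])
```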
A. Parametric Models
The static characteristics of voice are sometimes modeled by single-mode Gaussian probability density functions (pdfs) BIB003. However, the most popular static models are multimode mixtures of multivariate Gaussians BIB004, commonly known as Gaussian mixture models (GMMs). HMMs are widely used as models of both static and dynamic characteristics of voice BIB005, BIB001, BIB002.
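Before these models are described in detail below, the following minimal sketch illustrates how per-speaker GMMs are typically used as static voice models for closed-set, text-independent speaker identification. It assumes scikit-learn and MFCC-like feature matrices as input; the mixture order and function names are illustrative assumptions.

```python
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=32):
    """Fit one GMM (diagonal covariances) per enrolled speaker on that speaker's
    training feature vectors, e.g. MFCC matrices of shape (n_frames, n_dims)."""
    return {spk: GaussianMixture(n_components=n_components,
                                 covariance_type='diag').fit(feats)
            for spk, feats in features_by_speaker.items()}

def identify_speaker(test_features, speaker_models):
    """Closed-set identification: choose the model with the highest average
    per-frame log-likelihood of the test utterance."""
    scores = {spk: gmm.score(test_features) for spk, gmm in speaker_models.items()}
    return max(scores, key=scores.get)
```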
1) Gaussian Mixture Models (GMMs): A GMM represents a probability distribution as a weighted aggregation of Gaussians,

$p(\mathbf{x}) = \sum_{i=1}^{M} w_i \, \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i)$

where $\mathbf{x}$ is the "observation" (usually corresponding to a feature vector) and the GMM parameters are the mixture weights ($w_i$), the number of mixture components ($M$), and the mean ($\boldsymbol{\mu}_i$) and covariance matrix ($\boldsymbol{\Sigma}_i$) of each component. Diagonal covariance matrices are often used for features, such as MFCCs, which are characterized by low cross correlation. GMM parameters are often estimated using the expectation-maximization (EM) algorithm BIB003. Being iterative, this algorithm is sensitive to initial conditions; it may also fail to converge if the norm of a covariance matrix approaches zero.

2) Hidden Markov Models (HMMs): HMMs are generative data models which are well suited to the statistical modeling and recognition of sequential data, such as speech. An HMM embeds two stochastic components (see Fig. 3). One component is a Markov chain of hidden states, which models the sequential evolution of observations; the hidden states are not directly observable. The other component is a set of probability distributions of observations. Each state has one distribution, which can be represented by a discrete or a continuous function; this divides HMMs into discrete-density HMMs (DHMMs) and continuous-density HMMs (CHMMs), respectively. In early recognition systems, continuous-valued speech features were vector quantized and each resulting VQ codebook index was then input to a DHMM. A key limitation of this approach is the quantization noise introduced by the vector quantizer and the coarseness of the similarity measures. Most modern systems use CHMMs, in which each state is modeled as a GMM; in other words, a GMM is equivalent to a single-state HMM. Although, in theory, Gaussian mixtures can represent complex pdfs, this may not be so in practice. Hence, HMMs sometimes incorporate MLPs, which estimate the state observation probabilities BIB006, BIB004.

The most common HMM learning rule is the Baum-Welch algorithm, an iterative maximum-likelihood estimation of the state and state-transition parameters BIB001. Owing to the iterative nature of the learning, the estimated parameters depend on their initial settings. HMMs are often trained as generative models of within-class data. Such HMMs do not capture discriminating information explicitly and hence may give suboptimal recognition accuracy; this has spurred research into discriminative training of HMMs and other generative models BIB002, BIB005. Viterbi decoding BIB001 is typically used for efficient exploration of possible state sequences during recognition; it calculates the likelihood that the observed sequence was generated by the HMM. The Viterbi algorithm is essentially a dynamic programming method which identifies the state sequence that maximizes the probability of occurrence of an observation sequence. In practice, to minimize the number of parameters, HMMs are confined to relatively small or constrained state spaces. Practical HMMs are typically first-order Markov models; such models may be ill-suited for higher order dynamics, and in the case of speech the required sequential dependence may extend across several states. HMMs for speech or speaker recognition are typically configured as left-right models (see Fig. 3). The use of state-specific GMMs increases the number of classifier parameters to estimate. To reduce the parameter count, sharing (commonly referred to as "tying") of parameters is often used.
Typically, state parameters are tied across HMMs which possess some states deemed similar.
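To make the decoding step concrete, below is a minimal log-domain Viterbi sketch for a single HMM, written in Python/NumPy. It assumes the per-state observation log-likelihoods (e.g., from the state GMMs) have already been computed; it is an illustrative sketch rather than any particular toolkit's implementation.

```python
import numpy as np

def viterbi(log_obs, log_trans, log_init):
    """Most likely state sequence and its log-probability for one HMM.

    log_obs:   (T, N) log-likelihoods of each observation under each state's GMM
    log_trans: (N, N) log state-transition probabilities (row = source state)
    log_init:  (N,)   log initial-state probabilities
    """
    T, N = log_obs.shape
    delta = log_init + log_obs[0]              # best log-score ending in each state
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans    # (N, N): previous state -> current state
        backptr[t] = np.argmax(scores, axis=0)
        delta = scores[backptr[t], np.arange(N)] + log_obs[t]
    # Trace back the best state path.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path, float(np.max(delta))

# For isolated-word recognition, each word HMM is scored on the observation
# sequence and the word whose model yields the highest log-probability is chosen.
```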
B. Nonparametric Models

1) Reference-Pattern Models:
These models take the form of a store of reference patterns representing the voice-pattern space. To counter temporal misalignments arising, for example, from changes in speaking rate, dynamic time warping (DTW) is often applied during pattern matching. The reference patterns may be taken directly from the original pattern space; this approach is used in k-nearest-neighbor (kNN) classifiers BIB007 . Alternatively, the reference patterns may represent a compressed pattern space, typically obtained through vector averaging. Compressed-pattern-space approaches aim to reduce the storage and computational costs associated with an uncompressed space. They include VQ models BIB001 , BIB009 and the template models used in minimum distance classifiers BIB002 . A conventional VQ model consists of a collection (codebook) of feature-vector centroids. In effect, VQ uses multiple static templates and hence discards potentially useful temporal information. The extension of such memoryless VQ models into models that possess inherent memory has been proposed in the form of matrix quantization and trellis VQ models BIB003 . 2) Connectionist Models: These consist of one or several neural networks. The most popular models are of the memoryless type, such as MLPs , radial basis functions , neural tree networks BIB008 , Kohonen's self-organizing maps BIB010 , and learning vector quantization . The main connectionist models capable of capturing temporal information are time-delay neural networks BIB006 and recurrent neural networks . However, compared to HMMs, artificial neural networks are generally worse at modeling sequential data. Most neural network models are trained as discriminative models; predictive models, within a single medium or across acoustic and visual media, are rare BIB004 , BIB005 . A key strength of neural networks is that their training is generally implemented as a nonparametric, nonlinear estimation, which makes no assumptions about underlying data models or probability distributions.
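As an illustration of the VQ reference-pattern approach outlined above, the following minimal sketch builds a per-speaker codebook with k-means and scores a test utterance by its average quantization distortion. It assumes cepstral-like feature frames; the codebook size and function names are illustrative and not taken from the cited systems.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq  # classic VQ routines (assumed available)

def train_codebook(train_frames, codebook_size=32):
    """Build a speaker-specific VQ codebook (set of feature-vector centroids)
    from training frames of shape (num_frames, feature_dim)."""
    codebook, _ = kmeans(train_frames.astype(float), codebook_size)
    return codebook

def average_distortion(test_frames, codebook):
    """Average distance between each test frame and its nearest centroid;
    a lower value indicates a better match to the modeled speaker."""
    _, per_frame_distance = vq(test_frames.astype(float), codebook)
    return float(per_frame_distance.mean())

def identify_speaker(test_frames, codebooks):
    """Closed-set identification: choose the speaker whose codebook yields
    the smallest average quantization distortion over the utterance."""
    return min(codebooks, key=lambda spk: average_distortion(test_frames, codebooks[spk]))
```

Because each frame is quantized independently, such a model is memoryless; matrix quantization and trellis VQ were proposed precisely to reintroduce temporal memory.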
A review of speech-based bimodal recognition <s> C. Decision Procedure <s> The results of a study aimed at finding the importance of pitch for automatic speaker recognition are presented. Pitch contours were obtained for 60 utterances, each approximately 2‐sec in duration, of 10 female speakers. A data‐reduction procedure based on the Karhunen‐Loeve representation was found effective in representing the pitch information in each contour in a 20‐dimensional space. The data were divided into two portions; one part was used to design the speaker recognition system, while the other part was used to test the effectiveness of the design. The 20‐dimensional vectors representing the pitch contours of the design set were linearly transformed so that the ratio of interspeaker to intraspeaker variance in the transformed space was maximum. A reference utterance was formed for each speaker by averaging the transformed vectors of that speaker. The test utterance was assigned to the speaker corresponding to the reference utterance with the smallest Euclidean distance in the transformed space. ... <s> BIB001 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> The authors present the results of speaker-verification technology development for use over long-distance telephone lines. A description is given of two large speech databases that were collected to support the development of new speaker verification algorithms. Also discussed are the results of discriminant analysis techniques which improve the discrimination between true speakers and imposters. A comparison is made of the performance of two speaker-verification algorithms, one using template-based dynamic time warping, and the other, hidden Markov modeling. > <s> BIB002 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> A way to identify people by voice is discussed. A speaker's spectral information represented by line spectrum pair (LSP) frequencies is used to describe characteristics of the speaker's utterance and the VQ (vector quantization) method is used to model the spectral distribution of each speaker. Some easily computed distances (Euclidean distance, weighted distance by F-ratio, and hard-limited distance) are used to measure the discrimination among speakers. > <s> BIB003 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> Text-independent speaker verification systems typically depend upon averaging over a long utterance to obtain a feature set for classification. However, not all speech is equally suited to the task of speaker verification. An approach to text-independent speaker verification that uses a two-stage classifier is presented. The first stage consists of a speaker-independent phoneme detector trained to recognize a phoneme that is distinctive from speaker to speaker. The second stage is trained to recognize the frames of speech from the target speaker that are admitted by the phoneme detector. A common feature vector based on the linear predictive coding (LPC) cepstrum is projected in different directions for each of these pattern recognition tasks. Results of tests using the described speaker verification system are shown. > <s> BIB004 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> An evaluation of various classifiers for text-independent speaker recognition is presented. In addition, a new classifier is examined for this application. The new classifier is called the modified neural tree network (MNTN). 
The MNTN is a hierarchical classifier that combines the properties of decision trees and feedforward neural networks. The MNTN differs from the standard NTN in both the new learning rule used and the pruning criteria. The MNTN is evaluated for several speaker recognition experiments. These include closed- and open-set speaker identification and speaker verification. The database used is a subset of the TIMIT database consisting of 38 speakers from the same dialect region. The MNTN is compared with nearest neighbor classifiers, full-search, and tree-structured vector quantization (VQ) classifiers, multilayer perceptrons (MLPs), and decision trees. For closed-set speaker identification experiments, the full-search VQ classifier and MNTN demonstrate comparable performance. Both methods perform significantly better than the other classifiers for this task. The MNTN and full-search VQ classifiers are also compared for several speaker verification and open-set speaker-identification experiments. The MNTN is found to perform better than full-search VQ classifiers for both of these applications. In addition to matching or exceeding the performance of the VQ classifier for these applications, the MNTN also provides a logarithmic saving for retrieval. > <s> BIB005 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> A new algorithm, the hierarchical speaker verification algorithm, is introduced. This algorithm employs a set of unique mapping functions determined from an enrolment utterance that characterize the target voice as a multidimensional martingale random walk process. For sufficiently long verification utterances, the central limit theorem insures that the accumulated scores for the target speaker will be distributed normally about the origin. Impostor speakers, which violate the martingale property, are distributed arbitrarily and widely scattered in the verification space. Excerpts of verification performance experiments are given and extensions to the algorithm for handling noisy channels and speaker template aging are discussed. > <s> BIB006 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> We study the use of discriminative training to construct speaker models for speaker verification and speaker identification. As opposed to conventional training which estimates a speaker's model based only on the training utterances from the same speaker, we use a discriminative training approach which takes into account the models of other competing speakers and formulates the optimization criterion such that speaker recognition error rate on the training data is directly minimized. We also propose a normalized score function which makes the verification formulation consistent with the minimum error training objective. We show that the speaker recognition performance is significantly improved when discriminative training is incorporated. > <s> BIB007
The classifier decision procedure sometimes involves a sequence of consecutive recognition trials BIB002 ; at times, it is implemented as a decision tree BIB005 , BIB006 . Some similarity measures are tightly coupled to particular feature types. For speaker verification or open-set identification, normalization of similarity scores may be needed to counter speech variability BIB007 . Common similarity measures include the Euclidean distance (often inverse-variance weighted, or reduced to a city-block distance) BIB001 , BIB003 , the Mahalanobis distance , the likelihood ratio BIB004 , and the arithmetic-harmonic sphericity measure . The Mahalanobis distance takes account of feature covariance and de-emphasizes features with high variance; however, reliable estimation of the covariance matrix may require a large amount of training data.
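The following minimal sketch illustrates two of the similarity measures listed above, the inverse-variance-weighted Euclidean distance and the Mahalanobis distance. The small ridge term added to the covariance matrix is an illustrative safeguard for the case of scarce training data, not part of the measures themselves.

```python
import numpy as np

def weighted_euclidean(x, template, variances):
    """Inverse-variance-weighted Euclidean distance between a test vector x
    and a reference template; features with high variance are de-emphasized."""
    diff = np.asarray(x) - np.asarray(template)
    return float(np.sqrt(np.sum(diff * diff / np.asarray(variances))))

def mahalanobis(x, mean, covariance, ridge=1e-6):
    """Mahalanobis distance; the small ridge added to the covariance matrix
    keeps it invertible when training data are scarce (illustrative safeguard)."""
    cov = np.asarray(covariance) + ridge * np.eye(len(mean))
    diff = np.asarray(x) - np.asarray(mean)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```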
A review of speech-based bimodal recognition <s> B. General Fusion Hierarchy <s> From the Publisher: ::: This invaluable reference offers the most comprehensive introduction available to the concepts of multisensor data fusion. It introduces key algorithms, provides advice on their utilization, and raises issues associated with their implementation. With a diverse set of mathematical and heuristic techniques for combining data from multiple sources, the book shows how to implement a data fusion system, describes the process for algorithm selection, functional architectures and requirements for ancillary software, and illustrates man-machine interface requirements an database issues. <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. General Fusion Hierarchy <s> Sensor fusion models have been characterized in the literature in a number of distinctly different ways: in terms of information levels at which the fusion is accomplished; the objectives of the fusion process, the application domain; the types of sensors employed, the sensor suite configuration and so on. The characterization most commonly encountered in the rapidly growing sensor fusion literature based on level of detail in the information is that of the now well known triplet: data level, feature level, and decision level. We consider here a generalized input-output (I/O) descriptor pair based characterization of the sensor fusion process that can be looked upon as a natural out growth of the trilevel characterization. The fusion system design philosophy expounded here is that an exhaustive exploitation of the sensor fusion potential should explore fusion under all of the different I/O-based fusion modes conceivable under such a characterization. Fusion system architectures designed to permit such exploitation offer the requisite flexibility for developing the most effective fusion system designs for a given application. A second facet of this exploitation is aimed at exploring the new concept of self-improving multisensor fusion system architectures wherein the central (fusion system) and focal (individual sensor subsystems) decision makers mutually enhance the other's performance by providing reinforced learning. A third facet is that of investigating fusion system architectures for environments wherein the different local decision makers may only be capable of narrower decisions that span only a subset of decision choices. The paper discusses these flexible fusion system architectures along with related issues and illustrates them with examples of their application to real-world scenarios. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. General Fusion Hierarchy <s> Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation and other areas. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state-of-the-art in data fusion. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. 
General Fusion Hierarchy <s> We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions-the sum rule-outperforms other classifier combinations schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically. <s> BIB004
Sensor fusion deals with the combination of information produced by several sources BIB002 , BIB003 . It has borrowed mathematical and heuristic techniques from a wide array of fields, such as statistics, artificial intelligence, decision theory, and digital signal processing. Theoretical frameworks for sensor fusion have also been proposed BIB004 . In pattern recognition, sensor fusion can be performed at the data level, feature level, or decision level (see Fig. 1 ); hybrid fusion methods are also available BIB001 . Low-level fusion can occur at the data level or feature level, whereas intermediate-level and high-level fusion typically involve the combination of recognition scores or labels produced as intermediate or final classifier outputs. Hall BIB001 argues that, owing to the information loss that occurs as raw data are transformed into features and eventually into classifier outputs, classification accuracy is expected to be lowest for decision fusion. However, it is also known that corruption of information due to noise is potentially highest, and requirements for data registration most stringent, at the lower levels of the fusion hierarchy BIB002 , . In addition, low-level fusion is less robust to sensor failure than high-level fusion BIB002 . Moreover, low-level fusion generally requires more training data because it usually involves more free parameters than high-level fusion. It is also easier to upgrade a single-sensor system into a multisensor system based on decision fusion; sensors can be added without having to retrain any legacy single-sensor classifiers. Additionally, the frequently used simplifying assumption of independence between sensor-specific data holds better at the decision level, particularly if the classifiers are not of the same type. However, decision fusion might consequently fail to exploit the potentially beneficial correlation present at the lower levels.
A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> A bimodal automatic speech recognition system, using simultaneously auditory model and articulatory parameters, is described. Results given for various speaker dependent phonetic recognition experiments, regarding the Italian plosive class, show the usefulness of this approach especially in noisy conditions. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> There has recently been increasing interest in the idea of enhancing speech recognition by the use of visual information derived from the face of the talker. This paper demonstrates the use of nonlinear image decomposition, in the form of a "sieve", applied to the task of visual speech recognition. Information derived from the mouth region is used in visual and audio-visual speech recognition of a database of the letters A-Z for four talkers. A scale histogram is generated directly from the gray-scale pixels of a window containing the talker's mouth on a per-frame basis. Results are presented for visual-only, audio-only and a simple audio-visual case. <s> BIB002 </s> A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> We present work on improving the performance of automated speech recognizers by using additional visual information: (lip-/speechreading); achieving error reduction of up to 50%. This paper focuses on different methods of combining the visual and acoustic data to improve the recognition performance. We show this on an extension of an existing state-of-the-art speech recognition system, a modular MS-TDNN. We have developed adaptive combination methods at several levels of the recognition network. Additional information such as estimated signal-to-noise ratio (SNR) is used in some cases. The results of the different combination methods are shown for clean speech and data with artificial noise (white, music, motor). The new combination methods adapt automatically to varying noise conditions making hand-tuned parameters unnecessary. <s> BIB003 </s> A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> Consistently high person recognition accuracy is difficult to attain using a single recognition modality. This paper assesses the fusion of voice and outer lip-margin features for person identification. Feature fusion is investigated in the form of audio-visual feature vector concatenation, principal component analysis, and linear discriminant analysis. The paper shows that, under mismatched test and training conditions, audio-visual feature fusion is equivalent to an effective increase in the signal-to-noise ratio of the audio signal. Audio-visual feature vector concatenation is shown to be an effective method for feature combination, and linear discriminant analysis is shown to possess the capability of packing discriminating audio-visual information into fewer coefficients than principal component analysis. The paper reveals a high sensitivity of bimodal person identification to a mismatch between LDA or PCA feature-fusion module and speaker model training noise-conditions. Such a mismatch leads to worse identification accuracy than unimodal identification. <s> BIB004 </s> A review of speech-based bimodal recognition <s> C. 
Low-Level Audio-Visual Fusion <s> We propose the use of discriminative training by means of the generalized probabilistic descent (GPD) algorithm to estimate hidden Markov model (HMM) stream exponents for audio-visual speech recognition. Synchronized audio and visual features are used to respectively train audio-only and visual-only single-stream HMMs of identical topology by maximum likelihood. A two-stream HMM is then obtained by combining the two single-stream HMMs and introducing exponents that weigh the log-likelihood of each stream. We present the GPD algorithm for stream exponent estimation, consider a possible initialization, and apply it to the single speaker connected letters task of the AT&T bimodal database. We demonstrate the superior performance of the resulting multi-stream HMM to the audio-only, visual-only, and audio-visual single-stream HMMs. <s> BIB005
To the best of the authors' knowledge, data-level fusion of acoustic and visual speech has not been attempted, possibly due to range registration difficulties. Low-level fusion is usually based on transforming the input space into one with less cross-correlation, in which most of the information is captured in fewer dimensions than in the original space. Feature fusion is commonly implemented as a concatenation of acoustic and visual speech feature vectors BIB004 , BIB002 , BIB005 . This typically gives an input space of higher dimensionality than each unimodal pattern space and hence raises the specter of the curse of dimensionality. Consequently, linear or nonlinear transformations, coupled with dimensionality reduction, are often applied to feature-vector pairs. Nonlinear transformation is often implemented by a neural network layer BIB001 , BIB003 , connected to the primary features or to the outputs of subnets downstream of the integration layer. PCA and LDA are frequently used for linear transformation of vector pairs BIB004 . Although the Kalman filter can be used for feature fusion , it has not found much use in bimodal recognition. Transformations such as PCA and LDA allow dimensionality reduction. However, PCA and LDA may require high volumes of training data for reliable estimation of the covariance matrices on which they are anchored. LDA often outperforms PCA in recognition tasks because, unlike LDA, PCA does not use discriminative information during parameter estimation. However, the class information embedded in LDA is a set of class means; hence, LDA is ill suited for classes with multiple distribution modes or with confusable means.
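A minimal sketch of feature fusion by vector concatenation, followed by PCA and LDA for dimensionality reduction, is given below. It uses synthetic data as a stand-in for frame-synchronous acoustic and visual features, and assumes the scikit-learn implementations of PCA and LDA.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
num_frames, dim_audio, dim_visual, num_classes = 500, 24, 12, 5

# Synthetic stand-ins for frame-synchronous acoustic and visual feature vectors.
audio = rng.normal(size=(num_frames, dim_audio))
visual = rng.normal(size=(num_frames, dim_visual))
labels = rng.integers(0, num_classes, size=num_frames)

# Feature fusion by concatenation: dimensionality becomes dim_audio + dim_visual.
fused = np.hstack([audio, visual])

# Linear transformations coupled with dimensionality reduction.
pca_features = PCA(n_components=10).fit_transform(fused)  # variance-driven compression
lda = LinearDiscriminantAnalysis(n_components=num_classes - 1)
lda_features = lda.fit_transform(fused, labels)            # class-separability-driven compression
```

Note that LDA can retain at most one fewer dimension than the number of classes, a direct consequence of its reliance on class means discussed above.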
A review of speech-based bimodal recognition <s> 1) Information Combination: <s> We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network. <s> BIB001 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an expectation-maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. <s> BIB002 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> Methods of integrating audio and visual information in an audiovisual HMM-based ASR system are investigated. Experiments involve discrimination of a set of 22 consonants, with various integration strategies. The role of the visual subsystem is varied; for example, in one run, the subsystem attempts to classify all 22 consonants, while in other runs it attempts only broader classifications. In a second experiment, a new HMM formulation is employed, which incorporates the integration into the HMM at a pre-categorical stage. A single variable parameter allows the relative contribution of audio and visual information to be controlled. This form of integration can be very easily incorporated into existing audio-based continuous speech recognizers. > <s> BIB003 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> Audio-visual person recognition promises higher recognition accuracy than recognition in either domain in isolation. To reach this goal, special attention should be given to the strategies for combining the acoustic and visual sensory modalities. The paper presents a comparative assessment of three decision level data fusion techniques for person identification. Under mismatched training and test noise conditions, Bayesian inference and Dempster-Shafer theory are shown to outperform possibility theory. For these mismatched noise conditions, all three techniques result in compromising integration. Under matched training and test noise conditions, the three techniques yield similar error rates approaching the more accurate of the two sensory modalities, and show signs of leading to enhancing integration at low acoustic noise levels. The paper also shows that automatic identification of identical twins is possible, and that lip margins convey a high level of speaker identity information. <s> BIB004 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> The use of clustering algorithms for decision-level data fusion is proposed. 
Person authentication results coming from several modalities (e.g., still image, speech), are combined by using fuzzy k-means (FKM) and fuzzy vector quantization (FVQ) algorithms, and a median radial basis function (MRBF) network. The quality measure of the modalities data is used for fuzzification. Two modifications of the FKM and FVQ algorithms, based on a fuzzy vector distance definition, are proposed to handle the fuzzy data and utilize the quality measure. Simulations show that fuzzy clustering algorithms have better performance compared to the classical clustering algorithms and other known fusion algorithms. MRBF has better performance especially when two modalities are combined. Moreover, the use of the quality via the proposed modified algorithms increases the performance of the fusion system. <s> BIB005 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> Biometric person identity authentication is gaining more and more attention. The authentication task performed by an expert is a binary classification problem: reject or accept identity claim. Combining experts, each based on a different modality (speech, face, fingerprint, etc.), increases the performance and robustness of identity authentication systems. In this context, a key issue is the fusion of the different experts for taking a final decision (i.e., accept or reject identity claim). We propose to evaluate different binary classification schemes (support vector machine, multilayer perceptron, C4.5 decision tree, Fisher's linear discriminant, Bayesian classifier) to carry on the fusion. The experimental results show that support vector machines and Bayesian classifier achieve almost the same performances, and both outperform the other evaluated classifiers. <s> BIB006
A common technique for post-categorical fusion is the linear combination of the scores output by the single-modality classifiers BIB003 . Geometric averaging is also applied at times; a popular approach for combining HMMs is decision fusion implemented as a product of the likelihoods of a pair of uncoupled audio and visual HMMs. DTW is sometimes used, along the sequence of feature vectors, to optimize the path through class hypotheses . The combination of classifiers possessing localized expertise can give a better estimate of the decision boundary between classes. This has motivated the development of the mixture of experts (MOE) approach BIB001 . An MOE consists of a parallel configuration of experts, whose outputs are dynamically integrated by the outputs of a trainable gating network. The integration is a weighted sum, whose weights are learned estimates of the correctness of each expert, given the current input. MOEs can be incorporated into a tree-like architecture known as hierarchical mixture of experts (HME) BIB002 . One difficulty with using HMEs is that the selection of appropriate model parameters (number of levels, branching factor of the tree, architecture of experts) may require a good insight into the data or problem space under consideration. Neural networks, Bayesian inference, Dempster-Shafer theory, and possibility theory have also provided frameworks for decision fusion BIB004 . Integration by neural network is typically implemented by neurons whose weighted inputs are connected to the outputs of single-modality classifiers. Bayesian inference uses Bayes' rule to calculate a posteriori bimodal class probabilities from the a priori class probabilities and the class conditional probabilities of the observed unimodal classifier outputs. Dempster-Shafer theory of evidence is a generalization of Bayesian probability theory. The bimodal belief in each possible class is computed by applying Dempster's rule of combination to the basic probability assignment in support of each class. Possibility theory is based on fuzzy sets. The bimodal possibility for each class is computed by combining the possibility distributions of classifier outputs. Although possibility theory and Dempster-Shafer theory of evidence are meant to provide more robust frameworks than Bayesian inference for combining uncertain or imprecise information, comparative assessment on bimodal recognition has shown that this may not be the case BIB004 . 2) Classification: Decision fusion can be formulated as a classification of the pattern of unimodal classifier outputs. The latter are grouped into a vector that is input to another classifier, which yields a classification decision representing the consensus among the unimodal classifiers. A variety of classifiers, acting as a decision fusion mechanism, have been evaluated. It is suggested in that, compared to kNN and decision trees, logistic regression offers the best accuracy and the lowest computational cost during recognition. It is shown in BIB005 that a median radial basis function network outperforms clustering based on fuzzy k-means or fuzzy VQ; the superiority of fuzzy clustering over conventional clustering is also shown. Comparison of the support vector machine (SVM), minimum cost Bayesian classifier, Fisher's linear discriminant, decision trees, and MLP showed that the MLP gives the worst accuracy BIB006 . The comparison also showed that the SVM and Bayesian classifiers have similar performance and that they outperform the other classifiers.
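As a concrete illustration of the score-combination rules discussed under information combination, the sketch below applies a weighted linear combination (weighted sum) and a weighted product, computed in the log domain, to per-class scores from the two single-modality classifiers. The scores and weights are illustrative.

```python
import numpy as np

def weighted_sum_fusion(audio_scores, visual_scores, audio_weight=0.7):
    """Linear combination of per-class scores from the two single-modality
    classifiers; the weight reflects the assumed reliability of the audio stream."""
    return audio_weight * np.asarray(audio_scores) + (1.0 - audio_weight) * np.asarray(visual_scores)

def weighted_log_product_fusion(audio_loglik, visual_loglik, audio_exponent=0.7):
    """Weighted product of class likelihoods, computed in the log domain, as in
    decision fusion of uncoupled audio and visual HMMs."""
    return audio_exponent * np.asarray(audio_loglik) + (1.0 - audio_exponent) * np.asarray(visual_loglik)

audio_scores = np.array([0.6, 0.3, 0.1])   # hypothetical per-class posteriors
visual_scores = np.array([0.2, 0.7, 0.1])
fused_scores = weighted_sum_fusion(audio_scores, visual_scores, audio_weight=0.5)
decision = int(np.argmax(fused_scores))    # class receiving the highest combined score
```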
An SVM is a binary classifier based on statistical learning theory. SVMs aim to maximize generalization capability, even for pattern spaces of high dimensionality. A downside of SVMs is that an inappropriate kernel function can result in poor recognition accuracy; hence, kernel functions must be selected with care.
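A minimal sketch of decision fusion formulated as classification, here with an SVM operating on the vector of unimodal classifier scores, is shown below. The synthetic scores, labels, and RBF kernel settings are illustrative assumptions, with scikit-learn's SVC used as the SVM implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Each example is a vector of unimodal classifier outputs (audio score, visual
# score) for an identity claim; label 1 marks a true claim, 0 an impostor.
train_scores = rng.normal(size=(200, 2))
train_labels = (train_scores.sum(axis=1) + 0.3 * rng.normal(size=200) > 0).astype(int)

# The kernel and its parameters must be chosen with care, as noted above.
fusion_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(train_scores, train_labels)

new_claim = np.array([[0.4, -0.1]])        # scores from a new verification trial
accept = bool(fusion_svm.predict(new_claim)[0])
```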
A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> Methods of integrating audio and visual information in an audiovisual HMM-based ASR system are investigated. Experiments involve discrimination of a set of 22 consonants, with various integration strategies. The role of the visual subsystem is varied; for example, in one run, the subsystem attempts to classify all 22 consonants, while in other runs it attempts only broader classifications. In a second experiment, a new HMM formulation is employed, which incorporates the integration into the HMM at a pre-categorical stage. A single variable parameter allows the relative contribution of audio and visual information to be controlled. This form of integration can be very easily incorporated into existing audio-based continuous speech recognizers. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm and a clear Bayesian semantics. However the Markovian framework makes strong restrictive assumptions about the system generating the signal-that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions. <s> BIB002 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. 
On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB003 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable—the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward–backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach‘s chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot. <s> BIB004 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> We study Markov models whose state spaces arise from the Cartesian product of two or more discrete random variables. We show how to parameterize the transition matrices of these models as a convex combination—or mixture—of simpler dynamical models. The parameters in these models admit a simple probabilistic interpretation and can be fitted iteratively by an Expectation-Maximization (EM) procedure. We derive a set of generalized Baum-Welch updates for factorial hidden Markov models that make use of this parameterization. We also describe a simple iterative procedure for approximately computing the statistics of the hidden states. Throughout, we give examples where mixed memory models provide a useful representation of complex stochastic processes. <s> BIB005
The fusion of acoustic and visual speech can be cast as probabilistic modeling of coupled time series. Such modeling may capture the potentially useful coupling, or conditional dependence, between the two modalities. The level of synchronization between acoustic and visual speech varies along an utterance; hence, a flexible framework for modeling the asynchrony is required. Factorial HMMs BIB004 , BIB005 , Boltzmann chains and their variants (multistream HMMs BIB003 and coupled HMMs BIB002 ) are possible stochastic models for the combination of time-coupled modalities (see Fig. 4 ). Factorial HMMs explicitly model intra-process state structure and inter-process coupling; this makes them suitable for bimodal recognition, where each process could correspond to a modality. The state space of a factorial HMM is the Cartesian product of the states of its component HMMs. The modeling of inter-process coupling has the potential to reduce sensitivity to unwanted intra-process variation during a recognition trial and hence may enhance recognition robustness. Variants of factorial HMMs have been shown to be superior to conventional HMMs for modeling interacting processes, such as two-handed gestures BIB002 or acoustic and visual speech BIB003 , . A simpler pre-categorical fusion approach for HMM-based classifiers is described in BIB001 . In this approach, the weighted product of the emission probabilities of the acoustic and visual speech feature vectors is used during Viterbi decoding for a bimodal discrete HMM.
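The simpler pre-categorical approach mentioned above can be sketched as follows: the per-state emission log-probabilities of the audio and visual streams are combined as a weighted product (a weighted sum in the log domain) and then used in standard Viterbi decoding. The stream exponent of 0.7 is an illustrative value.

```python
import numpy as np

def combined_emission_logprobs(log_b_audio, log_b_visual, audio_exponent=0.7):
    """Per-state emission log-probabilities of a bimodal HMM, formed as a weighted
    product of the audio and visual streams (a weighted sum in the log domain).
    Both inputs have shape (num_frames, num_states)."""
    return audio_exponent * log_b_audio + (1.0 - audio_exponent) * log_b_visual

def viterbi(log_initial, log_transition, log_emission):
    """Standard Viterbi decoding over the combined emission scores; returns the
    most likely state sequence and its log score."""
    num_frames, num_states = log_emission.shape
    delta = log_initial + log_emission[0]
    backpointer = np.zeros((num_frames, num_states), dtype=int)
    for t in range(1, num_frames):
        trellis = delta[:, None] + log_transition  # rows: previous state, columns: current state
        backpointer[t] = trellis.argmax(axis=0)
        delta = trellis.max(axis=0) + log_emission[t]
    path = [int(delta.argmax())]
    for t in range(num_frames - 1, 0, -1):
        path.append(int(backpointer[t, path[-1]]))
    return path[::-1], float(delta.max())
```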
A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> We present work on improving the performance of automated speech recognizers by using additional visual information: (lip-/speechreading); achieving error reduction of up to 50%. This paper focuses on different methods of combining the visual and acoustic data to improve the recognition performance. We show this on an extension of an existing state-of-the-art speech recognition system, a modular MS-TDNN. We have developed adaptive combination methods at several levels of the recognition network. Additional information such as estimated signal-to-noise ratio (SNR) is used in some cases. The results of the different combination methods are shown for clean speech and data with artificial noise (white, music, motor). The new combination methods adapt automatically to varying noise conditions making hand-tuned parameters unnecessary. <s> BIB001 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation and other areas. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state-of-the-art in data fusion. <s> BIB002 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> The use of clustering algorithms for decision-level data fusion is proposed. Person authentication results coming from several modalities (e.g., still image, speech), are combined by using fuzzy k-means (FKM) and fuzzy vector quantization (FVQ) algorithms, and a median radial basis function (MRBF) network. The quality measure of the modalities data is used for fuzzification. Two modifications of the FKM and FVQ algorithms, based on a fuzzy vector distance definition, are proposed to handle the fuzzy data and utilize the quality measure. Simulations show that fuzzy clustering algorithms have better performance compared to the classical clustering algorithms and other known fusion algorithms. MRBF has better performance especially when two modalities are combined. Moreover, the use of the quality via the proposed modified algorithms increases the performance of the fusion system. <s> BIB003 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> The integration of multiple classifiers promises higher classification accuracy and robustness than can be obtained with a single classifier. This paper proposes a new adaptive technique for classifier integration based on a linear combination model. The proposed technique is shown to exhibit robustness to a mismatch between test and training conditions. It often outperforms the most accurate of the fused information sources. A comparison between adaptive linear combination and non-adaptive Bayesian fusion shows that, under mismatched test and training conditions, the former is superior to the latter in terms of identification accuracy and insensitivity to information source distortion. <s> BIB004 </s> A review of speech-based bimodal recognition <s> E. 
Adaptive Fusion <s> Audiovisual speech recognition involves fusion of the audio and video sensors for phonetic identification. There are three basic ways to fuse data streams for taking a decision such as phoneme identification: data-to-decision, decision-to-decision, and data-to-data. This leads to four possible models for audiovisual speech recognition, that is direct identification in the first case, separate identification in the second one, and two variants of the third early integration case, namely dominant recoding or motor recoding. However, no systematic comparison of these models is available in the literature. We propose an implementation of these four models, and submit them to a benchmark test. For this aim, we use a noisy-vowel corpus tested on two recognition paradigms in which the systems are tested at noise levels higher than those used for learning. In one of these paradigms, the signal-to-noise ratio (SNR) value is provided to the recognition systems, in the other it is not. We also introduce a new criterion for evaluating performances, based on transmitted information on individual phonetic features. In light of the compared performances of the four models with the two recognition paradigms, we discuss the advantages and drawbacks of these models, leading to proposals for data representation, fusion architecture, and control of the fusion process through sensor reliability. <s> BIB005 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB006
In most fusion approaches to pattern recognition, fusion parameters are determined at training time and remain frozen for all subsequent recognition trials. However, optimal fusion requires a good match between the fusion parameters and the factors that affect the input patterns. Nonadaptive data fusion does not guarantee such a match; hence, pattern variation may lead to suboptimal fusion, which may even result in worse accuracy than unimodal recognition BIB002 . Fusion parameters should preferably adapt to changes in recognition conditions. Such dynamic parameters can be based on estimates of the signal-to-noise ratio BIB006 , BIB001 , entropy measures BIB001 , the degree of voicing in the acoustic speech , or measures relating to the perceived quality of the unimodal classifier output scores BIB003 , BIB004 , BIB005 .
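A minimal sketch of adaptive score-level fusion driven by an estimated acoustic signal-to-noise ratio is given below. The sigmoid mapping and its midpoint and slope constants are illustrative assumptions rather than values taken from the cited systems.

```python
import numpy as np

def audio_weight_from_snr(snr_db, midpoint_db=10.0, slope=0.25):
    """Map an estimated acoustic SNR (in dB) to the audio fusion weight using a
    sigmoid: close to 1 in clean conditions, close to 0 in heavy noise.
    The midpoint and slope are illustrative tuning constants."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint_db)))

def adaptive_score_fusion(audio_scores, visual_scores, snr_db):
    """Score-level fusion whose weights track the estimated noise level."""
    w = audio_weight_from_snr(snr_db)
    return w * np.asarray(audio_scores) + (1.0 - w) * np.asarray(visual_scores)
```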
A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> Methods of integrating audio and visual information in an audiovisual HMM-based ASR system are investigated. Experiments involve discrimination of a set of 22 consonants, with various integration strategies. The role of the visual subsystem is varied; for example, in one run, the subsystem attempts to classify all 22 consonants, while in other runs it attempts only broader classifications. In a second experiment, a new HMM formulation is employed, which incorporates the integration into the HMM at a pre-categorical stage. A single variable parameter allows the relative contribution of audio and visual information to be controlled. This form of integration can be very easily incorporated into existing audio-based continuous speech recognizers. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> We present the development of a modular system for flexible human-computer interaction via speech. The speech recognition component integrates acoustic and visual information (automatic lip-reading) improving overall recognition, especially in noisy environments. The image of the lips, constituting the visual input, is automatically extracted from the camera picture of the speaker's face by the lip locator module. Finally, the speaker's face is automatically acquired and followed by the face tracker sub-system. Integration of the three functions results in the first bi-modal speech recognizer allowing the speaker reasonable freedom of movement within a possibly noisy room while continuing to communicate with the computer via voice. Compared to audio-alone recognition, the combined system achieves a 20 to 50 percent error rate reduction for various signal/noise conditions. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> We present work on improving the performance of automated speech recognizers by using additional visual information: (lip-/speechreading); achieving error reduction of up to 50%. This paper focuses on different methods of combining the visual and acoustic data to improve the recognition performance. We show this on an extension of an existing state-of-the-art speech recognition system, a modular MS-TDNN. We have developed adaptive combination methods at several levels of the recognition network. Additional information such as estimated signal-to-noise ratio (SNR) is used in some cases. The results of the different combination methods are shown for clean speech and data with artificial noise (white, music, motor). The new combination methods adapt automatically to varying noise conditions making hand-tuned parameters unnecessary. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> Audiovisual speech recognition involves fusion of the audio and video sensors for phonetic identification. There are three basic ways to fuse data streams for taking a decision such as phoneme identification: data-to-decision, decision-to-decision, and data-to-data. This leads to four possible models for audiovisual speech recognition, that is direct identification in the first case, separate identification in the second one, and two variants of the third early integration case, namely dominant recoding or motor recoding. However, no systematic comparison of these models is available in the literature. We propose an implementation of these four models, and submit them to a benchmark test. 
For this aim, we use a noisy-vowel corpus tested on two recognition paradigms in which the systems are tested at noise levels higher than those used for learning. In one of these paradigms, the signal-to-noise ratio (SNR) value is provided to the recognition systems, in the other it is not. We also introduce a new criterion for evaluating performances, based on transmitted information on individual phonetic features. In light of the compared performances of the four models with the two recognition paradigms, we discuss the advantages and drawbacks of these models, leading to proposals for data representation, fusion architecture, and control of the fusion process through sensor reliability. <s> BIB004 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB005
Bimodal sensor fusion can yield (see Tables I-III): 1) better classification accuracy than either modality ("enhancing fusion," which is the ultimate target of sensor fusion); 2) classification accuracy bounded by the accuracy of each modality ("compromising fusion"); 3) lower classification accuracy than the least accurate modality ("attenuating fusion"). "Enhancing," "compromising," and "attenuating" fusion is a terminology adapted from . When audio-visual fusion results in improved accuracy, it is often observed that intermediate unimodal accuracy gives a higher relative improvement in accuracy than low or high unimodal accuracy , BIB003 . In particular, the "law of diminishing returns" seems to apply when unimodal accuracy changes from intermediate to high. Most findings show that audio-visual fusion can counteract a degradation of acoustic speech BIB002 , BIB005 . Audio-visual fusion is therefore a viable alternative, or complement, to signal processing techniques which try to minimize the effect of acoustic noise degradation on recognition accuracy BIB005 . Although it is sometimes contended that the level at which fusion is performed determines recognition accuracy (see Section IV-B), published results reveal that none of the levels is consistently superior to others. It is very likely that recognition accuracy is not determined solely by the level at which the fusion is applied, but also by the particular fusion technique and training or test regime used . For example, BIB001 shows nearly equal improvement in speech recognition accuracy accruing from either pre-categorical or post-categorical audio-visual fusion. It is also observed in BIB004 that feature fusion and decision fusion yield the same speech recognition accuracy. However, shows that post-categorical (high-level) audio-visual fusion yields better speech recognition accuracy than pre-categorical (low-level) fusion; the worst accuracy is obtained with intermediate-level fusion.
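The terminology above can be made concrete with a small helper that labels a fusion outcome from the unimodal and bimodal accuracies; the accuracy figures in the example are hypothetical.

```python
def fusion_outcome(audio_accuracy, visual_accuracy, bimodal_accuracy):
    """Label a fusion result using the terminology defined above."""
    best = max(audio_accuracy, visual_accuracy)
    worst = min(audio_accuracy, visual_accuracy)
    if bimodal_accuracy > best:
        return "enhancing"
    if bimodal_accuracy < worst:
        return "attenuating"
    return "compromising"

# Hypothetical example: 82% audio-only, 65% visual-only, 90% bimodal.
print(fusion_outcome(0.82, 0.65, 0.90))   # -> "enhancing"
```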
A review of speech-based bimodal recognition <s> C. Performance Assessment Issues <s> The measured performance of any audio-visual processing or analysis technique is inevitably influenced by the database material used in the measurement. Careful consideration should therefore be given to those factors affecting the database content. This paper presents the design issues for the DAVID audio-visual database. First, a number of audio-visual databases are summarised, and the database design issues are discussed. Finally, the content and quality assessment results for DAVID are given. <s> BIB001 </s> A review of speech-based bimodal recognition <s> C. Performance Assessment Issues <s> The primary goal of the M2VTS project is to address the issue of secured access to buildings or multi-media services by the use of automatic person verification based on multimodal strategies (secured access based on speech, face images and other information). This paper presents an overview of the multimodal face database recorded at UCL premises for the purpose of research applications inside the M2VTS project. This database offers synchronized video and speech data as well as image sequences allowing to access multiple views of a face. This material should permit the design and the testing of identification strategies based on speech andro labial analysis, frontal and/or profile face analysis as well as 3-D analysis thanks to the multiple views. The M2VTS Database is available to any non-commercial user on request to the European Language Resource Agency. <s> BIB002 </s> A review of speech-based bimodal recognition <s> C. Performance Assessment Issues <s> Keywords: vision Reference EPFL-CONF-82502 URL: ftp://ftp.idiap.ch/pub/papers/vision/avbpa99.pdf Record created on 2006-03-10, modified on 2017-05-10 <s> BIB003
It is difficult to generalize some findings reported in the bimodal recognition literature and to establish a meaningful comparison of recognition techniques with respect to published recognition accuracy figures. Notably, not all systems are fully automatic, and there are no universally accepted test databases or performance assessment methodologies. In addition, the majority of reported bimodal recognition figures are for relatively small tasks in terms of vocabulary, grammar, or distribution of speakers. Another problem with most published findings is the lack of rigor in performance assessment methodology. Most results quote empirically determined error rates as point estimates, and findings are often based on inferences made without reference to the confidence intervals of the estimates or to the statistical significance of any observed differences. To permit the drawing of objective conclusions from empirical investigations, statistical decision theory should guide the interpretation of results. Most of the reported results are based on data captured in controlled laboratory environments. Most techniques have not been tested in real-world environments. Performance degradation is expected in such environments, particularly if the modeling and fusion techniques are not adaptive. Real-world environments are characterized by a comparatively limited level of control over operational factors such as acoustic and electromagnetic noise, illumination and overall image quality, ruggedness of data capture equipment, as well as head pose, facial appearance, and physiological or emotional state of the speaker. These sources of variability could lead to a mismatch between test and training conditions, and hence potentially result in degraded recognition accuracy. Although the need for widely accepted benchmark databases has been asserted, there is a paucity of databases covering the breadth and depth of research areas in bimodal recognition. Typical limitations of most existing bimodal recognition databases are: small population; narrow phonetic coverage; isolated words; lack of synchronization between audio and video streams; or absence of certain visual cues. There is a pressing need for the development of readily available, good benchmark databases. The "DAVID" BIB001 , "M2VTS" BIB002 , "XM2VTSDB" BIB003 , and "ViaVoice Audio-Visual" databases represent positive efforts toward the fulfilment of this need.
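As an illustration of the interval-based reporting advocated above, the sketch below computes a Wilson score confidence interval for an empirically measured error rate and a simple two-proportion z statistic for comparing two systems. The error counts and test-set sizes are hypothetical, and this is only one of several valid statistical treatments.

```python
import math

def wilson_ci(errors, n, z=1.96):
    """95% Wilson score interval for an error rate estimated from n test tokens."""
    p = errors / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def two_proportion_z(err1, n1, err2, n2):
    """z statistic for the difference between two independently measured error rates."""
    p1, p2 = err1 / n1, err2 / n2
    pooled = (err1 + err2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: 120 errors out of 1000 tokens vs. 100 errors out of 1000 tokens.
print(wilson_ci(120, 1000))                     # roughly (0.101, 0.142)
print(two_proportion_z(120, 1000, 100, 1000))   # |z| < 1.96 -> not significant at the 5% level
```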
A review of speech-based bimodal recognition <s> B. Research Avenues <s> We present an approach to combine the optical motion analysis of the lips and acoustic voice analysis of defined single words for identifying the people speaking. Due to the independence of the different data sources, a higher reliability of the results in comparison with simple optical lip reading is observed. The classification of the preprocessed data is done by synergetic computers, which have attracted increasing attention as robust algorithms for solving industrial classification tasks. Special potential of synergetic computers lies in their close mathematical similarity to self-organized phenomena in nature. Therefore they present a clear perspective for hardware realizations. We propose that the combination of motion and voice analysis offers a possibility for realizing robust access control systems. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Research Avenues <s> This paper describes a new approach for speaker identification based on lipreading. Visual features are extracted from image sequences of the talking face and consist of shape parameters which describe the lip boundary and intensity parameters which describe the grey-level distribution of the mouth area. Intensity information is based on principal component analysis using eigenspaces which deform with the shape model. The extracted parameters account for both, speech dependent and speaker dependent information. We built spatio-temporal speaker models based on these features, using HMMs with mixtures of Gaussians. Promising results were obtained for text dependent and text independent speaker identification tests performed on a small video database. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Research Avenues <s> The objective of this work is a computationally efficient method for inferring vocal tract shape trajectories from acoustic speech signals. We use an multilayer perceptron (MLP) to model the vocal tract shape-to-acoustics mapping, then in an analysis-by-synthesis approach, optimise an objective function that includes both the accuracy of the spectrum approximation and the credibility of the vocal tract dynamics. This optimisation carries out gradient descent using backpropagation of derivatives through the MLP. Employing a series of MLPs of increasing order avoids getting trapped in local optima caused by the many-to-one mapping between vocal tract shapes and acoustics. We obtain two orders of magnitude speed increase compared with our previous methods using codebooks and direct optimisation of a synthesiser. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Research Avenues <s> This paper deals with a noisy speech enhancement technique based on the fusion of auditory and visual information. We first present the global structure of the system, and then we focus on the tool we used to melt both sources of information. The whole noise reduction system is implemented in the context of vowel transitions corrupted with white noise. A complete evaluation of the system in this context is presented, including distance measures, Gaussian classification scores, and a perceptive test. The results are very promising. <s> BIB004
There is a need for research into bimodal recognition capable of adapting its pattern modeling and fusion knowledge to the prevailing recognition conditions. Further research into the nesting of fusion modules (an approach called meta-fusion in ) also promises improved recognition accuracy and easier handling of complexity. The modeling of the asynchrony between the two channels is also an important research issue. Furthermore, the challenges of speaker adaptation and of recognizing spontaneous and continuous speech require further investigation within the framework of bimodal recognition. To combat data variability, the symbiotic combination of sensor fusion with mature techniques developed for robust unimodal recognition is also a worthwhile research avenue. In addition, further advances in the synergetic combination of speech with other channels (such as hand gestures and facial expressions) to reduce possible semantic conflicts in spoken communication are required. Despite the potential gains in accuracy and robustness afforded by bimodal recognition, the latter invariably results in higher storage and computational costs than unimodal recognition. To make the implementation of real-world applications tractable, the development of optimized and robust visual-speech segmentation, feature extraction, or modeling techniques is a worthwhile research avenue. Comparative studies of techniques should accompany such developments. Research efforts should also be directed at the joint use of visual and acoustic speech for estimating vocal tract shape, a difficult problem often known as the inversion task BIB003 . This could also be coupled with studies of the joint modeling of the two modalities. The close relationship between the articulatory and phonetic domains suggests that an articulatory representation of speech might be better suited for speech recognition, synthesis, and coding than the conventional spectral acoustic features BIB003 . Previous approaches to the inversion task have relied on acoustic speech alone. The multimodal nature of speech perception suggests that visual speech offers additional information for the acquisition of a physical model of the vocal tract. An investigation relevant to the inversion task within a bimodal framework is presented in BIB004 . The effect of visual speech variability on bimodal recognition accuracy has not been investigated as much as its acoustic counterpart . As a result, it is difficult to vouch strongly for the benefit of using visual speech for bimodal recognition in unconstrained visual environments. Studies of the effect of the following factors are called for, particularly in large-scale recognition tasks closely resembling typical real-world tasks: segmentation accuracy; video compression; image noise; occlusion; illumination; speaker pose; and facial expression or paraphernalia (such as facial hair, hats, makeup). A study into the effects of some of these factors is given in . Although the multimodal character of spoken language has long been formally recognized and exploited, multimodal speaker recognition has not received the same attention. The surprisingly high speaker recognition accuracy obtained with visual speech , BIB002 , BIB001 , warrants extensive research on visual speech, either alone or combined with acoustic speech, for speaker recognition. Research is also needed on the potential of bimodal recognition for alleviating the problem of speaker impersonation.
The study of how humans integrate audio-visual information could also be beneficial as a basis for developing robust and computationally efficient mechanisms or strategies for bimodal recognition by machine, particularly with regard to feature extraction, classification, and fusion.
Multilateration -- source localization from range difference measurements: Literature survey <s> INTRODUCTION <s> We consider a digital signal processing sensor array system, based on randomly distributed sensor nodes, for surveillance and source localization applications. In most array processing the sensor array geometry is fixed and known and the steering array vector/manifold information is used in beamformation. In this system, array calibration may be impractical due to unknown placement and orientation of the sensors with unknown frequency/spatial responses. This paper proposes a blind beamforming technique, using only the measured sensor data, to form either a sample data or a sample correlation matrix. The maximum power collection criterion is used to obtain array weights from the dominant eigenvector associated with the largest eigenvalue of a matrix eigenvalue problem. Theoretical justification of this approach uses a generalization of Szego's (1958) theory of the asymptotic distribution of eigenvalues of the Toeplitz form. An efficient blind beamforming time delay estimate of the dominant source is proposed. Source localization based on a least squares (LS) method for time delay estimation is also given. Results based on analysis, simulation, and measured acoustical sensor data show the effectiveness of this beamforming technique for signal enhancement and space-time filtering. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> INTRODUCTION <s> In this paper we present an efficient method to perform acoustic source localization and tracking using a distributed network of microphones. In this scenario, there is a trade-off between the localization performance and the expense of resources: in fact, a minimization of the localization error would require to use as many sensors as possible; at the same time, as the number of microphones increases, the cost of the network inevitably tends to grow, while in practical applications only a limited amount of resources is available. Therefore, at each time instant only a subset of the sensors should be enabled in order to meet the cost constraints. We propose a heuristic method for the optimal selection of this subset of microphones, using as distortion metrics the Cramer-Rao lower bound (CRLB) and as cost function the total distance between the selected sensors. The heuristic approach has been compared to an optimal algorithm, which searches the best sensor configuration among the full set of microphones, while satisfying the cost constraint. The proposed heuristic algorithm yields similar performance w.r.t. the full-search procedure, but at a much less computational cost. We show that this method can be used effectively in an acoustic source tracking application. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> INTRODUCTION <s> Microphone arrays sample the sound field in both space and time with the major objective being the extraction of the signal propagating from a desired direction-of-arrival (DOA). In order to reconstruct a spatial sinusoid from a set of discrete samples, the spatial sampling must occur at a rate greater than a half of the wavelength of the sinusoid. This principle has long been adapted to the microphone array context: in order to form an unambiguous beampattern, the spacing between elements in a microphone array needs to conform to this spatial Nyquist criterion. 
The implicit assumption behind the narrowband beampattern is that one may use linearity and Fourier analysis to describe the response of the array to an arbitrary wideband plane wave. In this paper, this assumption is analyzed. A formula for the broadband beampattern is derived. It is shown that in order to quantify the spatial filtering abilities of a broadband array, the incoming signal's bifrequency spectrum must be taken into account, particularly for nonstationary signals such as speech. Multi-dimensional Fourier analysis is then employed to derive the broadband spatial transform, which is shown to be the limiting case of the broadband beampattern as the number of sensors tends to infinity. The conditions for aliasing in broadband arrays are then determined by analyzing the effect of computing the broadband spatial transform with a discrete spatial aperture. It is revealed that the spatial Nyquist criterion has little importance for microphone arrays. Finally, simulation results show that the well-known steered response power (SRP) method is formulated with respect to stationary signals, and that modifications are necessary to properly form steered beams in nonstationary signal environments. <s> BIB003
As technologies relying on distributed (individual) sensor arrays (like the Internet of Things) gain momentum, the questions regarding efficient exploitation of the acquired data become more and more important. A valuable piece of information that could be provided by these arrays is the location of the source of the signal, e.g. an RF emitter or a sound source. In this document, the focus is on the latter use case, localizing a sound source, but the reader is reminded that the discussed methods are essentially agnostic to the signal type, as long as range difference (RD) measurements are available. Specifically, we assume a large-aperture array of distributed mono microphones with potentially different gains, as opposed to distributed (compact) microphone arrays. The array geometry is assumed known in advance, and the microphones are already synchronized/syntonized. We further assume that all captured audio streams are readily available (i.e. a centralized processing architecture). Lastly, we assume the presence of a direct path (line-of-sight) and the overdetermined setting, i.e. the number of speech sources S is smaller than the number of available microphones M in the distributed array. This scenario imposes several technical constraints: 1. The large aperture size implies significant spatial aliasing, which, along with the relatively small number of microphones, seriously degrades the performance of beamforming-based techniques, at least in the narrowband setting BIB003 . Approaches based on distributed beamforming, e.g. BIB002 BIB001 , could still be appealing if they operate in the wideband regime; unfortunately, the literature on beamforming with distributed mono microphones is scarce. 2. The absence of compact arrays prevents traditional Direction-of-Arrival (DOA) estimation.
Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> For the purpose of localizing a distant noisy target, or, conversely, calibrating a receiving array, the time delays defined by the propagation across the array of the target-generated signal wavefronts are estimated in the presence of sensor-to-sensor-independent array self-noise. The Cramer-Rao matrix bound for the vector delay estimate is derived, and used to show that either properly filtered beamformers or properly filtered systems of multiplier-correlators can be used to provide efficient estimates. The effect of suboptimally filtering the array outputs is discussed. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> The problem of position estimation from time difference of arrival (TDOA) measurements occurs in a range of applications from wireless communication networks to electronic warfare positioning. Correlation analysis of the transmitted signal to two receivers gives rise to one hyperbolic function. With more than two receivers, we can compute more hyperbolic functions, which ideally intersect in one unique point. With TDOA measurement uncertainty, we face a non-linear estimation problem. We suggest and compare a Monte Carlo based method for positioning and a gradient search algorithm using a nonlinear least squares framework. The former has the feature of being easily extended to a dynamic framework where a motion model of the transmitter is included. A small simulation study is presented. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> Time delay estimation has been a research topic of significant practical importance in many fields (radar, sonar, seismology, geophysics, ultrasonics, hands-free communications, etc.). It is a first stage that feeds into subsequent processing blocks for identifying, localizing, and tracking radiating sources. This area has made remarkable advances in the past few decades, and is continuing to progress, with an aim to create processors that are tolerant to both noise and reverberation. This paper presents a systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques. We discuss the pros and cons of each individual algorithm, and outline their inherent relationships. We also provide experimental results to illustrate their performance differences in room acoustic environments where reverberation and noise are commonly encountered. <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> In source localization from time difference of arrival, the impact of the sensor array geometry to the localization accuracy is not well understood yet. A first rigorous analysis can be found in B. Yang and J. Scheuing (2005). It derived sufficient and necessary conditions for optimum array geometry in terms of minimum Cramer-Rao bound. 
This paper continues the above work and studies theoretically the localization accuracy of two-dimensional sensor arrays. It addresses different issues: a) optimum vs. uniform angular array b) near-field vs. far-field array c) using all sensor pairs vs. those with a common reference sensor as required from spherical position estimators. The paper ends up with some new insights into the sensor placement problem <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> The accuracy of a source location estimate is very sensitive to the accurate knowledge of receiver locations. This paper performs analysis and develops a solution for locating a moving source using time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA) measurements in the presence of random errors in receiver locations. The analysis starts with the Crameacuter-Rao lower bound (CRLB) for the problem, and derives the increase in mean-square error (MSE) in source location estimate if the receiver locations are assumed correct but in fact have error. A solution is then proposed that takes the receiver error into account to reduce the estimation error, and it is shown analytically, under some mild approximations, to achieve the CRLB accuracy for far-field sources. The proposed solution is closed form, computationally efficient, and does not have divergence problem as in iterative techniques. Simulations corroborate the theoretical results and the good performance of the proposed method <s> BIB005 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> Microphone arrays sample the sound field in both space and time with the major objective being the extraction of the signal propagating from a desired direction-of-arrival (DOA). In order to reconstruct a spatial sinusoid from a set of discrete samples, the spatial sampling must occur at a rate greater than a half of the wavelength of the sinusoid. This principle has long been adapted to the microphone array context: in order to form an unambiguous beampattern, the spacing between elements in a microphone array needs to conform to this spatial Nyquist criterion. The implicit assumption behind the narrowband beampattern is that one may use linearity and Fourier analysis to describe the response of the array to an arbitrary wideband plane wave. In this paper, this assumption is analyzed. A formula for the broadband beampattern is derived. It is shown that in order to quantify the spatial filtering abilities of a broadband array, the incoming signal's bifrequency spectrum must be taken into account, particularly for nonstationary signals such as speech. Multi-dimensional Fourier analysis is then employed to derive the broadband spatial transform, which is shown to be the limiting case of the broadband beampattern as the number of sensors tends to infinity. The conditions for aliasing in broadband arrays are then determined by analyzing the effect of computing the broadband spatial transform with a discrete spatial aperture. It is revealed that the spatial Nyquist criterion has little importance for microphone arrays. 
Finally, simulation results show that the well-known steered response power (SRP) method is formulated with respect to stationary signals, and that modifications are necessary to properly form steered beams in nonstationary signal environments. <s> BIB006 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> In this paper, we show that minimization of the statistical dependence using broadband independent component analysis (ICA) can be successfully exploited for acoustic source localization. As the ICA signal model inherently accounts for the presence of several sources and multiple sound propagation paths, the ICA criterion offers a theoretically more rigorous framework than conventional techniques based on an idealized single-path and single-source signal model. This leads to algorithms which outperform other localization methods, especially in the presence of multiple simultaneously active sound sources and under adverse conditions, notably in reverberant environments. Three methods are investigated to extract the time difference of arrival (TDOA) information contained in the filters of a two-channel broadband ICA scheme. While for the first, the blind system identification (BSI) approach, the number of sources should be restricted to the number of sensors, the other methods, the averaged directivity pattern (ADP) and composite mapped filter (CMF) approaches can be used even when the number of sources exceeds the number of sensors. To allow fast tracking of moving sources, the ICA algorithm operates in block-wise batch mode, with a proportionate weighting of the natural gradient to speed up the convergence of the algorithm. The TDOA estimation accuracy of the proposed schemes is assessed in highly noisy and reverberant environments for two, three, and four stationary noise sources with speech-weighted spectral envelopes as well as for moving real speech sources. <s> BIB007 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramer-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization. <s> BIB008 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. 
<s> We consider the problem of estimating the time differences of arrival (TDOAs) of multiple sources from a two-channel reverberant audio signal. While several clustering-based or angular spectrum-based methods have been proposed in the literature, only relatively small-scale experimental evaluations restricted to either category of methods have been carried out so far. We design and conduct the first large-scale experimental evaluation of these methods and investigate a two-step procedure combining angular spectra and clustering. In addition, we introduce and evaluate five new TDOA estimation methods inspired from signal-to-noise-ratio (SNR) weighting and probabilistic multi-source modeling techniques that have been successful for anechoic TDOA estimation and audio source separation. For 5cm microphone spacing, the best TDOA estimation performance is achieved by one of the proposed SNR-based angular spectrum methods. For larger spacing, a variant of the generalized cross-correlation with phase transform (GCC-PHAT) method performs best. <s> BIB009 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> This tutorial text gives a unifying perspective on machine learning by covering bothprobabilistic and deterministic approaches -which are based on optimization techniques together with the Bayesian inference approach, whose essence liesin the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques: Mean/Least-Squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression and boosting methods. The latest trends: Sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning and latent variables modeling. Case studies - protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization and echo cancellation, show how the theory can be applied. MATLAB code for all the main algorithms are available on an accompanying website, enabling the reader to experiment with the code. <s> BIB010 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. 
<s> The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. <s> BIB011
Due to these constraints, the scope of the review is limited to the family of multilateration methods BIB011 based on TDOA estimation. Fortunately, it has been shown BIB008 that the TOF and TDOA features perform similarly in terms of localization accuracy. Of particular interest is speaker localization within reverberant (indoor) and/or noisy environments. However, TDOA estimation in these conditions is a challenging problem in its own right (especially in the multisource setting), and is out of the scope of this document; the interested reader may consult appropriate references, e.g. BIB003 BIB009 . Distances between microphones are considered to be of the same order as the distances between microphones and source(s), hence we are in the near-field setting. The general formulation of the time-domain signal y_m(t), recorded at the m-th microphone, is given by the convolutional sum

y_m(t) = Σ_{s=1}^{S} Σ_τ a_s^{(m)}(t, τ) x_s(t − τ) + n_m(t),   (1)

where a_s^{(m)}(t, :) is the time-variant Room Impulse Response (RIR) filter, relating the m-th microphone position r_m with the s-th source position r_s, x_s(t) is the signal corresponding to the s-th source, and n_m(t) is the additive noise of the considered microphone. In (1), the microphone gains are absorbed by the RIRs. In practice, various simplifications are commonly used instead of the general expression BIB006 . Commonly, a free-field, time-invariant approximation is adopted; in the single-source case, it is given as follows BIB002 :

y_m(t) = a_m x(t − τ_m) + n_m(t),   (2)

where the offset τ_m denotes the TOF value, which is proportional to the source-microphone distance. The TDOA, corresponding to the difference in propagation delay between the microphones m and m′, with respect to the source s, is defined as τ_{m,m′}^{(s)} := τ_m^{(s)} − τ_{m′}^{(s)}. Naturally, the TDOA measurements could be corrupted by various types of noise, which negatively affects the performance of localization algorithms. Another cause of TDOA localization errors is the inexact knowledge of microphone positions. As shown in BIB005 , the Cramér-Rao lower bound (CRB) BIB010 of the source location estimate increases rather quickly with the increase in the microphone position "noise" (fortunately, somewhat less fast in the near-field than in the far-field setting). Finally, the localization accuracy also depends on the array geometry BIB004 , which is assumed arbitrary in our case. In homogeneous propagation media, the TDOA values τ_{m,m′}^{(s)} translate into range differences (RDs)

d_{m,m′}^{(s)} := c τ_{m,m′}^{(s)} = D_m^{(s)} − D_{m′}^{(s)},   (3)

where c is the propagation speed and D_m^{(s)} denotes the distance between the source s and the microphone m. Thus, the observed RDs also suffer from measurement errors, usually modeled as an additive noise. Note that the observation model (3) defines a two-sheet hyperboloid with respect to r_s, with foci at r_m and r_m′. Given the observations {d_{m,m′}^{(s)}}, the goal is to estimate the source position r_s. In the multisource setting, multiple sets of RDs are assumed available, and the localization of each source is to be done independently of the rest. Such measurements could be obtained by multisource TDOA estimation algorithms, e.g. BIB007 . Thus, without loss of generality, we will only discuss the single-source setting (s = 1). In the noiseless case, the number of linearly independent RD observations is equal to M − 1, but considering the full set of observations (of size M(M−1)/2) may be useful for alleviating the harmful effects of measurement noise BIB001 . Usually, the first microphone is chosen to be the reference point, e.g. r_1 = 0, where 0 is the null vector. By denoting r := r_s, from (3) we have

d_{1,m′} = ‖r‖ − ‖r − r_{m′}‖,   m′ = 2, …, M.   (4)

In the following sections, we discuss different types of source location estimators and methods to calculate them.
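The following Python sketch illustrates the observation model (3)-(4) by generating synthetic RDs for a hypothetical layout; the room size, microphone count, and the roughly 2 cm RD noise scale are illustrative assumptions rather than values taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M distributed microphones and one source in a 6 x 5 x 3 m room.
M = 6
mics = rng.uniform(low=[0, 0, 0], high=[6, 5, 3], size=(M, 3))
mics[0] = 0.0                      # reference microphone at the origin (r_1 = 0)
source = np.array([2.5, 1.8, 1.2])

c = 343.0                          # speed of sound [m/s]
dists = np.linalg.norm(source - mics, axis=1)   # D_m
rd_clean = dists[0] - dists[1:]                 # d_{1,m'} = ||r|| - ||r - r_{m'}||, model (4)
tdoa = rd_clean / c                             # corresponding TDOAs, model (3)

sigma = 0.02                                    # assumed RD noise std [m] (~2 cm)
rd_noisy = rd_clean + sigma * rng.standard_normal(M - 1)

print("clean RDs [m]:", np.round(rd_clean, 3))
print("noisy RDs [m]:", np.round(rd_noisy, 3))
```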
Multilateration -- source localization from range difference measurements: Literature survey <s> MAXIMUM LIKELIHOOD ESTIMATION <s> An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> MAXIMUM LIKELIHOOD ESTIMATION <s> A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model. The method, which can be easily implemented in a real-time system with moderate computational complexity, yields an efficient source location estimator without assuming a priori knowledge of noise distribution. Alternative existing estimators, including likelihood-based, spherical intersection, spherical interpolation, and quadratic-correction least-squares estimators, are reviewed and comparisons of their complexity, estimation consistency and efficiency against the Cramer-Rao lower bound are made. Numerical studies demonstrate that the proposed estimator performs better under many practical situations. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> MAXIMUM LIKELIHOOD ESTIMATION <s> A fundamental requirement of microphone arrays is the capability of instantaneously locating and continuously tracking a speech sound source. The problem is challenging in practice due to the fact that speech is a nonstationary random process with a wideband spectrum, and because of the simultaneous presence of noise, room reverberation, and other interfering speech sources. This Chapter presents an overview of the research and development on this technology in the last three decades. Focusing on a two-stage framework for speech source localization, we survey and analyze the state-of-the-art time delay estimation (TDE) and source localization algorithms. <s> BIB003
Since the observations (4) are non-linear, a statistically efficient estimate (i.e. the one that attains the CRB) may not be available. The common approach is to seek the maximum likelihood (ML) estimator instead. Let r̂ and d̂_{1,m′}(r̂) denote the estimated source position and the corresponding RD, respectively: d̂_{1,m′}(r̂) := ‖r̂‖ − ‖r̂ − r_{m′}‖. Under the hypothesis that the observation noise is Gaussian, the ML estimator is given as the minimizer of the negative log-likelihood BIB001 BIB002 :

r̂_ML = argmin_{r̂} (d − d̂(r̂))^T Σ^{−1} (d − d̂(r̂)),   (5)

where d := [d_{1,2}, …, d_{1,M}]^T collects the observed RDs, d̂(r̂) := [d̂_{1,2}(r̂), …, d̂_{1,M}(r̂)]^T collects the model RDs, and Σ is the covariance matrix of the measurement noise. Note, however, that the Gaussian noise assumption for the RD measurements may not hold. For instance, digital quantization effects can induce RD errors on the order of 2 cm BIB003 . Moreover, the ML estimators are proven to attain the CRB in the asymptotic regime, whereas the number of microphones (i.e. the number of RDs) is often small. Therefore, non-statistical estimators, such as least squares, are often used in practice instead. Anyhow, in this section we discuss two families of methods proposed for TDOA maximum likelihood estimation: the ones that aim at solving the non-convex problem (5) directly, and the ones based on convex relaxations.
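A minimal numerical sketch of the ML estimation in (5) is given below, assuming i.i.d. Gaussian RD noise and a generic local optimizer; the scene, noise level, and initialization are illustrative assumptions, and, as discussed above, convergence of a local method to the global minimizer of the non-convex cost is not guaranteed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical scene: reference microphone at the origin plus 5 more, one source.
mics = np.vstack([np.zeros(3), rng.uniform(0, 5, size=(5, 3))])
source = np.array([2.0, 3.0, 1.0])

def model_rd(r, mics):
    """Model RDs d_{1,m'}(r) = ||r|| - ||r - r_{m'}|| for m' = 2..M, cf. (4)."""
    return np.linalg.norm(r) - np.linalg.norm(r - mics[1:], axis=1)

sigma = 0.02                                   # assumed i.i.d. RD noise std [m]
d_obs = model_rd(source, mics) + sigma * rng.standard_normal(len(mics) - 1)
Sigma_inv = np.eye(len(d_obs)) / sigma**2      # inverse noise covariance (i.i.d. assumption)

def neg_log_likelihood(r):
    resid = d_obs - model_rd(r, mics)
    return resid @ Sigma_inv @ resid           # cost (5), up to additive constants

# Local minimization from a rough initial guess (here, the array centroid).
res = minimize(neg_log_likelihood, x0=mics.mean(axis=0), method="Nelder-Mead")
print("estimated source:", np.round(res.x, 2), " true source:", source)
```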
Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> Taylor-series estimation gives a least-sum-squared-error solution to a set of simultaneous linearized algebraic equations. This method is useful in solving multimeasurement mixed-mode position-location problems typical of many navigational applications. While convergence is not proved, examples show that most problems do converge to the correct solution from reasonable initial guesses. The method also provides the statistical spread of the solution errors. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> Three noniterative techniques are presented for localizing a single source given a set of noisy range-difference measurements. The localization formulas are derived from linear least-squares "equation error" minimization, and in one case the maximum likelihood bearing estimate is approached. Geometric interpretations of the equation error norms minimized by the three methods are given, and the statistical performances of the three methods are compared via computer simulation. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This paper studies the problem of sound source localization in a distributed wireless sensor network formed by mobile general purpose computing and communication devices with audio I/O capabilities. In contrast to well understood localization methods based on dedicated microphone arrays, in our setting sound localization is performed using a sparse array of arbitrary placed sensors (in a typical scenario, localization is performed by several laptops/PDAs co-located in a room). Therefore any far-field assumptions are no longer valid in this situation. Additionally, localization algorithm's performance is affected by uncertainties in sensor position and errors in A/D synchronization. The proposed source localization algorithm consists of two steps. In the first step, time differences of arrivals (TDOAs) are estimated for the microphone pairs, and in the second step the maximum likelihood (ML) estimation for the source position is performed. We evaluate the Cramer-Rao bound (CRB) on the variance of the location estimation and compare it with simulations and experimental results. 
We also discuss the effects of distributed array geometry and errors in sensor positions on the performance of the localization algorithm. The performances of the system are likely to be limited by errors in sensor locations and increase when the microphones have a large aperture with respect to the source. <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> In source localization from time difference of arrival, the impact of the sensor array geometry to the localization accuracy is not well understood yet. A first rigorous analysis can be found in B. Yang and J. Scheuing (2005). It derived sufficient and necessary conditions for optimum array geometry in terms of minimum Cramer-Rao bound. This paper continues the above work and studies theoretically the localization accuracy of two-dimensional sensor arrays. It addresses different issues: a) optimum vs. uniform angular array b) near-field vs. far-field array c) using all sensor pairs vs. those with a common reference sensor as required from spherical position estimators. The paper ends up with some new insights into the sensor placement problem <s> BIB005 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> Sensors at separate locations measuring either the time difference of arrival (TDOA) or time of arrival (TOA) of the signal from an emitter can determine its position as the intersection of hyperbolae for TDOA and of circles for TOA. Because of measurement noise, the nonlinear localization equations become inconsistent; and the hyperbolae or circles no longer intersect at a single point. It is now necessary to find an emitter position estimate that minimizes its deviations from the true position. Methods that first linearize the equations and then perform gradient searches for the minimum suffer from initial condition sensitivity and convergence difficulty. Starting from the maximum likelihood (ML) function, this paper derives a closed-form approximate solution to the ML equations. When there are three sensors on a straight line, it also gives an exact ML estimate. Simulation experiments have demonstrated that these algorithms are near optimal, attaining the theoretical lower bound for different geometries, and are superior to two other closed form linear estimators. <s> BIB006 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This paper proposes a new type of algorithm aimed at finding the traditional maximum-likelihood (TML) estimate of the position of a target given time-difference-of-arrival (TDOA) information, contaminated by noise. The novelty lies in the fact that a performance index, akin to but not identical with that in maximum likelihood (ML), is a minimized subject to a number of constraints, which flow from geometric constraints inherent in the underlying problem. The minimization is in a higher dimensional space than for TML, and has the advantage that the algorithm can be very straightforwardly and systematically initialized. Simulation evidence shows that failure to converge to a solution of the localization problem near the true value is less likely to occur with this new algorithm than with TML. This makes it attractive to use in adverse geometric situations. 
<s> BIB007 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> We consider the problem of target localization by a network of passive sensors. When an unknown target emits an acoustic or a radio signal, its position can be localized with multiple sensors using the time difference of arrival (TDOA) information. In this paper, we consider the maximum likelihood formulation of this target localization problem and provide efficient convex relaxations for this nonconvex optimization problem. We also propose a formulation for robust target localization in the presence of sensor location errors. Two Cramer-Rao bounds are derived corresponding to situations with and without sensor node location errors. Simulation results confirm the efficiency and superior performance of the convex relaxation approach as compared to the existing least squares based approach when large sensor node location errors are present. <s> BIB008 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> We consider the source localization problem using time-difference-of-arrival (TDOA) measurements in sensor networks. The maximum likelihood (ML) estimation of the source location can be cast as a nonlinear/nonconvex optimization problem, and its global solution is hardly obtained. In this paper, we resort to the Monte Carlo importance sampling (MCIS) technique to find an approximate global solution to this problem. To obtain an efficient importance function that is used in the technique, we construct a Gaussian distribution and choose its probability density function (pdf) as the importance function. In this process, an initial estimate of the source location is required. We reformulate the problem as a nonlinear robust least squares (LS) problem, and relax it as a second-order cone programming (SOCP), the solution of which is used as the initial estimate. Simulation results show that the proposed method can achieve the Cramer-Rao bound (CRB) accuracy and outperforms several existing methods. <s> BIB009 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramer-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization. <s> BIB010 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This paper proposes two methods to reduce the bias of the well-known algebraic explicit solution (Chan and Ho, "A simple and efficient estimator for hyperbolic location," IEEE Trans. 
Signal Process., vol. 42, pp. 1905-1915, Aug. 1994) for source localization using TDOA. Bias of a source location estimate is significant when the measurement noise is large and the geolocation geometry is poor. Bias also dominates performance when multiple times of independent measurements are available such as in UWB localization or in target tracking. The paper starts by deriving the bias of the source location estimate from Chan and Ho. The bias is found to be considerably larger than that of the Maximum Likelihood Estimator. Two methods, called BiasSub and BiasRed, are developed to reduce the bias. The BiasSub method subtracts the expected bias from the solution of Chan and Ho's work, where the expected bias is approximated by the theoretical bias using the estimated source location and noisy data measurements. The BiasRed method augments the equation error formulation and imposes a constraint to improve the source location estimate. The BiasSub method requires the exact knowledge of the noise covariance matrix and BiasRed only needs the structure of it. Analysis shows that both methods reduce the bias considerably and achieve the CRLB performance for distant source when the noise is Gaussian and small. The BiasSub method can nearly eliminate the bias and the BiasRed method is able to lower the bias to the same level as the Maximum Likelihood Estimator. The BiasRed method is extended for TDOA and FDOA positioning. Simulations corroborate the performance of the proposed methods. <s> BIB011 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technology). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered as the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerically efficient) strategy, for airport surface surveillance, has to be composed of two specific kind of algorithms. Finally, an accuracy analysis, by using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided. 
<s> BIB012 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This tutorial text gives a unifying perspective on machine learning by covering bothprobabilistic and deterministic approaches -which are based on optimization techniques together with the Bayesian inference approach, whose essence liesin the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques: Mean/Least-Squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression and boosting methods. The latest trends: Sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning and latent variables modeling. Case studies - protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization and echo cancellation, show how the theory can be applied. MATLAB code for all the main algorithms are available on an accompanying website, enabling the reader to experiment with the code. <s> BIB013 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> The problem of estimating receiver or sender node positions from measured receiver-sender distances is a key issue in different applications such as microphone array calibration, radio antenna array calibration, mapping and positioning using UWB or using round-trip-time measurements between mobile phones and WiFi-units. In this paper we address the problem of optimally estimating a receiver position given a number of distance measurements to known sender positions, so called trilateration. We show that this problem can be rephrased as an eigenvalue problem. We also address different error models and the multilateration setting where an additional offset is also unknown, and show that these problems can be modeled using the same framework. <s> BIB014
The problem (5) is difficult to solve directly, due to the nonlinear dependence of the RDs {d_{1,m'}(r̂)} on the position variable r̂. Early approaches, based on iterative schemes such as linearized gradient descent and the Levenberg-Marquardt algorithm BIB001 BIB004, suffer from sensitivity to initialization, increased computational complexity and ill-conditioning (though the latter can be improved using regularization techniques BIB012). The method proposed in BIB007 exploits the correlation among the noise terms of different TDOA measurements, and defines a constrained ML cost function tackled by a Newton-like algorithm. According to simulation results, it is more robust to adverse localization geometries BIB005 than BIB001 or the least squares methods BIB002. Another advantage of this method is the straightforward way in which it provides the initial estimate (however, as usual, global convergence cannot be guaranteed). In the pioneering article BIB003, the authors proposed a closed-form, two-stage approach that approximates the solution of (5). First, the (weighted) unconstrained least-squares solution (to be explained in the next section) is computed, which is then improved by exploiting the relation between the estimates of the position vector and its magnitude. Due to the unconstrained LS estimation, the minimal number of microphones is 5 in three dimensions. It has been shown BIB003 that the method attains the CRB at high to moderate Signal-to-Noise Ratios (SNRs). Unfortunately, it suffers from a nonlinear "threshold effect": its performance quickly deteriorates at low SNRs. To address this, an approximate but more stable version of this ML method has been proposed in BIB006. In addition, the estimator BIB003 comes with a large bias BIB012, which cannot be reduced by increasing the number of measurements. This bias has been theoretically evaluated and reduced in BIB011. The method proposed in BIB009 uses Monte Carlo importance sampling techniques BIB013 to approximate the solution of the problem (5). As an initial point, it uses the estimate computed by a convex relaxation method. According to simulation experiments, its localization performance is on par with the convex method BIB008, but at a much lower computational complexity. A very recent article BIB014 proposes a linearization approach that casts the original problem into an eigenvalue problem, which can be solved optimally in closed form. Additionally, the authors propose an Iterative Reweighted Least Squares scheme that approximates the ML estimate for different noise distributions.
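As an illustration of the iterative ML approach discussed above, the following is a minimal sketch (not taken from any of the surveyed papers) that minimizes the white-noise ML cost with a local nonlinear least-squares solver. The microphone layout, the source position, the function names and the use of scipy are assumptions made for the example only:

    # Illustrative sketch: ML-type TDOA localization under white Gaussian noise,
    # solved with an iterative (Gauss-Newton-like) local solver. A good initial
    # estimate is required, as discussed above.
    import numpy as np
    from scipy.optimize import least_squares

    C = 343.0  # propagation speed (m/s), assumed known

    def rd_model(r_hat, mics):
        # Model RDs d_{1,m}(r_hat) = ||r_hat - r_m|| - ||r_hat - r_1||, m = 2..M
        dists = np.linalg.norm(mics - r_hat, axis=1)
        return dists[1:] - dists[0]

    def ml_localize(tdoas, mics, r_init):
        # Minimize sum_m (C*tdoa_m - d_{1,m}(r))^2, i.e. the ML cost for white noise
        residuals = lambda r_hat: C * tdoas - rd_model(r_hat, mics)
        return least_squares(residuals, r_init).x

    # Toy example with 5 microphones and noiseless measurements
    mics = np.array([[0., 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3], [3, 3, 3]])
    source = np.array([1.0, 2.0, 1.5])
    tdoas = rd_model(source, mics) / C
    print(ml_localize(tdoas, mics, r_init=np.array([1.5, 1.5, 1.5])))

With a poor initial estimate or an adverse geometry, the solver may converge to a spurious local minimizer, which is exactly the sensitivity to initialization mentioned above.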
Convex relaxations
Another important line of work consists of methods based on convex relaxations of the ML estimation problem. In other words, the original problem is approximated by a convex one BIB004, which is usually far easier to solve. Two families of approaches dominate this field: methods based on semidefinite programming (SDP), and those relaxing the original task into a second-order cone programming (SOCP) problem. In the former, the non-convex quadratic problem (5) is first lifted such that the non-convexity appears as a rank-1 constraint, which is then replaced by a positive semidefinite one. Lifting is a problem reformulation by the variable substitution G = g g^T, where g is the original optimization variable (the term lifting is used to emphasize that the problem is now defined in a higher-dimensional space). On the other hand, solving SDP optimization problems can be computationally expensive, and the SOCP framework has been proposed as a compromise between approximation quality and computational complexity (cf. BIB003 for technical details). One of the first convex relaxation approaches for TDOA localization is BIB005, based on SDP. The algorithm requires knowledge of the microphone closest to the source, in order to ensure that all RDs (with that microphone as a reference) are positive. The article BIB006 discusses three convex relaxation methods. The first one, based on an SOCP relaxation, is computationally efficient, but restricts the solution to the convex hull BIB004 of the microphone positions. The other two, SDP-based, remove this restriction, but are somewhat more computationally demanding. In addition, one of these is a robust version: it minimizes the worst-case error due to imprecise microphone locations. The latter requires tuning of several hyperparameters, among which is the variance of the microphone positioning error. All three versions are based on a white Gaussian noise model for the TDOA measurements; however, whitening could be applied in order to support the correlated noise case. However, the SDP solutions are not the final output of the algorithms, but are used to initialize a nonlinear iterative scheme, such as BIB001. Interestingly, a recent article BIB007 has shown that the ideas of the direct approach BIB002 and the constrained least-squares approach from Section 2.1 can be combined. Moreover, the cost function can be cast as a convex problem, for which an interior-point method has been proposed. In practice, however, it is a compound algorithm which iteratively solves a sequence of convex problems in order to re-calculate a weighting matrix that depends on the estimated source position. The accuracy depends on the number of iterations, which, in turn, increases the computational complexity. As with BIB002, it requires 5 microphones for 3D localization.
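To make the lifting idea concrete, the following is a schematic sketch of a semidefinite relaxation applied to the squared range-difference least-squares formulation discussed later in the text. It is not the exact relaxation of any single surveyed paper; the function name is hypothetical, the cvxpy package (with its default SDP-capable solver) is assumed, and the reference microphone is taken to be the first one:

    # Lifting + SDP relaxation sketch: the rank-1 coupling G = g g^T is relaxed
    # to the PSD condition [[G, g], [g^T, 1]] >= 0, as described above.
    import numpy as np
    import cvxpy as cp

    def sdr_localize(mics, rds):
        # mics: (M,3) positions; rds[m-2] = d_{1,m} = ||r - r_m|| - ||r - r_1||
        r1, rm = mics[0], mics[1:]
        # Linear equations 2(r_m - r_1)^T r + 2 d_{1,m} D = b_m in g = [r, D],
        # with D := ||r - r_1|| and b_m = ||r_m||^2 - ||r_1||^2 - d_{1,m}^2.
        A = np.hstack([2 * (rm - r1), 2 * rds[:, None]])
        b = np.sum(rm**2, axis=1) - np.sum(r1**2) - rds**2
        X = cp.Variable((5, 5), PSD=True)      # X = [[G, g], [g^T, 1]]
        G, g = X[:4, :4], X[:4, 4]
        cost = cp.trace(A.T @ A @ G) - 2 * b @ (A @ g)
        constraints = [
            X[4, 4] == 1,
            g[3] >= 0,
            # Lifted version of the coupling constraint D^2 = ||r - r_1||^2:
            G[3, 3] == G[0, 0] + G[1, 1] + G[2, 2] - 2 * r1 @ g[:3] + r1 @ r1,
        ]
        cp.Problem(cp.Minimize(cost), constraints).solve()
        return g.value[:3]

If the solution happens to satisfy G = g g^T (rank 1), the relaxation is tight; otherwise g serves as an approximate estimate or as an initial point for a local refinement, in the spirit of the methods reviewed above.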
LEAST-SQUARES ESTIMATION
Largely due to its computational convenience, least-squares (LS) estimation is often the preferred parameter estimation approach. It is noteworthy that all LS approaches optimize a somewhat "artificial" estimation objective, which can induce large errors in very low SNR conditions, when the measurement noise is not white, and/or for some adverse array geometries BIB001 BIB002 BIB003. Three types of cost functions are discussed: hyperbolic, spherical and conic LS.
Hyperbolic LS
The goal is to minimize the sum of squared distances ε_h between the true and estimated RDs BIB003:

ε_h(r̂) = Σ_{m'} ( d_{1,m'} - d_{1,m'}(r̂) )^2,    (6)

which is analogous to the ML estimation problem (5) for Σ = I, with I being the identity matrix. Thus, in the case of white Gaussian noise, the hyperbolic LS solution coincides with the ML solution. Otherwise, solving (6) comes down to finding the point r̂ whose distance to all hyperboloids d_{1,m'}, defined in (4), is minimal. However, the hyperbolic LS problem is also non-convex, and its global solution cannot be guaranteed. Instead, local minimizers are found by iterative procedures, such as (nonlinear) gradient descent or particle filtering BIB001 BIB002. Obviously, the quality of the output of such algorithms depends on their initial estimates, the choice of which is usually guided by the application rather than by mathematical considerations.
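A bare-bones example of such an iterative procedure, here plain gradient descent on the cost (6), is sketched below. The function name, step size and iteration count are ad hoc choices for illustration only, and convergence to the global minimizer is not guaranteed, in line with the discussion above:

    # Plain gradient descent on the hyperbolic LS cost (6); the result depends
    # on the initial estimate r0, reflecting the non-convexity discussed above.
    import numpy as np

    def hyperbolic_ls_gd(rds, mics, r0, step=0.05, iters=2000):
        # rds[m-2] = measured d_{1,m}; mics: (M,3); r0: initial position guess
        r = np.array(r0, dtype=float)
        for _ in range(iters):
            diff = r - mics                      # (M, 3)
            dist = np.linalg.norm(diff, axis=1)  # distances to all microphones
            units = diff / dist[:, None]         # u_m = (r - r_m) / ||r - r_m||
            res = rds - (dist[1:] - dist[0])     # residuals d_{1,m} - d_{1,m}(r)
            grad = -2 * np.sum(res[:, None] * (units[1:] - units[0]), axis=0)
            r -= step * grad
        return r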
Spherical LS
By squaring the idealized RD measurement expression (4), followed by some simple algebraic manipulations (taking, without loss of generality, the reference microphone as the coordinate origin, so that its distance to the source equals ||r||), we have, for m = 2, ..., M,

2 r_m^T r + 2 d_{1,m} ||r|| + d_{1,m}^2 - ||r_m||^2 = 0.

The interest of this operation is in the decoupling of the position vector and its magnitude, which are to be replaced by their estimates r̂ and D̂ := ||r̂||, respectively. The goal now becomes driving the sum of squared left-hand sides (over all microphones) to zero:

min_{r̂, D̂}  Σ_m ( 2 r_m^T r̂ + 2 d_{1,m} D̂ + d_{1,m}^2 - ||r_m||^2 )^2,    (7)

which leads to the following (compactly written) constrained optimization problem BIB005:

min_{ĉ}  ||Φ ĉ - ψ||^2   subject to   ĉ_(1) ≥ 0,  ĉ_(1)^2 = ĉ_(2)^2 + ĉ_(3)^2 + ĉ_(4)^2,    (8)

where ĉ := [D̂, r̂^T]^T, the rows of Φ are [2 d_{1,m}, 2 r_m^T], the entries of ψ are ||r_m||^2 - d_{1,m}^2, and ĉ_(1) denotes the first entry of the column vector ĉ. In the literature, the problem above is tackled as:

Unconstrained LS: by ignoring the constraints relating the position estimate r̂ and its magnitude D̂, the problem (7) admits the closed-form solution ĉ = (Φ^T Φ)^{-1} Φ^T ψ. As pointed out in BIB004 BIB006, several well-known estimation algorithms BIB001 actually yield the unconstrained LS estimate. A minimum of M = 5 microphones (i.e. four RD measurements) is required, in three dimensions, in order for Φ^T Φ to be an invertible matrix.

Constrained LS: While the unconstrained LS is simple and computationally efficient, its estimate is known to have a large variance compared to the CRB BIB006, hence the interest in solving the constrained problem. Unfortunately, (8) is non-convex due to the quadratic constraint. To directly incorporate the constraint(s), a Lagrangian-based iterative method has been proposed in BIB003, albeit without any performance guarantees. Later, in their seminal paper BIB005, Beck and Stoica provided a closed-form global solution of the problem, and demonstrated that it gives an orders-of-magnitude more accurate solution (at an increased computational cost) than the unconstrained LS estimator. Moreover, the results in BIB008 indicate that it is generally more accurate than the two-stage ML solution BIB002.
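Following the unconstrained variant above (and the convention, adopted there, that the reference microphone sits at the coordinate origin), the closed-form spherical LS estimate can be sketched in a few lines. This is an illustrative implementation with a hypothetical function name, not code from the cited works:

    # Unconstrained spherical LS: closed-form solution of (7) with the coupling
    # between r_hat and D_hat ignored. Requires M >= 5 microphones in 3D.
    import numpy as np

    def spherical_ls(rds, mics):
        # mics: (M,3) with mics[0] at the origin (translate coordinates if not);
        # rds[m-2] = d_{1,m}, m = 2..M
        rm = mics[1:]
        Phi = np.hstack([2 * rds[:, None], 2 * rm])        # rows [2 d_{1,m}, 2 r_m^T]
        psi = np.sum(rm**2, axis=1) - rds**2               # ||r_m||^2 - d_{1,m}^2
        c_hat, *_ = np.linalg.lstsq(Phi, psi, rcond=None)  # c_hat = [D_hat, r_hat^T]
        return c_hat[1:]                                   # position estimate r_hat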
Conic LS
In an early work, Schmidt has shown that (in two dimensions) the RDs of three known microphones define the major axis of a general conic on which the corresponding microphones lie. In addition, the source is positioned at its focus. In three dimensions, this axis becomes a plane containing the source. A fourth (non-coplanar) microphone is needed to infer the source position r, by calculating the intersection coordinates of three such planes BIB002. Thus, the method attains the theoretical minimum for the required number of microphones for TDOA localization. To illustrate the approach, let one such triplet of microphones be described by (r_1, r_2, r_3) and (D_1, D_2, D_3), their position vectors and distances to the source, respectively. For each pair (i, j) of these microphones, we have the following expression for the product of the range sum Σ_{i,j} := D_i + D_j and the RD d_{i,j}:

Σ_{i,j} d_{i,j} = D_j^2 - D_i^2 = ||r_j||^2 - ||r_i||^2 - 2 (r_j - r_i)^T r.    (9)

By rearranging the terms in (9), and using d_{k,i} = Σ_{i,j} - Σ_{j,k}, the range sums can be eliminated. Eventually, this gives the aforementioned plane equation

( d_{2,3} r_1 + d_{3,1} r_2 + d_{1,2} r_3 )^T r = ( d_{2,3} ||r_1||^2 + d_{3,1} ||r_2||^2 + d_{1,2} ||r_3||^2 + d_{1,2} d_{2,3} d_{3,1} ) / 2.    (10)

This is a linear equation in three unknowns, thus the exact solution is obtained when three such triplets (i.e. four non-coplanar microphones) are available. Browsing the literature, we found that exactly the same closed-form approach has been recently reinvented in the highly cited article BIB003, some 30 years after Schmidt's original paper. For M microphones, one ends up with (M choose 3) such equations (in 3D); the classical LS solution is to stack them into matrix form and calculate the position r by applying the Moore-Penrose pseudoinverse. Let A_pqr, B_pqr, C_pqr and F_pqr denote the coefficients and the right-hand side of the expression (10) for the microphone triplet {p, q, r}. For all such triplets, we have

Ψ r = ψ,

where the rows of Ψ are [A_pqr  B_pqr  C_pqr], the entries of ψ are F_pqr, and

A_pqr = d_{q,r} r_p(1) + d_{r,p} r_q(1) + d_{p,q} r_r(1),
B_pqr = d_{q,r} r_p(2) + d_{r,p} r_q(2) + d_{p,q} r_r(2),
C_pqr = d_{q,r} r_p(3) + d_{r,p} r_q(3) + d_{p,q} r_r(3),

with F_pqr equal to the right-hand side of (10). However, such an LS solution is strongly influenced by the triplets having large A, B, C or F values. Instead, as has been proposed, the matrix Ψ needs to be preprocessed prior to computing the pseudoinverse: its rows should be scaled by 1/sqrt(A^2 + B^2 + C^2), as should the corresponding entries of the vector ψ. Likewise, the presence of noise in the TDOA measurements d_{i,j} can seriously degrade the localization accuracy. In that case, the observation model (3) contains an additive noise term, which varies across different measurements, rendering them inconsistent. This means that the intrinsic redundancy within the TDOAs, e.g. d_{i,k} = d_{i,j} + d_{j,k}, no longer holds. In the noiseless case, the vector d of concatenated TDOA measurements lies in the range space of a simple first-order difference matrix BIB001, specified by (3) and the ordering of the distances D_m. Thus, the measurements can be preconditioned by replacing them with the closest feasible TDOAs, in the LS sense. This is done by projecting the measured d onto the range space of the finite difference matrix, or, equivalently, by the technique called "TDOA averaging" BIB001.
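The plane-intersection solution can be sketched directly from the (reconstructed) equations (9)-(10). The example below, an illustrative implementation with a hypothetical function name, builds one plane per microphone triplet, applies the row normalization discussed above, and fills the pairwise RDs from the reference-based measurements using the redundancy relation (which holds only approximately under noise):

    # Conic (plane-intersection) LS sketch based on the plane equation (10).
    import numpy as np
    from itertools import combinations

    def conic_ls(mics, rds1):
        # mics: (M,3); rds1[m-2] = d_{1,m} = D_m - D_1 for m = 2..M
        M = len(mics)
        d1 = np.concatenate([[0.0], rds1])
        d = d1[None, :] - d1[:, None]        # d[i,j] = d_{i,j}, via the redundancy
        n2 = np.sum(mics**2, axis=1)         # ||r_m||^2
        rows, rhs = [], []
        for p, q, r in combinations(range(M), 3):
            a = d[q, r] * mics[p] + d[r, p] * mics[q] + d[p, q] * mics[r]
            f = 0.5 * (d[q, r] * n2[p] + d[r, p] * n2[q] + d[p, q] * n2[r]
                       + d[p, q] * d[q, r] * d[r, p])
            scale = np.linalg.norm(a)
            if scale > 1e-12:                # skip (near-)degenerate triplets
                rows.append(a / scale)
                rhs.append(f / scale)
        Psi, psi = np.array(rows), np.array(rhs)
        pos, *_ = np.linalg.lstsq(Psi, psi, rcond=None)
        return pos

With noisy TDOAs, applying the "TDOA averaging" projection mentioned above before forming the planes would restore the consistency of the pairwise RDs used here.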
A survey of positioning techniques and location based services in wireless networks

I. INTRODUCTION
Numerous geolocation technologies are used to estimate the geographical position of a client (person or object). The large diversification of existing wireless Radio Access Technologies (RAT) and the increasing number of wireless-enabled devices are promoting the extensive application of Location Based Services (LBS) BIB009 . Position-dependent services include: emergency services such as rescue response BIB005 and security alerts, entertainment services like mobile gaming, medical applications BIB007 and a wide variety of other applications. The Global Positioning System (GPS) is the most common technology supporting outdoor location services. Satellites orbiting the Earth continuously broadcast their own position and direction. The broadcast signals are used by receivers to estimate satellite positions as well as the distance between each satellite and the receiver. Given these distance measurements, trilateration BIB001 is usually used to estimate the receiver's position. The accuracy of the estimated position depends on the number of visible satellites. Hence, GPS does not work well for indoor positioning, and its performance also depends on weather conditions. Base stations of mobile terrestrial radio access networks, such as the Global System for Mobile communications (GSM), are the reference points for mobile client localization. Cell-Identification (Cell-ID) estimates a client's position in cellular networks using the geographical coordinates of its serving base station BIB008 BIB004 BIB006 . Other positioning methods are based on a fingerprinting database and make use of Received Signal Strength (RSS) measurements BIB003 BIB014 BIB012 . However, positioning accuracy is constrained by interference, multipath and non-line-of-sight (NLOS) propagation. A Wireless Local Area Network (WLAN) offers connectivity and Internet access for wireless-enabled clients within its coverage area. For example, IEEE 802.11 BIB016 (commonly known as Wi-Fi) is widely deployed, and it is also used for localizing Wi-Fi-enabled devices. The fingerprinting approach is often used in Wi-Fi positioning systems BIB010 BIB013 . It is based on Received Signal Strength Indication (RSSI) measurements in the localization area. Positioning systems using Wi-Fi are considered cost-effective and practical solutions for indoor location tracking and estimation BIB011 . Positioning techniques are compared using several metrics BIB004 such as applicability, latency, reliability, accuracy and cost. A technique is more accurate when the estimated position of a client is closer to the real geographical position. Positioning accuracy is becoming more important with the increasing use of position-dependent applications; indeed, it is crucial for emergency location services. Hence, hybrid positioning systems, such as BIB015 BIB002 , are introduced to improve the accuracy and reliability of existing localization technologies. In this paper, we describe the main positioning techniques used in satellite networks such as GPS, in mobile networks such as GSM, and in wireless local area networks such as Wi-Fi. The coexistence of several wireless radio access technologies in the same area allows the introduction of hybrid positioning systems and promotes the diversification of position-dependent services. We explain some of the hybrid localization techniques that combine information received from different radio access technologies in order to improve positioning accuracy.
Such improvements increase user satisfaction and make LBS more robust and efficient. We also classify these services into several categories. The rest of the paper is organized as follows: Section II explains the principles behind the positioning techniques used in satellite and mobile networks. Wi-Fi localization methods are reported in Section III. Hybrid positioning systems are described in Section IV. Section V contains a classification of LBS. Concluding remarks are given in Section VI.
A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> In this paper, time of arrival (TOA) and angle of arrival (AOA) errors in four typical cellular environments are analyzed and modeled. Based on the analysis, a hybrid TOA/AOA positioning (HTAP) algorithm, which utilizes TOA and AOA information delivered by serving base stations (BS), is proposed. The performance of the related positioning algorithms is simulated. It is shown that when the MS is close to the serving BS, HTAP will produce an accurate location estimate. When MS is far from the serving BS, the location estimate obtained by HTAP can be used as an initial location in our system to help a least square (LS) algorithm converge easily. When there are more than three TOA detected, weights and TOA numbers used in the LS algorithm should be dynamically adjusted according to the distance between MS and serving BS and the propagation environment for better positioning performance. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Currently in development, numerous geolocation technologies can pinpoint a person's or object's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation based applications to track personnel, vehicles, and other assets. The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point. Location technologies requiring new modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. It is argued that assisted-GPS technology offers superior accuracy, availability, and coverage at a reasonable cost. <s> BIB002 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Cellular location methods based on angle of arrival (AOA) or time difference (e.g. E-OTD) measurements assume line-of-sight propagation between base stations and the mobile station. This assumption is not valid in urban microcellular environments. We present a database correlation method (DCM) that can utilize any location-dependent signals available in cellular systems. This method works best in densely built urban areas. An application of DCM to GSM, using signal strength measurements, is described and trial results from urban and suburban environments are given. Comparison with AOA and E-OTD trials shows that DCM is a competitive alternative for GSM location in urban and suburban environments. 
<s> BIB003 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper deals with the problem of estimating the position of a user equipment operating in a wireless communication network. We present a new positioning method based on the angles of arrival (AOA) measured in several radio links between that user equipment and different base stations. The proposed AOA-based method leads us to a non-iterative closed-form solution of the positioning problem, and an statistical analysis of that solution is also included. The comparison between this method and the classical AOA-based positioning technique is discussed in terms of computational load, convergence of the solution and also in terms of the bias and variance of the position estimate. <s> BIB004 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper presents a comparison of error characteristics between time of arrival (TOA) and time difference of arrival (TDOA) processing of the linearized GPS pseudo-range equations. In particular, the relationship among dilutions of precision (DOPs), position estimates, and their error covariances is investigated. DOPs for TDOA are defined using the error covariance matrix resulting from TDOA processing. It is shown that the DOPs and user position estimate are the same for TDOA and TOA processing. The relationship of DOPs and position estimates for standard GPS positioning and double differenced processing are also given. <s> BIB005 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> The ray-tracing (RT) algorithm has been used for accurately predicting the site-specific radio propagation characteristics, in spite of its computational intensity. Statistical models, on the other hand, offers computational simplicity but low accuracy. In this paper, a new model is proposed for predicting the indoor radio propagation to achieve computational simplicity over the RT method and better accuracy than the statistical models. The new model is based on the statistical derivation of the ray-tracing operation, whose results are a number of paths between the transmitter and receiver, each path comprises a number of rays. The pattern and length of the rays in these paths are related to statistical parameters of the site-specific features of indoor environment, such as the floor plan geometry. A key equation is derived to relate the average path power to the site-specific parameters, which are: 1) mean free distance; 2) transmission coefficient; and 3) reflection coefficient. The equation of the average path power is then used to predict the received power in a typical indoor environment. To evaluate the accuracy of the new model in predicting the received power in a typical indoor environment, a comparison with RT results and with measurement data shows an error bound of less than 5 dB. <s> BIB006 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Position information of individual nodes is useful in implementing functions such as routing and querying in ad-hoc networks. 
Deriving position information by using the capability of the nodes to measure time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA) and signal strength have been used to localize nodes relative to a frame of reference. The nodes in an ad-hoc network can have multiple capabilities and exploiting one or more of the capabilities can improve the quality of positioning. In this paper, we show how AOA capability of the nodes can be used to derive position information. We propose a method for all nodes to determine their orientation and position in an ad-hoc network where only a fraction of the nodes have positioning capabilities, under the assumption that each node has the AOA capability. <s> BIB007 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Today's rapidly evolving positioning location technology can be possible to determine the geographical position of mobile phone. This is related to many new services of the next revolution mobile communication system. It is important to find out what location technology is suitable for operating on GSM network. However a success of service must concern with the lowest possible cost and minimal impact on the network infrastructure and subscriber equipment because GSM system were not originally designed for positioning. Thus we present to study of the E-OTD location technology for improving accuracy and service stability. <s> BIB008 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> The mobile phone market lacks a satisfactory location technique that is accurate, but also economical and easy to deploy. Current technology provides high accuracy, but requires substantial technological and financial investment. In this paper, we present the results of experiments intended to asses the accuracy of inexpensive Cell-ID location technique and its suitability for the provisioning of location based services. We first evaluate the accuracy of Cell-ID in urban, suburban and highway scenarios (both in U.S. and Italy), we then introduce the concepts of discovery-accuracy and discovery-noise to estimate the impact of positioning accuracy on the quality of resource discovery services. Experiments show that the accuracy of Cell-ID is not satisfactory as a general solution. In contrast we show how Cell-ID can be effectively exploited to implement more effective and efficient voice location-based services. <s> BIB009 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> In this paper we study the temporal statistics of cellular mobile channel. We propose a scattering model that encloses scatterers in an elliptical scattering disc. We, further, employ this model to derive the probability density function (pdf) of Time of Arrival (ToA) of the multipath signal for picocell, microcell, and macrocell environments. For macrocell environment, we present generic closed-form formula for the pdf of ToA from which previous models can be easily deduced. Proposed theoretical results can be used to simulate temporal dispersion of the multipath signal in a variety of propagation conditions. The presented results help in the design of efficient equalizers to combat intersymbol interference (ISI) for wideband systems. 
<s> BIB010 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper presents a new approach to providing accurate pedestrian indoor positioning using a Time of Arrival (TOA) based technique, when only two access points in an IEEE 802.11 network are in range of a mobile terminal to be located. This allows to enhance the availability and reliability of the positioning system, because existing trilateration-and tracking-based systems require at least three reference points to provide 2D positions. This contribution demonstrates the feasibility of the technique proposed and presents encouraging performance figures obtained through simulations with real observable data. <s> BIB011 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper presents an approach to calibrate GPS position by using the context awareness technique from the pervasive computing. Previous researches on GPS calibration mostly focus on the methods of integrating auxiliary hardware so that the userpsilas context information and the basic demand of the user are ignored. From the inspiration of the pervasive computing research, this paper proposes a novel approach, called PGPS (Perceptive GPS), to directly improve GPS positioning accuracy from the contextual information of received GPS data. PGPS is started with sampling received GPS data to learning carrierpsilas behavior and building a transition probability matrix based upon HMM (Hidden Markov Model) model and Newtonpsilas Laws. After constructing the required matrix, PGPS then can interactively rectify received GPS data in real time. That is, based on the transition matrix and received online GPS data, PGPS infers the behavior of GPS carrier to verify the rationality of received GPS data. If the received GPS data deviate from the inferred position, the received GPS data is then dropped. Finally, an experiment was conducted and its preliminary result shows that the proposed approach can effectively improve the accuracy of GPS position. <s> BIB012 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper describes locating systems in indoors half-manufactured environments using wireless communications framework. Such framework is available through existing communications hardware as is the case of ZigBee standard. Using this framework as locating system can provide trilateration scheme using receiver signal strength indication (RSSI) measurements. An experiment shows RSSI measurement errors and some filters are developed to minimize them. These results present some insights to future RSSI development strategies in order to obtain efficient locating systems. <s> BIB013 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper addresses the Common Radio Resources Management of a Heterogeneous Network composed of three different Radio Access Network (RANs): UMTS, Wi-Fi and Mobile-WiMAX. The network is managed by an algorithm based on a priority table, user sessions being steered preferentially to a given RAN according to the service type. Six services are defined, including Voice, Web browsing and Email. A time-based, system-level, simulation tool was developed in order to evaluate the network performance. 
Results show that blocking probability and average delay are optimised (minimised) when non-conversational user data sessions are steered preferentially to M-WiMAX. An overall network degradation by selective RAN deactivation is maximised when M-WiMAX is switched off, blocking probability and average delay raising 10 and 100 times, respectively. The average delay is reduced 20 to 30 % when the channel bandwidth of M-WiMAX is increased from 5 to 10 MHz. <s> BIB014 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Localizing a user is a fundamental problem that arises in many potential applications. The use of wireless technologies for locating a user has been a trend in recent years. Most existing approaches use RSSI to localize the user. In general, one of the several existing wireless standards such as ZigBee, Bluetooth or Wi-Fi, is chosen as the target standard. An interesting question that has practical implications is whether there is any benefit in using more than one wireless technology to perform the localization. In this paper we present a study on the advantages and challenges of using multiple wireless technologies to perform localization in indoor environments. We use real ZigBee, Wi-Fi and Bluetooth compliant devices. In our study we analyse results obtained using the fingerprint method. The performance of each technology alone and the performance of the technologies combined are also investigated. We also analyse how the number of wireless devices used affects the quality of localization and show that, for all technologies, more beacons lead to less error. Finally, we show how interference among technologies may lead to lower localization accuracy. <s> BIB015 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Radio propagation model simulates of electromagnetic wave propagation in space and calculates the strength of radio signals. It provides the basis for forecasting, analysis and optimization of wireless network communication. In this paper, various types of radio propagation model have been analyzed and discussed, and a solution based on radio propagation model for indoor wireless network planning was proposed. This paper also analyzed fast 3D modeling of buildings, and model optimization, computing acceleration technologies of indoor wireless propagation model. <s> BIB016 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> The wide deployment of Wi-Fi networks empowers the implementation of numerous applications such as Wi-Fi positioning, Location Based Services (LBS), wireless intrusion detection and real-time tracking. Many techniques are used to estimate Wi-Fi client position. Some of them are based on the Time or Angle of Arrival (ToA or AoA), while others use signal power measurements and fingerprinting. All these techniques require the reception of multiple wireless signals to provide enough data for solving the localization problem. In this paper, we describe the major techniques used for positioning in Wi-Fi networks. Real experiments are done to compare the accuracy of methods that use signal power measurement and Received Signal Strength Indication (RSSI) fingerprinting to estimate client position. 
Moreover, we investigate a fingerprinting method constrained by distance information to improve positioning accuracy. Localization techniques are more accurate when the estimated client positions are closer to the real geographical positions. Accuracy improvements increase user satisfaction, and make the localization services more robust and efficient. <s> BIB017
GPS consists of a network of 24 satellites in six different 12-hour orbital paths, spaced so that at least five are in view from every point on the globe. The satellites serve as reference points when estimating client position, and they continuously broadcast signals containing information about their own position and direction. The distance between a satellite and a receiver is determined by precisely measuring the time it takes a signal to travel from the satellite to the receiver's antenna. Once the distances between the visible satellites and the GPS receiver are measured, the client position is estimated via the trilateration method, commonly known as triangulation. Three distance measurements are required to perform position estimation: the estimated position is the intersection of three spheres having the satellites as centers and the calculated distances as radii. GPS accuracy is largely reduced by several factors such as signal delays, satellite clock errors, multipath distortion, receiver noise and various environmental noise sources BIB012 . To overcome visibility problems between satellites and receivers, assisted GPS has been proposed. It benefits from the coexistence of satellite networks along with terrestrial wireless access networks (i.e., mobile networks or Wireless Local Area Networks) in the same area. Therefore, superior accuracy, availability and coverage are offered for indoor use or in urban areas. Refer to BIB002 for more information about the Assisted Global Positioning System (A-GPS). In mobile networks, such as GSM, many techniques are used to estimate client position. Contrary to the satellites, which are continuously moving around the globe, the base stations (BS) of mobile networks have fixed geographical positions. In addition, each BS broadcasts its Cell-ID and Location Area Identifier (LAI) to the mobiles within its coverage area. Therefore, in the Cell-ID method, each mobile can approximate its own position using the geographical coordinates of its serving base station BIB009 . Angle of Arrival (AoA) measurements BIB004 BIB007 of several radio links between the base stations and the mobile are also used to estimate client position: user position is approximated from these angle measurements together with the known geographical coordinates of the base stations. Time of Arrival (ToA) BIB010 BIB001 requires synchronization between the different network elements (i.e., base stations and mobile stations). The measured travel times of bursts sent by the mobile are converted into distances, and trilateration is then used to estimate client position. Other methods use received signal strength measurements to localize mobile stations. For example, received signal power is converted into distance via propagation models or empirical models. In addition, the fingerprinting method BIB003 compares RSS measurements with values stored in a database for specific points in the localization map in order to approximate client position. The Time Difference of Arrival (TDoA) technique BIB005 is inspired by ToA. In ToA, the positioning entity measures the signal propagation time from the emitter to the receiver, and this time measurement is converted into a distance that is used to estimate client position. The TDoA technique, in contrast, requires each base station to transmit two signals simultaneously at different frequencies. These signals reach the receiver at different times; the time difference is measured and converted into distance.
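To make the trilateration step described above concrete, the following sketch estimates a 2-D position from known anchor coordinates (satellites or base stations) and measured ranges by linearising the sphere equations into a least-squares problem. It is an illustrative simplification: the function name and the noiseless example are ours, and real GPS solvers additionally estimate the receiver clock bias and work in three dimensions.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from anchor coordinates and measured ranges.

    anchors: (n, 2) array of known reference positions (satellites / base stations).
    ranges:  (n,)   array of measured distances to each anchor.
    Subtracting the first sphere equation from the others yields a linear
    system A x = b, solved here in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three reference stations and noiseless ranges to the point (2, 3).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([2.0, 3.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))   # ~[2. 3.]
```

With noisy ranges the same least-squares formulation simply returns the best-fitting point, which is one reason why having more than three visible satellites improves accuracy.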
Once three distance measurements are available, trilateration is used to estimate client position. Enhanced Observed Time Difference (E-OTD) BIB008 requires synchronization between network entities (base stations and mobiles). Each base station broadcasts messages within its coverage area, and a mobile station compares the relative times of arrival of these messages to estimate its distance from each visible base station. Localization techniques in mobile networks can be classified into two categories: network-based and client-based. In network-based positioning techniques, the network collects the information necessary to estimate client position. Time, angle or distance measurements performed by the base stations are usually forwarded to a positioning server deployed in the network, and the information required to estimate user position is stored in a positioning database. Thus, the positioning server has information about the positions of all the users in the system. Client-based localization techniques, by contrast, are characterized by the absence of a centralized positioning entity: each client performs time, angle, power or distance measurements locally and approximates its own position using these local measurements and information broadcast by the base stations. Fig. 1 shows a qualitative comparison of the positioning techniques described in this section. The performance criteria used to compare localization techniques are accuracy and coverage: a positioning technique is better when it has a lower accuracy error (the distance between the estimated position and the real geographical position) and greater coverage. Traditionally, a Wi-Fi network provides Internet access to wireless-enabled clients located within its coverage area. In addition, it allows interconnectivity between wireless devices in the same network. Recently, Wi-Fi networks have gained additional applications. For example, we can benefit from the coexistence of several radio access technologies in the same geographical area: heterogeneous networks offer the possibility to steer user sessions preferentially to a given Radio Access Technology, such as Wi-Fi or the Universal Mobile Telecommunications System (UMTS), according to service type BIB014 and network load. Moreover, the wide deployment of Wi-Fi networks allows the introduction of numerous location-based services. Wi-Fi positioning techniques are similar to those used in mobile networks; however, the most common technique used to localize a client in Wi-Fi networks is based on RSSI measurements. In the remainder of this section, we classify Wi-Fi positioning techniques into several categories, and we describe the basics of RSSI-based localization methods. ToA BIB011 and TDoA perform time measurements to calculate the distance between the Wi-Fi client and the access points. Three distance measurements are then required to estimate user position via trilateration BIB013 . Such methods belong to the category of time-based positioning techniques, and they require time synchronization between network entities. In the Cell-ID category, users scan the received radio beacons to determine the closest access point, using either predefined radio propagation models or experimental fingerprinting data to estimate user position. The AoA method uses directional antennas to measure the angle of arrival of signals transmitted by the clients; client position is then estimated via the geometry of triangles in such angle-based positioning techniques.
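As a hedged illustration of the angle-based approach just described, the sketch below intersects two bearing lines measured by stations at known positions; the function name and the toy geometry are ours, and a deployed system would fuse several noisy bearings rather than exactly two.

```python
import numpy as np

def triangulate_aoa(p1, theta1, p2, theta2):
    """Intersect two bearing lines to estimate a 2-D client position.

    p1, p2        : known (x, y) coordinates of the two measuring stations.
    theta1, theta2: angles of arrival in radians, measured from the x-axis.
    Each station defines the line  p_i + t_i * (cos(theta_i), sin(theta_i)).
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the scalars t1 and t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float))
    return np.asarray(p1, dtype=float) + t[0] * d1

# Example: stations at (0, 0) and (10, 0) observing a client at (4, 3).
theta1 = np.arctan2(3.0, 4.0)    # bearing seen from (0, 0)
theta2 = np.arctan2(3.0, -6.0)   # bearing seen from (10, 0)
print(triangulate_aoa((0.0, 0.0), theta1, (10.0, 0.0), theta2))   # ~[4. 3.]
```

Parallel bearings make the linear system singular, which mirrors the geometric dilution of precision that affects angle-based methods when stations and client are nearly collinear.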
However, the most common positioning techniques in Wi-Fi networks are based on RSSI measurements BIB017 . Some of them rely on propagation models BIB016 to translate signal power into distance, while others use empirical models and store RSSI measurements in a positioning database. Localization methods in Wi-Fi are therefore classified into four main categories: Cell-ID, time, RSSI and angle. Fig. 3 illustrates this classification of Wi-Fi positioning techniques. Received signal strength indication measurements are quantized levels that reflect the real power of a received wireless signal. When propagating in free space, the transmitted radio frequency signal is subject to degradation due to attenuation, reflection, diffraction and scattering. Several propagation models formulate signal strength degradation as a function of the traveled distance and the transmission frequency. For instance, the Hata-Okumura model approximates Path Loss (PL) according to the distance between emitter and receiver, antenna characteristics and transmission frequency. Hence, RSSI measurements are compared with theoretical values of the received power (calculated using propagation models) in order to find the distance traveled by the signal. Three distance measurements are required to estimate the position of a Wi-Fi-enabled client via trilateration: the estimated position is the intersection of three circles having the access points as centers and the calculated distances as radii. Other positioning techniques based on RSSI measurements use empirical models to estimate user position. Instead of approximating the distance between Wi-Fi clients and access points, the localization area is divided into smaller parts using a grid. Each point of the grid receives several Wi-Fi signals from the neighboring access points, and RSSI measurements are performed under different conditions (e.g., time, interference, network load) in order to increase positioning accuracy. If n is the number of Wi-Fi access points, an n-tuple (RSSI_1, RSSI_2, ..., RSSI_n) containing mean RSSI values is created for each point (x, y) in the map. Such a positioning technique is called RSSI fingerprinting, and it proceeds in two phases: an offline fingerprinting phase and an online positioning phase. In the first phase, RSSI measurements are taken for each point in the positioning map (under different network conditions); at the end of this phase, the positioning database is created, containing mean RSSI values for every point in the grid BIB006 . In the second phase, online RSSI measurements are performed on the signals received from the neighboring access points. The positioning entity compares the live RSSI measurements with the values stored in the database, and the client position is estimated as the entry (x, y) in the database that best matches the actual measurements BIB015 . The accuracy of RSSI-based positioning techniques in Wi-Fi networks depends on the number of access points involved in the localization problem.
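The online matching phase of RSSI fingerprinting can be summarised by the short sketch below, which assumes the offline database already holds one mean-RSSI n-tuple per grid point and performs a weighted k-nearest-neighbour match in signal space (in the spirit of the weighted K-NN variants cited above). The data structures, function name and toy values are illustrative only.

```python
import numpy as np

def fingerprint_position(database, live_rssi, k=3):
    """Weighted k-NN match of a live RSSI n-tuple against an offline database.

    database : list of ((x, y), rssi_tuple) entries built during the offline
               phase, one mean-RSSI vector (in dBm) per grid point.
    live_rssi: RSSI vector measured online from the same n access points.
    Returns the weighted average of the positions of the k grid points whose
    fingerprints are closest in signal space (inverse-distance weights).
    """
    points = np.array([p for p, _ in database], dtype=float)   # (m, 2)
    rssi = np.array([r for _, r in database], dtype=float)     # (m, n)
    dist = np.linalg.norm(rssi - np.asarray(live_rssi, dtype=float), axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-6)                     # avoid division by zero
    return np.average(points[nearest], axis=0, weights=weights)

# Toy database: grid points with mean RSSI (dBm) from three access points.
db = [((0, 0), (-40, -70, -75)),
      ((0, 5), (-55, -60, -72)),
      ((5, 0), (-52, -73, -58)),
      ((5, 5), (-63, -59, -57))]
print(fingerprint_position(db, (-54, -61, -70), k=2))   # close to (0, 5)
```

The quality of the estimate depends directly on the grid resolution, the number of access points per n-tuple and how representative the offline measurements are of the online conditions.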
A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> In this paper, time of arrival (TOA) and angle of arrival (AOA) errors in four typical cellular environments are analyzed and modeled. Based on the analysis, a hybrid TOA/AOA positioning (HTAP) algorithm, which utilizes TOA and AOA information delivered by serving base stations (BS), is proposed. The performance of the related positioning algorithms is simulated. It is shown that when the MS is close to the serving BS, HTAP will produce an accurate location estimate. When MS is far from the serving BS, the location estimate obtained by HTAP can be used as an initial location in our system to help a least square (LS) algorithm converge easily. When there are more than three TOA detected, weights and TOA numbers used in the LS algorithm should be dynamically adjusted according to the distance between MS and serving BS and the propagation environment for better positioning performance. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> Currently in development, numerous geolocation technologies can pinpoint a person's or object's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation based applications to track personnel, vehicles, and other assets. The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point. Location technologies requiring new modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. It is argued that assisted-GPS technology offers superior accuracy, availability, and coverage at a reasonable cost. <s> BIB002 </s> A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> Localizing a user is a fundamental problem that arises in many potential applications. The use of wireless technologies for locating a user has been a trend in recent years. Most existing approaches use RSSI to localize the user. In general, one of the several existing wireless standards such as ZigBee, Bluetooth or Wi-Fi, is chosen as the target standard. An interesting question that has practical implications is whether there is any benefit in using more than one wireless technology to perform the localization. In this paper we present a study on the advantages and challenges of using multiple wireless technologies to perform localization in indoor environments. We use real ZigBee, Wi-Fi and Bluetooth compliant devices. 
In our study we analyse results obtained using the fingerprint method. The performance of each technology alone and the performance of the technologies combined are also investigated. We also analyse how the number of wireless devices used affects the quality of localization and show that, for all technologies, more beacons lead to less error. Finally, we show how interference among technologies may lead to lower localization accuracy. <s> BIB003 </s> A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> This paper proposes a hybrid scheme for user positioning in an urban scenario using both a Global Navigation Satellite System (GNSS) and a mobile cellular network. To maintain receiver complexity (and costs) at a minimum, the location scheme combines the time-difference-of-arrival (TDOA) technique measurements obtained from the cellular network with GNNS pseudorange measurements. The extended Kalman filter (EKF) algorithm is used as a data integration system over the time axis. Simulated results, which are obtained starting from real measurements, demonstrate that the use of cellular network data may provide increased location accuracy when the number of visible satellites is not adequate. In every case, the obtained accuracy is within the limits required by emergency location services, e.g., Enhanced 911 (E911). <s> BIB004
The wide usage of position-dependent services increases the need for more accurate position estimation techniques. Due to the limitations of positioning methods that use data from one single RAT, hybrid positioning techniques have been proposed to increase accuracy. They make use of the collaboration between different wireless access networks existing in the same geographical area, such as GPS and GSM, to exchange additional position-related data. The main factors that reduce GPS accuracy are multipath distortion and the visibility problem between satellites and receivers. Therefore, a hybrid positioning technique called Assisted-Global Positioning System (A-GPS) BIB002 was introduced to overcome these limitations: GSM Base Transceiver Stations (BTS) are involved in the positioning problem along with the satellites, and additional data about BTS geographical position and proximity to the mobile is used together with GPS localization information in order to estimate client position. Moreover, the authors of propose a combined Wi-Fi/GPS positioning algorithm, in which localization data from the Wi-Fi network is used when the number of visible satellites is less than four. In cellular networks, positioning accuracy is constrained by non-line-of-sight propagation and by interference mitigation techniques; thus, inconsistent information may be provided as input when solving the localization problem. The authors of BIB004 describe a hybrid positioning scheme that combines TDoA measurements obtained from the cellular network with GPS range information to improve the accuracy of mobile client position estimation. The obtained accuracy is within the limits required by emergency location services. The ToA technique is very frequently used for positioning in mobile networks, but it requires synchronization between mobiles and base stations and is affected by multipath and non-line-of-sight propagation. Hence, this positioning technique can be assisted by additional information from the AoA technique: angle measurements are performed by the serving base station using antenna arrays, and the use of the hybrid ToA/AoA technique improves positioning accuracy, especially in bad propagation environments BIB001 . In wireless local area networks, such as Wi-Fi, the RSSI fingerprinting technique depends on the number of access points involved in the localization process, and the validity of the mean RSSI measurements stored in the database also affects positioning accuracy. The coexistence of Personal Area Networks (PAN) with Wi-Fi in the same geographical area can be exploited to improve positioning accuracy: RSSI measurements are taken for all the available wireless technologies and stored in the positioning database, so Wi-Fi fingerprinting uses additional RSSI information from wireless PANs existing in the same area, such as Bluetooth and ZigBee networks. More information about this hybrid technique is found in BIB003 .
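The scheme in BIB004 fuses measurements over time with an extended Kalman filter; as a much simpler, hedged illustration of the underlying idea of combining fixes from different access technologies, the sketch below merges two independent position estimates by inverse-variance weighting. The function name and numbers are ours and do not come from the cited works.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Combine independent position estimates by inverse-variance weighting.

    estimates: list of (x, y) fixes, e.g. one from GPS and one from the
               cellular network.
    variances: per-estimate error variances; smaller means more trusted.
    Returns the fused position and its (reduced) fused variance.
    """
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.average(est, axis=0, weights=w)
    return fused, 1.0 / w.sum()

# A GPS fix with few visible satellites (large variance) fused with a
# TDoA-based cellular fix (smaller variance).
pos, var = fuse_estimates([(101.0, 48.0), (95.0, 52.0)], [400.0, 100.0])
print(pos, var)   # result lies closer to the cellular fix; fused variance < 100
```

A Kalman filter generalises this static fusion by propagating the estimate and its covariance over time as new GPS and cellular measurements arrive.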
A survey of positioning techniques and location based services in wireless networks <s> V. LOCATION BASED SERVICES <s> The penetration of mobile wireless technologies has resulted in larger usage of wireless data services in the recent past. Several wireless applications are deployed by service providers, to attract and retain their clients, using wireless Internet technologies. New and innovative applications like ringtone/wallpaper downloading, MMS-messaging, videoclip delivery and reservation enquiries are some of the popular services offered by the service providers today. The knowledge of mobile user's location by the service provider can enhance the class of services and applications that can be offered to the mobile user. These class of applications and services, termed "location based services", are becoming popular across all mobile networks like GSM and CDMA. This paper presents a brief survey of location based services, the technologies deployed to track the mobile user's location, the accuracy and reliability associated with such measurements, and the network infrastructure elements deployed by the wireless network operators to enable these kinds of services. A brief description of all the protocols and interfaces covering the interaction between device, gateway and application layers, are presented. The aspects related to billing of value added services using the location information and emerging architectures for incorporating this "location based charging" model are introduced. The paper also presents some popular location based services deployed on wireless across the world. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> V. LOCATION BASED SERVICES <s> The rapid development of wireless communications and mobile database technology promote the extensive application of Location Based Services (LBSs), and provide a greatly convenience for people's lives. In recent years, Location Based Services has played an important role in deal with public emergencies. It is possible to access mobile users' location information anytime and anywhere. But in the meantime, user location privacy security poses a potentially grave new threat, and may suffer from some invade which could not presuppose. Location privacy issues raised by such applications have attracted more and more attention. It has become the research focus to find a balance point between the location-based highly sufficient services and users' location privacy protection. Comprehensive and efficient services are accessed under the premise of less exposed locations, that is to say, allowed the location of exposure in a controlled state. K-anonymity technique is widely used in data dissemination of data privacy protection technology, but the method is also somewhat flawed. This paper analyses on existing questions of location privacy protection system in Location Based Services at the present time, including K-anonymity technique, quality of service, query systems, and generalize and summarize the main research achievement of location privacy protection technology in recent years. And some solutions have been proposed to deal with location privacy problem in Location Based Services. The paper also analyzes how to provide efficient location-based services and better protection users' location privacy in handle public emergencies. In the end, some study trends of Location Based Services and location privacy protection are given. <s> BIB002
In wireless networks, knowledge of the user's geographical position allows the introduction of numerous position-dependent applications. These applications are known as location-based services, and they are useful for service providers as well as for mobile clients. The wide deployment of radio access technologies and the increasing development of wireless-enabled devices are promoting the extensive use of LBS. As mentioned in the previous sections, numerous positioning techniques are used to estimate client position, but the main parameter for LBS efficiency is the accuracy of the positioning technique: users are more satisfied when the estimated position is closer to the real geographical position and when the probability of erroneous estimates is reduced. Location-based services are related to the position of the user making the request. They are classified BIB001 as emergency services (e.g., security alerts, public safety and queries for the nearest hospital), informational services (e.g., news, sports, stocks and queries for the nearest hotel or cinema), tracking services (such as asset/fleet/logistics monitoring or person tracking), entertainment services (for example, locating a friend and gaming) and advertising services (such as announcements or invitation messages broadcast by shops to nearby mobile clients). Moreover, future applications of LBS include support for studies on climate change, seismology and oceanography. Position-dependent services are useful for mobile mapping, deformation monitoring and many civil engineering applications. They have revolutionized navigation (on land, in the air and at sea) and intelligent transportation systems by increasing their safety and efficiency. However, user location privacy is exposed to potentially grave threats BIB002 : it is possible to access user location information anytime and anywhere. Therefore, many privacy protection methods have been introduced to deal with the tension between location privacy protection and quality of service in LBS. Some of them protect user identity by hiding the true ID when requesting the service. Other methods do not submit the exact location to the server, but instead send a region containing the user's exact position.
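As a small, hedged illustration of the region-based privacy idea mentioned above (not a method taken from the cited works), the sketch below reports a coarse grid cell instead of the exact coordinates, so the LBS server only ever learns an area containing the user.

```python
def cloak_position(x, y, cell_size=500.0):
    """Report a coarse grid cell instead of the exact coordinates.

    The server only learns that the user lies somewhere inside a
    cell_size x cell_size region, which is still enough for region-level
    LBS queries such as 'restaurants near this area'.
    """
    x_min = (x // cell_size) * cell_size
    y_min = (y // cell_size) * cell_size
    return (x_min, y_min, x_min + cell_size, y_min + cell_size)

# The exact position (projected coordinates in metres) stays on the device;
# only the bounding box of its 500 m grid cell is sent with the query.
print(cloak_position(1234.0, 876.0))   # (1000.0, 500.0, 1500.0, 1000.0)
```

Larger cells give stronger privacy but coarser query results, which is exactly the balance between privacy protection and quality of service discussed above.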
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time?We address the first question by bounding a classifier’s target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier.We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Significant advances have been made towards building accurate automatic segmentation systems for a variety of biomedical applications using machine learning. However, the performance of these systems often degrades when they are applied on new data that differ from the training data, for example, due to variations in imaging protocols. Manually annotating new data for each test domain is not a feasible solution. In this work we investigate unsupervised domain adaptation using adversarial neural networks to train a segmentation method which is more invariant to differences in the input data, and which does not require any annotations on the test domain. Specifically, we learn domain-invariant features by learning to counter an adversarial network, which attempts to classify the domain of the input data by observing the activations of the segmentation network. Furthermore, we propose a multi-connected domain discriminator for improved adversarial training. Our system is evaluated using two MR databases of subjects with traumatic brain injuries, acquired using different scanners and imaging protocols. Using our unsupervised approach, we obtain segmentation accuracies which are close to the upper bound of supervised domain adaptation. 
<s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Abstract Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. “Deep learning”, or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades. 
<s> BIB004 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Background ::: Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN's diagnostic performance to larger groups of dermatologists are lacking. ::: ::: ::: Methods ::: Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists' diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN's performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge. ::: ::: ::: Results ::: In level-I dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. More clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P = 0.19) and specificity to 75.7% (±11.7%, P < 0.05). The CNN ROC curve revealed a higher specificity of 82.5% when compared with dermatologists in level-I (71.3%, P < 0.01) and level-II (75.7%, P < 0.01) at their sensitivities of 86.6% and 88.9%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P < 0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge. ::: ::: ::: Conclusions ::: For the first time we compared a CNN's diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of any physicians' experience, they may benefit from assistance by a CNN's image classification. ::: ::: ::: Clinical trial number ::: This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks_web/). <s> BIB005 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Abstract What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. 
As this has become a very broad and fast expanding field we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related medical imaging. <s> BIB006 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Convolutional neural network (CNN), a class of artificial neural networks that has become dominant in various computer vision tasks, is attracting interest across a variety of domains, including radiology. CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. This review article offers a perspective on the basic concepts of CNN and its application to various radiological tasks, and discusses its challenges and future directions in the field of radiology. Two challenges in applying CNN to radiological tasks, small dataset and overfitting, will also be covered in this article, as well as techniques to minimize them. Being familiar with the concepts and advantages, as well as limitations, of CNN is essential to leverage its potential in diagnostic radiology, with the goal of augmenting the performance of radiologists and improving patient care. KEY POINTS: • Convolutional neural network is a class of deep learning methods which has become dominant in various computer vision tasks and is attracting interest across a variety of domains, including radiology. • Convolutional neural network is composed of multiple building blocks, such as convolution layers, pooling layers, and fully connected layers, and is designed to automatically and adaptively learn spatial hierarchies of features through a backpropagation algorithm. • Familiarity with the concepts and advantages, as well as limitations, of convolutional neural network is essential to leverage its potential to improve radiologist performance and, eventually, patient care. <s> BIB007 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Deep learning-based image segmentation is by now firmly established as a robust tool in image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipeline. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions. <s> BIB008
Medical imaging is a major pillar of clinical decision making and is an integral part of many patient journeys. Information extracted from medical images is clinically useful in many areas such as computer-aided detection, diagnosis, treatment planning, intervention and therapy. While medical imaging remains a vital component of a myriad of clinical tasks, an increasing shortage of qualified radiologists to interpret complex medical images suggests a clear need for reliable automated methods to alleviate the growing burden on health-care practitioners. In parallel, medical imaging sciences are benefiting from the development of novel computational techniques for the analysis of structured data like images. The development of algorithms for image acquisition, analysis and interpretation is driving innovation, particularly in the areas of registration, reconstruction, tracking, segmentation and modelling. Medical images are inherently difficult to interpret, requiring prior expertise to understand. Bio-medical images can be noisy, contain many modality-specific artefacts, and are acquired under a wide variety of acquisition conditions with different protocols. Thus, once trained, models do not transfer seamlessly from one clinical task or site to another because of an often yawning domain gap BIB002 BIB001 . Supervised learning methods require extensive relabelling to regain initial performance in different workflows. The experience and prior knowledge required to work with such data means that there is often large inter- and intra-observer variability in annotating medical data. This not only raises questions about what constitutes a gold-standard ground truth annotation, but also results in disagreement about what that ground truth truly is. These issues result in a large cost associated with annotating and re-labelling medical image datasets, as we require numerous expert annotators (oracles) to perform each annotation and to reach a consensus. In recent years, Deep Learning (DL) has emerged as the state-of-the-art technique for performing many medical image analysis tasks BIB003 BIB004 . Developments in the field of computer vision have shown great promise in transferring to medical image analysis, and several techniques have been shown to perform as accurately as human observers BIB005 . However, uptake of DL methods in clinical practice has been limited thus far, largely due to the unique challenges of working with complex medical data, regulatory compliance issues and trust in trained models. We identify three key challenges when developing DL enabled applications for medical image analysis in a clinical setting: 1. Lack of Training Data: Supervised DL techniques traditionally rely on a large and even distribution of accurately annotated data points, and while more medical image datasets are becoming available, the time, cost and effort required to annotate such datasets remains significant. 2. The Final Percent: DL techniques have achieved state-of-the-art performance for medical image analysis tasks, but in safety-critical domains even the smallest deviation can cause catastrophic results downstream. Achieving clinically credible output may require interactive interpretation of predictions (from an oracle) to be useful in practice. 3.
Transparency and Interpretability: At present, most DL applications are considered to be a 'black-box' where the user has limited meaningful ways of interpreting, understanding or correcting how a model has made its prediction. Such opacity forces users to take predictions on faith, which is detrimental in medical applications where information from a wide variety of sources must be evaluated in order to make clinical decisions. Further indication of how a model has reached a predicted conclusion is needed in order to foster trust in DL enabled systems and allow users to weigh automated predictions appropriately. There is a concerted effort in the medical image analysis research community to apply DL methods to various medical image analysis tasks, and these are showing great promise. We refer the reader to a number of reviews of DL in medical imaging BIB008 BIB006 BIB007 . These works primarily focus on the development of predictive models for a specific task and demonstrate state-of-the-art performance for that task. This review aims to give an overview of where humans will remain involved in the development, deployment and practical use of DL systems for medical image analysis. We focus on medical image segmentation techniques to explore the role of human end users in DL enabled systems. Automating segmentation tasks suffers from all of the drawbacks incurred by medical image data described above. There are many emerging techniques that seek to alleviate the added complexity of working with medical image data to perform automated segmentation of images. Segmentation seeks to divide an image into semantically meaningful regions (sets of pixels) in order to perform a number of downstream tasks, e.g. biometric measurements. Manually assigning a label to each pixel of an image is a laborious task, and as such automated segmentation methods are important in practice. Advances in DL techniques such as Active Learning (AL) and Human-in-the-Loop computing applied to segmentation problems have shown progress in overcoming the key challenges outlined above, and these are the studies this review focuses on. We categorise each study based on the nature of human interaction proposed and broadly divide them according to which of the three key challenges they address. Section 2 introduces Active Learning, a branch of Machine Learning (ML) and Human-in-the-Loop Computing that seeks to find the most informative samples from an unlabelled distribution to be annotated next. By training on the most informative subset of samples, related work can achieve state-of-the-art performance while reducing the cost and burden of annotating medical image data. Section 3 evaluates techniques used to refine model predictions in response to user feedback, guiding models towards more accurate per-image predictions. We evaluate techniques that seek to improve interpretability of automated predictions and how models provide feedback on their own outputs to guide users towards better decision making. Section 4 evaluates the key practical considerations of developing and deploying Human-in-the-Loop DL enabled systems in practice and outlines the work being done in these areas that addresses the three key challenges identified above. These areas are human focused and assess how human end users might interact with these systems. Section 5 introduces related areas of ML and DL research that are having an impact on AL and Human-in-the-Loop Computing and are beginning to influence the three key challenges outlined.
In Section 6 we offer our opinions on the future directions of Human-in-the-Loop DL research and how many of the techniques evaluated might be combined to work towards common goals.
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Uncertainty <s> Abstract We propose an active learning approach to image segmentation that exploits geometric priors to speed up and streamline the annotation process. It can be applied for both background–foreground and multi-class segmentation tasks in 2D images and 3D image volumes. Our approach combines geometric smoothness priors in the image space with more traditional uncertainty measures to estimate which pixels or voxels are the most informative, and thus should to be annotated next. For multi-class settings, we additionally introduce two novel criteria for uncertainty. In the 3D case, we use the resulting uncertainty measure to select voxels lying on a planar patch, which makes batch annotation much more convenient for the end user compared to the setting where voxels are randomly distributed in a volume. The planar patch is found using a branch-and-bound algorithm that looks for a 2D patch in a 3D volume where the most informative instances are located. We evaluate our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on regular images of horses and faces. We demonstrate a substantial performance increase over other approaches thanks to the use of geometric priors. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Uncertainty <s> Recent successes in learning-based image classification, however, heavily rely on the large number of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner. Our approach advances the existing AL methods in two aspects. First, we incorporate deep convolutional neural networks into AL. Through the properly designed framework, the feature representation and the classifier can be simultaneously updated with progressively annotated informative samples. Second, we present a cost-effective sample selection strategy to improve the classification performance with less manual annotations. Unlike traditional methods focusing on only the uncertain samples of low prediction confidence, we especially discover the large amount of high-confidence samples from the unlabeled set for feature learning. Specifically, these high-confidence samples are automatically selected and iteratively assigned pseudolabels. We thus call our framework cost-effective AL (CEAL) standing for the two advantages. Extensive experiments demonstrate that the proposed CEAL framework can achieve promising results on two challenging image classification data sets, i.e., face recognition on the cross-age celebrity face recognition data set database and object categorization on Caltech-256. <s> BIB002
The main family of informativeness measures is based on calculating uncertainty. It is argued that the more uncertain a prediction is, the more information we can gain by including the ground truth for that sample in the training set. There are several ways of calculating uncertainty from different ML/DL models. When considering DL for segmentation, the simplest measure is the sum of the lowest class probability for each pixel in a given image segmentation. It is argued that more certain predictions will have high pixel-wise class probabilities, so the lower the sum of the minimum class probability over each pixel in an image, the more certain a prediction is considered to be; this is a fairly intuitive way of thinking about uncertainty and offers a means to rank the uncertainty of samples within a distribution. We refer to the method above as least confident sampling, where the samples with the highest uncertainty are selected for labelling . A drawback of least confident sampling is that it only considers information about the most probable label, and discards the information about the remaining label distribution. Two alternative methods have been proposed that alleviate this concern. The first, called margin sampling , can be used in a multi-class setting and considers the first and second most probable labels under the model and calculates the difference between them. The intuition here is that the larger the margin is between the two most probable labels, the more confident the model is in assigning that label. The second, more popular approach is to use entropy as an uncertainty measure. For binary classification, entropy sampling is equivalent to least confident and margin sampling, but for multi-class problems entropy generalises well as an uncertainty measure. Using one of the above measures, un-annotated samples are ranked and the most 'uncertain' cases are chosen for the next round of annotation. BIB002 propose the Cost-Effective Active Learning (CEAL) method for deep image classification that involves complementary sampling, in which the framework selects from an unlabelled data-set a) a set of uncertain samples to be labelled by an oracle, and b) a set of highly certain samples that are 'pseudo-labelled' by the framework and included in the labelled data-set. propose an active learning method that uses uncertainty sampling to support quality control of nucleus segmentation in pathology images. Their work compares the performance improvements achieved through active learning for three different families of algorithms: Support Vector Machines (SVM), Random Forest (RF) and Convolutional Neural Networks (CNN). They show that CNNs achieve the greatest accuracy, requiring significantly fewer iterations to achieve equivalent accuracy to the SVMs and RFs. Another common method of estimating informativeness is to measure the agreement between multiple models performing the same task. It is argued that more disagreement found between predictions on the same data point implies a higher level of uncertainty. These methods are referred to as Query by consensus and are generally applied when Ensembling is used to improve performance, i.e. training multiple models to perform the same task under slightly different parameters/settings . Ensembling methods have been shown to measure informativeness well, but at the cost of computational resources: multiple models need to be trained and maintained, and each of these needs to be updated in the presence of newly selected training samples.
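To make the three pixel-wise measures above concrete, the following minimal sketch (illustrative only, not the implementation of any cited work) assumes a segmentation model that outputs softmax probability maps of shape (num_classes, H, W); the function and variable names are ours.

```python
import numpy as np

def least_confidence(probs):
    """Mean over pixels of (1 - max class probability); higher = more uncertain."""
    return float(np.mean(1.0 - probs.max(axis=0)))

def margin(probs):
    """Negative mean margin between the two most probable classes; higher = more uncertain."""
    sorted_p = np.sort(probs, axis=0)              # ascending along the class axis
    return float(-np.mean(sorted_p[-1] - sorted_p[-2]))

def entropy(probs, eps=1e-12):
    """Mean pixel-wise predictive entropy; higher = more uncertain."""
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=0)))

def rank_pool(prob_maps, score_fn, k=10):
    """Return indices of the k most uncertain images in an unlabelled pool."""
    scores = np.array([score_fn(p) for p in prob_maps])
    return np.argsort(scores)[::-1][:k]            # descending: most uncertain first

# Toy example: a pool of 100 random 3-class softmax maps of size 64x64.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 3, 64, 64))
pool = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
query_ids = rank_pool(pool, entropy, k=5)
```

Committee-based approaches apply the same ranking to the disagreement between several such probability maps, at the cost of training and maintaining multiple models.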
Despite this computational cost, Beluch et al. (2018) demonstrate the power of ensembles for active learning and compare them to alternatives to ensembling. They specifically compare the performance of acquisition functions and uncertainty estimation methods for active learning with CNNs on image classification tasks, and show that ensemble-based uncertainties outperform other methods of uncertainty estimation such as 'MC Dropout'. They find that the difference in active learning performance can be explained by a combination of decreased model capacity and lower diversity of MC dropout ensembles. Good performance is demonstrated on a diabetic retinopathy diagnosis task. introduce the use of Bayesian CNNs for Active Learning, and show that Bayesian CNNs outperform deterministic CNNs in this context. Bayesian CNNs model the uncertainty of predictions directly, and it is argued that this property allows them to outperform deterministic CNNs. In this work several different query strategies (or acquisition functions, as they are referred to in the text) are used for Active Learning to demonstrate improved performance from fewer training samples than random sampling. They demonstrate their approach for skin cancer diagnosis from skin lesion images, showing significant performance improvements over uniform sampling using the BALD method for sample selection, where BALD seeks to maximise the mutual information between predictions and the model posterior. BIB001 propose an active learning approach that exploits geometric smoothness priors in the image space to aid the segmentation process. They use traditional uncertainty measures to estimate which pixels should be annotated next, and introduce novel criteria for uncertainty in multi-class settings. They exploit geometric uncertainty by estimating the entropy of the probability of supervoxels belonging to a class given the predictions of their neighbours, and combine these to encourage the selection of uncertain regions in areas of non-smooth transition between classes. They demonstrate state-of-the-art performance on mitochondria segmentation from EM images and on an MRI tumour segmentation task for both binary and multi-class segmentations. They suggest that exploiting geometric properties of images is useful for answering the question of where to annotate next, that reducing 3D annotations to 2D annotations provides a possible answer to how to annotate the data, and that addressing both jointly can bring additional benefits to the annotation method. However, they acknowledge that it would be impossible to design bespoke selection strategies this way for every new task at hand.
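The MC dropout estimates referred to above can be obtained with a few stochastic forward passes at test time; the sketch below is a minimal PyTorch illustration of that idea (the toy network, names and hyper-parameters are ours and purely illustrative), where the per-pixel variance across passes serves as the uncertainty used to rank samples.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Illustrative two-layer segmentation head with dropout."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

def mc_dropout_predict(model, image, n_samples=20):
    """Keep dropout active at inference and collect stochastic softmax maps."""
    model.train()  # enables dropout; in practice batch-norm layers would be frozen separately
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(image), dim=1) for _ in range(n_samples)]
        )                                    # (n_samples, B, C, H, W)
    mean_probs = probs.mean(dim=0)           # averaged prediction
    variance = probs.var(dim=0).mean()       # scalar uncertainty score for ranking
    return mean_probs, variance.item()

model = TinySegNet()
img = torch.randn(1, 1, 64, 64)
pred, uncertainty = mc_dropout_predict(model, img)
```

An ensemble-based variant would replace the stochastic passes of one network with the predictions of several independently trained networks, which is exactly the trade-off between quality of the uncertainty estimate and compute discussed above.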
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Representativeness <s> Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), to utilize deep learning on a new application, it usually needs a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: With limited effort (e.g., time) for annotation, what instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional network (FCN) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions by our method, state-of-the-art segmentation performance can be achieved by using only 50% of training data. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Representativeness <s> Segmentation is essential for medical image analysis tasks such as intervention planning, therapy guidance, diagnosis, treatment decisions. Deep learning is becoming increasingly prominent for segmentation, where the lack of annotations, however, often becomes the main limitation. Due to privacy concerns and ethical considerations, most medical datasets are created, curated, and allow access only locally. Furthermore, current deep learning methods are often suboptimal in translating anatomical knowledge between different medical imaging modalities. Active learning can be used to select an informed set of image samples to request for manual annotation, in order to best utilize the limited annotation time of clinical experts for optimal outcomes, which we focus on in this work. Our contributions herein are two fold: (1) we enforce domain-representativeness of selected samples using a proposed penalization scheme to maximize information at the network abstraction layer, and (2) we propose a Borda-count based sample querying scheme for selecting samples for segmentation. Comparative experiments with baseline approaches show that the samples queried with our proposed method, where both above contributions are combined, result in significantly improved segmentation performance for this active learning task. <s> BIB002
Many AL frameworks extend selection strategies to include some measure of representativeness in addition to an uncertainty measure. The intuition behind including a representativeness measure is that methods only concerned with uncertainty have the potential to focus only on small regions of the distribution, and that training on samples from the same area of the distribution will introduce redundancy to the selection strategy, or may skew the model towards a particular area of the distribution. The addition of a representativeness measure seeks to encourage selection strategies to sample from different areas of the distribution, thus improving AL performance. A sample with high representativeness covers the information of many images in the same area of the distribution, so there is less need to include many samples covered by a representative image. To this end, BIB001 present Suggestive Annotation, a deep active learning framework for medical image segmentation, which uses an alternative formulation of uncertainty sampling combined with a form of representativeness density weighting. Their method consists of training multiple models that each exclude a portion of the training data, which are used to calculate an ensemble-based uncertainty measure. They formulate choosing the most representative example as a generalised version of the maximum set-cover problem (NP-hard) and offer a greedy approach to selecting the most representative images using feature vectors from their models. They demonstrate state-of-the-art performance using 50% of the available data on the MICCAI Gland segmentation challenge and a lymph node segmentation task. propose MedAL, an active learning framework for medical image segmentation. They propose a sampling method that combines uncertainty and distance between feature descriptors to extract the most informative samples from an unlabelled data-set. Another contribution of this work is an approach which generates an initial training set by leveraging existing computer vision image descriptors to find the images that are most dissimilar to each other and thus cover a larger area of the image distribution. They show good results on three different medical image analysis tasks, achieving the baseline accuracy with less training data than random or pure uncertainty-based methods. BIB002 propose a Borda-count based combination of an uncertainty and a representativeness measure to select the next batch of samples. Uncertainty is measured as the voxel-wise variance of N predictions using MC dropout in their model. They introduce new representativeness measures such as 'Content Distance', defined as the mean squared error between layer activation responses of a pre-trained classification network. They extend this contribution by encoding representativeness by maximum entropy to optimise network weights using a novel entropy loss function. propose a novel method for ensuring diversity among queried samples by calculating the Fisher Information (FI), for the first time in CNNs. Here, efficient computation is enabled by the gradient computations of backpropagation, allowing FI to be calculated on the large parameter space of CNNs.
They demonstrate the performance of their approach on two different flavours of task: a) semi-automatic segmentation of a particular subject (from a different group/different pathology not present in the original training data), where iteratively labelling small numbers of voxels queried by AL achieves accurate segmentation for that subject; and b) using AL to build a model generalisable to all images in a given data-set. They show that in both these scenarios FI-based AL improves performance after labelling a small percentage of voxels, outperforms random sampling and achieves higher accuracy than entropy-based querying.
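A simple way to combine the two ingredients discussed in this section is a greedy batch selection that trades off an uncertainty score against diversity in feature space. The sketch below (a simplified illustration in the spirit of the methods above, not the set-cover formulation of BIB001 nor the MedAL or Borda-count schemes) uses cosine similarity between image descriptors; all names and the trade-off parameter are assumptions.

```python
import numpy as np

def greedy_select(features, uncertainty, k=8, alpha=0.5):
    """
    Greedily pick k samples that are both uncertain and spread out in feature space.
    features:    (N, D) image descriptors (e.g. penultimate-layer activations)
    uncertainty: (N,)   per-image uncertainty scores, higher = more uncertain
    alpha:       trade-off between uncertainty and diversity
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    selected = []
    for _ in range(k):
        if not selected:
            diversity = np.ones(len(feats))        # no reference set yet
        else:
            sim = feats @ feats[selected].T        # cosine similarity to chosen samples
            diversity = 1.0 - sim.max(axis=1)      # reward being far from the chosen set
        score = alpha * uncertainty + (1 - alpha) * diversity
        score[selected] = -np.inf                  # never re-pick a sample
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(1)
batch = greedy_select(rng.normal(size=(200, 128)), rng.random(200), k=8)
```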
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Learning Active Learning <s> In this paper, we suggest a novel data-driven approach to active learning (AL). The key idea is to train a regressor that predicts the expected error reduction for a candidate sample in a particular learning state. By formulating the query selection procedure as a regression problem we are not restricted to working with existing AL heuristics; instead, we learn strategies based on experience from previous AL outcomes. We show that a strategy can be learnt either from simple synthetic 2D datasets or from a subset of domain-specific data. Our method yields strategies that work well on real data from a wide range of domains. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Learning Active Learning <s> We introduce a model that learns active learning algorithms via metalearning. For a distribution of related tasks, our model jointly learns: a data representation, an item selection heuristic, and a method for constructing prediction functions from labeled training sets. Our model uses the item selection heuristic to gather labeled training sets from which to construct prediction functions. Using the Omniglot and MovieLens datasets, we test our model in synthetic and practical settings. <s> BIB002
The methods discussed so far are all hand-designed heuristics of informativeness, but some works have emerged that attempt to learn what the most informative samples are through experience of previous sample selection outcomes. This offers a potential way to select samples more efficiently, but at the cost of interpretability of the heuristics employed. Many factors influence the performance and optimality of using hand-crafted heuristics for data selection. BIB001 propose 'Learning Active Learning', where a regression model learns data selection strategies based on experience from previous AL outcomes, arguing that there is no way to foresee the influence of all factors such as class imbalance, label noise, outliers and distribution shape. Instead, their regression model 'adapts' its selection to the problem without explicitly stating specific rules. BIB002 take this idea a step further and propose a model that leverages labelled instances from different but related tasks to learn a selection strategy, while simultaneously adapting its representation of the data and its prediction function.
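Mechanically, such a learned strategy amounts to fitting a regressor on logged (learning state, candidate sample) features against the error reduction that labelling the candidate actually produced, and then scoring new candidates with it. The toy sketch below illustrates only this mechanic under placeholder, randomly generated features; it is not the featurisation or training protocol of the cited works.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Logged experience from previous AL runs: one feature vector per past query
# (e.g. pool size, class balance, candidate uncertainty, local density, ...)
# paired with the error reduction that labelling that candidate produced.
rng = np.random.default_rng(2)
past_features = rng.normal(size=(5000, 12))        # placeholder feature vectors
past_error_reduction = rng.random(5000)            # placeholder observed gains

strategy = RandomForestRegressor(n_estimators=100, random_state=0)
strategy.fit(past_features, past_error_reduction)

# At query time: score every unlabelled candidate and ask the oracle to label
# the one with the largest predicted error reduction.
candidate_features = rng.normal(size=(300, 12))
next_query = int(np.argmax(strategy.predict(candidate_features)))
```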
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Digital pathology is not only one of the most promising fields of diagnostic medicine, but at the same time a hot topic for fundamental research. Digital pathology is not just the transfer of histopathological slides into digital representations. The combination of different data sources (images, patient records, and *omics data) together with current advances in artificial intelligence/machine learning enable to make novel information accessible and quantifiable to a human expert, which is not yet available and not exploited in current medical settings. The grand goal is to reach a level of usable intelligence to understand the data in the context of an application task, thereby making machine decisions transparent, interpretable and explainable. The foundation of such an "augmented pathologist" needs an integrated approach: While machine learning algorithms require many thousands of training examples, a human expert is often confronted with only a few data points. Interestingly, humans can learn from such few examples and are able to instantly interpret complex patterns. Consequently, the grand goal is to combine the possibilities of artificial intelligence with human intelligence and to find a well-suited balance between them to enable what neither of them could do on their own. This can raise the quality of education, diagnosis, prognosis and prediction of cancer and other diseases. In this paper we describe some (incomplete) research issues which we believe should be addressed in an integrated and concerted effort for paving the way towards the augmented pathologist. 
<s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Despite the state-of-the-art performance for medical image segmentation, deep convolutional neural networks (CNNs) have rarely provided uncertainty estimations regarding their segmentation outputs, e.g., model (epistemic) and image-based (aleatoric) uncertainties. In this work, we analyze these different types of uncertainties for CNN-based 2D and 3D medical image segmentation tasks. We additionally propose a test-time augmentation-based aleatoric uncertainty to analyze the effect of different transformations of the input image on the segmentation output. Test-time augmentation has been previously used to improve segmentation accuracy, yet not been formulated in a consistent mathematical framework. Hence, we also propose a theoretical formulation of test-time augmentation, where a distribution of the prediction is estimated by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We compare and combine our proposed aleatoric uncertainty with model uncertainty. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) the test-time augmentation-based aleatoric uncertainty provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions, and 2) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Manual estimation of fetal Head Circumference (HC) from Ultrasound (US) is a key biometric for monitoring the healthy development of fetuses. Unfortunately, such measurements are subject to large inter-observer variability, resulting in low early-detection rates of fetal abnormalities. To address this issue, we propose a novel probabilistic Deep Learning approach for real-time automated estimation of fetal HC. This system feeds back statistics on measurement robustness to inform users how confident a deep neural network is in evaluating suitable views acquired during free-hand ultrasound examination. In real-time scenarios, this approach may be exploited to guide operators to scan planes that are as close as possible to the underlying distribution of training images, for the purpose of improving inter-operator consistency. We train on free-hand ultrasound data from over 2000 subjects (2848 training/540 test) and show that our method is able to predict HC measurements within 1.81$\pm$1.65mm deviation from the ground truth, with 50% of the test images fully contained within the predicted confidence margins, and an average of 1.82$\pm$1.78mm deviation from the margin for the remaining cases that are not fully contained. <s> BIB004
While DL methods have become the standard state-of-the-art approach for many medical image analysis tasks, they largely remain black-box methods where the end user has limited meaningful ways of interpreting model predictions. This is a significant hurdle in the deployment of DL enabled applications to safety-critical domains such as medical image analysis. We want models to be highly accurate and robust, but also explainable and interpretable. Recent EU law has led to the 'right to explanation', whereby any subject has the right to have automated decisions that have been made about them explained. This further highlights the need for transparent algorithms which we can reason about [Goodman and Flaxman (2016)]. It is important for users to understand how a certain decision has been made by the model, as even the most accurate and robust models are not infallible, and false or uncertain predictions must be identified so that trust in the model can be fostered and predictions are appropriately weighted in the clinical decision making process. It is vital that the end user, regulators and auditors all have the ability to contextualise automated decisions produced by DL models. Here we outline some different methods for providing interpretable ways of reasoning about DL models and their predictions. Typically DL methods can provide statistical metrics on the uncertainty of a model output; many of the uncertainty measures discussed in Section 2 are also used to aid interpretability. While uncertainty measures are important, they are not sufficient to foster complete trust in a DL model; the model should provide human-understandable justifications for its output that allow insights to be drawn about its inner workings. BIB001 discuss many of the core concerns surrounding model interpretability and highlight various works across the DL field that have demonstrated more sophisticated methods of making a DL model interpretable. Here we evaluate some of the works that have been applied to medical image segmentation and refer the reader to BIB002 for further reading on interpretability in the rest of the medical imaging domain. Oktay et al. (2018) introduce 'Attention Gating' to guide networks towards giving more 'attention' to certain image areas in a visually interpretable way, potentially aiding in the subsequent refinement of annotations. explore different uncertainty estimates for a U-Net based cardiac MRI segmentation in order to detect inaccurate segmentations, as knowing when a segmentation is less accurate can be useful to reduce downstream errors, and demonstrate that by setting a threshold on the quality of segmentations we can remove poor segmentations for manual correction. In BIB004 we propose a visual method for interpreting automated head circumference measurements from ultrasound images, using MC Dropout at test-time to acquire N head segmentations from which an upper and lower bound on the head circumference measurement is calculated in real-time. These bounds are displayed over the image to guide the sonographer towards views in which the model predicts with the most confidence. This upper and lower bound is presented as a measure of the unseen image's compliance with the model rather than as an uncertainty measure.
Finally, variance heuristics are proposed to quantify the confidence of a prediction in order to either accept or reject head circumference measurements, and it is shown that these can improve overall performance measures once 'rejected' images are removed. BIB003 propose using test-time augmentation to acquire a measure of aleatoric (image-based) uncertainty, compare their method with epistemic (model) uncertainty measures, and show that it provides a better uncertainty estimation than test-time dropout-based model uncertainty alone while reducing overconfident incorrect predictions. propose a novel interpretation method for histological Whole Slide image processing by combining a deep neural network with a Multiple Instance Learning branch to enhance the model's expressive power without guiding its attention. A logit heat-map of model activations is presented in order to interpret its decision-making process. Two expert pathologists provided feedback that the interpretability of the method has potential for integration into several clinical applications. Jungo and Reyes (2019) evaluate several different voxel-wise uncertainty estimation methods applied to medical image segmentation with respect to their reliability and limitations, and show that current uncertainty estimation methods perform similarly. Their results show that while uncertainty estimates may be well calibrated at the dataset level (capturing epistemic uncertainty), they tend to be mis-calibrated at a subject level (aleatoric uncertainty). This compromises the reliability of these uncertainty estimates and highlights the need to develop subject-wise uncertainty estimates. They show auxiliary networks to be a valid alternative to common uncertainty methods, as they can be applied to any previously trained segmentation model. Developing transparent systems will enable faster uptake in clinical practice, and including humans within deep learning clinical pipelines will ease the period of transition between current best practices and the breadth of possible enhancements that deep learning has to offer. We suggest that ongoing work on improving the interpretability of DL models will also have a positive impact on AL: as the majority of methods to improve interpretability are centred on providing uncertainty measures for a model's predictions, these same uncertainty measures can be used for AL selection strategies in place of those currently employed. As interpretability and uncertainty measures improve, we expect to see a similar improvement in AL frameworks as they incorporate the most promising uncertainty measures.
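The test-time-augmentation style of aleatoric uncertainty discussed above can be illustrated with a minimal sketch: perturb the input, predict, undo the spatial transform, and measure per-pixel disagreement across the augmented copies. The example below is a toy illustration only (a placeholder predict function, horizontal flips and intensity noise as the sole augmentations), not the formulation of BIB003.

```python
import numpy as np

def predict(image):
    """Stand-in for any trained model returning a foreground probability map."""
    return np.clip(image * 0.5 + 0.5, 0.0, 1.0)

def tta_uncertainty(image, n_samples=8, noise_std=0.05, rng=None):
    """Test-time augmentation: perturb the input, predict, undo the spatial
    transform, and measure per-pixel disagreement across the augmented copies."""
    if rng is None:
        rng = np.random.default_rng()
    preds = []
    for _ in range(n_samples):
        flip = bool(rng.integers(0, 2))                           # random horizontal flip
        aug = image[:, ::-1] if flip else image
        aug = aug + rng.normal(0.0, noise_std, size=image.shape)  # intensity noise
        p = predict(aug)
        preds.append(p[:, ::-1] if flip else p)                   # map prediction back
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)                  # mean map, per-pixel variance

mean_map, var_map = tta_uncertainty(np.random.default_rng(3).random((64, 64)))
```

The resulting variance map can be overlaid on the image, thresholded to flag low-confidence regions, or summarised into a scalar score for accept/reject decisions of the kind described above.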
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net . <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> For complex segmentation tasks, fully automatic systems are inherently limited in their achievable accuracy for extracting relevant objects. Especially in cases where only few data sets need to be processed for a highly accurate result, semi-automatic segmentation techniques exhibit a clear benefit for the user. One area of application is medical image processing during an intervention for a single patient. We propose a learning-based cooperative segmentation approach which includes the computing entity as well as the user into the task. Our system builds upon a state-of-the-art fully convolutional artificial neural network (FCN) as well as an active user model for training. 
During the segmentation process, a user of the trained system can iteratively add additional hints in form of pictorial scribbles as seed points into the FCN system to achieve an interactive and precise segmentation result. The segmentation quality of interactive FCNs is evaluated. Iterative FCN approaches can yield superior results compared to networks without the user input channel component, due to a consistent improvement in segmentation quality after each interaction. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes as input the user interactions with the initial segmentation and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show our method achieves a large improvement from automatic CNNs, and obtains comparable and even higher accuracy with fewer user interventions and less time compared with traditional interactive methods. <s> BIB004 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Measuring airways in chest computed tomography (CT) images is important for characterizing diseases such as cystic fibrosis, yet very time-consuming to perform manually. Machine learning algorithms offer an alternative, but need large sets of annotated data to perform well. We investigate whether crowdsourcing can be used to gather airway annotations which can serve directly for measuring the airways, or as training data for the algorithms. We generate image slices at known locations of airways and request untrained crowd workers to outline the airway lumen and airway wall. Our results show that the workers are able to interpret the images, but that the instructions are too complex, leading to many unusable annotations. After excluding unusable annotations, quantitative results show medium to high correlations with expert measurements of the airways. Based on this positive experience, we describe a number of further research directions and provide insight into the challenges of crowdsourcing in medical images from the perspective of first-time users. <s> BIB005 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. 
In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes. To address these problems, we propose a novel deep learning-based framework for interactive segmentation by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal MR slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only tumor cores in one MR sequence were annotated for training. Experimental results show that 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine-tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods. <s> BIB006 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Segmentation is one of the most important parts of medical image analysis. Manual segmentation is very cumbersome, time-consuming, and prone to inter-observer variability. Fully automatic segmentation approaches require a large amount of labeled training data and may fail in difficult or abnormal cases. In this work, we propose a new method for 2D segmentation of individual slices and 3D interpolation of the segmented slices. The Smart Brush functionality quickly segments the region of interest in a few 2D slices. Given these annotated slices, our adapted formulation of Hermite radial basis functions reconstructs the 3D surface. Effective interactions with less number of equations accelerate the performance and, therefore, a real-time and an intuitive, interactive segmentation of 3D objects can be supported effectively. The proposed method is evaluated on 12 clinical 3D magnetic resonance imaging data sets and are compared to gold standard annotations of the left ventricle from a clinical expert. The automatic evaluation of the 2D Smart Brush resulted in an average Dice coefficient of 0.88 ± 0.09 for the individual slices. For the 3D interpolation using Hermite radial basis functions, an average Dice coefficient of 0.94 ± 0.02 is achieved. <s> BIB007 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Automatic segmentation has great potential to facilitate morphological measurements while simultaneously increasing efficiency. Nevertheless often users want to edit the segmentation to their own needs and will need different tools for this. There has been methods developed to edit segmentations of automatic methods based on the user input, primarily for binary segmentations. Here however, we present an unique training strategy for convolutional neural networks (CNNs) trained on top of an automatic method to enable interactive segmentation editing that is not limited to binary segmentation. 
By utilizing a robot-user during training, we closely mimic realistic use cases to achieve optimal editing performance. In addition, we show that an increase of the iterative interactions during the training process up to ten improves the segmentation editing performance substantially. Furthermore, we compare our segmentation editing CNN (interCNN) to state-of-the-art interactive segmentation algorithms and show a superior or on par performance. <s> BIB008 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> An interactive image segmentation algorithm, which accepts user-annotations about a target object and the background, is proposed in this work. We convert user-annotations into interaction maps by measuring distances of each pixel to the annotated locations. Then, we perform the forward pass in a convolutional neural network, which outputs an initial segmentation map. However, the user-annotated locations can be mislabeled in the initial result. Therefore, we develop the backpropagating refinement scheme (BRS), which corrects the mislabeled pixels. Experimental results demonstrate that the proposed algorithm outperforms the conventional algorithms on four challenging datasets. Furthermore, we demonstrate the generality and applicability of BRS in other computer vision tasks, by transforming existing convolutional neural networks into user-interactive ones. <s> BIB009
Even if we can develop accurate, robust and interpretable models for medical image applications, we still cannot guarantee clinical-grade accuracy for every unseen data point presented to a model. The ability to generalise to unseen input is a cornerstone of deep learning applications, but in real-world distributions generalisation is rarely perfect. As such, methods to rectify these discrepancies must be built into applications used for medical image analysis. This iterative refinement must save the end user time and mental effort over performing manual annotation. Many interactive image segmentation systems have been proposed, and more recently these have built on the advances in deep learning to allow users to refine model outputs and feed the more accurate results back to the model for improvement. BIB003 introduced UI-Net, which builds on the popular U-Net architecture for medical image segmentation BIB001 . The UI-Net is trained with an active user model and allows users to interact with proposed segmentations by providing scribbles over the image to indicate areas that should or should not be included; the network is trained using simulated user interactions and as such responds to iterative user scribbles to refine a segmentation towards a more accurate result. Conditional Random Fields (CRFs) have been used in various tasks to encourage segmentation homogeneity. BIB002 propose CRF-RNN, a recurrent neural network which has the desirable properties of both CNNs and CRFs. BIB004 propose DeepIGeoS, an interactive geodesic framework for medical image segmentation. This framework uses two CNNs: the first performs an initial automatic segmentation, and the second takes the initial segmentation as well as user interactions with the initial segmentation to provide a refined result. They combine user interactions with CNNs through geodesic distance transforms BIB005 , and these user interactions are integrated as hard constraints into a Conditional Random Field, inspired by BIB002 . They call their two networks P-Net (initial segmentation) and R-Net (for refinement). They demonstrate superior results for segmentation of the placenta from 2D fetal MRI and brain tumors from 3D FLAIR images when compared to fully automatic CNNs. These segmentation results were also obtained in roughly a third of the time taken to perform the same segmentation with traditional interactive methods such as GeoS or ITK-SNAP. Graph Cuts have also been used in segmentation to incorporate user interaction: a user provides seed points to the algorithm (e.g. marking some pixels as foreground and others as background) and from these the segmentation is calculated. BIB006 propose BIFSeg, an interactive segmentation framework inspired by graph cuts. Their work introduces a deep learning framework for interactive segmentation by combining CNNs with a bounding box and scribble-based segmentation pipeline. The user provides a bounding box around the area they are interested in segmenting, which is then fed into their CNN to produce an initial segmentation prediction. The user can then provide scribbles to mark areas of the image as mis-classified, and these user inputs are weighted heavily in the calculation of the refined segmentation using their graph-cut-based algorithm.
BIB008 propose an alternative to BIFSeg in which two networks are trained: one to perform an initial segmentation (they use a CNN, but this initial segmentation could be performed with any existing algorithm) and a second network, which they call interCNN, that takes as input the image, some user scribbles and the initial segmentation prediction and outputs a refined segmentation. They show that over several iterations of user input the quality of the segmentations improves over the initial segmentation and achieves state-of-the-art performance in comparison to other interactive methods. The methods discussed above have so far been concerned with producing segmentations for individual images or slices; however, many segmentation tasks seek to extract the 3D shape/surface of a particular region of interest (ROI). BIB007 propose a dual method for producing segmentations in 3D based on a 'Smart Brush' 2D segmentation that the user guides towards a good result; after a few slices are segmented, these are transformed into a 3D surface shape using Hermite radial basis functions, achieving high accuracy. While this method does not use deep learning, it is a strong example of the ways in which interactive segmentation can be used to generate high-quality training data for use in deep learning applications; their approach is general and can produce segmentations for a large number of tasks. There is potential to incorporate deep learning into their pipeline to improve results and accelerate the interactive annotation process. BIB009 propose an interactive segmentation scheme, which accepts user annotations about a target object and the background, that generalises to any previously trained segmentation model. User annotations are converted into interaction maps by measuring the distance of each pixel to the annotated landmarks, after which the forward pass outputs an initial segmentation. The user-annotated points can be mis-segmented in the initial segmentation, so they propose BRS (a back-propagating refinement scheme) that corrects the mis-labelled pixels. They demonstrate that their algorithm outperforms conventional approaches on several datasets and that BRS can generalise to medical image segmentation tasks by transforming existing CNNs into user-interactive versions. In this section we focus on applications concerned with iteratively refining a segmentation towards a desired quality of output. In the scenarios above this is performed on an unseen image provided by the end user, but there is no reason the same approach could not be taken to generate iteratively more accurate annotations to be used in training, e.g., using active learning to select which samples to annotate next and iteratively refining the prediction made by the current model until a sufficiently accurate annotation is curated. This has the potential to accelerate annotation for training without any additional implementation overhead. Much work done in AL ignores the role of the oracle and merely assumes we can acquire an accurate label when we need it, but in practice this presents a more significant challenge. We foresee AL and HITL computing becoming more tightly coupled as AL research improves its consideration of the oracle providing the annotations.
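A common ingredient of the scribble-driven refinement networks discussed in this section is the encoding of user interactions as extra input channels, typically distance transforms of the scribbles concatenated with the image and the current prediction. The sketch below illustrates only that input construction (using a simple Euclidean distance transform rather than the geodesic distances of DeepIGeoS); it is not any specific published architecture, and all names are ours.

```python
import numpy as np
from scipy import ndimage

def scribbles_to_channel(scribble_mask, max_dist=50.0):
    """Convert a binary scribble mask into a normalised distance-transform channel.
    Pixels near a scribble get values close to 1, far pixels close to 0."""
    if scribble_mask.sum() == 0:
        return np.zeros_like(scribble_mask, dtype=np.float32)
    dist = ndimage.distance_transform_edt(~scribble_mask.astype(bool))
    return (1.0 - np.clip(dist / max_dist, 0, 1)).astype(np.float32)

def build_refinement_input(image, initial_seg, fg_scribbles, bg_scribbles):
    """Stack image, current prediction and user-interaction channels for a
    refinement network (one channel per interaction type)."""
    return np.stack([
        image.astype(np.float32),
        initial_seg.astype(np.float32),
        scribbles_to_channel(fg_scribbles),   # 'this should be foreground'
        scribbles_to_channel(bg_scribbles),   # 'this should be background'
    ])

h = w = 128
img = np.random.rand(h, w)
seg = (img > 0.5).astype(np.uint8)
fg = np.zeros((h, w), dtype=np.uint8); fg[60:64, 60:64] = 1   # simulated user scribble
bg = np.zeros((h, w), dtype=np.uint8)
net_input = build_refinement_input(img, seg, fg, bg)           # shape (4, 128, 128)
```

Training such a refinement network with simulated (robot-user) scribbles, as done in several of the works above, only requires repeatedly sampling scribbles from the disagreement between the current prediction and the ground truth and rebuilding this input stack.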
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> Multi-label active learning is a hot topic in reducing the label cost by optimally choosing the most valuable instance to query its label from an oracle. In this paper, we consider the poolbased multi-label active learning under the crowdsourcing setting, where during the active query process, instead of resorting to a high cost oracle for the ground-truth, multiple low cost imperfect annotators with various expertise are available for labeling. To deal with this problem, we propose the MAC (Multi-label Active learning from Crowds) approach which incorporate the local influence of label correlations to build a probabilistic model over the multi-label classifier and annotators. Based on this model, we can estimate the labels for instances as well as the expertise of each annotator. Then we propose the instance selection and annotator selection criteria that consider the uncertainty/diversity of instances and the reliability of annotators, such that the most reliable annotator will be queried for the most valuable instances. Experimental results demonstrate the effectiveness of the proposed approach. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> An active learner is given a hypothesis class, a large set of unlabeled examples and the ability to interactively query labels to an oracle of a subset of these examples; the goal of the learner is to learn a hypothesis in the class that fits the data well by making as few label queries as possible. ::: This work addresses active learning with labels obtained from strong and weak labelers, where in addition to the standard active learning setting, we have an extra weak labeler which may occasionally provide incorrect labels. An example is learning to classify medical images where either expensive labels may be obtained from a physician (oracle or strong labeler), or cheaper but occasionally incorrect labels may be obtained from a medical resident (weak labeler). Our goal is to learn a classifier with low error on data labeled by the oracle, while using the weak labeler to reduce the number of label queries made to this labeler. We provide an active learning algorithm for this setting, establish its statistical consistency, and analyze its label complexity to characterize when it can provide label savings over using the strong labeler alone. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> Measuring airways in chest computed tomography (CT) images is important for characterizing diseases such as cystic fibrosis, yet very time-consuming to perform manually. Machine learning algorithms offer an alternative, but need large sets of annotated data to perform well. We investigate whether crowdsourcing can be used to gather airway annotations which can serve directly for measuring the airways, or as training data for the algorithms. We generate image slices at known locations of airways and request untrained crowd workers to outline the airway lumen and airway wall. Our results show that the workers are able to interpret the images, but that the instructions are too complex, leading to many unusable annotations. After excluding unusable annotations, quantitative results show medium to high correlations with expert measurements of the airways. 
Based on this positive experience, we describe a number of further research directions and provide insight into the challenges of crowdsourcing in medical images from the perspective of first-time users. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the confusion matrices of the different annotators for classification settings. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling. <s> BIB004
Gold-standard annotations for medical image data are acquired by aggregating annotations from multiple expert oracles, but, as previously discussed, this is rarely feasible for large, complex datasets due to the expertise required to perform such annotations. Here we ask what effect on performance we might incur if we acquire labels from oracles without domain expertise, and what techniques we can use to mitigate the expected degradation of annotation quality when using non-expert oracles, so as to avoid any potential loss in accuracy. BIB001 and BIB002 propose active learning methods that assume data will be annotated by a crowd of non-expert or 'weak' annotators, and offer approaches to mitigate the introduction of bad labels into the data set. They simultaneously learn about the quality of individual annotators so that the most informative examples can be labelled by the strongest annotators. BIB003 explore using Amazon's MTurk to gather annotations of airways in CT images. Results showed that the novice oracles were able to interpret the images, but that the instructions provided were too complex, leading to many unusable annotations. Once the bad annotations were removed, the remaining annotations showed medium to high correlation with expert annotations, especially when aggregated. BIB004 describe an approach to assess the reliability of annotators in a crowd, and a crowd layer used to train deep models from noisy labels supplied by multiple annotators, internally capturing the reliability and biases of different annotators to achieve state-of-the-art results on several crowdsourced data-set tasks. We can see that by using a learned model of oracle annotation quality we can mitigate the effects of low-quality annotations and present the most challenging cases to the most capable oracles. By providing clear instructions we can lower the barriers for non-expert oracles to perform accurate annotation, but this is not generalisable and would be required for every new annotation task we wish to perform.
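As an illustration of the crowd-layer idea, the following is a minimal sketch (PyTorch assumed; a simplified head in the spirit of BIB004, not their implementation) that models each annotator with its own confusion matrix on top of a shared classifier:

import torch
import torch.nn as nn

class CrowdLayer(nn.Module):
    def __init__(self, num_classes, num_annotators):
        super().__init__()
        # one C x C matrix per annotator, initialised near the identity
        eye = torch.eye(num_classes).unsqueeze(0).repeat(num_annotators, 1, 1)
        self.confusion = nn.Parameter(eye)

    def forward(self, class_probs):            # class_probs: (batch, C) from the backbone
        # returns per-annotator label distributions: (batch, R, C)
        return torch.einsum('bc,rcd->brd', class_probs, self.confusion.softmax(dim=-1))

# Training idea: apply cross-entropy between the r-th output and annotator r's noisy label
# wherever annotator r labelled that example; the shared backbone then learns the underlying
# classes while each confusion matrix absorbs that annotator's reliability and bias.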
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Alternative Query Types <s> The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods. While typically established by a clinical expert rater, the increase in acquired imaging data renders traditional pixel-wise segmentations less feasible. In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used for training a fully convolutional neural network to address the problem of fetal brain segmentation in T2-weighted MR images. Using this approach we report encouraging results compared to highly targeted, fully supervised methods and potentially address a frequent problem impeding image analysis research. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Alternative Query Types <s> In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with bounding box annotations. It extends the approach of the well-known GrabCut method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Alternative Query Types <s> To efficiently establish training databases for machine learning methods, collaborative and crowdsourcing platforms have been investigated to collectively tackle the annotation effort. However, when this concept is ported to the medical imaging domain, reading expertise will have a direct impact on the annotation accuracy. In this study, we examine the impact of expertise and the amount of available annotations on the accuracy outcome of a liver segmentation problem in an abdominal computed tomography (CT) image database. In controlled experiments, we study this impact for different types of weak annotations. To address the decrease in accuracy associated with lower expertise, we propose a method for outlier correction making use of a weakly labelled atlas. Using this approach, we demonstrate that weak annotations subject to high error rates can achieve a similarly high accuracy as state-of-the-art multi-atlas segmentation approaches relying on a large amount of expert manual segmentations. Annotations of this nature can realistically be obtained from a non-expert crowd and can potentially enable crowdsourcing of weak annotation tasks for medical image analysis. <s> BIB003
Most segmentation tasks require pixel-wise annotations, but these are not the only type of annotation we can give an image. Segmentation can be performed with 'weak' annotations, which include image-level labels (e.g. modality, organs present) and annotations such as bounding boxes, ellipses or scribbles. It is argued that using 'weaker' annotation formulations can make the task easier for the human oracle, leading to more accurate annotations. 'Weak' annotations have been shown to perform well in several segmentation tasks. BIB002 demonstrate obtaining pixel-wise segmentations given a data-set of images with 'weak' bounding box annotations. They propose DeepCut, an architecture that combines a CNN with an iterative dense CRF formulation to achieve good accuracy while greatly reducing the annotation effort required. In a later study, BIB003 examine the impact of the expertise required for different 'weak' annotation types on the accuracy of liver segmentations. The results showed a decrease in accuracy with less expertise, as expected, across all annotation types. Despite this, segmentation accuracy was comparable to state-of-the-art performance when a weakly labelled atlas was used for outlier correction. The robust performance of their approach suggests 'weak' annotations from non-expert crowds could be used to obtain accurate segmentations on many different tasks; however, their use of an atlas makes this approach less generalisable than is desired. In BIB001 they examine using super-pixels to accelerate the annotation process. This approach uses a pre-processing step to acquire a super-pixel segmentation of each image; non-experts then perform the annotation by selecting which super-pixels are part of the target region. Results showed that the approach greatly reduces the annotation load on users. Non-expert annotation of 5000 slices was completed in under an hour by 12 annotators, compared to an expert taking three working days to establish the same with an advanced interface. The non-expert interface is web-based, demonstrating the potential of distributed annotation collection/crowd-sourcing. An encouraging aspect of this paper is that the results showed high performance on the segmentation task in question compared with expert annotation performance, although the approach may not be suitable for all medical image analysis tasks. It has been shown that we can develop high-performing models using weakly annotated data, and as weak annotations require less expertise to perform, they can be acquired faster and from a non-expert crowd with a smaller loss in accuracy than gold-standard annotations. This is very promising for future research, as datasets of weakly annotated data might be much easier and more cost-effective to curate.
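A minimal sketch of the super-pixel annotation workflow described above (in the spirit of BIB001; scikit-image >= 0.19 assumed, and the function and parameter names here are illustrative): the image is over-segmented once, and a non-expert only has to pick the super-pixels that belong to the target region.

import numpy as np
from skimage.segmentation import slic

def superpixel_mask(image, selected_ids, n_segments=400):
    # image: 2D grayscale array; selected_ids: super-pixel ids picked by the annotator
    sp = slic(image, n_segments=n_segments, channel_axis=None)  # over-segmentation
    return np.isin(sp, list(selected_ids))                      # binary mask from the clicks

# Usage idea: render `sp` as a clickable overlay in a web interface, record the ids the
# annotator selects, and use the resulting mask as a (weak) training label.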
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> Motion detection by the retina is thought to rely largely on the biophysics of starburst amacrine cell dendrites; here machine learning is used with gamified crowdsourcing to draw the wiring diagram involving amacrine and bipolar cells to identify a plausible circuit mechanism for direction selectivity; the model suggests similarities between mammalian and insect vision. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> In this study, we developed a novel system, called Gaze2Segment, integrating biological and computer vision techniques to support radiologists’ reading experience with an automatic image segmentation task. During diagnostic assessment of lung CT scans, the radiologists’ gaze information were used to create a visual attention map. Next, this map was combined with a computer-derived saliency map, extracted from the gray-scale CT images. The visual attention map was used as an input for indicating roughly the location of a region of interest. With computer-derived saliency information, on the other hand, we aimed at finding foreground and background cues for the object of interest found in the previous step. These cues are used to initiate a seed-based delineation process. The proposed Gaze2Segment achieved a dice similarity coefficient of 86% and Hausdorff distance of 1.45 mm as a segmentation accuracy. To the best of our knowledge, Gaze2Segment is the first true integration of eye-tracking technology into a medical image segmentation task without the need for any further user-interaction. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. 
However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach. Step 1: Screen captures of 19 publicly available radiologic images of assorted structures within various modalities were analyzed. ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Utilizing a similar approach, ET and HA masks for 356 publicly available T1-weighted postcontrast meningioma images were generated. Three hundred six of these image + mask pairs were used to train a CNN with U-net-based architecture. The remaining 50 images were used as the independent test set. Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training using both approaches, the ET approach performed virtually identically to HA on the test set of 50 images. The former had an area under the curve (AUC) of 0.88, while the latter had AUC of 0.87. ET and HA predictions had trimmed mean DSCs compared to the original HA maps of 0.73 and 0.74, respectively. These trimmed DSCs between ET and HA were found to be statistically equivalent with a p value of 0.015. We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from typical radiology clinical workflow. <s> BIB004
So far, the majority of Human-in-the-Loop methods assume a significant level of interaction from an oracle to annotate data and model predictions, but few consider the nature of the interface with which an oracle might interact with these images. The nature of medical images requires special attention when proposing distributed online platforms to perform such annotations. While the majority of techniques discussed so far have used pre-existing data labels in place of newly acquired ones to demonstrate their performance, it is important to consider the effect the actual interface might have on annotation accuracy. BIB002 propose a framework for the online classification of whole-slide images (WSIs) of tissues. Their interface enables users to rapidly build classifiers using an active learning process that minimises labelling effort, and they demonstrate the effectiveness of their solution for the quantification of glioma brain tumours. BIB003 propose a novel interface for the segmentation of images that tracks the user's gaze to initiate seed points for the segmentation of the object of interest as the only means of interaction with the image, achieving high segmentation performance. BIB004 extend this idea and compare eye-tracking-generated training samples to traditional hand-annotated training samples for training a DL model. They show that almost equivalent performance was achieved using annotations generated through eye tracking, and suggest that this approach might be applicable for rapidly generating training data. They acknowledge that improvements are still needed to integrate eye tracking into the typical clinical radiology workflow in a faster, more natural and less distracting way. Another line of work evaluates the player motivations behind EyeWire, an online game that asks a crowd of players to help segment neurons in a mouse brain. The gamification of this task has seen over 500,000 players sign up, and the segmentations acquired have gone on to be used in several research works BIB001. One of the most exciting things about gamification is that, when surveyed, users were motivated most by making a scientific contribution rather than by any potential monetary reward. However, this is very specialised towards this particular task and would be difficult to apply across other types of medical image analysis task. There are many different approaches to developing annotation interfaces, and the ones we consider above are just a few that have been applied to medical image analysis. As development increases we expect to see more online tools being used for medical image analysis, and the chosen format of the interface will play a large part in the usability and overall success of these applications.
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Variable Learning Costs <s> Deep learning for clinical applications is subject to stringent performance requirements, which raises a need for large labeled datasets. However, the enormous cost of labeling medical data makes this challenging. In this paper, we build a cost-sensitive active learning system for the problem of intracranial hemorrhage detection and segmentation on head computed tomography (CT). We show that our ensemble method compares favorably with the state-of-the-art, while running faster and using less memory. Moreover, our experiments are done using a substantially larger dataset than earlier papers on this topic. Since the labeling time could vary tremendously across examples, we model the labeling time and optimize the return on investment. We validate this idea by core-set selection on our large labeled dataset and by growing it with data from the wild. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Variable Learning Costs <s> For medical image segmentation, most fully convolutional networks (FCNs) need strong supervision through a large sample of high-quality dense segmentations, which is taxing in terms of costs, time and logistics involved. This burden of annotation can be alleviated by exploiting weak inexpensive annotations such as bounding boxes and anatomical landmarks. However, it is very difficult to \textit{a priori} estimate the optimal balance between the number of annotations needed for each supervision type that leads to maximum performance with the least annotation cost. To optimize this cost-performance trade off, we present a budget-based cost-minimization framework in a mixed-supervision setting via dense segmentations, bounding boxes, and landmarks. We propose a linear programming (LP) formulation combined with uncertainty and similarity based ranking strategy to judiciously select samples to be annotated next for optimal performance. In the results section, we show that our proposed method achieves comparable performance to state-of-the-art approaches with significantly reduced cost of annotations. <s> BIB002
When acquiring training data from various types of oracle, it is worth considering the relative cost associated with querying a particular oracle type for an annotation. We may wish to acquire more accurate labels from an expert oracle, but these are likely more expensive to obtain than labels from a non-expert oracle. The trade-off, of course, is the accuracy of the obtained label: less oracle expertise will likely result in a lower quality of annotation. Several methods have been proposed to model this and allow developers to trade off between cost and the overall accuracy of the acquired annotations. BIB001 propose a cost-sensitive active learning approach for intracranial haemorrhage detection. Since annotation time may vary significantly across examples, they model the annotation time and optimize the return on investment. They show their approach selects a diverse and meaningful set of samples to be annotated, relative to a uniform cost model, which mostly selects samples with massive bleeds that are time-consuming to annotate. BIB002 propose a budget-based cost-minimisation framework in a mixed-supervision setting (strong and weak annotations) via dense segmentations, bounding boxes, and landmarks. Their framework uses an uncertainty and a representativeness ranking strategy to select samples to be annotated next. They demonstrate state-of-the-art performance at a significantly reduced training budget, highlighting the important role that the choice of annotation type plays in the cost of acquiring training data. The above works each show an improved consideration for the economic burden incurred when curating training data. A valuable research direction would be to assess the effects of oracle expertise level, annotation type and image annotation cost in a unified framework, as these three factors are closely linked and may have a profound influence on each other.
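The selection principle behind such cost-sensitive approaches can be sketched in a few lines; the following is an illustrative example (all names are assumptions, not taken from BIB001 or BIB002) of ranking candidates by "return on investment", i.e. informativeness per unit of predicted annotation time, within a fixed budget:

def select_batch(candidates, uncertainty, predicted_time, budget_minutes):
    # candidates: sample ids; uncertainty/predicted_time: dicts keyed by sample id
    roi = {c: uncertainty[c] / max(predicted_time[c], 1e-6) for c in candidates}
    batch, spent = [], 0.0
    for c in sorted(candidates, key=lambda c: roi[c], reverse=True):
        if spent + predicted_time[c] <= budget_minutes:
            batch.append(c)
            spent += predicted_time[c]
    return batch  # samples to send to the oracle within the annotation budget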
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Generative Adversarial Networks <s> Abstract Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Generative Adversarial Networks <s> Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to limited images covering different disease types and severity. We propose an active learning (AL) framework to select most informative samples and add to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest xray images with different disease characteristics by conditioning its generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework is able to achieve state of the art performance by using about \(35\%\) of the full dataset, thus saving significant time and effort over conventional methods. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Generative Adversarial Networks <s> Image segmentation is an important task in many medical applications. Methods based on convolutional neural networks attain state-of-the-art accuracy; however, they typically rely on supervised training with large labeled datasets. Labeling medical images requires significant expertise and time, and typical hand-tuned approaches for data augmentation fail to capture the complex variations in such images. ::: We present an automated data augmentation method for synthesizing labeled medical images. We demonstrate our method on the task of segmenting magnetic resonance imaging (MRI) brain scans. Our method requires only a single segmented scan, and leverages other unlabeled scans in a semi-supervised approach. We learn a model of transformations from the images, and use the model along with the labeled example to synthesize additional labeled examples. Each transformation is comprised of a spatial deformation field and an intensity change, enabling the synthesis of complex effects such as variations in anatomy and image acquisition procedures. We show that training a supervised segmenter with these new examples provides significant improvements over state-of-the-art methods for one-shot biomedical image segmentation. Our code is available at this https URL. <s> BIB003
Generative Adversarial Network (GAN) based methods have been applied to several areas of medical imaging such as denoising, modality transfer and abnormality detection, but more relevant to AL has been the use of GANs for image synthesis, which offers an alternative (or addition) to the many data augmentation techniques used to expand limited data-sets BIB001. One approach proposes a conditional GAN (cGAN) based method for active learning that uses the discriminator D output as a measure of uncertainty of the proposed segmentations, and uses this metric to rank samples from the unlabelled data-set. From this ranking the most uncertain samples are presented to an oracle for segmentation and the least uncertain images are included in the labelled data-set as pseudo ground truth labels. The method's accuracy increases as the percentage of interactively annotated samples grows, reaching the performance of fully supervised benchmark methods using only 80% of the labels. This work also motivates the use of GAN discriminator scores as a measure of prediction uncertainty. BIB002 also use a cGAN, generating chest X-ray images conditioned on a real image and using a Bayesian neural network to assess the informativeness of each generated sample in order to decide whether it should be used as training data; if so, it is used to fine-tune the network. They demonstrate that the approach can achieve performance comparable to training on the fully annotated data using a dataset where only 33% of the pixels in the training set are annotated, offering a huge saving of time, effort and cost for annotators. BIB003 present an alternative method of data synthesis to GANs through the use of learned transformations. From a single manually segmented image, they leverage other un-annotated images in an SSL-like approach to learn a transformation model from the images, and use the model along with the labelled data to synthesise additional annotated samples. Transformations consist of spatial deformations and intensity changes, enabling the synthesis of complex effects such as anatomical and image acquisition variations. They train a model in a supervised way for the segmentation of MRI brain images and show state-of-the-art improvements over other one-shot bio-medical image segmentation methods. The above works demonstrate the power of using synthetic data conditioned on a very small amount of annotated data to generate new training samples that can be used to train a model to high accuracy. This is of great value to AL methods, where we usually require an initial training set to train a model on before we can employ a data selection policy. These methods also demonstrate the efficient use of labelled data and allow us to generate multiple training samples from an individually annotated image; this may allow the annotated data obtained in AL/Human-in-the-Loop methods to be used more effectively by generating multiple training samples for a single requested annotation, further reducing the annotation effort required to train state-of-the-art models.
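The discriminator-as-uncertainty idea can be sketched as follows (PyTorch assumed; the loader format and the conditional discriminator signature are assumptions for illustration, not the published implementation): samples whose predicted segmentations the discriminator finds least realistic are sent to the oracle, while the most confident predictions can be kept as pseudo-labels.

import torch

@torch.no_grad()
def rank_by_discriminator(segmenter, discriminator, unlabelled_loader, device="cpu"):
    scores = []
    for idx, image in unlabelled_loader:              # assumed to yield (sample id, image tensor)
        image = image.to(device)
        pred = segmenter(image)                       # proposed segmentation
        realism = discriminator(image, pred).mean()   # high score = looks like a real mask
        scores.append((idx, realism.item()))
    return sorted(scores, key=lambda s: s[1])         # least realistic first -> send to oracle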
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Transfer Learning <s> Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Transfer Learning <s> Cardiovascular disease (CVD) is the number one killer in the USA, yet it is largely preventable (World Health Organization 2011). To prevent CVD, carotid intima-media thickness (CIMT) imaging, a noninvasive ultrasonography method, has proven to be clinically valuable in identifying at-risk persons before adverse events. Researchers are developing systems to automate CIMT video interpretation based on deep learning, but such efforts are impeded by the lack of large annotated CIMT video datasets. CIMT video annotation is not only tedious, laborious, and time consuming, but also demanding of costly, specialty-oriented knowledge and skills, which are not easily accessible. To dramatically reduce the cost of CIMT video annotation, this paper makes three main contributions. Our first contribution is a new concept, called Annotation Unit (AU), which simplifies the entire CIMT video annotation process down to six simple mouse clicks. Our second contribution is a new algorithm, called AFT (active fine-tuning), which naturally integrates active learning and transfer learning (fine-tuning) into a single framework. AFT starts directly with a pre-trained convolutional neural network (CNN), focuses on selecting the most informative and representative AU s from the unannotated pool for annotation, and then fine-tunes the CNN by incorporating newly annotated AU s in each iteration to enhance the CNN’s performance gradually. 
Our third contribution is a systematic evaluation, which shows that, in comparison with the state-of-the-art method (Tajbakhsh et al., IEEE Trans Med Imaging 35(5):1299–1312, 2016), our method can cut the annotation cost by >81% relative to their training from scratch and >50% relative to their random selection. This performance is attributed to the several advantages derived from the advanced active, continuous learning capability of our AFT method. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Transfer Learning <s> In recent years, some convolutional neural networks (CNNs) have been proposed to segment sub-cortical brain structures from magnetic resonance images (MRIs). Although these methods provide accurate segmentation, there is a reproducibility issue regarding segmenting MRI volumes from different image domains – e.g., differences in protocol, scanner, and intensity profile. Thus, the network must be retrained from scratch to perform similarly in different imaging domains, limiting the applicability of such methods in clinical settings. In this paper, we employ the transfer learning strategy to solve the domain shift problem. We reduced the number of training images by leveraging the knowledge obtained by a pretrained network, and improved the training speed by reducing the number of trainable parameters of the CNN. We tested our method on two publicly available datasets – MICCAI 2012 and IBSR – and compared them with a commonly used approach: FIRST. Our method showed similar results to those obtained by a fully trained CNN, and our method used a remarkably smaller number of images from the target domain. Moreover, training the network with only one image from MICCAI 2012 and three images from IBSR datasets was sufficient to significantly outperform FIRST with (p < 0.001) and (p < 0.05), respectively. <s> BIB003
Transfer Learning (TL) and domain adaptation are branches of DL that aim to use pre-trained networks as a starting point for new applications. Given a network pre-trained for a particular task, it has been shown that this network can be 'fine-tuned' towards a target task from limited training data. BIB001 demonstrated the applicability of TL to a variety of medical image analysis tasks, and show that, despite the large differences between natural images and medical images, CNNs pre-trained on natural images and fine-tuned on medical images can perform better than medical CNNs trained from scratch. This performance boost was greater where fewer target-task training examples were available. Many of the methods discussed so far start with a network pre-trained on natural image data. One such line of work proposes AFT*, a platform that combines AL and TL to reduce annotation effort and aims at solving several problems within AL. AFT* starts with a completely empty labelled data-set, requiring no seed samples. A pre-trained CNN is used to seek 'worthy' samples for annotation and is gradually enhanced via continuous fine-tuning, with a number of steps taken to minimise the risk of catastrophic forgetting. Earlier work by the same authors applies a similar but less fully featured approach to several medical image analysis tasks to demonstrate that equivalent performance can be reached with a heavily reduced training data-set; these tasks are then used to evaluate several patterns of prediction that the network exhibits and how these relate to the choice of AL selection criteria. BIB002 have gone on to use their AFT framework for the annotation of CIMT videos, a clinical technique for the characterisation of cardiovascular disease. The extension into the video domain presents its own unique challenges, and they therefore propose the new concept of an Annotation Unit, reducing the annotation of a CIMT video to just six user mouse clicks; by combining this with their AFT framework they reduce annotation cost by 80% relative to training from scratch and by 50% relative to random selection of new samples to be annotated (and used for fine-tuning). BIB003 use TL for supervised domain adaptation for sub-cortical brain structure segmentation with minimal user interaction. They significantly reduce the number of training images needed from different MRI imaging domains by leveraging a pre-trained network, and improve training speed by reducing the number of trainable parameters in the CNN. They show their method achieves results similar to a fully trained CNN while using a remarkably small number of images from the target domain, and that even one image from the target domain was enough to outperform a commonly used baseline (FIRST). The above methods, and others discussed in this review, demonstrate the applicability of TL to reducing the number of annotated samples required to train a model on a new task from limited training data. By using pre-trained networks trained on annotated natural image data (of which there is an abundance) we can boost model performance and further reduce the annotation effort required to achieve state-of-the-art performance.
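A minimal fine-tuning sketch of the idea described above (PyTorch with torchvision >= 0.13 assumed; illustrative, not any of the cited authors' code): start from a network pre-trained on natural images, replace the classification head for the medical task, and optionally freeze the early layers so that only the new head (and later, deeper layers) is trained on the small labelled medical set.

import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes, freeze_backbone=True):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet weights
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False                   # keep the pre-trained features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task-specific head
    return model  # train on the small labelled set; layers can be unfrozen gradually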
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Continual Lifelong Learning and Catastrophic Forgetting <s> This work investigates continual learning of two segmentation tasks in brain MRI with neural networks. To explore in this context the capabilities of current methods for countering catastrophic forgetting of the first task when a new one is learned, we investigate elastic weight consolidation, a recently proposed method based on Fisher information, originally evaluated on reinforcement learning of Atari games. We use it to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions. Our findings show this recent method reduces catastrophic forgetting, while large room for improvement exists in these challenging settings for continual learning. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Continual Lifelong Learning and Catastrophic Forgetting <s> Abstract Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational learning systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration. <s> BIB002
In many of the scenarios described in this review, models continuously receive new annotations to be used for training, and in theory we could continue to retrain or fine-tune a model indefinitely, but is this practical and cost-effective? It is important to quantify the long-term effects of training a model with new data, to assess how the model changes over time and whether performance has improved or, worse, declined. Learning from continuous streams of data has proven more difficult than anticipated, often resulting in 'catastrophic forgetting' or 'interference' BIB002. We face the stability-plasticity dilemma. Approaches to avoiding catastrophic forgetting in neural networks when learning from continuous streams of data can be broadly divided among three conceptual strategies: a) retraining the whole network while regularising, to prevent forgetting of previously learned tasks; b) selectively training the network and expanding it if needed to represent new tasks; and c) retaining previous experience and using memory replay to learn in the absence of new input. We refer the reader to BIB002 for a more detailed overview of these approaches. BIB001 investigate continual learning of two MRI segmentation tasks with neural networks, aiming to counter catastrophic forgetting of the first task when a new one is learned. They investigate elastic weight consolidation, a method based on Fisher information, to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions, and demonstrate that this method reduces catastrophic forgetting, while acknowledging that there is large room for improvement in the challenging setting of continual learning. It is important to quantify the performance and robustness of a model at every stage of its lifespan. One possible stopping criterion would be to evaluate when the cost of continued training outweighs the cost of errors made by the current model. An existing measure that attempts to quantify the economic value of a medical intervention is the Quality-Adjusted Life Year (QALY), where one QALY equates to one year of healthy life NICE (2013). Could this metric be incorporated into models? At present we cannot quantify the cost of errors made by DL medical imaging applications, but doing so could lead to a deeper understanding of how accurate a DL model really ought to be. As models are trained on more of the end user's own data, will this cause the network to perform better on data from that user's system despite performing worse on the data the model was initially trained on? Catastrophic forgetting suggests this will be the case, but is this a bad thing? It may be beneficial for models to gradually bias themselves towards high performance on the end user's own data, even if this results in the model becoming less transferable to other data.
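For concreteness, a minimal sketch of the elastic weight consolidation (EWC) penalty investigated by BIB001 (PyTorch assumed; names of the dictionaries are illustrative): parameters that were important for the previous task (large Fisher information F) are anchored to their old values while the new task is learned, by adding lambda/2 * sum_i F_i (theta_i - theta*_i)^2 to the new task's loss.

def ewc_penalty(model, fisher, old_params, lam=1.0):
    # fisher / old_params: dicts of tensors keyed by parameter name, computed after the old task
    loss = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss   # total loss on the new task: L = L_new + ewc_penalty(...)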
Novel Testing Tools for a Cloud Computing Environment- A Review <s> INTRODUCTION <s> This paper provides a state-of-the-art review of cloud testing. Cloud computing, a new paradigm for developing and delivering computing applications and services, has gained considerable attention in recent years. Cloud computing can impact all software life cycle stages, including the area of software testing. TaaS (Testing as a Service) or cloud testing, which includes testing the cloud and testing using the cloud, is a fast developing area of research in software engineering. The paper addresses the following three areas: (1) general research in cloud testing, (2) specific cloud testing research, i.e., tools, methods, and systems under test, and (3) commercial TaaS tools and solutions.. <s> BIB001 </s> Novel Testing Tools for a Cloud Computing Environment- A Review <s> INTRODUCTION <s> Cloud computing is discipline which use everything as service that provide economic, convenient and on-demand services to requested end users and cloud service consumer. Building a cloud computing network is not an easy task. It requires lots of efforts and time. For this, there arises a concept called Cloud Engineering. Cloud engineering is a discipline that uses set of processes which help to engineer a cloud network. The structure and principles of cloud engineering plays an important role in the engineering of usable, economic and vibrant cloud. The cloud engineering use a cloud development life cycle (CDLC) which systematic developed cloud. Quality assurance and verification is an important and mandatory part of development cycle. Quality assurance ensures the quality and web service of cloud network. Cloud Verification is an irrespirable step in a development of an economic cloud computing solution of a network. Verify the performance, reliability, availability, elasticity and security of cloud network against the service level agreement with respect to specification, agreement and requirement. The work in this paper focuses on the Quality Assurance factors and parameters that influence quality. It also discuses quality of data used in a cloud. This paper proposes and explores the structure and its component used in verification process of a cloud. <s> BIB002
Cloud computing is a model for convenient and on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort [Prince Jain, 2012; BIB002]. The aim of cloud computing is to provide scalable and inexpensive on-demand computing infrastructures with good quality-of-service levels [Prince Jain]. Cloud testing is a form of testing in which web applications use a cloud computing environment and infrastructure to simulate real-world user traffic by using cloud technologies and solutions. Cloud testing basically aligns with the concepts of cloud and SaaS. Cloud testing provides the ability to test the cloud by using cloud infrastructure, such as hardware and bandwidth, that more closely simulates real-world conditions and parameters. In simple words, testing a cloud refers to the verification and validation of applications, environments and infrastructure that are available on demand, by conforming these to the expectations of the cloud computing business model [Prince Jain, 2012; BIB002]. Cloud testing is also defined as Testing as a Service (TaaS). TaaS is considered a new business and service model, in which a provider undertakes software testing activities of a given application in a cloud infrastructure for customers. TaaS can be used for the validation of various products owned by organizations that deal with testing products and services and that make use of a cloud-based licensing model for their clients. To build an economic, efficient and scalable cloud computing network, a good testing tool is needed. The number of test cases for a large-scale cloud computing system can range from several hundred to many thousands, requiring significant computing resources, infrastructure and lengthy execution times [Vinaya Kumar Mylavarapu, 2011; BIB001]. Software testing tools that are basically used for testing conventional applications are of little use when applied to cloud computing. A traditional software testing approach to testing the cloud incurs high costs to simulate user activity from different locations. Moreover, testing firewalls and load balancers involves expenditure on hardware, software and their maintenance [BIB001]. Traditional approaches reduce the execution time by excluding selected tests from the suite [Neha]. To test cloud-based software systems, techniques and tools are necessary that address quality aspects of the cloud infrastructure such as massive scalability and dynamic configuration. The tools can also be built on the cloud platform to benefit from the virtualized platform and services, massive resources, and parallelized execution. The major tools are discussed and explored in the following sections. Issues and characteristics of traditional testing tools used for cloud computing are discussed in Section 2.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> In this review paper, we make a detailed study of a class of directed graphs, known as tournaments. The reason they are called tournaments is that they represent the structure of round robin tournaments, in which players or teams engage in a game that cannot end in a tie and in which every player plays each other exactly once. Although tournaments are quite restricted structurally, they are realized by a great many empirical phenomena in addition to round robin competitions. For example, it is known that many species of birds and mammals develop dominance relations so that for every pair of individuals, one dominates the other. Thus, the digraph of the "pecking structure" of a flock of hens is asymmetric and complete, and hence a tournament. Still another realization of tournaments arises in the method of scaling, known as "paired comparisons." Suppose, for example, that one wants to know the structure of a person's preferences among a collection of competing brands of a product. He can be asked to indicate for each pair of brands which one he prefers. If he is not allowed to indicate indifference, the structure of his stated preferences can be represented by a tournament. Tournaments appear similarly in the theory of committees and elections. Suppose that a committee is considering four alternative policies. It has been argued that the best decision will be reached by a series of votes in which each policy is paired against each other. The outcome of these votes can be represented by a digraph whose points are policies and whose lines indicate that one policy defeated the other. Such a digraph is clearly a tournament. After giving some essential definitions, we develop properties that all tournaments display. We then turn our attention to transitive tournaments, namely those that are complete orders. It is well known that not all preference structures are transitive. There is considerable interest, therefore, in knowing how transitive any given tournament is. Such an index is presented toward the end of the second section. In the final section, we consider some properties of strongly connected tournaments. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> The main subjects of this survey paper are Hamitonian cycles, cycles of prescirbed lengths, cycles in tournaments, and partitions, packings, and coverings by cycles. Several unsolved problems and a bibiligraphy are included. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> Abstract We describe a polynomial algorithm, which either finds a Hamiltonian path with prescribed initial and terminal vertices in a tournament (in fact, in any semicomplete digraph), or decides that no such path exists. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> This paper presents polynomially bounded algorithms for finding a cycle through any two prescribed arcs in a semicomplete digraph and for finding a cycle through any two prescribed vertices in a complete k-partite oriented graph. It is also shown that the problem of finding a maximum transitive subtournament of a tournament and the problem of finding a cycle through a prescribed arc set in a tournament are both NP-complete. 
<s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> A digraph obtained by replacing each edge of a complete m-partite graph with an arc or a pair of mutually opposite arcs with the same end vertices is called a complete m-partite digraph. An $O ( n^3 )$ algorithm for finding a longest path in a complete m-partite $( m \geq 2 )$ digraph with n vertices is described in this paper. The algorithm requires time $O( n^{2.5} )$ in case of testing only the existence of a Hamiltonian path and finding it if one exists. It is simpler than the algorithm of Manoussakis and Tuza [SIAM J. Discrete Math., 3 (1990), pp. 537–543], which works only for $m = 2$. The algorithm implies a simple characterization of complete m-partite digraphs having Hamiltonian paths that was obtained for the first time in Gutin [Kibernetica (Kiev), 4 (1985), pp. 124–125] for $m = 2$ and in Gutin [Kibernetica (Kiev), 1(1988), pp. 107–108] for $ m \geq 2 $. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> Abstract A directed graph is called ( m , k )-transitive if for every directed path x 0 x 1 … x m there is a directed path y 0 y 1 … y k such that x 0 = y 0 , x m = y k , and { y i |0⩽ i ⩽ k } ⊂{ x i |0⩽ i ⩽ m }. We describe the structure of those ( m , 1)-transitive and (3,2)-transitive directed graphs in which each pair of vertices is adjacent by an arc in at least one direction, and present an algorithm with running time O( n 2 ) that tests ( m, k )-transitivity in such graphs on n vertices for every m and k =1, and for m =3 and k =2. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> Abstract A digraph obtained by replacing each edge of a complete multipartite graph by an arc or a pair of mutually opposite arcs with the same end vertices is called a complete multipartite graph. Such a digraph D is called ordinary if for any pair X , Y of its partite sets the set of arcs with both end vertices in X ∪ Y coincides with X × Y = {( x , y ): xϵX , yϵY } or Y × X or X × Y ∪ Y × X . We characterize all the pancyclic and vertex pancyclic ordinary complete multipartite graphs. Our charcterizations admit polynomial time algorithms. <s> BIB007
A digraph obtained by replacing each edge of a (simple) graph G by an arc (by an arc or a pair of mutually opposite arcs, respectively) with the same end vertices is called an orientation (biorientation, respectively) of G. Therefore, orientations of graphs have no opposite arcs, while biorientations may have them. The investigation of paths and cycles in tournaments, orientations of complete graphs, was initiated by Redei's theorem [63], derived in 1934: every tournament contains a Hamiltonian path. In 1959, P. Camion obtained necessary and sufficient conditions for the existence of a Hamiltonian cycle in a tournament: he proved that every strongly connected tournament has a Hamiltonian cycle. There are several survey articles BIB002 BIB001 (the second one contains results on general digraphs too) and a book by J. Moon in which the properties of tournaments are considered. J. Moon and J. A. Bondy were the first to consider cycles in the entire class of multipartite tournaments (orientations of complete multipartite graphs). Since the 1980s, mathematicians have studied cycles and paths in bipartite tournaments extensively. The first results were described in the survey by L. W. Beineke. In this period a number of results on the cycle and path structure of m-partite tournaments for m ≥ 3 were obtained. A survey describing these results, as well as recent results on cycles and paths in bipartite tournaments, is absent and seems to be needed. The aim of the present article is to fill in this gap and also to describe some theorems and algorithms on paths and cycles in tournaments which have been obtained recently. Note that part of the results given in the paper are formulated not for orientations of complete multipartite graphs (usually called multipartite tournaments) but for biorientations of them (called semicomplete multipartite digraphs). In particular, we give some theorems for semicomplete digraphs (biorientations of complete graphs) instead of the more restricted class of tournaments. The motivation for considering semicomplete multipartite digraphs rather than multipartite tournaments is the following. From a theoretical point of view there is no good reason to restrict investigation to digraphs having no opposite arcs when more general results may be available. Digraphs with opposite arcs are sometimes used in order to obtain results for digraphs without opposite arcs (see ). Moreover, total exclusion of opposite arcs does not allow adequate study of some practical digraph models (models in social choice theory, interconnection networks, etc.) BIB006. That is why there are numerous papers in which properties of semicomplete digraphs and of semicomplete bipartite and m-partite (m ≥ 3) digraphs were investigated (see, for example, BIB003 BIB004 BIB005 BIB007 BIB006). We hope that this survey will be successful in stimulating further research on the subject.
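As a small illustration of the insertion argument behind Redei's theorem mentioned above, the following Python sketch builds a Hamiltonian path in any tournament by inserting vertices one at a time; here `beats(u, v)` is an assumed orientation oracle returning True when the arc goes from u to v.

def hamiltonian_path(vertices, beats):
    path = []
    for v in vertices:
        # insert v before the first vertex on the current path that v beats
        for i, u in enumerate(path):
            if beats(v, u):
                path.insert(i, v)
                break
        else:
            path.append(v)   # v loses to every vertex on the path, so it goes last
    return path              # consecutive vertices are joined by forward arcs

# Example: the 3-cycle tournament 0 -> 1 -> 2 -> 0 yields the path [2, 0, 1] (2 -> 0 -> 1).
example = hamiltonian_path([0, 1, 2], lambda u, v: (u, v) in {(0, 1), (1, 2), (2, 0)})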
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract We characterize weakly Hamiltonian-connected tournaments and weakly panconnected tournaments completely and we apply these results to cycles and bypasses in tournaments with given irregularity, in particular, in regular and almost regular tournaments. We give a sufficient condition in terms of local and global connectivity for a Hamiltonian path with prescribed initial and terminal vertex. From this result we deduce that every 4-connected tournament is strongly Hamiltonian-connected and that every edge of a 3-connected tournament is contained in a Hamiltonian cycle of the tournament and we describe infinite families of tournaments demonstrating that these results are best possible. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> This clearly written , mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NPcomplete problems, more. All chapters are supplemented by thoughtprovoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering. Mathematicians wishing a self-contained introduction need look no further.—American Mathematical Monthly. 1982 ed. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract Let G be a directed graph whose edges are coloured with two colours. Call a set S of vertices of G independent if no two vertices of S are connected by a monochromatic directed path. We prove that if G contains no monochromatic infinite outward path, then there is an independent set S of vertices of G such that, for every vertex x not in S , there is a monochromatic directed path from x to a vertex of S . In the event that G is infinite, the proof uses Zorn's lemma. The last part of the paper is concerned with the case when G is a tournament. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> A graph is constructed to provide a negative answer to the following question of Bondy: Does every diconnected orientation of a complete k-partite (k ≥ 5) graph with each part of size at least 2 yield a directed (k + 1)-cycle? <s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> In this paper, the following results will be shown: 1 There is a Hamilton path and a cycle of length at least p —1 in any regular multipartite tournament of order p; (i) There is a longest path U O ,…, u t in any oriented graph such that d − (u O ) + d + (u t ) ≤ t. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> We give necessary and sufficient conditions in terms of connectivity and factors for the existence of hamiltonian cycles and hamiltonian paths and also give sufficient conditions in terms of connectivity for the existence of cycles through any two vertices in bipartite tournaments. 
<s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Efficient algorithms for finding Hamiltonian cycles, Hamiltonian paths, and cycles through two given vertices in bipartite tournaments are given. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract We prove that every k -partite tournament with at most one vertex of in-degree zero contains a vertex from which each other vertex can be reached in at most four steps. <s> BIB008 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract An n -partite tournament, n ≥2, or multipartite tournament is an oriented graph obtained by orienting each edge of a complete n -partite graph. The cycle structure of multipartite tournaments is investigated and properties of vertices with maximum score are studied. <s> BIB009 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> This paper presents polynomially bounded algorithms for finding a cycle through any two prescribed arcs in a semicomplete digraph and for finding a cycle through any two prescribed vertices in a complete k-partite oriented graph. It is also shown that the problem of finding a maximum transitive subtournament of a tournament and the problem of finding a cycle through a prescribed arc set in a tournament are both NP-complete. <s> BIB010 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> A digraph obtained by replacing each edge of a complete m-partite graph with an arc or a pair of mutually opposite arcs with the same end vertices is called a complete m-partite digraph. An $O ( n^3 )$ algorithm for finding a longest path in a complete m-partite $( m \geq 2 )$ digraph with n vertices is described in this paper. The algorithm requires time $O( n^{2.5} )$ in case of testing only the existence of a Hamiltonian path and finding it if one exists. It is simpler than the algorithm of Manoussakis and Tuza [SIAM J. Discrete Math., 3 (1990), pp. 537–543], which works only for $m = 2$. The algorithm implies a simple characterization of complete m-partite digraphs having Hamiltonian paths that was obtained for the first time in Gutin [Kibernetica (Kiev), 4 (1985), pp. 124–125] for $m = 2$ and in Gutin [Kibernetica (Kiev), 1(1988), pp. 107–108] for $ m \geq 2 $. <s> BIB011 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> For a graph G, letG?(G?) denote an orientation ofG having maximum (minimum respectively) finite diameter. We show that the length of the longest path in any 2-edge connected (undirected) graph G is precisely diam(G?). LetK(m l ,m 2,...,m n) be the completen-partite graph with parts of cardinalitiesm 1 m2, ?,m n . We prove that ifm 1 = m2 = ? =m n = m,n ? 3, then diam(K?(m1,m2,...,mn)) = 2, unless m=1 andn = 4. <s> BIB012 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> An n-partite tournament is an orientation of a complete n-partite graph. We show that if D is a strongly connected n-partite (n ≥ 3) tournament, then every partite set of D has at least one vertex, which lies on an m-cycle for all m in {3, 4,..., n}. This result extends those of Bondy (J. London Math. 
Soc.14 (1976), 277-282) and Gutin (J. Combin. Theory Ser. B58 (1993), 319-321). <s> BIB013
The following result is proved in BIB011 . Theorem 3.1 Let D be a SMD. Then 1) for any almost 1-diregular subgraph F of D there is a path P of D satisfying V (P ) = V (F ) (if F is a maximum almost 1-diregular subgraph each such path is a longest path of D); 2) there exists an O(n 3 ) algorithm for finding a longest path in D. The first half of Theorem 3.1 follows from the following: Lemma 3.2 Let D be a SMD and P , C a path and a cycle having no common vertices; then the subgraph of D induced by V (P ) ∪ V (C) contains a Hamiltonian path. In order to describe the algorithm mentioned in Theorem 3.1 we first consider a construction by N. Alon (cf. BIB011 ) which allows one to find efficiently a 1-diregular subgraph with maximum order of a given digraph D. Let B = B(D) be a bipartite weighted graph, such that (X, X ) is the partition of B, where X = V (D), X = {x : x ∈ X}; xy ∈ E(B), if and only if either (x, y) ∈ A(D) or x = y . The weight of an edge xy of B equals 1 if x = y and equals 2, otherwise. It is easy to see that solving the assignment problem for B (in time O(n 3 ), cf. BIB002 ) and, then, removing all the edges with weight 2 from the solution, we obtain a set of edges of B corresponding to some 1-diregular subgraph F of D of maximum order. For a cycle C and a vertex x on it, denote by C x the path obtained from C by deleting the arc ending at x. Now we are ready to describe the algorithm. Step 1. Construct the digraph D with be the cycles of F , and suppose x ∈ V (C 0 ) (it is easy to see that x ∈ F ). Find P = C 0 − x, and put Note that F is almost a 1-diregular subgraph of D of maximum order. We shall construct a path on all the vertices of F -this will clearly be a longest path. Step 2. If t = 0, then H := P , and we have finished. Otherwise put C := C t , t := t − 1. Let P = (x 1 , x 2 , ..., x m ), C = (y 1 , y 2 , ..., y k , y 1 ). Step , where z + is the vertex following z in C, and go back to Step 2. Analogously, if there exists y ∈ Γ + (x m ) ∩ V (C) put P := (P, C y ), and go back to Step 2. Step 4. For i = 1, 2, ..., m − 1; j = 1, 2, ..., k if (y j , x i+1 ), (x i , y j+1 ) ∈ A(D), then let P be the path containing the fragment of P from x 1 to x i , the path C y j+1 , and the fragment of P from x i+1 to x m . Go to Step 2. If none of Steps 2,3,4 can be applied, we go to Step 5 below . Step 5. then let P be the path containing the fragment of P from x 1 to x i−1 , the vertices y j+1 , x i , the fragment of C from y j+2 to y j , and the fragment of P from x i+1 to x m in the given order (the direct proof of Lemma 3.2 BIB011 consists of showing the existence of the above mentioned i, j = j(i) as well as the arcs (x i−1 , y j+1 ), (x i , y j+2 )). Go to Step 2. Lemma 3.2 can also be proved as a rather simple consequence of a sufficient condition for a SMD to be Hamiltonian, shown in (see Theorem 4.8 bellow). This proof of Lemma 3.2 provides a more complicated algorithm than Algorithm 3.3. Step 1 of Algorithm 3.3 can be executed in time O(n 3 ). All the other steps can be performed in time O(n 2 ). Using the maximum matching algorithm for bipartite graphs , one can test whether a digraph D contains a 1-diregular spanning subgraph F and find some F in case one exists in time O(n 2.5 / √ log n). This implies Corollary 3.4 was derived in as a generalization of the same theorem obtained for semicomplete bipartite digraphs in . Using a different approach, R. Häggkvist and Y. 
Manoussakis gave in BIB006 analogous characterization of bipartite tournaments having a Hamiltonian path. Y. Manoussakis and Z. Tuza constructed in BIB007 an O(n 2.5 / √ log n) algorithm for finding a Hamiltonian path in a bipartite tournament B (if B has a Hamiltonian path). Corollary 3.4 implies that any almost diregular or diregular multipartite tournament has a Hamiltonian path. This last result was also proved in BIB005 as a corollary of Theorem 4.13 (see below). Recently J. Bang-Jensen proved that Corollary 3.4 is also valid for arc-local tournament digraphs. The problem of deciding whether a tournament with two given vertices x and y, contains a Hamiltonian path with endvertices x, y (the order not specified) was solved by C. Thomassen BIB001 . It follows from his characterization that the existence of such a path for specified vertices An analogous characterization of all bipartite tournaments that have a Hamiltonian path between two prescribed vertices x, y was derived by J. Bang-Jensen and Y. Manoussakis in . The only difference between these two characterizations is in Condition 4: in BangJensen's and Manoussakis' theorem the set of forbidden digraphs is absolutely different from that of Theorem 3.5 and moreover infinite (see ). Both characterizations imply polynomial algorithms to decide the existence of a Hamiltonian path connecting two given vertices and find one (if it exists). In BIB001 C. Thomassen considered not only the problem of deciding if for a pair x, y of vertices there is a Hamiltonian path either from x to y or from y to x but also the stronger problem of deciding if there exists a Hamiltonian (x, y)-path. He proved that for every pair x, y of vertices of a 4-strongly connected tournament there is a Hamiltonian path starting at x and ending at y. In , the following conjecture was formulated. Conjecture 3.6 Let D be a 4-strongly connected ordinary MT (or bipartite tournament). The digraph D has a Hamiltonian path from x to y for any pair of vertices x, y of D if and only if D contains an (x, y)-path P such that D − P has a factor. The radius and diameter are important invariants of a digraph. H. Landau observed that the radius of any tournament is at most two and each vertex of maximum outdegree in it is a center. Obviously, any MT containing at least two vertices of indegree zero has an infinite radius. However, in case there are no such two vertices, the radius can be bounded, as shown in the following statement, proved in and, independently, in BIB008 . Theorem 3.7 Any MT with at most one vertex of indegree zero has radius r ≤ 4. B. Sands, N. Sauer and R. Woodrow BIB003 studied monochromatic paths in arc-coloured digraphs. In particular, they proved that every tournament whose arcs are coloured with two colours contains a vertex v such that for every other vertex w there exists a monochromatic (v, w)-path. They also showed the following: Theorem 3.8 Let T be a tournament whose arcs are coloured with three colours, and whose vertices can be partitioned into disjoint blocks such that (i) two vertices in different blocks are always connected by a red arc; (ii) two vertices in the same block are always connected by a blue or a green arc. Then there is a vertex v of T such that for every other vertex x of T there is a monochromatic path from v to x. It is easy to see that the last theorem follows from Theorem 3.7 and the first mentioned result of B. Sands, N. Sauer and R. Woodrow. It is easy to check that Theorem 3.7 holds for the entire class of SMD. V. 
Petrovic and C. Thomassen BIB008 pointed out that Theorem 3.7 can be extended to a larger class of oriented graphs (at the cost of modifying the constant 4). Theorem 3.9 Let G be a graph whose complement is a disjoint union of complete graphs, cycles and paths. Then every orientation of G with at most one vertex of indegree zero has radius at most 6. Unlike tournaments, a vertex of maximum outdegree in a MT is not necessarily a center as proved in BIB009 : Theorem 3.10 Let T be a strongly connected 3-partite tournament of order n ≥ 8. If v is a vertex of maximum outdegree in T , then ecc(v) is at most [n/2] and this bound is best possible. In the case of bipartite tournaments, it is possible to obtain more detailed results. In characterizations of vertices with eccentricity 1, 2, 3 or 4 were derived. Using these characterizations all bipartite tournaments with radius 1, 2, 3 or 4 were characterized. It is easy to see BIB012 , that if a graph G has an orientation with a finite diameter (i. e., if G has no bridges ), then the maximum diameter of such an orientation is equal to the length of the longest path in G. The problem of finding the minimum possible diameter of such an orientation is significantly more complicated. Denote by f (m 1 , m 2 , ..., m k ) the minimum possible diameter of a k-partite tournament with partite sets of sizes m 1 , m 2 , ..., m k . L. Soltes obtained the following result. , and otherwise A shorter proof of this result using the well known theorem of Sperner (cf. [1] ) is given in . In BIB012 , the following result dealing with k ≥ 3 was proved. 4 Cycles in semicomplete multipartite digraphs J. A. Bondy extended the above mentioned Moser's Theorem on k-partite (k ≥ 3) tournaments in the following form (this result was obtained independently in as well). In connection with the last statement he asked if the inequality m > k may be replaced by the equality m = k + 1. A negative answer to this question was obtained in (for details see ). The same counter-example (as in ) was found independently by R. Balakrishnan and P. Paulraja BIB004 . Consider the k-partite (k ≥ 3) tournament D k with the partite sets {x It is easy to see that D k (k ≥ 3) has no (k + 1)-cycle. In it was also proved that the inequality m > k (in Theorem 4.2 ) may be replaced by the inequality k + 1 ≤ m ≤ k + 2. In connection with Theorem 4.1 J. A. Bondy raised the question if some form of the corresponding generalization of Moon's Theorem is also true. He further gave an example showing that the last generalization is not true in general. In , and BIB013 the following three restricted generalizations of Moon's theorem were obtained. Note that the last theorem implies the previous one. W. Goddard, G. Kubicki, O. Oellermann and S. Tian BIB009 proved that every vertex of a strongly connected k-partite tournament T (k ≥ 3) belongs to a 3-cycle or a 4-cycle of T . Moreover, they obtained the following: In BIB007 and BIB010 , the problem of the existence of a cycle containing prescribed vertices in MTs is studied. In BIB010 , the following result is shown. J. Bang-Jensen, G. Gutin and J. Huang study the Hamiltonian cycle problem for SMDs. To describe the main result of , we need the following definitions. Let C and Z be two disjoint cycles in a digraph D. A vertex x ∈ C is called out-singular (in-singular)) with respect to such that C i has singular vertices with respect to C j and they are all out-singular, and C j contains singular vertices with respect to C i and they are all in-singular. 
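Before turning to the Hamiltonicity condition below, note that the radius bound of Theorem 3.7 lends itself to a quick empirical check. The sketch below is only an illustration, not part of the survey: the function names are mine, the generator simply orients every edge of a complete multipartite graph at random, and eccentricities are computed by breadth-first search. By Theorem 3.7 the assertion should never fire when at most one vertex has indegree zero.

```python
import random
from collections import deque

def random_multipartite_tournament(part_sizes, seed=0):
    """Orient every edge of a complete multipartite graph uniformly at random."""
    rng = random.Random(seed)
    parts, first = [], 0
    for size in part_sizes:
        parts.append(range(first, first + size))
        first += size
    succ = {v: set() for v in range(first)}
    for i, P in enumerate(parts):
        for Q in parts[i + 1:]:
            for u in P:
                for v in Q:
                    if rng.random() < 0.5:
                        succ[u].add(v)
                    else:
                        succ[v].add(u)
    return succ

def eccentricity(succ, source):
    """Largest directed distance from source; infinite if some vertex is unreachable."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values()) if len(dist) == len(succ) else float("inf")

def radius(succ):
    return min(eccentricity(succ, v) for v in succ)

checked = 0
for seed in range(200):
    T = random_multipartite_tournament([3, 3, 4], seed=seed)
    if sum(1 for v in T if not any(v in T[u] for u in T)) <= 1:
        assert radius(T) <= 4          # Theorem 3.7
        checked += 1
print(checked, "random 3-partite tournaments satisfied the radius bound")
```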
The main result of is the following sufficient condition for a SMD to be Hamiltonian. The following lemma is used in the proof of Theorem 4.8 in . It is useful in other proofs as well (see Theorems 5.5, 5.6). Lemma 4.9 Let F = C 1 ∪ C 2 ∪ · · · ∪ C t be a 1-diregular subgraph of maximum cardinality of a strongly connected SMD D , where C i is a cycle in D (1 ≤ i ≤ t). Let, also, F satisfy the following condition: for every pair 1 ≤ i < j ≤ t all arcs between C i and C j are oriented either from C i to C j or from C j to C i . Then D has a (longest) cycle of length |V (F )| and one can find such a cycle in time O(n 2 ) for a given subgraph F . In view of Theorem 4.8 the following statement seems to be true.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Conjecture 4.10 <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Conjecture 4.10 <s> In this paper, the following results will be shown: 1 There is a Hamilton path and a cycle of length at least p —1 in any regular multipartite tournament of order p; (i) There is a longest path U O ,…, u t in any oriented graph such that d − (u O ) + d + (u t ) ≤ t. <s> BIB002
There is a polynomial algorithm for the Hamiltonian cycle problem in SMDs. Theorem 4.8 provides a short proof of Lemma 3.2 and hence of Theorem 3.1, Theorems 5.3, 5.7 as well as of the following result originally obtained in . One of the interesting classes of MT is the set of diregular k-partite tournaments (k ≥ 2). Theorem 5.3 implies that every diregular bipartite tournament is Hamiltonian. This result was first obtained in BIB001 , . Moreover, C.-Q. Zhang BIB002 proved the following: Theorem 4.13 There is a cycle of length at least n − 1 in any diregular MT of order n. C.-Q. Zhang BIB002 formulated the following conjecture.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> The value of depth-first search or “backtracking” as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and at algorithm for finding the biconnected components of an undirect graph are presented. The space and time requirements of both algorithms are bounded by $k_1 V + k_2 E + k_3 $ for some constants $k_1 ,k_2 $, and $k_3 $, where V is the number of vertices and E is the number of edges of the graph being examined. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Abstract A tournament is a directed graph which contains exactly one arc joining each pair of vertices. We show that the number of tournaments on n ⩾ 4 vertices which contain exactly one Hamiltonian circuit equals F 2 n −6 , the (2 n − 6)-th Fibonacci number. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> We give necessary and sufficient conditions in terms of connectivity and factors for the existence of hamiltonian cycles and hamiltonian paths and also give sufficient conditions in terms of connectivity for the existence of cycles through any two vertices in bipartite tournaments. <s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Efficient algorithms for finding Hamiltonian cycles, Hamiltonian paths, and cycles through two given vertices in bipartite tournaments are given. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Abstract We give a simple algorithm to transform a Hamiltonian path in a Hamiltonian cycle, if one exists, in a tournament T of order n. Our algorithm is linear in the number of arcs, i.e., of complexity O(m)=O(n2) and when combined with the O(n log n) algorithm of [2] to find a Hamiltonian path in T, it yields an O(n2) algorithm for searching a Hamiltonian cycle in a tournament. Up to now, algorithms for searching Hamiltonian cycles in tournaments were of order O(n3) [3], or O(n2 log n) [5]. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Abstract A digraph D is said to satisfy the condition O( n ) if d T + ( u ) + d T − ( v ) ⩾ n whenever uv is not an arc of D . In this paper we prove the following results: If a p × q bipartite tournament T is strong and satisfies O( n ), then T contains a cycle of length at least min(2 n + 2, 2 p , 2 q , unless T is isomorphic to a specified family of graphs. 
As an immediate consequence of this result we conclude that each arc of a n × n bipartite tournament satisfying O( n ) is contained in cycles of lengths 4, 6, …, 2 n , except in a described case. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> We give anO(log4n)-timeO(n2)-processor CRCW PRAM algorithm to find a hamiltonian cycle in a strong semicomplete bipartite digraph,B, provided that a factor ofB (i.e., a collection of vertex disjoint cycles covering the vertex set ofB) is computed in a preprocessing step. The factor is found (if it exists) using a bipartite matching algorithm, hence placing the whole algorithm in the class Random-NC. <s> BIB008
In the survey by L. W. Beineke , the following sufficient conditions for a bipartite tournament to have a cycle of length at least 2r due to B. Jackson BIB003 are given: Theorem 5.1 Let T be a strongly connected BT with the property that for all vertices v and w, either (v, w) ∈ A(T ) or d Then T has a cycle of length at least 2r. This supplies a sufficient condition for a BT to be Hamiltonian by taking r = n/2. J. Z. Wang BIB007 showed the following result improving Theorem 5.1 in the Hamiltonian case. ) when m is odd or B( The following necessary and sufficient conditions for the existence of a Hamiltonian cycle in a semicomplete bipartite digraph have appeared in BIB004 BIB005 . Lemma 5.4 can be proved in a rather simple way using Theorem 4.8 . An algorithm for checking whether a SBD D contains a Hamiltonian cycle and finding one if D is Hamiltonian consists of the following steps. 1) Check whether D is strongly connected applying any O(n 2 )-time algorithm (e. g. the one in BIB001 ). If D is not strongly connected, then D is not Hamiltonian. 2) Find in D a maximum 1-diregular subgraph F (apply the construction described before Algorithm 3.3). If F is not a 1-difactor of D, then D is not Hamiltonian. 3) Construct a semicomplete digraph T = T (F ) as follows. The vertices of T are the cycles of F . A cycle C 1 of F dominates another cycle C 2 in T if and only if there is an arc in D from C 1 to C 2 . Find a Hamiltonian cycle H in T (F ) using the algorithm from BIB006 . 4) Transform H into a Hamiltonian cycle of D using Lemma 5.4. J. Bang-Jensen proved that Theorem 5.3 remains true for arc-local tournament digraphs. Recently, J. Bang-Jensen, M. El Haddad, Y. Manoussakis and T. Przytycka BIB008 obtained a random parallel algorithm for checking whether a SBD D has a Hamiltonian cycle and finding one (if there is) in time O(log 4 n) using a CRCW PRAM with O(n 2 ) processors (see, e. g., for the definition of CRCW PRAM). It follows from Theorem 4.11 that the first part of Theorem 5.3 cannot be extended to the entire set of semicomplete t-partite digraphs (t ≥ 3). M. Manoussakis and Y. Manoussakis determined the number of non-isomorphic BTs with 2m vertices containing a unique Hamiltonian cycle. Let h m be the number of such BTs. It is shown in that h 2 = h 3 = 1 and h m = 4h m−1 + h m−2 for m ≥ 4. R. J. Douglas gave a structural characterization of tournaments having a unique Hamiltonian cycle. This result implies a formula for the number s n of non-isomorphic tournaments of order n with a unique Hamiltonian cycle. This characterization as well as formula are rather complicated. M. R. Garey BIB002 later showed that s n could be expressed as a Fibonacci number (s n = f 2n−6 ); his derivation was based on Douglas's characterization. J. W. Moon obtained a direct proof of Garey's formula that is essentially independent of Douglas's characterization. We make the following trivial but useful observation. The length of a longest cycle in any digraph is equal to the maximum length of a longest cycle in its strongly connected components. Hence, solving the longest cycle problem, one may consider only strongly connected digraphs. In the following result which gives a complete solution of the longest cycle problem in the case SBDs was obtained. It easy to see that Theorem 5.5 follows from Lemmas 4.9, 5.4 and the construction described before Algorithm 3.3. The algorithm mentioned in Theorem 5.5 is just a modification of that described after Lemma 5.4. 
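The first two steps of this algorithm are easy to make concrete. The sketch below is only an illustration (the function names are mine, and SciPy's linear_sum_assignment is assumed to be available for the assignment step). Strong connectivity is tested with two breadth-first searches. For the maximum 1-diregular subgraph, the bipartite graph B(D) described before Algorithm 3.3 is encoded as a weight matrix: matching a vertex to its own copy (weight 1) leaves it uncovered, while matching it to the copy of an out-neighbour (weight 2) selects that arc, so a maximum-weight perfect matching, with its self edges discarded, yields the arcs of a maximum collection of disjoint cycles. Steps 3 and 4 (contracting the cycles to a semicomplete digraph T(F), finding a Hamiltonian cycle there, and expanding it via Lemma 5.4) are not shown.

```python
import numpy as np
from collections import deque
from scipy.optimize import linear_sum_assignment

def is_strongly_connected(n, arcs):
    """Step 1: two breadth-first searches, on the digraph and on its reverse."""
    succ = {v: [] for v in range(n)}
    pred = {v: [] for v in range(n)}
    for u, v in arcs:
        succ[u].append(v)
        pred[v].append(u)
    def reaches_all(adj):
        seen, queue = {0}, deque([0])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == n
    return reaches_all(succ) and reaches_all(pred)

def max_cycle_subgraph(n, arcs):
    """Step 2: cycles of a maximum 1-diregular subgraph via the assignment problem."""
    benefit = np.full((n, n), -10.0 * n)   # pairs that are neither arcs nor x = y: never chosen
    np.fill_diagonal(benefit, 1.0)         # x matched to its own copy: x stays uncovered
    for u, v in arcs:
        benefit[u, v] = 2.0                # x matched to the copy of y: the arc (x, y) is chosen
    rows, cols = linear_sum_assignment(benefit, maximize=True)
    succ = {u: v for u, v in zip(rows, cols) if u != v}
    cycles, seen = [], set()
    for start in succ:
        if start in seen:
            continue
        cycle, v = [], start
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            v = succ[v]
        cycles.append(cycle)
    return cycles

# A strongly connected bipartite tournament on partite sets {0, 1} and {2, 3}.
arcs = [(0, 2), (2, 1), (1, 3), (3, 0)]
print(is_strongly_connected(4, arcs))   # True
print(max_cycle_subgraph(4, arcs))      # a spanning cycle, e.g. [[0, 2, 1, 3]]
```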
Theorem 5.3 as well as Theorem 5.5 can be proved for the class of ordinary semicomplete t-partite digraphs (t ≥ 3) with minor alterations. Indeed, the following two claims hold. Let T = T (x 1 , ..., x n ) be a semicomplete digraph with V (T ) = {x 1 , ..., x n }, and let k i be non-negative integers (1 ≤ i ≤ n). A closed (k 1 , k 2 , ..., k n )-walk of T is a closed directed walk of T visiting each vertex x j no more than k j times (the first and the last vertices of a closed walk coincide and are considered as a single vertex).
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> In this review paper, we make a detailed study of a class of directed graphs, known as tournaments. The reason they are called tournaments is that they represent the structure of round robin tournaments, in which players or teams engage in a game that cannot end in a tie and in which every player plays each other exactly once. Although tournaments are quite restricted structurally, they are realized by a great many empirical phenomena in addition to round robin competitions. For example, it is known that many species of birds and mammals develop dominance relations so that for every pair of individuals, one dominates the other. Thus, the digraph of the "pecking structure" of a flock of hens is asymmetric and complete, and hence a tournament. Still another realization of tournaments arises in the method of scaling, known as "paired comparisons." Suppose, for example, that one wants to know the structure of a person's preferences among a collection of competing brands of a product. He can be asked to indicate for each pair of brands which one he prefers. If he is not allowed to indicate indifference, the structure of his stated preferences can be represented by a tournament. Tournaments appear similarly in the theory of committees and elections. Suppose that a committee is considering four alternative policies. It has been argued that the best decision will be reached by a series of votes in which each policy is paired against each other. The outcome of these votes can be represented by a digraph whose points are policies and whose lines indicate that one policy defeated the other. Such a digraph is clearly a tournament. After giving some essential definitions, we develop properties that all tournaments display. We then turn our attention to transitive tournaments, namely those that are complete orders. It is well known that not all preference structures are transitive. There is considerable interest, therefore, in knowing how transitive any given tournament is. Such an index is presented toward the end of the second section. In the final section, we consider some properties of strongly connected tournaments. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Abstract It is shown that if an oriented complete bipartite graph has a directed cycle of length 2 n , then it has directed cycles of all smaller even lengths unless n is even and the 2 n -cycle induces one special digraph. 
<s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> An activated gas reaction apparatus which comprises an activation chamber; feeding means for conducting feed gas into the activation chamber; microwave power-generating means for activating raw gas received in the activation chamber; and a reaction chamber provided apart from the activation chamber for reaction of activated gas, the activated gas reaction apparatus so constructed as to satisfy the following formula: <s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> We give necessary and sufficient conditions in terms of connectivity and factors for the existence of hamiltonian cycles and hamiltonian paths and also give sufficient conditions in terms of connectivity for the existence of cycles through any two vertices in bipartite tournaments. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Abstract We give several sufficient conditions on the half-degrees of a bipartite digraph for the existence of cycles and paths of various lengths. Some analogous results are obtained for bipartite oriented graphs and for bipartite tournaments. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Efficient algorithms for finding Hamiltonian cycles, Hamiltonian paths, and cycles through two given vertices in bipartite tournaments are given. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Abstract A digraph obtained by replacing each edge of a complete multipartite graph by an arc or a pair of mutually opposite arcs with the same end vertices is called a complete multipartite graph. Such a digraph D is called ordinary if for any pair X , Y of its partite sets the set of arcs with both end vertices in X ∪ Y coincides with X × Y = {( x , y ): xϵX , yϵY } or Y × X or X × Y ∪ Y × X . We characterize all the pancyclic and vertex pancyclic ordinary complete multipartite graphs. Our charcterizations admit polynomial time algorithms. <s> BIB008
Corollary 5.8 The maximum length of a closed (k 1 , k 2 , ..., k n )-walk of a strongly connected semicomplete digraph T (x 1 , x 2 , ..., x n ) is equal to the number of vertices of a maximum 1-diregular subgraph of the digraph D T (x L. Moser BIB001 and J. Moon strengthened the theorem of Camion mentioned above and proved, respectively, that a strongly connected tournament is pancyclic and, even, vertex pancyclic. The following characterizations of even pancyclic and vertex even pancyclic bipartite tournaments were derived in BIB003 and BIB004 , respectively. Note, that the last characterization was obtained independently in BIB005 as well. Theorem 5.9 A bipartite tournament is even pancyclic as well as vertex even pancyclic if and only if it is Hamiltonian and is not isomorphic to the bipartite tournament B(r, r, r, r) (r = 2, 3, . . .). Considering diregular bipartite tournaments, D. Amar and Y. Manoussakis BIB006 and, independently, J. Z. Wang showed the following: Theorem 5.10 An r-diregular BT is arc even pancyclic unless it is isomorphic to B(r, r, r, r). The analogous result for diregular tournaments was obtained by B. Alspach in (every diregular tournament is arc pancyclic). Z. M. Song studied complementary cycles in BTs which are similar to diregular ones. He proved the following: Theorem 5.11 Let R be a BT with 2k + 1 vertices in each partite set (k ≥ 4). If every vertex of R has outdegree and indegree at least k then for any vertex x in R, R contains a pair of disjoint cycles C, Q such that C includes x and the length of C is at most 6 unless R is isomorphic to B(k Observe that a characterization of even pancyclic (and vertex even pancyclic) semicomplete bipartite digraphs coincides with the above-mentioned one. Indeed, the result follows from the fact that any bipartite tournament obtained by the reorientation of an arc of B(r, r, r, r) is Hamiltonian, and so, vertex even pancyclic. Combining these results with the above described necessary and sufficient conditions for the existence of a Hamiltonian cycle in a semicomplete bipartite digraph (Theorem 5.3) we obtain a polynomial characterization for the above properties. A characterization of pancyclic (and vertex pancyclic) ordinary m-partite (m ≥ 3) tournaments was established in . As opposed to the characterization of even pancyclic semicomplete bipartite graphs the last one does not imply immediately a characterization of pancyclic (or vertex pancyclic) ordinary semicomplete m-partite digraphs. Indeed, there exist vertex pancyclic ordinary SMDs which contain no Hamiltonian ordinary multipartite tournaments as spanning subgraphs. Such examples are semicomplete m-partite digraphs S m,r with r vertices in each partite set but one and (m − 1)r vertices in the last one (r ≥ 1, m ≥ 3). A semicomplete m-partite digraph is called a complete m-partite digraph if it has the arcs (u, v), (v, u) for any pair u, v in distinct partite sets. S m,r is vertex pancyclic by Theorem 5.12 (see below) and it has no Hamiltonian ordinary m-partite tournament as a spanning subgraph since any Hamiltonian cycle of S m,r must alternate between the largest partite set and the other partite sets and hence it cannot be a subgraph of an ordinary multipartite tournament. An ordinary SMD D is called a zigzag digraph if it has more than four vertices and Observe that any cycle in such a graph has the same number, say s, of vertices from V 1 and V 2 and at least s vertices from V 3 ∪ · · · ∪ V k . Therefore, H has no prehamiltonian cycle, i.e. 
a cycle containing all vertices of H but one. Observe also that an ordinary 4-partite tournament with more than four vertices is not a pancyclic digraph. Indeed, the single (up to isomorphism) strongly connected tournament with four vertices has no closed directed walk of length five. The following characterization of pancyclic and vertex pancyclic ordinary SBD was obtained in BIB008 . ii) it has a 1-diregular spanning subgraph; iii) it is neither a zigzag digraph nor a 4-partite tournament with at least five vertices. 2) A pancyclic ordinary semicomplete k-partite digraph D is vertex pancyclic if and only if either i) k > 3 or ii) k = 3 and D contains two 2-cycles Z 1 , Z 2 such that Z 1 ∪ Z 2 has vertices in three partite sets. 3) There exists an O n 2.5 / √ log n algorithm for determining whether an ordinary semicomplete k-partite (k ≥ 3) digraph D is pancyclic (vertex pancyclic). The following result conjectured in BIB005 was proved in [15] . 2)There exists a O(n 3 ) algorithm to find a cycle through any set of k vertices in a k-strongly connected bipartite tournament. considered in BIB005 shows that Theorem 5.13 is best possible in terms of connectivity (B is non-k-vertex cyclic). J. Bang-Jensen and Y. Manoussakis [15] raised the following conjecture. Conjecture 5.14 For every fixed k there exists a polynomial algorithm to decide the existence of a cycle through a given set of k vertices in a BT and to find one if it exists. Y. Manoussakis and Z. Tuza BIB007 have already proved this conjecture for k = 2. The situation with k-cyclic ordinary SMD is better. J. Bang-Jensen, G. Gutin and J. Huang derived in a complete characterization of k-cyclic ordinary SMDs. They showed the following: B. Jackson BIB002 suggests that Kelly's conjecture remains valid for diregular BTs, i. e. he raises the following
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Fundamental concepts Connectedness Path problems Trees Leaves and lobes The axiom of choice Matching theorems Directed graphs Acyclic graphs Partial order Binary relations and Galois correspondences Connecting paths Dominating sets, covering sets and independent sets Chromatic graphs Groups and graphs Bibliography List of concepts Index of names. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Abstract Given a tournament with n vertices, we consider the number of comparisons needed, in the worst case, to find a permutation υ 1 υ 2 … υ n of the vertices, such that the results of the games υ 1 υ 2 , υ 2 υ 3 ,…, υ n −1 υ n match a prescribed pattern. If the pattern requires all arcs to go forwrd, i.e., υ 1 → υ 2 , υ 2 → υ 3 ,…, υ n −1 → υ n , and the tournament is transitive, then this is essentially the problem of sorting a linearly ordered set. It is well known that the number of comparisons required in this case is at least cn lg n , and we make the observation that O ( n lg n ) comparisons suffice to find such a path in any (not necessarily transitive) tournament. On the other hand, the pattern requiring the arcs to alternate backward-forward-backward, etc., admits an algorithm for which O ( n ) comparisons always suffice. Our main result is the somewhat surprising fact that for various other patterns the complexity (number of comparisons) of finding paths matching the pattern can be cn lg α n for any α between 0 and 1. Thus there is a veritable spectrum of complexities, depending on the prescribed pattern of the desired path. Similar problems on complexities of algorithms for finding Hamiltonian cycles in graphs and directed graphs have been considered by various authors, [2, pp. 142, 148, 149; 4]. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> An extensible interlocking structure suitable for tower cranes, scaffolding towers and the like is of multilateral cross-section and has two sets of main members which engage one another in end-to-end relation when the structure is extended. Tie members are pivoted to the main members, with the free ends of the tie members interlocking automatically with the main members during extension to provide diagonal bracing. On retraction the members may be stored on drums or in a rack. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> A tournament is a digraph T=(V,E) in which, for every pair of vertices, u & v, exactly one of (u,v), (v,u) is in E. Two classical theorems about tournaments are that every tournament has a Hamiltonian path, and every strongly connected tournament has a Hamiltonian cycle. Furthermore, it is known how to find these in polynomial time. In this paper we discuss the parallel complexity of these problems. Our main result is that constructing a Hamiltonian path in a general tournament and a Hamiltonian cycle in a strongly connected tournament are both in NC. In addition, we give an NC algorithm for finding a Hamiltonian path one fixed endpoint. In finding fast parallel algorithms, we also obtain new proofs for the theorems. 
<s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> A general method is presented for translating sorting by comparisons algorithms to algorithms that compute a Hamilton path in a tournament. The translation is based on the relation between minimal feedback sets and Hamilton paths in tournaments. It is proven that there is a one to one correspondence between the set of minimal feedback sets and the set of Hamilton paths. In the comparison model, all the tradeoffs for sorting between the number of processors and the number of rounds hold as well for computing Hamilton paths. For the CRCW model, with $O( n )$ processors, we show the following: (i) Two paths in a tournament can be merged in $O(\log \log n)$ time (Valiant’s algorithm [SIAM J. Comput., 4 (1975), pp. 348–355], (ii) a Hamilton path can be computed in $O(\log n)$ time (Cole’s algorithm). This improves a previous algorithm for computing a Hamilton path whose running time was $O(\log^2 n)$ using $O(n^2 )$ processors. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Abstract We give a simple algorithm to transform a Hamiltonian path in a Hamiltonian cycle, if one exists, in a tournament T of order n. Our algorithm is linear in the number of arcs, i.e., of complexity O(m)=O(n2) and when combined with the O(n log n) algorithm of [2] to find a Hamiltonian path in T, it yields an O(n2) algorithm for searching a Hamiltonian cycle in a tournament. Up to now, algorithms for searching Hamiltonian cycles in tournaments were of order O(n3) [3], or O(n2 log n) [5]. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Abstract We describe a polynomial algorithm, which either finds a Hamiltonian path with prescribed initial and terminal vertices in a tournament (in fact, in any semicomplete digraph), or decides that no such path exists. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> This paper presents polynomially bounded algorithms for finding a cycle through any two prescribed arcs in a semicomplete digraph and for finding a cycle through any two prescribed vertices in a complete k-partite oriented graph. It is also shown that the problem of finding a maximum transitive subtournament of a tournament and the problem of finding a cycle through a prescribed arc set in a tournament are both NP-complete. <s> BIB008 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> We propose a parallel algorithm which reduces the problem of computing Hamiltonian cycles in tournaments to the problem of computing Hamiltonian paths. The running time of our algorithm is O(log n) using O(n2/log n) processors on a CRCW PRAM, and O(log n log log n) on an EREW PRAM using O(n2/log n log log n) processors. As a corollary, we obtain a new parallel algorithm for computing Hamiltonian cycles in tournaments. This algorithm can be implemented in time O(log n) using O(n2/log n) processors in the CRCW model, and in time O(log2n) with O(n2/log n log log n) processors in the EREW model. <s> BIB009
What is the complexity of the Hamiltonian path and cycle problems in tournaments? The inductive classical proof of Redei's theorem gives at once a simple O(n 2 ) algorithm for the first problem. Since sorting corresponds to finding a Hamiltonian path in a transitive tournament, we have an O(n log n)-time algorithm in this case. P. Hell and M. Rosenfeld BIB002 obtained an algorithm with the same complexity solving the Hamiltonian path problem for any tournament. The well known proof of Moon's theorem provides an O(n 3 )-time algorithm for the Hamiltonian cycle problem. Y. Manoussakis BIB006 constructed an O(n 2 )-time algorithm for this problem. A parallel algorithm A for a problem with size n is called an N C-algorithm if there are constants k, l such that A can be performed in time O(log k n) on an O(n l ) processors PRAM. We refer the reader to for a discussion of N C-algorithms. D. Soroker BIB004 studies the parallel complexity of the above mentioned problems. He proved the following: Theorem 6.1 There are N C-algorithms for the Hamiltonian path and Hamiltonian cycle problems in tournaments. Another N C -algorithm for the Hamiltonian path problem in tournaments has been obtained by J. Naor BIB003 . As to the Hamiltonian path problem for tournaments, the most effective parallel algorithm is due to A. Bar-Noy and J. Naor BIB005 . They constructed an algorithm performed in time O(log n) on an O(n) processors CRCW PRAM for a tournament containing n vertices. The most effective parallel algorithm for the Hamiltonian cycle problem for tournaments is due to E. Bampis, M. El Haddad, Y. Manoussakis and M. Santha BIB009 . They found a fast parallel procedure which transforms the Hamiltonian cycle problem into the Hamiltonian path one in the following sense: Given a Hamiltonian path in a tournament as input, the procedure constructs a Hamiltonian cycle. The parallel running time of the procedure is O(log n) using O(n 2 / log n) processors in the CRCW model. J. Bang-Jensen, Y. Manoussakis and C. Thomassen BIB007 obtained a polynomial algorithm solving the problem (which appears in BIB001 BIB004 ) of deciding the existence of a Hamiltonian path with prescribed initial and terminal vertices in a tournament. Obviously, the last problem is equivalent to the problem of existence of a Hamiltonian cycle containing a given arc in a tournament. They raised the following: Conjecture 6.2 For each fixed k, there exists a polynomial algorithm for deciding if there exists a Hamiltonian cycle through k prescribed arcs in a tournament. The k-arc cyclic problem is the following: Given k distinct arcs in a digraph D, decide whether D has a cycle through all the arcs. Bang-Jensen and Thomassen BIB008 considered this problem for semicomplete digraphs. They proved: Theorem 6.3 There exists a polynomial algorithm for deciding if two independent arcs lie on a common cycle in a semicomplete digraph. They also showed that if k is part of the input then the above problem is N P -complete.
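One standard way to reach the O(n log n) comparison bound mentioned above, in the spirit of the sorting analogy (though not necessarily the exact method of BIB002), is binary insertion: given a path v1 ... vk already built and a new vertex x, either x dominates v1, or vk dominates x, or a binary search finds consecutive vertices with vi dominating x and x dominating vi+1, and x is inserted between them. A minimal sketch (the names are mine; beats(u, v) stands for the tournament's orientation query):

```python
def hamiltonian_path(vertices, beats):
    """Build a Hamiltonian path of a tournament with O(n log n) orientation queries.

    beats(u, v) must return True iff the arc between u and v goes from u to v;
    exactly one of beats(u, v), beats(v, u) holds for each pair of distinct vertices.
    """
    path = []
    for x in vertices:
        if not path:
            path.append(x)
        elif beats(x, path[0]):            # x dominates the first vertex of the path
            path.insert(0, x)
        elif beats(path[-1], x):           # the last vertex of the path dominates x
            path.append(x)
        else:
            lo, hi = 0, len(path) - 1      # invariant: path[lo] -> x and x -> path[hi]
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if beats(path[mid], x):
                    lo = mid
                else:
                    hi = mid
            path.insert(hi, x)             # now path[lo] -> x -> path[hi]
    return path

# Example: the transitive tournament on {0, ..., 6} with the single arc 5 -> 2 reversed.
def beats(u, v):
    if {u, v} == {2, 5}:
        return u == 5
    return u < v

p = hamiltonian_path(range(7), beats)
assert all(beats(p[i], p[i + 1]) for i in range(len(p) - 1))
print(p)
```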
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 6.4 <s> We consider the problem of finding two disjoint directed paths with prescribed ends in a semicomplete digraph. The problem is NP - complete for general digraphs as proved in [4]. We obtain best possible sufficient conditions in terms of connectivity for a semicomplete digraph to be 2-linked (i.e., it contains two disjoint paths with prescribed ends for any four given endvertices). We also consider the algorithmically equivalent problem of finding a cycle through two given disjoint edges in a semicomplete digraph. For this problem it is shown that if D is a 5–connected semicomplete digraph, then D has a cycle through any two disjoint edges, and this result is best possible in terms of the connectivity. In contrast to this we prove that if T is a 3–connected tournament, then T has a cycle through any two disjoint edges. This is best possible, too. Finally we give best possible sufficient conditions in terms of local connectivities for a tournament to contain a cycle through af given pair of disjoint edges. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 6.4 <s> Abstract A directed graph is called ( m , k )-transitive if for every directed path x 0 x 1 … x m there is a directed path y 0 y 1 … y k such that x 0 = y 0 , x m = y k , and { y i |0⩽ i ⩽ k } ⊂{ x i |0⩽ i ⩽ m }. We describe the structure of those ( m , 1)-transitive and (3,2)-transitive directed graphs in which each pair of vertices is adjacent by an arc in at least one direction, and present an algorithm with running time O( n 2 ) that tests ( m, k )-transitivity in such graphs on n vertices for every m and k =1, and for m =3 and k =2. <s> BIB002
The k-arc cyclic problem is N P -complete even for semicomplete and semicomplete bipartite digraphs. Sufficient conditions for semicomplete digraphs and tournaments to be 2-arc cyclic are studied in BIB001 where the following theorem is proved. Theorem 6.5 Every 5-strongly connected semicomplete digraph is 2-arc cyclic; every 3-connected tournament is 2-arc cyclic. In BIB001 it is noted that both results are best possible in terms of the required connectivity. A digraph D is said to be transitive if (x, y), (y, z) ∈ A(D) implies (x, z) ∈ A(D). This notion has been generalized by F. Harary (cf. BIB002 ) as follows: D is (m, k)-transitive (m > k ≥ 1) if for every path P of length m there exists a path Q of length k with the same endvertices, such that V (Q) ⊂ V (P ). A. Gyárfás, M.S. Jacobson and L.F. Kinch studied (m, k)-transitivity and obtained a characterization of (m, 1)-transitive tournaments for m ≥ 2. Using another approach, Z. Tuza BIB002 characterized (m, 1)-transitive semicomplete digraphs for every m ≥ 2 and k = 1. Obviously, this characterization provides an O(n 2 )-time algorithm for finding the minimum m such that a given semicomplete digraph D is (m,1)-transitive. Z. Tuza BIB002 also obtained a characterization of (3,2)-transitive semicomplete digraphs.
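For small digraphs, (m, 1)-transitivity can be tested directly from the definition, without any structural characterization: enumerate every directed path on m + 1 distinct vertices and check that the arc from its first vertex to its last is present. The brute-force sketch below (exponential in m; the function names are mine) is only meant to make the definition concrete; the O(n^2) tests mentioned above are far more efficient.

```python
def is_m1_transitive(n, arcs, m):
    """Brute-force test of (m, 1)-transitivity: every directed path x0 x1 ... xm
    on m + 1 distinct vertices must be shortcut by the single arc (x0, xm)."""
    arcset = set(arcs)
    succ = {v: [w for w in range(n) if (v, w) in arcset] for v in range(n)}

    def every_extension_is_shortcut(path):
        if len(path) == m + 1:
            return (path[0], path[-1]) in arcset
        return all(every_extension_is_shortcut(path + [w])
                   for w in succ[path[-1]] if w not in path)

    return all(every_extension_is_shortcut([v]) for v in range(n))

# The transitive tournament on 4 vertices (arc u -> v whenever u < v) is (2,1)-transitive,
# while the directed 3-cycle 0 -> 1 -> 2 -> 0 is not.
transitive4 = [(u, v) for u in range(4) for v in range(4) if u < v]
print(is_m1_transitive(4, transitive4, 2))               # True
print(is_m1_transitive(3, [(0, 1), (1, 2), (2, 0)], 2))  # False: no arc 0 -> 2
```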
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> The number of paths and cycles in MTs <s> This invention relates to a printing device of the type having a carriage which supports a printing roll. In order to perform the printing operation, the carriage is pivotally and slidably mounted on a slide rod. An arm of the carriage is mounted in a housing and is provided with a guide roller that is located in the same vertical plane as the printing roll. The guide roller cooperates with a guide means that extends parallel to the slide rod to keep the printing roll in engagement with a printing anvil and to enable the printing roll to swing upwardly into a raised position when arriving at one of the ends of the slide rod. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> The number of paths and cycles in MTs <s> Solving an old conjecture of Szele we show that the maximum number of directed Hamiltonian paths in a tournament onn vertices is at mostc · n3/2· n!/2n−1, wherec is a positive constant independent ofn. <s> BIB002
The main problems in the topic of this section are the following: 1) Find the maximum possible number of s-cycles (s-paths) in a MT with a given number of vertices in each partite set. 2) Find the minimum possible number of s-cycles in a strongly connected MT with a given number of vertices in each partite set. These problems were completely solved only in some special cases. The first problem (on cycles) was solved for tournaments when s = 3, 4 and for BTs when m = 4 (cf. ). Solving an old conjecture of T. Szele , N. Alon BIB002 showed: Theorem 7.1 The maximum number, P (n), of Hamiltonian paths in a tournament on n vertices satisfies P (n) ≤ c n^{3/2} n!/2^{n-1}, where c is independent of n. The short proof of Theorem 7.1 is based on Minc's Conjecture BIB001 on permanents of (0,1)-matrices, proved by Bregman . Szele proved that P (n) ≥ n!/2^{n-1}, and hence the gap between the upper and lower bounds for P (n) is only a factor of O(n^{3/2}). It would be interesting to close this gap and determine P (n) up to a constant factor. The second problem was completely solved for tournaments for all s and for BTs when s = 4 (cf. ). For MTs the following two results were obtained. Theorem 7.2 Let T be a strongly connected m-partite tournament, m ≥ 3. Then T contains at least m − 2 3-cycles. Theorem 7.3 Let G be a complete m-partite graph, m ≥ 3, which is not isomorphic to K 2,2,...,2 for odd m. Then there exists a strong orientation of G with exactly m − 2 3-cycles.
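Szele's lower bound comes from an averaging argument: in a random tournament each of the n! orderings of the vertices is a Hamiltonian path with probability 2^(-(n-1)), so the expected number of Hamiltonian paths is n!/2^(n-1), and some tournament must attain at least this value. The brute-force sketch below (feasible only for small n; the names are mine) counts Hamiltonian paths in random tournaments and compares the sample mean with that expectation.

```python
import math
import random
from itertools import permutations

def count_hamiltonian_paths(n, beats):
    """Count the directed Hamiltonian paths of a tournament by brute force."""
    return sum(all(beats[p[i]][p[i + 1]] for i in range(n - 1))
               for p in permutations(range(n)))

def random_tournament(n, rng):
    """Orient each edge of the complete graph on n vertices independently at random."""
    beats = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 0.5:
                beats[u][v] = True
            else:
                beats[v][u] = True
    return beats

n, trials, rng = 7, 50, random.Random(1)
counts = [count_hamiltonian_paths(n, random_tournament(n, rng)) for _ in range(trials)]
print("sample mean of P over random tournaments:", sum(counts) / trials)
print("Szele's expectation n!/2^(n-1):", math.factorial(n) / 2 ** (n - 1))   # 78.75
```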
Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> Abstract A conceptual analysis of the classical information theory of Shannon (1948) shows that this theory cannot be directly generalized to the usual quantum case. The reason is that in the usual quantum mechanics of closed systems there is no general concept of joint and conditional probability. Using, however, the generalized quantum mechanics of open systems (A. Kossakowski 1972) and the generalized concept of observable (“semiobservable”, E.B. Davies and J.T. Lewis 1970) it is possible to construct a quantum information theory being then a straightforward generalization of Shannon's theory. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> On the program it says this is a keynote speech--and I don't know what a keynote speech is. I do not intend in any way to suggest what should be in this meeting as a keynote of the subjects or anything like that. I have my own things to say and to talk about and there's no implication that anybody needs to talk about the same thing or anything like it. So what I want to talk about is what Mike Dertouzos suggested that nobody would talk about. I want to talk about the problem of simulating physics with computers and I mean that in a specific way which I am going to explain. The reason for doing this is something that I learned about from Ed Fredkin, and my entire interest in the subject has been inspired by him. It has to do with learning something about the possibilities of computers, and also something about possibilities in physics. If we suppose that we know all the physical laws perfectly, of course we don't have to pay any attention to computers. It's interesting anyway to entertain oneself with the idea that we've got something to learn about physical laws; and if I take a relaxed view here (after all I 'm here and not at home) I'll admit that we don't understand everything. The first question is, What kind of computer are we going to use to simulate physics? Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made. Therefore my question is, Can physics be simulated by a universal computer? I would like to have the elements of this computer locally interconnected, and therefore sort of think about cellular automata as an example (but I don't want to force it). But I do want something involved with the <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> We show that a set of gates that consists of all one-bit quantum gates (U(2)) and the two-bit exclusive-or gate (that maps Boolean values (x,y) to (x,x ⊕y)) is universal in the sense that all unitary operations on arbitrarily many bits n (U(2 n )) can be expressed as compositions of these gates. We investigate the number of the above gates required to implement other gates, such as generalized Deutsch-Toffoli gates, that apply a specific U(2) transformation to one input bit if and only if the logical AND of all remaining input bits is satisfied. These gates play a central role in many proposed constructions of quantum computational networks. 
We derive upper and lower bounds on the exact number of elementary gates required to build up a variety of two- and three-bit quantum gates, the asymptotic number required for n-bit Deutsch-Toffoli gates, and make some observations about the number required for arbitrary n-bit unitary operations. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> We explore quantum search from the geometric viewpoint of a complex projective space $CP$, a space of rays. First, we show that the optimal quantum search can be geometrically identified with the shortest path along the geodesic joining a target state, an element of the computational basis, and such an initial state as overlaps equally, up to phases, with all the elements of the computational basis. Second, we calculate the entanglement through the algorithm for any number of qubits $n$ as the minimum Fubini-Study distance to the submanifold formed by separable states in Segre embedding, and find that entanglement is used almost maximally for large $n$. The computational time seems to be optimized by the dynamics as the geodesic, running across entangled states away from the submanifold of separable states, rather than the amount of entanglement itself. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> Scalability of a quantum computation requires that the information be processed on multiple subsystems. However, it is unclear how the complexity of a quantum algorithm, quantified by the number of entangling gates, depends on the subsystem size. We examine the quantum circuit complexity for exactly universal computation on many d-level systems (qudits). Both a lower bound and a constructive upper bound on the number of two-qudit gates result, proving a sharp asymptotic of theta(d(2n)) gates. This closes the complexity question for all d-level systems (d finite). The optimal asymptotic applies to systems with locality constraints, e.g., nearest neighbor interactions. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> Finally, here is a modern, self-contained text on quantum information theory suitable for graduate-level courses. Developing the subject 'from the ground up' it covers classical results as well as major advances of the past decade. Beginning with an extensive overview of classical information theory suitable for the non-expert, the author then turns his attention to quantum mechanics for quantum information theory, and the important protocols of teleportation, super-dense coding and entanglement distribution. He develops all of the tools necessary for understanding important results in quantum information theory, including capacity theorems for classical, entanglement-assisted, private and quantum communication. The book also covers important recent developments such as superadditivity of private, coherent and Holevo information, and the superactivation of quantum capacity. This book will be warmly welcomed by the upcoming generation of quantum information theorists and the already established community of classical information theorists. 
<s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> When elementary quantum systems, such as polarized photons, are used to transmit digital information, the uncertainty principle gives rise to novel cryptographic phenomena unachievable with traditional transmission media, e.g. a communications channel on which it is impossible in principle to eavesdrop without a high probability of disturbing the transmission in such a way as to be detected. Such a quantum channel can be used in conjunction with ordinary insecure classical channels to distribute random key information between two users with the assurance that it remains unknown to anyone else, even when the users share no secret information initially. We also present a protocol for coin-tossing by exchange of quantum messages, which is secure against traditional kinds of cheating, even by an opponent with unlimited computing power, but ironically can be subverted by use of a still subtler quantum phenomenon, the Einstein-Podolsky-Rosen paradox. <s> BIB007
Roots of the theory of quantum information lie in the ideas of Wiesner and Ingarden BIB001 . These authors proposed ways to incorporate the uncertainty, or entropy, of information within the uncertainty inherent in quantum mechanical processes. Uncertainty is a fundamental concept in both physics and the theory of information, hence serving as the natural link between the two disciplines. Uncertainty itself is defined in terms of probability distributions. Every quantum physical object produces a probability distribution when its state is measured. More precisely, a general state of an m-state quantum object is an element of the m-dimensional complex projective Hilbert space CP^m, say v = (v_1, . . . , v_m). Upon measurement with respect to the observable states of the quantum object (which are the elements of an orthogonal basis of CP^m), v will produce the probability distribution (|v_1|^2, . . . , |v_m|^2)/∑_i |v_i|^2 over the observable states. Hence, a notion of quantum entropy or uncertainty may be defined that coincides with the corresponding notion in classical information theory . Quantum information theory was further developed as the theory of quantum computation by Feynman and Deutsch BIB002 . Feynman outlines a primitive version of what is now known as an n-qubit quantum computer, that is, a physical system that emulates any unitary function Q : ⊗_{j=1}^n CP^1 → ⊗_{j=1}^n CP^1, where CP^1 is the two-dimensional complex projective Hilbert space modeling the two-state quantum system, or qubit, in a way that Q can be expressed as a tensor product of unitary functions Q_j (also known as quantum logic gates) that act only on one or two qubits BIB003 BIB005 . Feynman's argument showed that it is possible to simulate two-state computations by Bosonic two-state quantum systems. Quantum computers crossed the engineering and commercialization thresholds in this decade, with the Canadian technology company D-Wave producing and selling a quantum-annealing-based quantum computer, and with established technology industry giants such as IBM, Intel, Google, and Microsoft devoting financial resources to initiate their own quantum computing efforts. More generally, quantum information theory has made great strides starting in the 1980s in the form of quantum communication protocols where, roughly speaking, one studies the transmission of quantum information over channels, and their applications. A milestone of quantum information theory is provably secure quantum key distribution, which uses the uncertainty inherent in quantum mechanics to guarantee security. This idea was first proposed by Charles Bennett and Gilles Brassard in 1984 at a conference in Bengaluru, India, and recently appeared in a journal BIB007 . Several companies, including Toshiba and ID Quantique, offer commercial devices that can be used to set up quantum cryptography protocols. While the literature on quantum information theory is vast, we refer the reader to the book BIB006 for a further survey of the field. In the emerging field of quantum information technology, optimal implementation of quantum information processes will be of fundamental importance. To this end, the classic problem of optimizing a function's value will play a crucial role, with one looking to optimize the functional description of quantum processes, as in BIB004 , for example. Generalizing further, solutions to the problem of simultaneous optimization of two or more functions will be even more crucial given the uncertainty inherent in quantum systems.
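The measurement statistics described above are straightforward to compute explicitly. The following Python sketch (a minimal illustration; the particular state vector is an arbitrary example and not taken from any cited work) normalizes the amplitudes of an m-state quantum object and returns the Born-rule probability distribution over the observable (basis) states.

```python
import numpy as np

def measurement_distribution(v):
    """Born-rule probabilities of observing each basis state of a pure state v."""
    v = np.asarray(v, dtype=complex)
    weights = np.abs(v) ** 2          # |v_i|^2 for each observable state
    return weights / weights.sum()    # normalize so the probabilities sum to 1

# Example: the qubit state (1/sqrt(2))(|0> + |1>) yields the uniform distribution (0.5, 0.5).
qubit = np.array([1, 1]) / np.sqrt(2)
print(measurement_distribution(qubit))
```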
This problem of simultaneously optimizing two or more objective functions forms the essence of non-cooperative game theory, where notional players are introduced, each having one of the said functions as a payoff or objective function. The original single-objective optimization problem can also be studied as a single-player game, also known as a dynamic game. We will give a mathematically formal discussion of non-cooperative games and their quantum mechanical counterparts in sections 2 and 3, followed by a discussion in section 4 on the history of how such quantum games have been viewed and criticized in the literature, both in the context of the optimal implementation of quantum technologies and in the context of the quantum mechanical implementation of non-cooperative games. In section 4.1 we contrast cooperative and non-cooperative games, and in section 4.2 we introduce a new perspective on quantum entanglement as a mechanism for establishing social equilibrium. Section 5 gives an overview of quantum games, quantum algorithms, and communication protocols; section 6 concerns Bell's inequalities and their role in quantum Bayesian games; and sections 7 through 10 concern classical and quantum versions of stochastic and dynamic games. We give the current state of affairs in the experimental realization of quantum games in section 11, followed by section 12, which discusses potential future applications of quantum games.
Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> One may define a concept of an n -person game in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n -tuple of pure strategies, one strategy being taken for each player. For mixed strategies, which are probability distributions over the pure strategies, the pay-off functions are the expectations of the players, thus becoming polylinear forms … <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> Abstract : Kakutani's Fixed Point Theorem states that in Euclidean n-space a closed point to (non-void) convex set map of a convex compact set into itself has a fixed point. Kakutani showed that this implied the minimax theorem for finite games. The object of this note is to point out that Kakutani's theorem may be extended to convex linear topological spaces, and implies the minimax theorem for continuous games with continuous payoff as well as the existence of Nash equilibrium points. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> This article is a complement to an earlier one,1 in which at least two questions have been left in the shadows. Here we shall focus our attention on them. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> Preface 1. Decision-Theoretic Foundations 1.1 Game Theory, Rationality, and Intelligence 1.2 Basic Concepts of Decision Theory 1.3 Axioms 1.4 The Expected-Utility Maximization Theorem 1.5 Equivalent Representations 1.6 Bayesian Conditional-Probability Systems 1.7 Limitations of the Bayesian Model 1.8 Domination 1.9 Proofs of the Domination Theorems Exercises 2. Basic Models 2.1 Games in Extensive Form 2.2 Strategic Form and the Normal Representation 2.3 Equivalence of Strategic-Form Games 2.4 Reduced Normal Representations 2.5 Elimination of Dominated Strategies 2.6 Multiagent Representations 2.7 Common Knowledge 2.8 Bayesian Games 2.9 Modeling Games with Incomplete Information Exercises 3. Equilibria of Strategic-Form Games 3.1 Domination and Ratonalizability 3.2 Nash Equilibrium 3.3 Computing Nash Equilibria 3.4 Significance of Nash Equilibria 3.5 The Focal-Point Effect 3.6 The Decision-Analytic Approach to Games 3.7 Evolution. Resistance. and Risk Dominance 3.8 Two-Person Zero-Sum Games 3.9 Bayesian Equilibria 3.10 Purification of Randomized Strategies in Equilibria 3.11 Auctions 3.12 Proof of Existence of Equilibrium 3.13 Infinite Strategy Sets Exercises 4. Sequential Equilibria of Extensive-Form Games 4.1 Mixed Strategies and Behavioral Strategies 4.2 Equilibria in Behavioral Strategies 4.3 Sequential Rationality at Information States with Positive Probability 4.4 Consistent Beliefs and Sequential Rationality at All Information States 4.5 Computing Sequential Equilibria 4.6 Subgame-Perfect Equilibria 4.7 Games with Perfect Information 4.8 Adding Chance Events with Small Probability 4.9 Forward Induction 4.10 Voting and Binary Agendas 4.11 Technical Proofs Exercises 5. Refinements of Equilibrium in Strategic Form 5.1 Introduction 5.2 Perfect Equilibria 5.3 Existence of Perfect and Sequential Equilibria 5.4 Proper Equilibria 5.5 Persistent Equilibria 5.6 Stable Sets 01 Equilibria 5.7 Generic Properties 5.8 Conclusions Exercises 6. 
Games with Communication 6.1 Contracts and Correlated Strategies 6.2 Correlated Equilibria 6.3 Bayesian Games with Communication 6.4 Bayesian Collective-Choice Problems and Bayesian Bargaining Problems 6.5 Trading Problems with Linear Utility 6.6 General Participation Constraints for Bayesian Games with Contracts 6.7 Sender-Receiver Games 6.8 Acceptable and Predominant Correlated Equilibria 6.9 Communication in Extensive-Form and Multistage Games Exercises Bibliographic Note 7. Repeated Games 7.1 The Repeated Prisoners Dilemma 7.2 A General Model of Repeated Garnet 7.3 Stationary Equilibria of Repeated Games with Complete State Information and Discounting 7.4 Repeated Games with Standard Information: Examples 7.5 General Feasibility Theorems for Standard Repeated Games 7.6 Finitely Repeated Games and the Role of Initial Doubt 7.7 Imperfect Observability of Moves 7.8 Repeated Wines in Large Decentralized Groups 7.9 Repeated Games with Incomplete Information 7.10 Continuous Time 7.11 Evolutionary Simulation of Repeated Games Exercises 8. Bargaining and Cooperation in Two-Person Games 8.1 Noncooperative Foundations of Cooperative Game Theory 8.2 Two-Person Bargaining Problems and the Nash Bargaining Solution 8.3 Interpersonal Comparisons of Weighted Utility 8.4 Transferable Utility 8.5 Rational Threats 8.6 Other Bargaining Solutions 8.7 An Alternating-Offer Bargaining Game 8.8 An Alternating-Offer Game with Incomplete Information 8.9 A Discrete Alternating-Offer Game 8.10 Renegotiation Exercises 9. Coalitions in Cooperative Games 9.1 Introduction to Coalitional Analysis 9.2 Characteristic Functions with Transferable Utility 9.3 The Core 9.4 The Shapkey Value 9.5 Values with Cooperation Structures 9.6 Other Solution Concepts 9.7 Colational Games with Nontransferable Utility 9.8 Cores without Transferable Utility 9.9 Values without Transferable Utility Exercises Bibliographic Note 10. Cooperation under Uncertainty 10.1 Introduction 10.2 Concepts of Efficiency 10.3 An Example 10.4 Ex Post Inefficiency and Subsequent Oilers 10.5 Computing Incentive-Efficient Mechanisms 10.6 Inscrutability and Durability 10.7 Mechanism Selection by an Informed Principal 10.8 Neutral Bargaining Solutions 10.9 Dynamic Matching Processes with Incomplete Information Exercises Bibliography Index <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> Ken Binmore's previous game theory textbook, Fun and Games (D.C. Heath, 1991), carved out a significant niche in the advanced undergraduate market; it was intellectually serious and more up-to-date than its competitors, but also accessibly written. Its central thesis was that game theory allows us to understand many kinds of interactions between people, a point that Binmore amply demonstrated through a rich range of examples and applications. This replacement for the now out-of-date 1991 textbook retains the entertaining examples, but changes the organization to match how game theory courses are actually taught, making Playing for Real a more versatile text that almost all possible course designs will find easier to use, with less jumping about than before. In addition, the problem sections, already used as a reference by many teachers, have become even more clever and varied, without becoming too technical. Playing for Real will sell into advanced undergraduate courses in game theory, primarily those in economics, but also courses in the social sciences, and serve as a reference for economists. <s> BIB005
Non-cooperative game theory is the mathematical foundation of making optimal decisions in competitive situations based on available information. The written philosophical foundations of game theory trace back to at least the great works of Sun Tzu (The Art of War), circa 500 BCE in China, and Chanakya (Arthashastra), circa 250 BCE in India. Sun Tzu captures the essence of game-theoretic thinking in the following (translated) lines from The Art of War: Knowing the other and knowing oneself, In one hundred battles no danger; Not knowing the other and knowing oneself, One victory for one loss; Not knowing the other and not knowing oneself, In every battle certain defeat (Denma translation). In short, each competitor, or player, in the competitive situation, or game, should know the preferences of each player over the outcomes of the game, and knowing this information is sufficient for each player to make optimal decisions or strategic choices. The word "optimal" requires further elaboration. In non-cooperative game theory, there are two ways to use it. The first is via the notion of Nash equilibrium, proposed by Nobel Laureate John Nash BIB001 , where each player's strategic choice, given the strategic choices of all the other players, produces an outcome of the game that maximizes the player's preferences over the outcomes. In other words, unilateral deviation by the player to another strategic choice will produce an outcome which is less preferable to the player. Further yet, one can say that each player's strategic choice is a best response to every other. The second way the word "optimal" is used in game theory is via the notion of Pareto-optimality, where the strategic choices made by the players produce an outcome of the game that maximizes the preferences of every player. In other words, a unilateral deviation by any one player to some other strategic choice will produce an outcome which is less preferred by some player. If the adversely affected player is also the one who unilaterally deviated, then the Pareto-optimal outcome is also a Nash equilibrium. Note that a Nash equilibrium is a more likely outcome in a non-cooperative game than a Pareto-optimal one, in the sense that on average, or in repeated games, players' strategy choices will tend toward the Nash equilibrium. Formalizing, we say that an n-player, non-cooperative game in normal form is a function

Γ : ∏_{i=1}^n S_i → O,    (5)

with the additional feature of the notion of non-identical preferences over the elements of the set of outcomes O, for every "player" of the game. The preferences are a pre-ordering ⪯ of the elements of O, that is, for l, m, n ∈ O,

m ⪯ m, and (l ⪯ m and m ⪯ n) =⇒ l ⪯ n,

where the symbol ⪯ denotes "of less or equal preference". Preferences are typically quantified numerically for the ease of calculation of the payoffs. To this end, functions Γ_i are introduced which act as the payoff function for each player i and typically map elements of O into the real numbers in a way that preserves the preferences of the players. That is, ⪯ is replaced with ≤ when analyzing the payoffs. The factor S_i in the domain of Γ is said to be the strategy set of player i, and a play of Γ is an n-tuple of strategies, one per player, producing a payoff to each player in terms of his preferences over the elements of O in the image of Γ. A Nash equilibrium is a play of Γ in which every player employs a strategy that is a best reply, with respect to his preferences over the outcomes, to the strategic choice of every other player.
In other words, unilateral deviation from a Nash equilibrium by any one player in the form of a different choice of strategy will produce an outcome which is less preferred by that player than before. Following Nash, we say that a play p′ of Γ counters another play p if Γ_i(p′) ≥ Γ_i(p) for all players i, and that a self-countering play is a (Nash) equilibrium. Let C_p denote the set of all the plays of Γ that counter p. Denote ∏_{i=1}^n S_i by S for notational convenience, and note that C_p ⊂ S and therefore C_p ∈ 2^S. Further note that the game Γ can be factored as

Γ = E ∘ Γ_C,    (7)

where to any play p the map Γ_C associates its countering set C_p via the payoff functions Γ_i. The set-valued map Γ_C may be viewed as a pre-processing stage where players seek out a self-countering play, and if one is found, it is mapped to its corresponding outcome in O by the function E. The condition for the existence of a self-countering play, and therefore of a Nash equilibrium, is that Γ_C have a fixed point, that is, an element p* ∈ S such that p* ∈ C_{p*}. In a general set-theoretic setting for non-cooperative games, the map Γ_C may not have a fixed point. Hence, not all non-cooperative games will have a Nash equilibrium. However, according to Nash's theorem, when the S_i are finite and the game is extended to its mixed version, that is, the version in which randomization via probability distributions is allowed over the elements of all the S_i, as well as over the elements of O, then Γ_C has at least one fixed point and therefore at least one Nash equilibrium. Formally, given a game Γ with finite S_i for all i, its mixed version is the product function

Λ : ∏_{i=1}^n ∆(S_i) → ∆(O),

where ∆(S_i) is the set of probability distributions over the i-th player's strategy set S_i, and the set ∆(O) is the set of probability distributions over the outcomes O. Payoffs are now calculated as expected payoffs, that is, weighted averages of the values of Γ_i, for each player i, with respect to probability distributions in ∆(O) that arise as the product of the plays of Λ. Denote the expected payoff to player i by the function Λ_i. Also, note that Λ restricts to Γ. In such n-player games, at least one Nash equilibrium play is guaranteed to exist as a fixed point of the countering map Λ_C via Kakutani's fixed-point theorem. Kakutani's fixed-point theorem: Let S ⊂ R^n be nonempty, bounded, closed, and convex, and let F : S → 2^S be an upper semi-continuous set-valued mapping such that F(s) is non-empty, closed, and convex for all s ∈ S. Then there exists some s* ∈ S such that s* ∈ F(s*). To see this, make S = ∏_{i=1}^n ∆(S_i). Then S ⊂ R^n, and S is non-empty, bounded, and closed because it is a finite product of probability simplices over finite non-empty sets. The set S is also convex because it is the convex hull of the elements of a finite set. Next, let C_p be the set of all plays of Λ that counter the play p. Then C_p is nonempty, closed, and convex. Further, C_p ⊂ S and therefore C_p ∈ 2^S. Since Λ is a game, it factors according to (7), where the map Λ_C associates a play to its countering set via the payoff functions Λ_i. Since the Λ_i are all continuous, Λ_C is continuous. Further, Λ_C(s) is non-empty, closed, and convex for all s ∈ S (we will establish the convexity of Λ_C(s) below; the remaining conditions are also straightforward to establish). Hence, Kakutani's theorem applies and there exists an s* ∈ S that counters itself, that is, s* ∈ Λ_C(s*), and is therefore a Nash equilibrium.
The function E_Π simply maps s* to ∆(O) as the product probability distribution from which the Nash equilibrium expected payoff is computed for each player. The convexity of Λ_C(s) = C_p is straightforward to show. Let q, r ∈ C_p. Then

Λ_i(q) ≥ Λ_i(p) and Λ_i(r) ≥ Λ_i(p)    (10)

for all i. Now let 0 ≤ µ ≤ 1 and consider the convex combination µq + (1 − µ)r, which we will show to be in C_p. First note that µq + (1 − µ)r ∈ S because S is the product of the convex sets ∆(S_i). Next, since the Λ_i are all linear, and because of the inequalities in (10) and the restrictions on the values of µ,

Λ_i(µq + (1 − µ)r) = µΛ_i(q) + (1 − µ)Λ_i(r) ≥ µΛ_i(p) + (1 − µ)Λ_i(p) = Λ_i(p),

whereby µq + (1 − µ)r ∈ C_p and C_p is convex. Going back to the game Γ in (5) defined in the general set-theoretic setting, certainly Kakutani's theorem would apply to Γ if the conditions are right, such as when the image set of Γ is pre-ordered and Γ is linear. Kakutani's fixed-point theorem can be generalized to include subsets S of convex topological vector spaces, as was done by Glicksberg in BIB002 . The following is a paraphrased but equivalent statement of Glicksberg's fixed-point theorem (the term "linear space" in the original statement of Glicksberg's theorem is equivalent to the term vector space): Glicksberg's fixed-point theorem: Let H be a nonempty, compact, convex subset of a convex Hausdorff topological vector space and let Φ : H → 2^H be an upper semi-continuous set-valued mapping such that Φ(h) is non-empty and convex for all h ∈ H. Then there exists some h* ∈ H such that h* ∈ Φ(h*). Using Glicksberg's fixed-point theorem, one can show that Nash equilibrium exists in games where the strategy sets are infinite, or possibly even uncountably infinite. Non-cooperative game theory has been an immensely successful mathematical model for studying scientific and social phenomena. In particular, it has offered key insights into equilibrium and optimal behavior in economics, evolutionary biology, and politics. As with any established subject, game theory has a vast literature available; we refer the reader to BIB005 BIB004 . Given the successful interface of non-cooperative game theory with several other subjects, it is no wonder that physicists have explored the possibility of using game theory to model physical processes as games and study their equilibrium behaviors. The first paper that the authors are aware of in which aspects of quantum physics, wave mechanics in particular, were viewed as games is BIB003 .
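To make the notions of countering plays and best replies concrete, the following Python sketch (an illustrative example of the standard textbook computation, not code from any cited reference) checks every pure-strategy profile of a two-player bimatrix game for the self-countering property, that is, for being a pure-strategy Nash equilibrium; the payoff matrices are those of the usual Prisoner's Dilemma with strategies Cooperate and Defect.

```python
import numpy as np
from itertools import product

# Row player's and column player's payoffs; index 0 = Cooperate, 1 = Defect.
A = np.array([[3, 0],
              [5, 1]])
B = A.T  # Prisoner's Dilemma is symmetric.

def pure_nash_equilibria(A, B):
    """Return all pure-strategy profiles from which no player gains by unilateral deviation."""
    equilibria = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        row_best = A[i, j] >= A[:, j].max()   # row player cannot do better against column j
        col_best = B[i, j] >= B[i, :].max()   # column player cannot do better against row i
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection is the unique Nash equilibrium
```

In this example, the profile (0, 0) (mutual cooperation) is Pareto-optimal but fails the self-countering test, illustrating the tension between the two notions of optimality described above.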
Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative quantum games <s> We investigate the quantization of non-zero sum games. For the particular case of the Prisoners' Dilemma we show that this game ceases to pose a dilemma if quantum strategies are allowed for. We also construct a particular quantum strategy which always gives reward if played against any classical strategy. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative quantum games <s> Poker has become a popular pastime all over the world. At any given moment one can find tens, if not hundreds, of thousands of players playing poker via their computers on the major on-line gaming sites. Indeed, according to the Vancouver, B.C. based pokerpulse.com estimates, more than 190 million US dollars daily is bet in on-line poker rooms. But communication and computation are changing as the relentless application of Moore's Law brings computation and information into the quantum realm. The quantum theory of games concerns the behavior of classical games when played in the coming quantum computing environment or when played with quantum information. In almost all cases, the"quantized"versions of these games afford many new strategic options to the players. The study of so-called quantum games is quite new, arising from a seminal paper of D. Meyer \cite{Meyer} published in Physics Review Letters in 1999. The ensuing near decade has seen an explosion of contributions and controversy over what exactly a quantized game really is and if there is indeed anything new for game theory. With the settling of some of these controversies \cite{Bleiler}, it is now possible to fully analyze some of the basic endgame models from the game theory of Poker and predict with confidence just how the optimal play of Poker will change when played in the coming quantum computation environment. The analysis here shows that for certain players,"entangled"poker will allow results that outperform those available to players"in real life". <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative quantum games <s> Two qubit quantum computations are viewed as two player, strictly competitive games and a game-theoretic measure of optimality of these computations is developed. To this end, the geometry of Hilbert space of quantum computations is used to establish the equivalence of game-theoretic solution concepts of Nash equilibrium and mini-max outcomes in games of this type, and quantum mechanisms are designed for realizing these mini-max outcomes. <s> BIB003
A formal merger of non-cooperative game theory and quantum computing was initiated by D. Meyer, who was motivated to study efficient quantum algorithms and, to this end, proposed a game-theoretic model for quantum algorithms.

Figure 1: A) The Penny Flip game in normal form, with N and F representing the players' strategic choices of No-Flip and Flip, respectively. B) The quantum circuit for this game, with P1 and P2 being quantum strategies of Player 1, and Q being the quantum strategy of Player 2, which, when chosen to be the Hadamard gate H, allows Player 2 to always win the game when the input qubit state is either |0⟩ or |1⟩.

His focus of study was the situation where, in a particular two-player game, one of the players had access to quantum strategies. Meyer in fact did not introduce the term "quantum game" in his work; rather, this was done by another group of authors whose work will be discussed shortly. Meyer defined a quantum strategy to be a single-qubit logic gate in the quantum computation for which the game model was constructed. The particular game he considers is the Penny Flip game of Figure 1A), which he then realizes as the single-qubit quantum circuit of Figure 1B), in which the first player employs P1 and P2 from a restricted set of quantum operations, whereas Player 2 is allowed to employ, in particular, the Hadamard operation on either the qubit state |0⟩ or |1⟩. When the quantum circuit is played out with respect to the computational basis for the Hilbert space, one sees that Player 2 always wins the game. A similar two-player game model was applied to quantum algorithms such as Simon's and Grover's algorithms. Meyer showed that in this setting, the player with access to a proper quantum strategy (and not simply a classical one residing inside a quantum one) would always win this game. He further showed that if both players had access to proper quantum strategies, then in a strictly competitive or zero-sum game (where the preferences of the players over the outcomes are diametrically opposite), a Nash equilibrium need not exist. However, in the case where players are allowed to choose their quantum strategies with respect to a probability distribution, that is, employ mixed quantum strategies, Meyer used Glicksberg's fixed-point theorem to show that in this situation a Nash equilibrium would always exist. Meyer's work also provides a way to study equilibrium behavior of quantum computational mechanisms. The term "quantum game" appears to have been first used by Eisert, Wilkens, and Lewenstein in their paper BIB001 , which was published soon after Meyer's work. These authors were interested in, as they put it, "... the quantization of non-zero sum games". At face value, this expression can create controversy (and it has), since quantization is a physical process whereas a game is primarily an abstract concept. However, Chess, Poker, and Football are examples of games that can be implemented physically. It becomes clear upon reading the paper that the authors' goal is to investigate the consequences of a non-cooperative game implemented quantum physically. More accurately, Eisert et al. give a quantum computational implementation of Prisoner's Dilemma. This implementation is reproduced in Figure 2. Eisert et al. show that in their quantum computational implementation of the non-strictly competitive game of Prisoner's Dilemma, when followed by quantum measurement, players can achieve a Nash equilibrium that is also Pareto-optimal.
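Returning for a moment to Meyer's Penny Flip game, the advantage of the Hadamard strategy can be verified with a few lines of linear algebra. The sketch below follows the standard presentation of Meyer's game, in which Player 2 applies the Hadamard both before and after Player 1's classical flip/no-flip move; the |0⟩ = "heads" convention and the move ordering are assumptions of this illustration, not a transcription of Figure 1B.

```python
import numpy as np

I = np.eye(2)                                   # No-Flip
X = np.array([[0, 1], [1, 0]])                  # Flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard, Player 2's quantum strategy

ket0 = np.array([1, 0])                         # "heads"

for name, P in [("No-Flip", I), ("Flip", X)]:
    final = H @ P @ H @ ket0                    # Player 2, then Player 1, then Player 2 again
    prob_heads = abs(final[0]) ** 2
    print(f"Player 1 plays {name}: probability of heads = {prob_heads:.1f}")
# Both cases give probability 1.0, so Player 2 wins regardless of Player 1's classical choice.
```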
One should view the "EWL quantization protocol" for Prisoner's Dilemma as an extension of the original game to include higher-order randomization via quantum superposition and entanglement followed by measurement BIB002 , similar to the way game theorists have traditionally extended (or physically implemented) a game to include randomization via probability distributions. And indeed, Eisert et al. ensure that their quantum game restricts to the original Prisoner's Dilemma. Inspired by the EWL quantization protocol, Marinatto and Weber proposed the "MW quantization protocol" . As Figure 3 shows, the MW protocol differs from the EWL protocol only in the absence of the dis-entangling gate. Whereas Meyer's seminal work laid down the mathematical foundation of quantum games via its Nash equilibrium result using a fixed-point theorem, the protocols of Eisert et al. and Marinatto and Weber have been the dominant schemes for the quantization of games. But before discussing the impact of these works on the subject of quantum game theory, it is pertinent to introduce a mathematically formal definition of a non-cooperative quantum game in normal form that is consistent with these authors' perspectives. An n-player quantum game in normal form arises from (5) when one introduces quantum physically relevant restrictions. We define a pure strategy quantum game (in normal form) to be any unitary function

Q : ⊗_{i=1}^n CP^{d_i} → ⊗_{i=1}^n CP^{d_i},    (12)

where CP^{d_i} is the d_i-dimensional complex projective Hilbert space of pure quantum states that constitutes player i's pure quantum strategies, as well as the set of outcomes, with a notion of non-identical preferences defined over its elements, one per player BIB003 . Figure 4 captures this definition as a quantum circuit diagram.
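The EWL circuit itself can be simulated with a few lines of linear algebra. The sketch below is an illustrative reconstruction under the maximally entangling choice of the EWL gate and the commonly used Prisoner's Dilemma payoffs (3, 0, 5, 1); it computes the expected payoffs for the classical strategies C and D together with the additional strategy Q = diag(i, −i), and exhibits the Pareto-optimal Nash equilibrium at (Q, Q) within this restricted strategy set.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

# Payoffs of the underlying Prisoner's Dilemma, indexed by (Alice's bit, Bob's bit),
# with bit 0 = Cooperate and bit 1 = Defect.
payoff_A = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
payoff_B = {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1}

C = np.eye(2)                                   # Cooperate
D = np.array([[0, 1], [-1, 0]])                 # Defect
Q = np.diag([1j, -1j])                          # EWL's additional "quantum" strategy
strategies = {"C": C, "D": D, "Q": Q}

gamma = np.pi / 2                               # maximal entanglement
J = expm(-1j * gamma / 2 * np.kron(D, D))       # EWL entangling gate J = exp(-i*gamma*D(x)D/2)
ket00 = np.zeros(4); ket00[0] = 1

def expected_payoffs(UA, UB):
    """Expected payoffs of the EWL circuit J† (UA ⊗ UB) J |00>."""
    psi = J.conj().T @ np.kron(UA, UB) @ J @ ket00
    probs = np.abs(psi) ** 2                     # measurement in the computational basis
    pa = sum(probs[2 * a + b] * payoff_A[(a, b)] for a, b in product(range(2), range(2)))
    pb = sum(probs[2 * a + b] * payoff_B[(a, b)] for a, b in product(range(2), range(2)))
    return round(float(pa), 6), round(float(pb), 6)

for a, b in product(strategies, strategies):
    print(a, b, expected_payoffs(strategies[a], strategies[b]))
# (D, D) gives (1, 1) as in the classical game, while (Q, Q) gives (3, 3); moreover,
# no unilateral deviation from (Q, Q) to C or D improves the deviating player's payoff.
```

Restricting both players to C and D reproduces the classical payoff table, which is the sense in which the protocol restricts to the original game.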
Quantum games: a review of the history, current state, and interpretation <s> Entangling gate <s> An algorithm is developed to compute the complete CS decomposition (CSD) of a partitioned unitary matrix. Although the existence of the CSD has been recognized since 1977, prior algorithms compute only a reduced version (the 2-by-1 CSD) that is equivalent to two simultaneous singular value decompositions. The algorithm presented here computes the complete 2-by-2 CSD, which requires the simultaneous diagonalization of all four blocks of a unitary matrix partitioned into a 2-by-2 block structure. The algorithm appears to be the only fully specified algorithm available. The computation occurs in two phases. In the first phase, the unitary matrix is reduced to bidiagonal block form, as described by Sutton and Edelman. In the second phase, the blocks are simultaneously diagonalized using techniques from bidiagonal SVD algorithms of Golub, Kahan, and Demmel. The algorithm has a number of desirable numerical features. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Entangling gate <s> With respect to probabilistic mixtures of the strategies in non-cooperative games, quantum game theory provides guarantee of fixed-point stability, the so-called Nash equilibrium. This permits players to choose mixed quantum strategies that prepare mixed quantum states optimally under constraints. We show here that fixed-point stability of Nash equilibrium can also be guaranteed for pure quantum strategies via an application of the Nash embedding theorem, permitting players to prepare pure quantum states optimally under constraints. <s> BIB002
Figure 3: Quantum circuit for the MW quantization scheme, in which each player applies a quantum strategy to his own qubit. Although similar to the EWL scheme, this scheme does not restrict to the original classical game due to the lack of the dis-entangling operation before quantum measurement. This is because the classical game is encoded into the Hilbert space as the elements of the computational basis.

A mixed quantum game would then be any function

R : ∏_{i=1}^n ∆(CP^{d_i}) → ∆(⊗_{i=1}^n CP^{d_i}),    (13)

where ∆ represents the set of probability distributions over the argument. Our definition of a quantum game in both (12) and (13) is consistent with Meyer's perspective in the sense that it allows one to perform constrained optimization of a quantum mechanism by defining payoff functions before measurement, and it is consistent with the EWL perspective if one defines the payoff functions after measurement as expected values. As mentioned earlier, Meyer used Glicksberg's fixed-point theorem to establish the guarantee of Nash equilibrium in the mixed quantum game R. This is not surprising given that probabilistic mixtures form a convex structure, which is an essential ingredient for fixed-point theorems to hold on "flat" manifolds such as R^n. However, it was only very recently that two of the current authors showed that Nash equilibrium via a fixed-point theorem can also be guaranteed in the pure quantum game Q BIB002 . These authors used the Riemannian manifold structure of CP^n to invoke John Nash's other, mathematically more popular theorem, known as the Nash embedding theorem: Every compact Riemannian manifold can be (isometrically) embedded into R^m for a sufficiently large m. The Nash embedding theorem tells us that CP^n is diffeomorphic to its image under a length-preserving map into R^m. With suitable considerations in place, it follows that Kakutani's theorem applies to the image of CP^n in R^m. Now, tracing the diffeomorphism back to CP^n guarantees the existence of Nash equilibrium in the pure quantum game Q. Another key insight established in BIB002 is that, just as in classical games, linearity of the payoff functions is a fundamental requirement for guaranteeing the existence of Nash equilibrium in pure quantum games. Hence, quantizations of games such as the EWL protocol, in which the payoff is the expected value computed after quantum measurement, cannot guarantee Nash equilibrium. On the other hand, the problem of pure state preparation, when viewed as a quantum game with the overlap (measured by the inner product) of quantum states as the payoff function, does guarantee Nash equilibrium.

Figure 4: A) An n-player quantum game Q as an n-qubit quantum logic gate, with the provision that preferences are defined over the elements of the computational basis of the Hilbert space, one per player. B) An example of playing the quantum game Q (equivalently, implementing the quantum logic gate) as a quantum circuit comprised of only one-qubit logic gates (strategies) and two-qubit logic gates (quantum mediated communication) using matrix decomposition techniques such as the cosine-sine decomposition BIB001 .
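As a small illustration of such an overlap payoff, the sketch below (with arbitrarily chosen single-qubit states; it is not code from BIB002 ) evaluates the squared inner product between a prepared state and a player's preferred target state.

```python
import numpy as np

def overlap_payoff(prepared, target):
    """Payoff as the squared inner product |<target|prepared>|^2 (state overlap)."""
    return abs(np.vdot(target, prepared)) ** 2

# Example: the prepared state |+> scored against two players' preferred targets |0> and |1>.
plus = np.array([1, 1]) / np.sqrt(2)
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
print(overlap_payoff(plus, ket0), overlap_payoff(plus, ket1))   # 0.5 and 0.5
```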
Quantum games: a review of the history, current state, and interpretation <s> Criticism of quantum games -a discussion <s> We consider two aspects of quantum game theory: the extent to which the quantum solution solves the original classical game, and to what extent the new solution can be obtained in a classical model. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Criticism of quantum games -a discussion <s> Poker has become a popular pastime all over the world. At any given moment one can find tens, if not hundreds, of thousands of players playing poker via their computers on the major on-line gaming sites. Indeed, according to the Vancouver, B.C. based pokerpulse.com estimates, more than 190 million US dollars daily is bet in on-line poker rooms. But communication and computation are changing as the relentless application of Moore's Law brings computation and information into the quantum realm. The quantum theory of games concerns the behavior of classical games when played in the coming quantum computing environment or when played with quantum information. In almost all cases, the"quantized"versions of these games afford many new strategic options to the players. The study of so-called quantum games is quite new, arising from a seminal paper of D. Meyer \cite{Meyer} published in Physics Review Letters in 1999. The ensuing near decade has seen an explosion of contributions and controversy over what exactly a quantized game really is and if there is indeed anything new for game theory. With the settling of some of these controversies \cite{Bleiler}, it is now possible to fully analyze some of the basic endgame models from the game theory of Poker and predict with confidence just how the optimal play of Poker will change when played in the coming quantum computation environment. The analysis here shows that for certain players,"entangled"poker will allow results that outperform those available to players"in real life". <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Criticism of quantum games -a discussion <s> In the time since a merger of quantum mechanics and game theory was proposed formally in 1999, the two distinct perspectives apparent in this merger of applying quantum mechanics to game theory, referred to henceforth as the theory of"quantized games", and of applying game theory to quantum mechanics, referred to henceforth as"gaming the quantum", have become synonymous under the single ill-defined term"quantum game". Here, these two perspectives are delineated and a game-theoretically proper description of what makes a multi-player, non-cooperative game quantum mechanical, is given. Within the context of this description, finding a Nash equilibrium in a strictly competitive quantum game is shown to be equivalent to finding a solution to a simultaneous best approximation problem in the state space of quantum objects, thus setting up a framework for a game theory inspired study of"equilibrium"behavior of quantum physical systems such as those utilized in quantum information processing and computation. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Criticism of quantum games -a discussion <s> The Marinatto-Weber approach to quantum game is a straightforward way to apply the power of quantum mechanics to classical game theory. 
In the simplest case, the quantum scheme is that players manipulate their own qubits of a two-qubit state either with the identity operator or the Pauli operator $\sigma_{x}$. However, such a simplification of the scheme raises doubt as to whether it could really reflect a quantum game. In this paper we put forward examples which may constitute arguments against the present form of the Marinatto-Weber scheme. Next, we modify the scheme to eliminate the undesirable properties of the protocol by extending the players' strategy sets. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Criticism of quantum games -a discussion <s> Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In the existing studies, many typical game models, such as the prisoner's dilemma, battle of the sexes, Hawk-Dove game, have been extensively explored by using quantization approach. Along a similar method, here several game models of opinion formations will be quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme of converting classical games to quantum versions. Our results show that the quantization can fascinatingly change the properties of some classical opinion formation game models so as to generate win-win outcomes. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Criticism of quantum games -a discussion <s> We point out a flaw in the unfair case of the quantum Prisoner's Dilemma as introduced in the pioneering Letter "Quantum Games and Quantum Strategies" of Eisert, Wilkens and Lewenstein. It is not true that the so-called miracle move therein always gives quantum Alice a large reward against classical Bob and outperforms tit-for-tat in an iterated game. Indeed, we introduce a new classical strategy that becomes Bob's dominant strategy, should Alice play the miracle move. Finally, we briefly survey the subsequent literature and turn to the 3-parameter strategic space instead of the 2-parameter one of Eisert et al. <s> BIB006
Criticism of quantum games has historically focused on the EWL quantization protocol, and we will discuss this criticism in detail below. On the other hand, we conjecture that the reason Meyer's work has not seen much criticism is its mathematically formal foundation, and we also note that the MW quantization protocol has not been subjected to the same level of scrutiny as the EWL protocol vis-à-vis quantum physical implementation of games. This is remarkable because the MW protocol does not restrict to the original classical two-player game (Prisoner's Dilemma, for example) coded into the Hilbert space via identification with the elements of a fixed basis. Therefore, the MW protocol holds little game-theoretic meaning! Frackiewicz has produced an adaptation of the MW protocol in BIB004 which attempts to rectify this protocol's deficiencies. Nonetheless, the original, game-theoretically questionable version of MW still appears in quantum game theory papers, for example BIB005 . Although the MW quantization protocol lacks game-theoretic substance in its original form, it is amenable to interpretation as an example of applying non-cooperative game theory to quantum mechanics, or "gaming the quantum" BIB003 . In this interpretation, the MW protocol represents a game-theoretic approach to designing quantum computational mechanisms which exhibit optimal behavior under multiple constraints. This makes the MW protocol more akin to Meyer's approach of using game theory to gain insights into quantum algorithms for quantum computation and communication. We will discuss this idea further in section 5. The remainder of this section is devoted to a discussion of the EWL quantization protocol and its criticism. In BIB001 , van Enk et al. state that the output of the EWL protocol for a specific and finite set of quantum strategies, after measurement, produces a function that is an extension of Prisoner's Dilemma but is entirely non-quantum mechanical. These authors argue that since this post-measurement function emulates the results of the EWL quantization protocol, the quantum nature of the latter is redundant. However, if this criticism is taken seriously, then extensions of Prisoner's Dilemma via probability distributions can also be restricted to specific, finite mixed-strategy sets to produce a larger game that is entirely non-probabilistic and which has a different structure than the original game! The source of this criticism appears to be a confusion between descriptive and prescriptive interpretations of game theory. The mixed game should not be understood as a description of a game that utilizes piece-wise larger, non-probabilistic games. Rather, the reasoning behind extending to a mixed game is prescriptive, allowing one to design a mechanism that identifies probability distributions over the players' strategies, which, when mapped to probability distributions over the outcomes of the game via the product function, produce an expected outcome of the game as a Nash equilibrium. From this point of view, the EWL protocol is a perfectly valid higher-order mechanism for extending Prisoner's Dilemma. Another criticism by van Enk et al. of the EWL quantization protocol is that it does not preserve the non-cooperative nature of Prisoner's Dilemma due to the presence of correlations generated by quantum entanglement. Eisert et al. have argued that entanglement can be viewed as an honest referee who communicates to the players on their behalf. But van Enk et al.
insist that introducing communication between the players "...blurs the contrast between cooperative and non-cooperative games". This is true, but classical game theory also has a long and successful history of blurring this distinction through the use of mediated communication. Bringing an honest referee into a game is just another form of game extension, known as mediated communication, which, to be fair, can easily be mistaken for a form of cooperation. In fact, however, such games are non-cooperative, and Nash equilibrium still holds as the suitable solution concept. It is only when one tries to relate the Nash equilibrium in the extended game (with mediated communication) to a notion of equilibrium in the original game that the broader notion of correlated equilibrium arises. The motivation for introducing mediated communication in games comes from the desire to realize probability distributions over the outcomes of a game which are not in the image of the mixed game. From this perspective, the EWL protocol could be interpreted as a higher-order extension of Prisoner's Dilemma to include quantum mediated communication. An excellent, mathematically formal explanation of the latter interpretation can be found in BIB002 . Finally, in BIB006 , Benjamin et al. argue that the Nash equilibrium in the EWL protocol, while game-theoretically correct, is of limited quantum physical interest. These authors proceed to show that when a naturally more general and quantum physically more meaningful implementation of EWL is considered, the quantum Prisoner's Dilemma has no Nash equilibrium! However, once randomization via probability distributions is introduced into their quantum Prisoner's Dilemma, a Nash equilibrium that is near-optimal, but still better paying than the one available in the classical game, materializes again, in line with the Glicksberg-Meyer theorem. This criticism was in fact addressed by Eisert and Wilkens in a follow-up publication . Benjamin et al. give a discrete set of strategies that could be employed in a classical set-up of the game that gives the same solution to the dilemma. Eisert et al.'s strategy set is then just the continuous analogue of this discrete set. One may contextualize the criticism of quantum computational implementations of games using the more formal language of computational complexity theory as follows. The class BQP is that of problems that can be efficiently solved on a quantum computer, and P is the class that can be solved efficiently on a classical computer. It is known that P is contained in BQP; whether the containment is strict, that is, whether there exist problems that quantum computers can solve efficiently but classical computers cannot, is widely believed but not proven. Candidate examples include quantum simulation and solving systems of linear equations, while quantum search provides a provable query-complexity speedup in the oracle setting. However, it is currently unknown how BQP compares to BPP, the class of problems that can be solved efficiently, with bounded error, on a probabilistic classical computer. The latter ambiguity calls into question whether efficient classical methods may exist for some quantum algorithms, such as Shor's famous factoring algorithm. In particular, criticism of the EWL protocol may now be succinctly phrased as follows: it has been shown previously by Eisert et al. that some quantum games, such as Prisoner's Dilemma, may provide better-paying Nash equilibria when entanglement alone is added to the players' strategies.
However, van Enk and Pike have noted that permitting classical advice within those same games can recover similar Nash equilibria. This raises the question as to whether the power of quantum computational protocols for games using entanglement is captured completely by classical protocols using advice. This open question is akin to the question in quantum computing as to whether BQP is contained in BPP. We end this section by noting that the EWL protocol is just one of many possible quantization protocols that may be constructed via quantum circuit synthesis methods, such as the one envisioned in Figure 4. The consequences of these other quantum computational implementations of non-cooperative games used in economics, evolutionary biology, and any other subject where game theory is applicable appear to be largely unexplored.
Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> Abstract : Kakutani's Fixed Point Theorem states that in Euclidean n-space a closed point to (non-void) convex set map of a convex compact set into itself has a fixed point. Kakutani showed that this implied the minimax theorem for finite games. The object of this note is to point out that Kakutani's theorem may be extended to convex linear topological spaces, and implies the minimax theorem for continuous games with continuous payoff as well as the existence of Nash equilibrium points. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> We propose a complexity model of quantum circuits analogous to the standard (acyclic) Boolean circuit model. It is shown that any function computable in polynomial time by a quantum Turing machine has a polynomial-size quantum circuit. This result also enables us to construct a universal quantum computer which can simulate, with a polynomial factor slowdown, a broader class of quantum machines than that considered by E. Bernstein and U. Vazirani (1993), thus answering an open question raised by them. We also develop a theory of quantum communication complexity, and use it as a tool to prove that the majority function does not have a linear-size quantum formula.<<ETX>> <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> Recently the concept of quantum information has been introduced into game theory. Here we present the first study of quantum games with more than two players. We discover that such games can possess an alternative form of equilibrium strategy, one which has no analog either in traditional games or even in two-player quantum games. In these ``coherent'' equilibria, entanglement shared among multiple players enables different kinds of cooperative behavior: indeed it can act as a contract, in the sense that it prevents players from successfully betraying one another. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> Uncondit ionally secure bit commi tmen t and coin flipping are known to be impossible in the classical world. Bit commitment is known to be impossible also in the quan tum world. We introduce a related new primitive quantum bit escrow. In this primitive Alice commits to a bit b to Bob. The commi tment is bindingin the sense tha t if Alice is asked to reveal the bit, Alice can not bias her commi tmen t wi thout having a good probability of being detected cheating. The commitment is sealing in the sense tha t if Bob learns information about the encoded bit, then if later on he is asked to prove he was playing honestly, he is detected cheating with a good probability. Rigorously proving the correctness of quan tum cryptographic protocols has proved to be a difficult task. We develop techniques to prove quant i ta t ive s ta tements about the binding and sealing propert ies of the quan tum bit escrow protocol. A related primitive we construct is a quan tum biased coin flipping protocol where no player can control the game, i.e., even an all-powerful cheating player must lose with some constant probability, which stands in sharp contrast to the classical world where such protocols are impossible. 
<s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> The quantum advantage arising in a simplified multiplayer quantum game is found to be a disadvantage when the game's qubit source is corrupted by a noisy ``demon.'' Above a critical value of the corruption rate, or noise level, the coherent quantum effects impede the players to such an extent that the ``optimal'' choice of game changes from quantum to classical. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> Abstract Evolutionarily stable strategy (ESS) in classical game theory is a refinement of Nash equilibrium concept. We investigate the consequences when a small group of mutants using quantum strategies try to invade a classical ESS in a population engaged in symmetric bimatrix game of prisoner's dilemma. Secondly we show that in an asymmetric quantum game between two players an ESS pair can be made to appear or disappear by resorting to entangled or unentangled initial states used to play the game even when the strategy pair remains a Nash equilibrium in both forms of the game. <s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> A version of the Monty Hall problem is presented where the players are permitted to select quantum strategies. If the initial state involves no entanglement the Nash equilibrium in the quantum game offers the players nothing more than that obtained with a classical mixed strategy. However, if the initial state involves entanglement of the qutrits of the two players, it is advantageous for one player to have access to a quantum strategy while the other does not. Where both players have access to quantum strategies there is no Nash equilibrium in pure strategies, however, there is a Nash equilibrium in quantum mixed strategies that gives the same average payoff as the classical game. <s> BIB007 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> We consider a slightly modified version of the Rock-Scissors-Paper (RSP) game from the point of view of evolutionary stability. In its classical version the game has a mixed Nash equilibrium (NE) not stable against mutants.
We find a quantized version of the RSP game for which the classical mixed NE becomes stable. <s> BIB008 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> We present a solution to an old problem in distributed computing. In its simplest form, a sender has to broadcast some information to two receivers, but they have access only to pairwise communication channels. Unlike quantum key distribution, here the goal is not secrecy but agreement, and the adversary (one of the receivers or the sender himself) is not outside but inside the game. Using only classical channels this problem is provably impossible. The solution uses pairwise quantum channels and entangled qutrits. <s> BIB009 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> In a recent paper [D. A. Meyer, Phys. Rev. Lett. 82, 1052 (1999)], it has been shown that a classical zero-sum strategic game can become a winning quantum game for the player with a quantum device. Nevertheless, it is well known that quantum systems easily decohere in noisy environments. In this paper, we show that if the handicapped player with classical means can delay his action for a sufficiently long time, the quantum version reverts to the classical zero-sum game under decoherence. <s> BIB010 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> Recent development in quantum computation and quantum information theory allows to extend the scope of game theory for the quantum world. The paper is devoted to the analysis of interference of quantum strategies in quantum market games. <s> BIB011 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> A new approach to play games quantum mechanically is proposed. We consider two players who perform measurements in an EPR-type setting. The payoff relations are defined as functions of correlations, i.e. without reference to classical or quantum mechanics. Classical bi-matrix games are reproduced if the input states are classical and perfectly anti-correlated, that is, for a classical correlation game. However, for a quantum correlation game, with an entangled singlet state as input, qualitatively different solutions are obtained. For example, the Prisoners' Dilemma acquires a Nash equilibrium if both players apply a mixed strategy. It appears to be conceptually impossible to reproduce the properties of quantum correlation games within the framework of classical games. <s> BIB012 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> This paper investigates the powers and limitations of quantum entanglement in the context of cooperative games of incomplete information. We give several examples of such nonlocal games where strategies that make use of entanglement outperform all possible classical strategies. One implication of these examples is that entanglement can profoundly affect the soundness property of two-prover interactive proof systems. We then establish limits on the probability with which strategies making use of entanglement can win restricted types of nonlocal games. 
These upper bounds may be regarded as generalizations of Tsirelson-type inequalities, which place bounds on the extent to which quantum information can allow for the violation of Bell inequalities. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies for some games. <s> BIB013 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> This paper studies quantum Arthur-Merlin games, which are a restricted form of quantum interactive proof system in which the verifier's messages are given by unbiased coin-flips. The following results are proved. For one-message quantum Arthur-Merlin games, which correspond to the complexity class QMA, completeness and soundness errors can be reduced exponentially without increasing the length of Merlin's message. Previous constructions for reducing error required a polynomial increase in the length of Merlin's message. Applications of this fact include a proof that logarithmic length quantum certificates yield no increase in power over BQP and a simple proof that QMA /spl sube/ PP. In the case of three or more messages, quantum Arthur-Merlin games are equivalent in power to ordinary quantum interactive proof systems. In fact, for any language having a quantum interactive proof system there exists a three-message quantum Arthur-Merlin game in which Arthur's only message consists of just a single coin-flip that achieves perfect completeness and soundness error exponentially close to 1/2. Any language having a two-message quantum Arthur-Merlin game is contained in BP /spl middot/ PP. This gives some suggestion that three messages are stronger than two in the quantum Arthur-Merlin setting. <s> BIB014 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> We establish the first hardness results for the problem of computing the value of one-round games played by a verifier and a team of provers who can share quantum entanglement. In particular, we show that it is NP-hard to approximate within an inverse polynomial the value of a one-round game with (i) quantum verifier and two entangled provers or (ii) classical verifier and three entangled provers. Previously it was not even known if computing the value exactly is NP-hard. We also describe a mathematical conjecture, which, if true, would imply hardness of approximation to within a constant. We start our proof by describing two ways to modify classical multi-prover games to make them resistant to entangled provers. We then show that a strategy for the modified game that uses entanglement can be ``rounded'' to one that does not. The results then follow from classical inapproximability bounds. Our work implies that, unless P=NP, the values of entangled-prover games cannot be computed by semidefinite programs that are polynomial in the size of the verifier's system, a method that has been successful for more restricted quantum games. <s> BIB015 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> This paper presents an overview and survey of a new type of game-theoretic setting based on ideas emanating from quantum computing. (We provide a brief overview of quantum computing at the beginning of the paper.) 
Initial results suggest this view brings more flexibility and possibilities into decisions involving game-theoretic considerations. Applications cover a broad spectrum of classical games as well as games in economics, finance and other areas. <s> BIB016 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum games: cooperative versus non-cooperative <s> The generation and control of quantum states of light constitute fundamental tasks in cavity quantum electrodynamics (QED). The superconducting realization of cavity QED, circuit QED, enables on-chip microwave photonics, where superconducting qubits control and measure individual photon states. A long-standing issue in cavity QED is the coherent transfer of photons between two or more resonators. Here, we use circuit QED to implement a three-resonator architecture on a single chip, where the resonators are interconnected by two superconducting phase qubits. We use this circuit to shuffle one- and two-photon Fock states between the three resonators, and demonstrate qubit-mediated vacuum Rabi swaps between two resonators. This illustrates the potential for using multi-resonator circuits as photon quantum registries and for creating multipartite entanglement between delocalized bosonic modes. <s> BIB017
In the almost two decades since the inception of the theory of quantum games, the EWL quantization protocol has taken on the role of a working definition for non-cooperative quantum games for physicists. Several notable results on the quantum physical implementation of games followed Eisert et al.'s work, such as BIB003 BIB005 BIB006 BIB007 BIB008 BIB012 BIB010 . This may seem odd, as one would think that the physics community would be more interested in the equilibrium or optimal behavior of quantum systems than in the quantum physical implementation of games. On the other hand, this scenario makes sense from a practical point of view because, with the advent of technological realizations of quantum computers and quantum communication systems, the playability of games quantum computationally would be of fundamental importance for financial and economic decision making. There is a considerable body of work in which the authors claim to cooperatively game the quantum. Several authors in the early 2000s, such as BIB013 BIB009 BIB015 BIB004 BIB014 , gamed quantum communication protocols or studied complexity classes for quantum processes by considering the protocols as cooperative games of incomplete information. While most authors of such work have mainly focused on identifying quantum advantages similar to the one Meyer identified in his paper, the source of motivation for their work is different. For example, in one such work the authors state: "Formally, a quantum coin flipping protocol with bias is a two-party communication game in the style of BIB001 ...", where the citation provided is to Andrew Chi-Chih Yao's paper titled Quantum circuit complexity BIB002 . Despite the fact that cooperative game theory is the motivation for the latter and other similar work, a formal discussion of cooperative games, together with a formal mapping of the relevant physics to the requisite cooperative game, is almost always missing. In fact, it would be accurate to say that the word "game" is thrown around in this body of literature as frivolously as the headless carcass of a goat is thrown around in the Afghan game of Buzkashi; but the beef is nowhere to be found. This is not surprising, since beef comes from cows! The point of this somewhat macabre analogy is that one should be just as disturbed when hearing the word "game" used for an object that isn't one, as one surely is when hearing the word "beef" used for a goat carcass. Cooperative games are sophisticated conceptual and mathematical constructs. Quoting Aumann [50]: "Cooperative (game) theory starts with a formalization of games that abstracts away altogether from procedures and ... concentrates, instead, on the possibilities for agreement ... There are several reasons that explain why cooperative games came to be treated separately. One is that when one does build negotiation and enforcement procedures explicitly into the model, then the results of a non-cooperative analysis depend very strongly on the precise form of the procedures, on the order of making offers and counter-offers and so on. This may be appropriate in voting situations in which precise rules of parliamentary order prevail, where a good strategist can indeed carry the day. But problems of negotiation are usually more amorphous; it is difficult to pin down just what the procedures are.
More fundamentally, there is a feeling that procedures are not really all that relevant; that it is the possibilities for coalition forming, promising and threatening that are decisive, rather than whose turn it is to speak. ... Detail distracts attention from essentials. Some things are seen better from a distance; the Roman camps around Metzada are indiscernible when one is in them, but easily visible from the top of the mountain." More formally, a cooperative game allows players to benefit by forming coalitions, and binding agreements are possible. This means that the formal definition of a cooperative game is different from that of a non-cooperative one, and instead of Nash equilibrium, cooperative games entertain solution concepts such as a coalition structure, consisting of various coalitions of players, together with a payoff vector for the various coalitions. The optimality features of these solution concepts also differ from those of non-cooperative games. For instance, there is the notion of an imputation, a payoff allocation under which every player in a coalition prefers staying in the coalition to "going it alone". While aspects of cooperative games are certainly reminiscent of those of non-cooperative games, the two types of games are very different objects in very different categories. Because the body of literature that purports to utilize cooperative games to identify some form of efficient or optimal solutions in quantum information processes does so via unclear, indirect, and informal analogies, one could argue that it remains unclear what the merit of such work is, whether game-theoretic or quantum physical. What is needed in this context is a formal study of quantum games rooted in the formalism of cooperative game theory. Another interesting situation can be observed during the early years of quantum game theory (2002 to be exact), when Piotrowski proposed a quantum physical model for markets and economics, which are viewed as games. His ideas appear to be inspired by Meyer's work. In fact, Eisert et al.'s paper does not even appear as a reference in that paper. In a later paper BIB011 , however, Piotrowski states "We are interested in quantum description of a game which consists of buying or selling of market good" (the emphasis is our addition). Note that from the terminology used in both of Piotrowski's papers, it seems that the author wants to implement games quantum physically, even though his initial motivation comes from gaming the quantum! This goes to show that in the early years of quantum game theory, the motivation for merging aspects of quantum physics and game theory was certainly not clear cut. Finally, there are offenders in the quantum physics community who use the word "game" colloquially and not in a game-theoretically meaningful way. An example can be found in BIB017 . Such vacuous usage of the word "game" further confuses and obfuscates serious studies in quantized games and gaming the quantum, cooperative or not. Literature on quantum games is considerable. A good source of reference is the Google Scholar page on the subject, which contains a wealth of information on past and recent publications in the area. The survey paper by Guo et al. BIB016 is an excellent precursor to our efforts here.
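To make the coalitional solution concepts mentioned above concrete, the following minimal Python sketch (with purely illustrative numbers, not drawn from any cited work) represents a three-player cooperative game by its characteristic function and tests whether a proposed payoff vector is an imputation, i.e., efficient and individually rational, and whether it additionally lies in the core.

```python
from itertools import combinations

# A three-player cooperative game given by its characteristic function v:
# v maps each coalition (a frozenset of players) to the value it can secure.
# The numbers below are illustrative assumptions, not from any cited paper.
players = (1, 2, 3)
v = {frozenset(): 0.0,
     frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({3}): 1.0,
     frozenset({1, 2}): 3.0, frozenset({1, 3}): 3.0, frozenset({2, 3}): 3.0,
     frozenset({1, 2, 3}): 6.0}

def is_imputation(x):
    """x is a dict player -> payoff proposed for the grand coalition."""
    efficient = abs(sum(x.values()) - v[frozenset(players)]) < 1e-9
    individually_rational = all(x[i] >= v[frozenset({i})] for i in players)
    return efficient and individually_rational

def in_core(x):
    """Stronger test: no coalition can improve on x by acting on its own."""
    return is_imputation(x) and all(
        sum(x[i] for i in S) >= v[frozenset(S)]
        for r in range(1, len(players)) for S in combinations(players, r))

print(is_imputation({1: 2.0, 2: 2.0, 3: 2.0}))  # True: efficient and rational
print(in_core({1: 2.0, 2: 2.0, 3: 2.0}))        # True for this symmetric game
print(is_imputation({1: 0.5, 2: 2.5, 3: 3.0}))  # False: player 1 would go it alone
```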
Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> In a wide class of social systems each agent has a range of actions among which he selects one. His choice is not, however, entirely free and the actions of all the other agents determine the subset to which his selection is restricted. Once the action of every agent is given, the outcome of the social activity is known. The preferences of each agent yield his complete ordering of the outcomes and each one of them tries by choosing his action in his restricting subset to bring about the best outcome according to his own preferences. The existence theorem presented here gives general conditions under which there is for such a social system an equilibrium, i.e., a situation where the action of every agent belongs to his restricting subset and no agent has incentive to choose another action. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> An open system can be modeled as a two-player game between the system and its environment. At each round of the game, player 1 (the system) and player 2 (the environment) independently and simultaneously choose moves, and the two choices determine the next state of the game. Properties of open systems can be modeled as objectives of these two-player games. For the basic objective of reachability-can player 1 force the game to a given set of target states?-there are three types of winning states, according to the degree of certainty with which player 1 can reach the target. From type-1 states, player 1 has a deterministic strategy to always reach the target. From type-2 states, player 1 has a randomized strategy to reach the target with probability 1. From type-3 states, player 1 has for every real /spl epsi/>0 a randomized strategy to reach the target with probability greater than 1-/spl epsi/. We show that for finite state spaces, all three sets of winning states can be computed in polynomial time: type-1 states in linear time, and type-2 and type-3 states in quadratic time. The algorithms to compute the three sets of winning states also enable the construction of the winning and spoiling strategies. Finally, we apply our results by introducing a temporal logic in which all three kinds of winning conditions can be specified, and which can be model checked in polynomial time. This logic, called Randomized ATL, is suitable for reasoning about randomized behavior in open (two-agent) as well as multi-agent systems. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> Effects of quantum and classical correlations on game theory are studied to clarify the new aspects brought into game theory by the quantum mechanical toolbox. In this study, we compare quantum correlation represented by a maximally entangled state and classical correlation that is generated through phase damping processes on the maximally entangled state. Thus, this also sheds light on the behavior of games under the influence of noisy sources. It is observed that the quantum correlation can always resolve the dilemmas in non-zero sum games and attain the maximum sum of both players' payoffs, while the classical correlation cannot necessarily resolve the dilemmas. 
<s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> We introduce a novel game that models the creation of Internet-like networks by selfish node-agents without central design or coordination. Nodes pay for the links that they establish, and benefit from short paths to all destinations. We study the Nash equilibria of this game, and prove results suggesting that the "price of anarchy" [4] in this context (the relative cost of the lack of coordination) may be modest. Several interesting: extensions are suggested. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> Ken Binmore's previous game theory textbook, Fun and Games (D.C. Heath, 1991), carved out a significant niche in the advanced undergraduate market; it was intellectually serious and more up-to-date than its competitors, but also accessibly written. Its central thesis was that game theory allows us to understand many kinds of interactions between people, a point that Binmore amply demonstrated through a rich range of examples and applications. This replacement for the now out-of-date 1991 textbook retains the entertaining examples, but changes the organization to match how game theory courses are actually taught, making Playing for Real a more versatile text that almost all possible course designs will find easier to use, with less jumping about than before. In addition, the problem sections, already used as a reference by many teachers, have become even more clever and varied, without becoming too technical. Playing for Real will sell into advanced undergraduate courses in game theory, primarily those in economics, but also courses in the social sciences, and serve as a reference for economists. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> We study Nash equilibria in the setting of network creation games introduced recently by Fabrikant, Luthra, Maneva, Papadimitriou and Shenker. In this game we have a set of selfish node players, each creating some incident links, and the goal is to minimize α times the cost of the created links plus sum of the distances to all other players. Fabrikant et al. proved an upper bound O(√α) on the price of anarchy, i.e., the relative cost of the lack of coordination. Albers, Eilts, Even-Dar, Mansour, and Roditty show that the price of anarchy is constant for α = O(√n) and for α ≥ 12n[lg n], and that the price of anarchy is 15(1+min {α2 n, n2 α})1/3) for any α. The latter bound shows the first sublinear worst-case bound, O(n1/3), for all α. But no better bound is known for α between ω(√n) and o(n lg n). Yet α ≈ n is perhaps the most interesting range, for it corresponds to considering the average distance (instead ofthe sum of distances) to other nodes to be roughly on par with link creation (effectively dividing α by n). In this paper, we prove the first o(ne) upper bound for general α, namely 2O(√lg n). We also prove aconstant upper bound for α = O(n1-e) for any fixed e > 0, substantially reducing the range of α for which constant bounds have not been obtained. Along the way, we also improve the constant upper bound by Albers et al. 
(with the leadconstant of 15 ) to 6 for α Next we consider the bilateral network variant of Corbo and Parkesin which links can be created only with the consent of both end points and the link price is shared equally by the two. Corbo and Parkes show an upper bound of O(√α) and a lower bound of Ω(lg α) for α ≤ n. In this paper, we show that in fact the upper bound O(√α) is tight for α ≤, by proving a matching lower bound of Ω(√α). For α > n, we prove that the price of anarchy is Θ(n/√ α). Finally we introduce a variant of both network creation games, in which each player desires to minimize α times the cost of its created links plus the maximum distance (instead of the sum of distances) to the other players. This variant of the problem is naturally motivated by considering the worst case instead of the average case. Interestingly, for the original (unilateral) game, we show that the price of anarchy is at most 2 for α ≥ n, O(min{4√lg n, (n/α)1/3}) for 2√lgn ≤ α ≤ n, and O(n2/α) for α α+1) for α ≤ n, and an upper bound of 2 for α > n. <s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> Quantum key distribution uses quantum mechanics to guarantee secure communication. BB84 is a widely used quantum key distribution that provides a way for two parties, a sender, Alice, and a receiver, Bob, to share an unconditionally secure key in the presence of an eavesdropper, Eve. In a new approach, we view this protocol as a three player static game in which Alice and Bob are two cooperative players and Eve is a competitive one. In our game model Alice’s and Bob’s objective is to maximize the probability of detecting Eve, while Eve’s objective is to minimize this probability. Using this model we show how game theory can be used to find the strategies for Alice, Bob and Eve. <s> BIB007 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> In the time since a merger of quantum mechanics and game theory was proposed formally in 1999, the two distinct perspectives apparent in this merger of applying quantum mechanics to game theory, referred to henceforth as the theory of"quantized games", and of applying game theory to quantum mechanics, referred to henceforth as"gaming the quantum", have become synonymous under the single ill-defined term"quantum game". Here, these two perspectives are delineated and a game-theoretically proper description of what makes a multi-player, non-cooperative game quantum mechanical, is given. Within the context of this description, finding a Nash equilibrium in a strictly competitive quantum game is shown to be equivalent to finding a solution to a simultaneous best approximation problem in the state space of quantum objects, thus setting up a framework for a game theory inspired study of"equilibrium"behavior of quantum physical systems such as those utilized in quantum information processing and computation. <s> BIB008 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> Two qubit quantum computations are viewed as two player, strictly competitive games and a game-theoretic measure of optimality of these computations is developed. 
To this end, the geometry of Hilbert space of quantum computations is used to establish the equivalence of game-theoretic solution concepts of Nash equilibrium and mini-max outcomes in games of this type, and quantum mechanisms are designed for realizing these mini-max outcomes. <s> BIB009 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> Game theory and its quantum extension apply in numerous fields that affect people’s social, political, and economical life. Physical limits imposed by the current technology used in computing architectures (e.g., circuit size) give rise to the need for novel mechanisms, such as quantum inspired computation. Elements from quantum computation and mechanics combined with game-theoretic aspects of computing could open new pathways towards the future technological era. This paper associates dominant strategies of repeated quantum games with quantum automata that recognize infinite periodic inputs. As a reference, we used the PQ-PENNY quantum game where the quantum strategy outplays the choice of pure or mixed strategy with probability 1 and therefore the associated quantum automaton accepts with probability 1. We also propose a novel game played on the evolution of an automaton, where players’ actions and strategies are also associated with periodic quantum automata. <s> BIB010 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> In a seminal paper, Meyer (Phys Rev Lett 82:1052, 1999) described the advantages of quantum game theory by looking at the classical penny flip game. A player using a quantum strategy can win against a classical player almost 100 % of the time. Here we make a slight modification to the quantum game, with the two players sharing an entangled state to begin with. We then analyze two different scenarios: First in which quantum player makes unitary transformations to his qubit, while the classical player uses a pure strategy of either flipping or not flipping the state of his qubit. In this case, the quantum player always wins against the classical player. In the second scenario, we have the quantum player making similar unitary transformations, while the classical player makes use of a mixed strategy wherein he either flips or not with some probability "p." We show that in the second scenario, 100 % win record of a quantum player is drastically reduced and for a particular probability "p" the classical player can even win against the quantum player. This is of possible relevance to the field of quantum computation as we show that in this quantum game of preserving versus destroying entanglement a particular classical algorithm can beat the quantum algorithm. <s> BIB011 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> We propose a quantum voting system, in the spirit of quantum games such as the quantum Prisoner's Dilemma. Our scheme enables a constitution to violate a quantum analog of Arrow's Impossibility Theorem. Arrow's Theorem is a claim proved deductively in economics: Every (classical) constitution endowed with three innocuous-seeming properties is a dictatorship. We construct quantum analogs of constitutions, of the properties, and of Arrow's Theorem. A quantum version of majority rule, we show, violates this Quantum Arrow Conjecture. 
Our voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions. This contribution to quantum game theory helps elucidate how quantum phenomena can be harnessed for strategic advantage. <s> BIB012 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> In this paper, we formulate and analyze generalizations of the quantum penny flip game. In the penny flip game, one coin has two states, heads or tails, and two players apply alternating operations on the coin. In the original Meyer game, the first player is allowed to use quantum (i.e., non-commutative) operations, but the second player is still only allowed to use classical (i.e., commutative) operations. In our generalized games, both players are allowed to use non-commutative operations, with the second player being partially restricted in what operators they use. We show that even if the second player is allowed to use "phase-variable" operations, which are non-Abelian in general, the first player still has winning strategies. Furthermore, we show that even when the second player is allowed to choose one from two or more elements of the group $U(2)$, the second player has winning strategies under certain conditions. These results suggest that there is often a method for restoring the quantum state disturbed by another agent. <s> BIB013 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum entanglement: Nash versus social equilibrium <s> With respect to probabilistic mixtures of the strategies in non-cooperative games, quantum game theory provides guarantee of fixed-point stability, the so-called Nash equilibrium. This permits players to choose mixed quantum strategies that prepare mixed quantum states optimally under constraints. We show here that fixed-point stability of Nash equilibrium can also be guaranteed for pure quantum strategies via an application of the Nash embedding theorem, permitting players to prepare pure quantum states optimally under constraints. <s> BIB014
Entanglement in a quantum physical system implies non-classical correlations among the observables of the system. While Eisert et al. showed that their quantum computational implementation of Prisoner's Dilemma produced non-classical correlations and resolved the dilemma (the Nash equilibrium is also optimal), in BIB003 , Shimamura et al. establish a stronger result: entanglement-enabled correlations always resolve dilemmas in non-zero sum games, whereas classical correlations do not necessarily do the same. Quantum entanglement is clearly a resource for quantum games. In this section, we offer a new perspective on the role of quantum entanglement in quantum games. We consider quantum entanglement in the context of Debreu's BIB001 "social equilibrium". Whereas Nash equilibrium is the solution of a non-cooperative game in which each player's strategy set is independent of all other players' strategy sets, social equilibrium occurs in a generalization (not extension) of non-cooperative games where the players' strategy sets are not independent of each other. These generalized games are also known as abstract economies. Take for instance the example of a supermarket shopper (this example is paraphrased from the literature) interested in buying the basket of goods that is best for her family. While in theory she can choose any basket she pleases, realistically she must stay within her budget, which is not independent of the actions of other players in the economy. For instance, her budget will depend on what her employer pays her in wages. Furthermore, given her budget, which baskets are affordable will depend on the prices of the various commodities, which, in turn, are determined by supply and demand in the economy. In an abstract economy, a player is restricted to playing strategies from a subset of his strategy set, with this limitation being a function of the strategy choices of all other players. More formally, in an abstract economy with $n$ players, let $S_i$ be the $i$-th player's strategy set and let $s_{-i}$ represent the $(n-1)$-tuple of strategy choices of the other $(n-1)$ players. Then player $i$ is restricted to play feasible strategies only from some $\gamma_i(s_{-i}) \subseteq S_i$, where $\gamma_i$ is the "restriction" function. In social equilibrium, each player employs a feasible strategy, and no player can gain by unilaterally deviating to some other feasible strategy. Debreu gives a guarantee of the existence of a social equilibrium in BIB001 using an argument that also utilizes Kakutani's fixed point theorem. Note that an abstract economy may be viewed as a type of mediated communication in which the communication occurs via interaction with the social environment. Recall that the EWL quantization of Prisoner's Dilemma utilizes maximally entangled qubits. Rather than as a quantum mechanism for mediated communication, we interpret the entanglement between the qubits as a restriction function, restricting the players' strategy sets to feasible strategy subsets, which Eisert et al. call the two-parameter strategy sets. It is exactly in this restricted strategy set that the existence of a dilemma-breaking, optimal Nash equilibrium is established.
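As a concrete illustration of the restriction functions $\gamma_i$ and of checking Debreu's social equilibrium, here is a minimal sketch of a toy two-player abstract economy with finite strategy sets; the payoffs and the restriction rule are hypothetical and chosen only for illustration, not taken from Debreu's paper or from the EWL construction.

```python
# Toy two-player abstract economy (finite strategies), illustrative numbers only.
# Each player i has strategy set S[i]; gamma(i, s_other) returns the feasible
# subset of S[i] given the other player's choice; u(i, ...) is player i's payoff.
S = {0: ["low", "high"], 1: ["low", "high"]}

def gamma(i, s_other):
    # Restriction rule: a player may choose "high" only if the other chose "high".
    return S[i] if s_other == "high" else ["low"]

def u(i, s0, s1):
    payoff = {("low", "low"): (1, 1), ("low", "high"): (0, 2),
              ("high", "low"): (2, 0), ("high", "high"): (3, 3)}
    return payoff[(s0, s1)][i]

def is_social_equilibrium(s0, s1):
    # Each strategy must be feasible, and no player can gain by a unilateral
    # deviation to another *feasible* strategy.
    if s0 not in gamma(0, s1) or s1 not in gamma(1, s0):
        return False
    best0 = all(u(0, s0, s1) >= u(0, d, s1) for d in gamma(0, s1))
    best1 = all(u(1, s0, s1) >= u(1, s0, d) for d in gamma(1, s0))
    return best0 and best1

for s0 in S[0]:
    for s1 in S[1]:
        print((s0, s1), is_social_equilibrium(s0, s1))
```

The restriction function here plays the role that, in our interpretation, entanglement plays in the EWL setting: it carves feasible subsets out of the full strategy sets, and equilibrium is only required to be stable against feasible deviations.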
The point to note here is that the resource that quantum entanglement affords the players in the EWL quantization (and possibly others) can be interpreted in two different ways: one, as an extension to mediated quantum communication that produces a near-optimal Nash equilibrium in the quantum game, and the other as a generalization of Prisoner's Dilemma to a quantum abstract economy with social equilibrium, where entanglement serves as the environment. In the former interpretation, Nash equilibrium is realized in mixed quantum strategies; in the latter interpretation, social equilibrium is realized via pure quantum strategies. Whereas the Nash equilibrium is guaranteed by Glicksberg's fixed point theorem, the question of a guarantee of social equilibrium in pure strategy quantum games is raised here for the first time. We conjecture that the answer would be in the affirmative, and that it will most likely be found using a Nash embedding of $\mathbb{CP}^n$ into $\mathbb{R}^m$ similar to the one appearing in BIB014 . Finally, the interpretation of quantum entanglement as a restriction function also addresses van Enk et al.'s criticism of the EWL quantization as blurring the distinction between cooperative and non-cooperative games. Other work studies quantum games from a decision theory point of view, with examples from biology, economics, and gambling, and has connections to quantum algorithms and protocols. A more recent work gives a secure, remote, two-party game that can be played using a quantum gambling machine. This quantum communication protocol has the property that one can make the game played on it demonstrably fair to both parties by modifying the Nash equilibrium point. Some other recent efforts to game quantum computations as non-cooperative games at Nash equilibrium appear in BIB008 BIB009 and are reminiscent of work in concurrent reachability games BIB002 and control theory in general. Other authors use quantum game models to design protocols that make classical communication more efficient, and in BIB007 the authors view the BB84 quantum key distribution protocol as a three-player static game in which the communicating parties form a cooperative coalition against an eavesdropper. In this game model, the coalition players aim to maximize the probability of detecting the intruder, while the intruder's objective is to minimize this probability. In other words, the BB84 protocol is modeled as a two-player, zero-sum game, though it is not clear whether the informal mingling of elements of cooperative and non-cooperative game theory skews the results of this paper. Further, in BIB010 , the authors associate Meyer's quantum Penny Flip game with a quantum automaton and propose a game played on the evolution of the automaton. This approach studies quantum algorithms via quantum automata, akin to how behavioral game theorists study repeated classical games using automata such as Tit-for-Tat, Grimm, and Tat-for-Tit BIB005 . Finally, some interesting studies into Meyer's Penny Flip game appear in BIB011 , where the authors consider a two-player version of this game played with entangled strategies and show that a particular classical algorithm can beat the quantum algorithm, and in BIB013 , where the author formulates and analyzes generalizations of the quantum Penny Flip game to include non-commutative quantum strategies.
The main result of this work is that there is sometimes a method for restoring the quantum state disturbed by another player, a result of strong relevance to the problem of quantum error-correction and fault-tolerance in quantum computing and quantum algorithm design. Finally, motivated by quantum game theory, in BIB012 Bao et al. propose a quantum voting system that has an algorithmic flavor to it. Their quantized voting scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem, which states that every classical constitution endowed with three apparently innocuous properties is in fact a dictatorship. Their voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions, and shows how an algorithmic approach to quantum games leads to strategic advantage. Despite the excellent efforts of the several authors discussed above, Meyer's approach of searching for quantum advantage as a Nash equilibrium in quantum games remains largely unexplored. In terms of quantum communication protocols, where quantum processes are assumed to be noisy and are therefore modeled as density matrices or mixed quantum states, the Meyer-Glicksberg theorem offers a guarantee of Nash equilibrium. Whereas a similar guarantee for mixed classical states spurred massive research in classical computer science, economics, and political science, the same is not true even in the case of quantum communication protocols. Likewise for pure quantum states. These states are used to model the pristine and noiseless world of quantum computations and quantum algorithms. One set of quantum computational protocols is the MW quantum game (not the MW game quantization protocol), which we interpreted above as having a Nash equilibrium in pure quantum states. Note that one could in principle say the same about the EWL quantum game, despite its questionable quantum physical reputation. However, unlike the situation with mixed quantum states, a guarantee of a Nash equilibrium has only come to light very recently BIB014 . On the other hand, some efforts have been made in bringing together ideas from network theory, in particular network creation games BIB004 BIB006 , and quantum algorithms in the context of the quantum Internet and the compilation of adiabatic quantum programs [73] .
Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> In the conventional approach to quantum mechanics, indeterminism is an axiom and nonlocality is a theorem. We consider inverting the logical order, making nonlocality an axiom and indeterminism a theorem. Nonlocal “superquantum” correlations, preserving relativistic causality, can violate the CHSH inequality more strongly than any quantum correlations. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We consider two aspects of quantum game theory: the extent to which the quantum solution solves the original classical game, and to what extent the new solution can be obtained in a classical model. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Correlated equilibria are sometimes more efficient than the Nash equilibria of a game without signals. We investigate whether the availability of quantum signals in the context of a classical strategic game may allow the players to achieve even better efficiency than in any correlated equilibrium with classical signals, and find the answer to be positive. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> This paper investigates the powers and limitations of quantum entanglement in the context of cooperative games of incomplete information. We give several examples of such nonlocal games where strategies that make use of entanglement outperform all possible classical strategies. One implication of these examples is that entanglement can profoundly affect the soundness property of two-prover interactive proof systems. We then establish limits on the probability with which strategies making use of entanglement can win restricted types of nonlocal games. These upper bounds may be regarded as generalizations of Tsirelson-type inequalities, which place bounds on the extent to which quantum information can allow for the violation of Bell inequalities. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies for some games. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Games with incomplete information are formulated in a multi-sector probability matrix formalism that can cope with quantum as well as classical strategies. An analysis of classical and quantum strategy in a multi-sector extension of the game of Battle of Sexes clarifies the two distinct roles of nonlocal strategies, and establish the direct link between the true quantum gain of game's payoff and the breaking of Bell inequalities. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We show that, for a continuous set of entangled four-partite states, the task of maximizing the payoff in the symmetric-strategy four-player quantum Minority game is equivalent to maximizing the violation of a four-particle Bell inequality. We conclude the existence of direct correspondences between (i) the payoff rule and Bell inequalities, and (ii) the strategy and the choice of measured observables in evaluating these Bell inequalities. We also show that such a correspondence is unique to minority-like games. 
<s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We propose a simple yet rich model to extend the notions of Nash equilibria and correlated equilibria of strategic games to the quantum setting, in which we then study the relations between classical and quantum equilibria. Unlike the previous work that focus on qualitative questions on specific games of small sizes, we address the following fundamental and quantitative question for general games: How much"advantage"can playing quantum strategies provide, if any? Two measures of the advantage are studied, summarized as follows. 1. A natural measure is the increase of payoff. We consider natural mappings between classical and quantum states, and study how well those mappings preserve the equilibrium properties. Among other results, we exhibit correlated equilibrium $p$ whose quantum superposition counterpart $\sum_s \sqrt{p(s)}\ket{s}$ is far from being a quantum correlated equilibrium; actually a player can increase her payoff from almost 0 to almost 1 in a [0,1]-normalized game. We achieve this by a tensor product construction on carefully designed base cases. 2. For studying the hardness of generating correlated equilibria, we propose to examine \emph{correlation complexity}, a new complexity measure for correlation generation. We show that there are $n$-bit correlated equilibria which can be generated by only one EPR pair followed by local operation (without communication), but need at least $\log(n)$ classical shared random bits plus communication. The randomized lower bound can be improved to $n$, the best possible, assuming (even a much weaker version of) a recent conjecture in linear algebra. We believe that the correlation complexity, as a complexity-theoretical counterpart of the celebrated Bell's inequality, has independent interest in both physics and computational complexity theory and deserves more explorations. <s> BIB007 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Nonlocality enables two parties to win specific games with probabilities strictly higher than allowed by any classical theory. Nevertheless, all known such examples consider games where the two parties have a common interest, since they jointly win or lose the game. The main question we ask here is whether the nonlocal feature of quantum mechanics can offer an advantage in a scenario where the two parties have conflicting interests. We answer this in the affirmative by presenting a simple conflicting interest game, where quantum strategies outperform classical ones. Moreover, we show that our game has a fair quantum equilibrium with higher payoffs for both players than in any fair classical equilibrium. Finally, we play the game using a commercial entangled photon source and demonstrate experimentally the quantum advantage. <s> BIB008 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> In a previous publication, we showed how group actions can be used to generate Bell inequalities. The group action yields a set of measurement probabilities whose sum is the basic element in the inequality. The sum has an upper bound if the probabilities are a result of a local, realistic theory, but this bound can be violated if the probabilities come from quantum mechanics. 
In our first paper, we considered the case of only two parties making the measurements and single-generator groups. Here we show that the method can be extended to three parties, and it can also be extended to non-Abelian groups. We discuss the resulting inequalities in terms of nonlocal games. <s> BIB009 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We study team decision problems where communication is not possible, but coordination among team members can be realized via signals in a shared environment. We consider a variety of decision problems that differ in what team members know about one another's actions and knowledge. For each type of decision problem, we investigate how different assumptions on the available signals affect team performance. Specifically, we consider the cases of perfectly correlated, i.i.d., and exchangeable classical signals, as well as the case of quantum signals. We find that, whereas in perfect-recall trees (Kuhn 1950 Proc. Natl Acad. Sci. USA 36, 570-576; Kuhn 1953 In Contributions to the theory of games, vol. II (eds H Kuhn, A Tucker), pp. 193-216) no type of signal improves performance, in imperfect-recall trees quantum signals may bring an improvement. Isbell (Isbell 1957 In Contributions to the theory of games, vol. III (eds M Drescher, A Tucker, P Wolfe), pp. 79-96) proved that, in non-Kuhn trees, classical i.i.d. signals may improve performance. We show that further improvement may be possible by use of classical exchangeable or quantum signals. We include an example of the effect of quantum signals in the context of high-frequency trading. <s> BIB010 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Drawing on ideas from game theory and quantum physics, we investigate nonlocal correlations from the point of view of equilibria in games of incomplete information. These equilibria can be classified in decreasing power as general communication equilibria, belief-invariant equilibria and correlated equilibria, all of which contain the familiar Nash equilibria. The notion of belief-invariant equilibrium has appeared in game theory before, in the 1990s. However, the class of non-signalling correlations associated to belief-invariance arose naturally already in the 1980s in the foundations of quantum mechanics. In the present work, we explain and unify these two origins of the idea and study the above classes of equilibria, and furthermore quantum correlated equilibria, using tools from quantum information but the language of game theory. We present a general framework of belief-invariant communication equilibria, which contains (quantum) correlated equilibria as special cases. Our framework also contains the theory of Bell inequalities, a question of intense interest in quantum mechanics and the original motivation for the above-mentioned studies. We then use our framework to show new results related to social welfare. Namely, we exhibit a game where belief-invariance is socially better than correlated equilibria, and one where all non-belief-invariant equilibria are socially suboptimal. Then, we show that in some cases optimal social welfare is achieved by quantum correlations, which do not need an informed mediator to be implemented. 
Furthermore, we illustrate potential practical applications: for instance, situations where competing companies can correlate without exposing their trade secrets, or where privacy-preserving advice reduces congestion in a network. Along the way, we highlight open questions on the interplay between quantum information, cryptography, and game theory. <s> BIB011 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Game theory is a well established branch of mathematics whose formalism has a vast range of applications from the social sciences, biology, to economics. Motivated by quantum information science, there has been a leap in the formulation of novel game strategies that lead to new (quantum Nash) equilibrium points whereby players in some classical games are always outperformed if sharing and processing joint information ruled by the laws of quantum physics is allowed. We show that, for a bipartite non zero-sum game, input local quantum correlations, and separable states in particular, suffice to achieve an advantage over any strategy that uses classical resources, thus dispensing with quantum nonlocality, entanglement, or even discord between the players' input states. This highlights the remarkable key role played by pure quantum coherence at powering some protocols. Finally, we propose an experiment that uses separable states and basic photon interferometry to demonstrate the locally-correlated quantum advantage. <s> BIB012 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Quantum entanglement has been recently demonstrated as a useful resource in conflicting interest games of incomplete information between two players, Alice and Bob [Pappa et al., Phys. Rev. Lett. 114, 020401 (2015)]. General setting for such games is that of correlated strategies where the correlation between competing players is established through a trusted common adviser; however, players need not reveal their input to the adviser. So far, quantum advantage in such games has been revealed in a restricted sense. Given a quantum correlated equilibrium strategy, one of the players can still receive a higher than quantum average payoff with some classically-correlated equilibrium strategy. In this work, by considering a class of asymmetric Bayesian games, we show the existence of games with quantum correlated equilibrium where average payoff of both the players exceed respective individual maximum for each player over all classically-correlated equilibriums. <s> BIB013
It is well known that the EWL quantization scheme is limited in its applicability: it does not cover situations where the players can perform any physically possible operation. The scheme may be adequate in some instances, for example when the hardware used to implement a quantum game only allows a limited set of operations and there are no players with the malicious intent or the technological sophistication to perform operations outside of the allowed set. However, for more sophisticated analyses that include such actors or other factors, a more general framework is desired. To this end, one can start by focusing on the role of quantum entanglement in quantum games. As discussed earlier, the most common interpretation is that the entanglement between players' quantum strategies acts as a type of mediated communication, advice, or contract between the players. A common objection is that quantum games have more strategy choices than the classical version, and that it is possible to simply reformulate a classical game to incorporate more strategy choices so that the classical game has the same equilibria as the quantum counterpart, as was shown with the quantum Prisoner's Dilemma BIB002 . However, as is discussed below, this is not always the case. The study of Bayesian quantum games addresses these objections and has elucidated the role of entanglement in quantum games, as well as the possible advantages of a quantum game, by relating them to Bell's inequalities BIB013 . The connection between Bell's inequalities and Bayesian quantum games was first recognized in the similarities between the form of a Bell's inequality and the payoff function of a Bayesian quantum game. It was found that by casting a quantum Bayesian game in a certain way, the payoff function can resemble a form of a Bell's inequality, so that in the presence of quantum correlations, i.e. entanglement, the inequality will be broken and the quantum game will have a higher payoff than the classical version. In the analogy, a direct parallel can be drawn between the measurements and measurement outcomes in a Bell's inequality experiment and the player types and player actions in a Bayesian quantum game. In Bayesian games, the players have incomplete information about the payoff matrix of the game. This can be formulated by assigning the players different types characterized by different payoff matrices. When making their strategy choice, the players know their own type, but not that of their opponent. It has also been noted that this is related to the conditions in non-local games BIB004 , the condition that the players cannot communicate during a game, and the concept of no-signaling in physics. A correspondence can be drawn between the condition of locality, used in deriving a Bell's inequality, and the condition that the players do not know the type of the other player. This condition can be described mathematically, for a two-player, two-strategy game for example, by labeling the player types as $X, Y$ and the strategy choices as $x, y$, with the following equation BIB005 : $$P(x, y \mid X, Y) = P(x \mid X)\, P(y \mid Y).$$ That is, the probability of the joint actions $x, y$, given that the player types are $X$ and $Y$, is equal to the probability that a player of type $X$ plays $x$ multiplied by the probability that a player of type $Y$ plays $y$. The factorizability of the joint probability distribution is a statement that a player's action cannot be influenced by the type of their opponent.
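A minimal numerical sketch of the factorizability condition above (with made-up conditional behaviors): a joint distribution built as a product of the two players' individual behaviors satisfies the condition, while a perfectly correlated distribution does not.

```python
import itertools

# Illustrative (made-up) conditional behaviors: probability that a player of a
# given type plays strategy "x"; strategy "y" gets the complementary probability.
p_X = {"x": 0.7, "y": 0.3}   # player of type X
p_Y = {"x": 0.4, "y": 0.6}   # player of type Y

# A factorizable joint distribution is just the product of the two behaviors.
joint = {(a, b): p_X[a] * p_Y[b] for a, b in itertools.product("xy", repeat=2)}

def is_factorizable(joint, tol=1e-9):
    # Recompute the marginals and test P(a, b) == P(a) * P(b) for all outcomes.
    pa = {a: sum(joint[(a, b)] for b in "xy") for a in "xy"}
    pb = {b: sum(joint[(a, b)] for a in "xy") for b in "xy"}
    return all(abs(joint[(a, b)] - pa[a] * pb[b]) < tol for a, b in joint)

print(is_factorizable(joint))                                   # True
print(is_factorizable({("x", "x"): 0.5, ("x", "y"): 0.0,
                       ("y", "x"): 0.0, ("y", "y"): 0.5}))      # False (correlated)
```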
It has been noted previously by Fine [78] that a sufficient condition for the breaking of a Bell's inequality is that the joint probability distribution is non-factorizable. For example, if there are two players ($X$ and $Y$), with two possible strategy choices ($x$ and $y$), the joint probability distribution of a mixed strategy where both players choose each strategy with 50% probability is given by $$P(xx) = P(xy) = P(yx) = P(yy) = \tfrac{1}{4},$$ which is simply the product of the two uniform marginals. With an entangled input such as $(\lvert xy \rangle + \lvert yx \rangle)/\sqrt{2}$, however, it is possible to realize the probability distribution $$P(xy) = P(yx) = \tfrac{1}{2}, \qquad P(xx) = P(yy) = 0.$$ This probability distribution, when analyzed for an individual player, still has a 50% probability of either strategy. The difference is that the strategy choices of $X$ and $Y$ are perfectly correlated: one player plays $x$ exactly when the other plays $y$. This probability distribution is not in the image of the original mixed strategy game and is not possible without some form of communication between the players or advice. Thus it is possible to formulate a Bell's inequality from a given Bayesian quantum game and vice versa BIB006 . The objection that the strategy choices available to a quantum player are greater than those of the classical player was addressed by Iqbal and Abbott BIB006 . They formulated a quantum Bayesian game using probability distributions rather than state manipulations. The condition of factorizability of these probability distributions produces constraints on the joint probability distributions of the players, which can in turn be formulated as Bell's inequalities. The advantage in this case is that the strategies available to the classical players are identical to those of the quantum players. The difference is that in the quantum case the players are given an entangled input, while in the classical case they are given a product state. Within this formalism the solution to the Prisoner's Dilemma is identical in the quantum and classical cases, whereas in other games the violation of a Bell's inequality can lead to a quantum advantage, as in the matching pennies game. This analysis can be taken further to incorporate the players receiving advice in a classical game. In a classical non-local game, the players are allowed to formulate strategies before the game and may be the recipients of some form of common advice, but the advice cannot depend on the state of nature. As discussed earlier, this leads to correlated equilibria. As we also noted in section 4, correlated equilibrium allows for the possible realization of more general probability distributions over the outcomes that may not be compatible with the players' mixed strategies. More precisely, a mixed strategy Nash equilibrium is a correlated equilibrium where the probability distribution is a product of two mixed strategies. In quantum games, these non-factorizable probability distributions are provided by entanglement or mediated quantum communication. Brunner and Linden incorporated the correlations that can be produced from classical advice into their analysis of quantum games. In this case, the joint probability distribution can be described by $$P(x, y \mid X, Y) = \int d\lambda\, \rho(\lambda)\, P(x \mid X, \lambda)\, P(y \mid Y, \lambda),$$ where the variable $\lambda$ represents the advice, or information distributed to the players according to the prior $\rho(\lambda)$. This type of probability model accurately describes the behavior of players under shared classical advice. This condition is precisely the condition that is used to derive a Bell's inequality, and the history of violation of Bell's inequalities shows that quantum correlations arising from entanglement can break the inequalities derived from it.
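To illustrate why the classical-advice decomposition above is restrictive, the following sketch (a standard CHSH computation, not specific to any one game in the cited works) compares the best value of the CHSH combination attainable by deterministic classical strategies, which upper-bounds every model of the above form at 2, with the Born-rule correlations of a maximally entangled two-qubit state, which reach $2\sqrt{2}$.

```python
import numpy as np

# Any classical-advice (local hidden variable) model is a mixture of deterministic
# assignments, so its CHSH value cannot exceed the best deterministic one:
best_classical = max(A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2
                     for A1 in (-1, 1) for A2 in (-1, 1)
                     for B1 in (-1, 1) for B2 in (-1, 1))
print(best_classical)  # 2

# Quantum side: observables in the X-Z plane measured on |phi+> = (|00>+|11>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
phi_plus = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def obs(theta):
    # +/-1-valued observable measured at angle theta in the X-Z plane.
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    # Quantum correlation <phi+| A(a) x B(b) |phi+> (Born-rule expectation value).
    M = np.kron(obs(a), obs(b))
    return float(phi_plus @ M @ phi_plus)

# CHSH combination S = E(a1,b1) + E(a1,b2) + E(a2,b1) - E(a2,b2).
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S_quantum = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S_quantum)  # ~2.828 = 2*sqrt(2), exceeding the classical-advice bound of 2
```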
Thus, entanglement produces joint probability distributions of outcomes that are not possible classically, not just because they are non-factorizable, but also because they cannot have arisen from a classical source of advice, or, in traditional quantum mechanical terminology, a hidden variable. If these joint probability distributions are realized in a Bayesian game with payoffs assigned appropriately, the players with access to quantum resources can play with probability distributions that are more favorable than what is possible classically. Correlated equilibria are possible in classical games because of the existence of probability distributions that are not factorizable. Classical games with advice therefore exhibit a wider class of probability distributions than games with independent mixed strategies. And indeed, Bell's inequalities show that there are quantum correlations that lie beyond the classically available correlations. Games designed around Bell's inequalities demonstrate that there are quantum games that can out-perform even classical games with correlated equilibria. These games do not have the weakness of the EWL quantization scheme, whose results can also be obtained classically by allowing correlated equilibria without restricting the allowed strategies, a restriction which can often make EWL games unphysical. More recently, several researchers have used these results to construct games based on Bell's inequalities that exhibit a true benefit from quantum correlations without suffering from the shortcomings of earlier quantization schemes BIB003 BIB010 BIB007 BIB011 BIB008 . Thus it has been shown that the probability distributions of outcomes are more fundamental than the presence of entanglement within a game. Indeed, these considerations shed light on other types of correlations that can exist, both within quantum mechanics and beyond quantum mechanics. For example, there are quantum states that exhibit quantum correlations even when the entanglement is known to be zero. These correlations are known as quantum discord, and it is possible to formulate games that have an advantage under quantum discord BIB012 . Further, there are types of correlations known to be consistent with the no-signaling condition that are not even possible within quantum mechanics, known as super-quantum correlations (Popescu and Rohrlich BIB001 ). Games formulated with these correlations can outperform even the quantum versions. The analysis of Bayesian quantum games has thus addressed several of the objections to the importance of quantum games. The correlations that exist among the players' strategies, that is, the joint probability distributions over outcomes, are shown to be more powerful in analyzing the advantage of a quantum game than the mere presence of entanglement. The connections between Bayesian quantum games and Bell's inequalities will likely continue to give insight into, and play a role in analyzing, both the different games that are formulated and the forms of Bell's inequalities that are derived BIB009 .
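For concreteness, the sketch below implements the standard Popescu-Rohrlich box (with the usual 0/1 input and output labeling assumed) to illustrate the super-quantum correlations mentioned above: the box is no-signaling yet attains the algebraic maximum CHSH value of 4, beyond both the classical bound of 2 and the quantum (Tsirelson) bound of $2\sqrt{2}$.

```python
import itertools

def pr_box(a, b, x, y):
    # Popescu-Rohrlich box: outputs a, b in {0,1} for inputs x, y in {0,1},
    # correlated so that a XOR b = x AND y, and uniformly random otherwise.
    return 0.5 if (a ^ b) == (x & y) else 0.0

# No-signaling check for Alice: her marginal P(a|x) is independent of Bob's
# input y (the symmetric check for Bob is analogous).
for x, a in itertools.product((0, 1), repeat=2):
    marginals = {y: sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)}
    assert marginals[0] == marginals[1] == 0.5

# CHSH value: E(x,y) = sum_ab (-1)^(a+b) P(a,b|x,y); S = E00 + E01 + E10 - E11.
def E(x, y):
    return sum((-1) ** (a + b) * pr_box(a, b, x, y) for a in (0, 1) for b in (0, 1))

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)  # 4.0 -- beyond the quantum (Tsirelson) bound of 2*sqrt(2) ~ 2.83
```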
Quantum games: a review of the history, current state, and interpretation <s> Stochastic games <s> In a stochastic game the play proceeds by steps from position to position, according to transition probabilities controlled jointly by the two players. We shall assume a finite number, $N$, of positions, and finite numbers $m_k$, $n_k$ of choices at each position; nevertheless, the game may not be bounded in length. If, when at position $k$, the players choose their $i$-th and $j$-th alternatives, respectively, then with probability $s_{ij}^k > 0$ the game stops, while with probability $p_{ij}^{kl}$ the game moves to position $l$. Define $s = \min_{k,i,j} s_{ij}^k$. Since $s$ is positive, the game ends with probability 1 after a finite number of steps, because, for any number $t$, the probability that it has not stopped after $t$ steps is not more than $(1-s)^t$. Payments accumulate throughout the course of play: the first player takes $a_{ij}^k$ from the second whenever the pair $i, j$ is chosen at position $k$. If we define the bound $M = \max_{k,i,j} |a_{ij}^k|$, then we see that the expected total gain or loss is bounded by $M + (1-s)M + (1-s)^2 M + \dots = M/s$. The process therefore depends on $N^2 + N$ matrices $P^{kl} = (p_{ij}^{kl} \mid i = 1, 2, \dots, m_k;\ j = 1, 2, \dots, n_k)$ and $A^k = (a_{ij}^k \mid \dots$ <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Stochastic games <s> "Elegantly written, with obvious appreciation for fine points of higher mathematics...most notable is [the] author's effort to weave classical probability theory into [a] quantum framework." - The American Mathematical Monthly "This is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it. Furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students." - Mathematical Reviews An Introduction to Quantum Stochastic Calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and stimulate further research in their unification. This is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features: The origin of Ito's correction formulae for Brownian motion and the Poisson process can be traced to communication relations or, equivalently, the uncertainty principle. Quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields. Quantum dynamical semigroups as well as classical Markov semigroups are realized through unitary operator evolutions. The text is almost self-contained and requires only an elementary knowledge of operator theory and probability theory at the graduate level. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Stochastic games <s> A model of stochastic games where multiple controllers jointly control the evolution of the state of a dynamic system but have access to different information about the state and action processes is considered. The asymmetry of information among the controllers makes it difficult to compute or characterize Nash equilibria.
Using the common information among the controllers, the game with asymmetric information is used to construct another game with symmetric information such that the equilibria of the new game can be transformed to equilibria of the original game. Further, under certain conditions, a Markov state is identified for the new symmetric information game and its Markov perfect equilibria are characterized. This characterization provides a backward induction algorithm to find Nash equilibria of the original game with asymmetric information in pure or behavioral strategies. Each step of this algorithm involves finding Bayesian Nash equilibria of a one-stage Bayesian game. The class of Nash equilibria of the original game that can be characterized in this backward manner are named common information based Markov perfect equilibria. <s> BIB003
Stochastic games extend strategic games to dynamical situations where the actions of the players and the history of states affect the evolution of the game. In this section let us review a subset of classical multi-stage stochastic games that are Markovian in evolution. The hope is that this will generate interest in quantizing such multi-stage games by leveraging the advances in quantum stochastic calculus BIB002 and quantum stochastics . There is very little work on quantum Markov decision processes (qMDP) [92] , which are specialized quantum games, and so there are many opportunities to explore in this class of quantum stochastic games. We start our discussion with stochastic games, specialize them to Markov decision processes (MDP), review the literature on quantized MDPs that involve partial observations, introduce quantum probability and quantum Markov semigroups, and finally outline one possible way to quantize stochastic games. A classical stochastic game à la Shapley BIB001 is a tuple $(\chi, A_i(x), Q_i(x,a), P(x'|x,a), \lambda, x_0)$, where $\chi$ is the finite state space, $A_i$ is the finite action space of the individual players, $Q_i$ is the $i$-th player's payoff function, $P$ is the transition probability function, which can be thought of as a random variable because it is a conditional probability and which would become a completely positive map in the quantum case, $0 \le \lambda \le 1$ is the discount factor, reflecting that player $i$'s valuation of the game diminishes with time, and $x_0$ is the initial state of the game. The discount factor is introduced in infinite-horizon games so that the values remain finite; another way to understand it is to relate $\lambda$ to the player's patience. How much more does the player value a dollar today than a dollar received in the future? This can be quantified by the discount factor: as her discount factor increases, she values the later amount more and more nearly as much as the earlier payment. A person is more patient the less she minds waiting for something valuable rather than receiving it immediately; in this interpretation a higher discount factor implies a higher level of patience. There is yet another reason to discount the future in multi-stage games. The players may not be sure about how long the game will continue. Even in the absence of time preference per se, a player would prefer a dollar today rather than a promise of one tomorrow because of the uncertainty of the future. Put another way, a payoff at a future time is really a conditional payoff, conditional on the game lasting that long. The formulation of Shapley has been extended in different directions, such as non-zero-sum games and state spaces that are infinite (countable as well as uncountable), and the existence of Nash equilibria has been established under some restricted conditions. For a recent perspective on dynamical games we refer the reader to Ref . The dynamic game starts at $x_0$ and all the players simultaneously choose and apply a strategy $s_i$, that is, an action from $A_i$ depending upon the history of states. The payoffs and the next state of the game are determined by the functions $Q$ and $P$. The expected discounted payoff for player $i$ is given by

$$ E\left[ \sum_{t=0}^{\infty} \lambda^t\, Q_i(x_t, a_t) \right], $$

where $x_t$ and $a_t$ denote the state and the joint action at stage $t$. Definition 1. A strategy is called a Markov strategy if $s_i$ depends only on the current state, and we will let $s_i(x)$ denote the action that player $i$ would choose in state $x$. A Markov perfect equilibrium (MPE) is an equilibrium of the dynamic game in which each player selects a strategy that depends only upon the current state of the game.
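As a concrete illustration of this tuple and of the role of the discount factor, the following sketch (with made-up toy payoffs and transition probabilities, not data from any cited work) stores a small two-player stochastic game as arrays and estimates a player's expected discounted payoff under fixed Markov strategies by Monte Carlo simulation.

```python
# Illustrative sketch of a finite stochastic game (chi, A_i, Q_i, P, lambda, x0)
# with a Monte Carlo estimate of the expected discounted payoff.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
# Q[i, x, a1, a2]: payoff to player i in state x under joint action (a1, a2).
Q = rng.uniform(0, 1, size=(2, n_states, n_actions, n_actions))
# P[x, a1, a2, x']: transition probability to x' from x under (a1, a2).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions, n_actions))
lam, x0 = 0.9, 0

def discounted_payoff(i, s1, s2, horizon=200, n_runs=2000):
    """Estimate E[sum_t lam^t Q_i(x_t, s1(x_t), s2(x_t))] starting from x0."""
    total = 0.0
    for _ in range(n_runs):
        x, value, disc = x0, 0.0, 1.0
        for _ in range(horizon):
            a1, a2 = s1[x], s2[x]
            value += disc * Q[i, x, a1, a2]
            disc *= lam
            x = rng.choice(n_states, p=P[x, a1, a2])
        total += value
    return total / n_runs

# Markov strategies: pure, state-dependent action choices.
s1 = [0, 1]   # player 1 plays action 0 in state 0, action 1 in state 1
s2 = [1, 0]
print("estimated discounted payoff to player 1:", discounted_payoff(0, s1, s2))
```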
MPEs are a subset of the Nash equilibria of a stochastic game. Let us start with the observation that if all players other than $i$ are playing Markov strategies $s_{-i}$, then player $i$ has a best response that is a Markov strategy. This is easy to see: if there exists a best response where player $i$ plays $a_i$ after a history $h$ leading to state $x$, and plays $a'_i$ after another history $h'$ that also leads to state $x$, then both $a_i$ and $a'_i$ must yield the same expected payoff to player $i$. Let us define a quantity $V_i(x; s_{-i})$ for each state $x$ as the highest possible payoff player $i$ can achieve starting from state $x$, given that all other players play the Markov strategies $s_{-i}$. A Markov best response is then given by

$$ s_i(x) \in \arg\max_{a_i \in A_i(x)} \Big[ Q_i\big(x, a_i, s_{-i}(x)\big) + \lambda \sum_{x'} P\big(x' \mid x, a_i, s_{-i}(x)\big)\, V_i(x'; s_{-i}) \Big]. $$

Existence of MPE for finite games. When the state space, the number of players, and the action spaces are all finite, a stochastic game has an MPE. To see this, let us construct a new game with $N \cdot S$ players, where $N$ and $S$ are, respectively, the number of players and the number of states of the original game. The action set of player $(i, x)$ is $A_i(x)$, and its payoff is player $i$'s expected discounted payoff when the game starts in state $x$. This is a finite game that is guaranteed to have a Nash equilibrium. It is also an MPE, as each player's strategy depends only on the current state. By construction, the strategy of player $i$ maximizes his payoff among all Markov strategies given $s_{-i}$, and, as shown above, each player $i$ has a best response that is a Markov strategy when all opponents play Markov strategies.

Definition 2. Two-player zero-sum stochastic game: A two-player zero-sum game is defined by an $m \times n$ matrix $P$, where $P_{ij}$ is the payoff to player 1 when the two players apply strategies $i \in A_1$, $i = 1, \dots, m$, and $j \in A_2$, $j = 1, \dots, n$, respectively; correspondingly, the payoff to the second player is $-P_{ij}$. When the players use mixed strategies from $\Delta(S_1)$ and $\Delta(S_2)$ respectively, the game, being finite, is guaranteed to have a Nash equilibrium satisfying the minimax relation

$$ \max_{p \in \Delta(S_1)} \min_{q \in \Delta(S_2)} p^{T} P q \;=\; \min_{q \in \Delta(S_2)} \max_{p \in \Delta(S_1)} p^{T} P q. $$

The above minimax theorem can be extended to stochastic games, as shown by Shapley BIB001 . A useful lemma is that the value of a matrix game is nonexpansive: for matrices B and C of the same dimensions, $\mathrm{val}(B) - \mathrm{val}(C) \le \max_{i,j}(B_{ij} - C_{ij})$; by a symmetric argument reversing B and C we establish the lemma in the form $|\mathrm{val}(B) - \mathrm{val}(C)| \le \max_{i,j}|B_{ij} - C_{ij}|$. Let us now consider the stochastic version of this game played in $k$ stages. The value of the game is defined via a function $\alpha_k : \chi \to \mathbb{R}$ and an operator $T$,

$$ (T\alpha)(x) = \mathrm{val}\big[R_x(\alpha)\big], \qquad R_x(\alpha)_{ij} = Q(x, i, j) + \lambda \sum_{x'} P(x' \mid x, i, j)\, \alpha(x'), $$

with $\alpha_{k+1} = T\alpha_k$, where $\mathrm{val}[\cdot]$ denotes the minimax value of the matrix game. To see that the operator $T$ is a contraction with respect to the supremum norm, and thus that the iteration has a unique fixed point for any initial condition, note that the lemma above gives $\|T\alpha - T\beta\|_\infty \le \lambda \|\alpha - \beta\|_\infty$. Let us now consider Theorem 4.

Theorem 4. Given a two-player zero-sum stochastic game, define $\alpha^*$ as the unique solution to $\alpha^* = T\alpha^*$. A pair of strategies $(s_1, s_2)$ is a subgame perfect equilibrium if and only if, after any history leading to the state $x$, the expected discounted payoff to player 1 is exactly $\alpha^*(x)$.

Proof. Let us suppose the game starts in state $x$ and player 1 plays an optimal strategy for $k$ stages, with terminal payoffs $\alpha_0(x') = 0$ for all $x' \in \chi$, and plays an arbitrary strategy afterwards. This guarantees him a payoff of at least the $k$-stage value $\alpha_k(x)$, up to a correction that vanishes as $k \to \infty$. This follows from the observation that after $k$ stages the payoff for the first player is the negative of the maximum possible payoff for the second player. When $k \to \infty$ the value becomes $\alpha^*$, and by a symmetric argument for player two the theorem is established.

Proposition 5. Let $s_1(x), s_2(x)$ be optimal (possibly mixed) strategies for players 1 and 2 in the zero-sum matrix game defined by $R_x(\alpha^*)$. Then $s_1, s_2$ are optimal strategies in the stochastic game for both players; in particular, $(s_1, s_2)$ is an MPE.

Proof. Let us fix a strategy $\hat{s}_2$ for player 2 that could possibly be history dependent.
Then, we first consider a $k$-stage game where the terminal payoffs are given by $\alpha^*$. In this game, it follows that player 1 can guarantee a payoff of at least $\alpha^*(x)$ by playing the strategy $s_1$ given in the proposition, irrespective of the strategy of player 2. From this we obtain a bound on player 1's expected discounted payoff that, in the limit $k \to \infty$, converges to $\alpha^*(x)$; by a symmetric argument for the second player we establish the result. The method described above is the backward induction algorithm for solving the Bellman fixed-point equation $\alpha^* = T\alpha^*$ and is applicable for games with perfect state observation. For games with asymmetric information, that is, when players make independent noisy observations of the state and do not share all their information, we refer the reader to reference BIB003 and the references therein. Our interest here is confined to games with symmetric information, which, from the quantization point of view, are a good starting point. Now, let us consider a special class of stochastic games called Markov decision processes (MDP), where only one player, called MAX, plays the game against nature, which introduces randomness, with the goal of maximizing a payoff function. It is easy to see that an MDP generalizes to a stochastic game with two players, MAX and MIN, with a zero-sum objective. Clearly, we can have results similar to those for stochastic games in the case of an MDP with finitely many states and actions, discounted payoff, and infinite horizon: for every MDP $(\chi, A(x), Q(x,a), P(x'|x,a), \lambda, z)$ with finitely many states and actions and every discount factor $\lambda < 1$ there is a pure stationary strategy $\sigma$ such that for every initial state $z$ and every strategy $\tau$ we have $v_\sigma(z) \ge v_\tau(z)$, where $v_\sigma(z)$ denotes the expected discounted payoff under $\sigma$ starting from $z$. Moreover, the stationary strategy $\sigma$ obeys, for every state $z$,

$$ v_\sigma(z) = \max_{a \in A(z)} \Big[ Q(z, a) + \lambda \sum_{z'} P(z' \mid z, a)\, v_\sigma(z') \Big]. $$
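The fixed-point characterization above lends itself directly to value iteration. The sketch below is a standard implementation on made-up toy data (not code from the cited works): it computes α* by repeatedly applying the operator T, obtaining the minimax value of each auxiliary matrix game R_x(α) from a small linear program.

```python
# Illustrative Shapley value iteration for a two-player zero-sum stochastic game:
# alpha_{k+1}(x) = val[ Q[x] + lam * sum_x' P(x'|x,.,.) * alpha(x') ].
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Minimax value of the zero-sum matrix game M for the maximizing row player."""
    m, n = M.shape
    # Variables: row mixed strategy p (length m) and the value v; minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_ub = np.hstack([-M.T, np.ones((n, 1))])          # v <= p^T M[:, j] for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # sum(p) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

def shapley_iteration(Q, P, lam, tol=1e-8, max_iter=10_000):
    n_states = Q.shape[0]
    alpha = np.zeros(n_states)
    for _ in range(max_iter):
        new_alpha = np.empty(n_states)
        for x in range(n_states):
            R_x = Q[x] + lam * np.tensordot(P[x], alpha, axes=([2], [0]))
            new_alpha[x] = matrix_game_value(R_x)
        if np.max(np.abs(new_alpha - alpha)) < tol:     # contraction => convergence
            return new_alpha
        alpha = new_alpha
    return alpha

# Toy game: 2 states, 2 actions per player.
rng = np.random.default_rng(1)
Q = rng.uniform(-1, 1, size=(2, 2, 2))            # Q[x, i, j]: payoff to player 1
P = rng.dirichlet(np.ones(2), size=(2, 2, 2))     # P[x, i, j, x']
print("fixed point alpha* =", shapley_iteration(Q, P, lam=0.9))
```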
Quantum games: a review of the history, current state, and interpretation <s> Quantum probability <s> In his investigations of the mathematical foundations of quantum mechanics, Mackey has proposed the following problem: Determine all measures on the closed subspaces of a Hilbert space. A measure on the closed subspaces means a function $\mu$ which assigns to every closed subspace a non-negative real number such that if $\{A_i\}$ is a countable collection of mutually orthogonal subspaces having closed linear span $B$, then $$ \mu(B) = \sum_i \mu(A_i). $$ <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum probability <s> "Elegantly written, with obvious appreciation for fine points of higher mathematics...most notable is [the] author's effort to weave classical probability theory into [a] quantum framework." - The American Mathematical Monthly "This is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it. Furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students." - Mathematical Reviews An Introduction to Quantum Stochastic Calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and stimulate further research in their unification. This is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features: The origin of Ito's correction formulae for Brownian motion and the Poisson process can be traced to communication relations or, equivalently, the uncertainty principle. Quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields. Quantum dynamical semigroups as well as classical Markov semigroups are realized through unitary operator evolutions. The text is almost self-contained and requires only an elementary knowledge of operator theory and probability theory at the graduate level. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum probability <s> Quantum Probability and Orthogonal Polynomials.- Adjacency Matrices.- Distance-Regular Graphs.- Homogeneous Trees.- Hamming Graphs.- Johnson Graphs.- Regular Graphs.- Comb Graphs and Star Graphs.- The Symmetric Group and Young Diagrams.- The Limit Shape of Young Diagrams.- Central Limit Theorem for the Plancherel Measures of the Symmetric Groups.- Deformation of Kerov's Central Limit Theorem. <s> BIB003
Let us now review the basic concepts of quantum probability and quantum Markov semigroups that are required to define quantum stochastic games. The central objects of classical probability, random variables and measures, have quantum analogues in self-adjoint operators and trace mappings. To get a feel for the theory, let us consider the most familiar example BIB003 of a random variable, namely the coin toss: a random variable $X$ taking the values $\pm 1$ with probability $1/2$ each, so that its moments are $E[X^m] = \frac{1}{2}\big(1 + (-1)^m\big)$. The self-adjoint Pauli operator $\sigma_x$, evaluated in the state $e_0 = (1, 0)^T$, has exactly the same moments, $\langle e_0, \sigma_x^m e_0 \rangle = E[X^m]$, and is therefore stochastically equivalent to the Bernoulli random variable. This moment sequence can be visualized as a walk on a graph: $\sigma_x$ is the adjacency matrix of the two-vertex graph, and $\langle e_0, \sigma_x^m e_0 \rangle$ counts the closed walks of length $m$ that start and end at the distinguished vertex.

Definition 9. A finite-dimensional quantum probability (QP) space is a tuple $(\mathcal{H}, \mathcal{A}, P)$, where $\mathcal{H}$ is a finite-dimensional Hilbert space, $\mathcal{A}$ is a *-algebra of operators, and $P$ is a trace-class operator, specifically a density matrix, denoting the quantum state.

As we alluded to earlier, random variables in a classical probability (CP) space are stochastically equivalent to observables on a Hilbert space $\mathcal{H}$. These are self-adjoint operators with a spectral resolution $X = \sum_i x_i E^X_i$, where the $x_i$'s are the eigenvalues of $X$ and each $E^X_i$ is interpreted as the event of $X$ taking the value $x_i$. States are positive operators with unit trace, denoted by $P$. In this framework, the expectation of an observable $X$ in the state $P$ is defined using the trace as $\mathrm{tr}(PX)$. Observables, when measured, are equivalent to random variables on a probability space, and a collection of such classical spaces constitutes a quantum probability space. If all the observables of interest commute with each other, then the classical spaces can be composed into a product probability space and the equivalence CP = QP holds. The main feature of a QP space is the admission of possibly non-commuting projections and observables of the underlying Hilbert space within the same setting.

Definition 10. Canonical observables: Starting from a σ-finite measure space we can construct observables on a Hilbert space that are called canonical, as every observable can be shown to be unitarily equivalent to a direct sum of them BIB002 . Let $(\Omega, \Gamma, \mu)$ be a σ-finite measure space with a countably additive σ-algebra. We can construct the complex Hilbert space of all square-integrable functions with respect to $\mu$ and denote it by $L^2(\mu)$. Then the observable $\xi_\mu : \Gamma \to \mathcal{P}(\mathcal{H})$ can be set up as the multiplication operator

$$ \big(\xi_\mu(E) f\big)(\omega) = I_E(\omega)\, f(\omega), \qquad f \in L^2(\mu),\ E \in \Gamma, $$

where $I_E$ is the indicator function of the set $E$.

Example 11. Let $\mathcal{H} = \mathbb{C}^2$ and $\mathcal{A} = M_2$, the *-algebra of complex matrices of dimension $2 \times 2$, and take the state $P(A) = \langle \psi, A\psi \rangle = \langle A^\dagger \psi, \psi \rangle$, where $\psi$ is any unit vector. This space models quantum spin systems in physics and qubits in quantum information processing. This example can be generalized to an $n$-dimensional space to build quantum probability spaces.

Definition 12. Two quantum mechanical observables are said to be compatible, that is, they can be measured simultaneously, if the operators representing them can be diagonalized concurrently. Two operators that share a common eigenvector will be characterized as co-measurable.

There is a canonical way to create quantum probability spaces from their classical counterparts. The process involves creating a Hilbert space from the square-integrable functions with respect to the classical probability measure. The *-algebra of interest is usually defined in terms of creation, conservation, and annihilation operators. Classical probability measures become quantum states in a natural way through Gleason's theorem BIB001 .
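A short numerical check of the coin-toss example (with the conventional basis vector e₀ = (1, 0) assumed as the state) confirms the stochastic equivalence claimed above.

```python
# Illustrative check: the vacuum moments of sigma_x equal the moments of a
# fair +/-1 coin toss, i.e. <e0, sigma_x^m e0> = E[X^m].
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
e0 = np.array([1.0, 0.0])

for m in range(6):
    quantum_moment = e0 @ np.linalg.matrix_power(sigma_x, m) @ e0
    coin_moment = 0.5 * ((+1) ** m) + 0.5 * ((-1) ** m)
    print(f"m={m}: <e0, sigma_x^m e0> = {quantum_moment:.0f}, E[X^m] = {coin_moment:.0f}")
# Both sequences read 1, 0, 1, 0, ...: the even moments count the closed walks
# of even length on the two-vertex graph, exactly as described in the text.
```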
In this canonical construction, unitary operators are identified in the algebra to describe quantum evolutions. A sequence of such operators forming a quantum stochastic process can be defined similarly to stochastic processes in a classical probability space. Conditional expectation in a quantum context is not always defined; the version used here is adequate for our purposes and is consistent with its classical counterpart in being a projection and enjoying properties such as the tower property. Following that, quantum Markov semigroups (QMS) are defined as a one-parameter family of completely positive maps, as required for defining quantum stochastic games.
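As a minimal illustration of a completely positive map generating a discrete-time quantum Markov semigroup, the following sketch (with an assumed amplitude-damping Kraus pair and an assumed damping strength) applies the same trace-preserving CP map repeatedly to a density matrix and verifies that the trace is preserved while the excited-state population decays.

```python
# Illustrative discrete-time quantum Markov semigroup: repeated application of
# a CPTP (Kraus) map.  The semigroup property T^{m+n} = T^m T^n holds because
# the same map is applied at every step.
import numpy as np

gamma = 0.1   # damping strength per step (an assumption for illustration)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def T(rho):
    """One step of the semigroup: a completely positive, trace-preserving map."""
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
for step in range(5):
    print(f"step {step}: trace = {np.trace(rho).real:.3f}, "
          f"P(excited) = {rho[1, 1].real:.3f}")
    rho = T(rho)
# The trace stays 1 at every step while the excited-state population decays by
# a factor (1 - gamma) per step.
```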
Quantum games: a review of the history, current state, and interpretation <s> Quantum Markov decision processes <s> "Elegantly written, with obvious appreciation for fine points of higher mathematics...most notable is [the] author's effort to weave classical probability theory into [a] quantum framework." - The American Mathematical Monthly "This is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it. Furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students." - Mathematical Reviews An Introduction to Quantum Stochastic Calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and stimulate further research in their unification. This is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features: The origin of Ito's correction formulae for Brownian motion and the Poisson process can be traced to communication relations or, equivalently, the uncertainty principle. Quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields. Quantum dynamical semigroups as well as classical Markov semigroups are realized through unitary operator evolutions. The text is almost self-contained and requires only an elementary knowledge of operator theory and probability theory at the graduate level. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum Markov decision processes <s> We show that when the speed of control is bounded, there is a widely applicable minimal-time control problem for which a coherent feedback protocol is optimal, and is faster than all measurement-based feedback protocols, where the latter are defined in a strict sense. The superiority of the coherent protocol is due to the fact that it can exploit a geodesic path in Hilbert space, a path that measurement-based protocols cannot follow. <s> BIB002
A quantum Markov decision process (qMDP) is a tuple $(\chi = (\mathbb{C}^2)^{\otimes n}, A(x), Q(x,a), T(x'|x,a), \lambda, \rho_0)$. Here, $\chi$ is the finite $2^n$-dimensional complex Hilbert space, $A$ is the finite action space of the single player (unitary operators on the Hilbert space, such as the Pauli operators), $Q$ is the player's payoff function based on a partial observation of the state, $T$ is a completely positive mapping that would induce a quantum Markov semigroup when it is time dependent, $0 \le \lambda \le 1$ is the discount factor, and $\rho_0$ is the initial quantum state of the game. In terms of quantum information theory, the state of the game can be represented by $n$ qubits, and the player applies a unitary operator from a fixed finite set to a qubit as a strategy. Instead of a payoff based on partially observed states, an approach based on continuous non-demolition measurement can be formulated. Boutan et al. derived Bellman equations for the optimal feedback control of qubit states using ideas from quantum filtering theory. The qubit is coupled to an environment, for example the second-quantized electromagnetic field, and by continually measuring the field quadratures the state of the qubit can be estimated (a non-demolition measurement). The essential step involves rigorously deriving a quantum filtering equation, based on quantum stochastic calculus BIB001 , to estimate the state of the system coupled to the environment. By basing the payoff on this estimate, the qMDP process can evolve coherently until a desired time. Such an approach can be extended to the quantum stochastic games described next. In addition, coherent evolutions may have advantages over measurement-based dynamics BIB002 .
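A single decision step of such a qMDP can be sketched as follows; the action set, noise map, payoff observable, and greedy policy in this example are assumptions chosen only for illustration, not a construction taken from the cited works.

```python
# Illustrative qMDP step on one qubit: the player applies a unitary action, the
# CP map T (here assumed depolarizing) acts as nature, and the payoff is the
# expectation Tr(Q rho) of an assumed payoff observable.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
actions = {"I": I2, "X": X, "Y": Y, "Z": Z}      # finite action set A

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)        # initial state |0><0|
payoff_obs = np.array([[0, 0], [0, 1]], dtype=complex)  # Q = projector onto |1>
lam = 0.95                                               # discount factor

def depolarize(rho, p=0.05):
    """Assumed CP transition map T playing the role of nature."""
    return (1 - p) * rho + p * np.eye(2) / 2

def step(rho, action):
    """Apply the player's unitary action, then the CP transition map."""
    U = actions[action]
    return depolarize(U @ rho @ U.conj().T)

# Greedy policy: at each stage pick the action maximizing the immediate payoff.
rho, total, disc = rho0, 0.0, 1.0
for t in range(3):
    best = max(actions, key=lambda a: np.trace(payoff_obs @ step(rho, a)).real)
    rho = step(rho, best)
    total += disc * np.trace(payoff_obs @ rho).real
    disc *= lam
    print(f"t={t}: action {best}, discounted payoff so far {total:.3f}")
```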
Quantum games: a review of the history, current state, and interpretation <s> Experimental realizations <s> After a brief introduction to the principles and promise of quantum information processing, the requirements for the physical implementation of quantum computation are discussed. These five requirements, plus two relating to the communication of quantum information, are extensively explored and related to the many schemes in atomic physics, quantum optics, nuclear and electron magnetic resonance spectroscopy, superconducting electronics, and quantum-dot physics, for achieving quantum computing. I. INTRODUCTION The advent of quantum information processing, as an abstract concept, has given birth to a great deal of new thinking, of a very concrete form, about how to create physical computing devices that operate in the hitherto unexplored quantum mechanical regime. The efforts now underway to produce working laboratory devices that perform this profoundly new form of information processing are the subject of this book. In this chapter I provide an overview of the common objectives of the investigations reported in the remainder of this special issue. The scope of the approaches, proposed and underway, to the implementation of quantum hardware is remarkable, emerging from specialties in atomic physics (1), in quantum optics (2), in nuclear (3) and electron (4) magnetic resonance spectroscopy, in superconducting device physics (5), in electron physics (6), and in mesoscopic and quantum dot research (7). This amazing variety of approaches has arisen because, as we will see, the principles of quantum computing are posed using the most fundamental ideas of quantum mechanics, ones whose embodiment can be contemplated in virtually every branch of quantum physics. The interdisciplinary spirit which has been fostered as a result is one of the most pleasant and remarkable features of this field. The excitement and freshness that has been produced bodes well for the prospect for discovery, invention, and innovation in this endeavor. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Experimental realizations <s> Realizing robust quantum information transfer between long-lived qubit registers is a key challenge for quantum information science and technology. Here we demonstrate unconditional teleportation of arbitrary quantum states between diamond spin qubits separated by 3 meters. We prepare the teleporter through photon-mediated heralded entanglement between two distant electron spins and subsequently encode the source qubit in a single nuclear spin. By realizing a fully deterministic Bell-state measurement combined with real-time feed-forward, quantum teleportation is achieved upon each attempt with an average state fidelity exceeding the classical limit. These results establish diamond spin qubits as a prime candidate for the realization of quantum networks for quantum communication and network-based quantum computing. <s> BIB002
The implementation of quantum games on hardware can be viewed as a small quantum computation; in that sense, the requirements for a good platform on which to perform a quantum game are the same as those for a quantum computer BIB001 . The quantum computer need not be a universal computer, but it requires both single- and two-qubit gates. Quantum computers with the capabilities required for quantum games are just beginning to come online (see for example www.research.ibm.com/ibm-q), and full quantum networks are in their infancy BIB002 ; thus, many of the experimental demonstrations to date have been performed on hardware that is not ideal from the point of view of the criteria above. Notably, though, unlike many interesting quantum computing algorithms, quantum games are typically performed with very few qubits, making them an attractive application for early demonstrations on emerging quantum hardware. The potential applications and uses of quantum games suggest that certain characteristics are desirable for their implementation, beyond just those desirable for quantum computation. First, by definition, quantum games contain several independent agents. For realistic applications this likely requires that the agents be remotely located. This requires not only a small quantum computation, but also some form of network. The network would need to be able to transmit quantum resources, for example, to produce entangled pairs that are shared between two remote locations, which is typically done with photons. Second, a quantum game needs input from some independent agent, either a human or a computer. This may require some wait time for the interaction with a classical system to occur, perhaps while the agent makes their decision. This implies that another desirable characteristic for quantum hardware is a quantum memory, that is, the ability to store the quantum information for some variable amount of time during the computation. Typically, the capabilities of a quantum information processor are quantified by the ratio of the coherence time to the time it takes to perform gates; in quantum games, by contrast, the absolute coherence time of the qubits may be a necessary metric in itself. One may wonder what it even means to perform an experimental demonstration of a quantum game. No experiment has ever had actual independent agents (i.e. humans or computers) play a game on quantum hardware in real time; thus the implementations of games to date have in some sense been partial implementations. The games are typically implemented by using quantum hardware to run the circuit, or circuit equivalent, of the game with the strategy choices of the Nash equilibria, where the strategy choices that form the Nash equilibria are determined by theoretical analysis of the quantum game in question. The output states of the experiment are then weighted by the payoff matrix, and the payoff at the Nash equilibria is reported and compared to that of the classical case. One can view this as a type of bench-marking experiment in which the hardware is bench-marked against a game-theoretic application rather than with random operations. The games that have been implemented are always set up to have a larger payoff in the quantum equilibrium than in the classical case, presumably because these are the interesting games to quantum physicists. Because of this, the effect of noise or decoherence is almost always to lower the payoff of the players.
It is generally seen that the payoffs at equilibrium of the quantum games still outperform the classical game with some amount of decoherence. It should be noted that this section is concerned with evaluating the hardware for quantum games. As such the specific game theoretical results of the games that were performed will not be discussed, only the relative merits of each physical implementation.
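The bench-marking procedure described above amounts to weighting the measured output populations by the payoff matrix. The following sketch (with hypothetical populations, not data from any of the cited experiments) illustrates the calculation for the standard Prisoner's Dilemma payoffs.

```python
# Illustrative payoff extraction from measured two-qubit output populations.
# The populations below are invented to mimic a noisy run whose ideal outcome
# is CC with probability 1.
import numpy as np

outcomes = ["CC", "CD", "DC", "DD"]
payoff = {"CC": (3, 3), "CD": (0, 5), "DC": (5, 0), "DD": (1, 1)}
populations = np.array([0.91, 0.03, 0.03, 0.03])   # hypothetical measured data

exp_payoff_A = sum(p * payoff[o][0] for p, o in zip(populations, outcomes))
exp_payoff_B = sum(p * payoff[o][1] for p, o in zip(populations, outcomes))
print(f"expected payoffs: A = {exp_payoff_A:.2f}, B = {exp_payoff_B:.2f}")
# Noise pulls the experimentally determined payoff below the ideal value of 3,
# which is how decoherence typically shows up in these bench-marking runs.
```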
Quantum games: a review of the history, current state, and interpretation <s> Nuclear magnetic resonance <s> We generalize the quantum prisoner's dilemma to the case where the players share a nonmaximally entangled states. We show that the game exhibits an intriguing structure as a function of the amount of entanglement with two thresholds which separate a classical region, an intermediate region, and a fully quantum region. Furthermore this quantum game is experimentally realized on our nuclear magnetic resonance quantum computer. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Nuclear magnetic resonance <s> In a three player quantum `Dilemma' game each player takes independent decisions to maximize his/her individual gain. The optimal strategy in the quantum version of this game has a higher payoff compared to its classical counterpart. However, this advantage is lost if the initial qubits provided to the players are from a noisy source. We have experimentally implemented the three player quantum version of the `Dilemma' game as described by Johnson, [N.F. Johnson, Phys. Rev. A 63 (2001) 020302(R)] using nuclear magnetic resonance quantum information processor and have experimentally verified that the payoff of the quantum game for various levels of corruption matches the theoretical payoff. (c) 2007 Elsevier Inc. All rights reserved. <s> BIB002
The first experimental realization of a quantum game was performed on a two-qubit NMR quantum computer BIB001 . The computations are performed on the spin-spin interactions of hydrogen atoms embedded in a deuterated cytosine molecule, whose spins interact with a strength of 7.17 Hz. They examined the payoff of the quantum game at the Nash equilibrium as a function of the amount of entanglement. The experimentally determined payoffs showed good agreement with theory, with an error of 8 percent. In total, they computed 19 data points, each of which took 300 ms to compute, compared to the coherence time of the NMR qubits of ∼ 3 seconds. An NMR system has also demonstrated a three-qubit game BIB002 . This game is performed on the hydrogen, fluorine, and carbon atoms in a $^{13}$CHFBr$_2$ molecule. The single-qubit resonances are in the hundreds of MHz, while the couplings between spins are tens to hundreds of Hz. For their theoretical analysis they used three possible strategy choices, resulting in 27 possible strategy-choice sets, which can be classified into 10 classes. They show the output populations of all 8 possible states of the three-qubit system, and thus the expected payoff, for each class of strategy-choice sets. They ran the game 11 times, varying the amount of noise on the initial state, thus directly showing the decrease of the payoff as the noise increases. Their experimental populations had a discrepancy of 15 to 20 percent with the theory. Quantum computations on NMR-based systems are performed on ensembles of qubits and can have relatively large signal sizes. However, there do not appear to be promising avenues for scaling to larger numbers of qubits, or for interfacing with a quantum communication scheme. Also, NMR computers are not capable of initializing in pure quantum states. Methods have thus been developed to initialize the states to approximate states [106] , but there is uncertainty as to whether such mixed states actually exhibit entanglement or whether they are separable [107] .
Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> We report the first demonstration of a quantum game on an all- optical one-way quantum computer. Following a recent theoretical proposal we implement a quantum version of Prisoner's Dilemma, where the quantum circuit is realized by a four-qubit box-cluster configuration and the player's local strategies by measurements performed on the physical qubits of the cluster. This demonstration underlines the strength and versatility of the one-way model and we expect that this will trigger further interest in designing quantum protocols and algorithms to be tested in state-of-the-art cluster resources. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> Quantum gambling --- a secure remote two-party protocol which has no classical counterpart --- is demonstrated through optical approach. A photon is prepared by Alice in a superposition state of two potential paths. Then one path leads to Bob and is split into two parts. The security is confirmed by quantum interference between Alice's path and one part of Bob's path. It is shown that a practical quantum gambling machine can be feasible by this way. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> Game theory is central to the understanding of competitive interactions arising in many fields, from the social and physical sciences to economics. Recently, as the definition of information is generalized to include entangled quantum systems, quantum game theory has emerged as a framework for understanding the competitive flow of quantum information. Up till now only two-player quantum games have been demonstrated. Here we report the first experiment that implements a four-player quantum Minority game over tunable four-partite entangled states encoded in the polarization of single photons. Experimental application of appropriate quantum player strategies give equilibrium payoff values well above those achievable in the classical game. These results are in excellent quantitative agreement with our theoretical analysis of the symmetric Pareto optimal strategies. Our result demonstrate for the first time how non-trivial equilibria can arise in a competitive situation involving quantum agents and pave the way for a range of quantum transaction applications. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> We implement a multi-player quantum public-goods game using only bipartite entanglement and two-qubit logic. Within measurement error, the expectation per player follows predicted values as the number of players is increased. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> We propose and experimentally demonstrate a zero-sum game that is in a fair Nash equilibrium for classical players, but has the property that a quantum player can always win using an appropriate strategy. The gain of the quantum player is measured experimentally for different quantum strategies and input states. It is found that the quantum gain is maximized by a maximally entangled state, but does not decrease to zero when entanglement disappears. Instead, it links with another kind of quantum correlation described by discord for the qubit case and the connection is demonstrated both theoretically and experimentally. <s> BIB005
Implementing quantum games with optical circuits has several appealing characteristics. Such implementations do not suffer from the uncertainty in entanglement of NMR computing and can potentially have very high fidelities. Gates are implemented with standard optical elements such as beam splitters and waveplates. Also, since the games are performed on photons, they can naturally be adapted to work with remote agents. One possible implementation is to use a single photon and utilize multiple degrees of freedom; typically the polarization state of the photon is entangled with its spatial mode. In BIB002 , a heavily attenuated He-Ne laser was used as a single-photon source. The single photon is input into a Mach-Zehnder interferometer, where the two paths through the interferometer form the first qubit of the pair and the polarization state of the photon forms the second. They are entangled by splitting the photon into the two paths depending on its polarization. Gates are performed by single-photon polarization rotations, i.e. adding waveplates to the photon's path. They report an error of 1 to 2 percent in the experimentally determined payoff relative to the theoretical one. Rather than using the path through an interferometer as the spatial degree of freedom, one can also use the transverse modes of a beam . In these implementations, beams of light are incident on holographic masks to produce higher-order transverse modes in a polarization-dependent way. These implementations have typically been done at higher light levels, i.e. ∼ mW, and the beams are imaged on a camera to determine the steady-state value of many photons being run in parallel. Another possible implementation using linear optics utilizes cluster states. This has been done for a two-player Prisoner's Dilemma game BIB001 . The computation is performed with a four-qubit cluster state, and gates are performed by measurements of photons. Spontaneous parametric down-conversion in a non-linear BBO crystal produces entangled photon pairs which are then interfered with beam splitters and waveplates. The creation of the four-qubit cluster state is post-selected on coincidence clicks, so that runs are only counted if four single-photon detectors registered a photon. The experimentalists can also characterize their output with full quantum tomography, and they report a fidelity of sixty-two percent. Rather than producing cluster states, one can take the entangled photon pairs output from a non-linear crystal and perform gates in much the same way as in the single-photon case BIB004 BIB003 . These approaches have reported fidelities around seventy to eighty percent. A four-player quantum game has been implemented with a spontaneous parametric down-conversion process that produces four photons, in two entangled pairs BIB004 . Again the information is encoded in the polarization and spatial mode of the photons. This method, with two entangled-pair inputs, can naturally be set up to input a continuous set of initial states. The initial entangled state in this implementation is a superposition of a GHZ state and products of Bell states. Again, the fidelities are reported to be near 75 percent, which results in errors in the payoff at the equilibrium of about 10 percent. Another example of a linear optical implementation sheds light on other types of correlations that can occur in quantum mechanics beyond entanglement, i.e. discord BIB005 .
To create states with discord, measurements are taken with different Bell pairs, and the data are then randomly partitioned into different sets, which produces a statistical mixture of entangled states. Such a mixture is known to have no entanglement as measured by the concurrence, but it retains the quantum correlation of discord. The entangled pairs were produced by spontaneous parametric down-conversion. This experiment reported a fidelity of 95 percent. Notably, even when there is no entanglement, the game can still exhibit a quantum advantage. The linear optical implementations are promising because of their ability to perform games with remotely located agents, and they are capable of high-fidelity quantum information processing. However, they have drawbacks as well. In order to run a different circuit, one must physically rearrange the linear optical elements such as waveplates and beamsplitters, which could be done with liquid-crystal displays or other photonic devices, though this could be difficult to scale up to implementations of more complicated games. In addition, the production of larger numbers of entangled photon pairs is experimentally challenging. These factors make it difficult to scale up linear optical implementations to more complicated games. Finally, purely photonic implementations have no memory, and thus may not be conducive to games that require wait time for a decision to be made, or some form of feed-forward on measurements.
Quantum games: a review of the history, current state, and interpretation <s> Other proposals <s> We propose a general, scalable framework for implementing two-choices-multiplayer Quantum Games in ion traps. In particular, we discuss two famous examples: the Quantum Prisoners' Dilemma and the Quantum Minority Game. An analysis of decoherence due to intensity fluctuations in the applied laser fields is also provided. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Other proposals <s> In this paper, we propose a scheme for implementing quantum game (QG) in cavity quantum electrodynamics(QED). In the scheme, the cavity is only virtually excited and thus the proposal is insensitive to the cavity fields states and cavity decay. So our proposal can be experimentally realized in the range of current cavity QED techniques. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Other proposals <s> We demonstrate a Bayesian quantum game on an ion trap quantum computer with five qubits. The players share an entangled pair of qubits and perform rotations on their qubit as the strategy choice. Two five-qubit circuits are sufficient to run all 16 possible strategy choice sets in a game with four possible strategies. The data are then parsed into player types randomly in order to combine them classically into a Bayesian framework. We exhaustively compute the possible strategies of the game so that the experimental data can be used to solve for the Nash equilibria of the game directly. Then we compare the payoff at the Nash equilibria and location of phase-change-like transitions obtained from the experimental data to the theory, and study how it changes as a function of the amount of entanglement. <s> BIB003
There are many other potential platforms for quantum information processing, and it is unclear which will become dominant in quantum computation. Trapped-ion and cavity QED systems stand out as having all of the characteristics we desire in a quantum information processor specifically designed for implementing quantum games: they are potentially powerful quantum computers, they can have long memory times, and they can be coupled to photonic modes for long-distance communication. There have been proposals for implementations of quantum games on such systems BIB001 BIB002 . Trapped-ion qubits can perform quantum computations with as many as five qubits with very high fidelity . In addition, trapped-ion systems can be coupled to single photons for entangling remote ions . Cavity QED systems have a single atom, or an ensemble of atoms, strongly coupled to a photonic mode. This allows the quantum information of the atomic system, which can be used for information processing, to be mapped to the photonic system for communication purposes with very high fidelity. Recently BIB003 , a Bayesian quantum game was demonstrated on a five-qubit quantum computer, where the payoff and the phase-change-like behavior of the game were analyzed as a function of the amount of entanglement.
Quantum games: a review of the history, current state, and interpretation <s> Human implementations <s> Game theory suggests quantum information processing technologies could provide useful new economic mechanisms. For example, using shared entangled quantum states can alter incentives so as to reduce the free-rider problem inherent in economic contexts such as public goods provisioning. However, game theory assumes players understand fully the consequences of manipulating quantum states and are rational. Its predictions do not always describe human behavior accurately. To evaluate the potential practicality of quantum economic mechanisms, we experimentally tested how people play the quantum version of the prisoner's dilemma game in a laboratory setting using a simulated version of the underlying quantum physics. Even without formal training in quantum mechanics, people nearly achieve the payoffs theory predicts, but do not use mixed-strategy Nash equilibria predicted by game theory. Moreover, this correspondence with game theory for the quantum game is closer than that of the classical game. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Human implementations <s> We describe human-subject laboratory experiments on probabilistic auctions based on previously proposed auction protocols involving the simulated manipulation and communication of quantum states. These auctions are probabilistic in determining which bidder wins, or having no winner, rather than always having the highest bidder win. Comparing two quantum protocols in the context of first-price sealed bid auctions, we find the one predicted to be superior by game theory also performs better experimentally. We also compare with a conventional first price auction, which gives higher performance. Thus to provide benefits, the quantum protocol requires more complex economic scenarios such as maintaining privacy of bids over a series of related auctions or involving allocative externalities. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Human implementations <s> We demonstrate a Bayesian quantum game on an ion trap quantum computer with five qubits. The players share an entangled pair of qubits and perform rotations on their qubit as the strategy choice. Two five-qubit circuits are sufficient to run all 16 possible strategy choice sets in a game with four possible strategies. The data are then parsed into player types randomly in order to combine them classically into a Bayesian framework. We exhaustively compute the possible strategies of the game so that the experimental data can be used to solve for the Nash equilibria of the game directly. Then we compare the payoff at the Nash equilibria and location of phase-change-like transitions obtained from the experimental data to the theory, and study how it changes as a function of the amount of entanglement. <s> BIB003
In addition to the bench-marking types of demonstrations described above, there is a separate interpretation of what it means to implement a quantum game: having actual agents play a game with quantum rules. For real-world applications of quantum games, it is interesting to speculate on whether or not it will be possible for players to effectively learn the correct strategies in a quantum game if they have no training in quantum theory. In fact, one of the biggest problems of classical game theory is that players do not act entirely rationally, and thus the equilibria of a game are only a guide to what real players will do. This problem may be exacerbated in quantum games by the fact that the players will likely have little or no knowledge of quantum mechanics or entanglement. There have been a few experiments to research this question BIB001 BIB002 . Due to the limited availability of quantum hardware, and in order to ease implementation, in these experiments the quantum circuits are simulated on a classical computer. Though a simulated quantum game will give the same outputs as a quantum computer, there are none of the benefits afforded by the absolute physical security of quantum communication protocols, which are likely a very desirable quality of quantum games. In addition, if quantum games become sufficiently complex, it may not be possible to simulate them efficiently on a classical computer, as the number of states in a computation grows as $2^n$, where $n$ is the number of qubits, as is well known in quantum computing. In BIB003 , the players were randomly paired and played the quantum Prisoner's Dilemma game, and the results were compared for classical versus quantum rules. They also performed one experiment where the players played repeatedly with the same partner. In the classical interpretation of the Prisoner's Dilemma, one can interpret the people who play the Pareto-optimal strategy choice, even though it is not a Nash equilibrium, as altruistic. In any real instantiation of the game, there will be some players who play the altruistic option, even though, strictly, it lowers their individual payoff. As such, the prediction of the Nash equilibria from game theory can be interpreted as a guide to what players may do, especially in repeated games. When players played the game with quantum rules, they tended to play the altruistic option more often than in the classical case, as is predicted by the Nash equilibria that occur in the quantum version of the game. This at least shows that players who had no formal training in quantum mechanics, though they had some instruction in the rules of the game, were capable of playing rationally, that is, of maximizing their payoff. Interestingly, the game-theoretic prediction was closer to the observed behavior of the players in the quantum game than the classical prediction was to the behavior of players in the classical game. The players of the classical game played less rationally than those of the quantum game, and there was more variation between players in the classical version. These results may suggest that the players have more preconceptions about the strategy choices in the classical version than in the quantum version, where the interpretation is more complicated. In the classical version, they can choose to cooperate or defect independently, while in the quantum version, ultimately, whether or not they cooperate also depends on the strategy choice of their opponent.
Preconceptions about the strategy choices in the classical game may provide influences beyond the desire to simply maximize one's own payoff and lead to larger deviations from the game-theoretic prediction. A full implementation of a quantum game with real players on quantum hardware has not yet been performed. Yet demonstrations of quantum game circuits on quantum hardware are compelling because they provide interesting results while using only small numbers of qubits. As quantum networks and quantum computers become more developed, we expect that quantum games will play a role in their adoption on a larger scale, either as applications or as a diagnostic tool for the quantum hardware.
Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> A binary game is introduced and analysed. N players have to choose one of the two sides independently and those on the minority side win. Players use a finite set of ad hoc strategies to make their decision, based on the past record. The analysing power is limited and can adapt when necessary. Interesting cooperation and competition patterns of the society seem to arise and to be responsive to the payoff function. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> Recently the concept of quantum information has been introduced into game theory. Here we present the first study of quantum games with more than two players. We discover that such games can possess an alternative form of equilibrium strategy, one which has no analog either in traditional games or even in two-player quantum games. In these ``coherent'' equilibria, entanglement shared among multiple players enables different kinds of cooperative behavior: indeed it can act as a contract, in the sense that it prevents players from successfully betraying one another. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> We consider two aspects of quantum game theory: the extent to which the quantum solution solves the original classical game, and to what extent the new solution can be obtained in a classical model. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> A quantum version of the Minority game for an arbitrary number of agents is studied. When the number of agents is odd, quantizing the game produces no advantage to the players, however, for an even number of agents new Nash equilibria appear that have no classical analogue. The new Nash equilibria provide far preferable expected payoffs to the players compared to the equivalent classical game. The effect on the Nash equilibrium payoff of reducing the degree of entanglement, or of introducing decoherence into the model, is indicated. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> We consider a directed network in which every edge possesses a latency function that specifies the time needed to traverse the edge given its congestion. Selfish, noncooperative agents constitute the network traffic and wish to travel from a source vertex s to a destination t as quickly as possible. Since the route chosen by one network user affects the congestion experienced by others, we model the problem as a noncooperative game. Assuming that each agent controls only a negligible portion of the overall traffic, Nash equilibria in this noncooperative game correspond to s-t flows in which all flow paths have equal latency.A natural measure for the performance of a network used by selfish agents is the common latency experienced by users in a Nash equilibrium. Braess's Paradox is the counterintuitive but well-known fact that removing edges from a network can improve its performance. Braess's Paradox motivates the following network design problem: given a network, which edges should be removed to obtain the best flow at Nash equilibrium? 
Equivalently, given a network of edges that can be built, which subnetwork will exhibit the best performance when used selfishly?We give optimal inapproximability results and approximation algorithms for this network design problem. For example, we prove that there is no approximation algorithm for this problem with approximation ratio less than n/2, where n is the number of network vertices, unless P = NP. We further show that this hardness result is the best possible, by exhibiting an (n/2)-approximation algorithm. We also prove tight inapproximability results when additional structure, such as linearity, is imposed on the network latency functions.Moreover, we prove that an optimal approximation algorithm for these problems is the trivial algorithm: given a network of candidate edges, build the entire network. As a consequence, we show that Braess's Paradox--even in its worst-possible manifestations--is impossible to detect efficiently.En route to these results, we give a fundamental generalization of Braess's Paradox: the improvement in performance that can be effected by removing edges can be arbitrarily large in large networks. Even though Braess's Paradox has enjoyed 35 years as a textbook example, our result is the first to extend its severity beyond that in Braess's original four-node network. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> The digital revolution of the information age and in particular the sweeping changes of scientific communication brought about by computing and novel communication technology, potentiate global, high grade scientific information for free. The arXiv for example is the leading scientific communication platform, mainly for mathematics and physics, where everyone in the world has free access on. While in some scientific disciplines the open access way is successfully realized, other disciplines (e.g. humanities and social sciences) dwell on the traditional path, even though many scientists belonging to these communities approve the open access principle. In this paper we try to explain these different publication patterns by using a game theoretical approach. Based on the assumption, that the main goal of scientists is the maximization of their reputation, we model different possible game settings, namely a zero sum game, the prisoners’ dilemma case and a version of the stag hunt game, that show the dilemma of scientists belonging to “non-open access communities”. From an individual perspective, they have no incentive to deviate from the Nash Equilibrium of traditional publishing. By extending the model using the quantum game theory approach it can be shown, that if the strength of entanglement exceeds a certain value, the scientists will overcome the dilemma and terminate to publish only traditionally in all three settings. <s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> Recent spectrum-sharing research has produced a strategy to address spectrum scarcity problems. This novel idea, named cognitive radio, considers that secondary users can opportunistically exploit spectrum holes left temporarily unused by primary users. This presents a competitive scenario among cognitive users, making it suitable for game theory treatment. 
In this work, we show that the spectrum-sharing benefits of cognitive radio can be increased by designing a medium access control based on quantum game theory. In this context, we propose a model to manage spectrum fairly and effectively, based on a multiple-users multiple-choice quantum minority game. By taking advantage of quantum entanglement and quantum interference, it is possible to reduce the probability of collision problems commonly associated with classic algorithms. Collision avoidance is an essential property for classic and quantum communications systems. In our model, two different scenarios are considered, to meet the requirements of different user strategies. The first considers sensor networks where the rational use of energy is a cornerstone; the second focuses on installations where the quality of service of the entire network is a priority. <s> BIB007 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> We discuss the connection between a class of distributed quantum games, with remotely located players, to the counter intuitive Braess' paradox of traffic flow that is an important design consideration in generic networks where the addition of a zero cost edge decreases the efficiency of the network. A quantization scheme applicable to non-atomic routing games is applied to the canonical example of the network used in Braess' Paradox. The quantum players are modeled by simulating repeated game play. The players are allowed to sample their local payoff function and update their strategies based on a selfish routing condition in order to minimize their own cost, leading to the Wardrop equilibrium flow. The equilibrium flow in the classical network has a higher cost than the optimal flow. If the players have access to quantum resources, we find that the cost at equilibrium can be reduced to the optimal cost, resolving the paradox. <s> BIB008
Applications of conventional game theory have played an important role in many modern strategic decision-making processes including diplomacy, economics, national security, and business. These applications typically reduce a domain-specific problem into a game-theoretic setting such that a domain-specific solution can be developed by studying the game-theoretic solution. A well-known application example is the game of "Chicken" applied to studies of international politics during the Cold War. In Chicken, two players independently select from strategies that either engage in conflict or avoid it. Schelling has cited the study of this game as influential in understanding the Cuban Missile Crisis. More broadly, studying these games enables an understanding of how rational and irrational players select strategies, an insight that has played an important role in nuclear brinkmanship. The method of recognizing formal game-theoretic solutions within domain-specific applications may also extend to quantum game-theoretic concepts. This requires a model for the game that accounts for the inclusion of unique quantum resources, including shared entangled states. For example, Zabaleta et al. BIB007 have investigated a quantum game model for the problem of spectrum sharing in wireless communication environments in which transmitters compete for access. Their application is cast as a version of the minority game put forward by Challet and Zhang BIB001 and first studied in quantized form by Benjamin and Hayden BIB002 and also by Flitney and Hollenberg BIB004. For Zabaleta et al., a base station distributes an n-partite entangled quantum state among n individual transmitters, i.e., players, who then apply local strategies to each part of the quantum state before measuring. Based on the observed, correlated outcomes, the players select whether to transmit (1) or wait (0). Zabaleta et al. showed that using the quantum resource in this game reduces the probability of transmission collision by a factor of n while retaining fairness in access management (a small numerical illustration of this collision-probability comparison is sketched at the end of this section). In a related application, Solmeyer et al. investigated a quantum routing game for sending transmissions through a communication network BIB008. The conventional routing game has been extensively studied as a representation of flow strategies in real-world networks; for example, Braess' paradox shows that adding more routes does not always improve flow BIB005. Solmeyer et al. developed a quantized version of the routing game modified to include a distributed quantum state between players representing the nodes within the network. Each player is permitted to apply a local quantum strategy to their part of the state in the form of a unitary rotation before measuring. Solmeyer et al. simulated the total cost of network flow in terms of overall latency and found that the minimal cost is realized when using a partially entangled state between nodes. Notably, their results demonstrated Braess' paradox, but only in the cases of maximal and vanishing entanglement. If, and when, quantum networks become a reality, with multiple independent quantum agents operating distributed applications, quantum game theory may not only provide possible applications, but may also be necessary for their analysis. In the field of decision science, Hanauske et al. have applied quantum game theory to the selection of open access publishing decisions in scientific literature BIB006.
Motivated by the different publication patterns observed across scientific disciplines, they performed a comparative analysis of open-access choices using three different games: a zero-sum game, the Prisoner's Dilemma, and the stag hunt. The formal solutions from each of these classical games provide Nash equilibria that either discourage open access publication or include this choice as a minority in a mixed strategy. By contrast, Hanauske et al. found that quantized versions of these games that include distributed quantum resources yield Nash equilibria that favor open access publication. In this case, quantum game theory may provide a more general probability theory to form a descriptive analysis of such socially constructed environments. In addition to decision-making applications, game theory may also serve as a model for understanding competitive processes such as those found in ecological or social systems. It is an outstanding question to assess whether quantum game theory can provide new methods for these studies. In addition to the study of classical processes, such as evolution and natural selection, quantum game theory also shows promise for the study of strictly quantum mechanical processes. In particular, several non-cooperative processes underlie existing approaches to the development of quantum technology, including quantum control, quantum error correction, and fault-tolerant quantum operations. Each of these application areas requires a solution to the competition between the user and the environment, which may be considered to be a 'player' in the game-theoretic setting. The solutions to these applications require models of the underlying quantum mechanical dynamics and interactions, which are naturally suited to a quantum game-theoretic treatment. A fundamental concern for any practical application of game theory is the efficiency of the implementation. A particular concern for a quantum game solution is the relative cost of quantum resources, including entangled states and measurement operations. Currently, high-fidelity, addressable qubits are expensive to fabricate, store, operate, and measure, though these quantum resources are likely to decrease in relative cost over time. For some users, however, the expense of not finding the best solution will always outweigh the associated implementation cost, and the cost argument need not apply for those applications where quantum game theory provides a truly unique advantage. van Enk and Pike have remarked that some quantum games can be reduced to a similar classical game, often by incorporating a classical source of advice BIB003. The effect of this advice is to introduce correlations into the player strategies in a way that is similar to how a distributed quantum state provides a means of generating correlated strategies. For example, Brunner and Linden have shown how non-local outcomes in Bell tests can be modeled by conventional Bayesian game theory. This raises the question as to whether it is ever necessary to formulate a problem in a quantum game-theoretic setting. As demonstrated above, there are many situations for which distributed quantum resources offer a more natural application, e.g., quantum networking, and such formulations are at least realistic if not necessary. The current availability of prototype general-purpose quantum processors provides opportunities for the continued study of quantum game theory.
This will include experimental studies of how users interact with quantum games as well as the translation of quantum strategies into real-world settings. However, quantum networks are likely to be needed for field testing quantum game applications, as most require the distribution of a quantum resource between multiple players. Alongside moderate-duration quantum memories and high-fidelity entangling operations, these quantum networks must also provide players with synchronized classical control frameworks and infrastructure. These prototype quantum gaming networks may then evolve toward more robust routing methods.
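To make the collision-probability comparison from the spectrum-sharing discussion above concrete, the following sketch computes the collision probability of a simple uncoordinated slotted-access baseline, in which each of n transmitters transmits independently with probability 1/n, and then applies the factor-of-n reduction reported for the quantum minority-game protocol. The access model and function names are illustrative assumptions of ours, not the actual protocol of Zabaleta et al. BIB007; the "quantum" value simply applies the reported reduction to the baseline.

```python
# Illustrative baseline only; not the protocol of Zabaleta et al. (see lead-in above).

def classical_collision_probability(n: int, p: float) -> float:
    """P(two or more of n transmitters transmit in a slot), each transmitting i.i.d. with probability p."""
    p_none = (1.0 - p) ** n
    p_one = n * p * (1.0 - p) ** (n - 1)
    return 1.0 - p_none - p_one

if __name__ == "__main__":
    for n in (2, 4, 8, 16):
        p_classical = classical_collision_probability(n, 1.0 / n)
        # Reported advantage, taken from the survey text rather than derived here:
        p_quantum_reported = p_classical / n
        print(f"n={n:2d}  classical collision={p_classical:.3f}  reported quantum={p_quantum_reported:.4f}")
```

Even this crude comparison illustrates the scale of the reported improvement as the number of competing transmitters grows.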
Kernels for Vector-Valued Functions: A Review <s> Introduction <s> This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> In this paper, we address the problem of statistical learning for multi-topic text categorization (MTC), whose goal is to choose all relevant topics (a label) from a given set of topics. The proposed algorithm, Maximal Margin Labeling (MML), treats all possible labels as independent classes and learns a multi-class classifier on the induced multi-class categorization problem. To cope with the data sparseness caused by the huge number of possible labels, MML combines some prior knowledge about label prototypes and a maximal margin criterion in a novel way. Experiments with multi-topic Web pages show that MML outperforms existing learning algorithms including Support Vector Machines. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. 
Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> In this paper, we describe a novel, computationally efficient algorithm that facilitates the autonomous acquisition of readings from sensor networks (deciding when and which sensor to acquire readings from at any time), and which can, with minimal domain knowledge, perform a range of information processing tasks including modelling the accuracy of the sensor readings, predicting the value of missing sensor readings, and predicting how the monitored environmental variables will evolve into the future. Our motivating scenario is the need to provide situational awareness support to first responders at the scene of a large scale incident, and to this end, we describe a novel iterative formulation of a multi-output Gaussian process that can build and exploit a probabilistic model of the environmental variables being measured (including the correlations and delays that exist between them). We validate our approach using data collected from a network of weather sensors located on the south coast of England. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces. <s> BIB006
Many modern applications of machine learning require solving several decision-making or prediction problems, and exploiting dependencies between the problems is often the key to obtaining better results and coping with a lack of data (to solve one problem we can borrow strength from a distinct but related problem). In sensor networks, for example, missing signals from certain sensors may be predicted by exploiting their correlation with observed signals acquired from other sensors BIB005. In geostatistics, predicting the concentration of heavy pollutant metals, which are expensive to measure, can be done using inexpensive and oversampled variables as a proxy. In computer graphics, a common theme is the animation and simulation of physically plausible humanoid motion. Given a set of poses that delineate a particular movement (for example, walking), we are faced with the task of completing a sequence by filling in the missing frames with natural-looking poses. Human movement exhibits a high degree of correlation. Consider, for example, the way we walk. When moving the right leg forward, we unconsciously prepare the left leg, which is currently touching the ground, to start moving as soon as the right leg reaches the floor. At the same time, our hands move synchronously with our legs. We can exploit these implicit correlations for predicting new poses and for generating new natural-looking walking sequences BIB006. In text categorization, one document can be assigned to multiple topics or have multiple labels BIB002. In all the examples above, the simplest approach ignores the potential correlation among the different output components of the problem and employs models that make predictions individually for each output. However, these examples suggest a different approach, through a joint prediction exploiting the interaction between the different components to improve on individual predictions. Within the machine learning community this type of modeling is often broadly referred to as multitask learning. Again the key idea is that information shared between different tasks can lead to improved performance in comparison to learning the same tasks individually. These ideas are related to transfer learning BIB001 BIB003 BIB004, a term which refers to systems that learn by transferring knowledge between different domains, for example: "what can we learn about running through seeing walking?" More formally, the classical supervised learning problem requires estimating the output for any given input $x_*$; an estimator $f_*(x_*)$ is built on the basis of a training set consisting of $N$ input-output pairs $S = (X, Y) = \{(x_1, y_1), \ldots, (x_N, y_N)\}$. The input space $\mathcal{X}$ is usually a space of vectors, while the output space is a space of scalars. In multiple output learning (MOL) the output space is a space of vectors; the estimator is now a vector valued function $f$. Indeed, this situation can also be described as the problem of solving $D$ distinct classical supervised problems, where each problem is described by one of the components $f_1, \ldots, f_D$ of $f$. As mentioned before, the key idea is to work under the assumption that the problems are in some way related. The idea is then to exploit the relation among the problems to improve upon solving each problem separately. The goal of this survey is twofold. First, we aim at discussing recent results in multi-output/multi-task learning based on kernel methods and Gaussian processes, providing an account of the state of the art in the field.
Second, we analyze systematically the connections between Bayesian and regularization (frequentist) approaches. Indeed, related techniques have been proposed from different perspectives and drawing clearer connections can boost advances in the field, while fostering collaborations between different communities. The plan of the paper follows. In chapter 2 we give a brief review of the main ideas underlying kernel methods for scalar learning, introducing the concepts of regularization in reproducing kernel Hilbert spaces and Gaussian processes. In chapter 3 we describe how similar concepts extend to the context of vector valued functions and discuss different settings that can be considered. In chapters 4 and 5 we discuss approaches to constructing multiple output kernels, drawing connections between the Bayesian and regularization frameworks. The parameter estimation problem and the computational complexity problem are both described in chapter 6. In chapter 7 we discuss some potential applications that can be seen as multi-output learning. Finally we conclude in chapter 8 with some remarks and discussion.
Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> Abstract : The present paper may be considered as a sequel to our previous paper in the Proceedings of the Cambridge Philosophical Society, Theorie generale de noyaux reproduisants-Premiere partie (vol. 39 (1944)) which was written in 1942-1943. In the introduction to this paper we outlined the plan of papers which were to follow. In the meantime, however, the general theory has been developed in many directions, and our original plans have had to be changed. Due to wartime conditions we were not able, at the time of writing the first paper, to take into account all the earlier investigations which, although sometimes of quite a different character, were, nevertheless, related to our subject. Our investigation is concerned with kernels of a special type which have been used under different names and in different ways in many domains of mathematical research. We shall therefore begin our present paper with a short historical introduction in which we shall attempt to indicate the different manners in which these kernels have been used by various investigators, and to clarify the terminology. We shall also discuss the more important trends of the application of these kernels without attempting, however, a complete bibliography of the subject matter. (KAR) P. 2 <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> Abstract : The report presents classes of prior distributions for which the Bayes' estimate of an unknown function given certain observations is a spline function. (Author) <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> (1) A main theme of this report is the relationship of approximation to learning and the primary role of sampling (inductive inference). We try to emphasize relations of the theory of learning to the mainstream of mathematics. In particular, there are large roles for probability theory, for algorithms such as least squares, and for tools and ideas from linear algebra and linear analysis. An advantage of doing this is that communication is facilitated and the power of core mathematics is more easily brought to bear. We illustrate what we mean by learning theory by giving some instances. (a) The understanding of language acquisition by children or the emergence of languages in early human cultures. (b) In Manufacturing Engineering, the design of a new wave of machines is anticipated which uses sensors to sample properties of objects before, during, and after treatment. The information gathered from these samples is to be analyzed by the machine to decide how to better deal with new input objects (see [43]). (c) Pattern recognition of objects ranging from handwritten letters of the alphabet to pictures of animals, to the human voice. Understanding the laws of learning plays a large role in disciplines such as (Cognitive) Psychology, Animal Behavior, Economic Decision Making, all branches of Engineering, Computer Science, and especially the study of human thought processes (how the brain works). Mathematics has already played a big role towards the goal of giving a universal foundation of studies in these disciplines. 
We mention as examples the theory of Neural Networks going back to McCulloch and Pitts [25] and Minsky and Papert [27], the PAC learning of Valiant [40], Statistical Learning Theory as developed by Vapnik [42], and the use of reproducing kernels as in [17] among many other mathematical developments. We are heavily indebted to these developments. Recent discussions with a number of mathematicians have also been helpful. In <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> Wahba's classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> Networks can be considered as approximation schemes. Multilayer networks of the perceptron type can approximate arbitrarily well continuous functions (Cybenko 1988, 1989; Funahashi 1989; Stinchcombe and White 1989). We prove that networks derived from regularization theory and including Radial Basis Functions (Poggio and Girosi 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property ofbest approximation. The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of best approximation. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> In regularized kernel methods, the solution of a learning problem is found by minimizing functionals consisting of the sum of a data and a complexity term. In this paper we investigate some properties of a more general form of the above functionals in which the data term corresponds to the expected risk. First, we prove a quantitative version of the representer theorem holding for both regression and classification, for both differentiable and non-differentiable loss functions, and for arbitrary offset terms. Second, we show that the case in which the offset space is non trivial corresponds to solving a standard problem of regularization in a Reproducing Kernel Hilbert Space in which the penalty term is given by a seminorm. Finally, we discuss the issues of existence and uniqueness of the solution. From the specialization of our analysis to the discrete setting it is immediate to establish a connection between the solution properties of sparsity and coefficient boundedness and some properties of the loss function. For the case of Support Vector Machines for classification, we also obtain a complete characterization of the whole method in terms of the Khun-Tucker conditions with no need to introduce the dual formulation. 
<s> BIB006 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> In this letter, we provide a study of learning in a Hilbert space of vectorvalued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type. <s> BIB007 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> We characterize the reproducing kernel Hilbert spaces whose elements are p-integrable functions in terms of the boundedness of the integral operator whose kernel is the reproducing kernel. Moreover, for p = 2, we show that the spectral decomposition of this integral operator gives a complete description of the reproducing kernel, extending the Mercer theorem. <s> BIB008 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network. <s> BIB009 </s> Kernels for Vector-Valued Functions: A Review <s> A Regularization Perspective <s> In this thesis we address the problem of modeling correlated outputs using Gaussian process priors. Applications of modeling correlated outputs include the joint prediction of pollutant metals in geostatistics and multitask learning in machine learning. Defining a Gaussian process prior for correlated outputs translates into specifying a suitable covariance function that captures dependencies between the different output variables. Classical models for obtaining such a covariance function include the linear model of coregionalization and process convolutions. We propose a general framework for developing multiple output covariance functions by performing convolutions between smoothing kernels particular to each output and covariance functions that are common to all outputs. Both the linear model of coregionalization and the process convolutions turn out to be special cases of this framework. Practical aspects of the proposed methodology are studied in this thesis. 
They involve the use of domain-specific knowledge for defining relevant smoothing kernels, efficient approximations for reducing computational complexity and a novel method for establishing a general class of nonstationary covariances with applications in robotics and motion capture data.Reprints of the publications that appear at the end of this document, report case studies and experimental results in sensor networks, geostatistics and motion capture data that illustrate the performance of the different methods proposed. <s> BIB010
We will first describe a regularization (frequentist) perspective (see BIB005). The key point in this setting is that the function of interest is assumed to belong to a reproducing kernel Hilbert space (RKHS), $\mathcal{H}_k$. The estimator is then derived as the minimizer of a regularized functional $\frac{1}{N}\sum_{i=1}^{N}\big(y_i - f(x_i)\big)^2 + \lambda\,\|f\|_k^2$ (2). The first term in the functional is the so-called empirical risk and it is the sum of the squared errors. It is a measure of the price we pay when predicting $f(x)$ in place of $y$. The second term in the functional is the (squared) norm in a RKHS. This latter concept plays a key role, so we review a few essential concepts (see BIB001 BIB003). A RKHS $\mathcal{H}_k$ is a Hilbert space of functions and can be defined by a reproducing kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ (in the following we will simply write kernel rather than reproducing kernel), which is a symmetric, positive definite function. The latter assumption amounts to requiring the matrix with entries $k(x_i, x_j)$ to be positive for any (finite) sequence $(x_i)$. Given a kernel $k$, the RKHS $\mathcal{H}_k$ is the Hilbert space such that the function $k(x, \cdot)$ belongs to $\mathcal{H}_k$ for all $x \in \mathcal{X}$ and $f(x) = \langle f, k(x, \cdot) \rangle_k$, where $\langle \cdot, \cdot \rangle_k$ is the inner product in $\mathcal{H}_k$. The latter property, known as the reproducing property, gives the name to the space. Two further properties make RKHS appealing: • functions in a RKHS are in the closure of the linear combinations of the kernel at given points, $f(x) = \sum_i k(x_i, x)\, c_i$. This allows us to describe, in a unified framework, linear models as well as a variety of generalized linear models; • the norm in a RKHS can be written as $\sum_{i,j} k(x_i, x_j)\, c_i c_j$ and is a natural measure of how complex a function is. Specific examples are given by the shrinkage point of view taken in ridge regression with linear models or the regularity expressed in terms of the magnitude of derivatives, as is done in spline models. In this setting the functional (2) can be derived either from a regularization point of view BIB005 or from the theory of empirical risk minimization (ERM). In the former, one observes that, if the space $\mathcal{H}_k$ is large enough, the minimization of the empirical error is ill-posed, and in particular it responds in an unstable manner to noise, or when the number of samples is low. Adding the squared norm stabilizes the problem. The latter point of view starts from the analysis of ERM, showing that generalization to new samples can be achieved if there is a trade-off between fitting and complexity of the estimator (for example, a measure of complexity is the Vapnik-Chervonenkis dimension). The functional (2) can be seen as an instance of such a trade-off. The explicit form of the estimator is derived in two steps. First, one can show that the minimizer of (2) can always be written as a linear combination of the kernels centered at the training set points, $f_*(x) = \sum_{i=1}^{N} k(x_i, x)\, c_i$; see for example BIB007 BIB008. The above result is the well-known representer theorem originally proved in BIB002 (see also BIB004 and BIB006 for recent results and further references). The explicit form of the coefficients $c = [c_1, \ldots, c_N]^\top$ can then be derived by substituting for $f_*(x_*)$ in (2).
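To make the two-step derivation above concrete, here is a minimal numerical sketch of the resulting estimator in the scalar case, assuming a Gaussian (RBF) kernel and the squared loss; for this loss the coefficients can be obtained by solving the linear system $(k(X, X) + \lambda N I)\,c = y$. The function and variable names are illustrative and not taken from any particular library.

```python
# Minimal kernel ridge regression sketch (squared loss, RBF kernel); illustrative only.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)) for row-wise inputs A, B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def fit_kernel_ridge(X, y, lam=0.1, lengthscale=1.0):
    """Representer theorem: f*(x) = sum_i k(x_i, x) c_i, with c solving (K + lam*N*I) c = y."""
    N = X.shape[0]
    K = rbf_kernel(X, X, lengthscale)
    return np.linalg.solve(K + lam * N * np.eye(N), y)

def predict_kernel_ridge(X_train, c, X_new, lengthscale=1.0):
    """Evaluate f*(x) = sum_i k(x_i, x) c_i at the new points."""
    return rbf_kernel(X_new, X_train, lengthscale) @ c

# Toy usage: noisy scalar regression.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
c = fit_kernel_ridge(X, y, lam=0.01)
X_test = np.linspace(-3, 3, 5)[:, None]
print(predict_kernel_ridge(X, c, X_test))
```

The regularization parameter lam trades the empirical risk against the RKHS norm, exactly as in the functional (2).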
Kernels for Vector-Valued Functions: A Review <s> A Connection Between Bayesian and Regularization Point of Views <s> The problem of the approximation of nonlinear mapping, (especially continuous mappings) is considered. Regularization theory and a theoretical framework for approximation (based on regularization techniques) that leads to a class of three-layer networks called regularization networks are discussed. Regularization networks are mathematically related to the radial basis functions, mainly used for strict interpolation tasks. Learning as approximation and learning as hypersurface reconstruction are discussed. Two extensions of the regularization approach are presented, along with the approach's corrections to splines, regularization, Bayes formulation, and clustering. The theory of regularization networks is generalized to a formulation that includes task-dependent clustering and dimensionality reduction. Applications of regularization networks are discussed. > <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> A Connection Between Bayesian and Regularization Point of Views <s> Kernel methods provide a powerful and unified framework for pattern discovery, motivating algorithms that can act on general types of data (e.g. strings, vectors or text) and look for general types of relations (e.g. rankings, classifications, regressions, clusters). The application areas range from neural networks and pattern recognition to machine learning and data mining. This book, developed from lectures and tutorials, fulfils two major roles: firstly it provides practitioners with a large toolkit of algorithms, kernels and solutions ready to use for standard pattern discovery problems in fields such as bioinformatics, text analysis, image analysis. Secondly it provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> A Connection Between Bayesian and Regularization Point of Views <s> Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. 
Appendixes provide mathematical background and a discussion of Gaussian Markov processes. <s> BIB003
Connections between regularization theory and Gaussian process prediction or Bayesian models for prediction have been pointed out elsewhere BIB001 BIB003. Here we just give a very brief sketch of the argument. We restrict ourselves to finite dimensional RKHS. Under this assumption one can show that every RKHS can be described in terms of a feature map, that is a map $\Phi : \mathcal{X} \to \mathbb{R}^p$, such that $k(x, x') = \Phi(x)^\top \Phi(x')$. [Figure caption: the solid line corresponds to the predictive mean, the shaded region corresponds to two standard deviations of the prediction; dots are values of the output function $Y$; samples from the posterior distribution are shown as dashed lines.] In fact in this case one can show that functions in the RKHS with kernel $k$ can be written as $f(x) = \sum_{j=1}^{p} w^j\, \Phi^j(x) = w^\top \Phi(x)$. Then we can build a Gaussian process by assuming the coefficient $w = (w^1, \ldots, w^p)$ to be distributed according to a multivariate Gaussian distribution. Roughly speaking, in this case the assumption $f_* \sim \mathcal{GP}(0, k)$ becomes $w \sim \mathcal{N}(0, I)$. As we noted before, if we assume a Gaussian likelihood we have $p(\mathbf{y} \mid w, X) = \mathcal{N}\big(\Phi(X)\, w, \sigma^2 I\big)$, where $\Phi(X)$ is the matrix whose $i$-th row is $\Phi(x_i)^\top$. Then the posterior distribution is proportional to $e^{-\frac{1}{2\sigma^2}\|\mathbf{y} - \Phi(X) w\|^2 - \frac{1}{2}\|w\|^2}$, and we see that a maximum a posteriori estimate will in turn give the minimization problem defining Tikhonov regularization, where the regularization parameter is now related to the noise variance. We note that in regularization the squared error is often replaced by a more general error term $\frac{1}{N}\sum_{i=1}^{N} \ell(y_i, f(x_i))$. In a regularization perspective, the loss function $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$ measures the error we incur when predicting $f(x)$ in place of $y$. The choice of the loss function is problem dependent. Often used examples are the square loss, the logistic loss or the hinge loss used in support vector machines (see ). The choice of a loss function in a regularization setting can be contrasted to the choice of the likelihood in a Bayesian setting. In this context, the likelihood function models how the observations deviate from the assumed true model in the generative process. The notion of a loss function is philosophically different. It represents the cost we pay for making errors. In Bayesian modeling, decision making is separated from inference. In the inference stage the posterior distributions are computed, evaluating the uncertainty in the model. The loss function appears only at the second stage of the analysis, known as the decision stage, and weighs how incorrect decisions are penalized given the current uncertainty. However, whilst the two notions are philosophically very different, we can see that, due to the formulation of the frameworks, the loss function and the log likelihood play the same role mathematically. The discussion in the previous sections shows that the notion of a kernel plays a crucial role in statistical modeling both in the Bayesian perspective (as the covariance function of a GP) and the regularization perspective (as a reproducing kernel). Indeed, for scalar valued problems there is a rich literature on the design of kernels (see for example BIB002 BIB003 and references therein). In the next sections we show how the concept of a kernel can be used in multi-output learning problems. Before doing that, we describe how the concepts of RKHSs and GPs translate to the setting of vector valued learning.
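Before moving on, here is a small numerical check of the correspondence sketched above, assuming a Gaussian likelihood with noise variance $\sigma^2$: the GP posterior mean $k_*(K + \sigma^2 I)^{-1} y$ coincides with the regularized (kernel ridge) estimator of the previous section when $\lambda N = \sigma^2$. The code is an illustrative sketch under these assumptions, not a reference implementation.

```python
# Numerical check: GP posterior mean == kernel ridge prediction when lam * N = sigma2.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(20, 1))
y = np.cos(2 * X[:, 0]) + 0.05 * rng.standard_normal(20)
X_star = np.linspace(-2, 2, 7)[:, None]

sigma2 = 0.05            # noise variance of the Gaussian likelihood
lam = sigma2 / len(X)    # matching regularization parameter

K = rbf_kernel(X, X)
K_star = rbf_kernel(X_star, X)

gp_mean = K_star @ np.linalg.solve(K + sigma2 * np.eye(len(X)), y)
ridge_pred = K_star @ np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

print(np.allclose(gp_mean, ridge_pred))  # True: the MAP/posterior mean matches Tikhonov regularization
```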
Kernels for Vector-Valued Functions: A Review <s> Multi-output Learning <s> Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Multi-output Learning <s> We provide some insights into how task correlations in multi-task Gaussian process (GP) regression affect the generalization error and the learning curve. We analyze the asymmetric two-tasks case, where a secondary task is to help the learning of a primary task. Within this setting, we give bounds on the generalization error and the learning curve of the primary task. Our approach admits intuitive understandings of the multi-task GP by relating it to single-task GPs. For the case of one-dimensional input-space under optimal sampling with data only for the secondary task, the limitations of multi-task GP can be quantified explicitly. <s> BIB002
The problem we are interested in is that of learning an unknown functional relationship $f$ between an input space $\mathcal{X}$, for example $\mathcal{X} = \mathbb{R}^p$, and an output space $\mathbb{R}^D$. In the following we will see that the problem can be tackled either assuming that $f$ belongs to a reproducing kernel Hilbert space of vector valued functions or assuming that $f$ is drawn from a vector valued Gaussian process. Before doing this we describe several related settings all falling under the framework of multi-output learning. The natural extension of the traditional (scalar) supervised learning problem is the one we discussed in the introduction, when the data are pairs $S = (X, Y) = \{(x_1, y_1), \ldots, (x_N, y_N)\}$. For example this is the typical setting for problems such as motion/velocity field estimation. A special case is that of multi-category classification or multi-label problems where, if we have $D$ classes, each input point can be associated with a (binary) coding vector in which, for example, 1 stands for presence (and 0 for absence) of a class instance. The simplest example is the so-called one-vs-all approach to multiclass classification which, if we have $\{1, \ldots, D\}$ classes, amounts to the coding $i \to e_i$, where $(e_i)$ is the canonical basis of $\mathbb{R}^D$. A more general situation is that where different outputs might have different training set cardinalities, different input points or, in the extreme case, even different input spaces. More formally, in this case we have a training set for each output, where the number of data associated with each output, $N_d$, might be different and the inputs for a component might belong to a different input space $\mathcal{X}_d$. The terminology used in machine learning often does not distinguish the different settings above and the term multitask learning is often used. In this paper we use the term multi-output learning or vector valued learning to define the general class of problems and use the term multi-task for the case where each component has different inputs. Indeed in this very general situation each component can be thought of as a distinct task possibly related to other tasks (components). In the geostatistics literature, if each output has the same set of inputs the model is called isotopic, and heterotopic if each output is associated with a different set of inputs [104]. Heterotopic data is further classified into entirely heterotopic data, where the variables have no sample locations in common, and partially heterotopic data, where the variables share some sample locations. In machine learning, the partially heterotopic case is sometimes referred to as asymmetric multitask learning BIB001 BIB002. The notation in the multitask learning scenario (heterotopic case) is a bit more involved. To simplify the notation we assume that the number of data for each output is the same. Moreover, for the sake of simplicity sometimes we restrict the presentation to the isotopic setting, though the models can usually readily be extended to the more general setting. We will use the notation $X$ to indicate the collection of all the training input points, $\{x_j\}_{j=1}^{N}$, and $S$ to denote the collection of all the training data. Also we will use the notation $f(X)$ to indicate a vector valued function evaluated at different training points. This notation has a slightly different meaning depending on the way the input points are sampled. If the inputs to all the components are the same, then $X = \{x_1, \ldots, x_N\}$ and $f(X) = \big(f_1(x_1), \ldots, f_D(x_N)\big)$.
If the inputs for the different components are different, then $X = \{X_d\}_{d=1}^{D} = \{X_1, \ldots, X_D\}$, where $X_d = \{x_{d,n}\}_{n=1}^{N}$, and $f(X) = \big((f_1(x_{1,1}), \ldots, f_1(x_{1,N})), \ldots, (f_D(x_{D,1}), \ldots, f_D(x_{D,N}))\big)$.
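As a brief illustration of the two sampling settings just described, the following sketch shows one possible in-memory layout for isotopic and heterotopic multi-output data; the variable names and array layout are our own conventions, not ones prescribed by the survey or by any library.

```python
# Illustrative data layouts for isotopic vs. heterotopic multi-output data.
import numpy as np

D, N, p = 3, 5, 2
rng = np.random.default_rng(2)

# Isotopic: every output shares the same N input points.
X_iso = rng.standard_normal((N, p))      # common inputs, one row per point
Y_iso = rng.standard_normal((N, D))      # column d holds the samples of output d

# Heterotopic: each output d has its own input set X_d (here even with its own size N_d).
N_d = [4, 6, 5]
X_het = [rng.standard_normal((n, p)) for n in N_d]   # list of per-output input sets
Y_het = [rng.standard_normal(n) for n in N_d]        # per-output targets

# The stacked vector f(X) used in the text concatenates the outputs block by block.
f_of_X = np.concatenate(Y_het)           # length sum(N_d)
print(Y_iso.shape, [x.shape for x in X_het], f_of_X.shape)
```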
Kernels for Vector-Valued Functions: A Review <s> Reproducing Kernel for Vector Valued Function <s> In this letter, we provide a study of learning in a Hilbert space of vectorvalued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Reproducing Kernel for Vector Valued Function <s> We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Reproducing Kernel for Vector Valued Function <s> We characterize the reproducing kernel Hilbert spaces whose elements are p-integrable functions in terms of the boundedness of the integral operator whose kernel is the reproducing kernel. Moreover, for p = 2, we show that the spectral decomposition of this integral operator gives a complete description of the reproducing kernel, extending the Mercer theorem. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Reproducing Kernel for Vector Valued Function <s> We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network. <s> BIB004
The definition of RKHS for vector valued functions parallels the one in the scalar case, with the main difference that the reproducing kernel is now matrix valued; see for example BIB001 BIB003. A reproducing kernel is a symmetric function $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{D \times D}$, such that for any $x, x'$, $K(x, x')$ is a positive semi-definite matrix. A vector valued RKHS is a Hilbert space $\mathcal{H}$ of functions $f : \mathcal{X} \to \mathbb{R}^D$, such that for every $c \in \mathbb{R}^D$ and $x \in \mathcal{X}$, $K(x, x')\, c$, as a function of $x'$, belongs to $\mathcal{H}$ and moreover $K$ has the reproducing property $\langle f(x), c \rangle = \langle f, K(x, \cdot)\, c \rangle_K$, where $\langle \cdot, \cdot \rangle_K$ is the inner product in $\mathcal{H}$. Again, the choice of the kernel corresponds to the choice of the representation (parameterization) for the function of interest. In fact any function in the RKHS is in the closure of the set of linear combinations $f(x) = \sum_{i=1}^{N} K(x_i, x)\, c_i$, with $c_i \in \mathbb{R}^D$, where we note that each term $K(x_i, x)$ is a matrix acting on a vector $c_i$. The norm in the RKHS typically provides a measure of the complexity of a function and this will be the subject of the next sections. Note that the definition of vector valued RKHS can be described in a component-wise fashion in the following sense. The kernel $K$ can be described by a scalar kernel $R$ acting jointly on input examples and task indices, that is $(K(x, x'))_{d,d'} = R\big((x, d), (x', d')\big)$, where $R$ is a scalar reproducing kernel on the space $\mathcal{X} \times \{1, \ldots, D\}$. This latter point of view is useful while dealing with multitask learning; see BIB002 for a discussion. Provided with the above concepts we can follow a regularization approach to define an estimator by minimizing the regularized empirical error (2), which in this case can be written as $\frac{1}{N}\sum_{j=1}^{N} \|y_j - f(x_j)\|^2 + \lambda \|f\|_K^2$, where $f = (f_1, \ldots, f_D)$. Once again the solution is given by the representer theorem BIB001, $f_*(x) = \sum_{i=1}^{N} K(x_i, x)\, c_i$, and the coefficients satisfy the linear system $\big(K(X, X) + \lambda N I\big)\, c = y$, where $c$, $y$ are $ND$ vectors obtained concatenating the coefficients and the output vectors, and $K(X, X)$ is an $ND \times ND$ matrix with entries $(K(x_i, x_j))_{d,d'}$, for $i, j = 1, \ldots, N$ and $d, d' = 1, \ldots, D$ (see for example BIB001). More explicitly,
$$K(X, X) = \begin{pmatrix} (K(X_1, X_1))_{1,1} & \cdots & (K(X_1, X_D))_{1,D} \\ (K(X_2, X_1))_{2,1} & \cdots & (K(X_2, X_D))_{2,D} \\ \vdots & \ddots & \vdots \\ (K(X_D, X_1))_{D,1} & \cdots & (K(X_D, X_D))_{D,D} \end{pmatrix},$$
where each block $(K(X_i, X_j))_{i,j}$ is an $N \times N$ matrix (here we make the simplifying assumption that each output has the same number of training data). Note that given a new point $x_*$ the corresponding prediction is given by $f_*(x_*) = K_{x_*}\, c$, where $K_{x_*} \in \mathbb{R}^{D \times ND}$ has entries $(K(x_*, x_j))_{d,d'}$ for $j = 1, \ldots, N$ and $d, d' = 1, \ldots, D$.
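To illustrate the block-partitioned system above numerically, the following sketch assumes, purely for illustration, a separable matrix-valued kernel $K(x, x') = B\, k(x, x')$ with $B$ a $D \times D$ positive semi-definite matrix and $k$ a scalar RBF kernel (constructions of this kind are the subject of later chapters of the survey); all function and variable names are our own.

```python
# Vector-valued regularized estimator with an assumed separable kernel K(x,x') = B * k(x,x').
import numpy as np

def rbf_kernel(A, C, lengthscale=1.0):
    sq_dists = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def block_kernel(X1, X2, B, lengthscale=1.0):
    """ND1 x ND2 block-partitioned matrix whose (d, d') block is B[d, d'] * k(X1, X2)."""
    return np.kron(B, rbf_kernel(X1, X2, lengthscale))

D, N, lam = 2, 15, 1e-2
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(N, 1))
B = np.array([[1.0, 0.8], [0.8, 1.0]])     # assumed similarity between the two outputs
Y = np.column_stack([np.sin(X[:, 0]), np.sin(X[:, 0] + 0.3)]) + 0.05 * rng.standard_normal((N, D))

y = Y.T.reshape(-1)                        # concatenate the output vectors: [y_1; ...; y_D]
K_XX = block_kernel(X, X, B)               # ND x ND block matrix K(X, X)
c = np.linalg.solve(K_XX + lam * N * np.eye(N * D), y)   # (K(X,X) + lam*N*I) c = y

x_star = np.array([[0.5]])
K_x_star = block_kernel(x_star, X, B)      # D x ND matrix with entries (K(x_*, x_j))_{d,d'}
print(K_x_star @ c)                        # vector-valued prediction f_*(x_*)
```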
Kernels for Vector-Valued Functions: A Review <s> Gaussian Processes for Vector Valued Functions <s> Abstract : The present paper may be considered as a sequel to our previous paper in the Proceedings of the Cambridge Philosophical Society, Theorie generale de noyaux reproduisants-Premiere partie (vol. 39 (1944)) which was written in 1942-1943. In the introduction to this paper we outlined the plan of papers which were to follow. In the meantime, however, the general theory has been developed in many directions, and our original plans have had to be changed. Due to wartime conditions we were not able, at the time of writing the first paper, to take into account all the earlier investigations which, although sometimes of quite a different character, were, nevertheless, related to our subject. Our investigation is concerned with kernels of a special type which have been used under different names and in different ways in many domains of mathematical research. We shall therefore begin our present paper with a short historical introduction in which we shall attempt to indicate the different manners in which these kernels have been used by various investigators, and to clarify the terminology. We shall also discuss the more important trends of the application of these kernels without attempting, however, a complete bibliography of the subject matter. (KAR) P. 2 <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Gaussian Processes for Vector Valued Functions <s> We consider best linear unbiased prediction for multivariable data. Minimizing mean-squared-prediction errors leads to prediction equations involving either covariances or variograms. We discuss problems with multivariate extensions that include the construction of valid models and the estimation of their parameters. In this paper, we develop new methods to construct valid crossvariograms, fit them to data, and then use them for multivariable spatial prediction, including cokriging. Crossvariograms are constructed by explicitly modeling spatial data as moving averages over white noise random processes. Parameters of the moving average functions may be inferred from the variogram, and with few additional parameters, crossvariogram models are constructed. Weighted least squares is then used to fit the crossvariogram model to the empirical crossvariogram for the data. We demonstrate the method for simulated data, and show a considerable advantage of cokriging over ordinary kriging. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Gaussian Processes for Vector Valued Functions <s> Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. 
Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes. <s> BIB003
Gaussian process methods for modeling vector-valued functions follow the same approach as in the single output case. Recall that a Gaussian process is defined as a collection of random variables, such that any finite number of them follows a joint Gaussian distribution. In the single output case, the random variables are associated with a single process $f$ evaluated at different values of $x$, while in the multiple output case the random variables are associated with different processes $\{f_d\}_{d=1}^{D}$, evaluated at different values of $x$ BIB002. The vector-valued function $f$ is assumed to follow a Gaussian process $f \sim \mathcal{GP}(m, K)$, where $m \in \mathbb{R}^D$ is a vector whose components are the mean functions $\{m_d(x)\}_{d=1}^{D}$ of each output and $K$ is a positive matrix valued function as in section 3.2. The entries $(K(x, x'))_{d,d'}$ in the matrix $K(x, x')$ correspond to the covariances between the outputs $f_d(x)$ and $f_{d'}(x')$ and express the degree of correlation or similarity between them. For a set of inputs $X$, the prior distribution over the vector $f(X)$ is given by $f(X) \sim \mathcal{N}\big(m(X), K(X, X)\big)$, where $m(X)$ is a vector that concatenates the mean vectors associated to the outputs and the covariance matrix $K(X, X)$ is the block partitioned matrix introduced in section 3.2. Without loss of generality, we assume the mean vector to be zero. In a regression context, the likelihood function for the outputs is often taken to be a Gaussian distribution, so that $p(y \mid f, x, \Sigma) = \mathcal{N}\big(f(x), \Sigma\big)$, with $\Sigma$ a diagonal matrix collecting the noise variance of each output. For a Gaussian likelihood, the predictive distribution and the marginal likelihood can be derived analytically. The predictive distribution for a new vector $x_*$ is BIB003 $p(f(x_*) \mid S, f, x_*, \phi) = \mathcal{N}\big(f_*(x_*), K_*(x_*, x_*)\big)$, with $f_*(x_*) = K_{x_*}\big(K(X, X) + \bar{\Sigma}\big)^{-1} y$ and $K_*(x_*, x_*) = K(x_*, x_*) - K_{x_*}\big(K(X, X) + \bar{\Sigma}\big)^{-1} K_{x_*}^\top$, where $\bar{\Sigma}$ denotes the $ND \times ND$ noise covariance built from $\Sigma$, $y$ concatenates the output vectors, $K_{x_*} \in \mathbb{R}^{D \times ND}$ has entries $(K(x_*, x_j))_{d,d'}$ for $j = 1, \ldots, N$ and $d, d' = 1, \ldots, D$, and $\phi$ denotes a possible set of hyperparameters of the covariance function $K(x, x')$ used to compute $K(X, X)$ and the variances of the noise for each output. Again we note that if we are interested in the distribution of the noisy predictions, it is easy to see that we simply have to add $\Sigma$ to the expression of the prediction variance. The above expression for the mean prediction coincides again with the prediction of the estimator derived in the regularization framework. In the following chapters we describe several possible choices of kernels (covariance function) for multi-output problems. We start in the next chapter with kernel functions that clearly separate the contributions of input and output. We will see later alternative ways to construct kernel functions that interleave both contributions in a non-trivial way.
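As a small preview of the separable constructions just mentioned, the following sketch draws one sample from a two-output GP prior assuming, for illustration only, a covariance of the form $K(x, x') = B\, k(x, x')$; the off-diagonal entry of $B$ controls how strongly the two outputs are coupled. Names and parameter values are our own choices, not prescriptions from the survey.

```python
# Drawing one sample from a vector-valued GP prior with an assumed separable covariance.
import numpy as np

def rbf_kernel(A, C, lengthscale=0.5):
    sq_dists = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

rng = np.random.default_rng(4)
X = np.linspace(0, 1, 50)[:, None]
B = np.array([[1.0, 0.9], [0.9, 1.0]])                  # strong positive coupling between outputs

K = np.kron(B, rbf_kernel(X, X)) + 1e-8 * np.eye(2 * len(X))   # prior covariance K(X, X) + jitter
L = np.linalg.cholesky(K)
sample = L @ rng.standard_normal(2 * len(X))            # one draw of f(X) = [f_1(X); f_2(X)]

f1, f2 = sample[:len(X)], sample[len(X):]
print(np.corrcoef(f1, f2)[0, 1])                        # typically close to B[0, 1]: the outputs co-vary
```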
Kernels for Vector-Valued Functions: A Review <s> Kernels and Regularizers <s> This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix- valued kernels which are linear and are of the dot product or the translation invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi-task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Kernels and Regularizers <s> In this letter, we provide a study of learning in a Hilbert space of vectorvalued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Kernels and Regularizers <s> We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Kernels and Regularizers <s> The CRASH computer model simulates the effect of a vehicle colliding against different barrier types. If it accurately represents real vehicle crashworthiness, the computer model can be of great value in various aspects of vehicle design, such as the setting of timing of air bag releases. The goal of this study is to address the problem of validating the computer model for such design goals, based on utilizing computer model runs and experimental data from real crashes. 
This task is complicated by the fact that (i) the output of this model consists of smooth functional data, and (ii) certain types of collision have very limited data. We address problem (i) by extending existing Gaussian process-based methodology developed for models that produce real-valued output, and resort to Bayesian hierarchical modeling to attack problem (ii). Additionally, we show how to formally test if the computer model reproduces reality. Supplemental materials for the article are available online. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Kernels and Regularizers <s> In this paper we study a class of regularized kernel methods for multi-output learning which are based on filtering the spectrum of the kernel matrix. The considered methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting and other iterative schemes. Computational properties are discussed for various examples of kernels for vector-valued functions and the benefits of iterative techniques are illustrated. Generalizing previous results for the scalar case, we show a finite sample bound for the excess risk of the obtained estimator, which allows to prove consistency both for regression and multi-category classification. Finally, we present some promising results of the proposed algorithms on artificial and real data. <s> BIB005
In this section we largely follow the results in BIB001 BIB002 BIB003 and BIB005 . A possible way to design multi-output kernels of the form (9) is given by the following result. If K is given by BIB004 , that is a kernel of the form K(x, x′) = k(x, x′)B, then it is possible to prove that the norm of a function in the corresponding RKHS can be written as

‖f‖²_K = Σ_{d,d′=1}^{D} B†_{d,d′} ⟨f_d, f_{d′}⟩_k,

where B† is the pseudoinverse of B and f = (f_1, . . . , f_D). The above expression gives another way to see why the matrix B encodes the relation among the components. In fact, we can interpret the right-hand side of the above expression as a regularizer inducing a specific coupling among the different tasks f_d, f_{d′}, with weights given by B†_{d,d′}. This result says that any such regularizer induces a kernel of the form BIB004 . We illustrate the above idea with a few examples.
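Before presenting those examples, the following short numerical sketch (our own illustration, with a placeholder scalar kernel, matrix B and random coefficients) checks the identity above: the RKHS norm computed from the matrix-valued kernel coincides with the regularizer weighted by the entries of B†.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, ell = 3, 15, 0.4                        # outputs, points, length-scale (placeholders)

def k_scalar(X1, X2):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell ** 2)

# A rank-deficient B to exercise the pseudoinverse.
A = rng.normal(size=(D, 2))
B = A @ A.T

X = rng.uniform(size=N)
Kx = k_scalar(X, X)                            # scalar Gram matrix, N x N
C = rng.normal(size=(N, D))                    # coefficients c_j in R^D for f = sum_j K(., x_j) c_j

# RKHS norm from the matrix-valued kernel: sum_{i,j} c_i^T K(x_i, x_j) c_j with K = k * B.
norm_K = np.einsum('id,ij,de,je->', C, Kx, B, C)

# Same quantity from the regularizer: sum_{d,d'} Bdag_{d,d'} <f_d, f_{d'}>_k,
# with f_d = sum_j k(., x_j) (B c_j)_d, hence <f_d, f_{d'}>_k = (B C)^T Kx (B C).
Bdag = np.linalg.pinv(B)
G = (C @ B).T @ Kx @ (C @ B)                   # D x D matrix of inner products <f_d, f_{d'}>_k
norm_reg = np.sum(Bdag * G)

print(np.allclose(norm_K, norm_reg))           # True: the two expressions agree
```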
Kernels for Vector-Valued Functions: A Review <s> Mixed Effect Regularizer Consider the regularizer given by <s> This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix- valued kernels which are linear and are of the dot product or the translation invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi-task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Mixed Effect Regularizer Consider the regularizer given by <s> We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Mixed Effect Regularizer Consider the regularizer given by <s> In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Mixed Effect Regularizer Consider the regularizer given by <s> We investigate the problem of learning multiple tasks that are related according to a network structure, using the multi-task kernel framework proposed in (Evgeniou et al., 2006). Our method combines a graphical task kernel with an arbitrary base kernel. 
We demonstrate its effectiveness on a real ecological application that inspired this work. <s> BIB004
where A_ω = 1/(2(1 − ω)(1 − ω + ωD)) and C_ω = (2 − 2ω + ωD). The above regularizer is composed of two terms: the first is a standard regularization term on the norm of each component of the estimator; the second forces each f_ℓ to be close to the mean estimator across the components, f̄ = (1/D) Σ_{q=1}^{D} f_q. The corresponding kernel imposes a common similarity structure between all the output components, with the strength of the similarity controlled by the parameter ω,

K_ω(x, x′) = k(x, x′) (ω 1_D + (1 − ω) I_D),

where 1_D is the D × D matrix whose entries are all equal to 1, I_D is the D × D identity matrix and k is a scalar kernel on the input space X. Setting ω = 0 corresponds to treating all components independently, so that possible similarities among them are not exploited. Conversely, ω = 1 is equivalent to assuming that all components are identical and are explained by the same function. By tuning the parameter ω the above kernel interpolates between these two opposite cases. We note that from a Bayesian perspective B is a correlation matrix with all the off-diagonal entries equal to ω, which means that the outputs of the Gaussian process are exchangeable.

Cluster Based Regularizer. Another example of a regularizer, proposed in BIB002 , is based on the idea of grouping the components into r clusters and enforcing the components in each cluster to be similar. Following BIB003 , let us define E as the D × r matrix, where r is the number of clusters, such that E_{ℓ,c} = 1 if the component ℓ belongs to cluster c and 0 otherwise. Then we can compute the D × D matrix M = E(E^⊤E)^{−1}E^⊤, whose entries are M_{ℓ,q} = 1/m_c if components ℓ and q belong to the same cluster c, with m_c the cardinality of cluster c, and M_{ℓ,q} = 0 otherwise. Furthermore, let I(c) be the index set of the components that belong to cluster c. Then we can consider the following regularizer that forces components belonging to the same cluster to be close to each other:

R(f) = ε_1 Σ_{c=1}^{r} Σ_{ℓ ∈ I(c)} ‖f_ℓ − f̄_c‖²_k + ε_2 Σ_{c=1}^{r} m_c ‖f̄_c‖²_k,

where f̄_c is the mean of the components in cluster c and ε_1, ε_2 are parameters balancing the two terms. Straightforward calculations show that the previous regularizer can be rewritten as

R(f) = Σ_{ℓ,q} G_{ℓ,q} ⟨f_ℓ, f_q⟩_k, with G = ε_1 (I_D − M) + ε_2 M.

Therefore the corresponding matrix-valued kernel is K(x, x′) = k(x, x′) G†.

Graph Regularizer. Following BIB001 BIB004 , we can define a regularizer that, in addition to a standard regularization on the single components, forces stronger or weaker similarity between them through a given D × D positive weight matrix M,

J(f) = (1/2) Σ_{ℓ,q=1}^{D} M_{ℓ,q} ‖f_ℓ − f_q‖²_k + Σ_{ℓ=1}^{D} M_{ℓ,ℓ} ‖f_ℓ‖²_k.

The regularizer J(f) can be rewritten as

J(f) = Σ_{ℓ,q=1}^{D} L_{ℓ,q} ⟨f_ℓ, f_q⟩_k,

where L is the graph-Laplacian-type matrix with entries L_{ℓ,q} = δ_{ℓ,q}(Σ_{h=1}^{D} M_{ℓ,h} + M_{ℓ,ℓ}) − M_{ℓ,q}. Therefore the resulting kernel will be K(x, x′) = k(x, x′) L†, with k(x, x′) a scalar kernel to be chosen according to the problem at hand. In the next section we will see how models related to those described above can be derived from suitable generative models.
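As a concrete illustration (our own, with placeholder cluster assignments, graph weights and parameter values), the following sketch builds the output matrices B appearing in K(x, x′) = k(x, x′)B for the three regularizers just described; W plays the role of the weight matrix M of the graph regularizer.

```python
import numpy as np

D = 4  # number of output components (placeholder)

# Mixed-effect kernel: K(x, x') = k(x, x') * (omega * ones + (1 - omega) * I).
omega = 0.5
B_mixed = omega * np.ones((D, D)) + (1.0 - omega) * np.eye(D)

# Cluster-based kernel: components {0, 1} and {2, 3} form two clusters (placeholder grouping).
E = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)   # D x r membership matrix
M = E @ np.linalg.inv(E.T @ E) @ E.T                          # M_{l,q} = 1/m_c within a cluster
eps1, eps2 = 1.0, 0.5
G = eps1 * (np.eye(D) - M) + eps2 * M
B_cluster = np.linalg.pinv(G)                                 # K(x, x') = k(x, x') * G^dagger

# Graph-regularizer kernel: W is a given symmetric weight matrix between components.
W = np.array([[0.0, 1.0, 0.2, 0.0],
              [1.0, 0.0, 0.0, 0.3],
              [0.2, 0.0, 0.0, 1.0],
              [0.0, 0.3, 1.0, 0.0]])
L = np.diag(W.sum(axis=1) + np.diag(W)) - W                   # Laplacian-type matrix of the regularizer
B_graph = np.linalg.pinv(L)                                   # K(x, x') = k(x, x') * L^dagger
```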
Kernels for Vector-Valued Functions: A Review <s> Comparison Between ICM and LMC <s> We propose a semiparametric model for regression and classification problems involving multiple response variables. The model makes use of a set of Gaussian processes to model the relationship to the inputs in a nonparametric fashion. Conditional dependencies between the responses can be captured through a linear mixture of the driving processes. This feature becomes important if some of the responses of predictive interest are less densely supplied by observed data than related auxiliary ones. We propose an efficient approximate inference scheme for this semiparametric model whose complexity is linear in the number of training data points. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Comparison Between ICM and LMC <s> We characterize the reproducing kernel Hilbert spaces whose elements are p-integrable functions in terms of the boundedness of the integral operator whose kernel is the reproducing kernel. Moreover, for p = 2, we show that the spectral decomposition of this integral operator gives a complete description of the reproducing kernel, extending the Mercer theorem. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Comparison Between ICM and LMC <s> Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes. <s> BIB003
We have seen before that the intrinsic coregionalization model is a particular case of the linear model of coregionalization for Q = 1 in equation BIB002 . Here we contrast these two models. Note that a different particular case of the linear model of coregionalization is obtained by assuming Rq = 1 (with Q possibly greater than one). This model, known in the machine learning literature as the semiparametric latent factor model (SLFM) BIB001 , will be introduced in the next subsection. To compare the two models we have sampled from a multi-output Gaussian process with two outputs (D = 2), a one-dimensional input space (x ∈ R) and a LMC with different values for Rq and Q. As basic kernels kq(x, x′) we have used the exponentiated quadratic (EQ) kernel, given as kq(x, x′) = exp(−‖x − x′‖²/(2ℓq²)) BIB003 , where ‖·‖ represents the Euclidean norm and ℓq is known as the characteristic length-scale. The exponentiated quadratic is variously referred to as the Gaussian, the radial basis function or the squared exponential kernel. Figure 2 shows samples from the intrinsic coregionalization model for Rq = 1, meaning a coregionalization matrix B1 of rank one. Samples share the same length-scale and have similar form. They have different variances, though. Each sample may be considered as a scaled version of the latent function, as can be seen from equation 18 with Q = 1 and Rq = 1,

f_d(x) = a^1_{d,1} u^1_1(x),

where we have used the scalar x instead of the vector x for the one-dimensional input space. Figure 3 shows samples from an ICM of rank two. From equation 18, we have, for Q = 1 and Rq = 2,

f_d(x) = a^1_{d,1} u^1_1(x) + a^2_{d,1} u^2_1(x),

where u^1_1(x) and u^2_1(x) are sampled from the same Gaussian process. Outputs are weighted sums of two different latent functions that share the same covariance. In contrast to the ICM of rank one, we see from figure 3 that both outputs have different forms, although they share the same length-scale. Figure 4 displays outputs sampled from a LMC with Rq = 1 and two latent functions (Q = 2) with different length-scales. Notice that both samples are combinations of two terms, a long length-scale term and a short length-scale term. According to equation 18, the outputs are given as

f_d(x) = a^1_{d,1} u^1_1(x) + a^1_{d,2} u^1_2(x),

where u^1_1(x) and u^1_2(x) are samples from two Gaussian processes with different covariance functions. In a similar way to the ICM of rank one (see figure 2), samples from both outputs have the same form, that is, they are aligned. We have the additional case of a LMC with Rq = 2 and Q = 2 in figure 5. According to equation 18, the outputs are given as

f_d(x) = a^1_{d,1} u^1_1(x) + a^2_{d,1} u^2_1(x) + a^1_{d,2} u^1_2(x) + a^2_{d,2} u^2_2(x),

where the pair of latent functions u^1_1(x) and u^2_1(x) share their covariance function and the pair of latent functions u^1_2(x) and u^2_2(x) also share their covariance function. As in the case of the LMC with Rq = 1 and Q = 2 in figure 4, the outputs are combinations of a term with a long length-scale and a term with a short length-scale. A key difference, however, is that for Rq = 2 and Q = 2, samples from different outputs have different shapes.
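The qualitative differences described above can be reproduced with a short sampling script; the sketch below is our own illustration, and the length-scales, ranks and mixing coefficients are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, x = 200, np.linspace(0, 5, 200)

def eq_kernel(x, ell):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2) + 1e-8 * np.eye(len(x))   # jitter for stability

def sample_latent(ell):
    return rng.multivariate_normal(np.zeros(N), eq_kernel(x, ell))

# ICM, rank 1 (Q = 1, Rq = 1): both outputs are scaled versions of one latent function.
a = np.array([1.0, 0.5])
u = sample_latent(1.0)
f_icm1 = np.outer(a, u)                                   # f_d(x) = a_d * u(x)

# ICM, rank 2 (Q = 1, Rq = 2): two latent functions drawn from the SAME covariance.
A1 = rng.normal(size=(2, 2))
U = np.vstack([sample_latent(1.0), sample_latent(1.0)])
f_icm2 = A1 @ U                                           # f_d(x) = a_d1 u^1(x) + a_d2 u^2(x)

# LMC (Q = 2, Rq = 1): one latent function per covariance, with different length-scales.
a1, a2 = rng.normal(size=2), rng.normal(size=2)
u_long, u_short = sample_latent(2.0), sample_latent(0.2)
f_lmc = np.outer(a1, u_long) + np.outer(a2, u_short)      # long + short length-scale terms
```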
Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d2), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities ('error bars'), allows for Bayesian model selection and is less complex in implementation. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MTIVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> Sparse approximations to Bayesian inference for nonparametric Gaussian Process models scale linearly in the number of training points, allowing for the application of these powerful kernel-based models to large datasets. We show how to generalize the binary classification informative vector machine (IVM) (Lawrence et.al., 2002) to multiple classes. In contrast to earlier efficient approaches to kernel-based non-binary classification, our method is a principled approximation to Bayesian inference which yields valid uncertainty estimates and allows for hyperparameter adaption via marginal likelihood maximization. While most earlier proposals suggest fitting independent binary discriminants to heuristically chosen partitions of the data and combining these in a heuristic manner, our method operates jointly on the data for all classes. Crucially, we still achieve a linear scaling in both the number of classes and the number of training points. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework, that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems. 
<s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> We propose a semiparametric model for regression and classification problems involving multiple response variables. The model makes use of a set of Gaussian processes to model the relationship to the inputs in a nonparametric fashion. Conditional dependencies between the responses can be captured through a linear mixture of the driving processes. This feature becomes important if some of the responses of predictive interest are less densely supplied by observed data than related auxiliary ones. We propose an efficient approximate inference scheme for this semiparametric model whose complexity is linear in the number of training data points. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes. <s> BIB006 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets. 
<s> BIB007 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> In this paper, we describe a novel, computationally efficient algorithm that facilitates the autonomous acquisition of readings from sensor networks (deciding when and which sensor to acquire readings from at any time), and which can, with minimal domain knowledge, perform a range of information processing tasks including modelling the accuracy of the sensor readings, predicting the value of missing sensor readings, and predicting how the monitored environmental variables will evolve into the future. Our motivating scenario is the need to provide situational awareness support to first responders at the scene of a large scale incident, and to this end, we describe a novel iterative formulation of a multi-output Gaussian process that can build and exploit a probabilistic model of the environmental variables being measured (including the correlations and delays that exist between them). We validate our approach using data collected from a network of weather sensors located on the south coast of England. <s> BIB008 </s> Kernels for Vector-Valued Functions: A Review <s> Linear Model of Coregionalization in Machine Learning and Statistics <s> We present a novel approach to multitask learning in classification problems based on Gaussian process (GP) classification. The method extends previous work on multitask GP regression, constraining the overall covariance (across tasks and data points) to factorize as a Kronecker product. Fully Bayesian inference is possible but time consuming using sampling techniques. We propose approximations based on the popular variational Bayes and expectation propagation frameworks, showing that they both achieve excellent accuracy when compared to Gibbs sampling, in a fraction of time. We present results on a toy dataset and two real datasets, showing improved performance against the baseline results obtained by learning each task independently. We also compare with a recently proposed state-of-the-art approach based on support vector machines, obtaining comparable or better results. <s> BIB009
The linear model of coregionalization has already been used in machine learning in the context of Gaussian processes for multivariate regression and in statistics for computer emulation of expensive multivariate computer codes. As we have seen before, the linear model of coregionalization imposes the correlation of the outputs explicitly through the set of coregionalization matrices. A simple idea used in the early papers on multi-output GPs for machine learning was based on the intrinsic coregionalization model and assumed B = I_D. In other words, the outputs were considered to be conditionally independent given the parameters φ. Correlation between the outputs was assumed to exist implicitly by imposing the same set of hyperparameters φ for all outputs and estimating those parameters, or the kernel matrix k(X, X) directly, using data from all the outputs BIB002 BIB004 . In this section, we review more recent approaches for multiple-output modeling that are different versions of the linear model of coregionalization.

Semiparametric latent factor model. The semiparametric latent factor model (SLFM) proposed by BIB005 turns out to be a simplified version of the LMC. In fact it corresponds to setting Rq = 1 in (18), so that we can rewrite equation (10) as

K(X, X) = Σ_{q=1}^{Q} a_q a_q^⊤ ⊗ k_q(X, X),

where a_q ∈ R^{D×1} is the vector with elements {a_{d,q}}_{d=1}^{D} for q fixed. With some algebraic manipulations that exploit the properties of the Kronecker product, we can write

K(X, X) = (A ⊗ I_N) K̃ (A^⊤ ⊗ I_N),

where A ∈ R^{D×Q} is a matrix with columns a_q and K̃ ∈ R^{QN×QN} is a block-diagonal matrix with blocks given by k_q(X, X). The functions u_q(x) are considered to be latent factors, and the semiparametric name comes from the fact that the model combines a nonparametric component, that is a Gaussian process, with a parametric linear mixing of the functions u_q(x). The kernel k_q for each basic process is assumed to be an exponentiated quadratic with a different characteristic length-scale for each input dimension. The informative vector machine (IVM) BIB001 is employed to speed up computations.

Gaussian processes for Multi-task, Multi-output and Multi-class. The intrinsic coregionalization model is considered by BIB007 in the context of multitask learning. The authors use a probabilistic principal component analysis (PPCA) model to represent the matrix B. The spectral factorization in the PPCA model is replaced by an incomplete Cholesky decomposition to keep numerical stability. The authors also refer to the autokrigeability effect as the cancellation of inter-task transfer BIB007 , and discuss the similarities between the multi-task GP and the ICM, and its relationship to the SLFM and the LMC. The intrinsic coregionalization model has also been used by BIB008 . Here the matrix B is assumed to have a spherical parametrization, B = diag(e) S^⊤S diag(e), where e gives a description of the scale length of each output variable and S is an upper triangular matrix whose i-th column is associated with particular spherical coordinates of points in R^i (for details see sec. 3.4 ). The scalar kernel k is represented through a Matérn kernel, where different parameterizations allow the expression of periodic and non-periodic terms. Sparsification for this model is obtained using an IVM-style approach. In a classification context, Gaussian process methodology has been mostly restricted to the case where the outputs are conditionally independent given the hyperparameters φ BIB002 BIB003 BIB004 BIB006 . Therefore, the kernel matrix K(X, X) takes a block-diagonal form, with blocks given by (K(X_d, X_d))_{d,d}.
Correlation between the outputs is assumed to exist implicitly by imposing the same set of hyperparameters φ for all outputs and estimating those parameters, or directly the kernel matrices (K(X_d, X_d))_{d,d}, using data from all the outputs BIB002 BIB004 BIB006 . Alternatively, it is also possible to have parameters φ_d associated with each output BIB003 . Only recently has the intrinsic coregionalization model been used in the multiclass scenario. In BIB009 , the authors use the intrinsic coregionalization model for classification, by introducing a probit noise model as the likelihood. Since the posterior distribution is no longer analytically tractable, the authors use Gibbs sampling, Expectation-Propagation (EP) and variational Bayes to approximate the distribution.
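The Kronecker-product form of the SLFM covariance given above can be verified numerically; the following sketch is our own illustration, with a placeholder mixing matrix and length-scales.

```python
import numpy as np

rng = np.random.default_rng(2)
D, Q, N = 3, 2, 10                       # outputs, latent functions, inputs (placeholders)
X = rng.uniform(size=N)

def eq_kernel(X, ell):
    return np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2)

A = rng.normal(size=(D, Q))              # mixing matrix with columns a_q
K_q = [eq_kernel(X, 1.0), eq_kernel(X, 0.1)]   # one scalar kernel per latent function

# SLFM covariance as a sum of Kronecker products: sum_q a_q a_q^T  (kron)  k_q(X, X).
K_sum = sum(np.kron(np.outer(A[:, q], A[:, q]), K_q[q]) for q in range(Q))

# Equivalent form (A kron I_N) Ktilde (A^T kron I_N), with Ktilde block diagonal.
Ktilde = np.zeros((Q * N, Q * N))
for q in range(Q):
    Ktilde[q * N:(q + 1) * N, q * N:(q + 1) * N] = K_q[q]
AI = np.kron(A, np.eye(N))
K_kron = AI @ Ktilde @ AI.T
print(np.allclose(K_sum, K_kron))        # True: both constructions coincide
```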
Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> The potential for feedbacks between terrestrial vegetation, climate, and the atmospheric CO 2 partial pressure have been addressed by modelling. Previous research has established that under global warming and CO 2 enrichment, the stomatal conductance of vegetation tends to decrease, causing a warming effect on top of the driving change in greenhouse warming. At the global scale, this positive feedback is ultimately changed to a negative feedback through changes in vegetation structure. In spatial terms this structural feedback has a variable geographical pattern in terms of magnitude and sign. At high latitudes, increases in vegetation leaf area index (LAI) and vegetation height cause a positive feedback, and warming through reductions in the winter snow–cover albedo. At lower latitudes when vegetation becomes more sparse with warming, the higher albedo of the underlying soil leads to cooling. However, the largest area effects are of negative feedbacks caused by increased evaporative cooling with increasing LAI. These effects do not include feedbacks on the atmospheric CO 2 concentration, through changes in the carbon cycle of the vegetation. Modelling experiments, with biogeochemical, physiological and structural feedbacks on atmospheric CO 2 , but with no changes in precipitation, ocean activity or sea ice formation, have shown that a consequence of the CO 2 fertilization effect on vegetation will be a reduction of atmospheric CO 2 concentration, in the order of 12% by the year 2100 and a reduced global warming by 0.7°C, in a total greenhouse warming of 3.9°C. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> This work focuses on combining observations from field experiments with detailed computer simulations of a physical process to carry out statistical inference. Of particular interest here is determining uncertainty in resulting predictions. 
This typically involves calibration of parameters in the computer simulator as well as accounting for inadequate physics in the simulator. The problem is complicated by the fact that simulation code is sufficiently demanding that only a limited number of simulations can be carried out. We consider applications in characterizing material properties for which the field data and the simulator output are highly multivariate. For example, the experimental data and simulation output may be an image or may describe the shape of a physical object. We make use of the basic framework of Kennedy and O'Hagan. However, the size and multivariate nature of the data lead to computational challenges in implementing the framework. To overcome these challenges, we make use of basis repre... <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> One of the challenges with emulating the response of a multivariate function to its inputs is the quantity of data that must be assimilated, which is the product of the number of model evaluations and the number of outputs. This article shows how even large calculations can be made tractable. It is already appreciated that gains can be made when the emulator residual covariance function is treated as separable in the model-inputs and model-outputs. Here, an additional simplification on the structure of the regressors in the emulator mean function allows very substantial further gains. The result is that it is now possible to emulate rapidly—on a desktop computer—models with hundreds of evaluations and hundreds of outputs. This is demonstrated through calculating costs in floating-point operations, and in an illustration. Even larger sets of outputs are possible if they have additional structure, for example, spatial-temporal. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> Model calibration analysis is concerned with the estimation of unobservable modeling parameters using observations of system response. When the model being calibrated is an expensive computer simulation, special techniques such as surrogate modeling and Bayesian inference are often fruitful. In this paper, we show how the flexibility of the Bayesian calibration approach can be exploited to account for a wide variety of uncertainty sources in the calibration process. We propose a straightforward approach for simultaneously handling Gaussian and non-Gaussian errors, as well as a framework for studying the effects of prescribed uncertainty distributions for model inputs that are not treated as calibration parameters. Further, we discuss how Gaussian process surrogate models can be used effectively when simulator response may be a function of time and/or space (multivariate output). The proposed methods are illustrated through the calibration of a simulation of thermally decomposing foam. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> The CRASH computer model simulates the effect of a vehicle colliding against different barrier types. If it accurately represents real vehicle crashworthiness, the computer model can be of great value in various aspects of vehicle design, such as the setting of timing of air bag releases. The goal of this study is to address the problem of validating the computer model for such design goals, based on utilizing computer model runs and experimental data from real crashes. 
This task is complicated by the fact that (i) the output of this model consists of smooth functional data, and (ii) certain types of collision have very limited data. We address problem (i) by extending existing Gaussian process-based methodology developed for models that produce real-valued output, and resort to Bayesian hierarchical modeling to attack problem (ii). Additionally, we show how to formally test if the computer model reproduces reality. Supplemental materials for the article are available online. <s> BIB006 </s> Kernels for Vector-Valued Functions: A Review <s> Computer emulation. <s> Computer models are widely used in scientific research to study and predict the behaviour of complex systems. The run times of computer-intensive simulators are often such that it is impractical to make the thousands of model runs that are conventionally required for sensitivity analysis, uncertainty analysis or calibration. In response to this problem, highly efficient techniques have recently been developed based on a statistical meta-model (the emulator) that is built to approximate the computer model. The approach, however, is less straightforward for dynamic simulators, designed to represent time-evolving systems. Generalisations of the established methodology to allow for dynamic emulation are here proposed and contrasted. Advantages and difficulties are discussed and illustrated with an application to the Sheffield Dynamic Global Vegetation Model, developed within the UK Centre for Terrestrial Carbon Dynamics. <s> BIB007
A computer emulator is a statistical model used as a surrogate for a computationally expensive deterministic model or computer code, also known as a simulator. Gaussian processes have become the preferred statistical model among computer emulation practitioners (for a review see ). Different Gaussian process emulators have recently been proposed to deal with several outputs BIB003 BIB007 BIB004 BIB005 BIB006 . In BIB003 , the linear model of coregionalization is used to model images representing the evolution of the implosion of steel cylinders after using TNT, obtained employing the so-called Neddemeyer simulation model (see BIB003 for further details). The input variable x represents parameters of the simulation model, while the output is an image of the radius of the inner shell of the cylinder over a fixed grid of times and angles. In the version of the LMC that the authors employed, Rq = 1 and the Q vectors a_q were obtained as the eigenvectors of a PCA decomposition of the set of training images. In BIB007 , the intrinsic coregionalization model is employed for emulating the response of a vegetation model called the Sheffield Dynamic Global Vegetation Model (SDGVM) BIB001 . The authors refer to the ICM as the Multiple-Output (MO) emulator. The inputs to the model are ten (p = 10) variables related to broad soil, vegetation and climate data, while the outputs are time series of the net biome productivity (NBP) index measured at a particular site in a forest area of Harwood, UK. The NBP index accounts for the residual amount of carbon at a vegetation site after some natural processes have taken place. In the paper, the authors assume that the outputs correspond to the different sampling time points, so that D = T, with T the number of time points, while each observation corresponds to specific values of the ten input variables. Values of the input variables are chosen according to a maximin Latin hypercube design. Rougier BIB004 introduces an emulator for multiple outputs that assumes that the set of output variables can be seen as a single variable while augmenting the input space with an additional index over the outputs. In other words, it considers the output variable as an input variable. BIB007 refers to the model in BIB004 as the Time Input (TI) emulator and discusses how the TI model turns out to be a particular case of the MO model that assumes a particular exponentiated quadratic kernel (see chapter 4 BIB002 ) for the entries in the coregionalization matrix B. McFarland et al. BIB005 consider a multiple-output problem as a single-output one. The setup is similar to the one used in BIB007 , where the number of outputs is associated with different time points, that is, D = T. The outputs correspond to the time evolution of the temperature at certain locations of a container with decomposing foam, as a function of five calibration variables (input variables in this context, p = 5). The authors use the time index as an input (akin to BIB004 ) and apply a greedy-like algorithm to select the training points for the Gaussian process. Greedy approximations like this one have also been used in the machine learning literature (for details, see BIB002 , page 174). Similar to BIB004 and BIB005 , Bayarri et al. BIB006 use the time index as an input for a computer emulator that evaluates the accuracy of CRASH, a computer model that simulates the effect of a collision of a vehicle with different types of barriers. Qian et al.
propose a computer emulator based on Gaussian processes that supports quantitative and qualitative inputs. The covariance function in this computer emulator is related to the ICM in the case of one qualitative factor: the qualitative factor is considered to be the index of the output, and the covariance function takes again the form k(x, x′)k_T(d, d′). In the case of more than one qualitative input, the computer emulator could be considered a multiple-output GP in which each output index would correspond to a particular combination of the possible values taken by the qualitative factors. In this case, the matrix B in the ICM would have a block-diagonal form, each block determining the covariance between the values taken by a particular qualitative input.
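The idea of treating the output (or time) index as an additional input, shared by several of the emulators above, can be sketched as follows; this is our own illustration, and the kernels over the inputs and over the index, as well as their length-scales, are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Treat the output (e.g. time) index as an extra input, so that a single-output GP with
# kernel k((x, d), (x', d')) = k_x(x, x') * k_t(d, d') plays the role of an ICM.
def k_x(X1, X2, ell=0.3):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell ** 2)

def k_t(T1, T2, ell=2.0):
    return np.exp(-0.5 * (T1[:, None] - T2[None, :]) ** 2 / ell ** 2)

n_x, n_t = 8, 5                               # calibration inputs and time points (placeholders)
X = rng.uniform(size=n_x)
T = np.arange(n_t, dtype=float)

# On the full grid the product kernel is a Kronecker product, i.e. an ICM whose
# coregionalization matrix is B = k_t(T, T).
K_full = np.kron(k_t(T, T), k_x(X, X))
print(K_full.shape)                           # (n_x * n_t, n_x * n_t)
```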
Kernels for Vector-Valued Functions: A Review <s> Extensions Within the Regularization Framework <s> Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Extensions Within the Regularization Framework <s> We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select--not learn--a few common variables across the tasks. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Extensions Within the Regularization Framework <s> In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Extensions Within the Regularization Framework <s> We consider the problem of learning in an environment of classification tasks. Tasks sampled from the environment are used to improve classification performance on future tasks. We consider situations in which the tasks can be divided into groups. Tasks within each group are related by sharing a low dimensional representation, which differs across the groups. 
We present an algorithm which divides the sampled tasks into groups and computes a common representation for each group. We report experiments on a synthetic and two image data sets, which show the advantage of the approach over single-task learning and a previous transfer learning method. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Extensions Within the Regularization Framework <s> We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of ? 2 norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time. <s> BIB005
When we consider kernels of the form K(x, x′) = k(x, x′)B, a natural question is whether the matrix B can be learned from data. In a regression setting, one idea is to estimate B in a separate inference step as the covariance matrix of the output vectors in the training set; this is standard in the geostatistics literature [104] . A further question is whether we can learn both B and an estimator within a single inference step. This is the question tackled in BIB003 . The authors consider a variation of the regularizer in BIB001 and try to learn the cluster matrix as part of the optimization process. More precisely, the authors consider a regularization term composed of three parts, where we recall that r is the number of clusters: a global penalty, a term penalizing the between-cluster variance, and a term penalizing the within-cluster variance. As in the case of the regularizer in (14), the above regularizer is completely characterized by a cluster matrix M, i.e. R(f) = R_M(f) (note that the corresponding matrix B will be slightly different from (15)). The idea is then to consider a regularized functional to be minimized jointly over f and M (see BIB003 for details). This problem is typically not tractable from a computational point of view, so the authors in BIB003 propose a relaxation of the problem which can be shown to be convex.

A different approach is taken in BIB002 and BIB004 . In this case the idea is that only a small subset of features is useful to learn all the components/tasks. In the simplest case the authors propose to minimize a regularized empirical error functional over w_1, . . . , w_D ∈ R^p and U ∈ R^{D×D}, under the constraint Tr(U^⊤U) ≤ γ. Note that the minimization over the matrix U couples the otherwise disjoint component-wise problems. The authors of BIB002 discuss how the above model is equivalent to considering a kernel of the form K(x, x′) = ⟨x, Dx′⟩, where D is a positive definite matrix, and a model which can be described component-wise as a linear combination of shared features, making apparent the connection with the LMC model. In fact, it is possible to show that the above minimization problem is equivalent to a jointly convex problem minimized over a′_1, . . . , a′_D ∈ R^p and over D with Tr(D) ≤ 1, where the last restriction is a convex approximation of the low-rank requirement. Note that from a Bayesian perspective the above scheme can be interpreted as learning a covariance matrix for the response variables which is optimal for all the tasks. In BIB002 , the authors consider a more general setting where D is replaced by F(D) and show that if the matrix-valued function F is matrix concave, then the induced minimization problem is jointly convex in (a_i) and D. Moreover, the authors discuss how to extend the above framework to the case of more general kernel functions. Note that an approach similar to the one we just described is at the basis of recent work exploiting the concept of sparsity while solving multiple tasks. These latter methods cannot in general be cast in the framework of kernel methods and we refer the interested reader to BIB005 and references therein. In the reasoning above the key assumption is that a response variable is either important for all the tasks or not. In practice it is probably often the case that only certain subgroups of tasks share the same variables. This idea is at the basis of the study in BIB004 , where the authors design an algorithm to learn at once the group structure and the best set of variables for each group of tasks.
Let G = (G_t)_{t=1}^{T} be a partition of the set of components/tasks, where G_t denotes a group of tasks and |G_t| ≤ D. The authors then propose to consider a functional in which the tasks within each group G_t share a common low-dimensional representation, encoded by a matrix U_t, where U_1, . . . , U_T is a sequence of p × p matrices. The authors show that while the above minimization problem is not convex, stochastic gradient descent can be used to find local minimizers which seem to perform well in practice.
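A minimal sketch of the kind of alternating scheme used for learning a shared feature covariance is given below; this is our own simplified illustration (squared loss, a small ridge term for numerical stability, and the update D proportional to (AA^⊤)^{1/2} commonly used in the multi-task feature learning literature), not the exact algorithms of BIB002 or BIB004. Here D denotes the shared p × p feature covariance matrix, as in the text above, while the number of tasks is written n_tasks.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n_tasks, n = 10, 4, 30                      # features, tasks, samples per task (placeholders)
lam, eps, n_iter = 0.1, 1e-6, 50

# Synthetic tasks whose weight vectors lie in a shared 2-dimensional subspace.
U_true = rng.normal(size=(p, 2))
Xs = [rng.normal(size=(n, p)) for _ in range(n_tasks)]
ys = [X @ (U_true @ rng.normal(size=2)) + 0.01 * rng.normal(size=n) for X in Xs]

D = np.eye(p) / p                              # shared feature covariance, Tr(D) = 1
for _ in range(n_iter):
    # Step 1: for fixed D, each task solves a ridge problem with penalty a^T D^{-1} a.
    Dinv = np.linalg.inv(D + eps * np.eye(p))
    A = np.column_stack([np.linalg.solve(X.T @ X + lam * Dinv, X.T @ y)
                         for X, y in zip(Xs, ys)])
    # Step 2: for fixed task weights A, set D = (A A^T)^{1/2} / Tr((A A^T)^{1/2}).
    vals, vecs = np.linalg.eigh(A @ A.T + eps * np.eye(p))
    sqrtAAT = (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T
    D = sqrtAAT / np.trace(sqrtAAT)

# Eigenvalues of D; the largest ones typically concentrate on the shared subspace.
print(np.round(np.linalg.eigvalsh(D), 3))
```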
Kernels for Vector-Valued Functions: A Review <s> Invariant Kernels <s> In this paper, we consider a broad class of interpolation problems, for both scalarand vector-valued multivariate functions subject to linear side conditions, such as being divergence-free, where the data are generated via integration against compactly supported distributions. We show that, by using certain families of matrix-valued conditionally positive definite functions, such interpolation problems are well poised; that is, the interpolation matrices are invertible. As a sample result, we show that a divergence-free vector field can be interpolated by a linear combination of convolutions of the data-generating distributions with a divergence-free, 3 x 3 matrix-valued conditionally positive definite function. In addition, we obtain norm estimates for inverses of interpolation matrices that arise in a class of multivariate Hermite interpolation problems. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Invariant Kernels <s> Recently a new class of customized radial basis functions (RBFs) was introduced. We revisit this class of RBFs and derive a density result guaranteeing that any sufficiently smooth divergence-free function can be approximated arbitrarily closely by a linear combination of members of this class. This result has potential applications to numerically solving differential equations, such as fluid flows, whose solution is divergence free. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Invariant Kernels <s> In this paper we study a class of regularized kernel methods for multi-output learning which are based on filtering the spectrum of the kernel matrix. The considered methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting and other iterative schemes. Computational properties are discussed for various examples of kernels for vector-valued functions and the benefits of iterative techniques are illustrated. Generalizing previous results for the scalar case, we show a finite sample bound for the excess risk of the obtained estimator, which allows to prove consistency both for regression and multi-category classification. Finally, we present some promising results of the proposed algorithms on artificial and real data. <s> BIB003
Divergence-free and curl-free fields. The following two kernels are matrix-valued exponentiated quadratic (EQ) kernels BIB001 and can be used to estimate divergence-free or curl-free vector fields when the input and output spaces have the same dimension. These kernels induce a similarity between the vector field components that depends on the input points, and therefore cannot be reduced to the form K(x, x′) = k(x, x′)B. We consider the case of vector fields with D = p, where X = R^p. The divergence-free matrix-valued kernel can be defined via a translation-invariant matrix-valued EQ kernel

Φ_df(u) = (Hφ)(u) − tr((Hφ)(u)) I_p,

where u = x − x′, H is the Hessian operator and φ is a scalar EQ kernel, so that K(x, x′) = Φ_df(x − x′). The columns of the matrix-valued EQ kernel Φ_df are divergence-free. In fact, computing the divergence of a linear combination of its columns, ∇^⊤(Φ_df(u)c), with c ∈ R^p, it is possible to show that it is identically zero BIB003 ; the derivation applies the product rule of the gradient, the fact that the coefficient vector c does not depend upon u, and the equality a^⊤a a^⊤ = a^⊤ a^⊤ a for all a ∈ R^p. Choosing an exponentiated quadratic φ(u) = exp(−‖u‖²/(2σ²)), we obtain the divergence-free kernel

Φ_df(u) = (1/σ²) exp(−‖u‖²/(2σ²)) [ (uu^⊤/σ²) + ( (p − 1) − ‖u‖²/σ² ) I_p ].

The curl-free matrix-valued kernels are obtained as

Ψ_cf(u) = −(Hφ)(u),

where φ is a scalar RBF. It is easy to show that the columns of Ψ_cf are curl-free. The j-th column of Ψ_cf is given by Ψ_cf e_j, where e_j is the standard basis vector with a one in the j-th position. This gives us

Ψ_cf(u) e_j = −∇ ( ∂φ(u)/∂u_j ) = ∇g,

where g = −∂φ/∂u_j. The function g is a scalar function, and the curl of the gradient of a scalar function is always zero. Choosing an exponentiated quadratic, we obtain the following curl-free kernel

Ψ_cf(u) = (1/σ²) exp(−‖u‖²/(2σ²)) ( I_p − uu^⊤/σ² ).

It is possible to consider a convex linear combination of these two kernels to obtain a kernel for learning any kind of vector field, while at the same time allowing reconstruction of the divergence-free and curl-free parts separately (see ). The interested reader can refer to BIB001 BIB002 for further details on matrix-valued RBFs and the properties of divergence-free and curl-free kernels.

Transformable kernels. Another example of invariant kernels is given by kernels defined by transformations. For the purpose of our discussion, let Y = R^D, let X_0 be a Hausdorff space and let T_d be a family of maps (not necessarily linear) from X to X_0 for d ∈ {1, . . . , D}. Then, given a continuous scalar kernel k : X_0 × X_0 → R, it is possible to define the following matrix-valued kernel for any x, x′ ∈ X:

(K(x, x′))_{d,d′} = k(T_d x, T_{d′} x′).

A specific instance of the above example is described in the context of system identification; see also for further details.
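The closed forms above can be evaluated directly; the following sketch is our own illustration of the divergence-free and curl-free Gaussian kernels, with the scale σ and the test points chosen arbitrarily.

```python
import numpy as np

def eq(u, sigma):
    """Scalar exponentiated quadratic evaluated at the difference vector u."""
    return np.exp(-u @ u / (2.0 * sigma ** 2))

def div_free_kernel(x, xp, sigma=1.0):
    """Divergence-free matrix-valued Gaussian kernel, evaluated at u = x - x'."""
    u, p = x - xp, len(x)
    r2 = u @ u
    return (eq(u, sigma) / sigma ** 2) * (np.outer(u, u) / sigma ** 2
                                          + ((p - 1) - r2 / sigma ** 2) * np.eye(p))

def curl_free_kernel(x, xp, sigma=1.0):
    """Curl-free matrix-valued Gaussian kernel, evaluated at u = x - x'."""
    u, p = x - xp, len(x)
    return (eq(u, sigma) / sigma ** 2) * (np.eye(p) - np.outer(u, u) / sigma ** 2)

def mixed_kernel(x, xp, gamma=0.5, sigma=1.0):
    """Convex combination for fields with both divergence-free and curl-free parts."""
    return gamma * div_free_kernel(x, xp, sigma) + (1 - gamma) * curl_free_kernel(x, xp, sigma)

x, xp = np.array([0.3, -0.2]), np.array([0.0, 0.5])
print(div_free_kernel(x, xp), curl_free_kernel(x, xp))
```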
Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Abstract : The present paper may be considered as a sequel to our previous paper in the Proceedings of the Cambridge Philosophical Society, Theorie generale de noyaux reproduisants-Premiere partie (vol. 39 (1944)) which was written in 1942-1943. In the introduction to this paper we outlined the plan of papers which were to follow. In the meantime, however, the general theory has been developed in many directions, and our original plans have had to be changed. Due to wartime conditions we were not able, at the time of writing the first paper, to take into account all the earlier investigations which, although sometimes of quite a different character, were, nevertheless, related to our subject. Our investigation is concerned with kernels of a special type which have been used under different names and in different ways in many domains of mathematical research. We shall therefore begin our present paper with a short historical introduction in which we shall attempt to indicate the different manners in which these kernels have been used by various investigators, and to clarify the terminology. We shall also discuss the more important trends of the application of these kernels without attempting, however, a complete bibliography of the subject matter. (KAR) P. 2 <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> This article proposes a new approach to kriging, where a flexible family of variograms is used in lieu of one of the traditionally used parametric models. This nonparametric approach minimizes the problems of misspecifying the variogram model. The flexible variogram family is developed using the idea of a moving average function composed of many small rectangles for the one-dimensional case and many small boxes for the twodimensional case. Through simulation, we show that the use of flexible piecewise-linear models can result in lower mean squared prediction errors than the use of traditional models. We then use a flexible piecewise-planar variogram model as a step in kriging the two-dimensional Wolfcamp Aquifer data, without the need to assume that the underlying process is isotropic. We prove that, in one dimension, any continuous variogram with a sill can be approximated arbitrarily close by piecewise-linear variograms. We discuss ways in which the piecewise-linear variogram models can be modified to improve the fit of the variogram estimate near the origin. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> We consider best linear unbiased prediction for multivariable data. Minimizing mean-squared-prediction errors leads to prediction equations involving either covariances or variograms. We discuss problems with multivariate extensions that include the construction of valid models and the estimation of their parameters. In this paper, we develop new methods to construct valid crossvariograms, fit them to data, and then use them for multivariable spatial prediction, including cokriging. Crossvariograms are constructed by explicitly modeling spatial data as moving averages over white noise random processes. Parameters of the moving average functions may be inferred from the variogram, and with few additional parameters, crossvariogram models are constructed. Weighted least squares is then used to fit the crossvariogram model to the empirical crossvariogram for the data. 
We demonstrate the method for simulated data, and show a considerable advantage of cokriging over ordinary kriging. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> A continuous spatial model can be constructed by convolving a very simple, perhaps independent, process with a kernel or point spread function. This approach for constructing a spatial process offers a number of advantages over specification through a spatial covariogram. In particular, this process convolution specification leads to computational simplifications and easily extends beyond simple stationary models. This paper uses process convolution models to build space and space-time models that are flexible and able to accommodate large amounts of data. Data from environmental monitoring is considered. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Spatial processes are important models for many environmental problems. Classical geostatistics and Fourier spectral methods are powerful tools for stuyding the spatial structure of stationary processes. However, it is widely recognized that in real applications spatial processes are rarely stationary and isotropic. Consequently, it is important to extend these spectral methods to processes that are nonstationary. In this work, we present some new spectral approaches and tools to estimate the spatial structure of a nonstationary process. More specifically, we propose an approach for the spectral analysis of nonstationary spatial processes that is based on the concept of spatial spectra, i.e., spectral functions that are space-dependent. This notion of spatial spectra generalizes the definition of spectra for stationary processes, and under certain conditions, the spatial spectrum at each Location can be estimated from a single realization of the spatial process.The motivation for this work is the modeling... <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> We propose a nonstationary periodogram and various parametric approaches for estimating the spectral density of a nonstationary spatial process. We also study the asymptotic properties of the proposed estimators via shrinking asymptotics, assuming the distance between neighbouring observations tends to zero as the size of the observation region grows without bound. With this type of asymptotic model we can uniquely determine the spectral density, avoiding the aliasing problem. We also present a new class of nonstationary processes, based on a convolution of local stationary processes. This model has the advantage that the model is simultaneously defined everywhere, unlike 'moving window' approaches, but it retains the attractive property that, locally in small regions, it behaves like a stationary spatial process. Applications include the spatial analysis and modelling of air pollution data provided by the US Environmental Protection Agency. Copyright Biometrika Trust 2002, Oxford University Press. <s> BIB006 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Spatio-temporal processes can often be written as hierarchical state-space processes. In situations with complicated dynamics such as wave propagation, it is difficult to parameterize state transition functions for high-dimensional state processes. Although in some cases prior understanding of the physical process can be used to formulate models for the state transition, this is not always possible. 
Alternatively, for processes where one considers discrete time and continuous space, complicated dynamics can be modeled by stochastic integro-difference equations in which the associated redistribution kernel is allowed to vary with space and/or time. By considering a spectral implementation of such models, one can formulate a spatio-temporal model with relatively few parameters that can accommodate complicated dynamics. This approach can be developed in a hierarchical framework for non-Gaussian processes, as demonstrated on cloud intensity data. <s> BIB007 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> We introduce a class of nonstationary covariance functions for Gaussian process (GP) regression. Nonstationary covariance functions allow the model to adapt to functions whose smoothness varies with the inputs. The class includes a nonstationary version of the Matern stationary co-variance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in advance. In experiments, the nonstationary GP regression model performs well when the input space is two or three dimensions, outperforming a neural network model and Bayesian free-knot spline models, and competitive with a Bayesian neural network, but is outperformed in one dimension by a state-of-the-art Bayesian free-knot spline model. The model readily generalizes to non-Gaussian data. Use of computational methods for speeding GP fitting may allow for implementation of the method on larger datasets. <s> BIB008 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> There is increasing interest in predicting ecological processes. Methods to accomplish such predictions must account for uncertainties in observation, sampling, models, and parameters. Statistical methods for spatiotemporal processes are powerful, yet difficult to implement in complicated high-dimensional settings. However, recent advances in hierarchical formulations for such processes can be utilized for ecological prediction. These formulations are able to account for the various sources of uncertainty and can incorporate scientific judgment in a probabilistically consistent manner. In particular, analytical diffusion models can serve as motivation for the hierarchical model for invasive species. We demonstrate by example that such a framework can be utilized to predict, spatially and temporally, the relative population abundance of House Finches over the eastern United States. Corresponding Editor (ad hoc): J. S. Clark. <s> BIB009 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> This paper develops a process-convolution approach for space-time modelling. With this approach, a dependent process is constructed by convolving a simple, perhaps independent, process. Since the convolution kernel may evolve over space and time, this approach lends itself to specifying models with non-stationary dependence structure. The model is motivated by an application from oceanography: estimation of the mean temperature field in the North Atlantic Ocean as a function of spatial location and time. The large amount of this data poses some difficulties; hence computational considerations weigh heavily in some modelling aspects. A Bayesian approach is taken here which relies on Markov chain Monte Carlo for exploring the posterior distribution. 
<s> BIB010 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs. <s> BIB011 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Space-time data are ubiquitous in the environmental sciences. Often, as is the case with atmo- spheric and oceanographic processes, these data contain many different scales of spatial and temporal variability. Such data are often non-stationary in space and time and may involve many observation/prediction locations. These factors can limit the effectiveness of traditional space- time statistical models and methods. In this article, we propose the use of hierarchical space-time models to achieve more flexible models and methods for the analysis of environmental data distributed in space and time. The first stage of the hierarchical model specifies a measurement- error process for the observational data in terms of some 'state' process. The second stage allows for site-specific time series models for this state variable. This stage includes large-scale (e.g. seasonal) variability plus a space-time dynamic process for the ’anomalies'. Much of our interest is with this anomaly proc ess. In the third stage, the parameters of these time series models, which are distributed in space, are themselves given a joint distribution with spatial dependence (Markov random fields). The Bayesian formulation is completed in the last two stages by speci- fying priors on parameters. We implement the model in a Markov chain Monte Carlo framework and apply it to an atmospheric data set of monthly maximum temperature. <s> BIB012 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task. <s> BIB013 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. 
While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies. <s> BIB014 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Over the last decade, convolution-based models for spatial data have increased in popularity as a result of their flexibility in modeling spatial dependence and their ability to accommodate large datasets. The modeling flexibility is due to the framework’s moving-average construction that guarantees a valid (i.e., non-negative definite) spatial covariance function. This constructive approach to spatial modeling has been used (1) to provide an alternative to the standard classes of parametric variogram/covariance functions commonly used in geostatistics; (2) to specify Gaussian-process models with nonstationary and anisotropic covariance functions; and (3) to create non-Gaussian classes of models for spatial data. Beyond the flexible nature of convolution-based models, computational challenges associated with modeling large datasets can be alleviated in part through dimension reduction, where the dimension of the convolved process is less than the dimension of the spatial data. In this paper, we review various types of convolution-based models for spatial data and point out directions for future research. <s> BIB015 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> Gaussian processes have proved to be useful and powerful constructs for the purposes of regression. The classical method proceeds by parameterising a covariance function, and then infers the parameters given the training data. In this thesis, the classical approach is augmented by interpreting Gaussian processes as the outputs of linear filters excited by white noise. This enables a straightforward definition of dependent Gaussian processes as the outputs of a multiple output linear filter excited by multiple noise sources. We show how dependent Gaussian processes defined in this way can also be used for the purposes of system identification. Onewell known problemwith Gaussian process regression is that the computational complexity scales poorly with the amount of training data. We review one approximate solution that alleviates this problem, namely reduced rank Gaussian processes. We then show how the reduced rank approximation can be applied to allow for the efficient computation of dependent Gaussian processes. We then examine the application of Gaussian processes to the solution of other machine learning problems. To do so, we review methods for the parameterisation of full covariance matrices. Furthermore, we discuss how improvements can be made by marginalising over alternative models, and introduce methods to perform these computations efficiently. 
In particular, we introduce sequential annealed importance sampling as a method for calculating model evidence in an on-line fashion as new data arrives. Gaussian process regression can also be applied to optimisation. An algorithm is described that uses model comparison between multiple models to find the optimum of a function while taking as few samples as possible. This algorithm shows impressive performance on the standard control problem of double pole balancing. Finally, we describe how Gaussian processes can be used to efficiently estimate gradients of noisy functions, and numerically estimate integrals. <s> BIB016 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network. <s> BIB017 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> This paper introduces methods for probabilistic uncertainty analysis of a frequency response function (FRF) of a structure obtained via a finite element (FE) model. The methods are applicable to computationally expensive FE models, making use of a Bayesian metamodel known as an emulator. The emulator produces fast predictions of the FE model output, but also accounts for the additional uncertainty induced by only having a limited number of model evaluations. Two approaches to the probabilistic uncertainty analysis of FRFs are developed. The first considers the uncertainty in the response at discrete frequencies, giving pointwise uncertainty intervals. The second considers the uncertainty in an entire FRF across a frequency range, giving an uncertainty envelope function. The methods are demonstrated and compared to alternative approaches in a practical case study. <s> BIB018 </s> Kernels for Vector-Valued Functions: A Review <s> Process Convolutions <s> In this paper we study a class of regularized kernel methods for multi-output learning which are based on filtering the spectrum of the kernel matrix. The considered methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting and other iterative schemes. Computational properties are discussed for various examples of kernels for vector-valued functions and the benefits of iterative techniques are illustrated. Generalizing previous results for the scalar case, we show a finite sample bound for the excess risk of the obtained estimator, which allows to prove consistency both for regression and multi-category classification. Finally, we present some promising results of the proposed algorithms on artificial and real data. <s> BIB019
More general non-separable kernels can also be constructed from a generative point of view. We saw in section 4.2.1 that the linear model of coregionalization involves instantaneous mixing through a linear weighted sum of independent processes to construct correlated processes. By instantaneous mixing we mean that the output function f(x) evaluated at the input point x only depends on the values of the latent functions {u_q(x)}_{q=1}^Q at the same input x. This instantaneous mixing leads to a kernel function for vector-valued functions that has a separable form. A non-trivial way to mix the latent functions is through convolving a base process with a smoothing kernel BIB001 . If the base process is a Gaussian process, it turns out that the convolved process is also a Gaussian process. We can therefore exploit convolutions to construct covariance functions BIB002 BIB003 BIB010 BIB004 BIB011 BIB014 BIB017 . In a similar way to the linear model of coregionalization, we consider Q groups of functions, where a particular group q has elements u^i_q(z), for i = 1, . . . , R_q. Each member of the group has the same covariance k_q(x, x′), but is sampled independently. Any output f_d(x) is described by

f_d(x) = Σ_{q=1}^{Q} Σ_{i=1}^{R_q} ∫_X G^i_{d,q}(x − z) u^i_q(z) dz + w_d(x) = Σ_{q=1}^{Q} f^q_d(x) + w_d(x),

where {w_d(x)}_{d=1}^D are independent Gaussian processes with zero mean and covariance k_{w_d}(x, x′). For the integrals in the equation above to exist, it is assumed that each kernel G^i_{d,q}(x) is a continuous function with compact support or square-integrable BIB003 BIB004 . The kernel G^i_{d,q}(x) is also known as the moving average function BIB003 or the smoothing kernel BIB004 . We have included the superscript q for f^q_d(x) in the equation above to emphasize the fact that the function depends on the set of latent processes {u^i_q(x)}_{i=1}^{R_q}. The latent functions u^i_q(z) are Gaussian processes with general covariances k_q(x, x′). Under the same independence assumptions used in the linear model of coregionalization, the covariance between f_d(x) and f_{d′}(x′) follows

cov[f_d(x), f_{d′}(x′)] = Σ_{q=1}^{Q} cov[f^q_d(x), f^q_{d′}(x′)] + k_{w_d}(x, x′) δ_{d,d′},

where

cov[f^q_d(x), f^q_{d′}(x′)] = Σ_{i=1}^{R_q} ∫_X G^i_{d,q}(x − z) ∫_X G^i_{d′,q}(x′ − z′) k_q(z, z′) dz′ dz.

Specifying G^i_{d,q}(x − z) and k_q(z, z′) in the equation above, the covariance for the outputs f_d(x) can be constructed indirectly. Notice that if the smoothing kernels are taken to be the Dirac delta function, such that G^i_{d,q}(x − z) = a^i_{d,q} δ(x − z), the double integral is easily solved and the linear model of coregionalization is recovered. In this respect, process convolutions could also be seen as a dynamic version of the linear model of coregionalization in the sense that the latent functions are dynamically transformed with the help of the kernel smoothing functions, as opposed to a static mapping of the latent functions in the LMC case. See section 5.3.1 for a comparison between the process convolution and the LMC. A recent review of several extensions of this approach for the single output case is presented in BIB015 . Some of those extensions include the construction of nonstationary covariances BIB010 BIB005 BIB006 BIB008 and spatiotemporal covariances BIB012 BIB007 BIB009 . The idea of using convolutions for constructing multiple output covariances was originally proposed by BIB003 . They assumed that Q = 1, R_q = 1, that the process u(x) was white Gaussian noise and that the input space was X = R^p. BIB004 depicted a similar construction to the one introduced by BIB003 , but partitioned the input space into disjoint subsets X = ∪_{d=0}^{D} X_d, allowing dependence between the outputs only in certain subsets of the input space where the latent process was common to all convolutions BIB002 .
Higdon BIB004 coined the term process convolution for the general moving average construction used to develop the covariance function in equation (27). Boyle and Frean BIB011 introduced the process convolution approach for multiple outputs to the machine learning community under the name "dependent Gaussian processes" (DGP), further developed in BIB016 ; they allow the number of latent functions to be greater than one (Q ≥ 1). In BIB014 and BIB017 , the latent processes {u_q(x)}_{q=1}^Q followed a more general Gaussian process that goes beyond the white noise assumption. Figure 6 shows an example of the instantaneous mixing effect obtained in the ICM and the LMC, and the non-instantaneous mixing effect due to the process convolution framework. We sampled twice from a two-output Gaussian process with an ICM covariance with R_q = 1 (first column), an LMC covariance with R_q = 2 (second column) and a process convolution covariance with R_q = 1 and Q = 1 (third column). As in the examples for the LMC, we use EQ kernels for the basic kernels k_q(x, x′). We also use an exponentiated quadratic form for the smoothing kernel functions G^1_{1,1}(x − x′) and G^1_{2,1}(x − x′) and assume that the latent function is white Gaussian noise.
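As a complementary illustration of the covariance construction above, the following sketch (Python with NumPy; the one-dimensional grids, length-scales and the choice Q = 1, R_q = 1 are assumptions made only for this example) approximates the double integral by quadrature, assembles the joint covariance of two outputs, verifies that it is positive semi-definite, and draws one correlated joint sample.

```python
import numpy as np

# Illustrative grids and length-scales (assumptions for this sketch), not values from the text.

def eq(a, b, ell):
    """EQ function of the difference a - b, evaluated on all pairs of two 1-D grids."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.linspace(-3.0, 3.0, 25)       # inputs at which the two outputs are evaluated
Z = np.linspace(-6.0, 6.0, 400)      # quadrature grid for the latent process u(z)
dz = Z[1] - Z[0]

G1 = eq(X, Z, ell=0.4)               # smoothing kernel G_1(x - z) for output 1
G2 = eq(X, Z, ell=1.0)               # smoothing kernel G_2(x - z) for output 2
Kq = eq(Z, Z, ell=0.6)               # latent covariance k_q(z, z')

# Discretised double integral:
# K_{dd'}(x, x') ~ sum_{m,n} G_d(x - z_m) k_q(z_m, z_n) G_{d'}(x' - z_n) dz^2
K11 = G1 @ Kq @ G1.T * dz**2
K12 = G1 @ Kq @ G2.T * dz**2
K22 = G2 @ Kq @ G2.T * dz**2

# Joint covariance over both outputs, with one block per pair of outputs.
K = np.block([[K11, K12], [K12.T, K22]])
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())  # non-negative up to round-off

# One correlated joint sample of the two outputs.
jitter = 1e-8 * np.eye(K.shape[0])
sample = np.random.default_rng(1).multivariate_normal(np.zeros(K.shape[0]), K + jitter)
f1, f2 = sample[:X.size], sample[X.size:]
```

Replacing Kq by a discretised Dirac delta (the identity divided by dz) recovers the single-integral, white-noise construction, while keeping a general Kq corresponds to the more general latent Gaussian processes mentioned above.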
Kernels for Vector-Valued Functions: A Review <s> Other Approaches Related to Process Convolutions <s> Spatial processes are important models for many environmental problems. Classical geostatistics and Fourier spectral methods are powerful tools for stuyding the spatial structure of stationary processes. However, it is widely recognized that in real applications spatial processes are rarely stationary and isotropic. Consequently, it is important to extend these spectral methods to processes that are nonstationary. In this work, we present some new spectral approaches and tools to estimate the spatial structure of a nonstationary process. More specifically, we propose an approach for the spectral analysis of nonstationary spatial processes that is based on the concept of spatial spectra, i.e., spectral functions that are space-dependent. This notion of spatial spectra generalizes the definition of spectra for stationary processes, and under certain conditions, the spatial spectrum at each Location can be estimated from a single realization of the spatial process.The motivation for this work is the modeling... <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Other Approaches Related to Process Convolutions <s> We propose a nonstationary periodogram and various parametric approaches for estimating the spectral density of a nonstationary spatial process. We also study the asymptotic properties of the proposed estimators via shrinking asymptotics, assuming the distance between neighbouring observations tends to zero as the size of the observation region grows without bound. With this type of asymptotic model we can uniquely determine the spectral density, avoiding the aliasing problem. We also present a new class of nonstationary processes, based on a convolution of local stationary processes. This model has the advantage that the model is simultaneously defined everywhere, unlike 'moving window' approaches, but it retains the attractive property that, locally in small regions, it behaves like a stationary spatial process. Applications include the spatial analysis and modelling of air pollution data provided by the US Environmental Protection Agency. Copyright Biometrika Trust 2002, Oxford University Press. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Other Approaches Related to Process Convolutions <s> Kernel methods have been very popular in the machine learning literature in the last ten years, mainly in the context of Tikhonov regularization algorithms. In this paper we study a coherent Bayesian kernel model based on an integral operator defined as the convolution of a kernel with a signed measure. Priors on the random signed measures correspond to prior distributions on the functions mapped by the integral operator. We study several classes of signed measures and their image mapped by the integral operator. In particular, we identify a general class of measures whose image is dense in the reproducing kernel Hilbert space (RKHS) induced by the kernel. A consequence of this result is a function theoretic foundation for using non-parametric prior specifications in Bayesian modeling, such as Gaussian process and Dirichlet process prior distributions. We discuss the construction of priors on spaces of signed measures using Gaussian and Levy processes, with the Dirichlet processes being a special case the latter. 
Computational issues involved with sampling from the posterior distribution are outlined for a univariate regression and a high dimensional classification problem. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Other Approaches Related to Process Convolutions <s> Soil pollution data collection typically studies multivariate measurements at sampling locations, e.g., lead, zinc, copper or cadmium levels. With increased collection of such multivariate geostatistical spatial data, there arises the need for flexible explanatory stochastic models. Here, we propose a general constructive approach for building suitable models based upon convolution of covariance functions. We begin with a general theorem which asserts that, under weak conditions, cross convolution of covariance functions provides a valid cross covariance function. We also obtain a result on dependence induced by such convolution. Since, in general, convolution does not provide closed-form integration, we discuss efficient computation. We then suggest introducing such specification through a Gaussian process to model multivariate spatial random effects within a hierarchical model. We note that modeling spatial random effects in this way is parsimonious relative to say, the linear model of coregionalization. Through a limited simulation, we informally demonstrate that performance for these two specifications appears to be indistinguishable, encouraging the parsimonious choice. Finally, we use the convolved covariance model to analyze a trivariate pollution dataset from California. <s> BIB004
In BIB004 , a different moving average construction for the covariance of multiple outputs was introduced. It is obtained as a convolution over covariance functions, in contrast to the process convolution approach where the convolution is performed over processes. Assuming that the covariances involved are isotropic and the only latent function u(x) is white Gaussian noise, BIB004 show that the cross-covariance obtained from

cov[f_d(x), f_{d′}(x′)] = ∫_X k_d(x − x′ − z) k_{d′}(z) dz,

where k_d(h) and k_{d′}(h) are covariances associated to the outputs d and d′, leads to a valid covariance function for the outputs {f_d(x)}_{d=1}^D. If we assume that the smoothing kernels are not only square integrable, but also positive definite functions, then the covariance convolution approach turns out to be a particular case of the process convolution approach (square-integrability might be easier to satisfy than positive definiteness). [67] introduced the idea of transforming a Gaussian process prior using a discretized process convolution, f_d = G_d u, where G_d ∈ R^{N×M} has entries {G_d(x_n − x_m)}_{n=1,m=1}^{N,M} and u⊤ = [u(x_1), . . . , u(x_M)]. Such a transformation could be applied for the purposes of fusing the information from multiple sensors, for solving inverse problems in reconstruction of images, or for reducing computational complexity by working with the filtered data in the transformed space. Convolutions with general Gaussian processes for modelling single outputs were also proposed by BIB001 BIB002 , but instead of the continuous convolution, BIB001 BIB002 used a discrete convolution. The purpose in BIB001 BIB002 was to develop a spatially varying covariance for single outputs, by allowing the parameters of the covariance of a base process to change as a function of the input domain. Process convolutions are closely related to the Bayesian kernel method BIB003 , which constructs reproducing kernel Hilbert spaces (RKHS) by assigning priors to signed measures and mapping these measures through integral operators. In particular, define the following space of functions,

F = { f | f(x) = ∫_X k(x, z) γ(dz), γ ∈ Γ },

for some space Γ ⊆ B(X) of signed Borel measures. In [77, proposition 1] , the authors show that for Γ = B(X), the space of all signed Borel measures, F corresponds to a RKHS. Examples of these measures that appear in the form of stochastic processes include Gaussian processes, Dirichlet processes and Lévy processes. In principle, we can extend this framework to the multiple output case, expressing each output as the image of a signed measure under a corresponding integral operator.
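The covariance-convolution idea can be checked numerically in one dimension. The sketch below (Python with NumPy; the length-scales, grids and the particular convolution convention are assumptions of the example) builds the cross-covariance of two outputs as the quadrature convolution of their EQ covariances, verifies positive semi-definiteness of the resulting joint covariance matrix, and compares the quadrature against the closed-form convolution of two EQ functions.

```python
import numpy as np

# Illustrative length-scales and grids (assumptions for this sketch), not values from the text.

def k_eq(h, ell):
    """Isotropic EQ covariance as a function of the lag h."""
    return np.exp(-0.5 * h**2 / ell**2)

def conv_cov(h, ell_a, ell_b, t):
    """Quadrature approximation of (k_a * k_b)(h) = int k_a(h - t) k_b(t) dt."""
    dt = t[1] - t[0]
    return np.sum(k_eq(h - t, ell_a) * k_eq(t, ell_b)) * dt

ell1, ell2 = 0.5, 1.2
t_grid = np.linspace(-10.0, 10.0, 2001)
X = np.linspace(-3.0, 3.0, 20)

def block(ell_a, ell_b):
    """Covariance block C_{dd'}(x - x') = (k_d * k_{d'})(x - x') on the inputs X."""
    return np.array([[conv_cov(xi - xj, ell_a, ell_b, t_grid) for xj in X] for xi in X])

C11, C12, C22 = block(ell1, ell1), block(ell1, ell2), block(ell2, ell2)
C = np.block([[C11, C12], [C12.T, C22]])
print("smallest eigenvalue:", np.linalg.eigvalsh(C).min())  # non-negative up to quadrature error

# For EQ covariances the convolution is also available in closed form,
# which gives a sanity check on the quadrature.
def conv_cov_closed(h, ell_a, ell_b):
    s2 = ell_a**2 + ell_b**2
    return np.sqrt(2.0 * np.pi * ell_a**2 * ell_b**2 / s2) * np.exp(-0.5 * h**2 / s2)

print("quadrature:", conv_cov(0.7, ell1, ell2, t_grid))
print("closed form:", conv_cov_closed(0.7, ell1, ell2))
```

The diagonal blocks are self-convolutions of each output covariance, which is what makes the joint matrix valid in this construction.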
Kernels for Vector-Valued Functions: A Review <s> Estimation of Parameters in Regularization Theory <s> We consider the problem of learning in an environment of classification tasks. Tasks sampled from the environment are used to improve classification performance on future tasks. We consider situations in which the tasks can be divided into groups. Tasks within each group are related by sharing a low dimensional representation, which differs across the groups. We present an algorithm which divides the sampled tasks into groups and computes a common representation for each group. We report experiments on a synthetic and two image data sets, which show the advantage of the approach over single-task learning and a previous transfer learning method. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Estimation of Parameters in Regularization Theory <s> In this paper we study a class of regularized kernel methods for multi-output learning which are based on filtering the spectrum of the kernel matrix. The considered methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting and other iterative schemes. Computational properties are discussed for various examples of kernels for vector-valued functions and the benefits of iterative techniques are illustrated. Generalizing previous results for the scalar case, we show a finite sample bound for the excess risk of the obtained estimator, which allows to prove consistency both for regression and multi-category classification. Finally, we present some promising results of the proposed algorithms on artificial and real data. <s> BIB002
From a regularization perspective, once the kernel is fixed, to find a solution we need to solve a linear system of equations. The regularization parameter as well as the possible kernel parameters are typically tuned via cross-validation. The kernel free parameters are usually reduced to one or two scalars (e.g. the width of a scalar kernel). When considering, for example, separable kernels, the matrix B is fixed by design rather than learned, and the only free parameters are those of the scalar kernel. Solving this problem, that is computing c = (K(X, X) + λN I)^{−1} y, is in general a costly operation both in terms of memory and time. When we have to solve the problem for a single value of λ, Cholesky decomposition is the method of choice, while when we want to compute the solution for different values of λ (for example to perform cross-validation), singular value decomposition (SVD) is the method of choice. In both cases the worst-case complexity is O(D³N³) (with a larger constant for the SVD) and the associated storage requirement is O(D²N²). As observed in BIB002 , this computational burden can be greatly reduced for separable kernels. For example, if we consider the kernel K(x, x′) = k(x, x′)I, the kernel matrix K(X, X) becomes block diagonal. In particular, if the input points are the same, all the blocks are equal and the problem reduces to inverting an N by N matrix. The simple example above serves as a prototype for the more general case of a kernel of the form K(x, x′) = k(x, x′)B. The point is that for this class of kernels, we can use the eigen-system of the matrix B to define a new coordinate system where the kernel matrix becomes block diagonal. We start by observing that if we denote with (σ1, u1), . . . , (σD, uD) the eigenvalues and eigenvectors of B, we can write the matrix C = (c1, . . . , cN), with ci ∈ R^D, as

C = Σ_{d=1}^{D} c̃^d ⊗ u_d,

where c̃^d = (⟨c_1, u_d⟩_D, . . . , ⟨c_N, u_d⟩_D), ⊗ is the tensor product, and similarly Ỹ = Σ_{d=1}^{D} ỹ^d ⊗ u_d, with ỹ^d = (⟨y_1, u_d⟩_D, . . . , ⟨y_N, u_d⟩_D). The above transformations are simply rotations in the output space. Moreover, for the considered class of kernels, the kernel matrix K(X, X) is given by the tensor product of the N × N scalar kernel matrix k(X, X) and B, that is K(X, X) = B ⊗ k(X, X). Then we have the following equalities

K(X, X)C + λN C = Σ_{d=1}^{D} σ_d (k(X, X) c̃^d) ⊗ u_d + λN Σ_{d=1}^{D} c̃^d ⊗ u_d = Σ_{d=1}^{D} ỹ^d ⊗ u_d.

Since the eigenvectors u_j are orthonormal, it follows that

σ_d k(X, X) c̃^d + λN c̃^d = ỹ^d,

for d = 1, . . . , D. The above equation shows that in the new coordinate system we have to solve D essentially independent problems after rescaling each kernel matrix by σ_d, or equivalently rescaling the regularization parameter (and the outputs). The above calculation shows that all kernels of this form allow for a simple implementation at the price of the eigen-decomposition of the matrix B. Then we see that the computational cost is now essentially O(D³) + O(N³), as opposed to O(D³N³), in the general case. Also, it shows that the coupling among the different tasks can be seen as a rotation and rescaling of the output points. Stegle et al. also applied this approach in the context of fitting matrix-variate Gaussian models with spherical noise.
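A minimal sketch of the change of coordinates just described (Python with NumPy; the synthetic data, the random positive definite B and the EQ scalar kernel are assumptions made for illustration): it solves the D rotated N × N problems obtained from the eigen-decomposition of B and checks that the result matches the direct solve of the full DN × DN system.

```python
import numpy as np

# Synthetic problem sizes and data (assumptions for this sketch), not values from the text.
rng = np.random.default_rng(0)
N, D, lam = 40, 3, 0.1

X = rng.uniform(-2.0, 2.0, size=(N, 1))
Y = rng.normal(size=(N, D))
A = rng.normal(size=(D, D))
B = A @ A.T + 0.1 * np.eye(D)                  # positive definite coupling matrix

# Scalar EQ kernel matrix k(X, X).
Kx = np.exp(-0.5 * (X - X.T)**2)

# Direct solve of the full DN x DN system (B kron k(X,X) + lam N I) c = y.
K_full = np.kron(B, Kx)
y_stack = Y.flatten(order="F")                 # outputs stacked one output-block at a time
c_direct = np.linalg.solve(K_full + lam * N * np.eye(D * N), y_stack)
C_direct = c_direct.reshape(N, D, order="F")

# Rotated solve: D independent N x N problems in the eigen-basis of B.
sigmas, U = np.linalg.eigh(B)                  # B = U diag(sigmas) U^T
Y_tilde = Y @ U                                # rotated outputs
C_tilde = np.empty((N, D))
for d in range(D):
    # (sigma_d k(X,X) + lam N I) ctilde_d = ytilde_d
    C_tilde[:, d] = np.linalg.solve(sigmas[d] * Kx + lam * N * np.eye(N), Y_tilde[:, d])
C_rotated = C_tilde @ U.T                      # rotate back to the original output coordinates

print("max discrepancy:", np.abs(C_direct - C_rotated).max())  # ~ 1e-12
```

Only N × N systems (plus the D × D eigen-decomposition of B) are touched in the rotated solve, which is the source of the computational saving discussed above.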
Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> The geostatistical analysis of multivariate data involves choosing and fitting theoretical models to the empirical matrix. This paper considers the specific case of the model of linear coregionalization, and describes an automated procedure for fitting models, that are adequate in the mathematical sense, using a least-squares like technique. It also describes how to decide whether the number of parameters of the cross-variogram matrix model should be reduced to improve stability of fit. The procedure is illustrated with an analysis of the spatial relations among the physical properties of an alluvial soil. The results show the main influence of the scale and the shape of the basic models on the goodness of fit. The choice of the number of basic models appears of secondary importance, though it greatly influences the resulting interpretation of the coregionalization analysis. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Normal cross-variograms cannot be estimated from data in the usual way when there are only a few points where both variables have been measured. But the experimental pseudo cross-variogram can be computed even where there are no matching sampling points, and this appears as its principal advantage. The pseudo cross-variogram may be unbounded, though for its existence the intrinsic hypothesis alone is not a sufficient stationarity condition. In addition the differences between the two random processes must be second order stationary. Modeling the function by linear coregionalization reflects the more restrictive stationarity condition: the pseudo cross-variogram can be unbounded only if the unbounded correlation structures are the same in all variograms. As an alternative to using the pseudo cross-variogram a new method is presented that allows estimating the normal cross variogram from data where only one variable has been measured at a point. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Generalized cross-covariances describe the linear relationships between spatial variables observed at different locations. They are invariant under translation of the locations for any intrinsic processes, they determine the cokriging predictors without additional assumptions and they are unique up to linear functions. If the model is stationary, that is if the variograms are bounded, they correspond to the stationary cross-covariances. Under some symmetry condition they are equal to minus the usual cross-variogram. We present a method to estimate these generalized cross-covariances from data observed at arbitrary sampling locations. In particular we do not require that all variables are observed at the same points. For fitting a linear coregionalization model we combine this new method with a standard algorithm which ensures positive definite coregionalization matrices. We study the behavior of the method both by computing variances exactly and by simulating from various models. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We consider best linear unbiased prediction for multivariable data. Minimizing mean-squared-prediction errors leads to prediction equations involving either covariances or variograms. 
We discuss problems with multivariate extensions that include the construction of valid models and the estimation of their parameters. In this paper, we develop new methods to construct valid crossvariograms, fit them to data, and then use them for multivariable spatial prediction, including cokriging. Crossvariograms are constructed by explicitly modeling spatial data as moving averages over white noise random processes. Parameters of the moving average functions may be inferred from the variogram, and with few additional parameters, crossvariogram models are constructed. Weighted least squares is then used to fit the crossvariogram model to the empirical crossvariogram for the data. We demonstrate the method for simulated data, and show a considerable advantage of cokriging over ordinary kriging. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> A continuous spatial model can be constructed by convolving a very simple, perhaps independent, process with a kernel or point spread function. This approach for constructing a spatial process offers a number of advantages over specification through a spatial covariogram. In particular, this process convolution specification leads to computational simplifications and easily extends beyond simple stationary models. This paper uses process convolution models to build space and space-time models that are flexible and able to accommodate large amounts of data. Data from environmental monitoring is considered. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d2), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities ('error bars'), allows for Bayesian model selection and is less complex in implementation. <s> BIB006 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Abstract It is not simple to model cross and auto-variograms to describe the covariation of two or more soil properties, since the models that are fitted must meet certain constraints. These constraints are most readily met by fitting a linear model of coregionalization (LMCR). This presents practical problems. Not all combinations of authorized variogram functions constitute a LMCR. This paper presents a method for automated fitting of variogram functions to auto and cross-variogram estimates, subject to the constraints of the LMCR. The method uses simulated annealing to minimize a weighted sum of squares between the observed and modelled variograms. The method was applied to some data on soil. It was found to be robust to the initial choice of variogram parameters. Practical methods for setting up a good cooling schedule for the simulated annealing are discussed. 
<s> BIB007 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We introduce a class of nonstationary covariance functions for Gaussian process (GP) regression. Nonstationary covariance functions allow the model to adapt to functions whose smoothness varies with the inputs. The class includes a nonstationary version of the Matern stationary co-variance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in advance. In experiments, the nonstationary GP regression model performs well when the input space is two or three dimensions, outperforming a neural network model and Bayesian free-knot spline models, and competitive with a Bayesian neural network, but is outperformed in one dimension by a state-of-the-art Bayesian free-knot spline model. The model readily generalizes to non-Gaussian data. Use of computational methods for speeding GP fitting may allow for implementation of the method on larger datasets. <s> BIB008 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> In geostatistical studies, the fitting of the linear model of coregionalization (LMC) to direct and cross experimental semivariograms is usually performed with a weighted least-squares (WLS) procedure based on the number of pairs of observations at each lag. So far, no study has investigated the efficiency of other least-squares procedures, such as ordinary least squares (OLS), generalized least squares (GLS), and WLS with other weighing functions, in the context of the LMC. In this article, we compare the statistical properties of the sill estimators obtained with eight least-squares procedures for fitting the LMC: OLS, four WLS, and three GLS. The WLS procedures are based on approximations of the variance of semivariogram estimates at each distance lag. The GLS procedures use a variance–covariance matrix of semivariogram estimates that is (i) estimated using the fourth-order moments with sill estimates (GLS1), (ii) calculated using the fourth-order moments with the theoretical sills (GLS2), and (iii) based on an approximation using the correlation between semivariogram estimates in the case of spatial independence of the observations (GLS3). The current algorithm for fitting the LMC by WLS while ensuring the positive semidefiniteness of sill matrix estimates is modified to include any least-squares procedure. A Monte Carlo study is performed for 16 scenarios corresponding to different combinations of the number of variables, number of spatial structures, values of ranges, and scale dependence of the correlations among variables. Simulation results show that the mean square error is accounted for mostly by the variance of the sill estimators instead of their squared bias. Overall, the estimated GLS1 and theoretical GLS2 are the most efficient, followed by the WLS procedure that is based on the number of pairs of observations and the average distance at each lag. On that basis, GLS1 can be recommended for future studies using the LMC. <s> BIB009 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Models for the analysis of multivariate spatial data are receiving increased attention these days. In many applications it will be preferable to work with multivariate spatial processes to specify such models. A critical specification in providing these models is the cross covariance function. 
Constructive approaches for developing valid cross-covariance functions offer the most practical strategy for doing this. These approaches include separability, kernel convolution or moving average methods, and convolution of covariance functions. We review these approaches but take as our main focus the computationally manageable class referred to as the linear model of coregionalization (LMC). We introduce a fully Bayesian development of the LMC. We offer clarification of the connection between joint and conditional approaches to fitting such models including prior specifications. However, to substantially enhance the usefulness of such modelling we propose the notion of a spatially varying LMC (SVLMC) providing a very rich class of multivariate nonstationary processes with simple interpretation. We illustrate the use of our proposed SVLMC with application to more than 600 commercial property transactions in three quite different real estate markets, Chicago, Dallas and San Diego. Bivariate nonstationary process models are developed for income from and selling price of the property. <s> BIB010 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs. <s> BIB011 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Models for spatial autocorrelation and cross-correlation depend on the distance and direction separating two locations, and are constrained so that for all possible sets of locations, the covariance matrices implied from the models remain nonnegative-definite. Based on spatial correlation, optimal linear predictors can be constructed that yield complete maps of spatial fields from incomplete and noisy spatial data. This methodology is called kriging if the data are of only one variable type, and it is called cokriging if it is of two or more variable types. Historically, to satisfy the nonnegative-definite condition, cokriging has used coregionalization models for cross-variograms, even though this class of models is not very flexible. Recent research has shown that moving-average functions may be used to generate a large class of valid, flexible variogram models, and that they can also be used to generate valid cross-variograms that are compatible with component variograms. There are several problems wit... <s> BIB012 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MTIVM) saves computation by greedily selecting the most informative examples from the separate tasks. 
The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task. <s> BIB013 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We propose a semiparametric model for regression and classification problems involving multiple response variables. The model makes use of a set of Gaussian processes to model the relationship to the inputs in a nonparametric fashion. Conditional dependencies between the responses can be captured through a linear mixture of the driving processes. This feature becomes important if some of the responses of predictive interest are less densely supplied by observed data than related auxiliary ones. We propose an efficient approximate inference scheme for this semiparametric model whose complexity is linear in the number of training data points. <s> BIB014 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task. <s> BIB015 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning the covariance function hyperparameters and the support set. We propose a method for learning hyperparameters for a given support set. We also review the Sparse Greedy GP (SGGP) approximation (Smola and Bartlett, 2001), which is a way of learning the support set for given hyperparameters based on approximating the posterior. We propose an alternative method to the SGGP that has better generalization capabilities. Finally we make experiments to compare the different ways of training a RRGP. 
We provide some Matlab code for learning RRGPs. <s> BIB016 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear theoretically justified ranking of the closeness of the known approximations to the corresponding full GPs. Finally we point directly to designs of new better sparse approximations, combining the best of the existing strategies, within attractive computational constraints. <s> BIB017 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets. <s> BIB018 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Gaussian processes have proved to be useful and powerful constructs for the purposes of regression. The classical method proceeds by parameterising a covariance function, and then infers the parameters given the training data. In this thesis, the classical approach is augmented by interpreting Gaussian processes as the outputs of linear filters excited by white noise. This enables a straightforward definition of dependent Gaussian processes as the outputs of a multiple output linear filter excited by multiple noise sources. We show how dependent Gaussian processes defined in this way can also be used for the purposes of system identification. Onewell known problemwith Gaussian process regression is that the computational complexity scales poorly with the amount of training data. We review one approximate solution that alleviates this problem, namely reduced rank Gaussian processes. We then show how the reduced rank approximation can be applied to allow for the efficient computation of dependent Gaussian processes. We then examine the application of Gaussian processes to the solution of other machine learning problems. To do so, we review methods for the parameterisation of full covariance matrices. Furthermore, we discuss how improvements can be made by marginalising over alternative models, and introduce methods to perform these computations efficiently. In particular, we introduce sequential annealed importance sampling as a method for calculating model evidence in an on-line fashion as new data arrives. Gaussian process regression can also be applied to optimisation. 
An algorithm is described that uses model comparison between multiple models to find the optimum of a function while taking as few samples as possible. This algorithm shows impressive performance on the standard control problem of double pole balancing. Finally, we describe how Gaussian processes can be used to efficiently estimate gradients of noisy functions, and numerically estimate integrals. <s> BIB019 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> This work focuses on combining observations from field experiments with detailed computer simulations of a physical process to carry out statistical inference. Of particular interest here is determining uncertainty in resulting predictions. This typically involves calibration of parameters in the computer simulator as well as accounting for inadequate physics in the simulator. The problem is complicated by the fact that simulation code is sufficiently demanding that only a limited number of simulations can be carried out. We consider applications in characterizing material properties for which the field data and the simulator output are highly multivariate. For example, the experimental data and simulation output may be an image or may describe the shape of a physical object. We make use of the basic framework of Kennedy and O'Hagan. However, the size and multivariate nature of the data lead to computational challenges in implementing the framework. To overcome these challenges, we make use of basis repre... <s> BIB020 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> One of the challenges with emulating the response of a multivariate function to its inputs is the quantity of data that must be assimilated, which is the product of the number of model evaluations and the number of outputs. This article shows how even large calculations can be made tractable. It is already appreciated that gains can be made when the emulator residual covariance function is treated as separable in the model-inputs and model-outputs. Here, an additional simplification on the structure of the regressors in the emulator mean function allows very substantial further gains. The result is that it is now possible to emulate rapidly—on a desktop computer—models with hundreds of evaluations and hundreds of outputs. This is demonstrated through calculating costs in floating-point operations, and in an illustration. Even larger sets of outputs are possible if they have additional structure, for example, spatial-temporal. <s> BIB021 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We consider the problem of learning in an environment of classification tasks. Tasks sampled from the environment are used to improve classification performance on future tasks. We consider situations in which the tasks can be divided into groups. Tasks within each group are related by sharing a low dimensional representation, which differs across the groups. We present an algorithm which divides the sampled tasks into groups and computes a common representation for each group. We report experiments on a synthetic and two image data sets, which show the advantage of the approach over single-task learning and a previous transfer learning method. 
<s> BIB022 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> The CRASH computer model simulates the effect of a vehicle colliding against different barrier types. If it accurately represents real vehicle crashworthiness, the computer model can be of great value in various aspects of vehicle design, such as the setting of timing of air bag releases. The goal of this study is to address the problem of validating the computer model for such design goals, based on utilizing computer model runs and experimental data from real crashes. This task is complicated by the fact that (i) the output of this model consists of smooth functional data, and (ii) certain types of collision have very limited data. We address problem (i) by extending existing Gaussian process-based methodology developed for models that produce real-valued output, and resort to Bayesian hierarchical modeling to attack problem (ii). Additionally, we show how to formally test if the computer model reproduces reality. Supplemental materials for the article are available online. <s> BIB023 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network. <s> BIB024 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Computer models are widely used in scientific research to study and predict the behaviour of complex systems. The run times of computer-intensive simulators are often such that it is impractical to make the thousands of model runs that are conventionally required for sensitivity analysis, uncertainty analysis or calibration. In response to this problem, highly efficient techniques have recently been developed based on a statistical meta-model (the emulator) that is built to approximate the computer model. The approach, however, is less straightforward for dynamic simulators, designed to represent time-evolving systems. Generalisations of the established methodology to allow for dynamic emulation are here proposed and contrasted. Advantages and difficulties are discussed and illustrated with an application to the Sheffield Dynamic Global Vegetation Model, developed within the UK Centre for Terrestrial Carbon Dynamics. <s> BIB025 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> Bayesian approaches to preference elicitation (PE) are particularly attractive due to their ability to explicitly model uncertainty in users' latent utility functions. However, previous approaches to Bayesian PE have ignored the important problem of generalizing from previous users to an unseen user in order to reduce the elicitation burden on new users. 
In this paper, we address this deficiency by introducing a Gaussian Process (GP) prior over users' latent utility functions on the joint space of user and item features. We learn the hyper-parameters of this GP on a set of preferences of previous users and use it to aid in the elicitation process for a new user. This approach provides a flexible model of a multi-user utility function, facilitates an efficient value of information (VOI) heuristic query selection strategy, and provides a principled way to incorporate the elicitations of multiple users back into the model. We show the effectiveness of our method in comparison to previous work on a real dataset of user preferences over sushi types. <s> BIB026 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> We present a novel approach to multitask learning in classification problems based on Gaussian process (GP) classification. The method extends previous work on multitask GP regression, constraining the overall covariance (across tasks and data points) to factorize as a Kronecker product. Fully Bayesian inference is possible but time consuming using sampling techniques. We propose approximations based on the popular variational Bayes and expectation propagation frameworks, showing that they both achieve excellent accuracy when compared to Gibbs sampling, in a fraction of time. We present results on a toy dataset and two real datasets, showing improved performance against the baseline results obtained by learning each task independently. We also compare with a recently proposed state-of-the-art approach based on support vector machines, obtaining comparable or better results. <s> BIB027 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> In this thesis we address the problem of modeling correlated outputs using Gaussian process priors. Applications of modeling correlated outputs include the joint prediction of pollutant metals in geostatistics and multitask learning in machine learning. Defining a Gaussian process prior for correlated outputs translates into specifying a suitable covariance function that captures dependencies between the different output variables. Classical models for obtaining such a covariance function include the linear model of coregionalization and process convolutions. We propose a general framework for developing multiple output covariance functions by performing convolutions between smoothing kernels particular to each output and covariance functions that are common to all outputs. Both the linear model of coregionalization and the process convolutions turn out to be special cases of this framework. Practical aspects of the proposed methodology are studied in this thesis. They involve the use of domain-specific knowledge for defining relevant smoothing kernels, efficient approximations for reducing computational complexity and a novel method for establishing a general class of nonstationary covariances with applications in robotics and motion capture data.Reprints of the publications that appear at the end of this document, report case studies and experimental results in sensor networks, geostatistics and motion capture data that illustrate the performance of the different methods proposed. 
<s> BIB028 </s> Kernels for Vector-Valued Functions: A Review <s> Parameters Estimation for Gaussian Processes <s> This paper introduces methods for probabilistic uncertainty analysis of a frequency response function (FRF) of a structure obtained via a finite element (FE) model. The methods are applicable to computationally expensive FE models, making use of a Bayesian metamodel known as an emulator. The emulator produces fast predictions of the FE model output, but also accounts for the additional uncertainty induced by only having a limited number of model evaluations. Two approaches to the probabilistic uncertainty analysis of FRFs are developed. The first considers the uncertainty in the response at discrete frequencies, giving pointwise uncertainty intervals. The second considers the uncertainty in an entire FRF across a frequency range, giving an uncertainty envelope function. The methods are demonstrated and compared to alternative approaches in a practical case study. <s> BIB029
In machine learning parameter estimation for Gaussian processes is often approached through maximization of the marginal likelihood. The method also goes by the names of evidence approximation, type II maximum likelihood, empirical Bayes, among others. With a Gaussian likelihood and after integrating f using the Gaussian prior, the marginal likelihood is given by p(y|X, φ) = N(y|0, K̄(X, X)), where φ are the hyperparameters and K̄(X, X) = K(X, X) + Σ. The objective function is the logarithm of the marginal likelihood, log p(y|X, φ) = −(1/2) y⊤ K̄(X, X)⁻¹ y − (1/2) log |K̄(X, X)| − (ND/2) log 2π. The parameters φ are obtained by maximizing log p(y|X, φ) with respect to each element in φ. Maximization is performed using a numerical optimization algorithm, for example, a gradient-based method. Derivatives follow ∂ log p(y|X, φ)/∂φi = (1/2) y⊤ K̄(X, X)⁻¹ (∂K̄(X, X)/∂φi) K̄(X, X)⁻¹ y − (1/2) tr(K̄(X, X)⁻¹ ∂K̄(X, X)/∂φi), where φi is an element of the vector φ. In the case of the LMC, in which the coregionalization matrices must be positive semidefinite, it is possible to use an incomplete Cholesky decomposition Bq = Lq Lq⊤, with Lq ∈ R^(D×Rq), as suggested in BIB018 . The elements of the matrices Lq are considered part of the vector φ. Another method used for parameter estimation, more common in the geostatistics literature, consists of optimizing an objective function which involves some empirical measure of the correlation between the functions fd(x), K̂(x, x′), and the multivariate covariance obtained using a particular model, K(x, x′) BIB001 BIB003 BIB009 . Assuming stationary covariances, this criterion reduces to a weighted sum of squared differences, Σ_i w(hi) tr{[K̂(hi) − K(hi)]²}, where hi = xi − x′i is a lag vector, w(hi) is a weight coefficient, K̂(hi) is an experimental covariance matrix with entries obtained by different estimators for cross-covariance functions BIB002 BIB004 , and K(hi) is the covariance matrix obtained, for example, using the linear model of coregionalization BIB023 . One of the first algorithms for estimating the parameter vector φ in the LMC was proposed by BIB001 . It assumed that the parameters of the basic covariance functions kq(x, x′) had been determined a priori and then used a weighted least squares method to fit the coregionalization matrices. In BIB009 the efficiency of other least squares procedures was evaluated experimentally, including ordinary least squares and generalized least squares. Other more general algorithms in which all the parameters are estimated simultaneously include simulated annealing BIB007 and the EM algorithm . Ver Hoef and Barry BIB004 also proposed the use of an objective function like BIB010 to estimate the parameters of the covariance obtained from a process convolution. Both methods described above, the evidence approximation and the least-squares method, give point estimates of the parameter vector φ. Several authors have employed full Bayesian inference by assigning priors to φ and computing the posterior distribution through some sampling procedure. Examples include BIB020 and BIB025 under the LMC framework, or BIB011 under the process convolution approach. As mentioned before, for non-Gaussian likelihoods, there is no closed-form solution for the posterior distribution or for the marginal likelihood. However, the marginal likelihood can be approximated under Laplace, variational Bayes or expectation propagation (EP) approximation frameworks for multiple output classification BIB027 BIB026 , and used to find estimates for the hyperparameters. Hence, the error function is replaced by log q(y|X, φ), where q(y|X, φ) is the approximated marginal likelihood. Parameters are again estimated using gradient-based methods.
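To make the estimation procedure above concrete, the following Python sketch maximizes the log marginal likelihood of a two-output GP with an intrinsic coregionalization covariance B ⊗ k(X, X) plus isotropic noise. It is only an illustrative sketch, not the implementation of any of the cited works; the names eq_kernel, ell and sigma2, and the synthetic data, are assumptions introduced here, and the gradient is obtained numerically by the optimizer rather than from the analytic derivatives given above.

import numpy as np
from scipy.optimize import minimize

def eq_kernel(X, ell):
    # Exponentiated quadratic (EQ) kernel on a one-dimensional input grid X of shape (N,).
    d = X[:, None] - X[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def neg_log_marginal(params, X, Y):
    # Negative log marginal likelihood; Y is (N, D) with one output per column.
    N, D = Y.shape
    ell, sigma2 = np.exp(params[:2])          # log-parameterized to stay positive
    L_B = params[2:].reshape(D, D)            # B = L_B L_B^T is positive semidefinite
    B = L_B @ L_B.T
    K = np.kron(B, eq_kernel(X, ell)) + sigma2 * np.eye(N * D)
    y = Y.T.ravel()                           # stack outputs as [y_1; ...; y_D]
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * N * D * np.log(2 * np.pi)

# Two outputs observed on a common input grid (synthetic data for illustration only).
X = np.linspace(0, 1, 30)
Y = np.column_stack([np.sin(2 * np.pi * X), np.cos(2 * np.pi * X)])
Y = Y + 0.1 * np.random.randn(30, 2)
phi0 = np.concatenate([np.log([0.2, 0.01]), np.eye(2).ravel()])
res = minimize(neg_log_marginal, phi0, args=(X, Y), method="L-BFGS-B")

A full implementation would also supply the analytic gradient of the log marginal likelihood to the optimizer, following the derivative expression given above.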
The problem of computational complexity for Gaussian processes in the multiple output context has been studied by different authors BIB021 BIB012 BIB014 BIB019 BIB024 BIB028 . Fundamentally, the computational problem is the same as the one appearing in regularization theory, that is, the inversion of the matrix K̄(X, X) = K(X, X) + Σ for solving equation BIB022 . This step is necessary for computing the marginal likelihood and its derivatives (for estimating the hyperparameters as explained before) or for computing the predictive distribution. With the exception of the method by BIB021 , the approximation methods proposed in BIB012 BIB014 BIB019 BIB024 BIB028 can be applied to reduce computational complexity, whichever covariance function (LMC or process convolution, for example) is used to compute the multi-output covariance matrix. In other words, the computational efficiency gained is independent of the particular method employed to compute the covariance matrix. Before looking in some detail at the different approximation methods employed in the Gaussian processes literature for multiple outputs, it is worth mentioning that computing the kernel function through process convolutions in equation (29) implies solving a double integral, which is not always feasible for any choice of the smoothing kernels G i d,q (·) and covariance functions kq(x, x′). An example of an analytically tractable covariance function occurs when both the smoothing kernel and the covariance function for the latent functions have EQ kernels BIB024 , or when the smoothing kernels have an exponentiated quadratic form and the latent functions are Gaussian white noise processes BIB008 BIB011 . An alternative would be to consider discrete process convolutions BIB005 instead of the continuous process convolutions of equations BIB015 and BIB029 , avoiding in this way the need to solve double integrals. We now briefly summarize different methods for reducing computational complexity in multi-output Gaussian processes. As we mentioned before, Rougier BIB021 assumes that the multiple output problem can be seen as a single output problem, considering the output index as another variable of the input space. The predicted output, f(x∗), is expressed as a weighted sum of Q deterministic regressors that explain the mean of the output process, plus a Gaussian error term that explains the variance in the output. Both the set of regressors and the covariance for the error are assumed to be separable in the input space. The covariance takes the form k(x, x′)kT(d, d′), as in the introduction of section 4. For isotopic models ( BIB021 refers to this condition as regular outputs, meaning outputs that are evaluated at the same set of inputs X), the mean and covariance for the output can be obtained through Kronecker products for the regressors and the covariances involved in the error term. For inference, the inversion of the necessary terms is accomplished using properties of the Kronecker product. For example, if K(X, X′) = B ⊗ k(X, X′), then K⁻¹(X, X′) = B⁻¹ ⊗ k⁻¹(X, X′). Computational complexity is reduced to O(D³) + O(N³), similar to the eigendecomposition method in section 6.1 (a small numerical sketch of this shortcut is given below). Ver Hoef and Barry BIB012 present a simulation example with D = 2. Prediction over one of the variables is performed using cokriging. In cokriging scenarios, one usually has access to a few measurements of a primary variable, but plenty of observations for a secondary variable.
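As a brief aside before continuing with the cokriging example, the Kronecker shortcut mentioned above for isotopic models can be illustrated with a short numerical sketch. This is only a hedged illustration, assuming a noise-free separable covariance B ⊗ k(X, X); the function name separable_solve and the random test matrices are placeholders introduced here.

import numpy as np

def separable_solve(B, Kx, Y):
    # Solve (B kron Kx) x = vec(Y) without ever forming the Kronecker product.
    # B  : (D, D) coregionalization matrix
    # Kx : (N, N) input covariance on the shared inputs X
    # Y  : (N, D) observations, one output per column
    # Returns alpha of shape (N, D) such that alpha.T.ravel() equals
    # (B kron Kx)^{-1} applied to Y.T.ravel(), at a cost of O(D^3) + O(N^3).
    return np.linalg.solve(Kx, Y) @ np.linalg.inv(B)

# Consistency check against the naive O((ND)^3) computation.
rng = np.random.default_rng(0)
D, N = 3, 5
A = rng.standard_normal((D, D))
B = A @ A.T + D * np.eye(D)
C = rng.standard_normal((N, N))
Kx = C @ C.T + N * np.eye(N)
Y = rng.standard_normal((N, D))
naive = np.linalg.solve(np.kron(B, Kx), Y.T.ravel())
assert np.allclose(naive, separable_solve(B, Kx, Y).T.ravel())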
In geostatistics, for example, predicting the concentration of heavy pollutant metals (say Cadmium or Lead), which are expensive to measure, can be done using inexpensive and oversampled variables as a proxy (say pH levels). Following a suggestion by (page 172), the authors partition the secondary observations into subgroups of observations and assume the likelihood function is the sum of the partial likelihood functions of several systems that include the primary observations and each of the subgroups of the secondary observations. In other words, the joint probability distribution p(f1(X1), f2(X2)) is factorised as p(f1(X1), f2(X2)) = ∏_{j=1}^{J} p(f1(X1), f2(X2^(j))), where f2(X2^(j)) indicates the observations in the subgroup j out of J subgroups of observations for the secondary variable. Inversion of the particular covariance matrix derived from these assumptions grows as O(JN³), where N is the number of input points per secondary variable. Also, the authors use a fast Fourier transform for computing the autocovariance matrices (K(Xd, Xd))d,d and cross-covariance matrices (K(Xd, Xd′))d,d′. Boyle BIB019 proposed an extension of the reduced rank approximation method presented by BIB016 , to be applied to the dependent Gaussian process construction. The author outlined the generalization of the methodology for D = 2. The outputs f1(X1) and f2(X2) are defined as [f1(X1); f2(X2)] = [(K(X1, X1))1,1 (K(X1, X2))1,2; (K(X2, X1))2,1 (K(X2, X2))2,2] [w1; w2], where the wd are vectors of weights associated with each output, including additional weights corresponding to the test inputs, one for each output. Based on this likelihood, a predictive distribution for the joint prediction of f1(X) and f2(X) can be obtained, with the characteristic that the variance of the approximation approaches the variance of the full predictive distribution of the Gaussian process, even for test points away from the training data. The elements in the matrices (K(Xd, Xd′))d,d′ are computed using the covariances and cross-covariances developed in sections 4 and 5. Computational complexity reduces to O(DNM²), where N is the number of sample points per output and M is a user-specified value that accounts for the rank of the approximation. In BIB024 , the authors show how, by making specific conditional independence assumptions inspired by the model structure in the process convolution formulation (for which the LMC is a special case), it is possible to arrive at a series of efficient approximations that represent the covariance matrix K(X, X) using a reduced rank approximation Q plus a matrix D, where D has a specific structure that depends on the particular independence assumption made to obtain the approximation. Approximations can reduce the computational complexity to O(NDM²), with M representing a user-specified value that determines the rank of Q. Approximations obtained in this way have similarities with the conditional approximations summarized for a single output in BIB017 . Finally, the informative vector machine (IVM) BIB006 has also been extended to Gaussian processes using kernel matrices derived from particular versions of the linear model of coregionalization, including BIB014 and BIB013 . In the IVM, only a smaller subset of size M of the data points is chosen for constructing the GP predictor. The data points selected are the ones that maximize a differential entropy score BIB013 or an information gain criterion BIB014 . Computational complexity for this approximation is again O(NDM²).
For the computational complexities shown above, we assumed Rq = 1 and Q = 1.
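The gain behind the reduced-rank approximations discussed above can also be illustrated with a small sketch. Assuming, for simplicity, that the approximated covariance has the form of a rank-M term UU⊤ plus a diagonal matrix (as in a FITC-style approximation; other schemes yield a block-diagonal correction instead), the matrix inversion lemma gives solves in O(NDM²) rather than O((ND)³). The names U, d_diag and woodbury_solve are illustrative and not taken from the cited works.

import numpy as np

def woodbury_solve(U, d_diag, y):
    # Solve (U U^T + diag(d_diag)) x = y using the matrix inversion (Woodbury) lemma.
    Dinv_y = y / d_diag                          # O(ND)
    Dinv_U = U / d_diag[:, None]                 # O(NDM)
    M = U.shape[1]
    small = np.eye(M) + U.T @ Dinv_U             # only an M x M system is formed
    correction = Dinv_U @ np.linalg.solve(small, U.T @ Dinv_y)
    return Dinv_y - correction

rng = np.random.default_rng(1)
ND, M = 200, 10                                  # total observations and approximation rank
U = rng.standard_normal((ND, M))
d_diag = 0.1 + rng.random(ND)
y = rng.standard_normal(ND)
x = woodbury_solve(U, d_diag, y)
assert np.allclose(U @ (U.T @ x) + d_diag * x, y)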
Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Physical Experiments and Computer Experiments.- Basic Elements of Computer Experiments.- Analyzing Output from Computer Experiments-Predicting Output from Training Data.- Space Filling Designs for Computer Experiments.- Criteria Based Designs for Computer Experiments.- Other Issues. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Model calibration analysis is concerned with the estimation of unobservable modeling parameters using observations of system response. When the model being calibrated is an expensive computer simulation, special techniques such as surrogate modeling and Bayesian inference are often fruitful. In this paper, we show how the flexibility of the Bayesian calibration approach can be exploited to account for a wide variety of uncertainty sources in the calibration process. We propose a straightforward approach for simultaneously handling Gaussian and non-Gaussian errors, as well as a framework for studying the effects of prescribed uncertainty distributions for model inputs that are not treated as calibration parameters. Further, we discuss how Gaussian process surrogate models can be used effectively when simulator response may be a function of time and/or space (multivariate output). The proposed methods are illustrated through the calibration of a simulation of thermally decomposing foam. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> In this paper, we describe a novel, computationally efficient algorithm that facilitates the autonomous acquisition of readings from sensor networks (deciding when and which sensor to acquire readings from at any time), and which can, with minimal domain knowledge, perform a range of information processing tasks including modelling the accuracy of the sensor readings, predicting the value of missing sensor readings, and predicting how the monitored environmental variables will evolve into the future. Our motivating scenario is the need to provide situational awareness support to first responders at the scene of a large scale incident, and to this end, we describe a novel iterative formulation of a multi-output Gaussian process that can build and exploit a probabilistic model of the environmental variables being measured (including the correlations and delays that exist between them). 
We validate our approach using data collected from a network of weather sensors located on the south coast of England. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> The inverse dynamics problem for a robotic manipulator is to compute the torques needed at the joints to drive it along a given trajectory; it is beneficial to be able to learn this function for adaptive control. A robotic manipulator will often need to be controlled while holding different loads in its end effector, giving rise to a multi-task learning problem. By placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-task similarity depends on the underlying inertial parameters. Experiments demonstrate that this multi-task formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single tasks or pooling the data over all tasks. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Motivation: Inference of latent chemical species in biochemical interaction networks is a key problem in estimation of the structure and parameters of the genetic, metabolic and protein interaction networks that underpin all biological processes. We present a framework for Bayesian marginalization of these latent chemical species through Gaussian process priors. ::: ::: Results: We demonstrate our general approach on three different biological examples of single input motifs, including both activation and repression of transcription. We focus in particular on the problem of inferring transcription factor activity when the concentration of active protein cannot easily be measured. We show how the uncertainty in the inferred transcription factor activity can be integrated out in order to derive a likelihood function that can be used for the estimation of regulatory model parameters. An advantage of our approach is that we avoid the use of a coarsegrained discretization of continuous time functions, which would lead to a large number of additional parameters to be estimated. We develop exact (for linear regulation) and approximate (for non-linear regulation) inference schemes, which are much more efficient than competing sampling-based schemes and therefore provide us with a practical toolkit for model-based inference. ::: ::: Availability: The software and data for recreating all the experiments in this paper is available in MATLAB from http://www.cs.man.ac.uk/~neill/gpsim. ::: ::: Contact: [email protected] <s> BIB006 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Elevated levels of particulate matter (PM) in the ambient air have been shown to be associated with certain adverse human health effects. As a result, monitoring networks that track PM levels have been established across the United States. Some of the older monitors measure PM less than 10 µm in diameter (PM10), while the newer monitors track PM levels less than 2.5 µm in diameter (PM2.5); it is now believed that this fine component of PM is more likely to be related to the negative health effects associated with PM. We propose a bivariate dynamic process convolution model for PM2.5 and PM10 concentrations. 
Our aim is to extract information about PM2.5 from PM10 monitor readings using a latent variable approach and to provide better space-time interpolations of PM2.5 concentrations compared to interpolations made using only PM2.5 monitoring information. We illustrate the approach using PM2.5 and PM10 readings taken across the state of Ohio in 2000. Copyright © 2007 John Wiley & Sons, Ltd. <s> BIB007 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Purely data driven approaches for machine learning present diculties when data is scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and dierential equations to combine data driven modelling with a physical model of the system. We show how dierent, physically-inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from computational biology, motion capture and geostatistics. <s> BIB008 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Computer models are widely used in scientific research to study and predict the behaviour of complex systems. The run times of computer-intensive simulators are often such that it is impractical to make the thousands of model runs that are conventionally required for sensitivity analysis, uncertainty analysis or calibration. In response to this problem, highly efficient techniques have recently been developed based on a statistical meta-model (the emulator) that is built to approximate the computer model. The approach, however, is less straightforward for dynamic simulators, designed to represent time-evolving systems. Generalisations of the established methodology to allow for dynamic emulation are here proposed and contrasted. Advantages and difficulties are discussed and illustrated with an application to the Sheffield Dynamic Global Vegetation Model, developed within the UK Centre for Terrestrial Carbon Dynamics. <s> BIB009 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> Bayesian approaches to preference elicitation (PE) are particularly attractive due to their ability to explicitly model uncertainty in users' latent utility functions. However, previous approaches to Bayesian PE have ignored the important problem of generalizing from previous users to an unseen user in order to reduce the elicitation burden on new users. In this paper, we address this deficiency by introducing a Gaussian Process (GP) prior over users' latent utility functions on the joint space of user and item features. We learn the hyper-parameters of this GP on a set of preferences of previous users and use it to aid in the elicitation process for a new user. This approach provides a flexible model of a multi-user utility function, facilitates an efficient value of information (VOI) heuristic query selection strategy, and provides a principled way to incorporate the elicitations of multiple users back into the model. We show the effectiveness of our method in comparison to previous work on a real dataset of user preferences over sushi types. 
<s> BIB010 </s> Kernels for Vector-Valued Functions: A Review <s> Applications of Multivariate Kernels <s> This paper introduces methods for probabilistic uncertainty analysis of a frequency response function (FRF) of a structure obtained via a finite element (FE) model. The methods are applicable to computationally expensive FE models, making use of a Bayesian metamodel known as an emulator. The emulator produces fast predictions of the FE model output, but also accounts for the additional uncertainty induced by only having a limited number of model evaluations. Two approaches to the probabilistic uncertainty analysis of FRFs are developed. The first considers the uncertainty in the response at discrete frequencies, giving pointwise uncertainty intervals. The second considers the uncertainty in an entire FRF across a frequency range, giving an uncertainty envelope function. The methods are demonstrated and compared to alternative approaches in a practical case study. <s> BIB011
In this chapter we describe in more detail some of the applications of kernel approaches to multi-output learning from the statistics and machine learning communities. One of the main application areas of multivariate Gaussian processes has been computer emulation. In BIB011 , the LMC is used as the covariance function for a Gaussian process emulator of a finite-element method that solves for frequency response functions obtained from a structure. The outputs correspond to pairs of masses and stiffnesses for several structural modes of vibration for an aircraft model. The input space is made of variables related to physical properties, such as tail tip mass or wingtip mass, among others. Multivariate computer emulators are also frequently used for modelling time series. We mentioned this type of application in section 4.2.4. Mostly, the number of time points in the time series is matched to the number of outputs (we expressed this as D = T before), and different time series correspond to different input values for the emulation. The particular input values employed are obtained from different ranges that the input variables can take (given by an expert), and are chosen according to some space-filling criteria (Latin hypercube design, for example) BIB001 . In BIB009 , the time series correspond to the evolution of the net biome productivity (NBP) index, which in turn is the output of the Sheffield dynamic global vegetation model. In BIB003 , the time series is the temperature at a particular location of a container with decomposing foam. The simulation model is a finite element model that simulates the transfer of heat through the decomposing foam. In machine learning the range of applications for multivariate kernels is increasing. In BIB004 , the ICM is used to model the dependencies of multivariate time series in a sensor network. Sensors located on the south coast of England measure different environmental variables such as temperature, wind speed and tide height, among others. Sensors located close to each other make similar readings. If there are faulty sensors, their missing readings could be interpolated using the healthy ones. In BIB005 , the authors use the ICM for obtaining the inverse dynamics of a robotic manipulator. The inverse dynamics problem consists in computing the torques at the different joints of the robotic arm, as a function of the angle, angle velocity and angle acceleration of the different joints. Computed torques are necessary to drive the robotic arm along a particular trajectory. Furthermore, the authors consider several contexts, that is, different dynamics due to different loadings at the end effector. Joints are modelled independently using an ICM for each of them, with the outputs being the different contexts and the inputs being the angles, the angle velocities and the angle accelerations. Besides interpolation, the model is also used for extrapolation to novel contexts. The authors of BIB010 use the ICM for preference elicitation, where a user is prompted to answer simple queries in order to receive a recommendation. The ICM is used as a covariance function for a GP that captures dependencies between users (through the matrix B) and dependencies between items (through the covariance k(x, x′)). In BIB002 and BIB006 , the authors use a process convolution to model the interaction between several genes and a transcription factor protein in a gene regulatory network.
Each output corresponds to a gene, and each latent function corresponds to a transcription factor protein. It is assumed that transcription factors regulate the rate at which particular genes produce primary RNA. The output functions and the latent functions are indexed by time. The smoothing kernel functions G i d,q (·) correspond to the impulse response obtained from a first-order ordinary differential equation. Given gene expression data, the problem is to infer the time evolution of the transcription factor. In BIB008 , the authors use a process convolution to model the dependencies between different body parts of an actor who performs modern dancing movements. This type of data is usually known as mocap (for motion capture) data. The outputs correspond to time courses of angles referenced to a root node, one for each body part modelled. The smoothing kernel used corresponds to a Green's function arising from a second-order ordinary differential equation. In , the authors use a discretized process convolution for solving an inverse problem in image reconstruction and for fusing the information from multiple sensors. In BIB007 , two particulate matter (PM) levels measured in the air (10 µm in diameter and 2.5 µm in diameter), at different spatial locations, are modeled as the added influence of coarse and fine particles. In turn, these coarse and fine particles are modeled as random walks and then transformed by discrete convolutions to represent the levels of PM at 10 µm and 2.5 µm. The objective is to extract information about PM at 2.5 µm from the abundant readings of PM at 10 µm.
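As a rough illustration of the convolution construction used in these latent force applications, the following sketch numerically convolves a stand-in latent signal with the impulse response of a first-order ordinary differential equation. The specific values of the sensitivity S, the decay B and the latent signal are assumptions for illustration only and do not correspond to the cited biological or motion capture models.

import numpy as np

t = np.linspace(0, 10, 1000)
dt = t[1] - t[0]
f_latent = np.sin(2 * np.pi * 0.3 * t)        # stand-in for a sample of the latent function
S, B = 1.5, 0.8                               # sensitivity and decay rate of one output
G = S * np.exp(-B * t)                        # impulse response of dx/dt + B x(t) = S f(t)
x = np.convolve(f_latent, G)[: len(t)] * dt   # x(t) = integral of G(t - tau) f(tau) dtau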
Technologies for Web and cloud service interaction: a survey <s> Introduction <s> Issues in designing distributed computing systems. Shortcomings of RFC ::: 674; see also RFCs 542 and 354. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Introduction <s> Remote procedure calls ( RPC ) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Introduction <s> This paper introduces the major components of, and standards associated with, the Web services architecture. The different roles associated with the Web services architecture and the programming stack for Web services are described. The architectural elements of Web services are then related to a real-world business scenario in order to illustrate how the Web services approach helps solve real business problems. <s> BIB003 </s> Technologies for Web and cloud service interaction: a survey <s> Introduction <s> Though cloud computing is considered mature for practical application, there is a need for more research. The identified challenges primarily concern client-cloud interaction and cloud interoperability. As to the former one, we highlight the needs of clients, contracting and legal aspects, and missing foundations as necessary fields of investigation. For the latter one clouds are considered to constitute repositories of services, so the challenge is to realize web-scale, service-oriented, distributed computing. <s> BIB004
The need to share network-based resources and use remote functionality in a program without dealing with low-level network access has fueled discussions in the 1970s BIB001 and ultimately led to the remote procedure call (RPC) framework by Birrell and Nelson BIB002 in the 1980s. RPC became a driver in enterprise systems; location transparency of procedures eases code reuse but requires tight coupling, e.g., a unified type system. In the 1990s, the principles of object orientation and RPC gave rise to distributed objects [221]. Tight coupling and interaction complexity in RPC and distributed objects affected the scalability of enterprise systems, and at the end of the 1990s, message passing between so-called services became an alternative enterprise architecture with relaxed coupling and easier scalability. Today, middleware for message queuing and the concept of service-oriented architecture (SOA) [211] dominate large-scale distributed enterprise systems. In the meantime, Berners-Lee laid out the foundation for a World Wide Web of nonlinear text documents, i.e., hypertext, exchanged in a client-server architecture over the predecessor of today's Internet in 1989. The first Web browser was announced at the end of 1990, the World Wide Web Consortium (W3C) was established in 1994, and W3C published the first Hypertext Markup Language (HTML) Recommendation in 1997. Since then, the Web has evolved from simple hypermedia exchange to interactive user interfaces, rich client applications, user-provided content, mashups, social platforms, and wide-scale mobile device support. Web technology has become pervasive and is not limited to hypermedia applications anymore. Standards are widely accepted, and they have contributed to the success of Web services because protocols like the Hypertext Transfer Protocol (HTTP) are reliably forwarded over the Internet. A survey of technologies in such a dynamic environment needs a defined scope. All technologies that allow a client to interact with a service should be considered; however, the notion of service is not precisely defined BIB004 . The following informal properties and restrictions therefore characterize a service in the context of this work:
- Service interface. Services are considered as distributed, network-accessible software components that offer functionality and need communication for interaction BIB003 . A notion of interface that accepts a certain language is therefore required. The survey is restricted to technologies that enable communication between clients and service interfaces applicable in Web, PaaS, and SaaS cloud delivery models.
- Heterogeneous platforms. A characteristic of service orientation is to provide functionality and content across hard- and software platforms. Only technologies that embrace this compatibility are considered.
- Publicly available standards. The focus is on technologies that are available to the public audience, in particular, technologies based on Internet protocols, i.e., the TCP/IP protocol suite, and with publicly available specifications. Specialized technologies for a limited audience or application, like industrial control systems, are not part of this study.
- Parties. There are two participating parties or peers in service interaction: a client that consumes some service offered by a provider or server, i.e., client-to-service interaction. On a conceptual level, a service can participate also as a client to consume other services for a composition, i.e., service-to-service interaction.
Furthermore, a service can coordinate two clients to establish client-to-client or peer-to-peer interaction. In accordance with the aforementioned characteristics, the state-of-the-art and recent trends in Web and service communication technologies applicable to the Web, PaaS, and SaaS are surveyed.
Technologies for Web and cloud service interaction: a survey <s> Motivation <s> Would you like to use a consistent visual notation for drawing integration solutions? Look inside the front cover. Do you want to harness the power of asynchronous systems without getting caught in the pitfalls? See "Thinking Asynchronously" in the Introduction. Do you want to know which style of application integration is best for your purposes? See Chapter 2, Integration Styles. Do you want to learn techniques for processing messages concurrently? See Chapter 10, Competing Consumers and Message Dispatcher. Do you want to learn how you can track asynchronous messages as they flow across distributed systems? See Chapter 11, Message History and Message Store. Do you want to understand how a system designed using integration patterns can be implemented using Java Web services, .NET message queuing, and a TIBCO-based publish-subscribe architecture? See Chapter 9, Interlude: Composed Messaging.Utilizing years of practical experience, seasoned experts Gregor Hohpe and Bobby Woolf show how asynchronous messaging has proven to be the best strategy for enterprise integration success. However, building and deploying messaging solutions presents a number of problems for developers. Enterprise Integration Patterns provides an invaluable catalog of sixty-five patterns, with real-world solutions that demonstrate the formidable of messaging and help you to design effective messaging solutions for your enterprise.The authors also include examples covering a variety of different integration technologies, such as JMS, MSMQ, TIBCO ActiveEnterprise, Microsoft BizTalk, SOAP, and XSL. A case study describing a bond trading system illustrates the patterns in practice, and the book offers a look at emerging standards, as well as insights into what the future of enterprise integration might hold.This book provides a consistent vocabulary and visual notation framework to describe large-scale integration solutions across many technologies. It also explores in detail the advantages and limitations of asynchronous messaging architectures. The authors present practical advice on designing code that connects an application to a messaging system, and provide extensive information to help you determine when to send a message, how to route it to the proper destination, and how to monitor the health of a messaging system. If you want to know how to manage, monitor, and maintain a messaging system once it is in use, get this book. 0321200683B09122003 <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> In a service-oriented architecture (SOA), a system is viewed as a collection of independent units (services) that interact with one another through message exchanges. Established languages such as the Web services description language and the business process execution language allow developers to capture the interactions in which an individual service can engage, both from a structural and from a behavioral perspective. However, in large service-oriented systems, stakeholders may require a global picture of the way services interact with each other, rather than multiple small pictures focusing on individual services. Such "global models" are especially useful when a set of services interact in such a way that none of them sees all messages being exchanged, yet interactions taking place between some services affect the way other services interact. 
An issue that arises when dealing with global models of service interactions is that these models may capture behavioral constraints that can not be enforced locally. In other words, some global models may not be translatable into a collection of local models such that the sum of the local models equals the original global model. Starting from a previously proposed language for global modeling of service interactions, this paper defines an algorithm for determining if a global model is locally enforceable and an algorithm for generating local models from global ones <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> In this paper, we present a high-level definition of a formal method in terms of ambient abstract state machine rules which makes it possible to describe formal models of mobile computing systems and complex service oriented architectures in two abstraction layers. <s> BIB003 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> Though cloud computing is considered mature for practical application, there is a need for more research. The identified challenges primarily concern client-cloud interaction and cloud interoperability. As to the former one, we highlight the needs of clients, contracting and legal aspects, and missing foundations as necessary fields of investigation. For the latter one clouds are considered to constitute repositories of services, so the challenge is to realize web-scale, service-oriented, distributed computing. <s> BIB004 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> We introduce the concept of an identity management machine (based on ASM) to mitigate problems regarding identity management in cloud computing. We decompose the client to cloud interaction into three distinct scenarios and introduce a set of ASM rules for each of them. We first consider a direct client to cloud interaction where the identity information stored on the client side is mapped to the identity created on the cloud provider's IdM system. To enhance privacy we then introduce the concept of real, obfuscated and partially obfuscated identities. Finally we take advantage of the increase in standardization in IdM systems defining the rules necessary to support authentication protocols such as OpenID. Our solution makes no supposition regarding the technologies used by the client and the cloud provider. Through abstract functions we allow for a distinct separation between the IdM system of the client and that of the cloud or service provider. Since a user is only required to authenticate once to our system, our solution represents a client centric single sign-on mechanism for the use of cloud services. <s> BIB005 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> Fulfilling cloud customers needs entails describing a quality of service on top of the services functional description. Currently, the only guarantees that are offered by cloud providers are imprecise and incomplete Service Level Agreements (SLA). We present a model to describe one of the main attributes discussed in SLAs which is availability. The model is developed using Web Ontology Language OWL. And it aims at covering the different concepts of availability and availability-related attributes that should be present in a service contract in order to guarantee the quality of service the consumer is expecting. 
<s> BIB006 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> This paper introduces a new idea of designing Web Applications, guiding the development phases from the requirements to the implementation. Abstract State Machines (ASMs) method is used to create ground models which describe the behavior of several agents engaged in the client-Cloud interaction. For solving the problem of Cloud-based applications' adaptation to various channels and end-devices (in particular with respect to needs arising from mobile clients) we need to include rigorous analysis and definitions prior to code development. This implies the creation of ASM ground models based on content adaptation techniques (e.g.: server-side and client-side adaptation). <s> BIB007 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> False-positives are a problem in anomaly-based intrusion detection systems. To counter this issue, we discuss anomaly detection for the extensible Markup Language (XML) in a language-theoretic view. We argue that many XML-based attacks target the syntactic level, i.e. the tree structure or element content, and syntax validation of XML documents reduces the attack surface. XML offers so-called schemas for validation, but in real world, schemas are often unavailable, ignored or too general. In this work-in-progress paper we describe a grammatical inference approach to learn an automaton from example XML documents for detecting documents with anomalous syntax. We discuss properties and expressiveness of XML to understand limits of learn ability. Our contributions are an XML Schema compatible lexical data type system to abstract content in XML and an algorithm to learn visibly pushdown automata (VPA) directly from a set of examples. The proposed algorithm does not require the tree representation of XML, so it can process large documents or streams. The resulting deterministic VPA then allows stream validation of documents to recognize deviations in the underlying tree structure or data types. <s> BIB008 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> We present a formal language theory approach to improving the security aspects of protocol design and message-based interactions in complex composed systems. We argue that these aspects are responsible for a large share of modern computing systems' insecurity. We show how our approach leads to advances in input validation, security modeling, attack surface reduction, and ultimately, software design and programming methodology. We cite examples based on real-world security flaws in common protocols, representing different classes of protocol complexity. We also introduce a formalization of an exploit development technique, the parse tree differential attack, made possible by our conception of the role of formal grammars in security. We also discuss the negative impact unnecessarily increased protocol complexity has on security. This paper provides a foundation for designing verifiable critical implementation components with considerably less burden to developers than is offered by the current state of the art. In addition, it offers a rich basis for further exploration in the areas of offensive analysis and, conversely, automated defense tools, and techniques. 
<s> BIB009 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> We describe the concept of automatic authentication for cloud-based services via the use of a client-centric solution for small and medium enterprises (SMEs). In previous work we have introduced the Identity Management Machine (IdMM) which is designed to handle the interaction between a client’s identity directory and various cloud identity management systems. We now further refine this machine by describing its interaction with various cloud authentication systems. The IdMM is designed to aid SMEs in their adoption or migration to cloud-based services. The system allows SMEs to store its confidential data on-premise, enhancing the client’s control over the data. We further enhance the privacy related aspects of a client-to-cloud interaction via the introduction of obfuscated and partially obfuscated identities which allow SMEs to also choose the type of data being sent to a cloud service. Since the IdMM is a single sign-on system capable of automatic authentication the risk of phishing or other social engineering attacks is reduced as an individual user may not be aware of his or her credentials for a given cloud service. <s> BIB010 </s> Technologies for Web and cloud service interaction: a survey <s> Motivation <s> When a client consumes a cloud service, computational liabilities are transferred to the service provider in accordance to the cloud paradigm, and the client loses some control over software components. One way to raise assurance about correctness and dependability of a consumed service and its software components is monitoring. In particular, a monitor is a system that observes the behavior of another system, and observation points that expose the target system’s state and state changes are required. Due to the cloud paradigm, popular techniques for monitoring such as code instrumentation are often not available to the client because of limited visibility, lack of control, and black-box software components. Based on a literature review, we identify potential observation points in today’s cloud services. Furthermore, we investigate two cloud-specific monitoring applications based on our ongoing research. While service level agreement (SLA) monitoring ensures that agreed-upon conditions between clients and providers are met, language-based anomaly detection monitors the interaction between client and cloud for misuse attempts. <s> BIB011
This survey is motivated by ongoing research efforts in formal modeling of cloud services BIB003 BIB004 , modeling of service quality BIB006 , service adaptation BIB007 , identity management BIB005 BIB010 , and security monitoring BIB008 BIB011 for cloud services. All these aspects need communication between clients and services. Understanding the state-of-the-art in service communication is therefore necessary, e.g., for security research, because an ambiguous or imprecise service interface is in fact a gateway for attacks BIB009 . There is a rich body of literature using patterns to describe service interaction on a conceptual level BIB001 BIB002 . On the other hand, the numerous software implementations used in today's services are heavily driven by continuously evolving standards and ad hoc specifications. This work aims to bridge this gap by surveying the state-of-the-art of technologies and by resorting to patterns when concepts are discussed. Patterns are appealing because they describe solutions in a conceptual way and can therefore support service integrators and scientists in understanding new technologies.
Technologies for Web and cloud service interaction: a survey <s> Languages for content and media <s> This book is a rigorous exposition of formal languages and models of computation, with an introduction to computational complexity. The authors present the theory in a concise and straightforward manner, with an eye out for the practical applications. Exercises at the end of each chapter, including some that have been solved, help readers confirm and enhance their understanding of the material. This book is appropriate for upper-level computer science undergraduates who are comfortable with mathematical arguments. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Languages for content and media <s> ISO/IEC 10646-1 defines a multi-octet character set called the Universal Character Set (UCS) which encompasses most of the world's writing systems. Multi-octet characters, however, are not compatible with many current applications and protocols, and this has led to the development of a few so-called UCS transformation formats (UTF), each with different characteristics. UTF-8, the object of this memo, has the characteristic of preserving the full US-ASCII range, providing compatibility with file systems, parsers and other software that rely on US-ASCII values but are transparent to other values. This memo updates and replaces RFC 2044, in particular addressing the question of versions of the relevant standards. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Languages for content and media <s> We present a formal language theory approach to improving the security aspects of protocol design and message-based interactions in complex composed systems. We argue that these aspects are responsible for a large share of modern computing systems' insecurity. We show how our approach leads to advances in input validation, security modeling, attack surface reduction, and ultimately, software design and programming methodology. We cite examples based on real-world security flaws in common protocols, representing different classes of protocol complexity. We also introduce a formalization of an exploit development technique, the parse tree differential attack, made possible by our conception of the role of formal grammars in security. We also discuss the negative impact unnecessarily increased protocol complexity has on security. This paper provides a foundation for designing verifiable critical implementation components with considerably less burden to developers than is offered by the current state of the art. In addition, it offers a rich basis for further exploration in the areas of offensive analysis and, conversely, automated defense tools, and techniques. <s> BIB003
Formally, a language is a (possibly infinite) set of strings generated from a finite set of symbols, referred to as alphabet BIB001 . Languages are essential to communicate information represented as messages. While information exchange in Web and cloud services can be distinguished into message and stream based, a stream is in fact a single message sent in chunks or as a sequence of individual smaller messages. Languages for encoding content or media are also referred to as data serialization formats or formats in short. Communicating parties can only parse content of a certain kind, where the format, i.e., syntax, and meaning, i.e., semantics, of the language are defined. The hardness of parsing is then a computational complexity property of the language: With increased expressiveness, more and more information can be encoded in a language, but parsing also becomes harder and therefore more error-prone in software implementations BIB003 . Alphabets for intercommunicating digital systems are typically binary, and the basic unit of information is a bit. For Internet applications, a byte of eight bits is a common transferable unit. Content can be distinguished into binary and text based with respect to the alphabet:
- Binary content. When a language describes a bijection between digital sequences and the domain of actual values and structures, then contents are referred to as binary content and they are likely not human-readable.
- Text-based content. Text is not simply text, but rather bits and bytes with an associated mapping to human-readable symbols, so some digital sequence has a textual representation. Such a mapping is called character encoding or character set, e.g., ASCII. Content is said to be text based if its syntax has a human-readable representation.
ASCII is the most fundamental character encoding; it uses seven bits to enumerate a set of control and printable characters, but it is limited to the English alphabet. Unicode attempts to enumerate all the human-readable symbols in all natural languages. Character encodings like the ASCII-compatible UTF-8 BIB002 then specify a compact, byte-oriented encoding to represent millions of symbols efficiently.
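To make the distinction between binary and text-based content concrete, the following sketch (an illustration only, not part of any surveyed standard; the example values are made up) encodes the same information once as UTF-8 text and once as a fixed-size binary integer:

import struct

text = "price: 10 €"                 # human-readable text with a non-ASCII symbol

utf8_bytes = text.encode("utf-8")    # text-based content still travels as bytes
print(utf8_bytes)                    # b'price: 10 \xe2\x82\xac' -- ASCII symbols keep their 7-bit values

# A binary encoding of the same number: a 4-byte big-endian unsigned integer.
binary = struct.pack(">I", 10)
print(binary)                        # b'\x00\x00\x00\n' -- compact, but not human-readable

# Decoding requires knowledge of the character encoding (or the binary layout).
print(utf8_bytes.decode("utf-8"))
print(struct.unpack(">I", binary)[0])

The sketch also illustrates why the character encoding must be communicated alongside text-based content: without it, the receiving parser cannot reconstruct the human-readable symbols from the byte sequence.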
Technologies for Web and cloud service interaction: a survey <s> Semi-structured languages <s> The common abstraction of XML Schema by unranked regular tree languages is not entirely accurate. To shed some light on the actual expressive power of XML Schema, intuitive semantical characterizations of the Element Declarations Consistent (EDC) rule are provided. In particular, it is obtained that schemas satisfying EDC can only reason about regular properties of ancestors of nodes. Hence, with respect to expressive power, XML Schema is closer to DTDs than to tree automata. These theoretical results are complemented with an investigation of the XML Schema Definitions (XSDs) occurring in practice, revealing that the extra expressiveness of XSDs over DTDs is only used to a very limited extent. As this might be due to the complexity of the XML Schema specification and the difficulty of understanding the effect of constraints on typing and validation of schemas, a simpler formalism equivalent to XSDs is proposed. It is based on contextual patterns rather than on recursive types and it might serve as a light-weight front end for XML Schema. Next, the effect of EDC on the way XML documents can be typed is discussed. It is argued that a cleaner, more robust, larger but equally feasible class is obtained by replacing EDC with the notion of 1-pass preorder typing (1PPT): schemas that allow one to determine the type of an element of a streaming document when its opening tag is met. This notion can be defined in terms of grammars with restrained competition regular expressions and there is again an equivalent syntactical formalism based on contextual patterns. Finally, algorithms for recognition, simplification, and inclusion of schemas for the various classes are given. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Semi-structured languages <s> False-positives are a problem in anomaly-based intrusion detection systems. To counter this issue, we discuss anomaly detection for the extensible Markup Language (XML) in a language-theoretic view. We argue that many XML-based attacks target the syntactic level, i.e. the tree structure or element content, and syntax validation of XML documents reduces the attack surface. XML offers so-called schemas for validation, but in real world, schemas are often unavailable, ignored or too general. In this work-in-progress paper we describe a grammatical inference approach to learn an automaton from example XML documents for detecting documents with anomalous syntax. We discuss properties and expressiveness of XML to understand limits of learn ability. Our contributions are an XML Schema compatible lexical data type system to abstract content in XML and an algorithm to learn visibly pushdown automata (VPA) directly from a set of examples. The proposed algorithm does not require the tree representation of XML, so it can process large documents or streams. The resulting deterministic VPA then allows stream validation of documents to recognize deviations in the underlying tree structure or data types. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Semi-structured languages <s> JavaScript Object Notation (JSON) is a lightweight, text-based, ::: language-independent data interchange format. It was derived from the ::: ECMAScript Programming Language Standard. JSON defines a small set of ::: formatting rules for the portable representation of structured data. 
This document removes inconsistencies with other specifications of JSON, repairs specification errors, and offers experience-based interoperability guidance. <s> BIB003
Three of the most influential languages for information exchange in the Web are HTML, the Extensible Markup Language (XML), and the JavaScript Object Notation (JSON).
Hypertext Markup Language. HTML is the standard for defining websites and has the MIME type text/html. It uses markup such as tags, attributes, declarations, or processing instructions to express structural, presentational, and semantic information as text. While earlier HTML versions up to 4.01 [330] are an application of the Standard Generalized Markup Language (SGML), which requires a complex SGML parser framework, today's HTML5 [363] specifies an individual parser. An exemplary HTML5 document is listed in Fig. 2. SGML-based parsers distinguish the grammars of different HTML versions by the document type declaration in the first line. As HTML5 is not SGML based, the document type declaration is deliberately incomplete to indicate SGML independence. A document is separated into a header for metadata and a body for the semi-structured content of a website. All allowed tags are specified in the standard. Interestingly, the character encoding of a document is defined within the document itself in a meta tag. This tag should be the first tag in the header, so the parser becomes aware of the encoding before other tags are encountered. An SGML or HTML5 parser in a Web browser transforms a document into a Document Object Model (DOM) , a generalized tree-like data structure that is eventually rendered visible by the user interface. Another popular format with respect to HTML is Cascading Style Sheets (CSS) for defining both the look and behavior of a DOM's visual representation. BIB002
Extensible Markup Language. XML [346] originates from SGML, but it is more restricted and popular for electronic data exchange. An example is shown in Fig. 3. Tags, attributes, namespaces, declarations, and processing instructions are syntactic constructs for structuring information in XML as text. The first line in an XML document should be a processing instruction that informs the parser about the XML version and the applied character encoding. XML is a language family: The structuring of elements (tag names) and attributes within a document is unrestricted, and only the syntactic rules have to be obeyed. Element content is limited to text; by default, XML distinguishes two datatypes, parsed (PCDATA) and unparsed character data (CDATA). Mixed-content XML relaxes element content restrictions; text in element content is also allowed to nest other elements, e.g., the review element in Fig. 3. The underlying logical structure of XML is a tree; therefore, open and close tags must be correctly nested. A document with correct nesting, proper syntax, and a single root element is well-formed. Furthermore, a document is said to have an XML Information Set if it is well-formed and namespace constraints are satisfied. An Infoset is an unambiguous abstraction from textual syntax; e.g., there are two syntactic notions for empty elements in plain XML. To restrict the structure, XML offers schema languages, e.g., Document Type Definition (DTD) [346] , XML Schema (XSD) , and Relax NG . Formally, a schema is a grammar that characterizes a set of XML documents, and a document is said to be valid if its schema is obeyed BIB001 . In this sense, a schema allows specifying a content subtype of XML. XSD and Relax NG also support more fine-grained datatypes than PCDATA and CDATA for restricting element contents.
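A short sketch with a general-purpose XML parser illustrates well-formedness checking and mixed content; the document below is a made-up example in the spirit of the review element mentioned above (Fig. 3 is not reproduced here), and schema validation against XSD or Relax NG would require additional, typically third-party, tooling:

import xml.etree.ElementTree as ET

doc = """<?xml version="1.0" encoding="UTF-8"?>
<book isbn="123-4-56-789012-3">
  <title>Example</title>
  <review>Readable and <emph>concise</emph> introduction.</review>
</book>"""

try:
    root = ET.fromstring(doc)          # raises ParseError if the document is not well-formed
except ET.ParseError as err:
    raise SystemExit(f"not well-formed: {err}")

print(root.tag, root.attrib)           # book {'isbn': '123-4-56-789012-3'}
print(root.findtext("title"))          # Example

# Mixed content: the review element nests another element inside its text.
review = root.find("review")
print([child.tag for child in review]) # ['emph']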
The duality of text representation and logical tree structure of documents has led to two different processing approaches. A document can be either parsed into a DOM for tree operations or processed directly as a stream of text, open-, or close-tag events using the Simple API for XML (SAX) . Repeated element names in tags reduce the information density in XML. SXML is an alternative syntax using S-expressions such that element names occur only once, and higher information density is achieved. The MIME media type of the XML language family is application/xml. XHTML [333] is a re-specification of HTML and has a MIME type that indicates the XML origin (application/xhtml+xml). XHTML is conceptually an XML subtype with a strict syntax specified in a schema, so an XML parser can be used instead of a complex markup parser. XML is also the supertype for many Web formats, e.g., Scalable Vector Graphics (SVG) or Mathematical Markup Language (MathML) , and MIME types for well-known subtypes are specified in RFC 3023 [196] . JavaScript Object Notation. JSON BIB003 is a simple text format to serialize information as structured key-value pairs. JSON has the MIME type application/json, and it is human-readable, as shown in Fig. 4 . The syntax is a subset of the JavaScript language (discussed in Sect. 2.5), and a JSON document is either parsed or evaluated to an object during runtime. JSON specifies six basic datatypes: null, Number, String, Boolean, Array, and Object, and syntactic rules to represent them as text. A proper JSON document always has a single root object. Similar to XML, JSON defines a family of languages because there are syntactical restrictions, but no structural limitations in the standard. JSON Schema is a schema language expressed in JSON format, and the motivation is the same as in XML schemas: to define a set of JSON documents and enable schema validation.
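The direct mapping between JSON's datatypes and native values is what makes the format convenient for serialization; the following sketch parses a made-up document (Fig. 4 is not reproduced here) with a standard JSON library and serializes it back to text:

import json

doc = '''{
  "title": "Example",
  "price": 12.5,
  "inStock": true,
  "tags": ["web", "cloud"],
  "publisher": {"name": "ACME"},
  "isbn": null
}'''

obj = json.loads(doc)              # parse the text into an object tree
print(type(obj))                   # <class 'dict'>  (the single root object)
print(obj["tags"][0])              # web
print(json.dumps(obj, indent=2))   # serialize back to text

Validation against JSON Schema is not part of the parsing step and, as with XML schemas, requires a separate validator.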
Technologies for Web and cloud service interaction: a survey <s> Container formats <s> The Multipart/Related content-type provides a common mechanism for representing objects that are aggregates of related MIME body parts. This document defines the Multipart/Related content-type and provides examples of its use. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Container formats <s> STD 11, RFC 822 defines a message representation protocol specifying considerable detail about US-ASCII message headers, but which leaves the message content, or message body, as flat US-ASCII text. This set of documents, collectively called the Multipurpose Internet Mail Extensions, or MIME, redefines the format of messages to allow for <s> BIB002
A container is an encoding to encapsulate other arbitrary contents. The MIME standards specify a multipart BIB002 content type for containers, where contents of varying types are interleaved. In multipart, a text-based boundary string separates individual parts, as shown in Fig. 6 (a text-based MIME multipart/mixed container example with two parts that demonstrates the boundary principle). Each part has a text-based header that denotes its Content-Type and additional metadata such as binary-to-text or transfer encodings. Multipart defines several subtypes:
- multipart/alternative BIB002 to model a choice over multiple contents to a consumer;
- multipart/byteranges to encapsulate a subsequence of bytes that belongs to a larger message or file;
- multipart/digest BIB002 to store a sequence of text-based messages;
- multipart/form-data for submitting a set of completed form fields from an HTML website;
- multipart/message BIB002 for an email message;
- multipart/mixed for inline placement of media in a text-based message, e.g., embedded images in emails;
- multipart/parallel BIB002 to process all parts simultaneously on hardware or software capable of doing so;
- multipart/related BIB001 as a mechanism to aggregate related objects in a single content;
- multipart/report as a container for email messages; and
- multipart/x-mixed-replace [198] for a stream of parts, where the most recent part always invalidates all preceding ones, e.g., an event stream.
An application of multipart is XML-binary Optimized Packaging (XOP) . XOP specifies a container format for XML as a MIME multipart/related package BIB001 , where binary element content is directly embedded to remove the necessity of binary-to-text encoding. The W3C specifies a set of attributes and rules for XML and XSD to handle MIME types [341] which are effectively used in XOP. S/MIME is a security extension for MIME. It defines encryption (multipart/encrypted) and digital signatures (multipart/signed) for confidentiality, integrity, and non-repudiation of data using public key cryptography. In general, a drawback of MIME multipart is that the boundary string must not appear in any of the parts because it would break the format. Microsoft's Direct Internet Message Encapsulation (DIME) is another standard for encapsulation and streaming of arbitrary binary data in the spirit of MIME multipart, but with an improved boundary mechanism to reliably distinguish parts.
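The boundary principle can be made visible with any general-purpose MIME library; the following sketch (the part contents are made up) builds a multipart/mixed container and prints the automatically chosen boundary string that separates the parts:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

container = MIMEMultipart("mixed")
container.attach(MIMEText("This is part one.", "plain"))
container.attach(MIMEText("<p>This is part two.</p>", "html"))

wire = container.as_string()       # text-based representation with boundary lines
print(container.get_boundary())    # the generated boundary string
print(wire[:300])                  # container header, then parts separated by --boundary

Because the boundary is generated, the library must ensure it does not occur in any part, which is exactly the weakness of the boundary mechanism mentioned above.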
Technologies for Web and cloud service interaction: a survey <s> Protocols <s> This RFC is an official specification for the Internet community. It ::: incorporates by reference, amends, corrects, and supplements the ::: primary protocol standards documents relating to hosts. [STANDARDS- ::: TRACK] <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Protocols <s> Considering the urgency of the need for standards which would allow constitution of heterogeneous computer networks, ISO created a new subcommittee for Open Systems Interconnection (ISO/TC97/SC16) in 1977. The first priority of subcommittee 16 was to develop an architecture for open systems interconnection which could serve as a framework for the definition of standard protocols. As a result of 18 months of studies and discussions, SC16 adopted a layered architecture comprising seven layers (Physical, Data Link, Network, Transport, Session, Presentation, and Application). In July 1979 the specifications of this architecture, established by SC16, were passed under the name of OSI Reference Model to Technical Committee 97 Data Processing along with recommendations to start officially, on this basis, a set of protocols standardization projects to cover the most urgent needs. These recommendations were adopted by TC97 at the end of 1979 as the basis for the following development of standards for Open Systems Interconnection within ISO. The OSI Reference Model was also recognized by CCITT Rapporteur's Group on Layered Model for Public Data Network Services. This paper presents the model of architecture for Open Systems Interconnection developed by SC16. Some indications are also given on the initial set of protocols which will likely be developed in this OSI Reference Model. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Protocols <s> We present a formal language theory approach to improving the security aspects of protocol design and message-based interactions in complex composed systems. We argue that these aspects are responsible for a large share of modern computing systems' insecurity. We show how our approach leads to advances in input validation, security modeling, attack surface reduction, and ultimately, software design and programming methodology. We cite examples based on real-world security flaws in common protocols, representing different classes of protocol complexity. We also introduce a formalization of an exploit development technique, the parse tree differential attack, made possible by our conception of the role of formal grammars in security. We also discuss the negative impact unnecessarily increased protocol complexity has on security. This paper provides a foundation for designing verifiable critical implementation components with considerably less burden to developers than is offered by the current state of the art. In addition, it offers a rich basis for further exploration in the areas of offensive analysis and, conversely, automated defense tools, and techniques. <s> BIB003
To deal with the complexity of communication in computer networks, layered protocol design, as proposed by the OSI reference model BIB002 or the simplified Internet model BIB001 , has become an industry standard to separate concerns. Low delay and multiplexing are two major drivers for recent developments in accelerating Web technology. Multiplexing in this context refers to techniques for transporting multiple parallel dialogs over a single channel between two peers instead of establishing multiple channels. This survey approaches the state-of-the-art for bilateral and multilateral interaction along the following layers. The Internet layer allows hosts to communicate beyond their local neighborhood of physically connected devices. Logical addresses, routing, and packet-based data exchange are the core aspects of today's Internet. The Transport layer enables inter-process communication over networks. Multiple processes can run on the same host and share the same logical network address. Transport layer protocols extend the logical addressing to enable communication between distributed processes. Application layer protocols dictate how to provide functionality, content, and media across two or more processes that are able to communicate, e.g., clients and services. Such protocols provide transport mechanisms for communicating messages between processes. While the Internet and Transport layer protocols are typically handled by operating systems, application layer protocols are implemented by the client and service. From a lexical point of view, all Internet protocols share the same binary alphabet, and a byte is typically the smallest transferable symbol. A common practice seen in Internet protocols is to separate a transferable sequence of bytes into a protocol header and payload, where the header defines the payload type, so other protocols can be recursively embedded. Due to the shared alphabet, every protocol needs an unambiguous language to prevent confusion BIB003 .
Technologies for Web and cloud service interaction: a survey <s> Internet layer protocols <s> This document specifies version 6 of the Internet Protocol (IPv6), also sometimes referred to as IP Next Generation or IPng. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Internet layer protocols <s> Network support for multicast has triggered the development of group communication applications such as multipoint data dissemination and multiparty conferencing tools. To support these applications, several multicast transport protocols have been proposed and implemented. Multicast transport protocols have been an area of active research for the past couple of years. This article summarizes the activities in this work-in-progress area by surveying several multicast transport protocols. It also presents a taxonomy to classify the surveyed protocols according to several distinct features, discusses the rationale behind the protocol's design decisions, and presents some current research-issues in multicast protocol design. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Internet layer protocols <s> This document is intended as part of an IETF discussion about "middleboxes" - defined as any intermediary box performing functions apart from normal, standard functions of an IP router on the data path between a source host and destination host. This document establishes a catalogue or taxonomy of middleboxes, cites previous and current IETF work concerning middleboxes, and attempts to identify some preliminary conclusions. It does not, however, claim to be definitive. <s> BIB003 </s> Technologies for Web and cloud service interaction: a survey <s> Internet layer protocols <s> This document provides a high level introduction to the capabilities supported by the Stream Control Transmission Protocol (SCTP). It is intended as a guide for potential users of SCTP as a general purpose transport protocol. <s> BIB004 </s> Technologies for Web and cloud service interaction: a survey <s> Internet layer protocols <s> The Datagram Congestion Control Protocol (DCCP) is a transport ::: protocol that provides bidirectional unicast connections of ::: congestion-controlled unreliable datagrams. DCCP is suitable for ::: applications that transfer fairly large amounts of data and that can ::: benefit from control over the tradeoff between timeliness and ::: reliability. [STANDARDS-TRACK] <s> BIB005 </s> Technologies for Web and cloud service interaction: a survey <s> Internet layer protocols <s> Advances in network equipment now allow internet service providers to monitor the content of data packets in real-time and make decisions about how to handle them. If deployed widely this technology, known as deep packet inspection (DPI), has the potential to alter basic assumptions that have underpinned internet governance to date. The article explores the way internet governance is responding to deep packet inspection and the political struggles around it. Avoiding the extremes of technological determinism and social constructivism, it integrates theoretical approaches from the sociology of technology and actor-centered institutionalism into a new framework for technology-aware policy analysis. <s> BIB006
The TCP/IP protocol suite is the de facto standard for computer networks. Internet layer functionality is provided by the Internet Protocol (IP) [259] , or IPv4, which defines packet-based networking by an addressing schema, packet layouts, and routing conditions. Using IP, host A can send a packet to the logical address of host B without knowing the physical address or location of B. Based on the addresses in the header of the packet, so-called routers forward the packet, but delivery is not always guaranteed. IP therefore implements a Dynamic Routing pattern. To send and receive packets, a host needs at least one logical IP address in a physically connected network, and a router (gateway) in this network that forwards packets. If a host is simultaneously connected to two or more physical networks, it is referred to as multihoming. The maximum size of an IP packet is bounded by the underlying link layer technology, and packets are eventually fragmented or dropped if they are too large. IP fragmentation allows splitting oversized packets into smaller ones, but increases the load on a link because more packets introduce a larger overhead. The header of an IP packet specifies the type of the enclosed transport protocol in the content. IP has restrictions; the upper bound of 2^32 logical addresses is the biggest issue. An ad hoc solution is Network Address Translation (NAT) by routers for private networks. There is an ongoing effort to switch to IP Version 6 (IPv6) BIB001 that provides 2^128 logical addresses to deal with the exhaustion problem that becomes imminent in an Internet of Things. For addressing multiple recipients with a single packet, IP offers broadcast addresses based on the logical addressing scheme, i.e., a subnet. IPv6 does not support broadcasts. Both IP and IPv6 offer multicast, where a packet, sent to a special logical group address, is replicated for all recipients in the group. Both multicast and broadcast enable the One-to-Many Send pattern on top of Dynamic Routing. Ideally, the Internet layer promotes content neutrality. All routing decisions should depend on the header of IP packets independent from their content. But this neutrality is violated in practice. Network devices like firewalls, Quality-of-Service traffic shapers, or content-based routers derive routing decisions from payloads of IP packets. Such devices have functionality across layers in reference models, and they are referred to as middleboxes BIB003 . Their techniques are referred to as Deep Packet Inspection (DPI) BIB006 . The more properties a protocol supports, the more overhead for control structures is required, and timeliness is affected. This leads to an upper bound for the maximum transfer rate because there is always a time delay between sending and receiving information. If, for example, a protocol requires several interactions to synchronize state or acknowledge delivery, the delays accumulate and effectively limit the available transfer rate. The two most prominent transport layer protocols in the Internet are the Transmission Control Protocol (TCP) [260] and the User Datagram Protocol (UDP) . Both protocols are provided in modern operating systems. There is an increased interest in enhanced protocols, such as MultiPath TCP (MPTCP) , the Stream Control Transmission Protocol (SCTP) [302] , and Google's Quick UDP Internet Connections (QUIC), to overcome limitations of TCP and UDP. Transmission Control Protocol.
TCP is a protocol to connect two endpoints, i.e., processes, where information is exchanged in bidirectional byte streams and the correctness and order of bytes are guaranteed. For compatibility with packet-based networks, a byte stream is split into TCP segments. A TCP connection is stateful and establishes a session between the two endpoints. It requires a so-called three-way handshake to synchronize, which causes a delay before the streams can start. TCP distinguishes a client that initiates the handshake, and a server that listens for incoming connections. An attempt to reduce the latency caused by the handshake between two already familiar endpoints is TCP Fast Open [56]. The source and destination port numbers in TCP segment headers identify the client and service endpoints. There is no payload-type identifier in TCP; it just transfers byte streams. IANA [133] therefore maintains a list of default server listening ports that are automatically assumed by URI schemes if not explicitly overridden, e.g., port 80/TCP for HTTP. Reliable delivery in TCP is achieved by acknowledgments and retransmissions. Integrity is guaranteed by checksums. Segment headers also contain a window size that informs the receiver how many bytes the sender can handle in the other directional stream. The window mechanism enables flow and congestion control through rate adaptation. The maximum segment size is announced as a TCP option to avoid IP fragmentation. A problem in TCP is the so-called head-of-line blocking; if a single byte in a stream is incorrect or lost, the stream cannot proceed until retransmission succeeds. This poses a problem for messaging protocols implemented on top of TCP. The necessity of acknowledgments limits TCP to bilateral Send and Receive communication. User Datagram Protocol. UDP is a message-oriented unidirectional protocol that transports datagrams without acknowledgments or order. A datagram header holds a source and destination port, the payload length, and a checksum for integrity; there is no support for flow and congestion control. The overhead of UDP is small and it is stateless; therefore, no synchronization is required beforehand. If the size of a datagram exceeds the maximum payload of the IP packet, fragmentation takes place. Similar to TCP, a datagram is a sequence of bytes, and the payload type is not identified. IANA also assigns default server ports to UDP-based protocols. UDP supports the Send and Receive pattern without delivery guarantees. Because interaction is stateless, UDP is a candidate for multilateral One-to-Many Send on top of IP broadcast or multicast, where datagrams are replicated by the networking infrastructure. MultiPath TCP. A limitation of TCP is that segments eventually take the same network path, and a network problem disrupts a connection until error handling routines or timeouts are triggered. This problem affects mobile devices in radio dead spots or during roaming between wireless networks. Particularly for mobile devices, it is more and more common that a device is connected to several physical networks simultaneously, i.e., multihoming. MPTCP is a TCP extension to increase both redundancy and transfer rate by aggregating multiple paths over all available links as subflows of a single connection. The MPTCP connection does not fail if a path becomes congested or interrupted and an alternative path is still available. For compatibility with middleboxes, MPTCP is a host-side extension, and subflows are regular TCP connections.
A notable example using MPTCP is Apple Siri, which utilizes both Wi-Fi and 3G/4G networks for increased service availability in mobile devices [18]. Communication in MPTCP is still bilateral like TCP, i.e., Send and Receive patterns. Stream Control Transmission Protocol. TCP offers reliable delivery and strict byte order in the stream, but there are applications that require reliability while ordering is less important; TCP can create unnecessary delays in such a scenario [302]. Also, the streaming nature of TCP introduces complexity in higher messaging protocols because streams then need a notion of state, delimiters to indicate message boundaries, and measures to circumvent head-of-line blocking. SCTP [302] is an alternative to TCP that was designed to raise availability through multihoming as seen in MPTCP. An association between two endpoints is established in a four-way handshake. A handshake needs more interactions than in TCP, but the SCTP service endpoint stays stateless until synchronization is completed. This eliminates the well-known security vulnerability of TCP SYN flooding . SCTP is message-based and multiplexes byte-streamed messages, similar to MPTCP subflows, in a single association. SCTP offers reliability through checksums, optional ordering, and rate adaptation for flow and congestion control BIB004 . Messages are sequences of bytes, and like TCP and UDP, SCTP does not identify the payload type. Default listening server ports are managed by IANA. A drawback of SCTP is its limited popularity: Transportation over the Internet is not guaranteed because middleboxes might block it. The supported interaction patterns are Send and Receive. Quick UDP Internet Connections. Developed by Google and already available in the Chrome browser, QUIC is an experimental transport layer protocol to reduce latency and redundant data transmissions in Web application protocols. QUIC implements the SCTP multiplexing concept on top of UDP to overcome the issue of SCTP being filtered by middleboxes. Similar to TCP Fast Open, latency is reduced by removing handshakes between already familiar hosts. QUIC allows transparent data compression, provides checksums and retransmission for reliability, and supports congestion avoidance. Forward error correction minimizes obstructive retransmissions by an error-correcting code. In terms of patterns, QUIC offers bilateral Send and Receive. Other transport protocols. There exist several transport layer protocols for which no evident application in a Web or cloud context has been found. Examples for multilateral interaction are multicast transport protocols BIB002 . Examples for bilateral interaction are: the message-based Datagram Congestion Control Protocol (DCCP) BIB005 that offers congestion control, but its delivery is unreliable; UDP Lite with relaxed checksums; and Reliable UDP (RUDP) as an extension of UDP with acknowledgments, retransmissions, and flow control.
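The multilateral One-to-Many Send pattern on top of UDP and IP multicast, as described above, can be sketched with plain operating-system sockets; the group address and port below are arbitrary examples, and sender and receiver are assumed to run in the same local network:

import socket
import struct

GROUP, PORT = "224.1.1.1", 5007      # example multicast group and port (assumptions)

def send(message: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep traffic local
    sock.sendto(message, (GROUP, PORT))  # one datagram, replicated by the network

def receive() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # join the multicast group on all interfaces
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, sender = sock.recvfrom(1500)   # no ordering or delivery guarantees
    print(sender, data)

The sender transmits a single datagram to the group address; replication for all group members is performed by the network infrastructure, not by the sending host.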
Technologies for Web and cloud service interaction: a survey <s> Domain Name System <s> This RFC is the revised basic definition of The Domain Name System. It ::: obsoletes RFC-882. This memo describes the domain style names and ::: their used for host address look up and electronic mail forwarding. It ::: discusses the clients and servers in the domain name system and the ::: protocol used between them. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Domain Name System <s> This RFC is the revised specification of the protocol and format used ::: in the implementation of the Domain Name System. It obsoletes RFC-883. ::: This memo documents the details of the domain name client - server ::: communication. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Domain Name System <s> This document specifies how DNS resource records are named and ::: structured to facilitate service discovery. Given a type of service ::: that a client is looking for, and a domain in which the client is ::: looking for that service, this mechanism allows clients to discover a ::: list of named instances of that desired service, using standard DNS ::: queries. This mechanism is referred to as DNS-based Service Discovery, ::: or DNS-SD. <s> BIB003 </s> Technologies for Web and cloud service interaction: a survey <s> Domain Name System <s> DNSSEC was proposed more than 15 years ago but its (correct) adoption is still very limited. Recent cache poisoning attacks motivate deployment of DNSSEC. In this work we present a comprehensive overview of challenges and potential pitfalls of DNSSEC, including: Vulnerable configurations: we show that inter-domain referrals (via NS, MX and CNAME records) present a challenge for DNSSEC deployment and may result in vulnerable configurations. Due to the limited deployment so far, these configurations are expected to be popular. Incremental Deployment: we discuss implications of interoperability problems on DNSSEC validation by resolvers and potential for increased vulnerability due to popular practices of incremental deployment. Super-sized Response Challenges: we explain how large DNSSEC-enabled DNS responses cause interoperability challenges, and can be abused for DoS and even DNS poisoning. <s> BIB004
The Domain Name System (DNS) BIB001 BIB002 is an Internet core service, managed by IANA, and specifies an application layer protocol. As names are more usable for humans than numeric addresses, DNS is a distributed database for a hierarchical naming scheme that maps names onto IP addresses. Records in DNS have a certain type; e.g., type A is a host address record, type CNAME is an alias for another name, or type MX is reserved for SMTP-service-specific records. The hierarchical name of a host is then referred to as Fully Qualified Domain Name (FQDN). DNS is a binary and stateless protocol implementing the Send-Receive pattern. A client queries a service to resolve a name of a certain type, and the service eventually returns a record. DNS uses UDP as transport protocol for queries but also supports TCP for large responses or DNS transactions, i.e., zone transfers. The drawback of using TCP for short queries is the handshake delay. DNS has become a critical service for today's Internet, and the majority of services rely on DNS as an abstraction layer for locating endpoints; DNS service records (type SRV) even enable dynamic service discovery BIB003 . Nevertheless, a failure, misuse, or misconfiguration in DNS can lead to unforeseeable security consequences; e.g., attacks like DNS spoofing BIB004 are a serious threat. If DNS responses are tampered with, an attacker can redirect interaction to malign hosts. DNSSEC is therefore an attempt to secure name resolution by adding digital signatures to DNS records.
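In practice, clients rarely speak the DNS wire format themselves; they ask the operating system's resolver, as the following sketch shows (the host name is an example, and SRV or MX lookups would require a dedicated resolver library, which is out of scope here):

import socket

# Resolve address records for a service endpoint before opening a TCP connection.
for family, type_, proto, _, sockaddr in socket.getaddrinfo(
        "www.example.org", 443, proto=socket.IPPROTO_TCP):
    print(family, sockaddr)    # e.g. AddressFamily.AF_INET ('93.184.216.34', 443)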
Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> Internet applications currently have a choice between stream and datagram transport abstractions. Datagrams efficiently support small transactions and streams are suited for long-running conversations, but neither abstraction adequately supports applications like HTTP that exhibit a mixture of transaction sizes, or applications like FTP and SIP that use multiple transport instances. Structured Stream Transport (SST) enhances the traditional stream abstraction with a hierarchical hereditary structure, allowing applications to create lightweight child streams from any existing stream. Unlike TCP streams, these lightweight streams incur neither 3-way handshaking delays on startup nor TIME-WAIT periods on close. Each stream offers independent data transfer and flow control, allowing different transactions to proceed in parallel without head-of-line blocking, but all streams share one congestion control context. SST supports both reliable and best-effort delivery in a way that semantically unifies datagrams with streams and solves the classic "large datagram" problem, where a datagram's loss probability increases exponentially with fragment count. Finally, an application can prioritize its streams relative to each other and adjust priorities dynamically through out-of-band signaling. A user-space prototype shows that SST is TCP-friendly to within 2%, and performs comparably to a user-space TCP and to within 10% of kernel TCP on a WiFi network. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> A multistreamed web transport has the potential to reduce head-of-line (HOL) blocking, and improve response times in high latency Internet browsing environments, typical of developing regions. In our position paper [13], we proposed a design for HTTP over the multistreamed Stream Control Transmission Protocol (SCTP), and implemented the design for non-pipelined (HTTP 1.0) transactions in the Apache web server and Firefox web browser. We have since adapted Apache and Firefox to handle HTTP 1.1 persistent, pipelined transfers over SCTP streams. Initial emulation results over high latency paths reveal that HTTP over SCTP streams benefits from faster page downloads, and achieves visually perceivable improvements to pipelined objects' response times. Movies comparing page downloads of HTTP/TCP vs. HTTP/SCTP streams can be found on the author's website [12]. The promising results have motivated us to propose a low cost, easily realizable, gateway-based HTTP over SCTP deployment solution to enhance users' browsing experience in developing regions. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> This document defines the HTTP Cookie and Set-Cookie headers. These ::: headers can be used by HTTP servers to store state on HTTP user ::: agents, letting the servers maintain a stateful session over the ::: mostly stateless HTTP protocol. The cookie protocol has many ::: historical infelicities that degrade its security and privacy. NOTE: ::: If you have suggestions for improving the draft, please send email to ::: [email protected]. Suggestions with test cases are especially ::: appreciated. 
<s> BIB003 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> This specification defines a mechanism enabling web sites to declare ::: themselves accessible only via secure connections and/or for users to ::: be able to direct their user agent(s) to interact with given sites ::: only over secure connections. This overall policy is referred to as ::: HTTP Strict Transport Security (HSTS). The policy is declared by web ::: sites via the Strict-Transport-Security HTTP response header field ::: and/or by other means, such as user agent configuration, for example. ::: [STANDARDS-TRACK] <s> BIB004 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> The Hypertext Transfer Protocol (HTTP) is a stateless \%application- ::: level protocol for distributed, collaborative, hypertext information ::: systems. This document defines the semantics of HTTP/1.1 messages, as ::: expressed by request methods, request header fields, response status ::: codes, and response header fields, along with the payload of messages ::: (metadata and body content) and mechanisms for content negotiation. <s> BIB005 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> The Hypertext Transfer Protocol (HTTP) is a stateless application- ::: level protocol for distributed, collaborative, hypertext information ::: systems. This document defines range requests and the rules for ::: constructing and combining responses to those requests. <s> BIB006 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> The Hypertext Transfer Protocol (HTTP) is a stateless application- ::: level protocol for distributed, collaborative, hypertext information ::: systems. This document defines HTTP/1.1 conditional requests, ::: including metadata header fields for indicating state changes, request ::: header fields for making preconditions on such state, and rules for ::: constructing the responses to a conditional request when one or more ::: preconditions evaluate to false. <s> BIB007 </s> Technologies for Web and cloud service interaction: a survey <s> The hypertext transfer protocol <s> The Hypertext Transfer Protocol (HTTP) is a stateless application- ::: level protocol for distributed, collaborative, hypertext information ::: systems. This document provides an overview of HTTP architecture and ::: its associated terminology, defines the "http" and "https" Uniform ::: Resource Identifier (URI) schemes, defines the HTTP/1.1 message syntax ::: and parsing requirements, and describes related security concerns for ::: implementations. <s> BIB008
HTTP, originally specified in RFC 2616 , is the fundamental application layer protocol for the Web and many service technologies. It is stateless and implements the Send-Receive pattern: A client sends a request message, and the service answers with a response message. HTTP is designed to operate on top of TCP, and both control and content are sent in a single TCP connection, where control instructions are text based. To separate control from data, HTTP specifies a header format and delimiters. Figure 10 shows a Send-Receive cycle between a client and a service. Both HTTP request and response messages specify a header and an optional body separated by a delimiter. The request header defines the method, the URI-path of a resource, the protocol version, and a list of header fields. The presence of a body depends on the method. Similarly, the response header holds a status code, a list of response header fields, and eventually a body, i.e., the requested content. Header fields make HTTP extensible. Some headers are mandatory, others are optional. Content-Type is a central header field to specify the MIME type of media. The content type of a requested resource is therefore undefined until the HTTP response message arrives at the client. Today's HTTP/1.1 optimizes its predecessor versions in several ways. HTTP/1.1 supports eight methods: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, and CONNECT. The Upgrade header enables the client and service to switch the application protocol after a Send-Receive cycle. The Host header distinguishes FQDN authorities that are hosted on the same server, a common practice in Web hosting. Also, HTTP/1.1 introduces persistent TCP connections and pipelining of requests to minimize the accumulating delay caused by the TCP handshakes for every requested resource. (Fig. 10: after the client has established a TCP connection with the service, a request is sent as a text-based stream; the service parses the request and returns a response containing a header and the resource through the other directional stream of the TCP connection. Depending on the HTTP method, a request eventually has a content, e.g., from a form submission or file upload.) The standard proclaims that a client should not exceed two simultaneous TCP connections to a service. HTTP differentiates Content-Encoding of the requested resource and Transfer-Encoding for transport-only compression between client and service. HTTP also offers fine-grained caching mechanisms, e.g., by timestamps, the ETag header, and conditional GET, such that clients and so-called proxies can minimize data transfer. HTTP/1.1 allows client- or service-driven content negotiation for resources BIB005 . In service-driven negotiation, the service considers the client's User-Agent, client-side content-type restrictions (Accept), accepted character encodings for text-based formats (Accept-Charset), accepted content encodings (Accept-Encoding), and personal natural language preferences (Accept-Language) for sending a suitable representation. In client-driven negotiation, also referred to as agent driven, the service returns a multiple-choices status in a first response that lists all available representations, from which the client can choose in a second Send-Receive cycle. While HTTP is stateless in principle, HTTP/1.1 adds support for so-called cookies using the Set-Cookie and Cookie header fields to track state across Send-Receive cycles BIB003 . A cookie is basically a text string that identifies a client's HTTP session.
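The text-based Send-Receive cycle becomes visible when a request is written onto a plain TCP socket; the following sketch (host and resource are examples, real clients would use an HTTP library and handle chunked transfer encoding) also includes some of the content-negotiation header fields discussed above:

import socket

HOST = "www.example.org"

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"                        # mandatory in HTTP/1.1
    "Accept: text/html,application/xml\r\n"    # service-driven content negotiation
    "Accept-Language: en\r\n"
    "Connection: close\r\n"
    "\r\n"                                     # empty line separates header from body
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

header, _, body = response.partition(b"\r\n\r\n")
print(header.decode("iso-8859-1"))             # status line and response header fields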
The responsibility for correct application state tracking is on the service side; cookies therefore have important security and privacy aspects. Recently, the HTTP/1.1 standard has been completely respecified in order to remove imprecisions that have led to ambiguous implementations BIB006 BIB007 BIB008 BIB005 . HTTP security. The use of SSL/TLS for TCP connections has become the de facto standard to secure HTTP interaction between a client and a service, and it is specified as HTTP Secure (HTTPS) . As HTTPS has its own URI scheme (https:), a user can recognize from a URL whether the access to a resource is protected. A client can eventually choose to access an identical resource through multiple transport mechanisms, e.g., HTTP or HTTPS. To notify a Web client that resources of a certain authority can only be accessed through HTTPS, the HTTP Strict Transport Security (HSTS) BIB004 specifies a response header field that informs the client about this policy. Using the CONNECT HTTP method, a client can ask a proxy service to establish a connection to the intended service on behalf of the client, and byte streams are forwarded. This functionality, referred to as HTTP tunneling, is required in proxies for HTTPS access to services. As exchanged data are encrypted, caching is not possible. Another way to secure HTTP interaction is to upgrade an existing TCP connection to a TLS session by using the Upgrade header in HTTP/1.1 . A drawback of this solution is that a user can no longer see from a URL whether access is encrypted or not. Push technology. In terms of patterns, a Send-Receive interaction in HTTP is synchronous and can only be initiated by the client; resources are pulled from a service. Due to wide availability of HTTP-enabled software and acceptance by middleboxes, HTTP has been exploited to achieve asynchronous interaction without breaking the protocol specification, e.g., for client-side Receive or Multi-Responses patterns. These techniques are commonly referred to as push technology: a service can push a resource to a client, preferably in real time, e.g., for data feeds or event notification, without it being explicitly requested. Historically, the first attempts resorted to client-side polling, and real-time event notification was not possible. To minimize the number of Send-Receive cycles and to decrease response times, long polling is similar to polling, but the HTTP request hangs, i.e., is not answered, until a server-side event or a timeout occurs. Comet, also known as HTTP Streaming or HTTP server push, exploits persistent connections in HTTP/1.1 to keep a single TCP connection open after a client requests an event resource. The service then gradually delivers events using MIME type multipart/x-mixed-replace for the response. Comet implements the Multi-Responses pattern. Reverse HTTP [163] exploits the Upgrade feature of HTTP/1.1 to change the application layer protocol and switch the roles of client and service. The service becomes an HTTP client in the established TCP connection, and real-time events are then issued as HTTP requests from the original service to the original client, i.e., client-side Receive-Send in terms of patterns. For simultaneous bidirectional communication between client and service, Bidirectional-streams Over Synchronous HTTP (BOSH) [246] maintains two separate TCP connections.
The client uses the first connection to issue HTTP request messages to the service; the second connection is a hanging request initiated by the client, so the service can interact with the client asynchronously. This enables the Send and Receive patterns for both client and service. Two recent Web techniques in HTML5 are Server-Sent Events (SSE) [354] and WebSocket [74] . For SSE, the client requests a resource that acts as an event resource similar to Comet, i.e., implements the Multi-Responses pattern. The response is of MIME type text/event-stream, the TCP connection is kept open, and events are delivered as byte chunks. In case of a timeout, the client reconnects to the event resource. WebSocket establishes a bidirectional channel for simultaneous communication in both directions. A WebSocket connection behaves like a bidirectional byte-stream-oriented TCP connection between client and service for Send and Receive interaction, and it is established in a handshake by exploiting the HTTP Upgrade header in HTTP/1.1. During this HTTP Send-Receive cycle for the WebSocket handshake, properties are negotiated using HTTP headers, including Sec-WebSocket-Protocol to agree on a subprotocol for continuation after the handshake (see the handshake sketch below). WebSocket supports operation on top of TLS and provides individual URI schemes for unencrypted (ws:) and encrypted (wss:) communication. Compared with Comet, Reverse HTTP, or BOSH, WebSocket has the smallest overhead because it is independent of HTTP once established. Nevertheless, clients and services need an explicit application layer protocol to operate on top of a WebSocket connection. Performance and speed. Performance is an issue in HTTP when many resources are requested simultaneously. Even when HTTP/1.1 persistent connections and pipelining are supported, access becomes somewhat serialized because of limited simultaneous TCP connections. Allowing more parallel connections has a negative effect on the availability of a service because there is an upper limit on how many TCP connections a host can serve simultaneously. A workaround for the connection limit is domain sharding; resources are distributed over multiple authorities, controlled by the service provider, so a client can use the maximum number of TCP connections to every authority in parallel. In general, there are several approaches to increase Web performance and user experience:
- minimize protocol handshake latency;
- reduce protocol overhead;
- multiplexed access;
- prioritization of access.
Two standards that have never left the experimental status are the binary HTTP-NG [328], intended as a successor of HTTP, and multiplexing HTTP access over a single TCP connection based on SMUX . Other experimental multiplexing approaches are Structured Stream Transport BIB001 and HTTP over SCTP BIB002 . The state-of-the-art, Google SPDY , acts conceptually as a session layer protocol between TCP and HTTP to increase Web performance through multiplexed resource access, prioritization, and compression of headers and content to reduce overhead. SPDY changes the wire format, but retains the semantics of HTTP; it is basically an augmentation to HTTP, and no individual URI scheme is specified for compatibility reasons. SPDY also allows service-initiated interaction to push related resources to the client before they are asked for, i.e., the Multi-Responses interaction pattern. As SPDY changes the wire format of HTTP, encryption is mandatory to prevent middleboxes from tampering with interactions.
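As a concrete illustration of the WebSocket opening handshake referenced above, the following client-side sketch sends the Upgrade request and checks the accept key according to the fixed GUID from RFC 6455; the endpoint is hypothetical, and an unencrypted connection on port 80 is assumed for brevity:

import base64
import hashlib
import os
import socket

HOST, PATH = "echo.example.org", "/"        # hypothetical WebSocket endpoint
GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

key = base64.b64encode(os.urandom(16)).decode("ascii")
request = (
    f"GET {PATH} HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    f"Sec-WebSocket-Key: {key}\r\n"
    "Sec-WebSocket-Version: 13\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = sock.recv(4096).decode("iso-8859-1")

# The service proves it understood the handshake by echoing SHA-1(key + GUID).
expected = base64.b64encode(
    hashlib.sha1((key + GUID).encode("ascii")).digest()).decode("ascii")
if "101" in response.splitlines()[0] and expected in response:
    print("handshake completed; the TCP connection now carries WebSocket frames")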
SPDY requires TLS with Next Protocol Negotiation (NPN) support for backward compatibility with HTTPS. When a client accesses an HTTPS service, the service announces SPDY support through NPN during the TLS handshake, and the client can choose to proceed with SPDY or traditional HTTP within the TLS session. The experimental QUIC transport protocol was specifically designed for SPDY to remove the delays between familiar hosts caused by the initial TCP handshake and to optimize flow control. SPDY and WebSocket have been proposed as two core technologies in the upcoming HTTP/2, which is currently in the specification process. SPDY is already confirmed as the basis for HTTP/2, but protocol negotiation will be switched from NPN to the more general TLS Application-Layer Protocol Negotiation (ALPN) mechanism in the future [29]. Contrary to SPDY, encryption in HTTP/2 is not mandatory; however, some implementations have stated that they will only support HTTP/2 over an encrypted connection.
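To make the push techniques above concrete, the following sketch outlines a long-polling client; the event URL is hypothetical, and the third-party requests library is assumed to be available.

```python
# Sketch of a long-polling client (hypothetical event resource). The GET
# hangs on the service side until an event occurs or a timeout is reached,
# after which the client simply re-issues the request.
import requests

EVENT_URL = "https://example.org/events"  # hypothetical event resource

def poll_events():
    while True:
        try:
            # The service holds the request open; 60 s client-side timeout.
            resp = requests.get(EVENT_URL, timeout=60)
            if resp.status_code == 200 and resp.text:
                print("event:", resp.text)
            # An empty body or 204 means no event occurred; poll again.
        except requests.exceptions.Timeout:
            pass  # timeouts are expected in long polling; reconnect
```

Protocol negotiation during the TLS handshake, as discussed for SPDY and HTTP/2, can be observed with Python's standard ssl module. The sketch below uses the ALPN mechanism (the successor of NPN mentioned above); the host name is a placeholder, and the offered protocol identifiers are illustrative.

```python
# Sketch of application-protocol negotiation via TLS ALPN using the
# standard ssl module. The client offers HTTP/2 ("h2") and HTTP/1.1 and
# learns which protocol the service selected within the TLS session.
import socket
import ssl

HOST = "example.org"  # hypothetical HTTPS service

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offered application protocols

with socket.create_connection((HOST, 443)) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=HOST) as tls:
        # Returns None if the service did not negotiate ALPN at all.
        print("negotiated protocol:", tls.selected_alpn_protocol())
```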
Technologies for Web and cloud service interaction: a survey <s> Messaging protocols <s> The Simple Authentication and Security Layer (SASL) is a framework for ::: providing authentication and data security services in connection- ::: oriented protocols via replaceable mechanisms. It provides a ::: structured interface between protocols and mechanisms. The resulting ::: framework allows new protocols to reuse existing mechanisms and allows ::: old protocols to make use of new mechanisms. The framework also ::: provides a protocol for securing subsequent protocol exchanges within ::: a data security layer. This document describes how a SASL mechanism ::: is structured, describes how protocols include support for SASL, and ::: defines the protocol for carrying a data security layer over a ::: connection. In addition, this document defines one SASL mechanism, the ::: EXTERNAL mechanism. This document obsoletes RFC 2222. [STANDARDS- ::: TRACK] <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Messaging protocols <s> This document defines a binding for the XMPP protocol over a WebSocket ::: transport layer. A WebSocket binding for XMPP provides higher ::: performance than the current HTTP binding for XMPP. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Messaging protocols <s> The Constrained Application Protocol (CoAP) is a specialized web ::: transfer protocol for use with constrained nodes and constrained ::: (e.g., low-power, lossy) networks. The nodes often have 8-bit ::: microcontrollers with small amounts of ROM and RAM, while constrained ::: networks such as IPv6 over Low-Power Wireless Personal Area Networks ::: (6LoWPANs) often have high packet error rates and a typical throughput ::: of 10s of kbit/s. The protocol is designed for machine- to-machine ::: (M2M) applications such as smart energy and building automation. CoAP ::: provides a request/response interaction model between application ::: endpoints, supports built-in discovery of services and resources, and ::: includes key concepts of the Web such as URIs and Internet media ::: types. CoAP is designed to easily interface with HTTP for integration ::: with the Web while meeting specialized requirements such as multicast ::: support, very low overhead, and simplicity for constrained ::: environments. <s> BIB003
An increasingly popular group of protocols is designed for message passing, where peers interact through a message-oriented middleware or a message broker service. Messaging solutions often specify their own wire formats, i.e., application layer protocols, and this subsection enumerates the most important ones with respect to cloud computing. Architectures utilizing these protocols are then discussed in Sect. 5.2.4.

Proprietary wire formats. The Microsoft Message Queuing (MSMQ) service specifies individual wire formats and utilizes TCP and UDP during interaction. Version 3.0 of MSMQ also introduces messaging through HTTP or HTTPS to overcome middleboxes. For sending messages to multiple recipients on different hosts, MSMQ supports IP multicast, so message replication is implicitly performed by the network and not by MSMQ. TIBCO messaging middleware likewise relies on proprietary wire formats. The OpenMQ binary wire format is a protocol for Java Glassfish. Another individual binary format is OpenWire for Apache ActiveMQ. Apache Kafka also specifies a binary protocol on top of TCP transportation. ZeroMQ is an intelligent socket library for exchanging arbitrary binary messages, and ZeroMQ specifies a protocol to split those messages into one or more frames for transportation over TCP.

The Advanced Message Queuing Protocol (AMQP) is an open messaging standard. Its transport model specifies a binary wire format that multiplexes channels into a single TCP connection or SCTP association. It supports flow control for messaging, authentication BIB001 , and encryption by TLS. AMQP is stateful because communicating peers negotiate a session with a handshake after a TCP connection has been established. Peers exchange so-called frames that contain header fields and binary content. For an interoperable representation of messages, AMQP offers a self-contained type system that includes a set of primitive datatypes, descriptors for specifying custom types, restricted datatypes, and composite types for structured information. This allows self-describing annotated content when interaction between heterogeneous platforms takes place.

Known as Jabber, and initially motivated by portable instant messaging, the Extensible Messaging and Presence Protocol (XMPP) [283] is another standard for messaging. Short text-based messages, i.e., XML stanzas, are bidirectionally exchanged in open-ended XML streams over a long-lived TCP connection, optionally protected by TLS, between a client and a service or service-to-service. To overcome middleboxes, XMPP can utilize HTTP and push technology, i.e., BOSH [245]. Also, XMPP over WebSocket is in an experimental state BIB002 . The text-based Streaming Text Oriented Messaging Protocol (STOMP) [304] is another interoperable messaging protocol that bears resemblance to HTTP. It operates on top of a bidirectional byte-stream-based transport protocol such as TCP or WebSocket, supports TLS for encryption, and uses UTF-8 as the default character encoding. MQ Telemetry Transport (MQTT) is an open standard for lightweight messaging on top of TCP, targeting low bandwidths as encountered in the Internet of Things. It defines a binary message format with a small fixed-size header of only two bytes and therefore little overhead. MQTT also supports SSL/TLS for encrypted transfers. The Constrained Application Protocol (CoAP) BIB003 is another standard for the Internet of Things, where hardware is typically constrained. CoAP has a binary message format for asynchronous messaging over UDP. The standard specifies mappings between CoAP and HTTP, and they have similar semantics.
CoAP offers optional delivery guarantees using acknowledgments, supports Send-Receive, and also allows multilateral interaction using IP multicast. The standard refers to DTLS for securing CoAP interactions. The Data Distribution Service for Real-Time Systems (DDS) is a machine-to-machine middleware specification with applications in the Internet of Things. DDS also specifies the Real-Time Publish-Subscribe (RTPS) binary wire protocol for TCP- and UDP-based messaging, including IP multicast for One-to-many Send interaction.
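As an illustration of a text-based messaging wire format, the following sketch hand-crafts STOMP frames over a plain TCP socket; the broker address and destination queue are hypothetical, and error handling is omitted.

```python
# Minimal sketch of STOMP's text-based wire format over a TCP socket.
# Each frame consists of a command line, header lines, a blank line,
# an optional body, and a terminating NUL byte.
import socket

BROKER = ("broker.example.org", 61613)  # hypothetical STOMP broker

def frame(command, headers, body=""):
    head = "\n".join(f"{k}:{v}" for k, v in headers.items())
    return f"{command}\n{head}\n\n{body}\x00".encode("utf-8")

with socket.create_connection(BROKER) as s:
    # Open a STOMP session on top of the TCP connection.
    s.sendall(frame("CONNECT", {"accept-version": "1.2", "host": "example"}))
    print(s.recv(4096).decode("utf-8"))  # expect a CONNECTED frame

    # Send one message to a broker-managed destination.
    s.sendall(frame("SEND",
                    {"destination": "/queue/demo",
                     "content-type": "text/plain"},
                    "hello"))
```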
Technologies for Web and cloud service interaction: a survey <s> Web mashup <s> Purpose – Mashups have been studied extensively in the literature; nevertheless, the large body of work in this area focuses on service/data level integration and leaves UI level integration, hence UI mashups, almost unexplored. The latter generates digital environments in which participating sources exist as individual entities; member applications and data sources share the same graphical space particularly in the form of widgets. However, the true integration can only be realized through enabling widgets to be responsive to the events happening in each other. The authors call such an integration “widget orchestration” and the resulting application “mashup by orchestration”. This article aims to explore and address challenges regarding the realization of widget‐based UI mashups and UI level integration, prominently in terms of widget orchestration, and to assess their suitability for building web‐based personal environments.Design/methodology/approach – The authors provide a holistic view on mashups and... <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Web mashup <s> The web is growing quickly, substructures are coming up: a {social, semantic, etc.} web, or the {business, services, etc.} ecosystem which includes all resources of a specific web habitat. In the mashup ecosystem, developers are in intense scientific activity, what is easily measured by the number of their recent papers. Since mashups inherit an opportunistic (participatory) attitude, a main point of research is enabling users to create situation-specific mashups with little effort. After an overview, the chapter highlights areas of intensive discussion one by one: mashup description and modeling, semantic mashups, media mashups, ubiquitous mashups and end-user related development. Information is organized in two levels: right under the headings, a block of topic-related references may pop up. It is addressed to readers with deeper interest. After that, the text for everybody explains and illustrates innovative approaches. The chapter ends with an almost fail-safe outlook: given the growth of the web, the ecosystem of mashups will keep branching out. Core mashup features such as reuse of resources, user orientation, and versatile coordination (loose coupling) of components will propagate. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Web mashup <s> The Internet is no longer a web of linked pages, but a flourishing swarm of connected sites sharing resources and data. Modern web sites are increasingly interconnected, and a majority rely on content maintained by a third party. Web mashups are at the very extreme of this evolution, built almost entirely around external content. In that sense the web is becoming mashed up. This decentralized setting implies complex trust relationships among involved parties, since each party must trust all others not to compromise data. This poses a question: ::: ::: How can we secure the mashed up web? ::: ::: From a language-based perspective, this thesis approaches the question from two directions: attacking and securing the languages of the web. The first perspective explores new challenging scenarios and weaknesses in the modern web, identifying novel attack vectors, such as polyglot and mutation-based attacks, and their mitigations. 
The second perspective investigates new methods for tracking information in the browser, providing frameworks for expressing and enforcing decentralized information-flow policies using dynamic run-time monitors, as well as architectures for deploying such monitors. <s> BIB003
A mashup composes so-called Web components, e.g., multimedia resources or script code, from different origins into a new website BIB002 . Examples of mashup components are JavaScript libraries, gadgets, and services like Google Maps. Web components and the mashup principle achieve composability and reusability of resources, which is also a contributing factor for the success of the Web 2.0. A mashup service, also called integrator, can be distinguished based on the location where integration of Web components takes place BIB001 :

- Service-side mashup. When a client requests a mashed-up service, the service first gathers the foreign resources from other origins, processes them, and returns the integrated markup and media to the client.
- Client-side mashup. A client-side mashup service returns a markup skeleton and script code, so the client embeds components statically or loads them dynamically using AJAX. The client is then responsible for integration.

Fig. 12 A mashup integrates n Web resources from different origins as Web components C_1, ..., C_n. a Client-side mashup; b service-side mashup

Figure 12 shows both cases. Script code acts as glue between Web components, and the information flow between components can lead to security issues. Web components therefore require proper encapsulation BIB003 . In terms of patterns, a mashup enables one or more parallel bilateral Send-Receive interactions between a client and the Web servers that host Web components, possibly routing messages between them. Client-side content policies lead to two extreme cases of local script code interaction: no separation or isolation, by direct embedding in the same DOM using a script tag; and strong separation and isolation, by embedding in isolated DOMs using object or iframe elements. Ryck et al. survey state-of-the-art techniques for more fine-grained controls in mashups and distinguish four categories of Web component integration restrictions.

To ease composability of mashup components, public API specifications, also referred to as Open APIs, have become popular in recent years. Open APIs can range from Web resource access to sophisticated service architectures as discussed in Sect. 5.2. In particular, OpenSocial is an initiative to standardize APIs for building social Web applications. Integrating the social dimension is a step toward personalized user experience, which is a characteristic of the Web 3.0.
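The service-side integration case can be sketched with standard-library Python: the integrator fetches foreign components on the server and returns one combined page. The component URLs are hypothetical, and the sketch omits caching, error handling, and any sanitization of the embedded content.

```python
# Sketch of a service-side mashup integrator: foreign Web components are
# gathered on the service side and returned to the client as one page.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

COMPONENT_URLS = [                          # hypothetical component origins
    "https://weather.example.com/widget",
    "https://news.example.org/headlines",
]

class MashupHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Gather the foreign resources, then return the integrated markup.
        parts = [urlopen(url).read().decode("utf-8") for url in COMPONENT_URLS]
        page = ("<html><body>"
                + "".join(f"<div>{p}</div>" for p in parts)
                + "</body></html>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(page.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MashupHandler).serve_forever()
```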
Technologies for Web and cloud service interaction: a survey <s> Remote procedure calls <s> Remote procedure calls ( RPC ) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> Remote procedure calls <s> Web services are frequently described as the latest incarnation of distributed object technology. This misconception, perpetuated by people from both industry and academia, seriously limits broader acceptance of the true Web services architecture. Although the architects of many distributed and Internet systems have been vocal about the differences between Web services and distributed objects, dispelling the myth that they are closely related appears difficult. Many believe that Web services is a distributed systems technology that relies on some form of distributed object technology. Unfortunately, this is not the only common misconception about Web services. We seek to clarify several widely held beliefs about the technology that are partially or completely wrong. Within the distributed technology world, it is probably more appropriate to associate Web services with messaging technologies because they share a common architectural view, although they address different application types. Web services technology will have a dramatic enabling effect on worldwide interoperable distributed computing once everyone recognizes that Web services are about interoperable document-centric computing, not distributed objects. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> Remote procedure calls <s> Middleware is a software layer standing between the operating system and the application, enabling the transparent integration of distributed objects. In this paper, we propose a framework that facilitates the comparison of middleware infrastructures. Our approach serves for identifying similarities and differences between middleware infrastructures and revealing their advantages and disadvantages when facing the question of choosing one that satisfies the application’s requirements. Based on the proposed framework, we compare CORBA with J2EE and COM+, three of the most widely used infrastructures in both industry and academia. 
<s> BIB003 </s> Technologies for Web and cloud service interaction: a survey <s> Remote procedure calls <s> This document describes the ONC (Open Network Computing) Remote ::: Procedure Call (ONC RPC Version 2) protocol as it is currently ::: deployed and accepted. This document obsoletes [RFC1831]. <s> BIB004
RPC is a simple yet powerful architectural style to offer a service by exposing network-accessible functions BIB001 . In terms of patterns, RPC is bilateral Send-Receive between a client and a service, as shown in Fig. 13. The client initiates the interaction, and if not stated otherwise, a network function call in RPC is synchronous and interfaces are statically typed. Specifying an RPC service requires an agreed-upon transport mechanism, e.g., TCP or HTTP, an agreement on how to address and bind to a remote function, and a data serialization format to exchange structured data, e.g., ASN.1. Historically, one of the most widely deployed RPC solutions is Open Network Computing RPC (ONC-RPC) BIB004 , e.g., for network file systems. ONC-RPC originates from Sun Microsystems, and APIs are available on practically all major platforms. ONC-RPC uses TCP and UDP as transport mechanism, where call and return values are serialized in the XDR format. RPC for distributed systems has evolved from function calls to distributed computation over shared objects BIB002 . Zarras BIB003 analyzes three prominent RPC-style middleware approaches that are based on object sharing:

- Microsoft Component Services (COM+) [181];
- the OMG-standardized Common Object Request Broker Architecture (CORBA) [221]; and
- Java Remote Method Invocation (RMI) [237] in the Java Platform, Enterprise Edition (Java EE) [235].

All three approaches define individual data serialization formats and support communication over the Internet protocols, in particular, TCP. While protocols and wire formats for COM+ are specified in the DCE/RPC standard, CORBA uses the Internet Inter-ORB Protocol (IIOP) for communication over TCP connections. Java RMI specifies individual protocols and wire formats on top of TCP, e.g., the Java Remote Method Protocol (JRMP) or Oracle Remote Method Invocation (ORMI), but also supports RMI over IIOP [238] for compatibility with CORBA systems. The three approaches enable shared objects in an RPC-style architecture. An object broker for allocation, garbage collection, and transactions is implicitly required BIB002 . To represent a shared object on the client side, a stub abstracts away the serialization and communication. The service interface is therefore statically typed. A recent middleware framework, similar to CORBA, but compatible with Web protocols, is the Internet Communications Engine.
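A minimal sketch of the RPC style can be given with Python's built-in XML-RPC modules; this is not one of the middlewares named above, but it exhibits the same synchronous, bilateral Send-Receive pattern with a client-side stub that hides serialization and transport.

```python
# Service side: expose a network-accessible function over XML-RPC (HTTP).
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")   # bindable name of the remote function
server.serve_forever()
```

A corresponding client uses a proxy object that behaves like a local stub; the call blocks until the result is returned.

```python
# Client side: the ServerProxy acts as a stub and hides marshalling and HTTP.
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # synchronous Send-Receive; prints 5
```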
Technologies for Web and cloud service interaction: a survey <s> SOAP/WS-* Web services <s> This paper introduces the major components of, and standards associated with, the Web services architecture. The different roles associated with the Web services architecture and the programming stack for Web services are described. The architectural elements of Web services are then related to a real-world business scenario in order to illustrate how the Web services approach helps solve real business problems. <s> BIB001 </s> Technologies for Web and cloud service interaction: a survey <s> SOAP/WS-* Web services <s> Web services are frequently described as the latest incarnation of distributed object technology. This misconception, perpetuated by people from both industry and academia, seriously limits broader acceptance of the true Web services architecture. Although the architects of many distributed and Internet systems have been vocal about the differences between Web services and distributed objects, dispelling the myth that they are closely related appears difficult. Many believe that Web services is a distributed systems technology that relies on some form of distributed object technology. Unfortunately, this is not the only common misconception about Web services. We seek to clarify several widely held beliefs about the technology that are partially or completely wrong. Within the distributed technology world, it is probably more appropriate to associate Web services with messaging technologies because they share a common architectural view, although they address different application types. Web services technology will have a dramatic enabling effect on worldwide interoperable distributed computing once everyone recognizes that Web services are about interoperable document-centric computing, not distributed objects. <s> BIB002 </s> Technologies for Web and cloud service interaction: a survey <s> SOAP/WS-* Web services <s> Loose coupling is often quoted as a desirable property of systems architectures. One of the main goals of building systems using Web technologies is to achieve loose coupling. However, given the lack of a widely accepted definition of this term, it becomes hard to use coupling as a criterion to evaluate alternative Web technology choices, as all options may exhibit, and claim to provide, some kind of "loose" coupling effects. This paper presents a systematic study of the degree of coupling found in service-oriented systems based on a multi-faceted approach. Thanks to the metric introduced in this paper, coupling is no longer a one-dimensional concept with loose coupling found somewhere in between tight coupling and no coupling. The paper shows how the metric can be applied to real-world examples in order to support and improve the design process of service-oriented systems. <s> BIB003
Many RPC architectures require static typing in an IDL and, as a consequence, exhibit tight coupling from code restrictions. The goal of so-called big Web services is to relax this coupling through open standards for heterogeneous platforms BIB003 . A Web service deals with XML documents and document encapsulation to evade the complexity of distributed computation on shared objects BIB002 . The core technology in this attempt is the Simple Object Access Protocol (SOAP) for expressing messages as XML documents. SOAP is transport-agnostic by design and supports all kinds of transport mechanisms, including message-oriented middleware. However, HTTP has become the de facto industry standard for SOAP transport because of its middlebox compatibility BIB001 . Web services standards (referred to as WS-*) extend SOAP-based interaction with security, reliability, transaction, orchestration, workflow, and business process aspects.

Technologies. In accordance with Alonso et al., a composite service consumes other services and, according to the application logic, allows more advanced interaction patterns.

Fig. 15 Web services interaction stack in accordance with the WS-I Basic Profile [367], restricted to SOAP over HTTP as transport mechanism

Due to the large number and many versions of WS-* standards, the Web Services Interoperability Organization (WS-I) establishes best practices for interoperability, published as Basic Profiles [367]. Figure 15 is limited to Basic Profile protocols and standards for transport and messaging.
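To illustrate SOAP over HTTP as constrained by the Basic Profile, the following sketch posts a hand-built SOAP 1.1 envelope with the standard library; the endpoint, namespace, operation, and SOAPAction value are hypothetical.

```python
# Sketch of a SOAP 1.1 request over HTTP POST using only the standard
# library; in practice a WS-* toolkit would generate the envelope from WSDL.
from urllib.request import Request, urlopen

ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.org/stock">
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

req = Request(
    "https://example.org/soap/endpoint",        # hypothetical service endpoint
    data=ENVELOPE.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": '"http://example.org/stock/GetQuote"',  # SOAP 1.1 action
    },
)
with urlopen(req) as resp:
    print(resp.read().decode("utf-8"))          # SOAP response envelope (XML)
```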