Meta‐analysis and Mendelian randomization: A review <s> | CONCLUSIONS AND FUTURE DIRECTIONS <s> Identifying genetic variants that influence human height will advance our understanding of skeletal growth and development. Several rare genetic variants have been convincingly and reproducibly associated with height in mendelian syndromes, and common variants in the transcription factor gene HMGA2 are associated with variation in height in the general population. Here we report genome-wide association analyses, using genotyped and imputed markers, of 6,669 individuals from Finland and Sardinia, and follow-up analyses in an additional 28,801 individuals. We show that common variants in the osteoarthritis-associated locus GDF5-UQCC contribute to variation in height with an estimated additive effect of 0.44 cm (overall P < 10(-15)). Our results indicate that there may be a link between the genetic basis of height and osteoarthritis, potentially mediated through alterations in bone growth and development. <s> BIB001 </s> Meta‐analysis and Mendelian randomization: A review <s> | CONCLUSIONS AND FUTURE DIRECTIONS <s> Mendelian randomization is the use of genetic instrumental variables to obtain causal inferences from observational data. Two recent developments for combining information on multiple uncorrelated instrumental variables (IVs) into a single causal estimate are as follows: (i) allele scores, in which individual-level data on the IVs are aggregated into a univariate score, which is used as a single IV, and (ii) a summary statistic method, in which causal estimates calculated from each IV using summarized data are combined in an inverse-variance weighted meta-analysis. To avoid bias from weak instruments, unweighted and externally weighted allele scores have been recommended. Here, we propose equivalent approaches using summarized data and also provide extensions of the methods for use with correlated IVs. We investigate the impact of different choices of weights on the bias and precision of estimates in simulation studies. We show that allele score estimates can be reproduced using summarized data on genetic associations with the risk factor and the outcome. Estimates from the summary statistic method using external weights are biased towards the null when the weights are imprecisely estimated; in contrast, allele score estimates are unbiased. With equal or external weights, both methods provide appropriate tests of the null hypothesis of no causal effect even with large numbers of potentially weak instruments. We illustrate these methods using summarized data on the causal effect of low-density lipoprotein cholesterol on coronary heart disease risk. It is shown that a more precise causal estimate can be obtained using multiple genetic variants from a single gene region, even if the variants are correlated. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. <s> BIB002 </s> Meta‐analysis and Mendelian randomization: A review <s> | CONCLUSIONS AND FUTURE DIRECTIONS <s> MendelianRandomization is a software package for the R open-source software environment that performs Mendelian randomization analyses using summarized data. The core functionality is to implement the inverse-variance weighted, MR-Egger and weighted median methods for multiple genetic variants. Several options are available to the user, such as the use of robust regression, fixed- or random-effects models and the penalization of weights for genetic variants with heterogeneous causal estimates. 
Extensions to these methods, such as allowing for variants to be correlated, can be chosen if appropriate. Graphical commands allow summarized data to be displayed in an interactive graph, or the plotting of causal estimates from multiple methods, for comparison. Although the main method of data entry is directly by the user, there is also an option for allowing summarized data to be incorporated from the PhenoScanner database of genotype-phenotype associations. We hope to develop this feature in future versions of the package. The R software environment is available for download from [https://www.r-project.org/]. The MendelianRandomization package can be downloaded from the Comprehensive R Archive Network (CRAN) within R, or directly from [https://cran.r-project.org/web/packages/MendelianRandomization/]. Both R and the MendelianRandomization package are released under GNU General Public Licenses (GPL-2|GPL-3). <s> BIB003 </s> Meta‐analysis and Mendelian randomization: A review <s> | CONCLUSIONS AND FUTURE DIRECTIONS <s> It may not always be possible to blind participants of a randomized controlled trial for treatment allocation. As a result, estimators of the actual treatment effect may be biased. In this paper, we will extend a novel method, originally introduced in genetic research, for instrumental variable meta-analysis, adjusting for bias due to unblinding of trial participants. Using simulation studies, this novel method, "Egger Correction for non-Adherence", is introduced and compared to the performance of the "intention-to-treat," "as-treated," and conventional "instrumental variable" estimators. Scenarios considered (time-varying) non-adherence, confounding, and between-study heterogeneity. The effect of treatment on a binary endpoint was quantified by means of a risk difference. In all scenarios with unblinded treatment allocation, the Egger Correction for non-Adherence method was the least biased estimator. However, unless the variation in adherence was relatively large, precision was lacking, and power did not surpass 0.50. As a comparison, in a meta-analysis of blinded randomized controlled trials, power of the conventional IV estimator was 1.00 versus at most 0.14 for the Egger Correction for non-Adherence estimator. Due to this lack of precision and power, we suggest to use this method mainly as a sensitivity analysis. <s> BIB004 </s> Meta‐analysis and Mendelian randomization: A review <s> | CONCLUSIONS AND FUTURE DIRECTIONS <s> Our health is affected by many exposures and risk factors, including aspects of our lifestyles, our environments, and our biology. It can, however, be hard to work out the causes of health outcomes because ill-health can influence risk factors and risk factors tend to influence each other. To work out whether particular interventions influence health outcomes, scientists will ideally conduct a so-called randomized controlled trial, where some randomly-chosen participants are given an intervention that modifies the risk factor and others are not. But this type of experiment can be expensive or impractical to conduct. Alternatively, scientists can also use genetics to mimic a randomized controlled trial. This technique – known as Mendelian randomization – is possible for two reasons. First, because it is essentially random whether a person has one version of a gene or another. Second, because our genes influence different risk factors. 
For example, people with one version of a gene might be more likely to drink alcohol than people with another version. Researchers can compare people with different versions of the gene to infer what effect alcohol drinking has on their health. Every day, new studies investigate the role of genetic variants in human health, which scientists can draw on for research using Mendelian randomization. But until now, complete results from these studies have not been organized in one place. At the same time, statistical methods for Mendelian randomization are continually being developed and improved. To take advantage of these advances, Hemani, Zheng, Elsworth et al. produced a computer programme and online platform called “MR-Base”, combining up-to-date genetic data with the latest statistical methods. MR-Base automates the process of Mendelian randomization, making research much faster: analyses that previously could have taken months can now be done in minutes. It also makes studies more reliable, reducing the risk of human error and ensuring scientists use the latest methods. MR-Base contains over 11 billion associations between people’s genes and health-related outcomes. This will allow researchers to investigate many potential causes of poor health. As new statistical methods and new findings from genetic studies are added to MR-Base, its value to researchers will grow. <s> BIB005
Meta-analysis methods have been used in MR investigations throughout its short lifetime, initially as a tool for aiding collaborative analysis of individual-level data across epidemiological studies and latterly for synthesizing GWAS results within two-sample summary data MR. Established techniques for detecting heterogeneity and bias in meta-analysis have successfully been applied to MR both to test and to adjust for violations of the IV assumptions. The flow of methodology is not one-way, however: MR-Egger regression has recently been proposed as a means to adjust the analysis of multicenter randomized trials for nonadherence BIB004 and to examine the mechanism of action for statins. 32 Median- and mode-based estimation have also been suggested as sensitivity analysis tools for meta-analyses of RCTs with suspected small study effects.

Our description of the summary data MR approaches in this paper assumes that the SNPs used in the analysis are sufficiently separated in the genome so as to be mutually uncorrelated. This justifies the use of standard weighted least squares to estimate the parameters in IVW and MR-Egger regression, and also underlies the simple empirical density functions used by the weighted median and MBE. Both IVW and MR-Egger regression can easily be extended to the case of correlated variants. In that case, the model parameters must be estimated using generalized least squares by specifying a correlation matrix for the set of SNPs. BIB002 Extensions of the weighted median and MBE to correlated variants have yet to be explored and are an interesting avenue for further research.

Summary data MR analysis relies on obtaining SNP-trait associations from a GWAS, which is itself usually a conglomeration of data from many studies. Meta-analysis is therefore required to derive the necessary estimates. Fixed effect models are typically used for this purpose; for example, the most widely used software package, METAL BIB001 , does not have a random effects option. If heterogeneous results are obtained for a single SNP across studies, whole studies are sometimes removed so that the fixed effect analysis remains tenable. State-of-the-art methods for random effects meta-analysis 37,38 might have considerable utility in improving the summary information flowing from a GWAS, which would then affect subsequent summary data MR analyses. This is another area for future research.

The uptake and implementation of two-sample summary data MR is being facilitated by software packages in R BIB003 BIB005 and Stata that implement all of the analysis methods highlighted in this paper, and more. In particular, MR-Base (http://www.mrbase.org/) BIB005 is an analytical web-based platform linking genetic and trait summary data from over 1000 studies and 14 million samples with state-of-the-art tools for MR analysis. This has enabled causal relationships to be assessed with ease on an unprecedented breadth and scale. In time, it may be necessary for analysis and reporting guidelines, which have worked successfully for meta-analyses of clinical trials, to be agreed on to help ensure that MR analyses remain a principled and reliable means to probe causal questions in the new era of "big data."
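To make the weighted least squares estimation described above concrete, the following Python sketch computes the IVW and MR-Egger estimates from summary data on uncorrelated variants. It is a minimal illustration under stated assumptions: the function and variable names are ours, a real analysis would use the MendelianRandomization R package BIB003 or MR-Base BIB005 , and for correlated variants the diagonal weights would be replaced by a generalized least squares fit with a full SNP correlation matrix BIB002 .

```python
import numpy as np

def ivw_estimate(beta_x, beta_y, se_y):
    """Inverse-variance weighted estimate: weighted regression of the
    SNP-outcome associations (beta_y) on the SNP-exposure associations
    (beta_x) through the origin, with weights 1/se_y^2.
    Assumes mutually uncorrelated SNPs and numpy array inputs."""
    w = 1.0 / se_y**2
    theta = np.sum(w * beta_x * beta_y) / np.sum(w * beta_x**2)
    se = np.sqrt(1.0 / np.sum(w * beta_x**2))  # fixed-effect standard error
    return theta, se

def mr_egger(beta_x, beta_y, se_y):
    """MR-Egger: the same weighted regression but with an intercept that
    absorbs directional pleiotropy. SNPs are oriented so beta_x >= 0."""
    sign = np.sign(beta_x)
    bx, by = beta_x * sign, beta_y * sign
    w = 1.0 / se_y**2
    X = np.column_stack([np.ones_like(bx), bx])
    W = np.diag(w)
    # weighted least squares: (X'WX)^{-1} X'W y
    intercept, slope = np.linalg.solve(X.T @ W @ X, X.T @ W @ by)
    return intercept, slope  # intercept far from 0 suggests pleiotropic bias
```

Heterogeneity among the per-SNP ratio estimates can then be probed with the standard meta-analytic Q statistic, mirroring the heterogeneity tests that, as the review describes, MR has imported from meta-analysis.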
Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> Phishing is a significant problem involving fraudulent email and web sites that trick unsuspecting users into revealing private information. In this paper, we present the design, implementation, and evaluation of CANTINA, a novel, content-based approach to detecting phishing web sites, based on the TF-IDF information retrieval algorithm. We also discuss the design and evaluation of several heuristics we developed to reduce false positives. Our experiments show that CANTINA is good at detecting phishing sites, correctly labeling approximately 95% of phishing sites. <s> BIB001 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> Cyber-attacks have greatly increased over the years, and the attackers have progressively improved in devising attacks towards specific targets. To aid in identifying and defending against cyber-attacks we propose a cyber attack taxonomy called AVOIDIT (Attack Vector, Operational Impact, Defense, Information Impact, and Target). We use five major classifiers to characterize the nature of an attack: classification by attack vector, classification by operational impact, classification by defense, classification by informational impact, and classification by attack target. Classification by defense is oriented towards providing information to the network administrator regarding attack mitigation or remediation strategies. Contrary to the existing taxonomies, AVOIDIT efficiently classifies blended attacks. We further propose an efficient cause, action, defense, analysis, and target (CADAT) process used to facilitate attack classification. AVOIDIT and CADAT are used by an issue resolution system (IRS) to educate the defender on possible cyber-attacks and the development of potential security policies. We validate the proposed AVOIDIT taxonomy using cyber-attacks scenarios and highlight future work intended to simulate AVOIDIT's use within the IRS. <s> BIB002 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> Phishing attacks continue to plague users as attackers develop new ways to fool users into submitting personal information to fraudulent sites. Many schemes claim to protect against phishing sites. Unfortunately, most do not protect against zero-day phishing sites. Those schemes that do allege to provide zero-day protection, often incorrectly label both phishing and legitimate sites. We propose a scheme that protects against zero-day phishing attacks with high accuracy. Our approach captures an image of a page, uses optical character recognition to convert the image to text, then leverages the Google PageRank algorithm to help render a decision on the validity of the site. After testing our tool on 100 legitimate sites and 100 phishing sites, we accurately reported 100% of legitimate sites and 98% of phishing sites. <s> BIB003 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> Security in Health Information Systems (HIS) is a central concern of researchers, academicians, and practitioners. Increased numbers of data security breaches have caused concern over the humans' role as the different users in security of HIS. 
Since many human errors or failures in all information systems (IS) can be prevented with education and training, this study tries to investigate the effects of education and training on significant human factors in HIS security. This paper also proceeds in describing how the data was collected. Secondary data resources are used in highlighting the security culture and security awareness of users as the significant factors for the implementation of security effectiveness in the healthcare domain. The results from this research will also provide some guidance and insights to both researchers and professionals of information security in the health care domain. <s> BIB004 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> Abstract Website phishing is considered one of the crucial security challenges for the online community due to the massive numbers of online transactions performed on a daily basis. Website phishing can be described as mimicking a trusted website to obtain sensitive information from online users such as usernames and passwords. Black lists, white lists and the utilisation of search methods are examples of solutions to minimise the risk of this problem. One intelligent approach based on data mining called Associative Classification (AC) seems a potential solution that may effectively detect phishing websites with high accuracy. According to experimental studies, AC often extracts classifiers containing simple “If-Then” rules with a high degree of predictive accuracy. In this paper, we investigate the problem of website phishing using a developed AC method called Multi-label Classifier based Associative Classification (MCAC) to seek its applicability to the phishing problem. We also want to identify features that distinguish phishing websites from legitimate ones. In addition, we survey intelligent approaches used to handle the phishing problem. Experimental results using real data collected from different sources show that AC particularly MCAC detects phishing websites with higher accuracy than other intelligent algorithms. Further, MCAC generates new hidden knowledge (rules) that other algorithms are unable to find and this has improved its classifiers predictive performance. <s> BIB005 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> Drive-by Download(DbD) attack is one of malware infection schemes that pose a major threat to users on the Internet. The attack tends to go unnoticed by users, because, upon infection, there is almost no visible change to the screen or the computer. Moreover, infections can occur merely as a result of a user visiting a web page. The conventional approach to DbD attacks is to use anti-virus(AV) software to detect malware. However, this approach is limited, because AV software does not always correctly detect emerging malware. Therefore, we designed a network-communication visualization system to assist in the detection of DbD attacks. We expect that the proposed visualization system will successfully give an awareness to users of suspicious software downloads. <s> BIB006 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> INTRODUCTION <s> There continue to be numerous breaches publicised pertaining to cybersecurity despite security practices being applied within industry for many years. 
This paper is intended to be the first in a number of papers as research into cybersecurity assurance processes. This paper is compiled based on current research related to cybersecurity assurance and the impact of the human element on it. The objective of this work is to identify elements of cybersecurity that would benefit from further research and development based on the literature review findings. The results outlined in this paper present a need for the cybersecurity field to look in to established industry areas to benefit from effective practices such as human reliability assessment, along with improved methods of validation such as statistical quality control in order to obtain true assurance. The paper proposes the development of a framework that will be based upon defined and repeatable quantification, specifically relating to the range of human aspect tasks that provide or are intended not to negatively affect cybersecurity assurance. Copyright © 2016 John Wiley & Sons, Ltd. <s> BIB007
Cyber-attacks have become more frequent and costly to individual users, businesses, economies and other critical infrastructure components. Symantec discovered more than 430 million new unique pieces of malware in 2015 [1] ; 91% of attacks start with phishing techniques, and numerous high-profile breaches have originated from a single phishing attack. New ransomware evolves daily in its approaches to propagation and encryption, the victims it seeks, and its means of distribution, including Internet chat, peer-to-peer networks, newsgroup postings, and email spam. Traditional security tools such as anti-virus measures are unable to prevent all cyber-attacks, particularly unknown ones.

Developing users' ability to detect, prevent and defend against cyber-attacks is an important factor, because humans are considered the weakest link in the current interconnected world and most security breaches are due to human performance BIB007 . Recent research reported that 93% of breaches were due to human error, while 95% of data loss was due to cultural factors BIB004 . It is evident that critical security incidents occur due to users' unintentional mistakes, errors, culture and knowledge, factors which current security schemes do not consider properly. Enhancing existing cyber security schemes to create better user awareness, advice, and response to cybercrime will be required. Increasing users' implicit and explicit knowledge of current and upcoming attacks is also an important requirement.

Several protection tools have been proposed to improve safety behaviour and promote the confidence of users to become involved in online activities. Cyber security protection systems are mainly based on three techniques: blacklists, heuristics or a hybrid of the two (a minimal sketch contrasting these three techniques is given below). Blacklist-based techniques cannot cope with zero-day cyber-attacks or the rapid recycling of blocked attacking pages. Web browsers' filters [8] are an example of blacklists, which block phishing web pages by comparing URLs against known phishing sites stored locally on the user's machine or in a remote database. Meanwhile, heuristics-based techniques rely on decision rules, which are difficult to apply in a way that seems consistent to human perception. CANTINA BIB001 is an example of a heuristics-based technique, which blocks phishing web pages based on features extracted from them. Hybrid techniques use both blacklists and heuristics to cope with cyber-attacks. GoldPhish BIB003 is an example of a hybrid phishing detection technique, which utilises optical character recognition (OCR) to detect phishing webpages: text is extracted from images found on web pages, such as the company logo, and the Google PageRank algorithm is then leveraged to help render a decision on the validity of these webpages. However, hybrid techniques can suffer from the drawbacks of both blacklist- and heuristics-based techniques.

Effective protection by any security system requires standards for disseminating cyber vulnerability information, to allow analysis of the multiple cyber vulnerabilities users face BIB002 . It is important to provide users with a holistic picture of cyberspace, including information about types of attackers and possible attacks, motives and drivers, and the targets and consequences of cybercrimes.
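As the minimal illustration promised above, the sketch below combines an exact blacklist lookup with simple URL heuristics in a hybrid check, written in Python. The blacklist entry and the decision rules are invented for illustration; real systems such as browser filters [8] or CANTINA BIB001 rely on far larger databases and much richer page features.

```python
from urllib.parse import urlparse

# Hypothetical local blacklist; real filters consult a locally synchronized
# or remote database of known phishing URLs.
BLACKLIST = {"http://paypa1-secure.example.com/login"}

def looks_suspicious(url: str) -> bool:
    """Toy heuristics: hand-written decision rules over the URL itself.
    Such rules can catch zero-day pages a blacklist misses, but also
    misfire on legitimate pages -- the false-positive problem noted above."""
    host = urlparse(url).netloc
    return (
        "@" in url                    # userinfo trick hides the real host
        or host.count(".") > 3        # unusually deep subdomain nesting
        or any(ch.isdigit() for ch in host.split(".")[0])  # digit-swapped brand
    )

def classify(url: str) -> str:
    """Hybrid check: exact blacklist match first, heuristics as fallback."""
    if url in BLACKLIST:
        return "phishing (blacklisted)"
    if looks_suspicious(url):
        return "suspicious (heuristic)"
    return "unflagged"  # not proof of legitimacy, e.g. a zero-day phish

print(classify("http://paypa1-secure.example.com/login"))  # phishing (blacklisted)
```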
Open and challenging problems remain for existing protection tools: leveraging best practices, defining terminology, and classifying and identifying the dimensions used by cyberspace standards to populate attack forms as well as training and education metadata. Cyber-attack taxonomy and classification can help users involved in a protection process not only to identify attacks, but also to identify measures to prevent, mitigate and remediate cyber vulnerabilities. The planning and exchange of cyber information via cyber security technical forums, social media or other sharing methods is also worth considering for existing protection tools. Cyber-attacks are expected to increase in number and sophistication in the future. Existing protection tools are therefore unable to intercept attacks such as drive-by downloads, which have become more sophisticated and well-organised BIB006 . Smart detection techniques that solve existing cyber problems are needed BIB005 . A combination of techniques grounded in human factors, together with the heuristics-based approach, can, provided standardized historical data is available, deliver an effective intelligence-based protection scheme to help users make good real-time decisions.

Our contributions in this paper include: (a) providing the state of the art in the cyber security field of study and its importance in everyday online activities; (b) presenting a useful cyber-attack taxonomy and classification to help users involved in a protection process to identify attacks and measures to prevent, mitigate and remediate cyber vulnerabilities, including information about types of attackers and possible attacks, motives and drivers, and the targets and consequences of cybercrimes; and (c) unlike previous research, evaluating existing protection systems which target cyber threats and risks against our three criteria for an effective anti-cybercrime system: resilience to cyber-attack countermeasures; real-time support and needs-based action; and training and education materials to increase users' awareness of cybercrimes. This evaluation can help researchers in the cyber security field to propose useful and effective protection schemes against current and upcoming attacks.

The remainder of this paper is organised as follows. Section II gives an overview of the various types of cyber-attacks. Section III provides a comprehensive review of existing protection tools. Section IV presents a recommendation for cyber security researchers to build a smart protection tool. Conclusions are provided in Section V.
Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> II. VARIOUS TYPES OF CYBER ATTACKS <s> Cyber incidents are growing in intensity and severity. Several industry groups are therefore taking steps to better coordinate and improve information security across sectors. Also, various different types of public-private partnerships are developing, where cyber incident information is shared across institutions. This cooperation may improve the understanding of various types of cyber incidents, their severity, and impact on various types of targets. Research has shown that different types of attackers may be distinguished in terms of sophistication, skill level, attacking style, and objective of attack. It may further be proposed that different sectors experience different types of attacks. Attack characteristics and information about the modus operandi of criminal offenders have been used to learn more about the attacker and the motive of an attack. This information may also be used to distinguish between cyber attacks towards different types of targets. The current study focuses on reported cyber intrusions by the commercial and government sectors. The reported data come from CERT^(R)Coordination Center (CERT/CC), which has categorized the aspects of cyber intrusions in the current study. The aspects analyzed are: 'Method of Operation (MO)' which refers to the methods used by perpetrator to carry out an attack; 'Impact' which refers to the effect of the attack; 'Source' which refers to the source of the attack, and 'Target' which refers to the victim of the attack. The current study uses 839 cases of cyber attacks towards the commercial sector and 558 cases towards the government sector. The 23 variables from the four different cyber intrusion aspects; MO, impact, source sector and target sector, were analyzed using multidimensional scaling (MDS), which is a technique that has often been used when profiling traditional types of crimes. The analysis gave a Guttman-Lingoes' coefficient of alienation of 0.19 with 42 iterations in a 3-dimensional solution. It was shown that the commercial and government sectors experience different types of attacks, with different types of impact, stemming from different sources. The findings and implications are discussed in relation to the benefits of standardization, reporting, and sharing of cyber incident information. <s> BIB001 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> II. VARIOUS TYPES OF CYBER ATTACKS <s> Cyber-attacks have greatly increased over the years, and the attackers have progressively improved in devising attacks towards specific targets. To aid in identifying and defending against cyber-attacks we propose a cyber attack taxonomy called AVOIDIT (Attack Vector, Operational Impact, Defense, Information Impact, and Target). We use five major classifiers to characterize the nature of an attack: classification by attack vector, classification by operational impact, classification by defense, classification by informational impact, and classification by attack target. Classification by defense is oriented towards providing information to the network administrator regarding attack mitigation or remediation strategies. Contrary to the existing taxonomies, AVOIDIT efficiently classifies blended attacks. We further propose an efficient cause, action, defense, analysis, and target (CADAT) process used to facilitate attack classification. 
AVOIDIT and CADAT are used by an issue resolution system (IRS) to educate the defender on possible cyber-attacks and the development of potential security policies. We validate the proposed AVOIDIT taxonomy using cyber-attacks scenarios and highlight future work intended to simulate AVOIDIT's use within the IRS. <s> BIB002 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> II. VARIOUS TYPES OF CYBER ATTACKS <s> Summary Cyber attack is one of the most rapidly growing threats to the world of cutting edge information technology. As new tools and techniques are emerging everyday to make information accessible over the Internet, so is their vulnerabilities. Cyber defense is inevitable in order to ensure reliable and secure communication and transmission of information. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are the major technologies dominating in the area of cyber defense. Tremendous efforts have already been put in intrusion detection research for decades but intrusion prevention research is still in its infancy. This paper provides a comprehensive review of the current research in both Intrusion Detection Systems and recently emerged Intrusion Prevention Systems. Limitations of current research works in both fields are also discussed in conclusion. <s> BIB003 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> II. VARIOUS TYPES OF CYBER ATTACKS <s> Drive-by download attack is one of the most severe threats to Internet users. Typically, only visiting a malicious page will result in compromise of the client and infection of malware. By the end of 2008, drive-by download had already become the number one infection vector of malware [5]. The downloaded malware may steal the users' personal identification and password. They may also join botnet to send spams, host phishing site or launch distributed denial of service attacks. Generally, these attacks rely on successful exploits of the vulnerabilities in web browsers or their plug-ins. Therefore, we proposed an inter-module communication monitoring based technique to detect malicious exploitation of vulnerable components thus preventing the vulnerability being exploited. We have implemented a prototype system that was integrated into the most popular web browser Microsoft Internet Explorer. Experimental results demonstrate that, on our test set, by using vulnerability-based signature, our system could accurately detect all attacks targeting at vulnerabilities in our definitions and produced no false positive. The evaluation also shows the performance penalty is kept low. <s> BIB004 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> II. VARIOUS TYPES OF CYBER ATTACKS <s> The role of computers and the Internet in modern society is well recognized. Recent developments in the fields of networking and cyberspace have greatly benefited mankind, but the rapid growth of cyberspace has also contributed to unethical practices by individuals who are bent on using the technology to exploit others. Such exploitation of cyberspace for the purpose of accessing unauthorized or secure information, spying, disabling of networks and stealing both data and money is termed as cyber attack. Such attacks have been increasing in number and complexity over the past few years. There has been a dearth of knowledge about these attacks which has rendered many individuals/agencies/organizations vulnerable to these attacks. 
Hence there is a need to have comprehensive understanding of cyber attacks and its classification. The purpose of this survey is to do a comprehensive study of these attacks in order to create awareness about the various types of attacks and their mode of action so that appropriate defense measures can be initiated against such attacks. <s> BIB005
Cyber-attacks are among the most rapidly growing threats to the interconnected world of information technology BIB003 . Many are computer-based attacks which exploit human vulnerabilities rather than software vulnerabilities. A phishing email or phishing webpage is a type of cyber-attack in which victims are sent emails, or shown fake webpages with links, which deceive them into providing sensitive information such as account numbers, passwords or other personal information to an attacker. Collecting information about victims can make phishing more convincing and allow it to masquerade as a reputable business where victims might have an account. Victims are directed to a spoofed web site controlled by an attacker, where they enter sensitive information such as credit card numbers. A drive-by download attack distributes malware without the victim's knowledge by exploiting vulnerabilities in web browsers, plug-ins or other components that run within browsers. The downloaded malware may exploit the victim's actions or automatically conduct malicious actions such as stealing the user's personal identification or passwords, joining a botnet to send spam, hosting a phishing site or launching distributed denial-of-service attacks BIB004 . Social engineering is also a kind of attack which exploits human behaviour to act on malicious intentions, especially on social networking sites. In addition, more cyber-attacks that exploit humans are occurring now than previously recorded, and these are more challenging to classify and control. Protecting the confidentiality, integrity and availability of information is a significant global challenge for information security.

Cyber-attack taxonomy and classification can help users involved in a protection process not only to identify attacks, but also to identify measures to prevent, mitigate and remediate cyber vulnerabilities. Several researchers have contributed to the knowledge of cyber-attack taxonomy and classification in order to help users become aware of the cyber threats and risks associated with online activities BIB002 , BIB001 - BIB005 . In this section, we provide a holistic view of some known cyber-attacks in computer security, including information about types of attackers and possible attacks, motives and drivers, and the targets and consequences of cybercrimes. The holistic view taken is provided in Fig. 1.
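To show how such a taxonomy can be made operational in software, below is a small, hypothetical Python data structure loosely following the AVOIDIT classifiers BIB002 (attack vector, operational impact, informational impact, defense and target) together with the attacker, motive and consequence dimensions summarized in Fig. 1. The field names and example values are ours, not an official vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class AttackRecord:
    """One classified attack, following AVOIDIT-style dimensions."""
    attack_vector: str         # e.g. "phishing email", "drive-by download"
    attacker_type: str         # e.g. "cybercriminal", "hacktivist"
    motive: str                # e.g. "financial gain", "espionage"
    target: str                # e.g. "end-user credentials", "web server"
    operational_impact: str    # e.g. "credential theft", "denial of service"
    informational_impact: str  # e.g. "disclosure of personal data"
    defenses: list = field(default_factory=list)  # mitigation/remediation steps

phish = AttackRecord(
    attack_vector="phishing email",
    attacker_type="cybercriminal",
    motive="financial gain",
    target="online banking users",
    operational_impact="credential theft",
    informational_impact="disclosure of account numbers and passwords",
    defenses=["blacklist filter", "user awareness training"],
)
```

Records of this shape suggest one way a protection tool could populate attack forms and exchange standardized cyber information with other tools.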
Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Web spoofing is a significant problem involving fraudulent email and web sites that trick unsuspecting users into revealing private information. We discuss some aspects of common attacks and propose a framework for client-side defense: a browser plug-in that examines web pages and warns the user when requests for data may be part of a spoof attack. While the plugin, SpoofGuard, has been tested using actual sites obtained through government agencies concerned about the problem, we expect that web spoofing and other forms of identity theft will be continuing problems in <s> BIB001 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Phishing is a significant problem involving fraudulent email and web sites that trick unsuspecting users into revealing private information. In this paper, we present the design, implementation, and evaluation of CANTINA, a novel, content-based approach to detecting phishing web sites, based on the TF-IDF information retrieval algorithm. We also discuss the design and evaluation of several heuristics we developed to reduce false positives. Our experiments show that CANTINA is good at detecting phishing sites, correctly labeling approximately 95% of phishing sites. <s> BIB002 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Phishing has been easy and effective way for trickery and deception on the Internet. While solutions such as URL blacklisting have been effective to some degree, their reliance on exact match with the blacklisted entries makes it easy for attackers to evade. We start with the observation that attackers often employ simple modifications (e.g., changing top level domain) to URLs. Our system, PhishNet, exploits this observation using two components. In the first component, we propose five heuristics to enumerate simple combinations of known phishing sites to discover new phishing URLs. The second component consists of an approximate matching algorithm that dissects a URL into multiple components that are matched individually against entries in the blacklist. In our evaluation with real-time blacklist feeds, we discovered around 18,000 new phishing URLs from a set of 6,000 new blacklist entries. We also show that our approximate matching algorithm leads to very few false positives (3%) and negatives (5%). <s> BIB003 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Detecting and identifying any phishing websites in real-time, particularly for e-banking, is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, fuzzy data mining techniques can be an effective tool in assessing and identifying phishing websites for e-banking since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the 'fuzziness' in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. 
The proposed model is based on fuzzy logic combined with data mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying the phishing types and defining six e-banking phishing website attack criteria's with a layer structure. Our experimental results showed the significance and importance of the e-banking phishing website criteria (URL & Domain Identity) represented by layer one and the various influence of the phishing characteristic on the final e-banking phishing website rate. <s> BIB004 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Spam delivery is common in the Internet. Most modern spam-filtering solutions are deployed on the receiver side. They are good at filtering spam for end users, but spam messages still keep wasting Internet bandwidth and the storage space of mail servers. This work is therefore intended to detect and nip spamming bots in the bud. We use the Bro intrusion detection system to monitor the SMTP sessions in a university campus, and track the number and the uniqueness of the recipients' email addresses in the outgoing mail messages from each individual internal host as the features for detecting spamming bots. Due to the huge number of email addresses observed in the SMTP sessions, we store and manage them efficiently in the Bloom filters. According to the SMTP logs over a period of six months from November 2011 to April 2012, we found totally 65 dedicated spamming bots in the campus and observed 1.5 million outgoing spam messages from them.We also found account cracking events on 14 legitimate mail servers, on which some user accounts are cracked and abused for spamming. The method can effectively detect and curb the spamming bots with the precision and the recall up to 0.97 and 0.96. <s> BIB005 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Phishing is an instance of social engineering techniques used to deceive users into giving their sensitive information using an illegitimate website that looks and feels exactly like the target organization website. Most phishing detection approaches utilizes Uniform Resource Locator (URL) blacklists or phishing website features combined with machine learning techniques to combat phishing. Despite the existing approaches that utilize URL blacklists, they cannot generalize well with new phishing attacks due to human weakness in verifying blacklists, while the existing feature-based methods suffer high false positive rates and insufficient phishing features. As a result, this leads to an inadequacy in the online transactions. To solve this problem robustly, the proposed study introduces new inputs (Legitimate site rules, User-behavior profile, PhishTank, User-specific sites, Pop-Ups from emails) which were not considered previously in a single protection platform. The idea is to utilize a Neuro-Fuzzy Scheme with 5 inputs to detect phishing sites with high accuracy in real-time. In this study, 2-Fold cross-validation is applied for training and testing the proposed model. A total of 288 features with 5 inputs were used and has so far achieved the best performance as compared to all previously reported results in the field. <s> BIB006 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. 
Protection Systems Techniques <s> Phishing is a widespread practice and a lucrative business. It is invasive and hard to stop: a company needs to worry about all emails that all employees receive, while an attacker only needs to have a response from a key person, e.g., a finance or human resources responsible, to cause a lot of damages. Some research has looked into what elements make phishing so successful. Many of these elements recall strategies that have been studied as principles of persuasion, scams and social engineering. This paper identifies, from the literature, the elements which reflect the effectiveness of phishing, and manually quantifies them within a phishing email sample. Most elements recognised as more effective in phishing commonly use persuasion principles such as authority and distraction. This insight could lead to better automate the identification of phishing emails and devise more appropriate countermeasures against them. <s> BIB007 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> URL blacklists are used by the majority of modern web browsers as a means to protect users from rogue web sites, i.e. those serving malware and/or hosting phishing scams. There is a plethora of URL blacklists/reputation services, out of which Google's Safe Browsing and Microsoft's SmartScreen stand out as the two most commonly used ones. Frequently, such lists are the only safeguard web browsers implement against such threats. In this paper, we examine the level of protection that is offered by popular web browsers on iOS, Android and desktop (Windows) platforms, against a large set of phishing and malicious URL. The results reveal that most browsers - especially those for mobile devices - offer limited protection against such threats. As a result, we propose and evaluate a countermeasure, which can be used to significantly improve the level of protection offered to the users, regardless of the web browser or platform they are using. <s> BIB008 </s> Cyber attacks, countermeasures, and protection schemes — A state of the art survey <s> Fig. 2. Protection Systems Techniques <s> Google and Yandex Safe Browsing are popular services included in many web browsers to prevent users from visiting phishing or malware websites. If these services protect their users from losing private information, they also require that their servers receive browsing information on the very same users. In this paper, we analyze Google and Yandex Safe Browsing services from a privacy perspective. We quantify the privacy provided by these services by analyzing the possibility of re-identifying URLs visited by a client. We thereby challenge Google's privacy policy which claims that Google cannot recover URLs visited by its users. Our analysis and experimental results show that Google and Yandex Safe Browsing can potentially be used as a tool to track specific classes of individuals. Additionally, our investigations on the data currently included in Google and Yandex Safe Browsing provides a concrete set of URLs/domains that can be re-identified without much effort. <s> BIB009
Big-brand Internet security products such as McAfee LiveSafe attract a large number of users because of their features. These tools warn users of suspicious web pages and, as part of an anti-theft regime, can take photographs and wipe data remotely. They also offer encrypted password management via SafeKey and Personal Locker. Such capabilities are useful for users who have knowledge about online threats, costs and countermeasures, and who can in turn respond to security warnings. Many users, however, ignore security warnings (e.g. when using PayPal for an online purchase, or even when posting or sharing a link on online social networks) because they have no knowledge or training regarding potential threats, and the warning message may be difficult for them to understand. It is therefore important to educate users and to improve their awareness of threats, risks, and what security warnings are about. Social-psychological research on cyber security has identified ineffective cognitive processing as a key reason why users are victimized BIB007 . Security tools need to focus on training users to better understand their vulnerabilities and, in turn, to detect cyber activities.

Web browser filters such as PhishTank SiteChecker protect Internet users against phishing and malware using the blacklist technique, comparing the currently requested URL against a database of known fake web pages. The filters notify the user whether the URL is legitimate or fraudulent by sending a request to a remote database. Blacklists commonly depend on human intervention to verify suspicious URLs before adding them, which may give cyber attackers a window in which to reach their goals. Web browser filter protection tools fail to satisfy our main criterion of resilience to cyber-attack because phishers respond by quickly recycling phishing pages onto new domains. The short lifetime of fraudulent web pages, with only days between launch and takedown, makes it difficult for blacklist filters to detect them. Further, maintaining a huge blacklist of fraudulent URLs and updating it frequently is a challenging task.

The Google Safe Browsing system for anti-phishing is based on certain URL and client-side checks. When Safe Browsing is enabled, the most recent Safe Browsing list (containing unsafe sites) is periodically downloaded and stored locally on the user's system. However, Google Safe Browsing has been widely criticised for being privacy-unfriendly by design. Google stores another cookie on the user's computer, which can be used to link the user's visits and IP addresses: i.e. it can be used to track him or her. According to the Google Chrome Privacy Whitepaper, Google logs the transferred data in its raw form for up to two weeks. It collects standard log information in connection with Safe Browsing requests, including an IP address and one or more cookies, and these logs are tied to other Safe Browsing requests made from the same device. Existing safe browsing tools, and especially web browsers, are therefore exposed to several privacy threats BIB009 . Another design weakness is that Google Safe Browsing, for example, does not block any phishing URL when the synchronization step is skipped BIB008 . Considering that some users may not frequently synchronize their devices, this may result in an outdated blacklist.
Thus, in the meantime, any phishing site that has been created, even if it has been reported to the Safe Browsing list, represents a risk to users who rely on such a tool to fight cyber threats BIB008 . Also, the time taken by Google's API is large (approximately 80 ms, excluding the time taken by an end user to download the Google blacklists locally), and there is no limit on the response time of the lookup server BIB003 .

The Phishing Initiative protection tool is a European project that protects companies and administrations against cybercrime, with the aim of helping them fight fraudulent web pages that steal identities. The Phishing Initiative uses the same technique as Google, blacklisting suspicious web pages based on a remote database of known fraudulent web pages, and in turn inherits its weaknesses. It allows an organisation's users to submit suspicious phishing URLs, which are then sent to CERT-LEXSI's expert teams for analysis. The final step includes, where necessary, undertaking relevant countermeasures to add confirmed fraudulent web pages to blacklists.

SpoofGuard BIB001 , the CANTINA toolbar for Internet Explorer BIB002 and Mozilla Thunderbird are protection tools based on heuristic techniques that block fraudulent web pages based on features extracted from them. The heuristics used in these tools are machine learning models presented as black boxes, with no clear explanation of how a web page comes to be classified as a phish. The main drawback of such tools is their use of decision rules that are not consistent with human perception. They therefore incorrectly identify many legitimate web pages as fraudulent, blocking them and showing them as phishing pages. High false-positive rates can reduce users' confidence in protection tools and cause security warnings to be disregarded. The use of simple heuristics without study of human behaviour is insufficient to fulfil the online-based needs of users with regard to trust, identity, privacy and security. Design weaknesses also exist in mail clients' spam filtering tools, such as Mozilla Thunderbird, which receive mail before filtering it. Spamming activity therefore persists, along with the waste of Internet bandwidth and mail server storage caused by spam messages. Spamming bots could instead be detected and addressed on the sender side as early as possible, significantly reducing the number of spam messages BIB005 .

Another heuristics-based approach, proposed in BIB004 , depends on experimentally contrasting rule-based classification algorithms using fuzzy data mining techniques to assess and identify phishing websites after collecting diverse features from a range of websites. Fuzzy data mining techniques can offer a more natural way of dealing with quality factors than exact values. A number of features are assessed to take one of three uncertain values: "Genuine", "Doubtful" and "Fraud". The fuzzy data mining phishing website model shows a significant and important association of the "URL" and "Domain Identity" phishing website criteria. However, no justification is given of the way in which the features have been assessed. The authors use a large set of features to predict whether websites are legitimate or not, and their methods show promising accuracy. However, the method suffers from design ambiguity: the way in which human-factors-based features are extracted from the websites is not revealed.
Besides this, the rules used were established from human experience rather than by intelligent data mining techniques. Lastly, the authors classify websites as very legitimate, legitimate, suspicious, phishy or very phishy, but do not clarify the fine line that separates one class from another (a toy sketch at the end of this section makes this thresholding step explicit).

A Neuro-Fuzzy model based on advanced techniques is developed in BIB006 to identify and extract phishing features from five inputs: namely, Legitimate site rules, User-behaviour profile, PhishTank, User-specific sites and Pop-Ups from Emails. From these inputs, 288 features are extracted and used as training and testing data for the Neuro-Fuzzy system to generate heuristics and to discriminate between phishing, suspicious and legitimate sites in real time. The method aims to make users more secure and to build their confidence in online transactions. The authors provide a comparative study to demonstrate the merits of the proposed approach in maximizing accuracy and minimizing false positives and operation time. Further, the authors claim that the use of a large number of features helps differentiate between phishing, suspicious and legitimate sites more accurately. Two main challenges associated with the Neuro-Fuzzy Inference System are indicated by the authors themselves: it is complex and gives only a single output, obtained using weighted-average defuzzification, and all output membership functions must be of the same type, either linear or constant. However, for the two methods discussed above BIB004 [36], it is unclear how effective such approaches are in mitigating phishing attacks in real life. Training and education are also missing.

Anti-Phishing Phil , CyberCIEGE and BigAmbition [39] are protection tools proposed in industry and academia to educate users and improve their awareness of cybercrime, and in turn to change their behaviours and reduce risk. These systems aim to increase awareness of cyberspace, including attacks and defences, in a virtual environment, with the objective of reproducing real-life experience. In this context, these tools cannot satisfy the online-based-needs criterion, under which users should experience real life through an innovative tool that monitors users' behaviours and actions, identifies users' real-time needs, and feeds these into the gamified education system. When users are able to sense a real threat, they will have strong motivation to educate themselves and to engage effectively with the gamified education system.

The Phishing Education Landing Page Program (PELPP) is a protection tool developed by the Anti-Phishing Working Group (APWG) to train users in how to avoid being victimised by phishing attacks. The protection tool works when users click on a phishing link: the users are redirected to a landing page which provides training material on how they can avoid being victimised in future, as a way of alerting them to the threat. This protection tool satisfies the criteria of resilience to cyber-attack countermeasures and of real-time support, providing education and training at the most teachable moment, when users encounter a phishing attack. However, this tool cannot satisfy the needs-based action criterion of automatically customising users' training, security needs and preferences by employing intelligent capabilities.
This tool also suffers from the involvement of a third party (e.g., an ISP), which PELPP requires in order to redirect any suspicious URL to the anti-phishing training webpage.
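To make the thresholding issue raised for the fuzzy classifiers concrete, here is the toy Python sketch referred to above: a graded website assessment. The features, weights and cut-offs are invented for illustration; choosing and justifying exactly these values is the step that the surveyed methods BIB004 BIB006 leave under-specified.

```python
def fuzzy_rating(features):
    """Toy graded assessment. Each feature value is a membership degree in
    [0, 1] (0 = genuine-looking, 1 = fraud-looking); the output collapses a
    weighted score into a class label via hard, hand-picked thresholds."""
    weights = {"url_identity": 0.4, "domain_age": 0.3, "page_content": 0.3}
    score = sum(weights[k] * features.get(k, 0.0) for k in weights)
    if score < 0.25:
        return "legitimate"
    if score < 0.55:
        return "suspicious"
    return "phishy"

# A digit-swapped domain with a young registration but plausible content:
print(fuzzy_rating({"url_identity": 0.9, "domain_age": 0.6, "page_content": 0.0}))
# -> "suspicious" (score 0.54, just under the 0.55 cut-off); nudging any
# weight or threshold flips the class, which is precisely the unclarified
# "fine line" between classes criticised above.
```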
An overview of verification and validation of simulation models <s> INTRODUCTION <s> This paper discusses the quantitative as well as qualitative tests which can be run in trying to convince the user of a simulation model that the results are valid <s> BIB001 </s> An overview of verification and validation of simulation models <s> INTRODUCTION <s> In this paper we discuss verification and validation of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate verification and validation to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and model accreditation is briefly discussed. <s> BIB002
Simulation models are often used to aid in decision making and problem solving. The users of these models are rightly concerned with whether the models and the information derived from them can be used with confidence. Model developers address this concern through model verification and validation. Model validation is usually defined to mean "substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model" and is the definition used here. Model verification is frequently defined as ensuring that the computer program of the computerized model (i.e., the simulator) and its implementation are correct, and this will be the definition used here. A related topic is model credibility or acceptability, which is developing in the (potential) users of information from the models (e.g., decision makers) sufficient confidence in the information that they are willing to use it. A model should be developed for a specific purpose or use, and its validity determined with respect to that purpose. Several sets of experimental conditions are usually required to define the domain of the model's intended application. A model may be valid for one set of experimental conditions and invalid in another. A model is considered valid for a set of experimental conditions if its accuracy is within the acceptable range of accuracy, which is the amount of accuracy required for the model's intended purpose. The substantiation that a model is valid, i.e., model validation, is part of the total model development process and is itself a process. The validation process consists of performing tests and evaluations within the model development process to determine whether a model is valid or not. It is usually too costly and time consuming to determine that a model is absolutely valid over the complete domain of its intended application. Instead, tests and evaluations are conducted until sufficient confidence is obtained that a model can be considered valid for its intended application [BIB002 and BIB001]. The relationships of the cost of performing model validation and of the value of the model to the user, as functions of model confidence, are illustrated in Figure 1. Recent research [Gass and Thompson (1980), and BIB002] has related model validation and verification to specific steps of the model development process. We will follow the development of BIB002 and use Figure 2. The problem entity is the system (real or proposed), idea, situation, policy, or phenomena to be modeled; the conceptual model is the mathematical/logical/verbal representation (mimic) of the problem entity developed for a particular study; and the computerized model is the conceptual model implemented on a computer. The conceptual model is developed through an analysis and modelling phase, the computerized model is developed through a computer programming and implementation phase, and inferences about the problem entity are obtained by conducting computer experiments on the computerized model in the experimentation phase. *This paper is an updated version of "An Expository on Verification and Validation of Simulation Models," Proceedings of the 1985 Winter Simulation Conference, pp. 15-22. We relate validation and verification to this simplified version of the modelling process as shown in Figure 2.
Conceptual model validity is defined as determining that the theories and assumptions underlying the conceptual model are correct and that the model representation of the problem entity is "reasonable" for the intended use of the model. Computerized model verification is defined as ensuring that the computer programming and implementation of the conceptual model are correct. Operational validity is defined as determining that the model's output behavior has sufficient accuracy for its intended purpose or use over the domain of the model's intended application. Data validity is defined as ensuring that the data necessary for model building, model evaluation and testing, and conducting the model experiments to solve the problem are adequate and correct. Several models are usually developed in the modelling process prior to obtaining a satisfactorily valid model. During each model iteration, model validation and verification are performed. A variety of (validation) techniques are used, which are described below. Unfortunately, no algorithm or procedure exists to select which techniques to use. Some of their attributes are discussed in .
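For instance, one simple operational-validity test compares the model's output behavior with observations of the problem entity under the same experimental conditions. The sketch below is a hedged illustration of such a test; the tolerance, the sample data, and the choice of comparing means are assumptions, not a prescribed procedure.

```python
# A minimal sketch of one operational-validity check: compare the output behavior
# of the computerized model against observations of the problem entity under the
# same experimental conditions. Threshold and data are illustrative assumptions.
import statistics

def within_acceptable_range(system_obs, model_obs, tolerance=0.05):
    """Crude operational-validity check: is the relative difference between the
    mean system response and the mean model response within the accuracy
    required for the model's intended purpose?"""
    sys_mean = statistics.mean(system_obs)
    model_mean = statistics.mean(model_obs)
    relative_error = abs(model_mean - sys_mean) / abs(sys_mean)
    return relative_error <= tolerance

# Example: observed waiting times (real system) vs. simulated waiting times.
system_waits = [4.2, 3.9, 4.5, 4.1, 4.3]
model_waits = [4.0, 4.4, 4.2, 4.1, 4.5]
print(within_acceptable_range(system_waits, model_waits))  # True -> accept for this test
```

In practice such a comparison would be one of several tests and evaluations performed during each model iteration, since a single passing comparison does not establish validity over the whole domain of intended application.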
A Survey of Techniques for Approximate Computing <s> INTRODUCTION <s> Control and memory divergence between threads within the same execution bundle, or warp, have been shown to cause significant performance bottlenecks for GPU applications. In this paper, we exploit the observation that many GPU applications exhibit error tolerance to propose branch and data herding. Branch herding eliminates control divergence by forcing all threads in a warp to take the same control path. Data herding eliminates memory divergence by forcing each thread in a warp to load from the same memory block. To safely and efficiently support branch and data herding, we propose a static analysis and compiler framework to prevent exceptions when control and data errors are introduced, a profiling framework that aims to maximize performance while maintaining acceptable output quality, and hardware optimizations to improve the performance benefits of exploiting error tolerance through branch and data herding. Our software implementation of branch herding on NVIDIA GeForce GTX 480 improves performance by up to 34% (13%, on average) for a suite of NVIDIA CUDA SDK and Parboil benchmarks. Our hardware implementation of branch herding improves performance by up to 55% (30%, on average). Data herding improves performance by up to 32% (25%, on average). Observed output quality degradation is minimal for several applications that exhibit error tolerance, especially for visual computing applications. <s> BIB001 </s> A Survey of Techniques for Approximate Computing <s> INTRODUCTION <s> As semiconductor fabrics scale closer to fundamental physical limits, their reliability is decreasing due to process variation, noise margin effects, aging effects, and increased susceptibility to soft errors. Reliability can be regained through redundancy, error checking with recovery, voltage scaling and other means, but these techniques impose area/energy costs. Since some applications (e.g. media) can tolerate limited computation errors and still provide useful results, error-tolerant computation models have been explored, with both the application and computation fabric having stochastic characteristics. Stochastic computation has, however, largely focused on application-specific hardware solutions, and is not general enough to handle arbitrary bit errors that impact memory addressing or control in processors. In response, this paper addresses requirements for error-tolerant execution by proposing and evaluating techniques for running error-tolerant software on a general-purpose processor built from an unreliable fabric. We study the minimum error-protection required, from a microarchitecture perspective, to still produce useful results at the application output. Even with random errors as frequent as every 250μs, our proposed design allows JPEG and MP3 benchmarks to sustain good output quality---14dB and 7dB respectively. Overall, this work establishes the potential for error-tolerant single-threaded execution, and details its required hardware/system support. <s> BIB002 </s> A Survey of Techniques for Approximate Computing <s> INTRODUCTION <s> With growing use of internet and exponential growth in amount of data to be stored and processed (known as “big data”), the size of data centers has greatly increased. This, however, has resulted in significant increase in the power consumption of the data centers. For this reason, managing power consumption of data centers has become essential. 
In this paper, we highlight the need of achieving energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into the techniques for improving energy efficiency of data centers and encourage the designers to invent novel solutions for managing the large power dissipation of data centers. <s> BIB003
As large-scale applications such as scientific computing, social media, and financial analysis gain prominence, the computational and storage demands of modern systems have far exceeded the available resources. It is expected that, in the coming decade, the amount of information managed by worldwide data centers will grow 50-fold, while the number of processors will increase only tenfold. In fact, the electricity consumption of the US data centers alone is expected to increase from 61 billion kWh (kilowatt hours) in 2006 BIB003 and 91 billion kWh in 2013 to 140 billion kWh in 2020 [NRDC 2013]. It is clear that rising performance demands will soon outpace the growth in resource budgets; hence, overprovisioning of resources alone will not solve the conundrum that awaits the computing industry in the near future. A promising solution to this dilemma is approximate computing (AC), which is based on the intuitive observation that, while performing exact computation or maintaining peak-level service demand requires a high amount of resources, allowing selective approximation or occasional violation of the specification can provide disproportionate gains in efficiency. For example, for a k-means clustering algorithm, up to 50× energy saving can be achieved by allowing a classification accuracy loss of 5 percent. Similarly, a neural approximation approach can accelerate an inverse kinematics application by up to 26× compared to GPU execution, while incurring an error of less than 5 percent. The approximate computing and storage approach leverages the presence of error-tolerant code regions in applications and the perceptual limitations of users to intelligently trade off implementation, storage, and/or result accuracy for performance or energy gains. In brief, AC exploits the gap between the level of accuracy required by the applications/users and that provided by the computing system to achieve diverse optimizations. Thus, AC has the potential to benefit a wide range of applications/frameworks, for example, data analytics, scientific computing, multimedia and signal processing, machine learning, MapReduce, and so forth. However, although promising, AC is not a panacea. Effective use of AC requires judicious selection of approximable code/data portions and of the approximation strategy, since uniform approximation can produce unacceptable quality loss BIB001. Even worse, approximation in control flow or memory access operations can lead to catastrophic results such as segmentation faults BIB002. Further, careful monitoring of the output is required to ensure that quality specifications are met, since a large loss makes the output unacceptable or necessitates repeated execution with precise parameters. Clearly, leveraging the full potential of AC requires addressing several issues. Recently, several techniques have been proposed to fulfill this need. Contribution and article organization: In this article, we present a survey of techniques for approximate computing. Figure 1 shows the organization of this paper. We first discuss the opportunities and obstacles in the use of AC in Section 2. We then present in Section 3 techniques for finding approximable program portions and monitoring output quality, along with the language support for expressing approximable variables/operations. Table I summarizes the terminology used in AC research:
Table I. Terminology used in approximate computing research
(a) AC is synonymous with, or has significant overlap with, the ideas of: dynamic effort-scaling, quality programmability/configurability, and variable accuracy.
(b) Applications or their code portions that are amenable to AC are called: approximable, relaxable, soft slices/computations, best-effort computations, noncritical (vs. crucial), error-resilient/tolerant, error-acceptable, tunable, having a 'forgiving nature', and being 'optional' (vs. guaranteed/mandatory).
We review the strategies for actually approximating these data in Section 4. In Section 5, we discuss research works to show how these strategies are used in many ACTs employed for different memory technologies, system components, and processing units. In these sections, we organize the works into different categories to underscore their similarities and dissimilarities. Note that the works presented in these sections are deeply intertwined; while we study a work under a single group, several of these works belong to multiple groups. To show the spectrum of application of AC, we organize the works based on their workload or application domain in Section 6. Finally, Section 7 concludes this article with a discussion of future challenges. Scope of the article: The scope of AC encompasses a broad range of approaches. For a concise presentation, we limit the scope of this article in the following manner. We focus on works that use an approximation strategy to trade off result quality/accuracy, and not those that mainly focus on mitigation of hard/soft errors or other faults. We mainly focus on ACTs at the architecture, programming, and system level, and only briefly include the design of inexact circuits. We do not typically include theoretical studies of AC. We believe that this article will be useful for computer architects, application developers, system designers, and other researchers 1. As we show in Sections 2.2 and 2.3, the use of a suitable quality metric is extremely important to ensure correctness and to balance quality loss against efficiency gain. For this reason, Table II shows the commonly used metrics for evaluating the QoR of various applications/kernels (note that some applications may be internally composed of these kernels; also, these metrics are not mutually exclusive). For several applications, multiple metrics can be used for evaluating quality loss; for example, both clustering accuracy and mean centroid distance can be used as metrics for k-means clustering. In essence, all of these metrics seek to compare some form of output (depending on the application, e.g., pixel values, body position, classification decision, execution time) in the approximate computation with that in the exact computation. 1 We use the following acronyms throughout the article: bandwidth (BW), dynamic binary instrumentation (DBI), embedded DRAM (eDRAM), error-correcting code (ECC), finite impulse response (FIR), floating point (FP) unit (FPU), hardware (HW), instruction set architecture (ISA), multilayer perceptron (MLP), multilevel cell (MLC), neural network (NN), neural processing unit (NPU), nonvolatile memory (NVM), peak signal-to-noise ratio (PSNR), phase change memory (PCM), quality of result (QoR), resistive RAM (ReRAM), single instruction multiple data (SIMD), software (SW), solid state drive (SSD), spin transfer torque RAM (STT-RAM), structural similarity (SSIM).
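To make the quality-metric discussion concrete, the following sketch computes PSNR, one of the Table II metrics, between an exact output and an approximate one. The sample pixel values are illustrative assumptions; PSNR itself is a standard definition.

```python
# A small sketch of one commonly used QoR metric from Table II: PSNR between an
# exact output and its approximate counterpart (here, flat lists of 8-bit pixel
# values; the sample data are made up for illustration).
import math

def psnr(exact, approx, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the exact output."""
    mse = sum((e - a) ** 2 for e, a in zip(exact, approx)) / len(exact)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

exact = [120, 130, 125, 118]
approx = [121, 128, 125, 119]    # e.g., produced with reduced precision
print(f"{psnr(exact, approx):.1f} dB")  # > 30 dB is often considered acceptable
```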
A Survey of Techniques for Approximate Computing <s> Error-Resilience of Programs and Users. <s> Control and memory divergence between threads within the same execution bundle, or warp, have been shown to cause significant performance bottlenecks for GPU applications. In this paper, we exploit the observation that many GPU applications exhibit error tolerance to propose branch and data herding. Branch herding eliminates control divergence by forcing all threads in a warp to take the same control path. Data herding eliminates memory divergence by forcing each thread in a warp to load from the same memory block. To safely and efficiently support branch and data herding, we propose a static analysis and compiler framework to prevent exceptions when control and data errors are introduced, a profiling framework that aims to maximize performance while maintaining acceptable output quality, and hardware optimizations to improve the performance benefits of exploiting error tolerance through branch and data herding. Our software implementation of branch herding on NVIDIA GeForce GTX 480 improves performance by up to 34% (13%, on average) for a suite of NVIDIA CUDA SDK and Parboil benchmarks. Our hardware implementation of branch herding improves performance by up to 55% (30%, on average). Data herding improves performance by up to 32% (25%, on average). Observed output quality degradation is minimal for several applications that exhibit error tolerance, especially for visual computing applications. <s> BIB001 </s> A Survey of Techniques for Approximate Computing <s> Error-Resilience of Programs and Users. <s> Modern processors are using increasingly larger sized on-chip caches. Also, with each CMOS technology generation, there has been a significant increase in their leakage energy consumption. For this reason, cache power management has become a crucial research issue in modern processor design. To address this challenge and also meet the goals of sustainable computing, researchers have proposed several techniques for improving energy efficiency of cache architectures. This paper surveys recent architectural techniques for improving cache power efficiency and also presents a classification of these techniques based on their characteristics. For providing an application perspective, this paper also reviews several real-world processor chips that employ cache energy saving techniques. The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches. <s> BIB002
The perceptual limitations of humans provide scope for AC in visual and other computing applications. Similarly, many programs have noncritical portions, and small errors in these do not affect QoR significantly. For example, in a 3D raytracer application, 98% of FP operations and 91% of data accesses are approximable. Similarly, since the lower-order bits have smaller significance than the higher-order bits, approximating them may have only a minor impact on QoR [Ranjan et al. 2015; Rahimi et al. 2015] (see the sketch after this section). In several iterative-refinement algorithms, running extra iterations with reduced precision of intermediate computations can still provide the same QoR. In some scenarios, for example search engines, no unique answer exists, but multiple answers are admissible. Similarly, redundancy due to spatial/temporal correlation provides scope for AC [BIB001; Yazdanbakhsh et al. 2015b; Samadi et al. 2013]. 2.2.3. Efficiency Optimization. In the image-processing domain, a PSNR value greater than 30dB and, in typical error-resilient applications, errors of less than 10% are generally considered acceptable. By exploiting this margin, AC can aggressively improve performance and energy efficiency. For example, by intelligently reducing the eDRAM/DRAM refresh rate or the SRAM supply voltage, the energy consumed in storage and memory accesses can be reduced with a minor loss in precision BIB002. Similarly, the AC approach can alleviate the scalability bottleneck, improve performance by early loop termination, skip memory accesses, offload computations to an accelerator, improve yield, and much more. 2.2.4. Quality Configurability. AC can provide knobs to trade off quality with efficiency; thus, instead of executing every computation to full fidelity, the user needs to expend only as much effort (e.g., area, energy) as dictated by the QoR requirement. For example, an ACT can use different precisions for data storage/processing, program versions of different quality (see Table IV), different refresh rates in eDRAM/DRAM (see Table V), and so on, to just fulfill the QoR requirement.
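The lower-order-bit approximation mentioned above can be illustrated in a few lines. The sketch below masks off the k least-significant bits of a value; the choice k = 3 is an assumption made for illustration.

```python
# A minimal sketch of lower-order-bit approximation: zeroing the k least-
# significant bits trades a small, bounded error for cheaper storage/transfer.
# The value of k is an illustrative assumption.

def truncate_low_bits(value, k=3):
    """Zero the k lowest-order bits of a non-negative integer (max error 2**k - 1)."""
    return value & ~((1 << k) - 1)

pixel = 201                      # 0b11001001
print(truncate_low_bits(pixel))  # 200 (0b11001000): an error of 1 out of 255
```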
A Survey of Techniques for Approximate Computing <s> IDENTIFYING APPROXIMABLE PORTIONS AND EXPRESSING THIS AT LANGUAGE LEVEL <s> In this paper, we propose a framework for low-energy digital signal processing (DSP) where the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup comprised of the DSP architecture operating at sub-critical voltage and the error control scheme is referred to as soft DSP. It is shown that technology scaling renders the proposed scheme more effective as the delay penalty suffered due to voltage scaling reduces due to short channel effects. The effectiveness of the proposed scheme is also enhanced when arithmetic units with a higher "delay-imbalance" are employed. A prediction based error-control scheme is proposed to enhance the performance of the filtering algorithm in presence of errors due to soft computations. For a frequency selective filter, it is shown that the proposed scheme provides 60%-81% reduction in energy dissipation for filter bandwidths up to 0.5 /spl pi/ (where 2 /spl pi/ corresponds to the sampling frequency f/sub s/) over that achieved via conventional voltage scaling, with a maximum of 0.5 dB degradation in the output signal-to-noise ratio (SNR/sub o/). It is also shown that the proposed algorithmic noise-tolerance schemes can be used to improve the performance of DSP algorithms in presence of bit-error rates of up to 10/sup -3/ due to deep submicron (DSM) noise. <s> BIB001 </s> A Survey of Techniques for Approximate Computing <s> IDENTIFYING APPROXIMABLE PORTIONS AND EXPRESSING THIS AT LANGUAGE LEVEL <s> Energy-efficient computing is important in several systems ranging from embedded devices to large scale data centers. Several application domains offer the opportunity to tradeoff quality of service/solution (QoS) for improvements in performance and reduction in energy consumption. Programmers sometimes take advantage of such opportunities, albeit in an ad-hoc manner and often without providing any QoS guarantees. We propose a system called Green that provides a simple and flexible framework that allows programmers to take advantage of such approximation opportunities in a systematic manner while providing statistical QoS guarantees. Green enables programmers to approximate expensive functions and loops and operates in two phases. In the calibration phase, it builds a model of the QoS loss produced by the approximation. This model is used in the operational phase to make approximation decisions based on the QoS constraints specified by the programmer. The operational phase also includes an adaptation function that occasionally monitors the runtime behavior and changes the approximation decisions and QoS model to provide strong statistical QoS guarantees. To evaluate the effectiveness of Green, we implemented our system and language extensions using the Phoenix compiler framework. Our experiments using benchmarks from domains such as graphics, machine learning, signal processing, and finance, and an in-production, real-world web search engine, indicate that Green can produce significant improvements in performance and energy consumption with small and controlled QoS degradation. 
<s> BIB002 </s> A Survey of Techniques for Approximate Computing <s> IDENTIFYING APPROXIMABLE PORTIONS AND EXPRESSING THIS AT LANGUAGE LEVEL <s> As semiconductor fabrics scale closer to fundamental physical limits, their reliability is decreasing due to process variation, noise margin effects, aging effects, and increased susceptibility to soft errors. Reliability can be regained through redundancy, error checking with recovery, voltage scaling and other means, but these techniques impose area/energy costs. Since some applications (e.g. media) can tolerate limited computation errors and still provide useful results, error-tolerant computation models have been explored, with both the application and computation fabric having stochastic characteristics. Stochastic computation has, however, largely focused on application-specific hardware solutions, and is not general enough to handle arbitrary bit errors that impact memory addressing or control in processors. In response, this paper addresses requirements for error-tolerant execution by proposing and evaluating techniques for running error-tolerant software on a general-purpose processor built from an unreliable fabric. We study the minimum error-protection required, from a microarchitecture perspective, to still produce useful results at the application output. Even with random errors as frequent as every 250μs, our proposed design allows JPEG and MP3 benchmarks to sustain good output quality---14dB and 7dB respectively. Overall, this work establishes the potential for error-tolerant single-threaded execution, and details its required hardware/system support. <s> BIB003 </s> A Survey of Techniques for Approximate Computing <s> IDENTIFYING APPROXIMABLE PORTIONS AND EXPRESSING THIS AT LANGUAGE LEVEL <s> Control and memory divergence between threads within the same execution bundle, or warp, have been shown to cause significant performance bottlenecks for GPU applications. In this paper, we exploit the observation that many GPU applications exhibit error tolerance to propose branch and data herding. Branch herding eliminates control divergence by forcing all threads in a warp to take the same control path. Data herding eliminates memory divergence by forcing each thread in a warp to load from the same memory block. To safely and efficiently support branch and data herding, we propose a static analysis and compiler framework to prevent exceptions when control and data errors are introduced, a profiling framework that aims to maximize performance while maintaining acceptable output quality, and hardware optimizations to improve the performance benefits of exploiting error tolerance through branch and data herding. Our software implementation of branch herding on NVIDIA GeForce GTX 480 improves performance by up to 34% (13%, on average) for a suite of NVIDIA CUDA SDK and Parboil benchmarks. Our hardware implementation of branch herding improves performance by up to 55% (30%, on average). Data herding improves performance by up to 32% (25%, on average). Observed output quality degradation is minimal for several applications that exhibit error tolerance, especially for visual computing applications. <s> BIB004 </s> A Survey of Techniques for Approximate Computing <s> IDENTIFYING APPROXIMABLE PORTIONS AND EXPRESSING THIS AT LANGUAGE LEVEL <s> Prior art in approximate computing has extensively studied computational resilience to imprecision. 
However, existing approaches often rely on static techniques, which potentially compromise coverage and reliability. Our approach, on the other hand, decouples error analysis of the approximate accelerator from quality analysis of the overall application. We use high-level, application-specific metrics, or Light-Weight Checks (LWCs), to gain coverage by exploiting imprecision tolerance at the application level. Unlike metrics that compare approximate solutions to exact ones, LWCs can be leveraged dynamically for error analysis and recovery. The resulting methodology adapts to output quality at runtime, providing guarantees on worst-case application-level error. To ensure platform agnosticism, these light-weight metrics are integrated directly into the application, enabling compatibility with any approximate acceleration technique. Our results present a case study of dynamic error control for inverse kinematics. Using software-based neural acceleration with LWC support, we demonstrate improvements in coverage, reliability, and overall performance. <s> BIB005
Finding approximable variables and operations is the crucial initial step in every ACT. While this is straightforward in several cases (e.g., approximating the lower-order bits of graphics data), in other cases it may require insights into program characteristics or error injection to find the portions that can be approximated with little impact on QoR (Section 3.1). Closely related to it is the output-monitoring step, which verifies adherence to the quality constraint and triggers parameter adjustment or precise execution in the case of unacceptable quality loss (Section 3.2). Further, once relaxable portions are identified, conveying this to the software or compiler requires source-code annotations; several programming frameworks provide support for this (Section 3.3). The source-code annotation can take the form of OpenMP-style pragma directives (Section 3.4), which provide several benefits, for example, nonintrusive and incremental program transformation, and easy debugging or comparison with exact code by disabling the approximation directives with a single compiler flag. Table III classifies the techniques based on these factors:
Table III. Classification of the techniques
- Error injection [Venkataramani et al. 2013; BIB003]
- Use of DBI [Düben et al. 2015; Venkataramani et al. 2013]
- Output quality monitoring [BIB005; BIB001; Khudia et al. 2015; Mahajan et al. 2015; Ringenburg et al., 2015; Samadi and Mahlke 2014]
- Annotating approximable program portions [Esmaeilzadeh et al., 2012b; Shi et al. 2015; Yazdanbakhsh et al., 2015b]
- Use of OpenMP-style pragma
- Use of compiler [BIB002; Esmaeilzadeh et al., 2012b; Mahajan et al. 2015; Rahimi et al. 2013; Ringenburg et al. 2015; Samadi et al. 2013; BIB004; Sidiroglou et al. 2011; BIB003]
We now discuss several of these techniques. One work presents a SW framework for automatically discovering approximable data in a program by using statistical methods. Their technique first collects the variables of the program and the range of values that they can take. Then, using binary instrumentation, the values of the variables are perturbed and the new output is measured. By comparing this against the correct output, which fulfills the acceptable QoS threshold, the contribution of each variable to the program output is measured. Based on this, the variables are marked as approximable or nonapproximable. Thus, their framework obviates the need for a programmer's involvement or source-code annotations for AC. They show that, compared to a baseline with type-qualifier annotations by the programmer, variables not marked in the programmer-annotated version may be marked as approximable by their technique, which can lead to errors. Chippa et al.
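The perturbation-based discovery just described can be sketched as follows. This is a hedged, toy reconstruction: the example program, the perturbation magnitude, and the QoS threshold are assumptions, and the actual framework uses binary instrumentation rather than source-level perturbation.

```python
# A minimal sketch of perturbation-based discovery of approximable variables:
# inject errors into one candidate variable at a time and compare the degraded
# output against a QoS threshold. Program, variables, and threshold are assumed.
import random

def run_program(gain, offset, xs):
    return [gain * x + offset for x in xs]

def quality_loss(exact, approx):
    return sum(abs(e - a) for e, a in zip(exact, approx)) / sum(abs(e) for e in exact)

xs = [1.0, 2.0, 3.0, 4.0]
exact = run_program(2.0, 0.5, xs)
threshold = 0.10  # acceptable relative output error (an assumption)

random.seed(0)
candidates = [("gain", (2.0 * (1 + random.uniform(-0.05, 0.05)), 0.5)),
              ("offset", (2.0, 0.5 * (1 + random.uniform(-0.05, 0.05))))]
for name, perturbed in candidates:
    loss = quality_loss(exact, run_program(*perturbed, xs))
    label = "approximable" if loss <= threshold else "non-approximable"
    print(f"{name}: loss={loss:.4f} -> {label}")
```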
A Survey of Techniques for Approximate Computing <s> STRATEGIES FOR APPROXIMATION <s> In this paper, we propose a framework for low-energy digital signal processing (DSP) where the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup comprised of the DSP architecture operating at sub-critical voltage and the error control scheme is referred to as soft DSP. It is shown that technology scaling renders the proposed scheme more effective as the delay penalty suffered due to voltage scaling reduces due to short channel effects. The effectiveness of the proposed scheme is also enhanced when arithmetic units with a higher "delay-imbalance" are employed. A prediction based error-control scheme is proposed to enhance the performance of the filtering algorithm in presence of errors due to soft computations. For a frequency selective filter, it is shown that the proposed scheme provides 60%-81% reduction in energy dissipation for filter bandwidths up to 0.5 /spl pi/ (where 2 /spl pi/ corresponds to the sampling frequency f/sub s/) over that achieved via conventional voltage scaling, with a maximum of 0.5 dB degradation in the output signal-to-noise ratio (SNR/sub o/). It is also shown that the proposed algorithmic noise-tolerance schemes can be used to improve the performance of DSP algorithms in presence of bit-error rates of up to 10/sup -3/ due to deep submicron (DSM) noise. <s> BIB001 </s> A Survey of Techniques for Approximate Computing <s> STRATEGIES FOR APPROXIMATION <s> Instruction memoization is a promising technique to reduce the power consumption and increase the performance of future low-end/mobile multimedia systems. Power and performance efficiency can be improved by reusing instances of an already executed operation. Unfortunately, this technique may not always be worth the effort due to the power consumption and area impact of the tables required to leverage an adequate level of reuse. In this paper, we introduce and evaluate a novel way of understanding multimedia floating-point operations based on the fuzzy computation paradigm: performance and power consumption can be improved at the cost of small precision losses in computation. By exploiting this implicit characteristic of multimedia applications, we propose a new technique called tolerant memoization. This technique expands the capabilities of classic memoization by associating entries with similar inputs to the same output. We evaluate this new technique by measuring the effect of tolerant memoization for floating-point operations in a low-power multimedia processor and discuss the trade-offs between performance and quality of the media outputs. We report energy improvements of 12 percent for a set of key multimedia applications with small LUT of 6 Kbytes, compared to 3 percent obtained using previously proposed techniques. <s> BIB002 </s> A Survey of Techniques for Approximate Computing <s> STRATEGIES FOR APPROXIMATION <s> Energy-efficient computing is important in several systems ranging from embedded devices to large scale data centers. Several application domains offer the opportunity to tradeoff quality of service/solution (QoS) for improvements in performance and reduction in energy consumption. 
Programmers sometimes take advantage of such opportunities, albeit in an ad-hoc manner and often without providing any QoS guarantees. We propose a system called Green that provides a simple and flexible framework that allows programmers to take advantage of such approximation opportunities in a systematic manner while providing statistical QoS guarantees. Green enables programmers to approximate expensive functions and loops and operates in two phases. In the calibration phase, it builds a model of the QoS loss produced by the approximation. This model is used in the operational phase to make approximation decisions based on the QoS constraints specified by the programmer. The operational phase also includes an adaptation function that occasionally monitors the runtime behavior and changes the approximation decisions and QoS model to provide strong statistical QoS guarantees. To evaluate the effectiveness of Green, we implemented our system and language extensions using the Phoenix compiler framework. Our experiments using benchmarks from domains such as graphics, machine learning, signal processing, and finance, and an in-production, real-world web search engine, indicate that Green can produce significant improvements in performance and energy consumption with small and controlled QoS degradation. <s> BIB003 </s> A Survey of Techniques for Approximate Computing <s> STRATEGIES FOR APPROXIMATION <s> As semiconductor fabrics scale closer to fundamental physical limits, their reliability is decreasing due to process variation, noise margin effects, aging effects, and increased susceptibility to soft errors. Reliability can be regained through redundancy, error checking with recovery, voltage scaling and other means, but these techniques impose area/energy costs. Since some applications (e.g. media) can tolerate limited computation errors and still provide useful results, error-tolerant computation models have been explored, with both the application and computation fabric having stochastic characteristics. Stochastic computation has, however, largely focused on application-specific hardware solutions, and is not general enough to handle arbitrary bit errors that impact memory addressing or control in processors. In response, this paper addresses requirements for error-tolerant execution by proposing and evaluating techniques for running error-tolerant software on a general-purpose processor built from an unreliable fabric. We study the minimum error-protection required, from a microarchitecture perspective, to still produce useful results at the application output. Even with random errors as frequent as every 250μs, our proposed design allows JPEG and MP3 benchmarks to sustain good output quality---14dB and 7dB respectively. Overall, this work establishes the potential for error-tolerant single-threaded execution, and details its required hardware/system support. <s> BIB004 </s> A Survey of Techniques for Approximate Computing <s> STRATEGIES FOR APPROXIMATION <s> Control and memory divergence between threads within the same execution bundle, or warp, have been shown to cause significant performance bottlenecks for GPU applications. In this paper, we exploit the observation that many GPU applications exhibit error tolerance to propose branch and data herding. Branch herding eliminates control divergence by forcing all threads in a warp to take the same control path. Data herding eliminates memory divergence by forcing each thread in a warp to load from the same memory block. 
To safely and efficiently support branch and data herding, we propose a static analysis and compiler framework to prevent exceptions when control and data errors are introduced, a profiling framework that aims to maximize performance while maintaining acceptable output quality, and hardware optimizations to improve the performance benefits of exploiting error tolerance through branch and data herding. Our software implementation of branch herding on NVIDIA GeForce GTX 480 improves performance by up to 34% (13%, on average) for a suite of NVIDIA CUDA SDK and Parboil benchmarks. Our hardware implementation of branch herding improves performance by up to 55% (30%, on average). Data herding improves performance by up to 32% (25%, on average). Observed output quality degradation is minimal for several applications that exhibit error tolerance, especially for visual computing applications. <s> BIB005
Once approximable variables and operations have been identified, they can be approximated using a variety of strategies, such as reducing their precision; skipping tasks, memory accesses, or some iterations of a loop; performing an operation on inexact hardware; and so forth. Table IV summarizes the strategies used for approximation:
Table IV. Strategies used for approximation
- Precision scaling [Düben et al. 2015; Keramidas et al. 2015; Rahimi et al. 2013; Venkataramani et al. 2013]
- Loop perforation [BIB003; Samadi and Mahlke 2014; Shi et al. 2015; Sidiroglou et al. 2011]
- Load value approximation [Yazdanbakhsh et al. 2015b]
- Memoization [BIB002; Keramidas et al. 2015; Rahimi et al. 2013, 2015; Ringenburg et al. 2015; Samadi et al. 2014]
- Task dropping/skipping [Samadi et al. 2013; Sidiroglou et al. 2011]
- Memory access skipping [Yazdanbakhsh et al. 2015b]
- Data sampling [Samadi et al. 2014]
- Using program versions of different accuracy [BIB003]
- Using inexact or faulty HW [BIB001; Kahng and Kang 2012; Rahimi et al. 2013; Varatkar and Shanbhag 2008; BIB004]
- Voltage scaling [BIB001; Rahimi et al. 2015; Varatkar and Shanbhag 2008; Venkataramani et al. 2013]
- Refresh rate reduction
- Inexact reads/writes [Ranjan et al. 2015]
- Reducing divergence in GPU [BIB005]
- Lossy compression [BIB004]
- Use of neural network [Eldridge et al. 2014; Esmaeilzadeh et al. 2012b; Grigorian et al. 2015; Reinman 2014, 2015; Khudia et al. 2015; Mahajan et al. 2015]
Note that the ideas used in these strategies are not mutually exclusive. We now discuss these strategies, in the context of the ACTs in which they are used.
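Of these strategies, loop perforation is perhaps the simplest to illustrate. The sketch below skips a fixed fraction of loop iterations and accepts a small error in the reduced result; the workload and perforation rate are illustrative assumptions.

```python
# A minimal sketch of loop perforation, one of the strategies in Table IV: skip
# a fixed fraction of loop iterations and accept a small error in the result.
# The perforation rate and workload are illustrative assumptions.

def mean_perforated(xs, skip_factor=2):
    """Average xs using only every skip_factor-th iteration."""
    sampled = xs[::skip_factor]
    return sum(sampled) / len(sampled)

xs = [float(i % 17) for i in range(10_000)]
exact = sum(xs) / len(xs)
approx = mean_perforated(xs, skip_factor=4)   # ~4x fewer iterations
print(f"exact={exact:.3f} approx={approx:.3f} error={abs(exact - approx) / exact:.2%}")
```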
A Literature Overview of Fuzzy Database Models * <s> Fuzzy Sets and Possibility Distributions <s> This paper deals with the application of fuzzy logic in a relational database environment with the objective of capturing more meaning of the data. It is shown that with suitable interpretations for the fuzzy membership functions, a fuzzy relational data model can be used to represent ambiguities in data values as well as impreciseness in the association among them. Relational operators for fuzzy relations have been studied, and applicability of fuzzy logic in capturing integrity constraints has been investigated. By introducing a fuzzy resemblance measure EQUAL for comparing domain values, the definition of classical functional dependency has been generalized to fuzzy functional dependency (ffd). The implication problem of ffds has been examined and a set of sound and complete inference axioms has been proposed. Next, the problem of lossless join decomposition of fuzzy relations for a given set of fuzzy functional dependencies is investigated. It is proved that with a suitable restriction on EQUAL, the design theory of a classical relational database with functional dependencies can be extended to fuzzy relations satisfying fuzzy functional dependencies. <s> BIB001 </s> A Literature Overview of Fuzzy Database Models * <s> Fuzzy Sets and Possibility Distributions <s> The theory of possibility described in this paper is related to the theory of fuzzy sets by defining the concept of a possibility distribution as a fuzzy restriction which acts as an elastic constraint on the values that may be assigned to a variable. More specifically, if F is a fuzzy subset of a universe of discourse U = {u} which is characterized by its membership function μF, then a proposition of the form “X is F”, where X is a variable taking values in U, induces a possibility distribution tx which equates the possibility of X taking the value u to μF(u)—the compatibility of u with F. In this way, X becomes a fuzzy variable which is associated with the possibility distribution tx in much the same way as a random variable is associated with a probability distribution. In general, a variable may be associated both with a possibility distribution and a probability distribution, with the weak connection between the two expressed as the possibility/probability consistency principle. ::: ::: A thesis advanced in this paper is that the imprecision that is intrinsic in natural languages is, in the main, possibilistic rather than probabilistic in nature. Thus, by employing the concept of a possibility distribution, a proposition, p, in a natural language may be translated into a procedure which computes the probability distribution of a set of attributes which are implied by p. Several types of conditional translation rules are discussed and, in particular, a translation rule for propositions of the form “X is F is α-possible”, where α is a number in the interval [0,1], is formulated and illustrated by examples. <s> BIB002 </s> A Literature Overview of Fuzzy Database Models * <s> Fuzzy Sets and Possibility Distributions <s> Fuzzy Sets.- The Operation of Fuzzy Set.- Fuzzy Relation and Composition.- Fuzzy Graph and Relation.- Fuzzy Number.- Fuzzy Function.- Probabilisy and Uncertainty.- Fuzzy Logic.- Fuzzy Inference.- Fuzzy Control and Fuzzy Expert Systems.- Fusion of Fuzzy System and Neural Networks.- Fusion of Fuzzy Systems and Genetic Algorithms. <s> BIB003
Many of the existing approaches to dealing with imprecision and uncertainty are based on the theory of fuzzy sets and possibility distribution theory BIB002. Let U be a universe of discourse. A membership function μF: U → [0, 1] is defined for a fuzzy set F, where μF(u), for each u ∈ U, denotes the degree of membership of u in the fuzzy set F. Thus the fuzzy set F is described as follows: F = {μF(u1)/u1, μF(u2)/u2, ..., μF(un)/un}. When U is an infinite set, the fuzzy set F can be represented by F = ∫U μF(u)/u. When the membership function μF(u) above is interpreted as a measure of the possibility that a variable X has the value u, where X takes values in U, a fuzzy value is described by a possibility distribution πX = {πX(u1)/u1, πX(u2)/u2, ..., πX(un)/un}. Here, πX(ui), ui ∈ U, denotes the possibility that ui is true. Let πX and F be the possibility distribution representation and the fuzzy set representation of a fuzzy value, respectively. It is clear that πX = F holds BIB001. For more concepts and operations on fuzzy sets, one can refer to BIB003.
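In code, a discrete fuzzy value can be carried around simply as a mapping from domain elements to membership (equivalently, possibility) degrees. The sketch below is illustrative; the "about 30" distribution is an assumed example, not taken from the cited works.

```python
# A minimal sketch of the discrete fuzzy-set/possibility-distribution notation
# above: a fuzzy value is a mapping u -> membership (possibility) degree in [0, 1].
# The "about 30" distribution is an illustrative assumption.

# pi_X(u) for the fuzzy value "AGE is about 30" over a discrete domain U.
about_30 = {28: 0.5, 29: 0.8, 30: 1.0, 31: 0.8, 32: 0.5}

def support(pi):
    """supp(pi): domain elements with strictly positive possibility."""
    return {u for u, degree in pi.items() if degree > 0}

def alpha_cut(pi, alpha):
    """Elements whose membership degree is at least alpha."""
    return {u for u, degree in pi.items() if degree >= alpha}

print(support(about_30))         # the whole domain 28..32
print(alpha_cut(about_30, 0.8))  # the elements 29, 30, 31
```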
A Literature Overview of Fuzzy Database Models * <s> FUZZY RELATIONAL DATABASES <s> Preface. 1. Database Fundamentals. 2. Relational Databases and Fuzzy Sets Background. 3. Similarity-Based Models. 4. Possibility-Based Models. 5. Alternative Database Models and Approaches. 6. Commercial and Industrial Applications. Index. <s> BIB001 </s> A Literature Overview of Fuzzy Database Models * <s> FUZZY RELATIONAL DATABASES <s> Some recent fuzzy database modeling advances for the non-traditional applications are introduced in this book. The focus is on database models for modeling complex information and uncertainty at the conceptual, logical, physical design levels and from integrity constraints defined on the fuzzy relations. The database models addressed here are; the conceptual data models, including the ExIFO and ExIFO2 data models, the logical database models, including the extended NF2 database model, fuzzy object-oriented database model, and the fuzzy deductive object-oriented database model. Integrity constraints are defined on the fuzzy relations are also addressed. A continuing reason for the limited adoption of fuzzy database systems has been performance. There have been few efforts at defining physical structures that accomodate fuzzy information. A new access structure and data organization for fuzzy information is introduced in this book. <s> BIB002 </s> A Literature Overview of Fuzzy Database Models * <s> FUZZY RELATIONAL DATABASES <s> Preface. Part I: Basic Concepts. 1. The Relational Data Model. 2. Conceptual Modeling with the Entity-Relationship Model. 3. Fuzzy Logic. Part II: Fuzzy Conceptual Modeling. 4. Fuzzy ER Concepts. 5. Fuzzy EER Concepts. Part III: Representation of Fuzzy Data and Constraints. 6. Fuzzy Data Representation. 7. Fuzzy Functional Dependencies (FFDs) as Integrity Constraints. 8. A FFD Inference System. Part IV: Fuzzy Database Design and Information Maintenance. 9. Scheme Decomposition and Information Maintenance. 10. Design of Fuzzy Databases to Avoid Update Anomalies. Bibliography. Appendix. A: List of Examples. B: List of Definitions. C: List of Theorems. D: List of Lemmas. E: List of Algorithms. Index. <s> BIB003 </s> A Literature Overview of Fuzzy Database Models * <s> FUZZY RELATIONAL DATABASES <s> Background Information.- Conceptual Data Modeling.- Logical Database Models.- Fuzzy Sets and Possibility Distributions.- Fuzzy Conceptual Data Modeling.- The Fuzzy Er and Fuzzy Eer Models.- The Fuzzy UML Data Model.- The Fuzzy XML Model.- Fuzzy Database Models.- The Fuzzy Relational Databases.- The Fuzzy Nested Relational Databases.- The Fuzzy Object-Oriented Databases.- Conceptual Design of Fuzzy Databases. <s> BIB004
Some major questions have been discussed and answered in the literature on fuzzy relational databases (FRDBs), including representations and models, semantic measures and data redundancies, query and data processing, data dependencies and normalizations, and implementation. For a comprehensive review of what has been done in the development of fuzzy relational databases, please refer to BIB003 BIB004 BIB001 BIB002.
A Literature Overview of Fuzzy Database Models * <s> Representations and Models <s> A structure for representing inexact information in the form of a relational database is presented. The structure differs from ordinary relational databases in two important respects: Components of tuples need not be single values and a similarity relation is required for each domain set of the database. Two critical properties possessed by ordinary relational databases are proven to exist in the fuzzy relational structure. These properties are (1) no two tuples have identical interpretations, and (2) each relational operation has a unique result. <s> BIB001 </s> A Literature Overview of Fuzzy Database Models * <s> Representations and Models <s> This paper deals with the application of fuzzy logic in a relational database environment with the objective of capturing more meaning of the data. It is shown that with suitable interpretations for the fuzzy membership functions, a fuzzy relational data model can be used to represent ambiguities in data values as well as impreciseness in the association among them. Relational operators for fuzzy relations have been studied, and applicability of fuzzy logic in capturing integrity constraints has been investigated. By introducing a fuzzy resemblance measure EQUAL for comparing domain values, the definition of classical functional dependency has been generalized to fuzzy functional dependency (ffd). The implication problem of ffds has been examined and a set of sound and complete inference axioms has been proposed. Next, the problem of lossless join decomposition of fuzzy relations for a given set of fuzzy functional dependencies is investigated. It is proved that with a suitable restriction on EQUAL, the design theory of a classical relational database with functional dependencies can be extended to fuzzy relations satisfying fuzzy functional dependencies. <s> BIB002 </s> A Literature Overview of Fuzzy Database Models * <s> Representations and Models <s> In fuzzy data modeling, there is a series of design issues that should be dealt with. This article concentrates on the definition of data redundancy in a fuzzy relational data model. Compared to the existing approaches, a general treatment is provided, enabling us to directly and rigourously handle the situation in which possibility distributions can appear as attribute values combined with closeness relations in domain elements. The treatment of fuzzy data redundancy, which we propose here, is mainly based on Zadeh's extension principle. In doing so, a natural incorporation of closeness relations into the operations defined by the extension principle is worked out. Certain desirable properties are achieved in our model, in terms of intuition and model design. © 1992 John Wiley & Sons, Inc. <s> BIB003 </s> A Literature Overview of Fuzzy Database Models * <s> Representations and Models <s> In this paper, we propose notions of equivalence and inclusion of fuzzy data in relational databases for measuring their semantic relationship. The fuzziness of data appears in attribute values in forms of possibility distribution as well as resemblance relations in attribute domain elements. An approach for evaluating semantic measures is presented. With the proposal, one can remove fuzzy data redundancy and define fuzzy functional dependency. © 2000 John Wiley & Sons, Inc. 
<s> BIB004 </s> A Literature Overview of Fuzzy Database Models * <s> Representations and Models <s> In the real world, there exist a lot of fuzzy data which cannot or need not be precisely defined. We distinguish two types of fuzziness: one in an attribute value itself and the other in an association of them. For such fuzzy data, we propose a possibility-distribution-fuzzy-relational model, in which fuzzy data are represented by fuzzy relations whose grades of membership and attribute values are possibility distributions. In this model, the former fuzziness is represented by a possibility distribution and the latter by a grade of membership. Relational algebra for the ordinary relational database as defined by Codd includes the traditional set operations and the special relational operations. These operations are classified into the primitive operations, namely, union, difference, extended Cartesian product, selection and projection, and the additional operations, namely, intersection, join, and division. We define the relational algebra for the possibility-distribution-fuzzy-relational model of fuzzy databases. <s> BIB005
Several approaches have been taken to incorporate fuzzy data into relational databases. One family of FRDB models is based on fuzzy relations BIB002 and similarity relations BIB001. The other is based on possibility distributions, and can further be classified into two categories: tuples associated with possibility (membership) degrees, and attribute values represented by possibility distributions. The possibility-based FRDB model can be further extended into the extended possibility-based FRDB model (see Table 1) BIB004. A fuzzy relation r on a relational schema R(A1, A2, ..., An) is then defined over domains Dom(Ai), where Dom(Ai) may be a fuzzy subset or even a set of fuzzy subsets, and a resemblance relation (a reflexive and symmetric fuzzy relation) may be defined on the elements of the domain. The forms of an n-tuple in the two possibility-based categories above can be expressed, respectively, as t = <a1, a2, ..., an, d>, where d is the membership degree of the tuple, and t = <πA1, πA2, ..., πAn>, where each πAi is a possibility distribution over Dom(Ai). Based on the above-mentioned basic FRDB models, there are several extended FRDB models. It is clear that one can combine the two kinds of fuzziness in possibility-based FRDBs, where attribute values may be possibility distributions and tuples are connected with membership degrees. Such FRDBs are called possibility-distribution-fuzzy relational models in BIB005. Another possible extension is to combine possibility distributions and similarity (proximity or resemblance) relations; the extended possibility-based fuzzy relational databases are hereby proposed in BIB003 BIB004, where possibility distributions and resemblance relations arise in a relational database simultaneously.
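The two possibility-based tuple forms can be made concrete with a small sketch. The relation schema and sample data below are illustrative assumptions.

```python
# A minimal sketch of the two possibility-based tuple forms discussed above:
# (i) a tuple with crisp attribute values plus a membership degree, and (ii) a
# tuple whose attribute values are themselves possibility distributions.
# The schema and the sample data are illustrative assumptions.

# (i) Tuples associated with possibility/membership degrees: <a1, ..., an, d>.
employees_type1 = [
    {"name": "Tom", "age": 30, "d": 0.9},     # tuple belongs to the relation to degree 0.9
]

# (ii) Attribute values represented by possibility distributions: <pi_A1, ..., pi_An>.
employees_type2 = [
    {"name": {"Tom": 1.0},                    # a crisp value as a degenerate distribution
     "age": {29: 0.8, 30: 1.0, 31: 0.8}},     # "about 30"
]

def possible_ages(tuple2, threshold=0.8):
    """Ages whose possibility meets the threshold in a type-(ii) tuple."""
    return sorted(u for u, p in tuple2["age"].items() if p >= threshold)

print(possible_ages(employees_type2[0]))  # [29, 30, 31]
```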
Semantic Measures
To measure the semantic relationship between fuzzy data, several proposals for assessing data redundancy can be found in the literature, most of them closeness measures based on resemblance.

(a) The notion of a nearness measure is proposed in BIB001 . Two fuzzy data πA and πB are considered α-β redundant if and only if the following inequalities hold:

min {Res(x, y) | x, y ∈ supp(πA) ∪ supp(πB)} ≥ α and
min {1 − |πA(x) − πB(x)| | x ∈ U} ≥ β,

where α and β are given thresholds, Res(x, y) denotes the resemblance relation on the attribute domain, and supp(πA) denotes the support of πA. It is clear that a twofold condition is applied in their study: a resemblance criterion and a matching criterion.

(b) For two fuzzy data πA and πB, the following approach is defined in BIB002 to assess the possibility and the impossibility that πA = πB, where c(x, y) denotes a closeness relation (the same as the resemblance relation). Classical equality is extended by means of a function Ec:

Ec(πA, πB)(T) = sup x,y min(πA(x), πB(y), c(x, y)) and
Ec(πA, πB)(F) = sup x,y min(πA(x), πB(y), 1 − c(x, y)),

where T denotes True and F denotes False. The key idea is that the operations are performed not only upon identical elements, but also upon close elements.

(c) In BIB003 , the notions of weak resemblance and strong resemblance are proposed for representing, respectively, the possibility and the necessity that two fuzzy values πA and πB are approximately equal:

Π(πA ≈ πB) = sup x,y min(Res(x, y), πA(x), πB(y)) and
N(πA ≈ πB) = inf x,y max(Res(x, y), 1 − πA(x), 1 − πB(y)).

The weak resemblance gives the extent to which some crisp element in the imprecise value A(x) resembles some crisp element in the imprecise value A(y). The strong resemblance gives the extent to which all crisp elements in A(x) resemble all crisp elements in A(y).

(d) A further function has been proposed to measure the interchangeability, i.e., the extent to which a fuzzy value πA can be replaced with another fuzzy value πB, the possibility that πA is close to πB from the left-hand side:

μS(πA, πB)(x) = sup y min(πB(y), Res(x, y)).

μS(πA, πB)(x) measures the extent to which there exists a representative <y, πB(y)> in πB that can be substituted for x.

The treatment in (a) sets the two criteria separately for redundancy evaluation, and counterintuitive results are produced BIB002 . The approaches in (b) and (d), where the approach in (d) is actually an extension of the approach in (a), try to set the two criteria together for redundancy evaluation, but the counterintuitive problem of (a) still exists in the approach in (d) BIB004 . For the approach in (b), there also exist some inconsistencies in assessing the redundancy of fuzzy data represented by possibility distributions BIB004 . As to the approach in (c), the weak resemblance appears to be too "optimistic", whereas the strong resemblance is too severe for the semantic assessment of fuzzy data. The approach in (b) is somewhat similar to the weak resemblance measure, except that the degree of resemblance between crisp values is no longer incorporated into the min but is instead used to calibrate the set of comparable values.

(e) In BIB004 , two notions, the semantic inclusion degree SID(πA, πB) and the semantic equivalence degree SED(πA, πB), are introduced for the semantic measure of two fuzzy data πA and πB. Based on possibility distributions and resemblance relations, they can be calculated as

SIDα(πA, πB) = (Σ x ∈ U min(πB(x), sup {πA(y) | y ∈ U, Res(x, y) ≥ α})) / (Σ x ∈ U πB(x)) and
SEDα(πA, πB) = min(SIDα(πA, πB), SIDα(πB, πA)).

Here SIDα(πA, πB) is the degree to which πA semantically includes πB, and SEDα(πA, πB) is the degree to which πA and πB are semantically equivalent to each other.
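The following Python sketch illustrates two of the measures above: the extended equality of approach (b) and the semantic inclusion/equivalence degrees of approach (e), coded directly from the formulas as given in this section. Fuzzy data are dicts from domain elements to possibility degrees, and res is a caller-supplied resemblance function such as the one in the earlier sketch; this is an illustrative reading, not the original authors' implementation.

```python
# An illustrative coding (not the authors' implementation) of approaches (b)
# and (e), following the formulas given above. Fuzzy data are dicts mapping
# domain elements to possibility degrees; res is a caller-supplied
# resemblance function.

def extended_equality(pi_a, pi_b, res):
    """Approach (b): possibility (T) and impossibility (F) that pi_a = pi_b."""
    poss_t = max(min(da, db, res(x, y))
                 for x, da in pi_a.items() for y, db in pi_b.items())
    poss_f = max(min(da, db, 1.0 - res(x, y))
                 for x, da in pi_a.items() for y, db in pi_b.items())
    return poss_t, poss_f

def sid(pi_a, pi_b, res, alpha):
    """Approach (e): degree to which pi_a semantically includes pi_b."""
    num = sum(min(db, max((pi_a[y] for y in pi_a if res(x, y) >= alpha),
                          default=0.0))
              for x, db in pi_b.items())
    return num / sum(pi_b.values())

def sed(pi_a, pi_b, res, alpha):
    """Semantic equivalence degree: mutual semantic inclusion."""
    return min(sid(pi_a, pi_b, res, alpha), sid(pi_b, pi_a, res, alpha))
```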
Query and Data Processing
Classical relational databases suffer from a lack of flexibility in querying: the given selection condition and the contents of the relations are all crisp. A query is flexible if the following conditions are satisfied BIB010 :

• A qualitative distinction between the selected tuples is allowed.
• Imprecise conditions inside queries are introduced when the user cannot define his/her needs in a definite way, or when a prespecified number of responses is desired and therefore a margin is allowed to interpret the query.

Typically, the former case occurs when the queried relational databases contain incomplete information and the query conditions are crisp, and the latter case occurs when the query conditions are imprecise even if the queried relational databases do not contain imperfect information. In BIB001 , a "human-consistent" database querying system based on fuzzy logic with linguistic quantifiers is presented. Using clustering techniques, a fuzzy query processing method is presented in BIB002 . Takahashi BIB003 discusses the theoretical foundation of query languages for fuzzy databases and proposes two fuzzy database query languages: a fuzzy calculus query language and a fuzzy algebra query language. In BIB005 , connections between the Sugeno fuzzy integral and flexible database querying are established. In BIB004 , a relational database language called SQLf for fuzzy querying is presented, in which the selection, join, and projection operations are extended to handle fuzzy conditions. Fuzzy query translation techniques for relational database systems are presented in BIB006 BIB012 , where a fuzzy query is translated, via α-cuts, into a precise query that a classical system can evaluate (see the sketch below); techniques of fuzzy query processing for fuzzy database systems have also been developed. In addition, based on matching strengths of answers in FRDBs, a method for fuzzy query processing is presented in BIB007 , and nested fuzzy SQL queries in FRDBs have also been discussed. Besides query processing in FRDBs, a few studies focus on the operations of relational algebra in FRDBs BIB009 BIB011 . In BIB008 , a type of fuzzy equi-join is defined using fuzzy equality indicators. Updating FRDBs (insertion, deletion, and modification) is investigated in BIB013 .
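As an illustration of fuzzy query translation in the spirit of BIB006 BIB012 , the sketch below turns a fuzzy term represented by a triangular fuzzy number into a crisp SQL range condition via its α-cut. The employee table, the age column, and the membership function are invented for the example; real systems handle much richer conditions.

```python
# A sketch of alpha-cut-based fuzzy query translation in the spirit of
# BIB006/BIB012. Table and column names are invented for illustration.

def alpha_cut_triangular(a, b, c, alpha):
    """Alpha-cut [lo, hi] of the triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def translate_fuzzy_query(column, fuzzy_term, alpha):
    """Translate a fuzzy condition into a crisp SQL range condition."""
    lo, hi = alpha_cut_triangular(*fuzzy_term, alpha)
    return f"SELECT * FROM employee WHERE {column} BETWEEN {lo} AND {hi}"

# "age is about 30", with a user-chosen retrieval threshold of 0.8
print(translate_fuzzy_query("age", (25, 30, 35), 0.8))
# SELECT * FROM employee WHERE age BETWEEN 29.0 AND 31.0
```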
Data Dependencies and Normalizations
Integrity constraints play a critical role in logical database design, and among these constraints, data dependencies are of particular interest. Based on various FRDB models, several attempts have been made to express data dependencies, mainly fuzzy functional dependencies (FFDs) and fuzzy multivalued dependencies (FMVDs). Among the papers focusing on FFDs, two kinds can be distinguished:

• the first focuses on the axiomatization of FFDs BIB006 BIB005 BIB012 BIB007 ;
• the second focuses on lossless join and decomposition BIB004 BIB017 BIB001 , which is the basis for the normalization of fuzzy relational databases BIB008 .

Some papers focus on FMVDs BIB010 BIB009 BIB002 . Finally, some papers focus on both FFDs and FMVDs and present their axiomatization BIB015 BIB014 . Note that fuzzy data dependencies can be applied in data handling: in BIB011 , FFDs are used for redundancy elimination; in BIB013 , FFDs are used for approximate data querying; and in BIB012 BIB016 , FFDs are used for fuzzy data compression.

To solve the problems of update anomalies and data redundancies that may exist in FRDBs, the normalization theory of the classical RDB model must be extended so as to provide theoretical guidelines for FRDB design. By employing equivalence classes from domain partitions, functional dependencies and normal forms for the FRDB model are defined in BIB003 , and the associated normalization issues are then discussed. Based on the notion of FFD, notions such as relation keys and normal forms are generalized in BIB008 ; as a result, q-keys, Fuzzy First Normal Form, q-Fuzzy Second Normal Form, q-Fuzzy Third Normal Form, and q-Fuzzy Boyce-Codd Normal Form are formulated, and dependency-preserving and lossless-join decompositions into q-F3NFs are discussed. Within the framework of similarity-based fuzzy data representation, similarity, conformance of tuples, and the concepts of FFDs and partial FFDs are discussed in BIB018 . On this basis, the fuzzy key notion, transitive closures, and fuzzy normal forms are defined for similarity-based FRDBs, and algorithms for dependency-preserving and lossless-join decompositions of fuzzy relations are given. It is also shown how normalization, dependency preserving, and lossless-join decomposition based on the FFDs of a fuzzy relation are carried out and applied to some real-life applications.
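As a small illustration, the sketch below checks whether a fuzzy functional dependency X →f Y holds over a relation, in the spirit of Raju and Majumdar's definition BIB001 : for every pair of tuples, the fuzzy equality degree on Y must be at least that on X. The relation encoding and the resemblance-based equality measure are illustrative assumptions, not the exact formulation of any single paper.

```python
# A sketch of checking a fuzzy functional dependency X ->f Y in the spirit
# of Raju and Majumdar BIB001. Tuples are dicts from attribute names to
# crisp values; eq() is an assumed resemblance-based equality measure.

from itertools import combinations

def eq(v1, v2, res):
    """Fuzzy equality of two crisp attribute values via resemblance."""
    return 1.0 if v1 == v2 else res(v1, v2)

def ffd_holds(relation, x_attrs, y_attrs, res):
    """Check X ->f Y: equality on a set of attributes is the min over them."""
    for t1, t2 in combinations(relation, 2):
        eq_x = min(eq(t1[a], t2[a], res) for a in x_attrs)
        eq_y = min(eq(t1[a], t2[a], res) for a in y_attrs)
        if eq_y < eq_x:   # Y must be at least as equal as X for every pair
            return False
    return True
```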
Some Basic Fuzzy Object-Oriented Database Models
The FOODB model defined in BIB003 uses fuzzy-set attribute values with certainty factors and provides an SQL-type data manipulation language. The UFO (uncertainty and fuzziness in object-oriented) database model has been proposed to model fuzziness and uncertainty by means of conjunctive fuzzy sets and generalized fuzzy sets, respectively. Because the structure and behavior of an object may be incompletely defined, the instantiation of an object acquires a gradual nature, and partial inheritance, conditional inheritance, and multiple inheritance are permitted in fuzzy hierarchies. Based on the extension of a graph-based object model, a fuzzy object-oriented data model has also been defined, in which the notion of strength, expressed by linguistic qualifiers, can be associated with the instance relationship linking an object with a class; fuzzy classes and fuzzy class hierarchies are thus modeled in the OODB. The definition of graph-based operations to select and browse such a FOODB, which manages both crisp and fuzzy information, is proposed in BIB004 . Based on a similarity relationship, the range of attribute values is used in BIB002 to represent the set of allowed values for an attribute of a given class. The membership degree of an object in a class can then be calculated from the inclusion of the object's actual attribute values in the ranges declared by the class. Weak and strong class hierarchies are defined according to whether the membership of a subclass in its superclass increases or decreases monotonically. Based on possibility theory, vagueness and uncertainty in class hierarchies are represented in BIB001 , where the fuzzy ranges of the subclass attributes define restrictions on those of the superclass attributes; the degree of inclusion of a subclass in its superclass then depends on the inclusion between the fuzzy ranges of their attributes. Also based on possibility distribution theory, major notions in object-oriented databases such as objects, classes, object-class relationships, subclass/superclass, and multiple inheritance are extended under a fuzzy information environment in BIB005 , where a generic model for FOODBs and some operations on it are developed.
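A minimal sketch of the range-inclusion idea of BIB002 follows: the membership of an object in a class is obtained by checking each attribute value against the fuzzy range the class declares for that attribute and aggregating the per-attribute degrees (here by min). The example class, attributes, and degrees are invented.

```python
# A sketch of object-to-class membership via range inclusion, in the spirit
# of BIB002. Class ranges, attribute values, and the min-aggregation are
# illustrative assumptions.

def attribute_fit(value, fuzzy_range):
    """Degree to which a crisp attribute value belongs to the class range."""
    return fuzzy_range.get(value, 0.0)

def class_membership(obj, class_ranges):
    """Aggregate per-attribute fits into an object-to-class membership."""
    return min(attribute_fit(obj[attr], rng)
               for attr, rng in class_ranges.items())

vehicle = {"speed": "fast", "weight": "light"}
sports_car = {"speed": {"fast": 1.0, "medium": 0.4},
              "weight": {"light": 1.0, "medium": 0.6}}
print(class_membership(vehicle, sports_car))  # 1.0
```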
ODMG-Based Fuzzy Object-Oriented Databases
Some efforts have been devoted to establishing a consistent framework for a fuzzy object-oriented model based on the Object Data Management Group (ODMG) object data model standard BIB001 . In BIB002 , an object-oriented database modeling technique based on the concept of a 'level-2 fuzzy set' is presented to obtain a uniform and advantageous representation of both perfect and imperfect 'real world' information, and it is illustrated and discussed how the ODMG data model can be generalized to handle 'real world' data in a more advantageous way.
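As a rough illustration of the level-2 fuzzy set concept used in BIB002 , i.e., a fuzzy set whose elements are themselves fuzzy sets, the sketch below represents an attribute value about which two fuzzy descriptions are plausible to different degrees; the ages and degrees are invented.

```python
# A sketch of a level-2 fuzzy set: a fuzzy set whose elements are themselves
# fuzzy sets. All values are invented for illustration.

# Ordinary (level-1) fuzzy sets describing possible ages
about_30 = {29: 0.8, 30: 1.0, 31: 0.8}
about_40 = {39: 0.8, 40: 1.0, 41: 0.8}

# Level-2 fuzzy set: it is more plausible (0.9) that the person's age is
# "about 30" than "about 40" (0.3).
age = [(about_30, 0.9), (about_40, 0.3)]
```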
Other Fuzzy Extensions of Object-Oriented Databases
Following two different strategies, fuzzy types are added to FOODBs to manage vague structures in BIB005 BIB004 : either a new system incorporates fuzzy types as an intrinsic capability, or a layer implementing fuzzy types is added on top of an existing OODB. It is also shown how the typical classes of an OODB can be used to represent a fuzzy type and how the mechanisms of instantiation and inheritance can be modeled using this new kind of type in an OODB. In BIB007 , operators for complex object comparison in a fuzzy context are developed, including a generalized resemblance degree between fuzzy sets of imprecise objects. In BIB006 BIB008 , fuzzy relationships in object models are investigated. In BIB002 , a fuzzy intelligent architecture based on the uncertain object-oriented data model initially introduced in BIB001 is proposed; classes include fuzzy IF-THEN rules to define knowledge, and possibility theory is used to represent vagueness and uncertainty. In BIB003 , an approach to object-oriented modeling based on fuzzy logic is proposed to formulate imprecise requirements along four dimensions: fuzzy classes, fuzzy rules encapsulated in classes, fuzzy class memberships, and fuzzy associations between classes. The fuzzy rules, i.e., rules with linguistic terms, are used to describe the relationships between attributes.
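To illustrate how a fuzzy rule can be encapsulated in a class in the spirit of BIB002 BIB003 , the sketch below evaluates the rule "IF experience is high AND age is young THEN potential is high" inside an Employee class, interpreting AND as min. The membership functions and their parameters are invented assumptions.

```python
# A sketch of a fuzzy IF-THEN rule encapsulated in a class, in the spirit of
# BIB002/BIB003. Membership functions are illustrative assumptions.

def high_experience(years):
    return min(1.0, max(0.0, (years - 2) / 8.0))   # ramps up from 2 to 10

def young(age):
    return min(1.0, max(0.0, (45 - age) / 20.0))   # ramps down from 25 to 45

class Employee:
    def __init__(self, age, years):
        self.age, self.years = age, years

    def high_potential(self):
        """Fire the fuzzy rule: AND is interpreted as min."""
        return min(high_experience(self.years), young(self.age))

print(Employee(age=30, years=6).high_potential())  # 0.5
```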
Special Fuzzy Object-Oriented Databases
Some special fuzzy object-oriented databases have been developed, e.g., fuzzy deductive object-oriented databases BIB003 BIB001 and fuzzy and probabilistic object bases BIB004 . In addition, fuzzy object-oriented databases have been applied in areas such as geographical information systems BIB002 and multimedia.
Introduction
Most pattern recognition applications are based on either statistical (i.e. vectorial) or structural data structures (i.e. strings, trees, or graphs). Graphs, in contrast to feature vectors, are able to represent both entities and binary relationships that might exist between subparts of these entities. Moreover, graphs can adapt their size and complexity to the size and complexity of the actual pattern to be modelled. Due to their representational power and flexibility, graphs have found widespread application in pattern recognition and related fields. Prominent examples of classes of patterns that can be represented in a more suitable and natural way by means of graphs rather than feature vectors are chemical compounds BIB005 , documents BIB009 , proteins BIB006 , and networks BIB003 (see also an early survey on applications of graphs in pattern recognition).

The availability of a dissimilarity or similarity measure is a basic requirement for pattern recognition and analysis. For graph dissimilarity computation, commonly solved via a particular graph matching algorithm, no standard model has been established to date. For an excellent and exhaustive review of the graph matching methods that have emerged during the last forty years, the reader is referred to BIB004 BIB008 . The present paper is concerned with the graph matching paradigm of graph edit distance BIB001 BIB002 . In fact, the concept of graph edit distance is considered one of the most flexible and versatile graph matching models available. Yet, the major drawback of graph edit distance is its computational complexity, which restricts its applicability to graphs of rather small size. Graph edit distance belongs to the family of Quadratic Assignment Problems (QAPs), which in turn belong to the class of NP-complete problems. That is, an exact and efficient algorithm for the graph edit distance problem cannot be developed unless P = NP.

About ten years ago, an algorithmic framework was introduced that allows the approximate computation of graph edit distance on general graphs in a substantially faster way than traditional methods BIB007 . The basic idea of this approach, termed Bipartite Graph Edit Distance (BP), is to reduce the difficult QAP of graph edit distance computation to a Linear Sum Assignment Problem (LSAP). LSAPs constitute the problem of finding an optimal assignment between two independent sets of entities, and quite an arsenal of efficient (i.e. polynomial) algorithms exists for this task (see [12] for an exhaustive survey on LSAP algorithms). The graph dissimilarity framework BP presented in BIB007 resolves several major issues that appear when graph edit distance is reformulated as an instance of an LSAP. In a first step, the graphs to be matched are subdivided into individual nodes including local structural information. Next, in step 2, an algorithm solving the LSAP is employed in order to find an optimal assignment of the nodes (plus local structures) of both graphs. Finally, in step 3, an approximate graph edit distance, which is globally consistent with the underlying edge structures of both graphs, is derived from the assignment of step 2. The time complexity of this matching framework is cubic with respect to the number of nodes of the involved graphs. Hence, BP is also applicable to larger graphs. Due to this benefit, the underlying methodology has been employed in a great variety of applications.
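To make these three steps more concrete, the following minimal sketch illustrates the reduction of graph edit distance to an LSAP. It is an illustration under simplifying assumptions, not the full BP framework: the cost functions `sub_cost`, `del_cost`, and `ins_cost` are placeholder names, and in the actual framework each node substitution cost would additionally encode the cost of matching the local edge structure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # stands in for "forbidden" assignments in the cost matrix


def bp_edit_distance(g1_nodes, g2_nodes, sub_cost, del_cost, ins_cost):
    """Approximate graph edit distance via a linear sum assignment.

    A minimal sketch of the BP idea: build the (n+m) x (n+m) cost matrix
    of node substitutions, deletions, and insertions, solve the LSAP in
    polynomial time, and sum the costs of the optimal assignment.  In the
    full framework, step 3 would instead derive the cost of the edit path
    implied by this assignment (including the induced edge operations).
    """
    n, m = len(g1_nodes), len(g2_nodes)
    C = np.full((n + m, n + m), BIG)
    for i, u in enumerate(g1_nodes):      # substitutions (top-left n x m block)
        for j, v in enumerate(g2_nodes):
            C[i, j] = sub_cost(u, v)
    for i, u in enumerate(g1_nodes):      # deletions (diagonal of top-right block)
        C[i, m + i] = del_cost(u)
    for j, v in enumerate(g2_nodes):      # insertions (diagonal of bottom-left block)
        C[n + j, j] = ins_cost(v)
    C[n:, m:] = 0.0                       # dummy-to-dummy assignments are free
    row, col = linear_sum_assignment(C)   # optimal node assignment
    return float(C[row, col].sum())
```

The square (n+m) x (n+m) layout is the key design choice: it lets a single assignment simultaneously decide, for every node, whether it is substituted, deleted, or inserted.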
The contribution of the present paper is to provide a first survey of these application fields and of the corresponding methods that actually employ the BP framework.
Applications of the Bipartite Graph Edit Distance (BP)
In the last decade, the original paper (which describes BP for the first time) as well as its extended version BIB001 have been cited more than 360 times. Among these citing papers we observe two main categories. The first category is concerned with methodological extensions of BP. There are, for instance, papers that use a different basic cost model than the one proposed in the original framework BIB002 BIB003 , works that aim at making the approximation faster BIB007 BIB006 , and works that aim at making it more accurate BIB004 BIB005 .

The second category of citing papers is concerned with different applications of the approximate graph matching framework BP. The main focus of the present paper is to review and categorise the papers of this second category. A taxonomy of the application fields and the corresponding papers (reviewed in the following subsections) is given in Fig. 1. In all of these applications, graphs are used to represent real-world (or abstract) objects or patterns, such as images, proteins, or business processes (to mention just a few examples). Eventually, the BP framework is used to measure the (dis)similarity between pairs of graph-based representations.
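Across most of these applications the same recipe recurs: the BP distance serves as a plug-in dissimilarity for standard distance-based methods such as nearest-neighbour classification, clustering, or retrieval. The following standalone sketch (with hypothetical names; `dissimilarity` stands for any graph dissimilarity, e.g. the `bp_edit_distance` sketch above) shows the typical nearest-neighbour usage.

```python
def nn_classify(query_graph, training_set, dissimilarity):
    """1-nearest-neighbour classification on top of a graph dissimilarity.

    `training_set` is a list of (graph, label) pairs; `dissimilarity` is
    any callable returning a non-negative score for a pair of graphs.
    """
    # Return the label of the training graph closest to the query.
    _, label = min(training_set,
                   key=lambda pair: dissimilarity(query_graph, pair[0]))
    return label
```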
Fig. 1. Taxonomy of the application fields of BP and the corresponding papers:
- Image Analysis: BIB004, BIB032, BIB035, BIB011, BIB020, BIB005
- Handwritten Document Analysis: BIB014, BIB021, BIB022, BIB023, BIB033, BIB029, BIB030, BIB001, BIB034
- Biometrics: BIB006, BIB015, BIB024, BIB031, BIB002, BIB007
- Bio- and Chemoinformatics: BIB016, BIB025, BIB003, BIB008, BIB017
- Knowledge and Process Management: BIB009, BIB026, BIB027
- Malware Detection: BIB018, BIB012, BIB010, BIB028
- Other Applications: BIB019, BIB013
A Survey on Applications of Bipartite Graph Edit Distance <s> Image Analysis <s> A graph matching approach is proposed to retrieve envelope images from a large image database. First, the graph representation of an envelop image is generated based on the image segmentation results, in which each node corresponds to one segmented region. The attributes of nodes and edges in the graph are described by characteristics of the envelope image. Second, a minimum weighted bipartite graph matching method is employed to compute the distance between two graphs. Finally, the whole retrieval system including two principal stages is presented, namely, rough matching and fine matching. The experiments on a database of envelope images captured from real-life mail pieces demonstrate that the proposed method achieves promising results. <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Image Analysis <s> We present a novel method for retrieval and classification of 3D building models that is tailored to the specific requirements of architects. In contrast to common approaches our algorithm relies on the interior spatial arrangement of rooms instead of exterior geometric shape. We first represent the internal topological building structure by a Room Connectivity Graph (RCG). To enable fast and efficient retrieval and classification with RCGs, we transform the structured graph representation into a vector-based one by introducing a new concept of subgraph embeddings. We provide comprehensive experiments showing that the introduced subgraph embeddings yield superior performance compared to state-of-the-art graph retrieval approaches. <s> BIB002 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Image Analysis <s> In this paper we propose a shape recognition approach applied to a dataset composed of 512 shoeprints where shapes are strongly occluded. We provide a local adaptation of the HRT (Histogram Radon Transform) descriptor. A shoeprint is decomposed into its connect components and describes locally by the local HRT. Then, following this description, we find the best local matching between the connected components and the similarity between two images is defined as mean of local similarity measures. <s> BIB003 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Image Analysis <s> Petroglyphs can be found on rock panels all over the world. The possibilities of digital photography and more recently various 3D scanning methods opened a new stage for the documentation and analysis of petroglyphs. The existing work on petroglyph shape similarity has largely avoided the questions of articulation, merged petroglyphs and potentially missing parts of petroglyphs. We aim at contributing to close this gap by applying a novel petroglyph shape descriptor based on the skeletal graph. Our contribution is twofold: First, we provide a real-world dataset of petroglyph shapes. Second, we propose a graph-based shape descriptor for petroglyphs. Comprehensive evaluations show, that the combination of the proposed descriptor with existing ones improves the performance in petroglyph shape similarity modeling. <s> BIB004 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Image Analysis <s> Key point correspondence plays an important role in lunar surface image processing. 
Since lunar surface images often contain obvious illumination changes, noisy points and repetitive patterns, traditional appearance based algorithms may fail when local appearance descriptors become less distinctive. In this paper, we introduce a graph matching based algorithm to tackle this problem. First, by incorporating structural information, key point sets in lunar surface images are represented by graphs. Then key point correspondence is formulated as a specific graph matching problem which aims to find a specified number of best assignments, and which is approximately solved in an effective manner. Finally, an outlier assignment elimination method is proposed based on the affine invariance assumption. Simulations on both benchmark datasets and lunar surface images demonstrate the effectiveness of the proposed method. <s> BIB005 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Image Analysis <s> We propose a complete graph-based process for Kite recognition. We propose a geometric hierarchical graph matching solution to the problem of geometric graph matching. We also propose a benchmark of Kite graphs from real images. We experimentally evaluate our approach over both real and synthetic data sets. Kites are huge archaeological structures of stone visible from satellite images. Because of their large number and their wide geographical distribution, automatic recognition of these structures on images is an important step towards understanding these enigmatic remnants. This paper presents a complete identification tool relying on a graph representation of the Kites. As Kites are naturally represented by graphs, graph matching methods are thus the main building blocks in the Kite identification process. However, Kite graphs are disconnected geometric graphs for which traditional graph matching methods are useless. To address this issue, we propose a graph similarity measure adapted for Kite graphs. The proposed approach combines graph invariants with a geometric graph edit distance computation leading to an efficient Kite identification process. We analyze the time complexity of the proposed algorithms and conduct extensive experiments both on real and synthetic Kite graph data sets to attest to the effectiveness of the approach. We also perform a set of experiments on other data sets in order to show that the proposed approach is extensible and quite general. <s> BIB006
Image analysis is often based on measuring the similarity of objects represented by 2D- or 3D-images. In the present scenario, graphs are used to represent these images, while the dissimilarity between pairs of images is measured by BP. In fact, many of the application fields reviewed below can be seen as special cases of image analysis. In the present section, however, we present applications that are not part of any of the following subsections. In BIB001 , for instance, graphs are used to represent envelope images. That is, segmented regions are represented by nodes, while edges are inserted between specific pairs of nodes. The dissimilarities returned by BP are finally used to build a retrieval system. Another image analysis application is presented in BIB005 . In this case lunar surface images are formalised as graphs, where nodes represent SIFT keypoints, while edges represent a Delaunay triangulation of the nodes. The BP distances are eventually used for localisation tasks. In BIB006 graphs are used to represent thinned images of archaeological structures (so-called Kites) in order to find similar structures in large aerial image databases. Finally, in BIB003 the BP framework has been employed for shoe print classification, while in BIB004 the BP algorithm is used for the computation of similarities between petroglyphs. Graphs are also used for 3D-image analysis. In BIB002 , for instance, graphs are used to represent topological building structures by a so-called Room Connectivity Graph. To this end, nodes represent rooms and are labelled by three-dimensional characteristics of the room. The edges are used to represent the connectivity between rooms and are labelled by two-dimensional features (i.e. width and height). BP is then employed in a retrieval scenario.
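Since all of the applications reviewed in this survey build on the same BP approximation, a minimal sketch of its core step may help to make the later sections concrete. BP reduces graph edit distance to a linear sum assignment problem over an (n+m)x(n+m) cost matrix that encodes node substitutions, deletions and insertions, which is then solved with the Hungarian algorithm. The sketch below is an illustrative simplification, not code from any of the surveyed systems: the cost values are our own assumptions, and the local edge costs that the full BP framework folds into the substitution entries are omitted for brevity.

```python
# Minimal sketch of the BP approximation of graph edit distance:
# node assignment via the Hungarian algorithm on a square cost matrix
# that encodes substitutions, deletions and insertions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def bp_ged(g1_nodes, g2_nodes, node_cost, c_del=1.0, c_ins=1.0):
    """Approximate edit distance between two attributed node sets.

    g1_nodes, g2_nodes: lists of node labels/attributes.
    node_cost: function returning the substitution cost of two nodes.
    """
    n, m = len(g1_nodes), len(g2_nodes)
    big = 1e9  # forbids structurally impossible assignments
    C = np.full((n + m, n + m), big)
    for i in range(n):
        for j in range(m):
            C[i, j] = node_cost(g1_nodes[i], g2_nodes[j])  # substitution
    for i in range(n):
        C[i, m + i] = c_del        # deletion of node i of g1
    for j in range(m):
        C[n + j, j] = c_ins        # insertion of node j of g2
    C[n:, m:] = 0.0                # dummy-to-dummy assignments are free
    rows, cols = linear_sum_assignment(C)
    return float(C[rows, cols].sum())

# Toy usage with 2D coordinates as node labels:
euclid = lambda u, v: float(np.linalg.norm(np.asarray(u) - np.asarray(v)))
print(bp_ged([(0, 0), (1, 0)], [(0, 0), (1, 1), (2, 2)], euclid))  # -> 2.0
```

The cubic-time assignment step is what makes BP attractive compared to exact graph edit distance, whose computation is exponential in the number of nodes; later sections reuse this bp_ged() sketch.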
A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> In the context of the NAVIDOMASS project, the problem addressed in this paper concerns the clustering of historical document images. We propose a structure-based framework to handle the ancient ornamental letters data-sets. The contribution, firstly, consists of examining the structural (i.e. graph) representation of the ornamental letters; secondly, the graph matching problem is applied to the resulting graph-based representations. In addition, a comparison between the structural (graphs) and statistical (generic Fourier descriptor) techniques is drawn. <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> The recognition of unconstrained handwriting images is usually based on vectorial representation and statistical classification. Despite their high representational power, graphs are rarely used in this field due to a lack of efficient graph-based recognition methods. Recently, graph similarity features have been proposed to bridge the gap between structural representation and statistical classification by means of vector space embedding. This approach has shown a high performance in terms of accuracy but had shortcomings in terms of computational speed. The time complexity of the Hungarian algorithm that is used to approximate the edit distance between two handwriting graphs is demanding for a real-world scenario. In this paper, we propose a faster graph matching algorithm which is derived from the Hausdorff distance. On the historical Parzival database it is demonstrated that the proposed method achieves a speedup factor of 12.9 without significant loss in recognition accuracy. <s> BIB002 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> The objective of the planned project is to adapt a recent graph matching framework developed by the applicant to the problem of keyword spotting. The overall question to be answered is whether or not graph based representation and especially graph matching techniques can be beneficially employed for keyword spotting. For testing this novel keyword spotting framework, the Miroslav Gospels will be used. Miroslav Gospels is a 362-page illuminated manuscript Gospel Book on parchment with very rich decorations, which was inscribed on UNESCO's Memory of the World Register in recognition of its historical value. It is one of the oldest surviving documents written in Old Church Slavonic. We plan to make the extracted word graphs from the Miroslav Gospels publicly available for further developments in graph based keyword spotting. <s> BIB003 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> Effective information retrieval on handwritten document images has always been a challenging task. In this paper, we propose a novel handwritten word spotting approach based on graph representation. The presented model comprises both topological and morphological signatures of handwriting. Skeleton-based graphs with Shape Context labelled vertices are established for connected components. Each word image is represented as a sequence of graphs. In order to be robust to handwriting variations, an exhaustive merging process based on the DTW alignment result is introduced in the similarity measure between word images.
With respect to the computational complexity, an approximate graph edit distance approach using bipartite matching is employed for graph matching. The experiments on the George Washington dataset and the marriage records from the Barcelona Cathedral dataset demonstrate that the proposed approach outperforms the state-of-the-art structural methods. <s> BIB004 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> Effective information retrieval on handwritten document images has always been a challenging task, especially on historical documents. In the paper, we propose a coarse-to-fine handwritten word spotting approach based on graph representation. The presented model comprises both the topological and morphological signatures of the handwriting. Skeleton-based graphs with Shape Context labelled vertices are established for connected components. Each word image is represented as a sequence of graphs. Aiming at developing a practical and efficient word spotting approach for large-scale historical handwritten documents, a fast and coarse comparison is first applied to prune the regions that are not similar to the query based on the graph embedding methodology. Afterwards, the query and regions of interest are compared by graph edit distance based on the Dynamic Time Warping alignment. The proposed approach is evaluated on a public dataset containing 50 pages of historical marriage license records. The results show that the proposed approach achieves a compromise between efficiency and accuracy. <s> BIB005 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> We are currently working on the concept of an interactive word retrieval system for ancient document collection navigation, based on query composition for non-expert users. We have introduced a new notion: invariants, which are writing pieces automatically extracted from the old document collection. The invariants can be used in the query making process, where the user selects and composes appropriate invariants to make the query. The invariants can also be used as a descriptor to characterize word images. We introduced our unsupervised method for extracting invariants in our earlier paper. In this paper, we present a new structural word spotting system using a graph representation based on invariants as a descriptor. Through experiments, we conclude that our proposed system can adapt to different types of homogeneous alphabetic language documents (regardless of language/script, antiquity, handwritten or printed). <s> BIB006 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> This paper presents a graph-based word spotting approach for handwritten documents. Contrary to most word spotting techniques, which use statistical representations, we propose a structural representation suitable to be robust to the inherent deformations of handwriting. Attributed graphs are constructed using a part-based approach. Graphemes extracted from shape convexities are used as stable units of handwriting, and are associated to graph nodes. Then, spatial relations between them determine graph edges. Spotting is defined in terms of an error-tolerant graph matching using a bipartite graph matching algorithm. To make the method usable in large datasets, a graph indexing approach that makes use of binary embeddings is used as preprocessing. Historical documents are used as the experimental framework.
The approach is comparable to statistical ones in terms of time and memory requirements, especially when dealing with large document collections. <s> BIB007 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> The number of handwritten documents that are digitally available is rapidly increasing. However, we observe a certain lack of accessibility to these documents, especially with respect to searching and browsing. This paper aims at closing this gap by means of a novel method for keyword spotting in ancient handwritten documents. The proposed system relies on a keypoint-based graph representation for individual words. Keypoints are characteristic points in a word image that are represented by nodes, while edges are employed to represent strokes between two keypoints. The basic task of keyword spotting is then conducted by a recent approximation algorithm for graph edit distance. The novel framework for graph-based keyword spotting is tested on the George Washington dataset, on which a state-of-the-art reference system is clearly outperformed. <s> BIB008 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Handwritten Document Analysis <s> Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). S+SSPR 2016: Structural, Syntactic, and Statistical Pattern Recognition, pp. 553-563. <s> BIB009
In recent years, handwritten (historical) documents have become increasingly available in digital form. However, the accessibility of these digitised documents with respect to browsing and searching is often limited. A first approach to bridge this gap is presented in BIB002 , which aims at the recognition of unconstrained handwriting images. In this approach, nodes represent keypoints on the skeletonised word images, while edges represent strokes between keypoints. The BP framework (which has been extended in this particular case) is eventually used to define graph similarity features. Keyword spotting allows the retrieval of arbitrary keywords in handwritten documents. In the case of graph-based keyword spotting, graphs commonly represent (parts of) segmented word images. The nodes of these graphs are, for instance, based on keypoints BIB003 BIB004 BIB005 BIB008 or prototype strokes BIB006 BIB007 , while edges are commonly used to represent the connectivity between pairs of keypoints or prototype strokes. The actual spotting of certain words in a document is then based on dissimilarity computations between the query graph and document graphs using BP. The BP dissimilarity framework has not only been applied for spotting keywords in historical documents, but also for clustering ancient ornamental initials (so-called lettrines) BIB001 . In this particular case, each lettrine is represented by a Region Adjacency Graph (RAG), where nodes are used to represent homogeneous regions. Finally, edges are inserted based on the adjacency of regions. In BIB009 a graph database for ancient handwritten documents is proposed and evaluated by means of a word classification experiment using BP. In particular, on the basis of segmented word images, six different graph representation formalisms are proposed and compared with each other using the BP dissimilarity model.
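To make the keypoint-based word graphs described above more tangible, the following sketch derives a graph from a binarised word image: skeleton pixels with exactly one neighbour (endpoints) or more than two neighbours (junctions) become nodes, and edges are added by walking along the skeleton between two such keypoints. This is a generic reconstruction of the idea under our own assumptions (an 8-neighbourhood rule, scikit-image and NetworkX as libraries), not the code of any of the surveyed systems.

```python
# Illustrative sketch: from a binarised word image to a keypoint graph
# (nodes = skeleton endpoints/junctions, edges = strokes between them).
import networkx as nx
from skimage.morphology import skeletonize

def word_graph(binary_img):
    """binary_img: 2D array, nonzero on ink pixels."""
    skel = skeletonize(binary_img > 0)
    H, W = skel.shape

    def nbrs(y, x):  # 8-connected skeleton neighbours of a pixel
        return [(yy, xx)
                for yy in range(max(0, y - 1), min(H, y + 2))
                for xx in range(max(0, x - 1), min(W, x + 2))
                if (yy, xx) != (y, x) and skel[yy, xx]]

    # keypoints: skeleton pixels with 1 neighbour (endpoint) or >2 (junction)
    keypoints = {(y, x) for y in range(H) for x in range(W)
                 if skel[y, x] and len(nbrs(y, x)) != 2}

    g = nx.Graph()
    g.add_nodes_from(keypoints)
    # trace strokes: walk from each keypoint through degree-2 pixels
    for start in keypoints:
        for first in nbrs(*start):
            prev, cur = start, first
            while cur not in keypoints:
                step = [p for p in nbrs(*cur) if p != prev]
                if not step:
                    break  # dead end without reaching a keypoint
                prev, cur = cur, step[0]
            if cur in keypoints and cur != start:
                g.add_edge(start, cur)
    return g
```

Pairs of such word graphs could then be compared with the bp_ged() sketch from the previous section, using the keypoint coordinates as node labels.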
A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> In this paper, we present a graph-based fingerprint classification algorithm that deals with fingerprints collected from touch scanners. A relational graph, which reflects the distribution of ridge directions in an orientation field, is constructed with additional emphasis on the distribution of the ridges inside the core area that is defined to minimize external influences. For classification, a general edit distance scheme is employed to measure the similarity between the constructed graph and pre-trained graph models. Experimental results using a database in FVC2004 show that the proposed algorithm has higher classification accuracy than other structural approaches. <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> In the context of the NAVIDOMASS project, the problem addressed in this paper concerns the clustering of historical document images. We propose a structure-based framework to handle the ancient ornamental letters data-sets. The contribution, firstly, consists of examining the structural (i.e. graph) representation of the ornamental letters; secondly, the graph matching problem is applied to the resulting graph-based representations. In addition, a comparison between the structural (graphs) and statistical (generic Fourier descriptor) techniques is drawn. <s> BIB002 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> We represent the retina vessel pattern as a spatial relational graph, and match features using error-correcting graph matching. We study the distinctiveness of the nodes (branching and crossing points) compared with that of the edges and other substructures (nodes of degree k, paths of length k). On a training set from the VARIA database, we show that as well as nodes, three other types of graph sub-structure completely or almost completely separate genuine from imposter comparisons. We show that combining nodes and edges can improve the separation distance. We identify two retina graph statistics, the edge-to-node ratio and the variance of the degree distribution, that have low correlation with the node match score. <s> BIB003 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> This paper proposes a novel approach to on-line signature verification. Firstly, on-line signatures are partitioned into a series of segments, which are then represented by graphs. Four segmentation methods are taken into account. Secondly, graph matching techniques are adopted to compute the edit distance between corresponding graphs, which measures their similarity. Finally, having been able to compare two signatures, limited genuine signatures are used to train user-dependent classifiers for each user. Experiments are conducted to validate the effectiveness of the proposed method and promising results are achieved. <s> BIB004 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> This paper presents an automatic retina verification framework based on the biometric graph matching (BGM) algorithm. The retinal vasculature is extracted using a family of matched filters in the frequency domain and morphological operators. Then, retinal templates are defined as formal spatial graphs derived from the retinal vasculature. The BGM algorithm, a noisy graph matching algorithm, robust to translation, non-linear distortion, and small rotations, is used to compare retinal templates.
The BGM algorithm uses graph topology to define three distance measures between a pair of graphs, two of which are new. A support vector machine (SVM) classifier is used to distinguish between genuine and imposter comparisons. Using single as well as multiple graph measures, the classifier achieves complete separation on a training set of images from the VARIA database (60% of the data), equaling the state-of-the-art for retina verification. Because the available data set is small, kernel density estimation (KDE) of the genuine and imposter score distributions of the training set is used to measure the performance of the BGM algorithm. In the one-dimensional case, the KDE model is validated with the testing set. An EER of 0 on testing shows that the KDE model is a good fit for the empirical distribution. For the multiple graph measures, a novel combination of the SVM boundary and the KDE model is used to obtain a fair comparison with the KDE model for the single measure. A clear benefit in using multiple graph measures over a single measure to distinguish genuine and imposter comparisons is demonstrated by a drop in theoretical error of between 60% and more than two orders of magnitude. <s> BIB005 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> This study proposes an automatic dorsal hand vein verification system using a novel algorithm called biometric graph matching (BGM). The dorsal hand vein image is segmented using the K-means technique and the region of interest is extracted based on the morphological analysis operators and normalised using adaptive histogram equalisation. Veins are extracted using a maximum curvature algorithm. The locations and vascular connections between crossovers, bifurcations and terminations in a hand vein pattern define a hand vein graph. The matching performance of BGM for hand vein graphs is tested with two cost functions and compared with the matching performance of two standard point pattern matching algorithms, iterative closest point (ICP) and modified Hausdorff distance. Experiments are conducted on two public databases captured using far infrared and near infrared (NIR) cameras. BGM's matching performance is competitive with state-of-the-art algorithms on the databases despite using small and concise templates. For both databases, BGM performed at least as well as ICP. For the small-sized graphs from the NIR database, BGM significantly outperformed point pattern matching. The size of the common subgraph of a pair of graphs is the most significant discriminating measure between genuine and imposter comparisons. <s> BIB006 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Biometrics <s> We introduce the Palm Vein Graph, a spatial graph representation of the palm vasculature, for use as biometric identifiers. The palm vein image captured from an infrared camera undergoes several image processing steps to be represented as a graph. After image enhancement and binarisation, the palm vein features are extracted from the skeleton using a novel two-stage spur removal technique. The location of the features and the connections between them are used to define a Palm Vein Graph. Palm vein graphs are compared using the Biometric Graph Matching (BGM) Algorithm. We propose a graph registration algorithm that incorporates the length of the edges between graph vertices to improve the registration process.
We introduce a technique called Graph Trimming that shrinks the compared graphs to achieve faster graph matching and improved performance. We introduce 10 graph topology-based measures for comparing palm vein graphs. Experiments are conducted on a public palm vein database for full and trimmed graphs. For the full graphs, one of the introduced measures, an edge-based similarity, gives a definite improvement in matching accuracies over other published results on the same database. Trimming graphs improves matching performance markedly, especially when the compared graphs had only a small common overlap area due to displacement. For the full graphs, when the edge-based measure was combined with one of three other topological features, we demonstrate an improvement in matching accuracy. <s> BIB007
Biometrics are often used to verify or identify an individual based on certain biometric characteristics (e.g. iris, fingerprint, or signature). In BIB003 BIB005 , retina vessels are used as a biometric trait for user verification. Formally, nodes are used to represent keypoints in skeletonised vessel images, while edges represent the vessels between selected keypoints. Finally, BP is used to match the retina vessel graphs BIB003 or to derive different graph similarity measures for building a multiple classifier system BIB005 . In BIB006 BIB007 a very similar approach is applied to palm veins rather than retina vessels. Moreover, graphs are also used for fingerprint identification as introduced in BIB001 . In particular, fingerprint images are segmented into core areas (i.e. areas with the same ridge direction), which are in turn represented by nodes, while edges are inserted between adjacent areas. The resulting fingerprint graphs are then classified using the distances derived from the BP framework. In recent years, a trend towards high coverage of video surveillance can be observed. Thus, person re-identification over several camera scenes has evolved into a crucial task. A graph-based approach has also been presented for this task, in which segmented camera images are represented by means of a RAG BIB002 . The BP framework is then used in conjunction with a Laplacian kernel in order to re-identify persons. Last but not least, BP is also used for on-line signature verification BIB004 . First, the signatures are divided into segments which are in turn represented by graphs. That is, nodes represent the sample points of a segment, while edges are inserted between specific pairs of nodes. Finally, two signatures are compared with each other by measuring a sum of BP dissimilarities between pairs of graphs.
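As an illustration of the segment-wise signature comparison described last, the following sketch sums BP dissimilarities over corresponding segments and thresholds the mean distance to a set of reference signatures. It reuses the bp_ged() sketch from above; the one-to-one segment pairing via zip and the fixed decision threshold are our own simplifying assumptions, not details taken from BIB004 .

```python
# Illustrative sketch of segment-wise signature verification.
def signature_distance(sig_a, sig_b, node_cost):
    """sig_a, sig_b: lists of segment graphs (each a list of node labels)."""
    return sum(bp_ged(ga, gb, node_cost) for ga, gb in zip(sig_a, sig_b))

def verify(query_sig, reference_sigs, node_cost, threshold):
    """Accept the query if its mean distance to the references is small."""
    dists = [signature_distance(query_sig, r, node_cost) for r in reference_sigs]
    return sum(dists) / len(dists) <= threshold
```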
A Survey on Applications of Bipartite Graph Edit Distance <s> Bio- and Chemoinformatics <s> In this paper we present a quantitative comparison between two approaches, Graph Kernels and Symbolic Learning, within a classification scheme. The experimental case study is the predictive toxicology evaluation, that is, the inference of the toxic characteristics of chemical compounds from their structure. The results demonstrate that both approaches are comparable in terms of accuracy, but present pros and cons that are discussed in the last part of the paper. <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Bio- and Chemoinformatics <s> Chemoinformatics is a well established research field concerned with the discovery of molecular properties through informational techniques. The computer science research fields mainly concerned with chemoinformatics are machine learning and graph theory. From this point of view, graph kernels provide a nice framework combining machine learning techniques with graph theory. Such kernels prove their efficiency on several chemoinformatics problems. This paper presents two new graph kernels applied to regression and classification problems within the chemoinformatics field. The first kernel is based on the notion of edit distance while the second is based on subtree enumeration. Several experiments show the complementarity of both approaches. <s> BIB002 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Bio- and Chemoinformatics <s> Cancer causes deviations in the distribution of cells, leading to changes in the biological structures that they form. Correct localization and characterization of these structures are crucial for accurate cancer diagnosis and grading. In this paper, we introduce an effective hybrid model that employs both structural and statistical pattern recognition techniques to locate and characterize the biological structures in a tissue image for tissue quantification. To this end, this hybrid model defines an attributed graph for a tissue image and a set of query graphs as a reference to the normal biological structure. It then locates key regions that are most similar to a normal biological structure by searching the query graphs over the entire tissue graph. Unlike conventional approaches, this hybrid model quantifies the located key regions with two different types of features extracted using structural and statistical techniques. The first type includes embedding of graph edit distances to the query graphs whereas the second one comprises textural features of the key regions. Working with colon tissue images, our experiments demonstrate that the proposed hybrid model leads to higher classification accuracies, compared against the conventional approaches that use only statistical techniques for tissue quantification. <s> BIB003 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Bio- and Chemoinformatics <s> Graphs can be used to represent a variety of information, from molecular structures to biological pathways to computational workflows. With a growing volume of data represented as graphs, the problem of understanding and analyzing the variations in a collection of graphs is of increasing importance. We present an algorithm to compute a single summary graph that efficiently encodes an entire collection of graphs by finding and merging similar nodes and edges.
Instead of only merging nodes and edges that are exactly the same, we use domain-specific comparison functions to collapse similar nodes and edges, which allows us to generate more compact representations of the collection. In addition, we have developed methods that allow users to interactively control the display of these summary graphs. These interactions include the ability to highlight individual graphs in the summary, control the succinctness of the summary, and explicitly define when specific nodes should or should not be merged. We show that our approach to generating and interacting with graph summaries leads to a better understanding of a graph collection by allowing users to more easily identify common substructures and key differences between graphs. <s> BIB004 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Bio- and Chemoinformatics <s> We explore the potential of the use of blood vessels as anatomical landmarks for developing image registration methods in colonoscopy images. An unequivocal representation of blood vessels could be used to guide follow-up methods to track lesions over different interventions. We propose a graph-based representation to characterize network structures, such as blood vessels, based on the use of intersections and endpoints. We present a study consisting of the assessment of the minimal performance a keypoint detector should achieve so that the structure can still be recognized. Experimental results prove that, even with a loss of 25% of the keypoints, the descriptive power of the associated graphs to the vessel pattern is still high enough to recognize blood vessels. <s> BIB005
Bio- and chemoinformatics combine approaches and techniques from a broad field to analyse and interpret biological structures (e.g. DNA or protein sequences) and chemical structures (e.g. molecules), respectively. An important application in the field of bioinformatics is the analysis of deviations in biological structures to detect cancer. In BIB003 , for instance, graphs are used to represent tissue images of biological cells. In this case nodes are used to represent tissue components, while their spatial relationships are represented by edges. Subsequently, the BP framework is used to classify graphs representing normal, low-grade and high-grade cancerous tissue images. In BIB005 , a similar approach is introduced to detect irregularities in blood vessels rather than biological cells. Chemoinformatics has become a well established field of research. It is mainly concerned with the prediction of molecular properties by means of informational techniques. The assumption that two similar molecules should have similar activities and properties is one of the major principles in this particular field. Clearly, molecules can be readily described by labelled graphs, where atoms are represented by nodes, while bonds between atoms (e.g. single, double, triple, or aromatic) are represented by edges. In BIB001 BIB002 the approximation of graph edit distance by means of BP is used to build a novel graph kernel for activity prediction of molecular compounds. Various graph embedding methods and graph kernels, which are in part built upon the BP framework, have also been evaluated in diverse chemoinformatics applications. Finally, in BIB004 an algorithm to compute single summary graphs from a collection of molecule graphs has been proposed. The formulation of the cost of a matching actually used in this methodology is based on BP.
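As a concrete example of the molecule graphs just described, the sketch below defines a label-dependent node substitution cost over atom symbols and plugs it into the bp_ged() sketch from above; the final line turns the distance into a simple kernel value, as edit-distance-based graph kernels typically do. All cost values are illustrative assumptions and do not reproduce the cost models of BIB001 or BIB002 , which additionally account for bond (edge) labels.

```python
# Illustrative sketch: BP on molecule graphs with atom-symbol node labels.
import math

def atom_cost(a, b):
    return 0.0 if a == b else 2.0   # identical atoms match for free

mol_a = ["C", "C", "O", "H"]        # toy fragment of an alcohol
mol_b = ["C", "C", "N", "H"]        # toy amine analogue
d = bp_ged(mol_a, mol_b, atom_cost)

# A trivial edit-distance-based kernel value for use in, e.g., an SVM:
k = math.exp(-d)                    # similarity decays with edit distance
```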
A Survey on Applications of Bipartite Graph Edit Distance <s> Knowledge and Process Management <s> Existing techniques for Web service discovery focus mainly on matching functional parameters of atomic services, such as inputs and outputs. However, one of the main advantages of Web services is that they are often composed into more complex processes to achieve a given goal. Applying such techniques in these cases ignores the workflow structure of the composite process, and therefore may produce matches that are not very accurate. To overcome this limitation, we propose in this paper a graph-based method for matching composite services that are semantically described as OWL-S processes. We propose a graph representation of composite OWL-S processes and we introduce a matching algorithm that performs comparisons not only at the level of individual components but also at the structural level, taking into consideration the control flow among the atomic components. We also report preliminary results of our experimental evaluation. <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Knowledge and Process Management <s> We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks. <s> BIB002 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Knowledge and Process Management <s> In this paper we address the problem of predicting SPARQL query performance. We use machine learning techniques to learn SPARQL query performance from previously executed queries. Traditional approaches for estimating SPARQL query cost are based on statistics about the underlying data. However, in many use-cases involving querying Linked Data, statistics about the underlying data are often missing. Our approach does not require any statistics about the underlying RDF data, which makes it ideal for the Linked Data scenario. We show how to model SPARQL queries as feature vectors, and use k-nearest neighbors regression and Support Vector Machine with the nu-SVR kernel to accurately predict SPARQL query execution time. <s> BIB003
In the last decades, a trend towards the digitalisation of business models can be observed throughout most industries. Knowledge and process management ensures a thorough information flow, which is needed to manage both physical and intellectual resources. Nowadays, business processes are often supported (or completely created) by means of web services. Thus, the re-discoverability of composite web services (described by means of an OWL-S process) is of high relevance and is addressed in BIB001 . To this end, a graph is used to represent a composite process. Nodes represent process states and atomic services, while directed edges are used to represent the control flow. Following a similar principle, business (sub-)processes rather than web services have also been retrieved. In particular, business process activities represent nodes, while directed edges are used to represent the business process flow. Finally, the BP framework is used to find similarities between business (sub-)processes. Another application in this field is presented in BIB002 , where semantically enriched documents (so-called Resource Description Framework (RDF) ontologies) are represented by graphs. To this end, document key concepts (e.g. db:Bob Dylan, db:Folk Music) are represented by nodes, while directed edges are used to represent semantic relations (e.g. dbp:genre) and are labelled by their importance. Finally, the similarity of documents is computed by an adapted BP matching framework. Based on (similar) RDF ontologies, an approach to estimate the execution time of SPARQL (the RDF query language) queries is presented in BIB003 . In this scenario the BP distances of an unknown query to a set of training queries are used as query features.
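To illustrate how BP distances can serve as query features, the sketch below predicts the execution time of an unseen query as the mean runtime of its k nearest training queries under the bp_ged() sketch from above. BIB003 indeed uses k-nearest neighbours regression, but the graph encoding of queries, the choice of k and the plain averaging are our own assumptions.

```python
# Illustrative sketch: k-NN runtime prediction from BP distances.
def predict_runtime(query_graph, training, node_cost, k=3):
    """training: list of (graph, observed_runtime_in_seconds) pairs."""
    dists = sorted(
        (bp_ged(query_graph, g, node_cost), t) for g, t in training
    )
    nearest = dists[:k]
    return sum(t for _, t in nearest) / len(nearest)  # mean of k nearest
```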
A Survey on Applications of Bipartite Graph Edit Distance <s> Malware Detection <s> The number of suspicious binary executables submitted to Anti-Virus (AV) companies is in the order of tens of thousands per day. Current hash-based signature methods are easy to deceive and are inefficient for identifying known malware that have undergone minor changes. Examining malware executables using their call graph view is a suitable approach for overcoming the weaknesses of hash-based signatures. Unfortunately, many operations on graphs are of high computational complexity. One of these is the Graph Edit Distance (GED) between pairs of graphs, which seems a natural choice for static comparison of malware. We demonstrate how Simulated Annealing can be used to approximate the graph edit distance of call graphs, while outperforming previous approaches both in execution time and solution quality. Additionally, we experiment with opcode mnemonic vectors to reduce the problem size and examine how Simulated Annealing is affected. <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Malware Detection <s> Problem statement: A malware is a program that has malicious intent. Nowadays, malware authors apply several sophisticated techniques such as packing and obfuscation to avoid malware detection. That makes zero-day attacks and false positives the most challenging problems in the malware detection field. Approach: In this study, the static and dynamic analysis techniques that are used in malware detection are surveyed. Static analysis techniques, dynamic analysis techniques and their combination including Signature-Based and Behaviour-Based techniques are discussed. Results: In addition, a new malware detection framework is proposed. Conclusion: The proposed framework combines Signature-Based with Behaviour-Based techniques using an API graph system. The goal of the proposed framework is to improve accuracy and scan process time for malware detection. <s> BIB002 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Malware Detection <s> As the volume of malware inexorably rises, comparison of binary code is of increasing importance to security analysts as a method of automatically classifying new malware samples; purportedly new examples of malware are frequently a simple evolution of existing code, whose differences stem only from a need to avoid detection. This paper presents a polynomial algorithm for calculating the differences between two binaries, obtained by fusing the well-known BinDiff algorithm with the Hungarian algorithm for bipartite graph matching. This significantly improves the matching accuracy. Additionally, a meaningful metric of similarity is calculated, based on graph edit distance, from which an informed comparison of the binaries can be made. The accuracy of this method over the standard approach is demonstrated. <s> BIB003 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Malware Detection <s> The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion.
In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2%) and an acceptable false positive rate (5.15%) for a vetting purpose. <s> BIB004
Anti-virus companies receive huge amounts of samples of potentially harmful executables. This growing amount of data makes robust and automatic detection of malware necessary. The differentiation between malicious and benign binary executables is another field where the BP framework has been extensively used. In BIB003 BIB002 BIB001 , for instance, malware detection based on comparisons of call graphs has been proposed. In particular, the authors propose to represent malware samples as call graphs such that certain variations of the malware can be generalised. This approach enables the detection of structural similarities between samples in a robust way. For pairwise comparisons of these call graphs the approximation of BP is employed. In BIB004 a similar approach has been pursued for the detection of malware by using weighted contextual API dependency graphs in conjunction with an extended version of BP for graph comparison. Finally, in BIB003 BP has been employed for the development of a polynomial time algorithm for calculating the differences between two binaries.
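A minimal sketch of such call-graph based triage is given below: a sample is flagged as suspicious if its size-normalised BP distance to any known malware call graph falls below a threshold. The normalisation and the threshold value are illustrative assumptions, not parameters from the surveyed systems; bp_ged() is reused from the sketch above, with function names serving as node labels.

```python
# Illustrative sketch of call-graph based malware triage with BP distances.
def normalised_distance(g1, g2, node_cost):
    d = bp_ged(g1, g2, node_cost)
    return d / max(len(g1) + len(g2), 1)   # scale by total graph size

def is_suspicious(sample_graph, malware_graphs, node_cost, threshold=0.25):
    """Flag the sample if it is close to any known malware call graph."""
    return any(
        normalised_distance(sample_graph, m, node_cost) < threshold
        for m in malware_graphs
    )
```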
A Survey on Applications of Bipartite Graph Edit Distance <s> Other Applications <s> The focus of this thesis is the exploration of graph-based similarity in the context of natural language processing. The work is motivated by a need for richer representations of text. A graph edit distance algorithm was implemented that calculates the difference between graphs. Sentences were represented by means of dependency graphs, which consist of words connected by dependencies. A dependency graph captures the syntactic structure of a sentence. The graph-based similarity approach was applied to the problem of detecting plagiarism, and was compared against state-of-the-art systems. The key advantages of graph-based textual representations are mainly word order indifference and the ability to capture similarity between words, based on the sentence structure. The approach was compared against contributions made to the PAN plagiarism detection challenge at the CLEF 2011 conference, and would have achieved a 5th place out of 10 contestants. The evaluation results suggest that the approach can be applicable to the task of detecting plagiarism, but requires some fine-tuning of input parameters. The evaluation results demonstrated that dependency graphs are best represented by directed edges. The graph edit distance algorithm scored best with a combination of node and edge label matching. Different edit weights were applied, which increased performance. Keywords: Graph Edit Distance, Natural Language Processing, Dependency Graphs, Plagiarism Detection <s> BIB001 </s> A Survey on Applications of Bipartite Graph Edit Distance <s> Other Applications <s> Building the behaviour for non-player characters in a game is a complex collaborative task among AI designers and programmers. In this paper we present a visual authoring tool for game designers that supports behaviour reuse. We describe a visual editor, capable of storing, indexing, retrieving and reusing behaviours previously designed by AI programmers. One of the most notable features of our editor is its capability for sketch-based retrieval: searching in a repository for behaviours that are similar to the one the user is drawing, and making suggestions about how to complete it. As this process relies on graph behaviour comparison, in this paper, we describe different algorithms for graph comparison, and demonstrate, through empirical evaluation in a particular test domain, that we can provide structure-based similarity for graphs that preserves behaviour similarity and can be computed at reasonable cost. <s> BIB002
A further application where the BP matching algorithm has been applied is, for instance, the retrieval of stories (for storytelling). In this scenario, nodes represent goals and actions, while edges represent time and order. Thus, similar stories can be retrieved by means of the BP framework. In BIB002 a similar approach is introduced to retrieve sketches used to define the behaviour of non-player characters in computer games. Finally, the BP framework is also used to detect plagiarism. In particular, BP has been used to detect plagiarism both in Haskell programs and in textual documents BIB001 .
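The following sketch hints at the dependency-graph comparison used for plagiarism detection in BIB001 : sentences are reduced to their word nodes and compared with the bp_ged() sketch from above. This deliberately ignores the dependency edges and edge labels that the full approach exploits; the toy encoding and the normalisation are our own assumptions.

```python
# Illustrative sketch of dependency-graph based sentence comparison.
def sentence_nodes(dependencies):
    """dependencies: list of (word, head_word, relation) triples.
    Nodes are encoded here simply as their word labels."""
    return [w for w, _, _ in dependencies]

def word_cost(a, b):
    return 0.0 if a == b else 1.0

def similarity(deps_a, deps_b):
    g1, g2 = sentence_nodes(deps_a), sentence_nodes(deps_b)
    d = bp_ged(g1, g2, word_cost)
    # simple normalised similarity: 1.0 for identical node sets
    return 1.0 - d / max(len(g1) + len(g2), 1)
```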
A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> In this paper, we present an Intrusion Detection System designed for wireless sensor networks and show how it can be configured to detect Sinkhole attacks. A Sinkhole attack forms a serious threat to sensor networks. We study this attack in depth by presenting how it can be launched in realistic networks that use the MintRoute protocol of TinyOS. MintRoute is the most widely used routing protocol in sensor network deployments, using the link quality metric to build the corresponding routing tree. Having implemented this attack in TinyOS, we embed the appropriate rules in our IDS system that will enable it to successfully detect the intruder node. We demonstrate this in our own sensor network deployment and we also present simulation results to confirm the effectiveness and accuracy of the algorithm in the general case of random topologies. <s> BIB001 </s> A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> In a wireless sensor network, multiple nodes would send sensor readings to a base station for further processing. It is known that such a many-to-one communication is highly vulnerable to a sinkhole attack, where an intruder attracts surrounding nodes with unfaithful routing information, and then performs selective forwarding or alters the data passing through it. A sinkhole attack forms a serious threat to sensor networks, particularly considering that the sensor nodes are often deployed in open areas and of weak computation and battery power. In this paper, we present a novel algorithm for detecting the intruder in a sinkhole attack. The algorithm first finds a list of suspected nodes through checking data consistency, and then effectively identifies the intruder in the list through analyzing the network flow information. The algorithm is also robust to deal with multiple malicious nodes that cooperatively hide the real intruder. We have evaluated the performance of the proposed algorithm through both numerical analysis and simulations, which confirmed the effectiveness and accuracy of the algorithm. Our results also suggest that its communication and computation overheads are reasonably low for wireless sensor networks. <s> BIB002 </s> A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> In wireless sensor networks, an adversary may deploy malicious nodes into the network and launch various attacks. These nodes are collectively called compromised nodes. In this paper, we first analyze the unique features of wireless sensor networks and discuss the challenges of compromised node detection. Then we propose a novel algorithm for detecting sinkhole attacks for large-scale wireless sensor networks. We formulate the detection problem as a change-point detection problem. Specifically, we monitor the CPU usage of each sensor node and analyze the consistency of the CPU usage. Thus, the proposed algorithm is able to differentiate between the malicious and the legitimate nodes. Extensive simulations have been conducted to verify the effectiveness of the algorithm. <s> BIB003 </s> A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> Sinkhole attacks are particularly harmful against routing in wireless sensor networks, in which the intruder attempts to lure all traffic towards itself with false routing information.
It causes serious damage to network load balancing and also provides a platform for other types of attacks. It is therefore very important to prevent and detect this type of attack. In this paper, we present a secure routing algorithm against sinkhole attacks based on mobile agents, designed for mobile wireless sensor networks (MWSN), and show how it can be configured to avoid sinkhole attacks. <s> BIB004 </s> A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> Wireless sensor networks are specific ad hoc networks. They are characterized by their limited computing power and energy constraints. This paper proposes a study of security in this kind of network. We show the specificities and vulnerabilities of wireless sensor networks. We present a list of attacks which can be found in these particular networks, and how they use their vulnerabilities. Finally, we discuss different solutions proposed by the scientific community to secure wireless sensor networks. <s> BIB005 </s> A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> Wireless sensor networks are collections of a large number of sensor nodes. The sensor nodes are characterized by limited energy, computation and transmission power. Each node in the network coordinates with every other node in forwarding their packets to reach the destination. Since these nodes operate in a physically insecure environment, they are vulnerable to different types of attacks such as selective forwarding and sinkhole. These attacks can inject malicious packets by compromising the node. Geographical routing protocols of wireless sensor networks have been developed without considering the security aspects against these attacks. In this paper, a secure routing protocol named secured greedy perimeter stateless routing protocol (S-GPSR) is proposed for mobile sensor networks by incorporating a trust-based mechanism in the existing greedy perimeter stateless routing protocol (GPSR). Simulation results prove that S-GPSR outperforms GPSR by reducing the overhead and improving the delivery ratio of the networks. <s> BIB006 </s> A Survey on Detection of Sinkhole Attack in Wireless Sensor Network <s> INTRODUCTION <s> A Wireless Sensor Network (WSN) consists of a large number of low-cost, low-power sensor nodes. The nature of wireless sensor networks makes them very attractive to attackers. One of the most popular and serious attacks in wireless sensor networks is the sinkhole attack, and most existing protocols that defend against this attack use cryptographic methods with keys. In a sinkhole attack, a sensor node will have a lot of false neighbors. A wireless sensor network has a dynamic topology, intermittent connectivity, and resource-constrained device nodes. Researchers over the past years have encouraged the use of mobile agents to overcome these challenges. The proposed scheme is to defend against sinkhole attacks using mobile agents. A mobile agent is a self-controlling program segment. Mobile agents navigate from node to node, not only transmitting data but also performing computation. They are an effective paradigm for distributed applications, and especially attractive in a dynamic network environment. A routing algorithm with multiple constraints is proposed based on mobile agents.
It uses mobile agents to collect information from all mobile sensor nodes and to make every node aware of the entire network, so that a valid node will not listen to cheating information from a malicious or compromised node, which would lead to a sinkhole attack. The significant feature of the proposed mechanism is that it does not need any encryption or decryption mechanism to detect the sinkhole attack. This mechanism does not require more energy than normal routing protocols like AODV. Here we implement a simulation-based model of our solution to recover from a sinkhole attack in a wireless sensor network. The mobile agents were developed using the Aglet <s> BIB007
A wireless sensor network consists of small nodes with the ability to sense data and send it to a base station. Wireless sensor networks are used in many different applications, for example in military activities, where they are used to track the movements of an enemy. They are also used in fire detection and in health services, e.g. for monitoring the heartbeat BIB003 BIB004 . Unfortunately, most wireless sensor networks are deployed in unfriendly areas and are normally left unattended. Moreover, most of their routing protocols do not consider security aspects due to resource constraints, which include low computational power, low memory, a low power supply and a low communication range BIB005 . These constraints make it easy for various adversaries to attack wireless sensor networks. One example of such an attack is the sinkhole attack. The sinkhole attack is implemented at the network layer, where an adversary tries to attract as much traffic as possible with the aim of preventing the base station from receiving complete sensing data from the nodes BIB006 . The adversary normally compromises a node, and that node is then used to launch the attack. The compromised node sends fake information about its link quality to its neighbouring nodes, since link quality is used as a routing metric to select the best route during data transmission. All the packets from its neighbours then pass through it before reaching the base station. The sinkhole attack thus prevents the base station from acquiring complete and correct sensing data from the nodes. The purpose of this paper is to study existing solutions used to detect the sinkhole attack. Different solutions to detect and identify the sinkhole attack have been suggested by various researchers, such as Krontiris et al. BIB001 , Ngai et al. BIB002 and Sheela et al. BIB007 . A rule-based detection solution was proposed by Krontiris et al. to detect the sinkhole attack; a simple sketch of this idea is given below. All the rules focus on node impersonation and are embedded in an intrusion detection system. The intruder is then easily detected when it violates one of the rules. Another, centralised solution, which involves the base station in the detection process, was proposed by Ngai et al. BIB002 . A non-cryptographic scheme, which uses mobile agents in the network to prevent the sinkhole attack, was also proposed by Sheela et al. BIB007 . The remainder of this paper is organised as follows. Section 2 discusses the sinkhole attack and its attack mechanism in two different protocols. Section 3 presents the challenges in the detection of the sinkhole attack in wireless sensor networks. Section 4 presents the different approaches proposed by different researchers to detect the sinkhole attack. Finally, Section 5 concludes this paper and proposes some future work.
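As a rough illustration of the rule-based idea of Krontiris et al. BIB001 referred to above, the sketch below shows a watchdog rule in which a node flags a neighbour whose advertised link quality is implausibly higher than the quality the node itself observes, raising an alarm after repeated violations. All names and thresholds here are our own assumptions for illustration; the actual IDS rules of BIB001 differ in detail.

```python
# Illustrative watchdog rule against sinkhole advertisements (assumed values).
SUSPICION_MARGIN = 30   # tolerated gap between advertised and observed quality
ALERT_THRESHOLD = 3     # suspicious adverts needed before raising an alarm

observed_quality = {}   # neighbour id -> locally estimated link quality
alert_count = {}        # neighbour id -> number of suspicious advertisements

def on_route_advertisement(neighbour_id, advertised_quality):
    """Check an overheard routing advertisement against the local estimate."""
    local = observed_quality.get(neighbour_id)
    if local is None:
        return  # no local estimate yet, so this neighbour cannot be judged
    if advertised_quality > local + SUSPICION_MARGIN:
        alert_count[neighbour_id] = alert_count.get(neighbour_id, 0) + 1
        if alert_count[neighbour_id] >= ALERT_THRESHOLD:
            print(f"possible sinkhole: neighbour {neighbour_id}")
```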
A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> Manual drafting is rapidly being replaced by modern, computerized systems for defining the geometry of mechanical parts and assemblies, and a new generation of powerful systems, called geometric (solid) modeling systems (GMSs), is entering industrial use. Solid models are beginning to play an important role in off-line robot programming, model-driven vision, and other industrial robotic applications. A major deficiency of current GMSs is their lack of facilities for specifying tolerancing information, which is essential for design analysis, process planning, assembly planning for tightly toleranced components, and other applications of solid modeling. This paper proposes a mathematical theory of tolerancing that formalizes and generalizes current practices and is a suitable basis for incorporating tolerances into GMSs. A tolerance specification in the proposed theory is a collection of geometric constraints on an object's surface features, which are two-dimensional subsets of the object's boundary. An obje... <s> BIB001 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> In this article we propose a modeling space for objects defined in terms of tolerances based on Requicha's suggestion of using variational classes. Variational classes are subsets of the hyperspace 2^(E^n), where E^n is Euclidean n-space. In order to motivate the ideas, the discussion involves the same simple example throughout: the specification of a ball bearing defined by position, size, and form constraints. We begin by discussing the relationship between Requicha's original proposal and our proposal for a definition of what should be viewed as a permissible variational class. We call such a permissible class, together with a nominal solid S, an R-class. We then introduce generalized versions of the regularized Boolean operations, which operate not on r-sets, but rather on R-classes. Just as the r-sets are closed under regularized Boolean operations, so the R-classes are closed under the generalized versions of the regularized Boolean operations. Finally, we discuss the relationship between the R-class... <s> BIB002 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> Geometric feature variations are the result of variations in the shape, orientation or location of part features as defined in the ANSI Y14.5M-1982 tolerance standard. When such feature variations occur on the mating surfaces between components of an assembly, they affect the variation of the completed assembly. The geometric feature variations accumulate statistically and propagate kinematically in a similar manner to the dimensional variations of the components in the assembly. The direct linearization method (DLM) for assembly tolerance analysis provides a means of estimating variations and assembly rejects, caused by the dimensional variations of the components in an assembly. So far no generalized approach has been developed to include all geometric feature variations in a computer-aided tolerance analysis system. This paper introduces a new, generalized approach for including all the geometric feature variations in the tolerance analysis of mechanical assemblies. It focuses on how to characterize geometri...
<s> BIB003 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> A variational model is a computer representation of a variational class and stands for a collection of different instances of the part or assembly modeled in CAD. The different basic approaches to variational modeling are reviewed. A surface-based approach to variational modeling is discussed. The approach is applied to solving the problems of eliminating rigid-body motion, handling incidence and tangency constraints, and modeling form variations. The application of variational modeling to automated tolerance analysis is also discussed. <s> BIB004 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> A computer aided tolerance analysis tool is presented that assists the designer in evaluating worst case quality of assembly after tolerances have been specified. In tolerance analysis calculations, sets of equations are generated. The number of equations can be restricted by using a minimum number of points in which quality of assembly is calculated. The number of points needed depends on the type of surface association. The number of parameters in the set of equations can be reduced by considering the most critical direction for the assembly condition. The latter direction, called virtual plan fragment direction, is determined using a virtual plan fragment table, based on an analogy to the plan fragment table used in degrees of freedom (DOF) analysis. This reduced set of equations is then solved and optimized in order to find the maximum/minimum values for the assembly condition using simulated annealing. This method for tolerance analysis has been implemented in a feature based (re-)design support system called FROOM, as part of the functional tolerancing module. <s> BIB005 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> This paper presents a matrix approach coupled to the notion of constraints for the representation of tolerance zones within CAD/CAM (computer-aided design and manufacture) systems. The proposed theory reproduces the measurable or non-invariant displacements associated with various types of tolerance zone. This is done using the homogeneous transforms commonly associated with robotic modelling. <s> BIB006 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> The direct linearization method (DLM) for tolerance analysis of 3-D mechanical assemblies is presented. Vector assembly models are used, based on 3-D vector loops which represent the dimensional chains that produce tolerance stackup in an assembly. Tolerance analysis procedures are formulated for both open and closed loop assembly models. The method generalizes assembly variation models to include small kinematic adjustments between mating parts. Open vector loops describe critical assembly features. Closed vector loops describe kinematic constraints for an assembly. They result in a set of algebraic equations which are implicit functions of the resultant assembly dimensions. A general linearization procedure is outlined, by which the variation of assembly parameters may be estimated explicitly by matrix algebra. Solutions to an over-determined system or a system having more equations than unknowns are included. A detailed example is presented to demonstrate the procedures of applying the DLM to a 3-D mechanical assembly.
<s> BIB007 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> Ever since the plus/minus limits on dimensions first started to appear on engineering drawings in the early 1900s, tolerances have been one of the most important issues for every engineer involved in the product realization processes. In particular, with the advancement of computers and CAD/CAM techniques in the 1970s, the tolerance-related issues have continuously drawn the attention of many researchers since then. As a result, a tremendous number of research articles have been published over the last 30 years. This paper aims at a comprehensive state-of-the-art review on various tolerancing issues in design and manufacturing. However, due to the overwhelming number of existing research publications, any reviews on tolerancing issues could by no means be exhaustive. Rather, this review attempts to provide the reader with a view toward a balanced understanding of the various problems in tolerancing by presenting some typical research work for each of the classified fields, and tries to draw the potential ... <s> BIB008 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> The paper surveys the current status of the models for representing, manipulating and analyzing dimensioning and tolerancing data behind the major computer-aided tolerancing (CAT) systems, now commercially available. The solid models and tolerance models that these systems adopt, and their implications for the successful integration of CAT and CAD/CAM are discussed. There are two main purposes. First, the attention is focused on the theoretical backgrounds of every CAT system analyzed, for understanding its limits and its usefulness in design and manufacturing practice. The other purpose is to present a comparison among these systems to support the designer and the manufacturing engineer, as well as the researchers, in the choice of the system that will be most appropriate for each of them. Finally, the research work that remains to be carried out, to improve the future CAT systems and their integration with CAD/CAM is sketched out. <s> BIB009 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> The so-called min/max tolerance charts are popular for analysing tolerance accumulation in parts and assemblies, despite the fact that this method is limited to one-dimensional worst-case analysis. Since different chart construction rules apply to different tolerance classes, the user has to remember all the different rules to construct a valid tolerance chart. Constructing the charts manually, as is common practice in industry today, is time-consuming and error-prone. This paper presents procedures for automating tolerance charting. It requires a computer aided design (CAD) model with GD&T specifications as input. The automation is based on a data model called the constraint-feature-tolerance (CTF) graph and machine interpretable representation of charting rules specific to each tolerance class. This paper also presents procedures for automatically extracting dimension and tolerance stacks for any user defined analysis dimension, and automatic part arrangement in assemblies corresponding to worst case analyses. Important implementation issues are also discussed and several case studies are presented, the results of which have been verified by manual chart construction.
<s> BIB010 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Introduction <s> Mechanical products are usually made by assembling many parts. The dimensional and geometrical variations of each part have to be limited by tolerances able to ensure both a standardized production and a certain level of quality, which is defined by satisfying functional requirements. The appropriate allocation of tolerances among the different parts of an assembly is the fundamental tool to ensure assemblies that work properly at lower costs. Therefore, there is a strong need to develop a tolerance analysis to satisfy the requirements of the assembly by the tolerances imposed on the single parts. This tool has to be based on a mathematical model able to evaluate the cumulative effect of the single tolerances. Actually, there are some different models used or proposed by the literature to make the tolerance analysis of an assembly, but none of them is completely and univocally accepted. Some authors focus their attention on the solution of single problems found in these models or in their practical application in computer-aided tolerancing systems. But none of them has done an objective and complete comparison among them, analyzing the advantages and the weaknesses and furnishing a criterion for their choice and application. This paper briefly introduces two of the main models for tolerance analysis, the vector loop and the matrix. In this paper, these models are briefly described and then compared showing their analogies and differences. <s> BIB011
There is a strong need for industries to produce high-precision assemblies at lower costs. Therefore, there is a strong need to use tolerance analysis to predict the effects of the tolerances assigned to the components of an assembly on the functional requirements of the assembly itself. The aim of tolerance analysis is to study the accumulation of dimensional and/or geometric variations resulting from a stack of dimensions and tolerances. The results of the analysis are significantly conditioned by the adopted mathematical model. Several models have been proposed in the literature to carry out the tolerance analysis of an assembly constituted by rigid parts. The foremost works are found in Requicha, who introduced the mathematical definition of tolerance semantics BIB001 and proposed a solid offset approach for this purpose. Since then, several models have been proposed in the literature: the vector loop model uses vectors to represent the relevant dimensions in an assembly BIB003 BIB007 ; the variational model uses homogeneous transformation matrices to represent the variability of an assembly due to tolerances and assembly constraints BIB002 BIB004 ; the matrix model uses a displacement matrix to describe any roto-translational variation a feature may be subjected to BIB005 BIB006 ; the tolerance-map model uses a hypothetical solid of points in n dimensions which represents all possible variations of a feature or an assembly BIB010 ; the Jacobian model uses an approach derived from the description of kinematic chains in robotics to formulate the displacement matrices; and the torsor model uses screw parameters to model three-dimensional tolerance zones. In the literature, some studies compare these models for tolerance analysis by dealing with their general features BIB008 . Other studies compare the main computer-aided tolerancing software packages that implement some of the models of tolerance analysis BIB009 ; but these studies focus only on the general features of the considered models. Moreover, no paper compares the different analytical methods on the basis of a case study that clearly underlines all their advantages and weaknesses; therefore, no guidelines exist to select the method most appropriate for a specific aim. The purpose of this work is to analyse two of the most significant models for the tolerance analysis of rigid-part assemblies: the model called Jacobian and the model called torsor. The comparison of the models starts from their application to a case study. Dimensional and geometrical tolerances have been considered as part of the stack-up functions. Both the worst-case and the statistical approaches have been taken into account. The application of the Envelope Principle (ASME 1994) and of the Independence Principle has been deeply investigated. Finally, the evolution of these two models, called the unified Jacobian-torsor model, is presented too; it should overcome the limits of the two compared models. Two further works by the authors compare the other main models developed in the literature: the matrix and vector loop models BIB011 and the variational models. Section 2 gives an overall explanation of the Jacobian and torsor models. Section 3 gives a comprehensive comparison of the two models by means of a case study characterised by 2D tolerance stack-up functions; it also offers some guidelines for those who will have to make the choice.
Finally, Section 4 presents the evolution of the two models, the unified Jacobian-torsor model, which seems to overcome some of the limits of the Jacobian and torsor models.
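Before entering the two models, the short Python fragment below contrasts the two accumulation philosophies that recur throughout the comparison, worst-case and statistical (root-sum-of-squares) stack-up, on a purely hypothetical one-dimensional chain of tolerances; the numbers are arbitrary and the example is independent of the Jacobian and torsor formulations.

```python
import numpy as np

# Hypothetical bilateral tolerances (mm) of four dimensions in a 1-D stack.
tolerances = np.array([0.05, 0.10, 0.02, 0.08])

# Worst-case approach: every dimension sits at its extreme simultaneously.
t_wc = tolerances.sum()

# Statistical approach (root sum of squares): independent, centred variations.
t_rss = np.sqrt(np.sum(tolerances ** 2))

print(f"worst case : +/- {t_wc:.3f} mm")   # +/- 0.250 mm
print(f"statistical: +/- {t_rss:.3f} mm")  # +/- 0.139 mm
```

The statistical estimate is always tighter, which is one reason why the two approaches may lead to different conclusions on the same case study.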
A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Torsor model <s> The analysis of the difficulties encountered with traditional dimension chains in the description of the behaviour of a set of parts, with variation, has led us to develop a tridimensional model of variations. It is therefore designed to treat the problem of transfer of dimensions. <s> BIB001 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Torsor model <s> The work presented here goes along the research for the principles that, starting from functional requirements, allow to compute the nature and value of tolerances on each part of a mechanism. In comparison with A.Clement’s or J.Turner’s works, our contribution is included in the formal description of the elements of tridimensional tolerance chains. This approach is built upon two elements, a modelization of geometric errors and a method of computation for their propagation inside of a mechanism. The modelization of geometric variations proposed here is founded upon the association of small displacement torsors to the different types of deviations that can be met in a mechanism. From then on, determining the parts’ small displacements under the effect of deviations and of gaps of the parts in a mechanism, becomes a computation of the composition of the modelized geometric errors. This computation of each part’s position yields two results. First, the formal determination of the part’s position in the mechanism in relation with the chains of influent geometric variations influenced by the parts’ surfaces. Then, the description of a combina- tory of a mechanism’s configurations. The application of this method shows the results obtained as well as the possibilities of extension towards a tolerancing aiding tool. <s> BIB002 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Torsor model <s> Abstract A method to perform tridimensional analysis and tridimensional synthesis of machining tolerances is presented. The model relies on the small displacement torsor concept and on the computer aided tolerancing approach. The link between process planning and functional tolerances with torsor chains according to workpiece set-ups is exposed. An example shows how the model is used to determine automatically geometrical variations of the workpiece knowing geometrical variations of each part (part-holder, positioning devices, machined surfaces, etc.). <s> BIB003 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Torsor model <s> Abstract The object of this article is to present a tolerancing model, the “Proportioned Assembly Clearance Volume: PACV”. The finality of the PACV is to create a three-dimensional (3D) tolerancing analysis tool that takes into account only standardized specifications. The PACV is based on the Small Displacements Torsor (SDT) concept. SDT is used to express the relative position between two ideal surfaces. The rotations between two geometric features are linearized, i.e. displacements are transformed into small displacements. An ideal surface is a surface, which can be characterized by a finite number of geometric features: point, centerline, part face, etc. A nominal surface is an ideal surface by definition. By modeling fabricated surfaces in ideal surfaces, it is possible to compute the limits of small displacements of a fabricated surface inside a tolerance zone. The values of these limits define a PACV. 
With a similar method, the limits of small displacements between two surfaces of two distinct parts, e.g. clearance in a joint, can be determined: they define a PACV. Using a graph, we illustrate how PACV (edges) could be associated in series or in parallel between two any surfaces (vertices) in an assembly, in order to create 3D dimension-chains. The governing rule of PACV in series is introduced. In addition, one example of computation of 3D dimension-chain (result of an association of PACV in series) is presented. <s> BIB004 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Torsor model <s> This paper proposes a three-dimensional (3D) model on manufacturing tolerancing for mechanical parts. The work presented relies on research conducted at the LURPA on the computation of 3D tolerance chains for mechanisms. Starting from these works, the authors propose a formalization of the problem within the more specific context of manufacturing tolerances. Models of the workpiece, the set-ups and the machining operations are provided. The main originality is to model the machining set-up as a mechanism. The concept of the small displacements torsor (SDT) is used to model the process planning. It opens up the way for the 3D integration product/process because of the similarities between the concepts used in both points of view. The first part recalls the principle of the modelling of surface variations with SDT as well as its application to the modelling of mechanisms. The second part introduces the use of the concept in the case of manufacturing tolerancing. A third part shows the modelling of the probl... <s> BIB005
The torsor model uses screw parameters to model three-dimensional tolerance zones BIB003 . Each actual surface of a part is modelled by a substitute surface. A substitute surface is a surface that has the shape of the nominal surface and is used to model the actual surface. A substitute surface is characterised by a set of screw parameters which express the deviation of the substitute surface from the nominal one. For each of the seven types of tolerance zone, the corresponding screw parameters are obtained by annulling those that leave the surface invariant in its local frame. The screw parameters so obtained are arranged in a particular mathematical operator called a 'torsor'. Considering a generic feature, if u_A, v_A, w_A are the translation parameters of the point A, and α, β, γ are the rotation angles (considered small) with respect to the nominal position, the torsor of point A is given by:

$$ T_A = \begin{Bmatrix} \alpha & u_A \\ \beta & v_A \\ \gamma & w_A \end{Bmatrix}_R $$

where R is the DRF in which the screw parameters are evaluated. Once the torsor of point A is known, the torsor of point B may be evaluated as:

$$ T_B = \begin{Bmatrix} \alpha & u_B \\ \beta & v_B \\ \gamma & w_B \end{Bmatrix}_R $$

where:

$$ u_B = u_A + \beta\,AB_z - \gamma\,AB_y, \qquad v_B = v_A + \gamma\,AB_x - \alpha\,AB_z, \qquad w_B = w_A + \alpha\,AB_y - \beta\,AB_x $$

and where AB_x, AB_y and AB_z are the components of the distance vector between points A and B along the x, y and z axes respectively. To model the interaction between the parts of an assembly, three kinds of torsors (or Small Displacement Torsors, SDTs) may be defined BIB001 BIB002 : a part SDT for each part of the mechanism (A, B, . . .) to model the displacement allowed to the part; a deviation SDT for each surface (A1, A2, . . .) of each part to model the geometrical variations of the surface; and a gap SDT between two surfaces linking two parts to model the joint properties. A union of the set of SDTs involved at the joints is then used to obtain the global behaviour of the mechanism. This may be done by considering that, with the worst-case approach, the cumulative effect of a simple chain of n elements is simply expressed by adding the single components of the torsors:

$$ T_{FR} = \sum_{i=1}^{n} T_i \qquad \text{(9)} $$

(note that, to compute this sum, the components of all the single torsors must be referred to the same point B and expressed in the same datum reference frame R). The basic steps of the torsor model are BIB005 : (1) Identify the elements of the parts and the relations among them; this information is usually reported in a surfaces graph. The functional requirements (FRs) and the stacks relating these FRs are identified too. (2) Define the parameters of the mechanism: a deviation torsor has to be associated with each surface of the parts, and therefore a characterisation of the global SDT of each part has to be done. The shape of the gap torsor associated with each joint, according to the functional conditions required by the assembly, has to be defined too. (3) Compute the cumulative effect of the torsors involved in each stack in order to evaluate the functional requirements by Equation (9). Finally, some fundamental considerations are needed. The first is that this model was developed under the hypothesis of using the TTRS and the positional tolerancing criteria. The second is that the solution of stacks arranged in a network is not completely developed, in spite of the different works produced in the last years. The third is that the torsor components assume a double meaning. In a first approach, the small displacement torsor components are simple parameters and they are computed by means of common algebraic rules.
An example of this first approach is given in BIB003 , where a tolerance problem involving network functions is solved. In the second approach, the small displacement torsor components are admissible intervals derived from the applied tolerance ranges. An example of this approach is given in BIB004 . The first approach yields a solution to tolerance analysis problems very similar to the other approaches in the literature, while the second approach may theoretically relate the variability range of the functional requirements of the assembly to the assigned tolerance ranges. However, this second approach requires computing the small displacement torsor components, which are intervals, by means of interval arithmetic, which is not fully developed yet. In the following we refer only to this second approach, since its potential seems more interesting for tolerance analysis.
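To make these notions concrete, the short Python sketch below transports a small-displacement torsor from a point A to a point B with the relations given above, and accumulates componentwise bounds along a simple chain in the worst case. This is only an illustration of the mechanics: the function names and numerical bounds are ours, and a rigorous treatment of the second (interval-based) reading would call for a proper interval arithmetic library.

```python
import numpy as np

def transport(rot, trans_A, AB):
    # Express a small-displacement torsor at point B instead of point A.
    # The rotation part (alpha, beta, gamma) is invariant; the translation
    # part follows the moment-transport rule D_B = D_A + rot x AB,
    # everything being expressed in the same frame R.
    rot = np.asarray(rot, dtype=float)
    trans_B = np.asarray(trans_A, dtype=float) + np.cross(rot, np.asarray(AB, dtype=float))
    return rot, trans_B

def worst_case_sum(bounds):
    # Worst-case cumulative effect of a simple chain (Equation (9)):
    # lower and upper bounds of the six components add term by term, once
    # all torsors are referred to the same point and frame.
    lo = np.sum([np.asarray(b[0], dtype=float) for b in bounds], axis=0)
    hi = np.sum([np.asarray(b[1], dtype=float) for b in bounds], axis=0)
    return lo, hi

# A 1 mrad rotation about x, seen 50 mm away along y, creates a 0.05 mm
# translation along z at point B.
rot, trans_B = transport([1e-3, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 50.0, 0.0])

# Hypothetical bounds on (alpha, beta, gamma, u, v, w) for two deviation
# torsors, as would be derived from the applied tolerance zones.
t1 = ([-1e-3, -1e-3, 0, -0.02, -0.02, 0], [1e-3, 1e-3, 0, 0.02, 0.02, 0])
t2 = ([-2e-3, 0, 0, -0.05, 0, 0], [2e-3, 0, 0, 0.05, 0, 0])
lo, hi = worst_case_sum([t1, t2])
```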
A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Unified Jacobian-torsor model <s> Abstract This paper presents a model for computer-aided tolerancing which enables to perform tolerance analysis and tolerance synthesis in both deterministic or statistical situations. The model combines the benefits of the Jacobian and torsor approaches developed for computer aided tolerancing. The proposed unified model is formulated using interval-based arithmetic. The paper describes how different solving engines of the same set of interval-based equations lead to different types of problems being solved, i.e. deterministic (worst case) or statistical problems. The paper also shows how the unified analysis model can be numerically inverted for performing synthesis calculations. <s> BIB001 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Unified Jacobian-torsor model <s> this paper describes how the unified Jacobian-Torsor model can be used for the redesign of assembly tolerances. Through a numerical and graphical approach, the designer is guided into choosing his tolerances. After having identified a functional requirement (FR) and the functional elements (FEs) of the dimensional chain, it becomes possible to compute the percentage contribution of each FE to the FR. A percentage contribution chart tells the designer how each dimension has contributed to the total FR. Once these contributions have been identified, we proceed to modify the most critical tolerance zones to respect the requirements imposed by the designer. The results are evaluated by comparing the predicted variation with the corresponding specified design requirements. Then, the contributions can be used to help decide which tolerance to tighten or loosen. This study has been developed for Worst-Case approach. Finally, an example application is presented to illustrate our methodology. <s> BIB002 </s> A review of two models for tolerance analysis of an assembly: Jacobian and torsor <s> Unified Jacobian-torsor model <s> This paper presents the results of ongoing research aimed at developing an integrated computer-aided tolerancing tool. A unified Jacobian–Torsor approach has been developed for deterministic (worst case) computer-aided tolerancing. The paper describes how one can use the same set of interval-based deterministic equations in a statistical context. The nature of the resulting equations lends itself to very fast computations to determine the percentage of rejected assemblies produced given some statistical distribution of the tolerances of their constituent parts. An example application is also presented to illustrate the use of the developed tool. <s> BIB003
As just underlined, the main advantage of the Jacobian model is the simplicity of evaluating the Jacobian matrix from the nominal conditions (Equations (4) and (5)). This makes it possible to directly relate the displacements of the functional requirements to the virtual joint displacements (Equation (3)). The solution of the network functions also seems easier to approach than with the torsor model. Despite this advantage, the virtual joint displacements are difficult to relate to the tolerances applied to the components. The torsor model allows one to easily evaluate the variability ranges of the small displacements from the tolerances applied to the components, but it is very difficult to relate these ranges to the variability ranges of the functional requirements of the assembly. In the last years, the idea of the unified Jacobian-torsor model has been presented in order to evaluate the virtual joint displacements from the tolerances applied to the components by means of the torsors and, then, to relate the displacements of the functional requirements to the virtual joint displacements by means of the Jacobian matrix BIB001 ; this is theoretically possible since the deviations are usually small and the equations may be linearised. The proposed unified model expands the functionalities of the Jacobian model in two important respects. First, the punctual small displacement variables of the former Jacobian formulation are now considered as intervals, formulated and solved using interval-based arithmetic. The equations describing the bounds within which a feature is permitted to move, which are the constraint equations of the torsor formulation, are applied to the unified model. Second, some of the small displacement variables used in the model are eliminated due to the invariant nature of the movements they generate with respect to the toleranced feature. This standard result of the torsor formulation is applied to the unified model. The effect is to significantly reduce the size of the unified model. This new model enables one to perform tolerance analysis and tolerance synthesis BIB001 or to redesign the assembly tolerances BIB002 . The unified Jacobian-torsor model has been developed for deterministic (worst-case) computer-aided tolerancing. Recently, the same set of interval-based deterministic equations has been applied to a statistical context BIB003 , and the model has been used to develop a method for obtaining the functional requirement cost for a product.
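A minimal numerical sketch may help to see what the interval propagation at the heart of the unified model amounts to: bounds on the virtual joint displacements, obtained from the torsors, are pushed through the Jacobian matrix, whose positive and negative parts select the interval endpoints that yield the extreme values of the functional requirement. The Jacobian row and the bounds below are illustrative values, not taken from the cited works.

```python
import numpy as np

def propagate(J, lo, hi):
    # Worst-case interval propagation of joint-displacement bounds [lo, hi]
    # through a linearised Jacobian J: each bound of the functional
    # requirement is attained at a vertex of the interval box.
    Jp, Jn = np.maximum(J, 0.0), np.minimum(J, 0.0)
    return Jp @ lo + Jn @ hi, Jp @ hi + Jn @ lo  # (FR lower, FR upper)

# One functional requirement and one virtual joint with six small
# displacements (alpha, beta, gamma, u, v, w); the 50 mm entries play the
# role of lever arms coupling the rotations to the FR.
J  = np.array([[0.0, 50.0, -50.0, 1.0, 0.0, 0.0]])
lo = np.array([-1e-3, -1e-3, -1e-3, -0.02, 0.0, 0.0])
hi = np.array([ 1e-3,  1e-3,  1e-3,  0.02, 0.0, 0.0])
print(propagate(J, lo, hi))  # bounds on the functional requirement
```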
An overview of signal processing issues in chemical sensing <s> <s> Improved lead-in device passing a conductor through the cover of an electrical precipitation apparatus down to its spray system. The improved device is characterized in that a porous insulator is used as the insulator and a gas-tight insulating cylinder concentrically surrounds the porous insulator. The device is further fitted with a tubular gas inlet opening into the porous insulating cylinder. The upper and lower ends of the insulator and the insulating cylinder, respectively, are connected together by annular metal plates so as to form a space therebetween, and the space is occupied by a sealing gas maintained at a relatively higher pressure than that prevailing inside the electrical precipitation apparatus. <s> BIB001 </s> An overview of signal processing issues in chemical sensing <s> <s> Abstract This paper explains the multi-way decomposition method PARAFAC and its use in chemometrics. PARAFAC is a generalization of PCA to higher order arrays, but some of the characteristics of the method are quite different from the ordinary two-way case. There is no rotation problem in PARAFAC, and e.g., pure spectra can be recovered from multi-way spectral data. One cannot as in PCA estimate components successively as this will give a model with poorer fit, than if the simultaneous solution is estimated. Finally scaling and centering is not as straightforward in the multi-way case as in the two-way case. An important advantage of using multi-way methods instead of unfolding methods is that the estimated models are very simple in a mathematical sense, and therefore more robust and easier to interpret. All these aspects plus more are explained in this tutorial and an implementation in Matlab code is available, that contains most of the features explained in the text. Three examples show how PARAFAC can be used for specific problems. The applications include subjects as: Analysis of variance by PARAFAC, a five-way application of PARAFAC, PARAFAC with half the elements missing, PARAFAC constrained to positive solutions and PARAFAC for regression as in principal component regression. <s> BIB002 </s> An overview of signal processing issues in chemical sensing <s> <s> This brief proposes a design method for a digital fractional order Savitzky-Golay differentiator (DFOSGD), which generalizes the Savitzky-Golay filter from the integer order to the fractional order for estimating the fractional order derivative of the contaminated signal. The proposed method calculates the moving window's weights using the polynomial least-squares method and the Riemann-Liouville fractional order derivative definition, and then computes the fractional order derivative of the given signal using the convolution between the weights and the signal, instead of the complex mathematical deduction. Hence, the computation time is greatly improved. Frequency-domain analysis reveals that the proposed differentiator is essentially a fractional order low-pass filter. Experiments demonstrate that the proposed DFOSGD can accurately estimate the fractional order derivatives of both noise-free signal and contaminated signal. <s> BIB003
With the advent of more affordable, higher-resolution or novel data acquisition techniques, chemical analysis has progressively been using more advanced signal and image processing tools. Since analytical chemistry (AC) has numerous applications in forensics, bioinformatics, clinical, environmental and material analysis, investigating how signal processing (SP) methods, hereafter encompassing image analysis as well, can be used for solving analytical chemistry problems is of interest in many application domains. Indeed, both specialties (AC and SP) share very similar values of best practice in carrying out identifications and comprehensive characterizations, be they of chemical samples for AC or of numerical data for SP. For instance, the chemical analyst's approach to performing an analysis, resorting to different preparation steps and different analytical techniques (Section 2.1), resembles the manner employed in traditional signal or image analysis. As a consequence, a better interaction between both communities is possible and desirable. Interactions between the SP and AC communities would be useful in providing new types of data and constraints and in solving AC-related issues [1] . Conversely, interactions can also be beneficial for the SP community, with opportunities in lesser-known tools. As a first example, it is clear that the well-known PARAFAC approach BIB002 played an important role in SP: it has been (and still is) a source of inspiration for source separation methods and other representations of complex multi-way data based on tensor decomposition. A second example is the Savitzky-Golay filter, whose original work BIB001 is one of the most cited papers in analytical chemistry. The design of these filters makes them shape-preserving smoothers, better suited to denoising empirical data composed of sums of round-shaped peaks than standard averaging or frequency-designed filters. Interestingly, even though it falls in the category of least-squares, polynomial interpolating filters, it is barely present in signal processing textbooks and rarely known to signal processing specialists BIB003 . The recent tutorial paper might renew the interest of the community in this specific topic. We similarly aim at bringing lesser-known chemical sensing issues and references to the signal processing community. The paper, obviously far from exhaustive, provides a selection of key contributions to the field of analytical chemistry, whose modus operandi bears some similarities with that of signal processing, as described in Section 2, along with a description of the main types of data encountered and some of the needs in routine chemical analysis. Section 3 forms the core of the tutorial. It first recalls prior seminal works, followed by a decomposition of the main issues into SP-related topics. Some conclusions are provided in Section 4.

The observed amplitude at each location (Figure 1, left) is generally considered as related to the proportion of a certain component. Since some uncertainty and variability exist, the proportion of an elementary component is generally distributed around an average location on the ordinal axis, so as to form a "peak". Different peak parametric models [8, p. 97 sq.], for instance Gaussian or Lorentzian, have been developed to address different types of observed separation processes or analyzed components. The ordinal variable is not restricted to time or space.
It represents a physico-chemical property which realizes the separation between elementary components, e.g. boiling point (temperature), migration (molecular mass), sensitivity to electro-magnetic fields (mass-to-charge ratio), etc. When considering additional instrumental drift and disturbances, the resulting chemical signal, often termed a "spectrum", is composed, at first order, of a linear combination of: a sum of peaks of different amplitudes (more or less overlapped, or "co-eluted"); a slow-varying, sometimes monotone, baseline (or background) representing the lower limit of peak amplitude quantification; and noise. Hence, the simplest model is a linear mixture. Globally, those signals somehow differ from SP standards, as they rarely exhibit jumps, step edges or oscillatory behavior. Hence, they deserve a set of analysis tools that drifts away from the usual derivative-based contour detectors, frequency-domain filters or multi-scale detectors. From a signal processing perspective, they often enjoy additional useful properties. For instance, when considering concentrations of analytes in a mixture, elementary spectra should be non-negative and have unit concentration, taking into account the stoichiometric constants of balanced chemical equations (conservation of mass, charge or atoms). Recently, sparsity constraints on chemical species in a reaction have come into play.
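To fix ideas, the following Python sketch generates a synthetic signal according to this first-order model: a sum of Gaussian peaks (two of them partially co-eluted), a slowly varying monotone baseline, and additive noise. All numerical values are arbitrary.

```python
import numpy as np

def gaussian_peak(x, center, height, fwhm):
    # Gaussian peak parameterised by its full width at half maximum.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return height * np.exp(-0.5 * ((x - center) / sigma) ** 2)

x = np.linspace(0.0, 100.0, 2000)            # ordinal variable (e.g. time)
peaks = (gaussian_peak(x, 30.0, 1.0, 2.0)
         + gaussian_peak(x, 45.0, 0.6, 3.0)  # these two peaks overlap:
         + gaussian_peak(x, 48.0, 0.8, 3.0)) # co-elution
baseline = 0.2 + 0.004 * x                   # slow-varying monotone background
noise = 0.01 * np.random.default_rng(0).normal(size=x.size)
spectrum = peaks + baseline + noise          # first-order linear mixture
```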
An overview of signal processing issues in chemical sensing <s> On needs and trends in chemical signal analysis <s> Abstract In this paper, we compare the current separation power of comprehensive two-dimensional gas chromatography (GC×GC) with the potential separation power of GC–mass spectrometry (GC–MS) systems. Using simulated data, we may envisage a GC–MS contour plot, that can be compared with a GC×GC chromatogram. Real examples are used to demonstrate the current potential of the two techniques in the field of hydrocarbon analysis. As a separation technique for complex hydrocarbon mixtures, GC×GC is currently about as powerful as GC–MS is potentially powerful. GC–MS has not reached its potential separation power in this area, because a universal, soft ionization method does not exist. The greatest advantage of GC×GC is, however, its potential for quantitative analysis. Because flame-ionisation detection can be used, quantitative analysis by GC×GC is much more robust, reliable and reproducible. <s> BIB001 </s> An overview of signal processing issues in chemical sensing <s> On needs and trends in chemical signal analysis <s> Comprehensive two-dimensional gas chromatography (GC x GC) has been investigated for the characterization of high valuable petrochemical samples from dehydrogenation of n-paraffins, Fischer-Tropsch and oligomerization processes. GC x GC separations, performed using a dual-jets CO2 modulator, were optimized using a test mixture representative of the hydrocarbons found in petrochemicals. For complex samples, a comparison of GC x GC qualitative and quantitative results with conventional gas chromatography (1D-GC) has demonstrated an improved resolution power of major importance for the processes: the group type separation has permitted the detection of aromatic compounds in the products from dehydrogenation of n-paraffins and from oligomerization, and the separation of alcohols from other hydrocarbons in Fischer-Tropsch products. <s> BIB002
Due to the need for routine chemical analysis and testing on large quantities of data from high-throughput instruments, it is very important that processing techniques have a limited number of parameters, with semi-automatic or at least intuitive determination. Since chemical analyses are often relative, the repeatability of the signal processing is a very important feature. Despite the simplistic linear model described above, analytical methods possess different specificities and pose distinct challenges. As the studied chemical compounds steadily become more complex, their separation into elementary components is often difficult with a single technique. Even with increases in instrumental resolution (e.g., the capability to output finer peaks, with respect to the full width at half maximum), the need for a separation based on two or more chemical properties (e.g. boiling point and electronic structure) has emerged. This has given birth to hyphenated techniques, combining some of the aforementioned techniques in pairs, triples, etc. For instance, two-dimensional or comprehensive chromatography (GC×GC, Fig. 1-right) BIB002 generates a two-dimensional signal with the above features. The resulting images are far different from the standard cartoon/texture model, and promote innovative methods. Hyphenation may be extended to higher dimensions BIB001 , providing an enhancement of resolution at the cost of more drastic data management problems. Despite the variety of techniques, AC methods share the common concepts of separation, detection, identification and quantification (here of atomic, molecular, and ionic species). Such broad concepts are at the core of signal and image processing as well, albeit with different meanings. Given the close relationship between the two disciplines, we choose to decompose chemical sensing issues into SP-related fields, better suited to the target audience.
An overview of signal processing issues in chemical sensing <s> Historical mentions and early works <s> Improved lead-in device passing a conductor through the cover of an electrical precipitation apparatus down to its spray system. The improved device is characterized in that a porous insulator is used as the insulator and a gas-tight insulating cylinder concentrically surrounds the porous insulator. The device is further fitted with a tubular gas inlet opening into the porous insulating cylinder. The upper and lower ends of the insulator and the insulating cylinder, respectively, are connected together by annular metal plates so as to form a space therebetween, and the space is occupied by a sealing gas maintained at a relatively higher pressure than that prevailing inside the electrical precipitation apparatus. <s> BIB001 </s> An overview of signal processing issues in chemical sensing <s> Historical mentions and early works <s> Abstract Signal processing is a unifying principle useful in understanding the operation of analytical instrumentation. Through it new modes of operation may be found for existing analytical methods and entirely new methods may be discovered. <s> BIB002 </s> An overview of signal processing issues in chemical sensing <s> Historical mentions and early works <s> Signal processing refers to a variety of operations that can be carried out on a continuous (analog) or discrete (digital) sequence of measurements in order to enhance the quality of information it is intended to convey. In the analog domain, electronic signal processing can encompass such operations as amplification, filtering, integration, differentiation, modulation/demodulation, peak detection, and analog-to-digital (A/D) conversion. Digital signal processing can include a variety of filtering methods (e.g. polynomial least-squares smoothing, differentiation, median smoothing, matched filtering, boxcar averaging, interpolation, decimation, and Kalman filtering) and domain transformations (e.g. Fourier transform (FT), Hadamard transform (HT), and wavelet transform (WT)). Generally the objective is to separate the useful part of the signal from the part that contains no useful information (the noise) using either explicit or implicit models that distinguish these two components. Signal processing at various stages has become an integral part of most modern analytical measurement systems and plays a critical role in ensuring the quality of those measurements. <s> BIB003
According to , "the dawn of the computer-controlled analytical instrument can be traced to" the Savitzky-Golay paper BIB001 . Gottschalk relates analytical chemistry to information theory and considers the analyzed materials as more generic "systems". The late professor J. B. Phillips, who fathered the comprehensive chromatography evoked in Section 2.1, considers that "It is no longer possible to understand the chemistry without considering signal processing [...] as a whole" BIB002 . This paper may have been overlooked, or at least undercited. Other insightful considerations on the interplay between signal processing and analytical chemistry may be found in BIB003 .
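For readers who wish to experiment with the Savitzky-Golay filter discussed above, a readily available implementation is SciPy's savgol_filter; the short example below (with an arbitrary window length and polynomial order) smooths a noisy peak and estimates its first derivative in the same framework.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 501)
noisy = np.exp(-0.5 * x**2) + 0.05 * rng.normal(size=x.size)

# Local least-squares polynomial fits preserve peak height and width far
# better than a plain moving average over the same window.
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)
derivative = savgol_filter(noisy, 31, 3, deriv=1, delta=x[1] - x[0])
```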
An overview of signal processing issues in chemical sensing <s> Acquisition and compression related problems <s> A simple, non-moving dual-stage CO2 jet modulator is described, which cools two short sections of the front end of the second-dimension column of a comprehensive two-dimensional gas chromatograph. A stream of expanding CO2 is sprayed directly onto this capillary column to trap small fractions eluting from the first-dimension column. Remobilization of the trapped analytes is performed by direct heating by the GC oven air. Installation, maintenance and control of the modulator is simple. Focusing and remobilization of the fractions is a very efficient process, as the bandwidths of the re-injected pulses are less than 10 ms. As a result, alkane peaks eluting from the second-dimension column have peakwidths at the baseline of only 120 ms. <s> BIB001 </s> An overview of signal processing issues in chemical sensing <s> Acquisition and compression related problems <s> Complex hysteresis smoothing (CHS), which was developed for noise removal of scanning electron microscopy (SEM) images some years ago, is utilized in acquisition of an SEM image. When using CHS together, recording time can be reduced without problems by about one-third under the condition of SEM signal with a comparatively high signal-to-noise ratio (SNR). We do not recognize artificiality in a CHS-filtered image, because it has some advantages, that is, no degradation of resolution, only one easily chosen processing parameter (this parameter can be fixed and used in this study), and no processing artifacts. This originates in the fact that its criterion for distinguishing noises depends simply on the amplitude of the SEM signal. The automation of reduction in acquisition time is not difficult, because CHS successfully works for almost all varieties of SEM images with a fairly high SNR. <s> BIB002 </s> An overview of signal processing issues in chemical sensing <s> Acquisition and compression related problems <s> The objective of this work is to establish a means of correcting the theoretical maximum peak capacity of comprehensive two-dimensional (2D) separations to account for the deleterious effect of undersampling first-dimension peaks. Simulations of comprehensive 2D separations of hundreds of randomly distributed sample constituents were carried out, and 2D statistical overlap theory was used to calculate an effective first-dimension peak width based on the number of observed peaks in the simulated separations. The distinguishing feature of this work is the determination of the effective first-dimension peak width using the number of observed peaks in the entire 2D separation as the defining metric of performance. We find that the ratio of the average effective first-dimension peak width after sampling to its width prior to sampling (defined as ⟨β⟩) is a simple function of the ratio of the first-dimension sampling time (ts) to the first-dimension peak standard deviation prior to sampling (1σ): ⟨β⟩ = [1 + 0.21(ts/1σ)²]^(1/2). This is v... <s> BIB003 </s> An overview of signal processing issues in chemical sensing <s> Acquisition and compression related problems <s> Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate.
This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. <s> BIB004 </s> An overview of signal processing issues in chemical sensing <s> Acquisition and compression related problems <s> Reducing the acquisition time for two-dimensional nuclear magnetic resonance (2D NMR) spectra is important. One way to achieve this goal is reducing the acquired data. In this paper, within the framework of compressed sensing, we proposed to undersample the data in the indirect dimension for a type of self-sparse 2D NMR spectra, that is, only a few meaningful spectral peaks occupy partial locations, while the rest of locations have very small or even no peaks. The spectrum is reconstructed by enforcing its sparsity in an identity matrix domain with lp (p = 0.5) norm optimization algorithm. Both theoretical analysis and simulation results show that the proposed method can reduce the reconstruction errors compared with the wavelet-based l1 norm optimization. <s> BIB005
Data acquisition is a fundamental problem in chemistry. While classical techniques can be considered for some chemical data, there are several situations in which acquisition is a demanding step. This is the case for SEM images, for which reducing the acquisition time is a crucial need. In BIB002 , the author proposes an approach based on smoothing that reduces the acquisition time by about one-third, while ensuring a good signal-to-noise ratio. Efforts on acquisition methods have also been conducted for different chemical analyses, involving for instance the sampling of parametrized peaks BIB003 , adapted to peak-like, non-harmonic signals, or detector modulation BIB001 , akin to a hybrid between multiplexing and time-frequency representations. Another task that has been the focus of many works is data compression. The need for compression of chemical data arises in techniques for which one must store large datasets that are used as reference. Among compression techniques, the ones based on the wavelet transform are the most widely adopted solutions in chemistry; these methods have been applied to ion mobility spectrometry (IMS) sensors, MS, IR and NMR spectroscopy. Finally, it is worth mentioning the recent works on compressive sensing (CS) for chemical data. Briefly speaking, CS can be seen as a paradigm in which acquisition and compression are conducted at the same time. By exploiting the fact that the desired signal is sparse in a given domain, and by relying on a sort of random acquisition, CS methods are able to reconstruct the desired signal from a number of samples lower than the one predicted by the Nyquist-Shannon theorem BIB004 . In analytical chemistry, CS methods have been used, for instance, in NMR spectroscopy BIB005 . In these works, the application of CS methods provides relevant gains in terms of acquisition time.
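To illustrate the CS principle on a toy problem, the sketch below recovers a sparse vector from a small number of random projections with a basic iterative shrinkage-thresholding algorithm (ISTA). This is a generic l1 solver with arbitrary dimensions, shown only for intuition; it is not the lp (p = 0.5) method of BIB005 .

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    # Minimise ||y - A x||^2 / 2 + lam * ||x||_1 by proximal gradient steps.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # signal size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```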
An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> The ASC guidelines published in 1980 (ASC Committee on Environmental Improvement, ‘Guidelines for data acquisition and data quality evaluation in environmental chemistry’, Anal. Chem. 52, 2242–2249 (1980)) define the limit of quantification between a signal and its notional zero (baseline), i.e. the value of the signal without an analyte, as ten times the standard deviation of the signal's noise. If the value of the blank signal is estimated, the limit of quantification is dependent upon the variability of the estimated baseline rather than a simple multiple of the noise. It is shown that the limit of peak quantification (via numerical integration) of a Gaussian peak on an arbitrary background is strongly dependent upon the baseline estimator. In particular a peak should have a signal-to-noise ratio of approximately 60:1 using a linear baseline estimator and 10:1 for a cubic baseline estimator. Further, the expected standard deviation of peak parameters obtained via curve fitting is inflated by a factor of seven for a linear baseline estimator and two for a cubic baseline estimator. <s> BIB001 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> Abstract One way to obtain an intuitive understanding of the wavelet transform is to explain it in terms of segmentation of the time-frequency/scale domain. The ordinary Fourier transform does not contain information about frequency changes over time and the short time Fourier transform (STFT) technique was suggested as a solution to this problem. The wavelet transform has similarities to STFT, but partitions the time-frequency space differently in order to obtain better resolutions along time and frequency/scales. In STFT a constant bandwidth partitioning is performed whereas in the wavelet transform the time-frequency domain is partitioned according to a constant relative bandwidth scheme. In this paper we also discuss the following application areas of wavelet transforms in chemistry and analytical biotechnology: denoising, removal of baselines, determination of zero crossings of higher derivatives, signal compression and wavelet preprocessing in partial least squares (PLS) regression. <s> BIB002 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> Part headings and chapter headings: Preface. Theory. Finding frequencies in signals: the Fourier transform (B. van den Bogaert). When frequencies change in time: towards the wavelet transform (B. van den Bogaert). Fundamentals of wavelet transforms (Y. Mallet et al.). The discrete wavelet transform in practice (O. de Vel et al.). Multiscale methods for denoising and compression (M.N. Nounou, B.R. Bakshi). Wavelet packet transforms and best basis algorithms (Y. Mallet et al.). Joint basis and joint best-basis for data sets (B. Walczak, D.L. Massart). The adaptive wavelet algorithm for designing task specific wavelets (Y. Mallet et al.). Applications. Application of wavelet transform in processing chromatographic data (Foo-tim Chau, A. Kai-man Leung). Application of wavelet transform in electrochemical studies (Foo-tim Chau, A. Kai-man Leung). Applications of wavelet transform in spectroscopic studies (Foo-tim Chau, A. Kai-man Leung). Application of wavelet analysis to physical chemistry (H. Teitelbaum).
Wavelet bases for IR library compression, searching and reconstruction (B. Walczak, J.P. Radomski). Application of the discrete wavelet transformation for online detection of transitions in time series (M. Marth). Calibration in wavelet domain (B. Walczak, D.L. Massart). Wavelets in parsimonious functional data analysis models (B.K. Alsberg). Multiscale statistical process control and model-based denoising (B.R. Bakshi). Application of adaptive wavelets in classification and regression (Y. Mallet et al.). Wavelet-based image compression (O. de Vel et al.). Wavelet analysis and processing of 2-D and 3-D analytical images (S.G. Nikolov et al.). Index. <s> BIB003 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> A time domain filter that combines the properties of matched filtering and two-fold differentiation is presented. The filter coefficients are given by the second derivative of a Gaussian model peak, controlled by the setting of two parameters related to the chromatographic system. The fundamental characteristics of the filter were derived, and its applicability demonstrated for real liquid chromatography–mass spectrometry (LC–MS) data. The filter is primarily intended as a fast pre-processing step, for a mass chromatogram with 320 scans over 700 mass channels the computation time was 0.6 s on a standard PC. Base peak chromatograms with improved peak detection capability and mass spectra useful for compound identification were obtained with filtered data. The most significant effect of the described filter is background reduction due to the differentiation, which in combination with the matched filter can be performed with maintained or even improved signal-to-noise ratio. <s> BIB004 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> The goal of this paper is to review existing methods for protein mass spectrometry data analysis, and to present a new methodology for automatic extraction of significant peaks (biomarkers). For the pre-processing step required for data from MALDI-TOF or SELDI-TOF spectra, we use a purely nonparametric approach that combines stationary invariant wavelet transform for noise removal and penalized spline quantile regression for baseline correction. We further present a multi-scale spectra alignment technique that is based on identification of statistically significant peaks from a set of spectra. This method allows one to find common peaks in a set of spectra that can subsequently be mapped to individual proteins. This may serve as useful biomarkers in medical applications, or as individual features for further multidimensional statistical analysis. MALDI-TOF spectra obtained from serum samples are used throughout the paper to illustrate the methodology. <s> BIB005 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> In this paper, we consider a new background elimination method for Raman spectra. As a background is usually slowly varying with respect to wavelength, it could be approximated by a slowly varying curve. However, the usual curve-fitting method cannot be applied because there is a constraint that the estimated background must be beneath a measured spectrum. To meet the requirement, we adopt a polynomial as an approximating function and show that background estimation could be converted to a linear programming problem which is a special case of constrained optimization.
In addition, we present an order selection algorithm for automatic baseline elimination. According to the experimental results, it is shown that the proposed method could be successfully applied to experimental Raman spectra as well as synthetic spectra. <s> BIB006 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> We present here a fully automated spectral baseline-removal procedure. The method uses a large-window moving average to estimate the baseline; thus, it is a model-free approach with a peak-stripping method to remove spectral peaks. After processing, the baseline-corrected spectrum should yield a flat baseline and this endpoint can be verified with the χ2-statistic. The approach provides for multiple passes or iterations, based on a given χ2-statistic for convergence. If the baseline is acceptably flat given the χ2-statistic after the first pass at correction, the problem is solved. If not, the non-flat baseline (i.e., after the first effort or first pass at correction) should provide an indication of where the first pass caused too much or too little baseline to be subtracted. The second pass thus permits one to compensate for the errors incurred on the first pass. Thus, one can use a very large window so as to avoid affecting spectral peaks—even if the window is so large that the baseline is inaccurately removed—because baseline-correction errors can be assessed and compensated for on subsequent passes. We start with the largest possible window and gradually reduce it until acceptable baseline correction based on the χ2 statistic is achieved. Results, obtained on both simulated and measured Raman data, are presented and discussed. <s> BIB007 </s> An overview of signal processing issues in chemical sensing <s> Background estimation and filtering <s> The article is intended to introduce and discuss a new quantile regression method for baseline detrending of chromatographic signals. It is compared with current methods based on polynomial fitting, spline fitting, LOESS, and Whittaker smoother, each with a thresholding and reweighting approach. For curve flexibility selection in existing algorithms, a new method based on skewness of the residuals is successfully applied. The computational efficiency of all approaches is also discussed. The newly introduced methods could be preferred due to visibly better performance and short computational time. The other algorithms behave in a comparable way, and polynomial regression can be preferred due to its short computational time. <s> BIB008
In the basic signal formation model given in Section 2.1, two disturbances affect the desired signal, considered as a linear combination of elementary peaks: an analytical background or baseline, accounting for slowly varying instrumental perturbations, and noise. Both should be removed without distorting the peak shapes. In , the baseline is defined as "the portion of a detector record resulting from only eluant or carrier gas emerging from the column". Broader definitions exist, encompassing more deterministic components such as temperature fluctuations, or even small peaks that cannot be easily distinguished from a notional or arbitrary zero level BIB001, which serves as a reference for quantifying peak properties (height, area). Removing a slowly varying, potentially monotone, trend (Fig. 1-left) from a signal is generally an ill-posed problem, despite its apparent simplicity. The need for (almost) automatic methods is still present, after many attempts based on least-squares fits, wavelet preprocessing, robust or asymmetric error regression, and factor analysis methods BIB005 BIB006 BIB007 BIB008. Given the importance of Savitzky and Golay filters, denoising and filtering have inspired many works in analytical chemistry, for instance BIB002 BIB004. We refer to [11, vol. 2] for a broad overview of both background removal and filtering, while BIB003 provides a focus on the use of wavelet transforms.
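To make the penalized, asymmetric-regression idea concrete, the following Python sketch estimates a baseline by asymmetric least squares on top of a Whittaker smoother, in the spirit of the approaches cited above; the function name and the parameter values (lam, p) are illustrative assumptions, not a reference implementation from any cited work.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    # lam: smoothness penalty (larger -> stiffer baseline)
    # p:   asymmetry; points above the baseline get weight p << 1,
    #      points below get 1 - p, so peaks barely attract the fit
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        # penalized weighted least squares: (W + lam * D D^T) z = W y
        z = spsolve((W + lam * D @ D.T).tocsc(), w * y)
        w = np.where(y > z, p, 1 - p)  # asymmetric reweighting
    return z

# corrected = y - asls_baseline(y) leaves the peaks on a flat background.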
An overview of signal processing issues in chemical sensing <s> Sensor array processing and signal separation <s> Conventional approaches to chemical sensors have traditionally made use of a “lock-and-key” design, wherein a specific receptor is synthesized in order to strongly and highly selectively bind the analyte of interest.1-6 A related approach involves exploiting a general physicochemical effect selectively toward a single analyte, such as the use of the ionic effect in the construction of a pH electrode. In the first approach, selectivity is achieved through recognition of the analyte at the receptor site, and in the second, selectivity is achieved through the transduction process in which the method of detection dictates which species are sensed. Such approaches are appropriate when a specific target compound is to be identified in the presence of controlled backgrounds and interferences. However, this type of approach requires the synthesis of a separate, highly selective sensor for each analyte to be detected. In addition, this type of approach is not particularly useful for analyzing, classifying, or assigning human value judgments to the composition of complex vapor mixtures such as perfumes, beers, foods, mixtures of solvents, etc. <s> BIB001 </s> A novel approach to design low cost/high performance on-line physiological data and water monitoring systems is presented and discussed. The approach is based on a set of solid state chemical sensors (i.e. ISFETs/CHEMFETs), and a post-processing stage which performs the estimation of the ionic activities presented in the solution. This estimation is performed by a non-linear blind separation algorithm that uses some prior knowledge about how sensors simultaneously respond to several ionic activities. Additionally, the presented approach addresses some important problems of these sensors like temperature effects, cross sensitivity and drift; it also takes into account some hardware implementation constraints. Experimental results confirm the effectiveness of the proposed architecture. <s> BIB002 </s> Traditional approaches to gas sensing are usually related with gas identification and classification, i.e., recognition of aromas. In this work we propose an innovative approach to determine the concentration of the single species in a gas mixture by using nonlinear source separation techniques. Additionally, responses of tin oxide sensor arrays were analyzed using nonlinear regression techniques to determine the concentrations of ammonia and acetone in gas mixtures. The use of the source separation approach allows the compensation of some of the most important sensor disadvantages: the parameter spreading and time drift. <s> BIB003 </s> This paper addresses blind-source separation in the case where both the source signals and the mixing coefficients are non-negative. The problem is referred to as non-negative source separation and the main application concerns the analysis of spectrometric data sets.
The separation is performed in a Bayesian framework by encoding non-negativity through the assignment of Gamma priors on the distributions of both the source signals and the mixing coefficients. A Markov chain Monte Carlo (MCMC) sampling procedure is proposed to simulate the resulting joint posterior density from which marginal posterior mean estimates of the source signals and mixing coefficients are obtained. Results obtained with synthetic and experimental spectra are used to discuss the problem of non-negative source separation and to illustrate the effectiveness of the proposed method. <s> BIB004 </s> Fundamentals.- Semiconductor Structures as Chemical Sensors.- Mass-Sensitive Sensors.- Conductivity Sensors and Capacitive Sensors.- Thermometric and Calorimetric Sensors.- Electrochemical Sensors.- Optical Sensors.- Chemical Sensors as Detectors and Indicators.- Sensor Arrays and Micro Total Analysis Systems. <s> BIB005 </s> Potentiometry with ion-selective electrodes (ISEs) provides a simple and cheap approach for estimating ionic activities. However, a well-known shortcoming of ISEs regards their lack of selectivity. Recent works have suggested that smart sensor arrays equipped with a blind source separation (BSS) algorithm offer a promising solution to the interference problem. In fact, the use of blind methods eases the time-demanding calibration stages needed in the typical approaches. In this work, we develop a Bayesian source separation method for processing the outputs of an ISE array. The major benefit brought by the Bayesian framework is the possibility of taking into account some prior information, which can result in more realistic solutions. Concerning the inference stage, it is conducted by means of Markov chain Monte Carlo (MCMC) methods. The validity of our approach is supported by experiments with artificial data and also in a scenario with real data. <s> BIB006 </s> Smart sensors arrays (SSAs) provide a flexible approach to deal with the interference problem typical of ion-selective electrodes (ISEs). The development of the core of a SSA, the signal processing algorithm, often requires a dataset containing input-output measurements. Motivated by that, this letter presents a set of experiments with arrays of ISEs. The acquired dataset is publicly available in a web page where published results with these data can be added for benchmarking. <s> BIB007
In analytical chemistry, a major issue is the lack of selectivity of some sensors. Large efforts have been undertaken to develop new materials providing more selective sensors. However, despite the good results achieved by this approach, it usually leads to expensive solutions. An interesting alternative is to resort to sensing mechanisms based on diversity: instead of considering only one sensor, data are acquired by an array of sensors that respond differently to a given analyte. The acquired data can then be treated by signal processing methods with the aim of retrieving the desired information. This approach, which will be referred to as Smart Sensor Arrays (SSA), is usually adopted in the estimation of ionic concentrations and in gas analysis BIB001. Since the sensors within the array are not necessarily selective in the SSA approach, the acquired signals correspond to mixtures of the desired signals, e.g., the temporal evolution of the activities of different ions. Therefore, the problem here is to retrieve the original signals (sources) from a set of mixtures of these sources. If one has access to a set of training (or calibration) data, this problem can be solved by multivariate regression methods. On the other hand, if only the mixtures are available, one can use blind source separation (BSS) methods. An interesting feature of blind (or unsupervised) methods in chemistry is the possibility of avoiding (or simplifying) calibration stages, which are usually time demanding. BSS methods have been providing interesting results in SSAs composed of potentiometric sensors used for measuring ionic activities. The main challenge in this application comes from the fact that potentiometric sensors are, as a rule, nonlinear devices BIB005. As a result, the mixing processes are nonlinear, thus requiring advanced BSS methods. For instance, in BIB002, nonlinear BSS techniques based on Independent Component Analysis (ICA) were proposed for dealing with the mixing model that stems from potentiometric sensors. In BIB003, ICA methods were also applied to process the data acquired by arrays of tin oxide gas sensors, whose resulting mixing model is nonlinear, too. Despite the good results provided by ICA in these applications, it has a major limitation: ICA relies on the statistical independence of the sources. Unfortunately, many chemical sources are clearly dependent. Consequently, other priors on the sources have to be used, such as positivity or sparsity. In order to deal with dependent sources in potentiometric arrays, BIB006 proposed a Bayesian BSS method. Since this approach is not based on source statistical independence and can easily take positivity constraints into account as statistical priors, it may provide good results even when the sources are highly correlated; this fact was illustrated on an actual dataset acquired by an array of ion-selective electrodes 3 . Bayesian BSS methods were also applied to separate mixtures of spectra obtained from NIR spectroscopy BIB004. Again, the sources (the spectra of cyclopentane, cyclohexane, and n-pentane) were dependent, which hindered the application of ICA-based solutions. 3 The dataset used in this work is publicly available BIB007. Beyond SSA processing, source separation methods have been successfully used for solving many other issues in chemical engineering, e.g., the separation of molecular contributions in mass spectrograms or spectral unmixing in hyperspectral imaging.
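As a toy illustration of why the independence assumption matters, the sketch below unmixes two synthetic, overlapping (hence dependent) non-negative spectra with both ICA and a positivity-based factorization (NMF); the mixing matrix and source shapes are invented for the example and do not come from the cited studies.

import numpy as np
from sklearn.decomposition import FastICA, NMF

t = np.linspace(0.0, 1.0, 500)
s1 = np.exp(-((t - 0.30) ** 2) / 0.002)          # first "spectrum"
s2 = 0.7 * np.exp(-((t - 0.35) ** 2) / 0.004)    # overlaps s1 -> dependent
S = np.vstack([s1, s2])                          # sources, shape (2, 500)

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                       # non-selective sensor gains
X = A @ S                                        # observed mixtures (2, 500)

# ICA assumes statistically independent sources and may struggle here
S_ica = FastICA(n_components=2, random_state=0).fit_transform(X.T).T

# NMF encodes the positivity prior on both sources and mixing coefficients
nmf = NMF(n_components=2, init="nndsvd", max_iter=2000)
S_nmf = nmf.fit_transform(X.T).T                 # estimated sources
A_nmf = nmf.components_.T                        # estimated mixing matrix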
An overview of signal processing issues in chemical sensing <s> Matching, identification, and learning <s> Witnessing the swift advances in the electronic means of seeing and hearing, scientists and engineers scent a market for systems mimicking the human nose. Already commercial systems from several companies are targeting applications, present and potential, that range from quality assurance of food and drugs to medical diagnosis, environmental monitoring, safety and security and military use. Here, the authors outline the major transducer technologies-in one sense, the key component of an electronic nose. <s> BIB001 </s> An overview of signal processing issues in chemical sensing <s> Matching, identification, and learning <s> Development of promising sensor instrument — “electronic tongue” based on sensor arrays with data processing by pattern recognition methods have been described. The attention is paid to “electronic tongue” based on an array of original non-specific (non-selective) potentiometric chemical sensors with chalcogenide glass membranes. Principles of research, criteria for the development of non-selective sensing materials, pattern recognition methods have been described. Possible applications and some results of integral qualitative analysis of beverages and of quantitative analysis of complex liquids, containing heavy metals are reported. Discriminating power obtained and possibility of multicomponent analysis permit to consider “electronic tongue” as a perspective analytical concept. <s> BIB002 </s> An overview of signal processing issues in chemical sensing <s> Matching, identification, and learning <s> The purposes of this tutorial are twofold. First, it reviews the classical statistical learning scenario by highlighting its fundamental taxonomies and its key aspects. The second aim of the paper is to introduce some modern (ensembles) methods developed inside the machine learning field. The tutorial starts by putting the topic of supervised learning into the broader context of data analysis and by reviewing the classical pattern recognition methods: those based on class-conditional density estimation and the use of the Bayes theorem and those based on discriminant functions. The fundamental topic of complexity control is treated in some detail. Ensembles techniques have drawn considerable attention in recent years: a set of learning machines increases classification accuracy with respect to a single machine. Here, we introduce boosting, in which classifiers adaptively concentrate on the harder examples located near to the classification boundary and output coding, where a set of independent two-class machines solves a multiclass problem. The first successful applications of these methods to data produced by the Pico-2 electronic nose (EN), developed at the University of Brescia, Brescia, Italy, are also briefly shown. <s> BIB003
Higher-level data processing tasks, such as matching and pattern recognition, are also common in chemical applications. A typical example in this context is the problem of peak matching in chromatography. The output of this laboratory technique comprises a set of chromatograms, which in turn are composed of peaks. Ideally, different material samples containing the same components in different proportions should display peaks at the same positions on the ordinal axis. However, due to various experimental issues, one may observe shifts between these peaks, which may be harmful for subsequent analyses. Most solutions to peak alignment are based on time warping (see for a comparison between three matching methods). Another example of chemical systems relying on high-level data processing can be found in electronic noses BIB001 and tongues BIB002. These systems are mainly adopted to perform automatic recognition of chemical compounds, and they are based on several signal processing stages. First, the signals acquired by the array are pre-processed with the aim of mitigating impairments such as sensor drift. Second, feature extraction is performed in order to (1) reduce the dimensionality of the data and (2) extract the relevant information that will feed the automatic classifier. Feature extraction methods used in electronic noses and tongues include principal component analysis (PCA) and self-organizing maps (SOM) BIB003. The last step is classification, which can be carried out by artificial neural networks such as the multilayer perceptron, as well as by support vector machines and Bayesian classifiers BIB001.
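The three stages just described map directly onto a standard processing chain; the sketch below is a minimal scikit-learn version, where X (one row of array responses per measurement) and y (analyte labels) are assumed to be available, and the component choices (PCA, RBF-kernel SVM) are just one plausible configuration among those listed above.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

e_nose = make_pipeline(
    StandardScaler(),      # pre-processing: normalize sensor responses
    PCA(n_components=5),   # feature extraction / dimensionality reduction
    SVC(kernel="rbf"),     # classification stage
)

# scores = cross_val_score(e_nose, X, y, cv=5)   # X, y: your array data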
Bias in data‐driven artificial intelligence systems—An introductory survey <s> <s> Computational sensemaking aims to develop methods and systems to “make sense” of complex data and information. The ultimate goal is then to provide insights and enhance understanding for supporting subsequent intelligent actions. Understandability and interpretability are key elements of that process as well as models and patterns captured therein. Here, declarativity helps to include guiding knowledge structures into the process, while explication provides interpretability, transparency, and explainability. This paper provides an overview of the key points and important developments in these areas, and outlines future potential and challenges. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> <s> As artificial intelligence (AI) systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination. Within the AI research community, this topic remains less familiar to many researchers. In this paper, we complement existing surveys, which largely focused on the psychological, social and legal discussions of the topic, with an analysis of recent advances in technical solutions for AI governance. By reviewing publications in leading AI conferences including AAAI, AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; and 4) ethics in human-AI interactions. We highlight the intuitions and key techniques used in each approach, and discuss promising future research directions towards successful integration of ethical AI systems into human societies. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> <s> In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective. <s> BIB003
provides a visual map of the topics discussed in this survey. This paper complements existing surveys that either have a strong focus on machine ethics, such as BIB002, study a specific subproblem, such as explaining black-box models BIB001 BIB003, or focus on specific contexts, such as the Web (Baeza-Yates, 2018), by providing a broad categorization of the technical challenges and solutions, a comprehensive coverage of the different lines of research, as well as their legal grounds. We are aware that the problems of bias and discrimination are not limited to AI, and that the technology can be deployed (consciously or unconsciously) in ways that reflect, amplify, or distort real-world perceptions and the status quo. Therefore, as the roots of these problems are not only technological, it is naive to believe that technological solutions alone will suffice. Rather, more than technical solutions are required, including socially acceptable definitions of fairness and meaningful interventions to ensure the long-term well-being of all groups. These challenges require multidisciplinary perspectives and a constant dialogue with society, as bias and fairness are multifaceted and volatile notions. Nevertheless, as AI technology penetrates our lives, it is extremely important for technology creators to be aware of bias and discrimination and to ensure responsible usage of the technology, keeping in mind that a technological approach on its own is no panacea for all sorts of bias and AI problems.
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Socio-technical causes of bias <s> The Dictionary of Media and Communication is an authoritative and wide-ranging A-Z providing over 2,200 entries on terms used in media and communication, from concepts and theories to technical terms, across subject areas that include advertising, digital culture, journalism, new media, radio studies, and telecommunications. It also covers relevant terminology from related disciplines such as literary theory, semiotics, cultural studies, and philosophy. The entries are extensively cross-referenced, allowing the reader to link related concepts that span different discourses with ease. It is an indispensable guide for undergraduate students on degree courses in media or communication studies, and also for those taking related subjects such as film studies, visual culture, and cultural studies. With highly relevant web links to key essays, images, examples, and websites which complement the A-Z entries, all updated and accessed via a companion webpage, as well as a biographical appendix with web links to key people, this is a valuable resource for media professionals, postgraduates, academics, and researchers and an eminently practical and user-friendly reference for anyone involved in the worlds of media and communication. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Socio-technical causes of bias <s> Large-scale databases of human activity in social media have captured scientific and policy attention, producing a flood of research and discussion. This paper considers methodological and conceptual challenges for this emergent field, with special attention to the validity and representativeness of social media big data analyses. Persistent issues include the over-emphasis of a single platform, Twitter, sampling biases arising from selection by hashtags, and vague and unrepresentative sampling frames. The socio-cultural complexity of user behavior aimed at algorithmic invisibility (such as subtweeting, mock-retweeting, use of "screen captures" for text, etc.) further complicate interpretation of big data social media. Other challenges include accounting for field effects, i.e. broadly consequential events that do not diffuse only through the network under study but affect the whole society. The application of network methods from other fields to the study of human social activity may not always be appropriate. The paper concludes with a call to action on practical steps to improve our analytic capacity in this promising, rapidly-growing field. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Socio-technical causes of bias <s> Every day automated algorithms make decisions that can amplify the power of businesses and governments. Yet as algorithms come to regulate more aspects of our lives, the contours of their power can remain difficult to grasp. This paper studies the notion of algorithmic accountability reporting as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society. A framework for algorithmic power based on autonomous decision-making is proffered and motivates specific questions about algorithmic influence. 
Five cases of algorithmic accountability reporting involving the use of reverse engineering methods in journalism are then studied and analyzed to provide insight into the method and its application in a journalism context. The applicability of transparency policies for algorithms is discussed alongside challenges to implementing algorithmic accountability as a broadly viable investigative method. <s> BIB003 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Socio-technical causes of bias <s> Homophily can put minority groups at a disadvantage by restricting their ability to establish links with a majority group or to access novel information. Here, we show how this phenomenon can influence the ranking of minorities in examples of real-world networks with various levels of heterophily and homophily ranging from sexual contacts, dating contacts, scientific collaborations, and scientific citations. We devise a social network model with tunable homophily and group sizes, and demonstrate how the degree ranking of nodes from the minority group in a network is a function of (i) relative group sizes and (ii) the presence or absence of homophilic behaviour. We provide analytical insights on how the ranking of the minority can be improved to ensure the representativeness of the group and correct for potential biases. Our work presents a foundation for assessing the impact of homophilic and heterophilic behaviour on minorities in social networks. <s> BIB004
AI relies heavily on data generated by humans (e.g., user-generated content) or collected via systems created by humans. Therefore, whatever biases exist in humans enter our systems and, even worse, may be amplified by complex sociotechnical systems, such as the Web. 3 As a result, algorithms may reproduce (or even reinforce) existing inequalities or discrimination BIB004. Within societies, certain social groups may be disadvantaged, which usually results in "institutional bias": a tendency for the procedures and practices of particular institutions to operate in ways that advantage some social groups and disadvantage others. This need not be the result of conscious discrimination but rather of the majority simply following existing norms. Institutional racism and sexism are common examples BIB001. Algorithms are part of existing (biased) institutions and structures, but they may also amplify or introduce bias, as they favor those phenomena and aspects of human behavior that are easily quantifiable over those that are hard or even impossible to measure. This problem is exacerbated by the fact that certain data may be easier to access and analyze than others, which has caused, for example, the role of Twitter in various societal phenomena to be overemphasized BIB002. Once introduced, algorithmic systems encourage the creation of very specific data collection infrastructures and policies; for example, they may suggest tracking and surveillance (Introna & Wood, 2004), which then changes or amplifies power relations. Algorithms thus shape societal institutions and potential interventions, and vice versa. It is currently not entirely clear how this complex interaction between algorithms and structures plays out in our societies. Scholars have thus called for "algorithmic accountability" to improve understanding of the power structures, biases, and influences that algorithms exercise in society BIB003.
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Sensitive features and causal influences <s> Increasing numbers of decisions about everyday life are made using algorithms. By algorithms we mean predictive models (decision rules) captured from historical data using data mining. Such models often decide prices we pay, select ads we see and news we read online, match job descriptions and candidate CVs, decide who gets a loan, who goes through an extra airport security check, or who gets released on parole. Yet growing evidence suggests that decision making by algorithms may discriminate people, even if the computing process is fair and well-intentioned. This happens due to biased or non-representative learning data in combination with inadvertent modeling procedures. From the regulatory perspective there are two tendencies in relation to this issue: (1) to ensure that data-driven decision making is not discriminatory, and (2) to restrict overall collecting and storing of private data to a necessary minimum. This paper shows that from the computing perspective these two goals are contradictory. We demonstrate empirically and theoretically with standard regression models that in order to make sure that decision models are non-discriminatory, for instance, with respect to race, the sensitive racial information needs to be used in the model building process. Of course, after the model is ready, race should not be required as an input variable for decision making. From the regulatory perspective this has an important implication: collecting sensitive personal data is necessary in order to guarantee fairness of algorithms, and law making needs to find sensible ways to allow using such data in the modeling process. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Sensitive features and causal influences <s> In this work, we argue for the importance of causal reasoning in creating fair algorithms for decision making. We give a review of existing approaches to fairness, describe work in causality necessary for the understanding of causal approaches, argue why causality is necessary for any approach that wishes to be fair, and give a detailed analysis of the many recent approaches to causality-based fairness. <s> BIB002
Data encode a number of people's characteristics in the form of feature values. Sensitive characteristics that identify grounds of discrimination or bias may or may not be present. Removing or ignoring such sensitive features does not prevent learning biased models, because other correlated features (also known as redundant encodings) may be used as proxies for them. For example, neighborhoods in U.S. cities are highly correlated with race, and this fact has been used for the systematic denial of services such as bank loans or same-day purchase delivery. 4 Rather, including sensitive features in the data may be beneficial in the design of fair models BIB001. Sensitive features may also be correlated with the target feature that classification models want to predict. For example, a minority's preference for red cars may induce bias against the minority in predicting accident rates if red cars are also preferred by aggressive drivers. Higher insurance premiums may then be set for red car owners, which disproportionately impacts minority members. Simple correlations between apparently neutral features can thus lead to biased decisions. Discovering and understanding causal influences among variables is a fundamental tool for dealing with bias, as recognized in legal circles (Foster, 2004) and in medical research. The interested reader is referred to the recent survey on causal approaches to fairness in classification models BIB002.
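A small synthetic experiment (invented numbers, for illustration only) makes the redundant-encoding effect tangible: even though the sensitive attribute s is excluded from the features, correlated proxies keep the model's decisions group-dependent.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
s = rng.integers(0, 2, n)                            # sensitive attribute
zip_code = np.where(rng.random(n) < 0.9, s, 1 - s)   # proxy, 90% aligned
income = rng.normal(50 + 10 * s, 5, n)               # historically biased feature
y = (income + rng.normal(0, 5, n) > 55).astype(int)

X = np.column_stack([zip_code, income])              # s itself is NOT used
pred = LogisticRegression().fit(X, y).predict(X)

print("positive rate, group 0:", pred[s == 0].mean())
print("positive rate, group 1:", pred[s == 1].mean())
# The gap persists: correlated features act as proxies for s.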
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Representativeness of data <s> Nowadays, more and more decision procedures are supported or even guided by automated processes. An important technique in this automation is data mining. In this chapter we study how such automatically generated decision support models may exhibit discriminatory behavior towards certain groups based upon, e.g., gender or ethnicity. Surprisingly, such behavior may even be observed when sensitive information is removed or suppressed and the whole procedure is guided by neutral arguments such as predictive accuracy only. The reason for this phenomenon is that most data mining methods are based upon assumptions that are not always satisfied in reality, namely, that the data is correct and represents the population well. In this chapter we discuss the implicit modeling assumptions made by most data mining algorithms and show situations in which they are not satisfied. Then we outline three realistic scenarios in which an unbiased process can lead to discriminatory models. The effects of the implicit assumptions not being fulfilled are illustrated by examples. The chapter concludes with an outline of the main challenges and problems to be solved. <s> BIB001 </s> State-of-the-art decision tree methods apply heuristics recursively to create each split in isolation, which may not capture well the underlying characteristics of the dataset. The optimal decision tree problem attempts to resolve this by creating the entire decision tree at once to achieve global optimality. In the last 25 years, algorithmic advances in integer optimization coupled with hardware improvements have resulted in an astonishing 800 billion factor speedup in mixed-integer optimization (MIO). Motivated by this speedup, we present optimal classification trees, a novel formulation of the decision tree problem using modern MIO techniques that yields the optimal decision tree for axes-aligned splits. We also show the richness of this MIO formulation by adapting it to give optimal classification trees with hyperplanes that generates optimal decision trees with multivariate splits. Synthetic tests demonstrate that these methods recover the true decision tree more closely than heuristics, refuting the notion that optimal methods overfit the training data. We comprehensively benchmark these methods on a sample of 53 datasets from the UCI machine learning repository. We establish that these MIO methods are practically solvable on real-world datasets with sizes in the 1000s, and give average absolute improvements in out-of-sample accuracy over CART of 1-2% and 3-5% for the univariate and multivariate cases, respectively. Furthermore, we identify that optimal classification trees are likely to outperform CART by 1.2-1.3% in situations where the CART accuracy is high and we have sufficient training data, while the multivariate version outperforms CART by 4-7% when the CART accuracy or dimension of the dataset is low. <s> BIB002
Statistical (including ML) inferences require that the data from which the model was learned be representative of the data on which it is applied. However, data collection often suffers from biases that lead to the over- or under-representation of certain groups, especially in big data, where many data sets have not been created with the rigor of a statistical study but are the by-product of other activities with different, often operational, goals BIB002. Frequently occurring biases include selection bias (certain individuals are more likely to be selected for study), often in the form of self-selection bias, and the reverse exclusion bias; reporting bias (observations of a certain kind are more likely to be reported, which leads to a sort of selection bias on observations); and detection bias (a phenomenon is more likely to be observed for a particular set of subjects). Analogous biases can lead to the under- or over-representation of properties of individuals. If the mis-represented groups coincide with social groups against which there already exists social bias such as prejudice or discrimination, even "unbiased computational processes can lead to discriminative decision procedures" BIB001. Mis-representation in the data can lead to vicious cycles that perpetuate discrimination and disadvantage BIB002. Such "pernicious feedback loops" (O'Neil, 2016) can occur both with under-representation of historically disadvantaged groups, for example, women and people of color in IT developer communities and image datasets, and with over-representation, for example, black people in drug-related arrests (Lum & Isaac, 2016).
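The effect of selection bias is easy to simulate; in the toy example below (all rates invented), members of group B are less likely to enter the sample than members of group A, so the pooled estimate drifts away from the population value.

import numpy as np

rng = np.random.default_rng(42)

# True outcome rates in two equally sized population groups
pop_a = rng.random(10_000) < 0.9
pop_b = rng.random(10_000) < 0.6

# Selection bias: group B members are only half as likely to be sampled
sample_a = pop_a[rng.random(10_000) < 0.8]
sample_b = pop_b[rng.random(10_000) < 0.4]

population_rate = np.concatenate([pop_a, pop_b]).mean()
sample_rate = np.concatenate([sample_a, sample_b]).mean()
print(f"population: {population_rate:.3f}  biased sample: {sample_rate:.3f}")
# The under-representation of group B makes the estimate overly optimistic.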
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Data modalities and bias <s> The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias. <s> BIB001 </s> Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications. <s> BIB002
Data come in different modalities (numerical, textual, images, etc.) as well as in multimodal representations (e.g., audio-visual content). Most fairness-aware ML approaches refer to structured data represented in some fixed feature space, but data modality-specific approaches also exist, especially for textual data and images. Bias in language has attracted a lot of recent interest, with many studies exposing a large number of offensive associations related to gender and race in publicly available word embeddings BIB001, as well as how these associations have evolved over time BIB002. The same holds for the computer vision community, where standard image collections like MNIST are used for training, or off-the-shelf pretrained models are used as feature extractors, under the assumption that the collections comprise representative samples of the real world. In reality, though, the collections can be biased, as many recent studies have indicated. For instance, one study found that commercial facial recognition services perform much better on lighter male subjects than on darker female ones. Overall, the additional layer of feature extraction that is typically used within AI-based multimodal analysis systems makes it even more challenging to trace the source of bias in such systems.
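The gender-direction analysis of BIB001 can be sketched in a few lines; the four-dimensional vectors below are hypothetical stand-ins used purely for illustration, whereas a real analysis would load pretrained embeddings.

import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

emb = {  # hypothetical embeddings, for illustration only
    "he":       np.array([0.8, 0.1, 0.0, 0.1]),
    "she":      np.array([-0.8, 0.1, 0.0, 0.1]),
    "engineer": np.array([0.5, 0.6, 0.2, 0.0]),
    "nurse":    np.array([-0.5, 0.6, 0.2, 0.0]),
}

gender_direction = emb["he"] - emb["she"]
for word in ("engineer", "nurse"):
    print(word, round(cosine(emb[word], gender_direction), 3))
# Positive scores indicate a "male" association, negative a "female" one;
# on real embeddings, many occupation words show such skews.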
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | How is fairness defined? <s> This paper opens up for scrutiny the politics of algorithmic surveillance through an examination of Facial Recognition Systems (FRSs) in video surveillance, showing that seemingly mundane design decisions may have important political consequences that ought to be subject to scrutiny. It first focuses on the politics of technology and algorithmic surveillance systems in particular: considering the broad politics of technology; the nature of algorithmic surveillance and biometrics, claiming that software algorithms are a particularly important domain of techno-politics; and finally considering both the growth of algorithmic biometric surveillance and the potential problems with such systems. Secondly, it gives an account of FRS's, the algorithms upon which they are based, and the biases embedded therein. In the third part, the ways in which these biases may manifest itself in real world implementation of FRS's are outlined. Finally, some policy suggestions for the future development of FRS's are made; it is noted that the most common critiques of such systems are based on notions of privacy which seem increasingly at odds with the world of automated systems. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | How is fairness defined? <s> Society is increasingly relying on data-driven predictive models for automated decision making. This is not by design, but due to the nature and noisiness of observational data, such models may systematically disadvantage people belonging to certain categories or groups, instead of relying solely on individual merits. This may happen even if the computing process is fair and well-intentioned. Discrimination-aware data mining studies of how to make predictive models free from discrimination, when the historical data, on which they are built, may be biased, incomplete, or even contain past discriminatory decisions. Discrimination-aware data mining is an emerging research discipline, and there is no firm consensus yet of how to measure the performance of algorithms. The goal of this survey is to review various discrimination measures that have been used, analytically and computationally analyze their performance, and highlight implications of using one or another measure. We also describe measures from other disciplines, which have not been used for measuring discrimination, but potentially could be suitable for this purpose. This survey is primarily intended for researchers in data mining and machine learning as a step towards producing a unifying view of performance criteria when developing new algorithms for non-discriminatory predictive modeling. In addition, practitioners and policy makers could use this study when diagnosing potential discrimination by predictive models. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | How is fairness defined? <s> Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. 
In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school. <s> BIB003 </s> Algorithm fairness has started to attract the attention of researchers in AI, Software Engineering and Law communities, with more than twenty different notions of fairness proposed in the last few years. Yet, there is no clear agreement on which definition to apply in each situation. Moreover, the detailed differences between multiple definitions are difficult to grasp. To address this issue, this paper collects the most prominent definitions of fairness for the algorithmic classification problem, explains the rationale behind these definitions, and demonstrates each of them on a single unifying case-study. Our analysis intuitively explains why the same case can be considered fair according to some definitions and unfair according to others. <s> BIB004 </s> The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes (like race, gender, and their proxies) are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area. <s> BIB005
More than 20 different definitions of fairness have appeared thus far in the computer science literature BIB004 BIB002; some of these definitions, and related ones, were already proposed and investigated in work on formalizing fairness in other disciplines, such as education, over the past 50 years BIB001. Existing fairness definitions can be categorized into: (a) "predicted outcome," (b) "predicted and actual outcome," (c) "predicted probabilities and actual outcome," (d) "similarity based," and (e) "causal reasoning" BIB004. "Predicted outcome" definitions rely solely on a model's predictions (e.g., demographic parity compares the percentages of the protected and non-protected groups assigned to the positive class). "Predicted and actual outcome" definitions combine a model's predictions with the true labels (e.g., equalized odds requires false positive and false negative rates to be similar for the protected and non-protected groups). "Predicted probabilities and actual outcome" definitions employ the predicted probabilities instead of the predicted outcomes (e.g., good calibration requires that, for any predicted probability score, the protected and non-protected groups have the same probability of truly belonging to the positive class). Contrary to definitions (a)-(c), which consider only the sensitive attribute, "similarity based" definitions also employ the non-sensitive attributes (e.g., fairness through awareness states that similar individuals must be treated equally). Finally, "causal reasoning" definitions are based on directed acyclic graphs that capture the relations between features and their impact on the outcomes via structural equations (e.g., counterfactual fairness BIB003 constructs a graph that verifies whether the attributes defining the outcome are correlated with the sensitive attribute). Despite the many formal, mathematical definitions of fairness proposed over the last years, the problem of formalizing fairness is still open, and a thorough discussion of the merits and demerits of the different measures is still missing. BIB005 show the statistical limitations of prevailing mathematical definitions of fairness and the (negative) effect of enforcing such fairness measures on group well-being, and urge the community to explicitly focus on the consequences of potential interventions.
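Two of the group-fairness notions above are straightforward to compute from binary predictions; in the sketch below, y_true, y_pred, and the binary sensitive attribute s are assumed NumPy arrays, and the function names are our own.

import numpy as np

def demographic_parity_diff(y_pred, s):
    # "Predicted outcome": gap in positive-prediction rates across groups
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equalized_odds_diff(y_true, y_pred, s):
    # "Predicted and actual outcome": max gap in TPR and FPR across groups
    def rates(g):
        tpr = y_pred[(s == g) & (y_true == 1)].mean()
        fpr = y_pred[(s == g) & (y_true == 0)].mean()
        return tpr, fpr
    (tpr0, fpr0), (tpr1, fpr1) = rates(0), rates(1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

# Example with toy arrays:
# y_true = np.array([1, 0, 1, 1, 0, 0]); y_pred = np.array([1, 0, 0, 1, 1, 0])
# s = np.array([0, 0, 0, 1, 1, 1])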
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Preprocessing approaches <s> In this paper we study the problem of classifier learning where the input data contains unjustified dependencies between some data attributes and the class label. Such cases arise for example when the training data is collected from different sources with different labeling criteria or when the data is generated by a biased decision process. When a classifier is trained directly on such data, these undesirable dependencies will carry over to the classifier’s predictions. In order to tackle this problem, we study the classification with independency constraints problem: find an accurate model for which the predictions are independent from a given binary attribute. We propose two solutions for this problem and present an empirical validation. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Preprocessing approaches <s> With the support of the legally-grounded methodology of situation testing, we tackle the problems of discrimination discovery and prevention from a dataset of historical decisions by adopting a variant of k-NN classification. A tuple is labeled as discriminated if we can observe a significant difference of treatment among its neighbors belonging to a protected-by-law group and its neighbors not belonging to it. Discrimination discovery boils down to extracting a classification model from the labeled tuples. Discrimination prevention is tackled by changing the decision value for tuples labeled as discriminated before training a classifier. The approach of this paper overcomes legal weaknesses and technical limitations of existing proposals. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Preprocessing approaches <s> Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy. <s> BIB003
Approaches in this category focus on the data, the primary source of bias, and aim to produce a "balanced" dataset that can then be fed into any learning algorithm. The intuition behind these approaches is that the fairer the training data are, the less discriminatory the resulting model will be. Such methods modify the original data distribution by altering the class labels of carefully selected instances close to the decision boundary or in local neighborhoods BIB002, by assigning different weights to instances based on their group membership BIB001, or by carefully sampling from each group. These methods use heuristics that aim to balance the protected and unprotected groups in the training set; however, their impact on the learned model is not well controlled, despite their efforts to keep the data interventions minimal. Recently, BIB003 proposed a probabilistic fairness-aware framework that alters the data distribution towards fairness while controlling the per-instance distortion and preserving the data utility for learning.
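The instance-weighting idea of BIB001 admits a compact sketch: each (group, label) cell is weighted by the ratio between its expected frequency under independence and its observed frequency, so that group membership and class label decorrelate in the weighted training set; here s and y are assumed binary NumPy arrays and the function name is ours.

import numpy as np

def reweighing_weights(s, y):
    w = np.empty(len(y), dtype=float)
    for g in (0, 1):
        for c in (0, 1):
            cell = (s == g) & (y == c)
            expected = (s == g).mean() * (y == c).mean()  # if independent
            observed = cell.mean()                        # actual frequency
            w[cell] = expected / observed if observed > 0 else 0.0
    return w

# The weights plug into most learners, e.g.:
# LogisticRegression().fit(X, y, sample_weight=reweighing_weights(s, y))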
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | In-processing approaches <s> Antidiscrimination law and scholarship for a long time has been engaged in the debate over whether a discriminatory intent or disparate impact test best captures the type of discrimination the law should, or can, prohibit. In this article, I suggest that we move beyond this dichotomous debate and focus instead on how courts reason about discrimination cases brought under both the intent and impact doctrines. I identify a distinct pattern, or framework, in the way courts reason about discrimination in both types of cases that defies neat doctrinal labels. This reasoning process, which I short handedly refer to as causation, is at the heart of evidentiary structures in both intent and impact actions. Unfortunately, the reigning distinction between intentional and disparate impact discrimination, an increasingly blurry one, has obscured the more important focus on the element of causation which, in my view, constitutes the normative core of antidiscrimination law. Beyond illustrating this common causal element, a close examination of the causal reasoning processes in antidiscrimination law provides a window into understanding why, despite the existence of generous evidentiary mechanisms, intent and impact actions have ceased being a viable avenue of relief for discrimination plaintiffs. What this examination reveals is that courts in intent and impact actions share a common way of reasoning about discrimination and, in particular, the causal inquiry at the heart of discrimination claims. Both intent and impact causes of action are premised on a three-step process of causal inquiry: status inference, neutral explanation, and causal attribution. This three-step causal inquiry is itself based in, and reliant upon, two types of reasoning processes - counterfactual and contrastive thinking - that social scientists have found dominate causal determinations. These two reasoning processes not only permeate causal thinking, but are also shaped by various influences - normative expectations and cognitive biases, for example - that critically shape these reasoning processes and can have a determinative role on causal attributions. Informed by this research, I illustrate that the causal inquiry at the heart of evidentiary structures in both intent and impact actions has, over time, become vulnerable in two respects. First, the evidentiary structures employed to discern causation in intent and impact cases are deeply vulnerable to attribution mistakes that may occur as a result of unconscious stereotypes and cognitive biases that can distort the causal attribution judgments by legal fact finders. As others have written, unconscious biases and cognitive stereotypes account for much of modern day discrimination. Legal decision makers and fact finders not apt to detect these biases either in themselves or in others when evaluating discrimination claims. This article demonstrates how the same cognitive biases that give rise to discrimination in the society can also distort causal judgments about that discrimination. This danger is embedded in the evidentiary frameworks in both intent and impact cases, which allow the causal inquiry underlying discrimination claims to be determined by comparative reasoning exercises - i.e. 
explanations and analysis seeking to distinguish disparately treated and affected individuals and groups - that invite reliance on the very stereotypical categorization structures at the root of status discrimination. Many discrimination claims fail because courts are uneven at best, and often neglectful, in evaluating these explanations against existing antidiscrimination norms. But there is a deeper vulnerability in the evidentiary structure of antidiscrimination law that is more destabilizing to the causal inquiry which lies at its center. That is the erosion of certain normative presumptions that underlie evidentiary structures in intent and impact cases. The Court has rooted its evidentiary frameworks in a set of normative assumptions about the existence, operation, and prevalence of status discrimination in our society. Based on these assumptions, the Court has aided plaintiffs in establishing an inference of discrimination (or status influence) by employing a counterfactual heuristic that imagines what decision making processes and outcomes would look like in a world free of discrimination and deems deviations from those processes and outcomes as evidence of discrimination in a particular case. However, despite the formal retention of these evidentiary structures over time, there has been steady erosion of the normative assumptions underlying them. The erosion of these presumptions has had a correspondingly devastating impact on the ability of plaintiffs to prove status discrimination especially given the increasing distance from the worst and most overt forms of discrimination, the increasing subtle and structural nature of discrimination, and the shift in public attitudes about the existence of status bias. This analysis, then, calls into question the belief among civil rights advocates that survival of the disparate impact cause of action and the dismantling of the intent standard will preserve the civil rights gains of the past. While certainly these two steps would give the appearance of stemming the rollback of these gains, they would ultimately prove to be insufficient and unsatisfactory. Unless this understanding changes in the near future, courts will continue to be an inhospitable forum for discrimination victims and no amount of doctrinal reform will significantly alter the odds that the court will see the increasingly subtle and sophisticated nature of contemporary discrimination. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | In-processing approaches <s> Recently, the following discrimination aware classification problem was introduced: given a labeled dataset and an attribute B, find a classifier with high predictive accuracy that at the same time does not discriminate on the basis of the given attribute B. This problem is motivated by the fact that often available historic data is biased due to discrimination, e.g., when B denotes ethnicity. Using the standard learners on this data may lead to wrongfully biased classifiers, even if the attribute B is removed from training data. Existing solutions for this problem consist in “cleaning away” the discrimination from the dataset before a classifier is learned. In this paper we study an alternative approach in which the non-discrimination constraint is pushed deeply into a decision tree learner by changing its splitting criterion and pruning strategy. 
Experimental evaluation shows that the proposed approach advances the state-of-the-art in the sense that the learned decision trees have a lower discrimination than models provided by previous methods, with little loss in accuracy. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | In-processing approaches <s> With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect individuals' lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair in sensitive features, such as race, gender, religion, and so on. Several researchers have recently begun to attempt the development of analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency. <s> BIB003 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | In-processing approaches <s> Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. <s> BIB004 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | In-processing approaches <s> We study the question of fair clustering under the disparate impact doctrine, where each protected class must have approximately equal representation in every cluster. We formulate the fair clustering problem under both the k-center and the k-median objectives, and show that even with two protected classes the problem is challenging, as the optimum solution can violate common conventions: for instance, a point may no longer be assigned to its nearest cluster center!
En route we introduce the concept of fairlets, which are minimal sets that satisfy fair representation while approximately preserving the clustering objective. We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms. While finding good fairlets can be NP-hard, we proceed to obtain efficient approximation algorithms based on minimum cost flow. We empirically quantify the value of fair clustering on real-world datasets with sensitive attributes. <s> BIB005 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | In-processing approaches <s> The widespread use of ML-based decision making in domains with high societal impact such as recidivism, job hiring and loan credit has raised a lot of concerns regarding potential discrimination. In particular, in certain cases it has been observed that ML algorithms can provide different decisions based on sensitive attributes such as gender or race and therefore can lead to discrimination. Although, several fairness-aware ML approaches have been proposed, their focus has been largely on preserving the overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). The overall accuracy however is not a good indicator of performance in case of class imbalance, as it is biased towards the majority class. As we will see in our experiments, many of the fairness-related datasets suffer from class imbalance and therefore, tackling fairness requires also tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the weights of the instances in each boosting round taking into account a cumulative notion of fairness based upon all current ensemble members, while explicitly tackling class-imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach can achieve parity in true positive and true negative rates for both protected and non-protected groups, while it significantly outperforms existing fairness-aware methods up to 25% in terms of balanced error. <s> BIB006
In-processing approaches reformulate the classification problem by explicitly incorporating the model's discrimination behavior into the objective function, through regularization or constraints, or by training on latent target labels. For example, BIB002 modify the splitting criterion of decision trees so that candidate splits are also evaluated with respect to the protected attribute. BIB003 integrate a regularizer that reduces the effect of "indirect prejudice," that is, the mutual information between the sensitive features and the class labels. BIB001 redefine the classification problem by minimizing an arbitrary loss function subject to an individual-fairness constraint (similar individuals must be treated similarly). BIB004 propose a constraint-based approach for disparate mistreatment (defined in terms of misclassification rates) which can be incorporated into logistic regression and SVMs. In a different direction, Krasanakis, Xioufis, Papadopoulos, and Kompatsiaris (2018) assume the existence of latent fair classes and propose an iterative training approach towards those classes that alters the in-training weights of the instances. BIB006 propose a sequential fair ensemble, AdaFair, which extends the weighted-distribution approach of AdaBoost by also considering the cumulative fairness of the ensemble up to the current boosting round; moreover, it optimizes for balanced error instead of overall error to account for class imbalance. While most in-processing approaches target classification, approaches for the unsupervised case have also emerged recently, for example, the fair-PCA approach of Samadi, Tantipongpipat, Morgenstern, Singh, and Vempala (2018), which forces equal reconstruction errors for the protected and unprotected groups. BIB005 formulate fair clustering as requiring approximately equal representation of each protected group in every cluster and define fair variants of clustering under the classical k-center and k-median objectives.
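To make the regularization idea concrete, the following minimal sketch trains a logistic regression whose loss is augmented with a fairness penalty. For brevity, it penalizes the squared gap in mean predicted probability between the two groups (a simple differentiable surrogate for statistical parity) rather than the exact mutual-information regularizer of BIB003 ; the data and all names are illustrative.

```python
# Minimal sketch of an in-processing approach: logistic regression with a
# fairness penalty added to the log loss. The penalty is a statistical-parity
# surrogate, not the exact "prejudice" regularizer of the cited work.
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logreg_loss(w, X, y, s, lam):
    """Log loss plus lam * (gap in mean predicted score between groups)^2."""
    p = sigmoid(X @ w)
    eps = 1e-12
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    gap = p[s == 1].mean() - p[s == 0].mean()
    return log_loss + lam * gap ** 2

# Toy data in which feature x2 acts as a proxy for the sensitive attribute s.
rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, n)
x1 = rng.normal(size=n)
x2 = rng.normal(loc=s.astype(float))
X = np.column_stack([np.ones(n), x1, x2])
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

for lam in (0.0, 10.0):  # lam = 0 recovers plain logistic regression
    w = minimize(fair_logreg_loss, np.zeros(X.shape[1]),
                 args=(X, y, s, lam), method="L-BFGS-B").x
    p = sigmoid(X @ w)
    print(f"lambda={lam:4.1f}  accuracy={np.mean((p > 0.5) == y):.3f}  "
          f"parity gap={abs(p[s == 1].mean() - p[s == 0].mean()):.3f}")
```

Increasing the penalty weight shrinks the parity gap at some cost in accuracy, which is the typical fairness-accuracy trade-off that constraint-based formulations such as BIB004 make explicit.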
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> Discrimination in the social sense (e.g., against minorities and disadvantaged groups) is the subject of many laws worldwide, and it has been extensively studied in the social and economic sciences. We tackle the problem of determining, given a dataset of historical decision records, a precise measure of the degree of discrimination suffered by a given group (e.g., an ethnic minority) in a given context (e.g., a geographic area) with respect to the decision (e.g., credit denial). In our approach, this problem is rephrased in a classification rule based setting, and a collection of quantitative measures of discrimination is introduced, on the basis of existing norms and regulations. The measures are defined as functions of the contingency table of a classification rule, and their statistical significance is assessed, relying on a large body of statistical inference methods for proportions. Based on this basic method, we are then able to address the more general problems of: (1) unveiling all discriminatory decision patterns hidden in the historical data, combining discrimination analysis with association rule mining, (2) unveiling discrimination in classifiers that learn over training data biased by discriminatory decisions, and (3) in the case of rule-based classifiers, sanitizing discriminatory rules by correcting their confidence. Our approach is validated on the German credit dataset and on the CPAR classifier. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> In this paper, we investigate how to modify the naive Bayes classifier in order to perform classification that is restricted to be independent with respect to a given sensitive attribute. Such independency restrictions occur naturally when the decision process leading to the labels in the data-set was biased; e.g., due to gender or racial discrimination. This setting is motivated by many cases in which there exist laws that disallow a decision that is partly based on discrimination. Naive application of machine learning techniques would result in huge fines for companies. We present three approaches for making the naive Bayes classifier discrimination-free: (i) modifying the probability of the decision being positive, (ii) training one model for every sensitive attribute value and balancing them, and (iii) adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization. We present experiments for the three approaches on both artificial and real-life data. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> Recently, the following discrimination aware classification problem was introduced: given a labeled dataset and an attribute B, find a classifier with high predictive accuracy that at the same time does not discriminate on the basis of the given attribute B. This problem is motivated by the fact that often available historic data is biased due to discrimination, e.g., when B denotes ethnicity. Using the standard learners on this data may lead to wrongfully biased classifiers, even if the attribute B is removed from training data. Existing solutions for this problem consist in “cleaning away” the discrimination from the dataset before a classifier is learned.
In this paper we study an alternative approach in which the non-discrimination constraint is pushed deeply into a decision tree learner by changing its splitting criterion and pruning strategy. Experimental evaluation shows that the proposed approach advances the state-of-the-art in the sense that the learned decision trees have a lower discrimination than models provided by previous methods, with little loss in accuracy. <s> BIB003 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. We encourage readers to consult the more complete manuscript on the arXiv. <s> BIB004 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> Scientific investigations in medicine and beyond increasingly require observations to be described by more features than can be simultaneously visualized. Simply reducing the dimensionality by projections destroys essential relationships in the data. Similarly, traditional clustering algorithms introduce data bias that prevents detection of natural structures expected from generic nonlinear processes. We examine how these problems can best be addressed, where in particular we focus on two recent clustering approaches, Phenograph and Hebbian learning clustering, applied to synthetic and natural data examples. Our results reveal that already for very basic questions, minimizing clustering bias is essential, but that results can benefit further from biased post-processing. This article is part of the themed issue Mathematical methods in medicine: neuroscience, cardiology and pathology. <s> BIB005 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> Social discrimination is said to occur when an unfavorable decision for an individual is influenced by her membership in certain protected groups such as females and minority ethnic groups. Such discriminatory decisions often exist in historical data. Despite recent works in discrimination-aware data mining, there remains the need for robust, yet easily usable, methods for discrimination control. In this paper, we utilize reject option in classification, a general decision theoretic framework for handling instances whose labels are uncertain, for modeling and controlling discriminatory decisions. Specifically, this framework permits a formal treatment of the intuition that instances close to the decision boundary are more likely to be discriminated in a dataset. Based on this framework, we present three different solutions for discrimination-aware classification. The first solution invokes probabilistic rejection in single or multiple probabilistic classifiers while the second solution relies upon ensemble rejection in classifier ensembles. The third solution integrates one of the first two solutions with situation testing which is a procedure commonly used in the court of law.
All solutions are easy to use and provide strong justifications for the decisions. We evaluate our solutions extensively on four real-world datasets and compare their performances with previously proposed discrimination-aware classifiers. The results demonstrate the superiority of our solutions in terms of both performance and flexibility of applicability. In particular, our solutions are effective at removing illegal discrimination from the predictions. <s> BIB006 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> We present a systematic approach for achieving fairness in a binary classification setting. While we focus on two well-known quantitative definitions of fairness, our approach encompasses many other previously studied definitions as special cases. The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints. We introduce two reductions that work for any representation of the cost-sensitive classifier and compare favorably to prior baselines on a variety of data sets, while overcoming several of their disadvantages. <s> BIB007 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Post-processing approaches <s> A popular methodology for building binary decision-making classifiers in the presence of imperfect information is to first construct a calibrated non-binary "scoring" classifier, and then to post-process this score to obtain a binary decision. We study various fairness (or, error-balance) properties of this methodology, when the non-binary scores are calibrated over all protected groups, and with a variety of post-processing algorithms. Specifically, we show: First, there does not exist a general way to post-process a calibrated classifier to equalize protected groups' positive or negative predictive value (PPV or NPV). For certain "nice" calibrated classifiers, either PPV or NPV can be equalized when the post-processor uses different thresholds across protected groups. Still, when the post-processing consists of a single global threshold across all groups, natural fairness properties, such as equalizing PPV in a nontrivial way, do not hold even for "nice" classifiers. Second, when the post-processing stage is allowed to defer on some decisions (that is, to avoid making a decision by handing off some examples to a separate process), then for the non-deferred decisions, the resulting classifier can be made to equalize PPV, NPV, false positive rate (FPR) and false negative rate (FNR) across the protected groups. This suggests a way to partially evade the impossibility results of Chouldechova and Kleinberg et al., which preclude equalizing all of these measures simultaneously. We also present different deferring strategies and show how they affect the fairness properties of the overall system. We evaluate our post-processing techniques using the COMPAS data set from 2016. <s> BIB008
The third strategy is to post-process the classification model once it has been learned from data, either by altering the model's internals (white-box approaches) or by altering its predictions (black-box approaches). Examples of the white-box approach include correcting the confidence of CPAR classification rules BIB001 , the probabilities in naïve Bayes models BIB002 , or the class labels at the leaves of decision trees BIB003 . White-box approaches have not been developed further in recent years, having been superseded by in-processing methods. Black-box approaches aim at keeping decisions proportionate between the protected and unprotected groups, by promoting or demoting predictions close to the decision boundary BIB006 , by differentiating the decision boundary itself across groups BIB004 , or by wrapping a fair classifier on top of a black-box base classifier BIB007 . An analysis of how to post-process group-wise calibrated classifiers under fairness constraints is given in BIB008 . While the majority of approaches are concerned with classification models, bias post-processing has also been deemed relevant when interpreting clustering models BIB005 .
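As an illustration of the black-box strategy, the sketch below post-processes the scores of a fixed classifier by choosing group-specific decision thresholds so that both groups receive roughly the same fraction of positive decisions. This is a deliberately simplified variant of threshold-based post-processing in the spirit of BIB004 ; the scorer, the parity criterion, and all names are illustrative.

```python
# Minimal sketch of black-box post-processing: keep the trained scorer fixed
# and pick one threshold per group so that each group's acceptance rate
# matches a common target rate.
import numpy as np

def group_thresholds(scores, s, target_rate):
    """Per-group score thresholds that each accept ~target_rate of the group."""
    return {g: np.quantile(scores[s == g], 1.0 - target_rate)
            for g in np.unique(s)}

def apply_thresholds(scores, s, thresholds):
    return np.array([score >= thresholds[g] for score, g in zip(scores, s)])

# A deliberately biased scorer: group 1 systematically receives higher scores.
rng = np.random.default_rng(1)
s = rng.integers(0, 2, 2000)
scores = np.clip(rng.normal(loc=0.4 + 0.2 * s, scale=0.2), 0.0, 1.0)

thresholds = group_thresholds(scores, s, target_rate=0.3)
decisions = apply_thresholds(scores, s, thresholds)
for g in (0, 1):
    print(f"group {g}: threshold={thresholds[g]:.3f}, "
          f"positive rate={decisions[s == g].mean():.3f}")
```

Note that a single global threshold could not equalize the acceptance rates here, which is exactly why differentiating the decision boundary across groups is the lever used by this family of methods.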
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Legal issues of mitigating bias <s> Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check if a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Legal issues of mitigating bias <s> Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around ‘model inversion’ and ‘membership inference’ attacks, which indicates that the process of turning training data into machine-learned systems is not one way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’. <s> BIB002
Pertinent legal questions include whether the modifications of data envisaged by the pre- and in-processing approaches, as well as the alteration of the model in the post-processing approach, can be considered lawful. Apart from intellectual property issues that might arise, there is no general legal provision governing the way data is collected, selected or (even) modified. Provisions apply mainly where such training data is (still) personal data. Modifications (as well as any other processing) would then need a legal basis. However, legitimation could derive from informed consent (provided that specific safeguards are met), or could rely on contract or legitimate interest. In addition, data quality could be relevant in terms of warranties if a data provider sells data. A specific issue arises when "debiasing" involves sensitive data, as under Art. 9 GDPR special category data such as ethnicity often requires explicit consent BIB001 . A possible solution could be Art. 9(2)(g) GDPR, which permits processing for reasons of substantial public interest, and such an interest could arguably be found in 'debiasing'. The same grounds of legitimation apply when altering the model. However, contrary to data modification, data protection law would arguably not be applicable here, as the model would not contain personal data, unless the model is vulnerable to confidentiality attacks such as model inversion and membership inference BIB002 .
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Proactively: bias-aware data collection <s> Policy decisions often require synthesis of evidence from multiple sources, and the source studies typically vary in rigour and in relevance to the target question. We present simple methods of allowing for differences in rigour (or lack of internal bias) and relevance (or lack of external bias) in evidence synthesis. The methods are developed in the context of reanalysing a UK National Institute for Clinical Excellence technology appraisal in antenatal care, which includes eight comparative studies. Many were historically controlled, only one was a randomized trial and doses, populations and outcomes varied between studies and differed from the target UK setting. Using elicited opinion, we construct prior distributions to represent the biases in each study and perform a bias-adjusted meta-analysis. Adjustment had the effect of shifting the combined estimate away from the null by approximately 10%, and the variance of the combined estimate was almost tripled. Our generic bias modelling approach allows decisions to be based on all available evidence, with less rigorous or less relevant studies downweighted by using computationally simple methods. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Proactively: bias-aware data collection <s> Crowdsourcing systems, in which tasks are electronically distributed to numerous "information piece-workers", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Proactively: bias-aware data collection <s> Twitter shares a free 1% sample of its tweets through the "Streaming API". Recently, research has pointed to evidence of bias in this source. The methodologies proposed in previous work rely on the restrictive and expensive Firehose to find the bias in the Streaming API data. We tackle the problem of finding sample bias without costly and restrictive Firehose data. We propose a solution that focuses on using an open data source to find bias in the Streaming API. <s> BIB003 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Proactively: bias-aware data collection <s> Models for aggregating contributions by crowd workers have been shown to be challenged by the rise of task-specific biases or errors. Task-dependent errors in assessment may shift the majority opinion of even large numbers of workers to an incorrect answer. 
We introduce and evaluate probabilistic models that can detect and correct task-dependent bias automatically. First, we show how to build and use probabilistic graphical models for jointly modeling task features, workers' biases, worker contributions and ground truth answers of tasks so that task-dependent bias can be corrected. Second, we show how the approach can perform a type of transfer learning among workers to address the issue of annotation sparsity. We evaluate the models with varying complexity on a large data set collected from a citizen science project and show that the models are effective at correcting the task-dependent worker bias. Finally, we investigate the use of active learning to guide the acquisition of expert assessments to enable automatic detection and correction of worker bias. <s> BIB004 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Proactively: bias-aware data collection <s> We consider the biases that can arise in bias elicitation when expert assessors make random errors. We illustrate the phenomenon for two sources of bias: that due to omitting important variables in a least squares regression and that which arises in adjusting relative risks for treatment effects using an elicitation scale. Results show that, even when assessors' elicitations of bias have desirable properties (such as unbiasedness and independence), the nonlinear nature of biases can lead to elicitations of bias that are, themselves, biased. We show the corrections which can be made to remove this bias and discuss the implications for the applied literature which employs these methods. <s> BIB005 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Proactively: bias-aware data collection <s> Crowdsourced data acquired from tasks that comprise a subjective component (e.g. opinion detection, sentiment analysis) is potentially affected by the inherent bias of crowd workers who contribute to the tasks. This can lead to biased and noisy ground-truth data, propagating the undesirable bias and noise when used in turn to train machine learning models or evaluate systems. In this work, we aim to understand the influence of workers' own opinions on their performance in the subjective task of bias detection. We analyze the influence of workers' opinions on their annotations corresponding to different topics. Our findings reveal that workers with strong opinions tend to produce biased annotations. We show that such bias can be mitigated to improve the overall quality of the data collected. Experienced crowd workers also fail to distance themselves from their own opinions to provide unbiased annotations. <s> BIB006
A variety of methods are adopted for data acquisition to serve diverse needs, and these may introduce bias at the data collection stage itself, for example, BIB003 . Proposals have been made for a structured approach to bias elicitation in evidence synthesis, including bias checklists and elicitation tasks that can be performed by individual assessors with mathematical pooling, by group elicitation and consensus building, or by hybrid approaches BIB001 . However, bias elicitations have themselves been found to be biased even when high-quality assessors are involved, and remedies have been proposed BIB005 . Among other methods, crowdsourcing is a popular approach that relies on large-scale acquisition of human input to deal with data and label scarcity in ML. Crowdsourced data and labels may be subject to bias at different stages of the process: task design and experimental setup, task decomposition and result aggregation, selection of workers, and the entailing human factors BIB006 BIB004 BIB002 . Mitigating biases in crowdsourced data becomes harder in subjective tasks, where the varying ideological and cultural backgrounds of workers mean that biased labels can be observed even when the workers are in complete agreement.
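To illustrate one of these stages, result aggregation, the following minimal sketch implements a "one-coin" expectation-maximization scheme in the spirit of the classical Dawid and Skene model: it jointly estimates each worker's reliability and the latent true labels instead of trusting a plain majority vote, cf. BIB002 BIB004 . The simulated workers, their skill levels, and all names are illustrative.

```python
# Minimal sketch of bias-aware aggregation of crowdsourced binary labels via
# a one-coin EM scheme: workers are assumed to answer correctly with an
# unknown per-worker probability, which EM estimates together with the labels.
import numpy as np

def em_aggregate(votes, n_iter=50):
    """votes: (n_items, n_workers) in {0, 1}. Returns (P(label=1), reliability)."""
    p_true = votes.mean(axis=1)  # initialize with the majority opinion
    for _ in range(n_iter):
        # M-step: reliability = expected fraction of votes agreeing with truth.
        agree = p_true[:, None] * (votes == 1) + (1 - p_true[:, None]) * (votes == 0)
        rel = agree.mean(axis=0).clip(1e-3, 1 - 1e-3)
        # E-step: posterior probability that each item's true label is 1.
        ll1 = np.where(votes == 1, np.log(rel), np.log(1 - rel)).sum(axis=1)
        ll0 = np.where(votes == 0, np.log(rel), np.log(1 - rel)).sum(axis=1)
        p_true = 1.0 / (1.0 + np.exp(ll0 - ll1))
    return p_true, rel

rng = np.random.default_rng(2)
truth = rng.integers(0, 2, 200)
skill = np.array([0.95, 0.9, 0.6, 0.55, 0.5])  # two careful, three noisy workers
votes = np.where(rng.random((200, 5)) < skill,
                 truth[:, None], 1 - truth[:, None])

p_hat, rel = em_aggregate(votes)
print("estimated reliabilities:", rel.round(2))
print("EM accuracy:      ", ((p_hat > 0.5) == truth).mean())
print("majority accuracy:", ((votes.mean(axis=1) > 0.5) == truth).mean())
```

Such reliability-weighted aggregation downweights systematically noisy workers, but note that it cannot correct a bias shared by all workers, which is precisely the failure mode reported for subjective tasks above.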
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> We describe the principles and functionalities of DLAB (Declarative LAnguage Bias). DLAB can be used in inductive learning systems to define syntactically and traverse efficiently finite subspaces of first order clausal logic, be it a set of propositional formulae, association rules, Horn clauses, or full clauses. A Prolog implementation of DLAB is available by ftp access. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> This paper describes a context modelling approach using ontologies as a formal fundament. We introduce our Aspect-Scale-Context (ASC) model and show how it is related to some other models. A Context Ontology Language (CoOL) is derived from the model, which may be used to enable context-awareness and contextual interoperability during service discovery and execution in a proposed distributed system architecture. A core component of this architecture is a reasoner which infers conclusions about the context based on an ontology built with CoOL. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> We investigate an approach to reasoning about causes through argumentation. We consider a causal model for a physical system, and look for arguments about facts. Some arguments are meant to provide explanations of facts whereas some challenge these explanations and so on. At the root of argumentation here, are causal links ({A_1, ... ,A_n} causes B) and ontological links (o_1 is_a o_2). We present a system that provides a candidate explanation ({A_1, ... ,A_n} explains {B_1, ... ,B_m}) by resorting to an underlying causal link substantiated with appropriate ontological links. Argumentation is then at work from these various explaining links. A case study is developed: a severe storm Xynthia that devastated part of France in 2010, with an unaccountably high number of casualties. <s> BIB003 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> We propose a declarative framework for representing and reasoning about truthfulness of agents using answer set programming. We show how statements by agents can be evaluated against a set of observations over time equipped with our knowledge about the actions of the agents and the normal behavior of agents. We illustrate the framework using examples and discuss possible extensions that need to be considered. <s> BIB004 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> Recommending packages of items to groups of users has several applications, including recommending vacation packages to groups of tourists, entertainment packages to groups of friends, or sets of courses to groups of students. In this paper, we focus on a novel aspect of package-to-group recommendations, that of fairness. Specifically, when we recommend a package to a group of people, we ask that this recommendation is fair in the sense that every group member is satisfied by a sufficient number of items in the package. We explore two definitions of fairness and show that for either definition the problem of finding the most fair package is NP-hard. 
We exploit the fact that our problem can be modeled as a coverage problem, and we propose greedy algorithms that find approximate solutions within reasonable time. In addition, we study two extensions of the problem, where we impose category or spatial constraints on the items to be included in the recommended packages. We evaluate the appropriateness of the fairness models and the performance of the proposed algorithms using real data from Yelp, and a user study. <s> BIB005 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> RDF provides the means to publish, link, and consume heterogeneous information on the Web of Data, whereas OWL allows the construction of ontologies and inference of new information that is implicit in the data. Annotating RDF data with additional information, such as provenance, trustworthiness, or temporal validity is becoming more and more important in recent times; however, it is possible to natively represent only binary (or dyadic) relations between entities in RDF and OWL. While there are some approaches to represent metadata on RDF, they lose most of the reasoning power of OWL. In this paper we present an extension of Welty and Fikes’ 4dFluents ontology—on associating temporal validity to statements—to any number of dimensions, provide guidelines and design patterns to implement it on actual data, and compare its reasoning power with alternative representations. <s> BIB006 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> The WORLD07 study was a female-specific database, to prospectively characterise the clinical, histological, molecular and treatment-related features in Spanish women with lung cancer. Data were collected from patients' medical records and patient interviews from October 2007 to December 2012. A total of 2,060 women were analysed: median age, 61.3 years; white, 98.6%; postmenopausal, 80.2%; and non-smokers, 55%, including never smokers and ex-smokers. A family history of cancer was found in 42.5% of patients, 12.0% of patients had had a previous history of cancer (breast cancer, 39.7%). Most patients (85.8%) were diagnosed with non-small-cell lung cancer (NSCLC), most commonly reported with adenocarcinoma (71.4%), which was stage IV at diagnosis in 57.6%. Median overall survival (OS) for the entire population was 24.0 months, with a 1- and 2-year survival rate of 70.7% and 50.0% respectively. Median OS in patients with small-cell lung cancer was 18.8 months versus 25.0 months in patients with NSCLC (p = 0.011). Lung cancer appears to be a biologically different disease in women. By collecting prospective information about characteristics of women with lung cancer attending university hospitals in Spain, we hope to highlight the need to develop strategies based on gender differences and influence future healthcare policy. <s> BIB007 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Describing and modeling bias using ontologies <s> The study of actual causation concerns reasoning about events that have been instrumental in bringing about a particular outcome.
Although the subject has long been studied in a number of fields including artificial intelligence, existing approaches have not yet reached the point where their results can be directly applied to explain causation in certain advanced scenarios, such as pin-pointing causes and responsibilities for the behavior of a complex cyber-physical system. We believe that this is due, at least in part, to a lack of distinction between the laws that govern individual states of the world and events whose occurrence cause state to evolve. In this paper, we present a novel approach to reasoning about actual causation that leverages techniques from Reasoning about Actions and Change to identify detailed causal explanations for how an outcome of interest came to be. We also present an implementation of the approach that leverages Answer Set Programming. <s> BIB008
Accounting for bias not only requires understanding the different sources, that is, data, knowledge bases, and algorithms; more importantly, it demands interpreting and describing the meaning, potential side effects, provenance, and context of bias. Unbalanced categories are usually understood as bias and considered sources of negative side effects. Nevertheless, skewed distributions may simply reflect features or domain characteristics that, if removed, would hinder the discovery of relevant insights. This situation can be observed, for instance, in populations of lung cancer patients. As highlighted in diverse scientific reports, for example, BIB007 , lung cancer in women and men differs significantly in etiology, pathophysiology, histology, and risk factors, which may affect cancer occurrence, treatment outcomes, and survival. Furthermore, specific organizations collaborate in lung cancer prevention and in the battle against smoking; some of these campaigns are oriented to particular focus groups, and the effects of these initiatives are observed in certain populations. All these facts affect the gender distribution of the population and could be interpreted as bias. However, in this context, imbalance reveals domain-specific facts that need to be preserved in the population, and a formal description of these uneven distributions should be provided to avoid misinterpretation. Moreover, like any other type of data source, knowledge bases and ontologies can also suffer from various types of bias or knowledge imbalance. For example, the description of the existing mutations of a gene in a knowledge base like COSMIC, 6 or the properties associated with a gene in the Gene Ontology, 7 may be biased by the amount of research that has been conducted on the diseases associated with these genes. Expressive formal models are needed to describe and explain the characteristics of a data source and under which conditions, or in which context, the data source is biased. Formalisms like description and causal logics, for example, BIB003 BIB001 BIB008 , allow for measuring and detecting bias in data collections of diverse types, for example, online data sets and recommendation systems BIB005 . They also enable the annotation of statements with trustworthiness BIB004 and temporality , as well as causation relationships between them BIB008 . Ontologies also play a relevant role as knowledge representation models for describing universes of discourse in terms of concepts such as classes, properties, and subsumption relationships, as well as contextual statements about these concepts. NdFluents BIB006 and the Context Ontology Language (CoOL) BIB002 are exemplary ontological formalisms able to express and combine diverse contextual dimensions and their interrelations (e.g., locality and vicinity). Albeit expressive, existing logic-based and ontological formalisms are not tailored to representing contextual bias or to differentiating unbalanced categories that consistently correspond to instances of a real-world domain. Therefore, expressive ontological formalisms are demanded to represent the contextual dimensions of various types of sources, for example, data collections, knowledge bases, or ontologies, as well as annotations denoting the causality and provenance of the represented knowledge.
These formalisms will equip bias detection algorithms with reasoning mechanisms that not only enhance accuracy but also enable explainability of the meaning, conditions, origin, and context of bias. Thus, domain modeling using ontologies will support context-aware bias description and interpretability.
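As a concrete, deliberately simplified illustration of such context-aware bias description, the following sketch uses the Python rdflib library to reify the statement "the cohort is 70% female" and to attach provenance and a domain explanation to it, so that the skew is documented as a domain characteristic rather than silently "corrected." The example namespace and every term in it are hypothetical, and plain RDF reification is used here merely as a stand-in for richer formalisms such as NdFluents BIB006 or CoOL BIB002 .

```python
# Minimal sketch: annotate a skewed distribution with context instead of
# removing it. The ex: vocabulary is invented for illustration.
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/bias#")
g = Graph()
g.bind("ex", EX)

stmt = BNode()  # the reified statement about the data source
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.LungCancerCohort))
g.add((stmt, RDF.predicate, EX.femaleProportion))
g.add((stmt, RDF.object, Literal(0.70, datatype=XSD.decimal)))

# Contextual annotations attached to the statement itself.
g.add((stmt, EX.provenance, EX.NationalRegistry2007to2012))
g.add((stmt, EX.explanation, Literal(
    "Skew reflects gender-specific etiology and targeted prevention "
    "campaigns, not a sampling defect.")))
g.add((stmt, EX.biasAssessment, EX.DomainCharacteristic))
g.add((EX.DomainCharacteristic, RDFS.comment,
       Literal("Imbalance to be preserved, not corrected.")))

print(g.serialize(format="turtle"))
```

A bias-detection tool that encounters the annotated statement can then distinguish a documented domain characteristic from an unexplained skew, which is the kind of reasoning support the envisioned formalisms are meant to provide at scale.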
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> The purpose of this paper is to study the fundamental mechanism humans use in argumentation, and to explore ways to implement this mechanism on computers. We do so by first developing a theory for argumentation whose central notion is the acceptability of arguments. Then we argue for the “correctness” or “appropriateness” of our theory with two strong arguments. The first one shows that most of the major approaches to nonmonotonic reasoning in AI and logic programming are special forms of our theory of argumentation. The second argument illustrates how our theory can be used to investigate the logical structure of many practical problems. This argument is based on a result showing that our theory captures naturally the solutions of the theory of n-person games and of the well-known stable marriage problem. By showing that argumentation can be viewed as a special form of logic programming with negation as failure, we introduce a general logic-programming-based method for generating meta-interpreters for argumentation systems, a method very much similar to the compiler-compiler idea in conventional programming. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Standard generalized additive models (GAMs) usually model the dependent variable as a sum of univariate models. Although previous studies have shown that standard GAMs can be interpreted by users, their accuracy is significantly less than more complex models that permit interactions. In this paper, we suggest adding selected terms of interacting pairs of features to standard GAMs. The resulting models, which we call GA2M-models, for Generalized Additive Models plus Interactions, consist of univariate terms and a small number of pairwise interaction terms. Since these models only include one- and two-dimensional components, the components of GA2M-models can be visualized and interpreted by users. To explore the huge (quadratic) number of pairs of features, we develop a novel, computationally efficient method called FAST for ranking all possible pairs of features as candidates for inclusion into the model. In a large-scale empirical study, we show the effectiveness of FAST in ranking candidate pairs of features. In addition, we show the surprising result that GA2M-models have almost the same performance as the best full-complexity models on a number of real datasets. Thus this paper postulates that for many problems, GA2M-models can yield models that are both intelligible and accurate. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Classifiers are often opaque and cannot easily be inspected to gain understanding of which factors are of importance. We propose an efficient iterative algorithm to find the attributes and dependencies used by any classifier when making predictions. The performance and utility of the algorithm is demonstrated on two synthetic and 26 real-world datasets, using 15 commonly used learning algorithms to generate the classifiers. The empirical investigation shows that the novel algorithm is indeed able to find groupings of interacting attributes exploited by the different classifiers.
These groupings allow for finding similarities among classifiers for a single dataset as well as for determining the extent to which different classifiers exploit such interactions in general. <s> BIB003 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction. These models are in widespread use by the medical community, but are difficult to learn from data because they need to be accurate and sparse, have coprime integer coefficients, and satisfy multiple operational constraints. We present a new method for creating data-driven scoring systems called a Supersparse Linear Integer Model (SLIM). SLIM scoring systems are built by using an integer programming problem that directly encodes measures of accuracy (the 0–1 loss) and sparsity (the ℓ0-seminorm) while restricting coefficients to coprime integers. SLIM can seamlessly incorporate a wide range of operational constraints related to accuracy and sparsity, and can produce acceptable models without parameter tuning because of the direct control provided over these quantities. We provide bounds on the testing and training accuracy of SLIM scoring systems, and present a new data reduction technique that can improve scalability by eliminating a portion of the training data beforehand. Our paper includes results from a collaboration with the Massachusetts General Hospital Sleep Laboratory, where SLIM is being used to create a highly tailored scoring system for sleep apnea screening. <s> BIB004 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model's prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability.
Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. <s> BIB005 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> In recent years artificial neural networks have become the method of choice for many pattern recognition tasks. Despite their overwhelming success, a rigorous and easy to interpret mathematical explanation of the influence of input variables on an output produced by a neural network is still missing. We propose a generic framework as well as a concrete method for quantifying the influence of individual input signals on the output computed by a deep neural network. Inspired by the variable weighting scheme in the log-linear combination of variables in logistic regression, the proposed method provides linear models for specific observations of the input variables. This linear model locally approximates the behaviour of the neural network and can be used to quantify the influence of input variables in a principled way. We demonstrate the effectiveness of the proposed method in experiments on various synthetic and real-world datasets. <s> BIB006 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Accuracy and interpretability are two dominant features of successful predictive models. Typically, a choice must be made in favor of complex black box models such as recurrent neural networks (RNN) for accuracy versus less accurate but more interpretable traditional models such as logistic regression. This tradeoff poses challenges in medicine where both accuracy and interpretability are important. We addressed this challenge by developing the REverse Time AttentIoN model (RETAIN) for application to Electronic Health Records (EHR) data. RETAIN achieves high accuracy while remaining clinically interpretable and is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g. key diagnoses). RETAIN mimics physician practice by attending the EHR data in a reverse time order so that recent clinical visits are likely to receive higher attention. RETAIN was tested on a large health system EHR dataset with 14 million visits completed by 263K patients over an 8-year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as RNN, and ease of interpretability comparable to traditional models. <s> BIB007 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations.
The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formulas, and Problog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks. <s> BIB008 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> This is the first in a series of essays that addresses the manifest programmatic interest in developing intelligent systems that help people make good decisions in messy, complex, and uncertain circumstances by exploring several questions: What is an explanation? How do people explain things? How might intelligent systems explain their workings? How might intelligent systems help humans be better understanders as well as better explainers? This article addresses the theoretical foundations. <s> BIB009 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks, to make most efficient use of it on real data. <s> BIB010 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Developing transparent predictive analytics has attracted significant research attention recently. There have been multiple theories on how to model learning transparency but none of them aims to understand the internal and often complicated modeling processes. In this paper we adopt a contemporary philosophical concept called "constructivism", which is a theory regarding how humans learn. We hypothesize that a critical aspect of transparent machine learning is to "reveal" model construction with two key processes: (1) the assimilation process where we enhance our existing learning models and (2) the accommodation process where we create new learning models. With this intuition we propose a new learning paradigm, constructivism learning, using a Bayesian nonparametric model to dynamically handle the creation of new learning tasks. Our empirical study on both synthetic and real data sets demonstrates that the new learning algorithm is capable of delivering higher quality models (as compared to base lines and state-of-the-art) and at the same time increasing the transparency of the learning process. <s> BIB011 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency.
Indeed, the black-box nature of these systems allows powerful predictions, but these cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving trust and transparency of AI-based systems. It is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories. <s> BIB012 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Machine learning algorithms aim at minimizing the number of false decisions and increasing the accuracy of predictions. However, the high predictive power of advanced algorithms comes at the cost of transparency. State-of-the-art methods, such as neural networks and ensemble methods, often result in highly complex models that offer little transparency. We propose shallow model trees as a way to combine simple and highly transparent predictive models for higher predictive power without losing the transparency of the original models. We present a novel split criterion for model trees that allows for significantly higher predictive power than state-of-the-art model trees while maintaining the same level of simplicity. This novel approach finds split points which allow the underlying simple models to make better predictions on the corresponding data. In addition, we introduce multiple mechanisms to increase the transparency of the resulting trees. <s> BIB013 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN. <s> BIB014 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | Retroactively: explaining AI decisions <s> Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break.
However, when considering any such model, it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly. <s> BIB015
Explainability refers to the extent to which the internal mechanics of a learning model can be explained in human terms. It is often used interchangeably with interpretability, although the latter refers to whether one can predict what will happen given a change in the model input or parameters. Although attempts to tackle interpretable ML have existed for some time BIB009 , there has been an exceptional growth of research literature in recent years, with emerging keywords such as explainable AI BIB012 and black box explanation. Many papers propose approaches for understanding the global logic of a model by building an interpretable classifier able to mimic the obscure decision system. Generally, these methods are designed for explaining specific models, for example, deep neural networks BIB010 . Only a few are agnostic to the black box model BIB003 . The difficulties in explaining black boxes and complex models ex post have motivated proposals of transparent classifiers which are interpretable on their own and exhibit predictive accuracy close to that of obscure models. These include Bayesian models BIB011 , generalized additive models BIB002 , supersparse linear models BIB004 , rule-based decision sets BIB005 , optimal classification trees (Bertsimas & Dunn, 2017), model trees BIB013 , and neural networks with interpretable layers BIB014 . A different stream of approaches focuses on the local behavior of a model, searching for an explanation of the decision made for a specific instance. Such approaches are either model-dependent, for example, Taylor approximations BIB006 , saliency masks (the image regions that are mainly responsible for the decision) for neural network decisions, and attention models for recurrent networks BIB007 , or model-agnostic, such as those started by the LIME method. The main idea is to derive a local explanation for a decision outcome on a specific instance by learning an interpretable model from a randomly generated neighborhood of the instance. A third stream aims at bridging the local and the global ones by defining a strategy for combining local models in an incremental way. More recent work has asked the fundamental question What is an explanation? BIB015 and rejects such usage of the term "explanation," criticizing that it might be appropriate for a modeling expert, but not for a layman, and that, for example, humanities or philosophy have an entirely different understanding of what explanations are. We speculate that there are computational methods that will allow us to find some middle ground. For instance, some approaches in ML, statistical relational learning in particular BIB008 , take the perspective of knowledge representation and reasoning into account when developing ML models on more formal logical and statistical grounds. AI knowledge representation has been developing a rich theory of argumentation over the last 25 years BIB001 , which recent approaches try to leverage for generalizing the reasoning aspect of ML towards the use of computational models of argumentation. The outcome is models of arguments and counterarguments towards certain classifications that can be inspected by a human user and might be used as formal grounds for explanations in the manner that BIB015 called out for.
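To make the local, model-agnostic stream concrete, the following is a minimal sketch of the LIME-style procedure described above: perturb the instance, query the black box, weight the perturbed samples by their proximity to the instance, and fit a regularized linear surrogate whose coefficients serve as the local explanation. This is an illustrative sketch under simplifying assumptions (Gaussian perturbations, an RBF proximity kernel), not the official LIME implementation; all function and parameter names are ours.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box_predict, x, n_samples=5000, kernel_width=0.75):
    """LIME-style local explanation for one instance x (illustrative sketch).

    black_box_predict: any callable mapping an (n, d) array to scores or
    class probabilities; x: 1-d feature vector of the instance to explain.
    """
    d = x.shape[0]
    # 1. Randomly generate a neighborhood around the instance.
    Z = x + np.random.normal(scale=0.5, size=(n_samples, d))
    # 2. Query the black box on the perturbed samples.
    y = black_box_predict(Z)
    # 3. Weight samples by proximity to x (RBF kernel on Euclidean distance).
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / kernel_width ** 2)
    # 4. Fit an interpretable (regularized linear) model on the neighborhood;
    #    its coefficients act as local feature importances.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_
```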
Bias in data‐driven artificial intelligence systems—An introductory survey <s> | FUTURE DIRECTIONS AND CONCLUSIONS <s> Distributing the spoils of a joint enterprise on the basis of work contribution or relative productivity seems natural to the modern Western mind. But such notions of merit-based distributive justice may be culturally constructed norms that vary with the social and economic structure of a group. In the present research, we showed that children from three different cultures have very different ideas about distributive justice. Whereas children from a modern Western society distributed the spoils of a joint enterprise precisely in proportion to productivity, children from a gerontocratic pastoralist society in Africa did not take merit into account at all. Children from a partially hunter-gatherer, egalitarian African culture distributed the spoils more equally than did the other two cultures, with merit playing only a limited role. This pattern of results suggests that some basic notions of distributive justice are not universal intuitions of the human species but rather culturally constructed behavioral norms. <s> BIB001 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | FUTURE DIRECTIONS AND CONCLUSIONS <s> Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to prevent discrimination in the training data before it is used to conduct predictive analysis. In this paper, we focus on fair data generation that ensures the generated data is discrimination free. Inspired by generative adversarial networks (GAN), we present fairness-aware generative adversarial networks, called FairGAN, which are able to learn a generator producing fair data and also preserving good data utility. Compared with the naive fair data generation models, FairGAN further ensures the classifiers which are trained on generated data can achieve fair classification on real data. Experiments on a real dataset show the effectiveness of FairGAN. <s> BIB002 </s> Bias in data‐driven artificial intelligence systems—An introductory survey <s> | FUTURE DIRECTIONS AND CONCLUSIONS <s> Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks where input examples are intentionally perturbed to fool DNNs. In this work, we revisit the DNN training process that includes adversarial examples into the training dataset so as to improve DNN's resilience to adversarial attacks, namely, adversarial training. Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones to resist the attack. Based on the observation, we propose a multi-strength adversarial training method (MAT) that combines the adversarial training examples with different adversarial strengths to defend adversarial attacks. Two training structures - mixed MAT and parallel MAT - are developed to facilitate the tradeoffs between training time and memory occupation. Our results show that MAT can substantially minimize the accuracy degradation of deep learning systems to adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN. <s> BIB003
There are several directions that can impact this field going forward. First, despite the large number of methods for mitigating bias, there are still no conclusive results regarding which method is the state of the art for each category, which of the fairness-related interventions performs best, or whether category-specific interventions perform better compared to holistic approaches that tackle bias at all stages of the analysis process. We believe that a systematic evaluation of the existing approaches is necessary to understand their capabilities and limitations, and is also a vital part of proposing new solutions. The difficulty of the evaluation lies in the fact that different methods work with different fairness notions and are applicable to different AI models. To this end, benchmark datasets should be made available that cover different application areas and manifest real-world challenges. Finally, standard evaluation procedures and measures covering both model performance and fairness-related aspects should be followed, in accordance with international standards like the IEEE-ALGB-WG-Algorithmic Bias Working Group. 8 Second, we recognize that "fairness cannot be reduced to a simple self-contained mathematical definition," and that "fairness is dynamic and social and not a statistical issue." 9 Also, "fair is not fair everywhere" BIB001 , meaning that the notion of fairness varies across countries, cultures and application domains. Therefore, it is important to have realistic and applicable fairness definitions for different contexts as well as domain-specific datasets for method development and evaluation. Moreover, it is important to move beyond the typical training-test evaluation setup and to consider the consequences of potential fairness-related interventions to ensure the long-term wellbeing of different groups. Finally, given the temporal changes of fairness perception, the question of whether one can train models on historical data and use them for current fairness-related problems becomes increasingly pressing. Third, the related work thus far focuses mainly on supervised learning. In many cases, however, direct feedback on the data (i.e., as labels) is not available. Therefore, alternative learning tasks should be considered, like unsupervised learning or reinforcement learning (RL), where only intermediate feedback is provided to the model. Recent works have emerged in this direction; for example, Jabbari, Joseph, Kearns, Morgenstern, and Roth (2017) examine fairness in the RL context, where one needs to reconsider the effects of short-term actions on long-term rewards. Fourth, there is a general trend in the ML community recently towards generating plausible data from existing data using Generative Adversarial Networks in an attempt to cover the high data demand of modern methods, especially DNNs. Recently, such approaches have also been used in the context of fairness BIB002 , that is, to generate synthetic fair data that are similar to the real data. Still, however, the problem of the representativeness of the training data and its impact on the representativeness of the generated data might aggravate issues of fairness and discrimination. On the same topic, recent work revealed that DNNs are vulnerable to adversarial attacks, that is, intentional perturbations of the input examples, and therefore there is a need for methods to enhance their resilience BIB003 .
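As a concrete illustration of the standardized fairness measures called for above, the sketch below computes two widely used statistical notions, demographic parity difference and equal opportunity difference. It is illustrative only; as the quoted positions emphasize, no single such metric captures fairness in its dynamic and social sense, and the toy arrays are invented for the example.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary protected attribute.
    A value near 0 indicates parity under this particular notion.
    """
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates (equal opportunity)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy data, for illustration only.
y_true = np.array([1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```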
Fifth, AI scientists and everyone involved in the decision-making process should be aware of bias-related issues and the effect of their design choices and assumptions. For instance, studies show that representation-related biases creep into development processes because the development teams are not aware of the importance of distinguishing between certain categories. Members of a privileged group may not even be aware of the existence of (e.g., racial) categories in the sense that they often perceive themselves as "just people," and the interpretation of this as an unconscious default requires the voice of individuals from underprivileged groups, who persistently perceive themselves as being "different." Two strategies appear promising for addressing this cognitive bias: improving diversity in development teams, and subjecting algorithms to outside and as-open-as-possible scrutiny, for example by permitting certain forms of reverse engineering for algorithmic accountability. Finally, from a legal point of view, apart from data protection law, general provisions with respect to data quality or selection are still missing. Recently, an ISO standard on data quality (ISO 8000) was published, though it is not binding and does not address decision-making techniques. Moreover, first important steps have been made, for example, the Draft Ethics Guidelines for Trustworthy AI from the European Commission's High-Level Expert Group on AI and the European Parliament resolution containing recommendations to the Commission on Civil Law Rules on Robotics. However, these resolutions are still generic. Further interdisciplinary research is needed to define specifically what is required to strike the balance between protecting the fundamental rights and freedoms of citizens by mitigating bias and, at the same time, considering the technical challenges and economic needs. Therefore, any legislative procedures will require a close collaboration of legal and technical experts. As already mentioned, the legal discussion in this paper refers to the EU, where, despite the many recent efforts, there is still no consensus on algorithmic fairness regulations across its countries. Therefore, there is still a lot of work to be done on analyzing the legal standards and regulations at a national and international level to support globally legal AI designs. To conclude, the problem of bias and discrimination in AI-based decision-making systems has attracted a lot of attention recently from science, industry, society and policy makers, and there is an ongoing debate on the AI opportunities and risks for our lives and our civilization. This paper surveys technical challenges and solutions as well as their legal grounds in order to advance this field in a direction that exploits the tremendous power of AI for solving real-world problems but also considers the societal implications of these solutions. As a final note, we want to stress again that biases are deeply embedded in our societies and it is an illusion to believe that the AI and bias problem will be eliminated only with technical solutions. Nevertheless, as the technology reflects and projects our biases into the future, it is a key responsibility of technology creators to understand its limits and to propose safeguards to avoid pitfalls. It is equally important for technology creators to realize that technical solutions without any social and legal grounding cannot thrive, and that multidisciplinary approaches are therefore required.
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> INTRODUCTION <s> Over the last few years, we have experienced a variety of access technologies being deployed. While 2G cellular systems evolve into 3G systems such as UMTS or cdma2000, providing worldwide coverage, wireless LAN solutions have been extensively deployed to provide hotspot high-bandwidth Internet access in airports, hotels, and conference centers. At the same time, fixed access such as DSL and cable modem tied to wireless LANs appear in home and office environments. The always best connected (ABC) concept allows a person connectivity to applications using the devices and access technologies that best suit his or her needs, thereby combining the features of access technologies such as DSL, Bluetooth, and WLAN with cellular systems to provide an enhanced user experience for 2.5G, 3G, and beyond. An always best connected scenario, where a person is allowed to choose the best available access networks and devices at any point in time, generates great complexity and a number of requirements, not only for the technical solutions, but also in terms of business relationships between operators and service providers, and in subscription handling. This article describes the concept of being always best connected, discusses the user experience and business relationships in an ABC environment, and outlines the different aspects of an ABC solution that will broaden the technology and business base of 3G. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> INTRODUCTION <s> The twenty-first century has witnessed major technological changes that have transformed the way we live, work, and interact with one another. One of the major technology enablers responsible for this remarkable transformation in our global society is the deployment and use of Information and Communication Technology (ICT) equipment. In fact, today ICT has become highly integrated in our society that includes the dependence on ICT of various sectors, such as business, transportation, education, and the economy to the point that we now almost completely depend on it. Over the last few years, the energy consumption resulting from the usage of ICT equipment and its impact on the environment have fueled a lot of interests among researchers, designers, manufacturers, policy makers, and educators. We present some of the motivations driving the need for energy-efficient communications. We describe and discuss some of the recent techniques and solutions that have been proposed to minimize energy consumption by communication devices, protocols, networks, end-user systems, and data centers. In addition, we highlight a few emerging trends and we also identify some challenges that need to be addressed to enable novel, scalable, cost-effective energy-efficient communications in the future. <s> BIB002
As wireless networks (e.g., Wi-Fi) and mobile devices have been experiencing outstanding progress, users demand uninterrupted, continuous, and seamless services with Quality of Service (QoS) from any source to any device at any time, while on the move or stationary. Cisco forecasts that Wi-Fi and mobile devices will account for 66% of IP traffic and that Internet traffic will reach 18 GB per capita by 2019 [1] . In order to satisfy the increasing traffic demands and service requirements, the next-generation wireless infrastructure (5G) paradigm will include a dense deployment of base stations and several different radio access technologies (RATs), such as Wireless Local, Metropolitan and Wide Area Networks (WLAN, WMAN, WWAN), Long Term Evolution (LTE, LTE-A), Worldwide Interoperability for Microwave Access (WiMAX), and Wireless Broadband (WiBro), as illustrated in Fig. 1 . However, there is no single RAT that can simultaneously offer a high amount of bandwidth, low latency, wide coverage and high QoS levels for mobile users. Therefore, the next generation of wireless systems will make use of various interworking solutions and technologies. For example, the integration of Software Defined Networks (SDN) and Network Function Virtualization (NFV) could help the mobile operators to reduce their CAPEX intensity by transferring their hardware-based network to software- and cloud-based solutions. Another option could be Cloud Radio Access Networks (C-RAN), which offer a centralized, cooperative, clean (green) and cloud-computing architecture for radio access networks. A popular solution is the hyper-dense small-cell dynamic cooperation of different RATs, together with Wi-Fi and femtocell opportunistic offloading of mobile traffic. These solutions will enable a cooperative heterogeneous wireless environment where the users will be always best connected (ABC) at any time and anywhere BIB001 . Thus, this heterogeneous wireless environment, as illustrated in Fig. 1 , can be defined as a multi-technology, multi-terminal, multi-application, multi-user environment within which mobile users can roam freely. In this context, the main promise of heterogeneous network integration is to increase the wireless capacity, ensure seamless mobility, and support high data rates and low latency for mobile users. Until recently, the aim of Information and Communication Technology (ICT) was mainly focused on performance and cost, and insufficient effort was allocated to the energy consumed by ICTs and their impact on the environment. Current trends, such as the increasing cost of electricity, reserve limitations, and increasing emissions of carbon dioxide (CO2), are shifting the focus of ICT towards energy-efficient, well-performing solutions. Even though governments and companies are now aware of the massive carbon emissions and energy requirements, it is obvious that carbon emissions and the amount of energy consumption will continue to increase BIB002 . As stated by the SMART 2020 study , ICT-based CO2 emissions are rising at a rate of 6% per year and are expected to reach 12% of worldwide emissions by 2020.
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Fig. 1. Next Generation Communication Scenario <s> EARTH is a major new European research project starting in 2010 with 15 partners from 10 countries. Its main technical objective is to achieve a reduction of the overall energy consumption of mobile broadband networks by 50%. In contrast to previous efforts, EARTH regards both network aspects and individual radio components from a holistic point of view. Considering that the signal strength strongly decreases with the distance to the base station, small cells are more energy efficient than large cells. EARTH will develop corresponding deployment strategies as well as management algorithms and protocols on the network level. On the component level, the project focuses on base station optimizations as power amplifiers consume the most energy in the system. A power efficient transceiver will be developed that adapts to changing traffic load for an energy efficient operation in mobile radio systems. With these results EARTH will reduce energy costs and carbon dioxide emissions and will thus enable a sustainable increase of mobile data rates. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Fig. 1. Next Generation Communication Scenario <s> Worldwide mobile broadband communications networks are increasingly contributing to global energy consumption. In this paper we tackle the important issue of enhancing the energy efficiency of cellular networks without compromising coverage and users perceived Quality of Service (QoS). The motivation is twofold. First, operators need to reduce their operational energy bill. Second, there is a request of environmental protection from governments and customers to reduce CO2 emissions due to information and communications technology. To this end, in this paper we first present the holistic system view design adopted in EARTH (Energy Aware Radio and neTworking tecHnologies) project. The goal is to ensure that any proposed solution to improve energy efficiency does not degrade the energy efficiency or performance on any other part of the system. Then, we focus on technical solutions related to resource allocation strategies designed for increasing diversity order, robustness and effectiveness of a wireless multi-user communication system. We investigate both standalone and heterogeneous cells deployment scenarios. In standalone cells deployment scenarios, the challenge is to reduce the overall downlink energy consumption while adapting the target of spectral efficiency to the actual load of the system and meeting the QoS. Then, with heterogeneous deployment scenarios, different cell scales that ranges from macro to <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Fig. 1. Next Generation Communication Scenario <s> Energy-efficient communication has sparked tremendous interest in recent years as one of the main design goals of future wireless Heterogeneous Networks (HetNets). This has resulted in paradigm shift of current operation from data oriented to energy-efficient oriented networks. In this paper, we propose a framework for green communications in wireless HetNets. 
This framework is cognitive in a holistic sense and aims at improving the energy efficiency of the whole system, not just one isolated part of the network. In particular, we propose a cyclic approach, named the energy-cognitive cycle, which extends the classic cognitive cycle and enables dynamic selection of different available strategies for reducing the energy consumption in the network while satisfying the quality of service constraints. <s> BIB003
The dense deployment of various RATs, which may differ in terms of technology, protocols, coverage, bandwidth, latency, or even service providers, is essential to handle the ever-growing demand for performance and coverage. However, this densification has led to an increase in the wireless networks' energy consumption, which represents one of the main current challenges and has received remarkable attention from both industry and academia BIB003 BIB002 . In order to decrease the overall energy consumption, the Greentouch consortium [8] and major European projects like EARTH BIB001 and Mobile VCE focus on infrastructure-based energy savings for wireless networks at the system level. The major aim of these projects is to design and implement pioneering approaches for the green operation of wireless networks. However, these projects have only examined the optimization of homogeneous wireless systems. Since current mobile devices are equipped with several network interface cards so that they can operate within the existing heterogeneous wireless infrastructure in a flexible way, energy-centric optimization solutions for heterogeneous networks represent an important issue that needs to be investigated carefully to reduce energy consumption and carbon emissions. The rest of the paper is organized as follows. Section 2 presents background information related to the vertical handover concept (e.g., definition, classification and procedure), describes the handover process in various radio access technologies (e.g., WiFi, 3G, LTE, WiMAX) and summarizes several energy-efficient vertical handover standards and industry solution approaches. Section 3 examines the impact of specific parameters, methods and vertical handover approaches on energy efficiency. Section 4 presents a comprehensive comparison of the existing handover approaches from the literature in terms of energy gain. Section 5 provides recommendations for an energy-efficient vertical handover. Finally, the conclusions are presented in Sect. 6.
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Vertical Handover Definition, Classification and Procedure <s> Handover is the mechanism that transfers an ongoing call from one cell to another as a user moves through the coverage area of a cellular system. As smaller cells are deployed to meet the demands for increased capacity, the number of cell boundary crossings increases. The author presents an overview of published work on handover performance and control and discusses current trends in handover research. He discusses investigations that are applicable to a single tier of cells. He focuses on macrocells, but includes a brief discussion on how things change as cell sizes shrink. By assuming an overlay of macrocells and microcells he summarizes issues and approaches unique to such systems. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Vertical Handover Definition, Classification and Procedure <s> With the emergence of a variety of mobile data services with variable coverage, bandwidth, and handoff strategies, and the need for mobile terminals to roam among these networks, handoff in hybrid data networks has attracted tremendous attention. This article presents an overview of issues related to handoff with particular emphasis on hybrid mobile data networks. Issues are logically divided into architectural and handoff decision time algorithms. The handoff architectures in high-speed local coverage IEEE 802.11 wireless LANs, and low-speed wide area coverage CDPD and GPRS mobile data networks are described and compared. A survey of traditional algorithms and an example of an advanced algorithm using neural networks for HO decision time in homogeneous networks are presented. The HO architectural issues related to hybrid networks are discussed through an example of a hybrid network that employs GPRS and IEEE 802.11. Five architectures for the example hybrid network, based on emulation of GPRS entities within the WLAN, mobile IP, a virtual access point, and a mobility gateway (proxy), are described and compared. The mobility gateway and mobile IP approaches are selected for more detailed discussion. The differences in applying a complex algorithm for HO decision time in a homogeneous and a hybrid network are shown through an example. <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Vertical Handover Definition, Classification and Procedure <s> No single wireless network technology simultaneously provides a low latency, high bandwidth, wide area data service to a large number of mobile users. Wireless Overlay Networks - a hierarchical structure of room-size, building-size, and wide area data networks - solve the problem of providing network connectivity to a large number of mobile users in an efficient and scalable way. The specific topology of cells and the wide variety of network technologies that comprise wireless overlay networks present new problems that have not been encountered in previous cellular handoff systems. We have implemented a vertical handoff system that allows users to roam between cells in wireless overlay networks. Our goal is to provide a user with the best possible connectivity for as long as possible with a minimum of disruption during handoff.
Results of our initial implementation show that the handoff latency is bounded by the discovery time, the amount of time before the mobile host discovers that it has moved into or out of a new wireless overlay. This discovery time is measured in seconds: large enough to disrupt reliable transport protocols such as TCP and introduce significant disruptions in continuous multimedia transmission. To efficiently support applications that cannot tolerate these disruptions, we present enhancements to the basic scheme that significantly reduce the discovery time without assuming any knowledge about specific channel characteristics. For handoffs between room-size and building-size overlays, these enhancements lead to a best-case handoff latency of approximately 170 ms with a 1.5% overhead in terms of network resources. For handoffs between building-size and wide-area data networks, the best-case handoff latency is approximately 800 ms with a similarly low overhead. <s> BIB003 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Vertical Handover Definition, Classification and Procedure <s> In the next generation of wireless networks, mobile users can move between heterogeneous networks, using terminals with multiple access interfaces and non-real-time or real-time services. The most important issue in such an environment is the Always Best Connected (ABC) concept, allowing the best connectivity to applications anywhere at any time. To answer the ABC requirement, various vertical handover decision strategies have been proposed in the literature recently, using advanced tools and proven concepts. In this paper, we give an overview of the most interesting and recent strategies. We classify them into five categories, for which we present their main characteristics. We also compare each one with the others in order to introduce our vertical handover decision approach. <s> BIB004 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Vertical Handover Definition, Classification and Procedure <s> In the context of wireless users' increasing demands for better device power and battery management, this paper investigates some factors that can impact the energy consumption of mobile devices. The focus is on two factors when performing multimedia streaming: the impact of the traffic location within a WLAN; and the impact of the radio access network technology (WLAN, HSDPA, UMTS). The energy measurement results show that by changing the quality level of the multimedia stream the energy can be greatly conserved while the user perceived quality level is still acceptable. Moreover, by using the cellular interface much more energy is consumed (up to 61%) than by using the WLAN interface. <s> BIB005 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Vertical Handover Definition, Classification and Procedure <s> Due to the ever growing mobile broadband data traffic over the cellular networks, the small cell deployment is seen as a promising solution for the network operators to increase their network capacity at low cost. This in turn would lead to an increased number of handovers (HOs) for the mobile users, which could affect the device power consumption.
In this context, this paper investigates the impact of the HO process on the device energy consumption while performing VoD over a real LTE Small Cell experimental environment. Subjective tests are carried out to study the impact of the video quality on the user perceived QoE. The results show that by changing the quality level of the multimedia stream the energy can be greatly conserved while the user perceived QoE is still acceptable. Furthermore, by adapting to a lower quality level during the HO process, up to 56% energy savings could be achieved. <s> BIB006
The handover process BIB002 provides the link between communication and user mobility. A good definition of handover is given by ETSI and 3GPP, which define handover as the process by which the mobile device keeps its connection when changing the PoA (base station or access point). In terms of technologies, if both the source and target system employ the same RAT and rely on the same specifications, then the handover process is referred to as Horizontal Handover BIB002 . If the target system employs a different RAT, the handover process is called Vertical Handover (VHO) BIB003 , which is the focus of this paper. The main objective of the handover process is to minimize the service disruption, which can be due to data loss and delay during the session transfer. The handover procedure can be divided into three phases: (1) information gathering, (2) decision, and (3) execution. Figure 2 illustrates the relation among these phases required to perform handover in wireless heterogeneous networks. Throughout the information gathering phase, mobile devices periodically scan the available networks to be able to associate with a more suitable PoA when the service quality drops below the required QoS level. The mobile devices gather information received locally or remotely. The reliability of the gathered information is essential for the vertical handover process as the decision-making procedure depends on it. Traditionally, the handover process is performed based on the Received Signal Strength Indicator (RSSI) BIB001 , such that stations select a PoA that has the strongest RSSI. Existing energy-efficient handover methods save energy by either reducing the overall channel scanning duration or connecting to a more energy-efficient PoA in relation to the RSS levels. Nevertheless, as each RAT has specific features, to increase the energy efficiency and the handover accuracy, a vertical handover method has to evaluate each RAT separately, making use of as many local and network-related parameters as possible. In this context, the main parameters that can be received remotely (network-side assisting) are: overall throughput, network connectivity graph, probability of collision, cost, packet loss ratio, frame error rate, latency, security, bandwidth available, offered bandwidth, jitter, number of users, link capacity, mobility, coverage, handoff rate, RSSI, noise signal ratio (NSR), bit error rate, distance, location, QoS parameters, transmission power, channel busy time (CBT), etc. All of the aforementioned parameters might assist mobile devices in saving energy. However, most of these parameters require message exchanges, which cause additional overhead on the network and extra processing-based energy consumption for mobile devices. Similarly, the parameters that can be received locally (mobile-side assisting) are: user preferences, battery status, handover thresholds, resources, channel scanning results, speed, historical information, service class, accelerometer, GPS, probability of local packet loss, local latency, local throughput, scanning frequency, specific application requirements, etc. These parameters can also assist mobile devices in saving energy. However, they may also introduce extra processing-based power consumption for mobile devices. Consequently, the parameters received by information gathering, either remotely or locally, are very important for an energy-efficient vertical handover process and its accuracy.
However, a trade-off between accuracy and overhead needs to be considered, as keeping accurate estimates for the more dynamic parameters depends on their frequency of change and can be data intensive, adding to the signaling, processor and memory burden, and could introduce extra energy consumption for mobile devices. Moreover, the energy consumption is also affected by the type of wireless access technology used by the mobile device and the user's location relative to the access point BIB005 . A dense HetNet environment results in an increased number of handovers at the mobile device side, which introduces a further increase in the energy consumption BIB006 . Therefore, all of the aforementioned parameters must first be analyzed in terms of the energy versus performance trade-off. The handover decision phase is in charge of deciding whether a handover is necessary or not. If so, when and where to trigger the handover are essential information in the process. The when decision refers to the exact time of the handover initiation and the where decision refers to the selection of the most suitable PoA that satisfies the optimal requirements. In homogeneous networks, deciding when to handover generally depends on the RSSI values, while the where is not an issue, as there is only one RAT. The traditional handover decision policy BIB001 BIB004 that is based mainly on RSSI is as follows. If the RSSI is the only parameter, a handover is performed whenever RSSI_new > RSSI_old. If a threshold T is considered, a handover is performed whenever RSSI_new > RSSI_old and RSSI_old < T. If a hysteresis H is considered, a handover is performed whenever RSSI_new > RSSI_old + H. If both a hysteresis and a threshold are considered, then a handover is performed whenever RSSI_new > RSSI_old + H and RSSI_old < T. In heterogeneous networks, the handover decision is more complex. To make the best decision, the data collected in the information gathering phase must contain as many essential parameters as possible obtained from various sources, such as the device, network and user preferences. However, redundancy in the gathered information does not always lead to energy efficiency, as this process may incur significant time and processing overhead for devices. The decision phase also consists of three sub-phases: (1) parameter-selection, (2) parameter-processing, and (3) parameter-aggregation. In order to evaluate and weight a candidate association, only the parameters that the algorithm requires are selected in the parameter-selection phase. In order to extract relevant data, all the selected parameters are normalized in the parameter-processing phase. Additionally, neural networks, fuzzy logic and specific utility functions are used to merge value parameters with diffuse information. Finally, the best candidate RAT is selected with the help of the network selection algorithm that aggregates and evaluates the load/cost of each parameter in the parameter-aggregation phase. Once the information is gathered (phase 1), processed and a network candidate is selected (phase 2), the handover execution phase performs the handover itself. This phase also handles the security, control, mobility and session issues to achieve a seamless handover operation.
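The traditional RSSI-based decision policy described in this section can be captured in a few lines. The sketch below encodes the four classic rules (plain comparison, threshold T, hysteresis H, and their combination); the function name, parameter names and example values in dBm are illustrative assumptions.

```python
def should_handover(rssi_new, rssi_old, threshold=None, hysteresis=0.0):
    """Traditional RSSI-based handover decision (illustrative sketch).

    With hysteresis=0 and threshold=None this is the plain comparison
    RSSI_new > RSSI_old; passing threshold and/or hysteresis yields the
    other three classic variants described in the text.
    """
    if rssi_new <= rssi_old + hysteresis:
        return False  # candidate PoA not (sufficiently) stronger
    if threshold is not None and rssi_old >= threshold:
        return False  # current link is still above the threshold T
    return True

# Examples (values in dBm): hysteresis avoids ping-pong handovers
# when two PoAs have similar signal strength near the cell edge.
assert should_handover(-60, -70) is True
assert should_handover(-68, -70, hysteresis=5) is False
assert should_handover(-60, -70, threshold=-75) is False
```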
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> IEEE 802.21 Media Independent Handover (MIH) <s> The demands for accessing services at high data rates while on the move, anyplace and anytime, resulted in numerous research efforts to integrate heterogeneous wireless and mobile networks. The focus was mainly put on the integration of the Universal Mobile Telecommunications System (UMTS) and the wireless local area network (WLAN) IEEE 802.11, which is beneficial in terms of capacity, coverage and cost. With the advent of IEEE 802.16(e) the attention of the research community was shifted to its interworking, on one side, with complementary WLANs, and on the other, with UMTS for extra capacity. In addition, there has also been research on UMTS interworking with different broadcasting systems, including the Digital Video Broadcasting system for handheld devices (DVB-H). All these research activities resulted in various heterogeneous architectures where the interworking was performed at different levels in the network. In this article, we address the integration at the UMTS radio access level, known also as very tight coupling. This integration approach exhibits good vertical handover performance and may allow for seamless session continuity during the handover. However, it is a technology specific solution, where not all the mechanisms applied to the integration of one wireless technology can be straightforwardly reused for embedding another. This integration approach introduces various modifications to UMTS that have to be standardized, which makes it a long-term solution. We present here the general architecture for the integration at the UMTS radio access level and discuss the extension of the architectural framework for various types of access systems with as few as possible additional modifications. The focus of the work is put on the vertical handovers. We discuss various vertical handovers among WCDMA, IEEE 802.11, IEEE 802.16e and DVB-H in the considered heterogeneous architecture. We present new handover types, describe the vertical handover procedures and provide performance evaluation of the vertical handovers in different scenarios and for different combinations of the wireless access technologies. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> IEEE 802.21 Media Independent Handover (MIH) <s> Integration of various wireless access technologies is one of the major concerns in recent wireless systems in which multi-technology mobile devices are provided to users to roam between different access networks. Being an essential part in heterogeneous wireless systems, vertical handover is more complex than conventional horizontal handover. As IEEE 802.21 Media Independent Handover (MIH) is the standard addressing a uniform and media-independent framework for seamless handover between different access technologies, many works have been carried out in the literature to employ MIH services in handover management. This paper presents a comprehensive survey of the proposed mobility management mechanisms that use this framework.
As a comparative view, the paper categorizes the efforts according to the layer of mobility management and evaluates some of the representative methods, discussing their advantages and disadvantages. The paper also looks into recent handover decision and interface management methods that exploit MIH. Moreover, the extensions and the amendments proposed on MIH are overviewed. <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> IEEE 802.21 Media Independent Handover (MIH) <s> Energy efficiency in wireless networks is one of the major issues for users as mobile devices rely on their batteries. In this context, Wireless Network Interface Cards (WNICs) have to be taken into consideration carefully as they consume a significant portion of the overall system energy. In this paper, we aim to reduce the energy consumption of wireless mobile devices by applying specific solutions, such as reducing the overhead of channel scanning by proposing a smart selective channel scanning method during the handover preparation phase and associating with a Point of Attachment (PoA) that is expected to consume the least amount of energy among all PoAs. Power consumption prediction of a station is made by using the channel scanning results, switching costs, Channel Busy Times (CBTs), traffic class of the station, the number of stations deployed in each PoA, and the power consumption of each WNIC. Stations performing the proposed scheme can fairly coexist with the other stations in the network. In the proposed scheme, each station makes use of its local information and the information provided by the IEEE 802.21 Media Independent Handover (MIH) Information Server (IS). Performance of the proposed scheme has been investigated by numerical analyses and extensive simulations. The results illustrate that the proposed scheme reduces the energy consumption of mobile stations and provides longer lifetime under a wide range of contention and signal strength levels. <s> BIB003
The Media Independent Handover (MIH) standard is part of the IEEE 802.21 protocol BIB002 . It provides mobile devices with link-layer information about different Radio Access Networks (RANs) and battery-level status. Hence, it improves not only the vertical handover process and user experience, but also energy efficiency, assisting both mobile- and network-initiated handovers. MIH provides stations with abstract services that enable information exchange between higher and lower layers by utilizing a media-independent framework and associated services BIB001 . The MIH standard has three key services that support the handover operation: (1) the Media Independent Event Service (MIES) reports events, such as Link_Up and Link_Down, that signify variations in link quality; (2) the Media Independent Command Service (MICS) provides commands to control the link state; and (3) the Media Independent Information Service (MIIS) provides mobile devices with energy-aware and rapid channel scanning results BIB003 .
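As a rough illustration of how an MIH user (e.g., a handover manager) might consume MIES notifications, the sketch below implements a simple subscribe/notify pattern around the Link_Up and Link_Down events named above (plus Link_Going_Down, which the standard also defines). IEEE 802.21 specifies these primitives abstractly, so the class and method names here are assumptions for illustration, not the standard's API.

```python
from enum import Enum, auto

class MihEvent(Enum):
    """Subset of MIES link events (illustrative)."""
    LINK_UP = auto()
    LINK_DOWN = auto()
    LINK_GOING_DOWN = auto()

class MihEventBus:
    """Toy subscribe/notify bus between lower layers and MIH users."""
    def __init__(self):
        self.handlers = {event: [] for event in MihEvent}

    def subscribe(self, event, handler):
        # An MIH user registers interest in a given link event.
        self.handlers[event].append(handler)

    def notify(self, event, link_id):
        # A lower layer reports a link event; all subscribers are called.
        for handler in self.handlers[event]:
            handler(link_id)

# Usage: start handover preparation when a link is about to degrade.
bus = MihEventBus()
bus.subscribe(MihEvent.LINK_GOING_DOWN,
              lambda link: print(f"prepare handover away from {link}"))
bus.notify(MihEvent.LINK_GOING_DOWN, "wlan0")
```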
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Industry Solutions for Network Selection <s> Micromobility protocols aim to improve localized mobility by reducing the handover overhead. The Mobility Plane Architecture (MPA) was designed to support micromobility in standard IP or MPLS/GMPLS networks in a network-centric way, that is, the burden demanded by micromobility is placed on the network, not on the mobile nodes. The aim of this paper is to present the reactive and proactive handover procedures supported by MPA, its modeling and performance evaluation. Results show that the proactive handover loses much less packets than reactive handover, being more suited for multimedia traffic. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Industry Solutions for Network Selection <s> The twenty-first century has witnessed major technological changes that have transformed the way we live, work, and interact with one another. One of the major technology enablers responsible for this remarkable transformation in our global society is the deployment and use of Information and Communication Technology (ICT) equipment. In fact, today ICT has become highly integrated in our society that includes the dependence on ICT of various sectors, such as business, transportation, education, and the economy to the point that we now almost completely depend on it. Over the last few years, the energy consumption resulting from the usage of ICT equipment and its impact on the environment have fueled a lot of interests among researchers, designers, manufacturers, policy makers, and educators. We present some of the motivations driving the need for energy-efficient communications. We describe and discuss some of the recent techniques and solutions that have been proposed to minimize energy consumption by communication devices, protocols, networks, end-user systems, and data centers. In addition, we highlight a few emerging trends and we also identify some challenges that need to be addressed to enable novel, scalable, cost-effective energy-efficient communications in the future. <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Industry Solutions for Network Selection <s> The realization of high data rates in LTE technology over an all IP network means an ever increasing load on packet data networks. 3GPP has defined data offloading as a key solution to cope with this challenge. Data offloading has been a critical area of study in 3GPP Release-10. The 3GPP Evolved Packet Core (EPC) has been defined to be expansive; for example, it is designed to work in conjunction with non-3GPP access networks, femto cells, etc. This diversity causes more scenarios to tackle, but it also provides solutions in the ways of offloading data from the EPC. This article is a tutorial on three such mechanisms, namely Local IP Access (LIPA), Selected IP Traffic Offload (SIPTO) and IP Flow Mobility (IFOM). The article starts with an overview of data offloading methods. The basics and key concepts of each method are summarized before giving the requirements, architecture and procedures with high-level message flows. The article ends with a brief evaluation of the mechanisms and references for the reader interested in further reading.
<s> BIB003
The mass-market adoption of high-end mobile devices has led the network operators to adopt various solutions to help them cope with the explosion of mobile broadband data traffic. One promising solution is the mobile data offloading technique, which has become popular among network operators, especially since 3GPP Release-10 BIB003 . This enables the network operators to accommodate more mobile users and keep up with their traffic demands by transferring some of the traffic from the core cellular network to Wi-Fi or femtocells at peak times and key locations (e.g., home, office, public HotSpot, etc.). Even though this solution presents advantages for the network operators with improved capacity at low cost, a dense small-cell HetNet environment results in an increased number of handovers for the mobile user. Two handover strategies could be identified in this context: (1) proactive handover, where the handover is triggered well in advance, and (2) reactive handover, where the handover is postponed as long as possible. It has been shown that the proactive handover reduces the packet loss probability when compared to the reactive handover BIB001 , making it more suitable for real-time applications and more energy efficient. Qualcomm presented a study , which shows that the LTE-Advanced HetNet with LTE pico-cell solution is the best option over the HetNet with Wi-Fi cells in terms of throughput gain, handover mechanism, QoS guarantee, security, and self-organizing features. Moreover, the LTE-Advanced HetNet with LTE picocells already achieves seamless handover between the two networks, whereas for HetNet with Wi-Fi cells seamless handover is not possible yet as it requires an inter-RAT handover. However, in terms of CAPEX and OPEX, HetNet with Wi-Fi cells is a better option for network operators. The HetNets Wi-Fi offload solution is already adopted by many service providers. For example, the main service providers in the United Kingdom, such as EE, Vodafone, O2 and Three, offer WiFi calling, letting their customers make and receive calls and send and receive texts over WiFi using their mobile number. The O2 and Three service providers enable WiFi calling by using an app, such as the O2 TU Go 1 app and the inTouch 2 app, respectively. In contrast, EE BIB002 and Vodafone 4 offer a seamless approach without the need for a separate app, using the standard dialer and SMS apps of the mobile phone. In this way, customers can avail of a wider service offering. A white paper published by 4G Americas provides recommendations for an Intelligent Network Selection (INS) that will enable the mobile device to select between WiFi and cellular networks. The INS is based on the ANDSF and IEEE 802.11u standards, and the selection decision makes use of the RSSI, QoS parameters such as RTT delay, jitter and packet loss, and UE local information like battery and data usage or the mobile device motion state relative to the WiFi Access Point position. Another solution based on the ANDSF standard is proposed by InterDigital, referred to as Smart Access Manager (SAM). The proposed solution is distributed and consists of a SAM client residing at the mobile device side that monitors the network environment and the services and applications running on the device, whereas a mobile-network-based ANDSF server integrates all the cost/revenue policy rules and the decision-making intelligence.
A leading wireless, wireline, broadband and cable TV operator in Southern Europe adopted the solution offered by Openet 5 that provides intelligent Wi-Fi management and offload capabilities in real time on a subscriber-by-subscriber basis. The solution enables the network operator to optimize the mobile data experience for its customers and reduce the network costs based on policy and charging controls combined with user profiles and service information. The Wi-Fi network database provider WeFi 6 launched the WeFi enhanced Access Network Discovery and Selection Function (WeANDSF), which is ANDSF 3GPP compliant, supporting Wi-Fi and all 2G/3G/4G cellular technologies. The selection decision is based on weighted factors taking into consideration the real-time and historical network performance parameters for all networks within the user's location. The solution enables the operators to save investment costs in CAPEX/OPEX by maximizing the utilization of all existing and potential resources. Data offloading is a promising solution for network operators. However, the key problem is the lack of integration between the cellular network and the carrier Wi-Fi networks. To this end, the new 3GPP Rel-13 considers several key features and technologies, including LTE Wireless Local Area Network Radio Level Aggregation (LWA) and LTE Unlicensed or Licensed Assisted Access for LTE (LTE-U/LAA), which utilizes the unlicensed spectrum (e.g., 5 GHz) to provide additional radio spectrum for the network operators. According to a 4G Americas white paper , there are two basic deployment scenarios for LWA, as illustrated in Fig. 3: (1) a collocated scenario where the LTE eNB integrates one or multiple WLAN Access Points (APs), and (2) a non-collocated scenario where the LTE eNB connects to WLAN via an interface that is being standardized by 3GPP in Rel-13. In this scenario the eNB is an anchor node that enables the Core Network connectivity and forwards the data packets to WLAN. However, these deployment scenarios consider the LTE and WLAN networks deployed and controlled by an operator and its partners. In this way, the operators can have more control over the offloading techniques and the quality experienced by their customers over the Wi-Fi network.
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Impact of Local and Network-related Parameters on the Energy efficiency <s> Micromobility protocols aim to improve localized mobility by reducing the handover overhead. The Mobility Plane Architecture (MPA) was designed to support micromobility in standard IP or MPLS/GMPLS networks in a network-centric way, that is, the burden demanded by micromobility is placed on the network, not on the mobile nodes. The aim of this paper is to present the reactive and proactive handover procedures supported by MPA, its modeling and performance evaluation. Results show that the proactive handover loses much less packets than reactive handover, being more suited for multimedia traffic. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Impact of Local and Network-related Parameters on the Energy efficiency <s> Context information brings new opportunities for efficient and effective system resource management of mobile devices. In this work, we focus on the use of context information to achieve energy-efficient, ubiquitous wireless connectivity. Our field-collected data show that the energy cost of network interfaces poses a great challenge to ubiquitous connectivity, despite decent availability of cellular networks. We propose to leverage the complementary strengths of Wi-Fi and cellular interfaces by automatically selecting the most efficient one based on context information. We formulate the selection of wireless interfaces as a statistical decision problem. The challenge is to accurately estimate Wi-Fi network conditions without powering up the network interface. We explore the use of different context information, including time, history, cellular network conditions, and device motion, to statistically estimate Wi-Fi network conditions with negligible overhead. We evaluate several context-based algorithms for the estimation and prediction of current and future network conditions. Simulations using field-collected traces show that our network estimation algorithms can improve the average battery lifetime of a commercial mobile phone for an ECG reporting application by 40 percent, very close to the estimated theoretical upper bound of 42 percent. Furthermore, our most effective algorithm can predict Wi-Fi availability for one and ten hours into the future with 95 and 90 percent accuracy, respectively. <s> BIB002
To increase handover accuracy, vertical handover approaches utilize a large set of local and network-related parameters. However, this comes at the cost of higher network overhead, which can lead to increased delay, longer handover duration, higher processing load and, ultimately, higher energy consumption. Conversely, considering a small set of parameters might improve the energy efficiency, but at the cost of handover accuracy. Thus, in order to maintain a good trade-off between energy efficiency and handover accuracy, a balanced set of parameters needs to be considered. In this context, this section presents the impact of specific parameters on handover accuracy and the trade-off they provide in terms of energy efficiency. The parameters are classified into two groups: (1) mobile-based parameters that can be collected locally on the mobile device side, and (2) network-based parameters that are received remotely from the network side. Both categories are summarized in Table 1 below.

As seen in Table 1, most of the parameters present a high energy-efficiency trade-off, depending on the specific problem they are addressing. For example, energy savings might be achieved by reducing the number of handovers when making use of the coverage range information about the PoAs in the vicinity. Avoiding frequent retransmissions could also lead to energy savings; using information about the application requirements and the underlying transport protocol, energy savings could be achieved by selecting an energy-efficient transmission, such as UDP. An important aspect to consider is what information is readily available to the decision maker and how accurate and/or dynamic that information is. For example, because of the dynamics of the wireless environment, the received signal strength or the available bandwidth can fluctuate strongly over short periods of time. In contrast, the coverage and PoA locations are less dynamic and do not change on a daily basis, whereas the security level and access methodology are essentially static. Note that the parameters presented below do not represent an exhaustive list; they are possible choices that might be used as input to the handover decision strategy. Some solutions may use only a subset of these parameters, or may include additional parameters as well. It should also be noted that most local and network-related handover decision parameters are closely interrelated and cannot be addressed individually. For instance, network connection time is closely related to the RSSI, location and speed of the device. Therefore, a multi-criteria-based handover procedure is more suitable, as it has a higher potential to achieve an energy-efficient network/interface selection.

Table 1. Local and network-related handover decision parameters and their energy-efficiency trade-offs.

| Parameter | Description | Impact on handover accuracy | Energy-efficiency trade-off | Refs |
| Speed | Information about the speed of the mobile user (e.g., stationary, pedestrian walking or vehicular speed). | — | — | — |
| Location | Global Positioning System (GPS) can be used to obtain the location of the device relative to its PoA. | Used in combination with other parameters for improved accuracy by deciding when and where to handover. | Very low if GPS is used, as it consumes approximately ten times more energy than an accelerometer BIB001. | [12] |
| Accelerometer | Widely used as a motion sensor in the latest smart devices. | Used in combination with other parameters for improved accuracy by performing channel scanning only when movement is detected. | Medium; energy efficiency before handover can be achieved as in the work presented in . | [48-50, 54, 72, 73] |
| User Preferences | Enables users to express their preferences towards certain criteria. | High, if the user gives priority to handover accuracy. | High, if the user gives priority to energy efficiency. | BIB002 |
| Historical Information | Storing information about the networks the device was associated with at specific times and locations. | High, as it speeds up the network selection process based on previous user experience. | High, as a reduction of power consumption during the decision process can be achieved. | — |
| — | — | High, when used in combination with other parameters to speed up the network selection process. | High, as a reduction of power consumption during network discovery is achieved. | — |
| Security and Access Methodology | High security procedures, request/response-based access methodologies, authentication and encryption processes of some networks give mobile devices a secure but slow communication channel. | High, when used in combination with other parameters. | High, as associating with a network/interface that has minimal or no security procedures increases the energy efficiency, since the additional overheads on the system are eliminated. | BIB002 BIB003 |
| Number of Connected Users | Information on PoA load allows mobile devices to estimate the channel utilization and the probability of collisions. | High, as mobile devices can associate with the network that has the minimum number of connected users. | High, as the probability of collision is decreased and hence the mobile device consumes less energy. | [52, 67] |
| Coverage | Coverage range information about the PoAs in the vicinity. | High, as the coverage information of each network/interface allows the number of handover associations to be minimized. | High, by reducing the number of handovers. | BIB002 BIB003 BIB004 BIB001 |
| Channel Busy Time (CBT) | Estimation of the transmission duration. | High, when used in combination with other parameters. | High, when used in combination with other parameters. | — |
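As a concrete illustration of how one of these parameters can be exploited, the sketch below gates channel scanning on accelerometer-detected movement, in the spirit of the accelerometer row of Table 1: a stationary device skips the energy-hungry scan entirely. The threshold and the platform hooks (`read_accel`, `scan_channels`) are illustrative assumptions, not an API from any cited work:

```python
import math

MOTION_THRESHOLD = 1.5  # m/s^2 deviation from gravity; assumed tuning value

def magnitude(ax: float, ay: float, az: float) -> float:
    """Euclidean norm of the raw accelerometer reading."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def scan_if_moving(read_accel, scan_channels, gravity: float = 9.81):
    """Trigger a channel scan only when the device appears to be moving.

    read_accel() -> (ax, ay, az) and scan_channels() are hypothetical
    platform hooks supplied by the caller.
    """
    ax, ay, az = read_accel()
    if abs(magnitude(ax, ay, az) - gravity) > MOTION_THRESHOLD:
        return scan_channels()   # movement detected: refresh candidate PoAs
    return None                  # stationary: reuse the last scan results
```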
Impact of Handover Decision Strategies on the Energy Efficiency
The parameters collected from the existing wireless networks and interfaces are weighted based on their importance during the vertical handover decision stage. The result of this stage is the selection of a network/interface, considering the information gathered throughout the channel scanning phase. Vertical handover decision strategies widely used in the network selection process include: function-based decision, user-centric decision, fuzzy-logic-based decision, game-theoretic decision and reputation-based decision. The handover decision strategies proposed in the literature try to find the best trade-off among various parameters rather than focusing on a single one. For example, the function-based decision selects the network/interface that maximizes an objective function. In most cases, the objective function is a weighted sum of different parameters, such as QoS, cost, trust, power consumption, compatibility, user preferences and capacity. Consequently, the energy efficiency obtained with this strategy varies according to the weight assigned to power consumption (see the sketch after this paragraph). In user-centric decision solutions, user satisfaction plays an important role in the decision criteria; energy efficiency under these strategies therefore varies according to users' preferences in terms of performance, QoS, cost and power consumption. Fuzzy-logic-based decision deals with uncertainty: it analyzes vague data, such as the behavior of the RSS, the channel utilization, the energy consumed per bit or the BER, and this information is then combined with other decision strategies to select the network/interface that offers the best trade-off between these parameters. The vertical handover decision problem can also be modeled using game-theoretic approaches, such as cooperative, non-cooperative, hierarchical and evolutionary games BIB003 BIB004. Finally, reputation-based decision makes use of a subjective metric that relies on earlier experiences and observations of users in similar situations; global reputation values computed from previous user experience can speed up the overall handover process and enable mobile devices to perform fast and energy-efficient VHO operations.

Thus, decision strategies select a network/interface based on the information gathered throughout the network discovery phase. Moreover, the chosen decision strategy has a direct impact on the data processing intensity and memory usage, which in turn can introduce delay and extra energy consumption into the overall handover process. An optimal energy-efficient vertical handover can be achieved by employing a decision strategy that gathers only the most significant local and network-related information and selects the network/interface expected to offer the best trade-off between performance and energy efficiency. Comprehensive analyses of vertical handover decision strategies can be found in BIB003 BIB001 BIB004 BIB002.
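A function-based decision of the kind described above reduces to maximizing a weighted sum over normalized criteria. A minimal sketch follows; the criteria, weights and candidate values are illustrative assumptions rather than a specific scheme from the literature. Raising the "energy" weight steers the choice toward the more frugal network:

```python
# Candidate networks with criterion values normalized to [0, 1]; higher is better.
CANDIDATES = {
    "wifi": {"qos": 0.9, "cost": 0.8, "energy": 0.7},
    "lte":  {"qos": 0.8, "cost": 0.4, "energy": 0.5},
}

WEIGHTS = {"qos": 0.4, "cost": 0.2, "energy": 0.4}  # importance weights, sum to 1

def select_network(candidates, weights):
    """Pick the candidate maximizing the weighted-sum objective function."""
    def score(criteria):
        return sum(weights[k] * criteria[k] for k in weights)
    return max(candidates, key=lambda name: score(candidates[name]))

print(select_network(CANDIDATES, WEIGHTS))  # -> "wifi" with these weights
```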
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> TCP is a transport protocol that guarantees reliable ordered delivery of data packets over wired networks. Although it is well tuned for wired networks, TCP performs poorly in mobile ad hoc networks (MANETs). This is because TCP's implicit assumption that any packet loss is due to congestion is invalid in mobile ad hoc networks where wireless channel errors, link contention, mobility and multipath routing may significantly corrupt or disorder packet delivery. If TCP misinterprets such losses as congestion and consequently invokes congestion control procedures, it will suffer from performance degradation and unfairness. To understand TCP behaviour and improve the TCP performance over multi-hop ad hoc networks, considerable research has been carried out. As the research in this area is still active and many problems are still wide open, an in-depth and timely survey is needed. In this paper, the challenges imposed on the standard TCP in the wireless ad hoc network environment are first identified. Then some existing solutions are discussed according to their design philosophy. Finally, some suggestions regarding future research issues are presented. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> In this letter, we propose an efficient power-saving mechanism using paging of cellular networks for WLAN in heterogeneous wireless networks, where WLAN interface is turned off during idle state without any periodic wake-up in order to save power consumption while at the same time, the existing paging of cellular network is utilized in place of beacons in WLAN. For the proposed mechanism, the mean power consumption is investigated via analytical and simulation results. <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Inheterogeneous wireless networks, there havebeen several efforts aimedathaving mobiledevices equipped with multiple interfaces connect optimally totheaccess network that minimizes their powerconsumption. However, astudy ofexisting schemes notes thatintheidle state, adevice withbothaWLAN andaWWAN interface needtokeepbothinterfaces "on"inorder toreceive periodic beaconmessages fromtheAP (WLAN)and downlink control information fromthebasestation (WWAN), resulting insignificant powerconsumption. Therefore, inthis paper, we propose a Power-efficient Communication Protocol thatincludes turning offtheWLAN interface after itenters the idlestate andusingtheexisting paging ofWWAN inorderto wakeuptheWLAN interface whenthere isincoming long-lived multimedia data. Further, wepropose turning ontheWLAN interface when thenumberofpackets intheradio network controller's buffer reaches a certain threshold level inordertoavoidrepeatedly turning onandoffWLAN interfaces, thatconsumes asignificant amountofpower. Thetradeoffs between thepowersaving and thenumberofpackets dropped atthebuffer areinvestigated analytically. Simulation results forscenarios ofinterest arealso provided. 
<s> BIB003 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> This paper considers two issues arising in an integrated IEEE 802.16e/802.11 network: (1) finding a possible network, which mobile station (MSTA) can switch to, and (2) making a decision whether to execute a vertical handoff (VHO). For this purpose, we propose that 802.16e Base Stations (BSs) periodically broadcast the information about the density of 802.11 access points (APs) within their cell coverage. Based on this information, we develop a novel model, which predicts the successful scan probability during a given scan time. Using this analytical model, we devise an energy-efficient scan policy (ESP) algorithm, which enables an MSTA to decide (1) whether to attempt to discover APs in the current 802.16e cell, and (2) if so, how to set the 802.11 active scan interval considering the energy consumption. For the VHO decision, we mainly consider the impact of the service charge. Especially, a practical service fee models, i.e., flat pricing for WLAN and partially-flat pricing for 802.16e network, are considered. Under this service charge plan, we need to control the usage of the 802.16e network to minimize the user's payment. To this end, we propose a scheme, which intelligently postpones the delivery of the delay-tolerant traffic within a certain time limit combined with ESP algorithm. <s> BIB004 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro, WLAN networks are candidate networks for this selection algorithm. This algorithm is composed of the power consumption prediction algorithm and the final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class and power consumption for each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared the handover delay, this particular network will be removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. On the other hand, the final network selection algorithm consists of AHP (analytic hierarchical process) and GRA (grey relational analysis). The global factors of the network selection structure are QoS, cost and lifetime. If user preference is lifetime, our selection algorithm selects the network that stays longer due to low power consumption. Also, we conduct some simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides longer lifetime in the hybrid wireless network environment. <s> BIB005 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The proliferation of WLANs and the ubiquitous coverage of cellular networks have resulted in several integration proposals towards 4G networks. 
Among them, the tight- coupled WLAN/UMTS architectures promise seamless service continuity to users and enhanced network performance. Most of these solutions assume that only one interface is active at a time, while much fewer consider the concurrent use of both WLAN and UMTS interfaces as this is expected to consume more energy. This paper presents a detailed description of power consumption for the two different tight-coupled WLAN/UMTS approaches based on the states of the wireless devices. A simple analytical model is provided for estimating the power consumption in each approach, while a simulation model measures the power needs for more complicated cases. Moreover, the enhancement due to a power-saving mechanism in WLAN is also assumed in the system and useful deductions are provided about the average power consumption per mobile terminal. <s> BIB006 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> With the advent of a number of wireless network technologies such as WCDMA and WLAN, current mobiles are equipped with multiple network interfaces, so called Multi-Mode Terminal (MMT). MMTs are capable to access different kinds of networks by performing a vertical handover between heterogeneous wireless networks, where during the idle state, the MMTs consume a lot of energy since their WLAN interface must wake up for listening to periodical beacons. However, previous studies on the vertical handover did not address how to select the optimal interface taking into account the characteristics of MMTs, especially energy consumption. Therefore, in this paper, we propose an energy-efficient interface selection scheme for MMTs in the integrated WLAN and cellular networks. The proposed interface selection scheme takes advantage of existing out-of-band paging channel of cellular networks, so that the WLAN interface can be completely turned off during the idle state leading to reduction in energy consumption. Simulation results show that the proposed scheme outperforms conventional approaches in terms of energy consumption with reduced signaling overhead and handover delay. <s> BIB007 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> We consider an issue arising in a vertical handoff (VHO) between IEEE 802.16e and IEEE 802.11 networks, i.e., how efficiently the scanning operation can be controlled to find a target wireless local area network (WLAN) access point (AP). For this purpose, we propose that the 802.16e Base Stations (BSs) periodically broadcast the information about the density of 802.11 APs within their cell coverage. Based on this information, we develop a mathematical model which predicts the probability that any WLAN AP is found within a given scan time. The analytical model is validated via a comparison with the simulation results obtained for both random waypoint and city section mobility models. Based on the analytical model, we devise an energy-efficient scanning algorithm, which enables an mobile station (MSTA) to decide (1) whether to conduct a scan operation in the current 802.16e cell, and (2) if so, how to configure the inter-scan interval considering the energy consumption due to the scan operation. 
An intensive simulation study shows that our proposed scanning algorithm indeed works well as designed. <s> BIB008 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> This paper proposes an energy-aware handoff algorithm based on energy consumption measurements of UMTS and 802.11 WLAN networks on an Android mobile phone. The handoff algorithm uses estimation of application traffic size to find the minimum energy cost alternative by comparing the cost of using UMTS with the cost of performing an opportunistic downward vertical handoff to a WLAN and using WLAN for the transfer and the eventual upward vertical handoff back to UMTS. Our experiments show that the energy cost of UMTS is nearly equal to WLAN as a function of transfer time, but for bulk transfers, transferring a byte of data over UMTS can be over a hundred times more expensive than over WLAN. Further, we discovered that the energy cost of the vertical handoff is quite high, comparable to downloading 0.12-0.67 MB of data over UMTS. To calculate the energy cost of data transfers before they take place, we propose and evaluate a distributed traffic estimation mechanism. The mechanism can predict how much data will be transferred due to a user action (i.e. clicking of an URL link). We provide initial results on the accuracy of the mechanism. Finally, we perform a numerical analysis on the the performance of the handoff algorithm and show that it can reduce the energy consumption significantly when compared with simple policies. <s> BIB009 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> WiFi networks have been deployed in many regions such as buildings and campuses to provide wireless Internet access. However, to support ubiquitous wireless service, one possibility is to integrate these narrow-range WiFi networks with a wide-range network such as WiMAX. Under this WiMAX-WiFi integrated network, how to conduct energy-efficient handovers is a critical issue. In this paper, we propose a handover scheme with geographic mobility awareness (HGMA) by considering the past handover patterns of mobile devices. HGMA can conserve the energy of handovering devices from three aspects. First, it prevents mobile devices from triggering unnecessary handovers by measuring their received signal strength and moving speeds. Second, it includes a handover candidate selection (HCS) method for mobile devices to intelligently select a subset of WiFi access points or WiMAX relay stations to be scanned. Therefore, mobile devices can reduce their network scanning and thus save their energy. Third, HGMA prefers mobile devices staying in their original WiMAX or WiFi networks. This can prevent devices from consuming too much energy on interface switching. Simulation results show that HGMA can reduce about 69% and 30% of energy consumption on network scanning and interface switching, respectively, and with 16% to 64% more probabilities for mobile devices staying in WiFi networks. 
<s> BIB010 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> This paper optimizes handover decisions between WLAN (802.11) and WiMAX (802.16e) standards, for both uplink and downlink data transmission. The handover becomes part of a cross-layer approach controlling several knobs (air interface parameters or platform settings) in order to reduce the terminal power consumption. ::: ::: The first step is to derive detailed power and performance models for both standards, in order to correctly evaluate the opportunity of a handover. This includes channel fading fluctuations, extraction of MAC-level behavior and packet error rates, and overall power consumption from the wireless platform. Such models enable first optimal single-standard power-throughput trade-offs, that will be used as reference points before adding the handover possibility in order to assess its specific gain. ::: ::: The second step is the design of a handover controller that selects the network with the lowest expected power for the required rate. The proposed mechanism is based on regular scanning of both networks. It computes the expected energy in order to send a given amount of data over each network, taking handover cost into account, and selects the most appropriate one. ::: ::: Based on a software-defined radio platform and a typical channel coherence time of one second, simulations demonstrate a power saving factor up to 2.5 as a function of the scenario, compared to a single-standard system that is already cross-layer optimized. This illustrates the large gain that is available, besides the handover advantage in terms of improved connectivity. <s> BIB011 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> To provide wireless Internet access, WiFi networks have been deployed in many regions such as buildings and campuses. However, WiFi networks are still insufficient to support ubiquitous wireless service due to their narrow coverage. One possibility to resolve this deficiency is to integrate WiFi networks with the wide-range WiMAX networks. Under such an integrated WiMAX and WiFi network, how to conduct energy-efficient handovers is a critical issue. In this paper, we propose a handover scheme with geographic mobility awareness (HGMA), which considers the historical handover patterns of mobile devices. HGMA can conserve the energy of handovering devices from three aspects. First, it prevents mobile devices from triggering unnecessary handovers according to their received signal strength and moving speeds. Second, it contains a handover candidate selection method for mobile devices to intelligently select a subset of WiFi access points or WiMAX relay stations to be scanned. Therefore, mobile devices can reduce their network scanning and thus save their energy. Third, HGMA prefers mobile devices staying in their original WiMAX or WiFi networks. This can prevent mobile devices from consuming too much energy on interface switching. In addition, HGMA prefers the low-tier WiFi network over the WiMAX network and guarantees the bandwidth requirements of handovering devices. 
Simulation results show that HGMA can save about 59– 80p of energy consumption of a handover operation, make mobile devices to associate with WiFi networks with 16–62p more probabilities, and increase about 20–61p of QoS satisfaction ratio to handovering devices. Copyright © 2009 John Wiley & Sons, Ltd. ::: ::: In this paper, we identify the GM (Geographic Mobility) feature of mobile devices in an integrated WiMAX and WiFi network. By adopting the GM feature, we propose an energy-efficient handover scheme with geographic mobility awareness (HGMA). This scheme can eliminate unnecessary handovers, reduce the number of network scanning, and avoid switching network interfaces too frequently. In addition, HGMA prefers the low-tier WiFi network over the WiMax network, and guarantees the bandwidth requirements of handovering mobile devies. Copyright © 2009 John Wiley & Sons, Ltd. <s> BIB012 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> This paper presents an energy efficient radio access network (RAN) selection for vertical handover between heterogeneous networks. The proposed RAN selection switches evaluation bases by application and adaptively selects a RAN with low energy consumption. In addition, the selection employs a penalty function that avoids discarded vertical handovers to reduce handover overhead and network loading by reducing the integrations. <s> BIB013 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> This paper proposes a new way of measuring signal to interference and noise ratio (SINR) at a low level of power consumption for vertical handover. In order to select the most suitable radio access networks (RAN) in vertical handover, the SINR of the alternative RAN should be measured at a certain interval while communicating with the existing RAN. In our proposal, the SINR measurement interval for the alternative RAN is controlled on the basis of SINR fluctuations in order to maintain high tracking ability and reduce power consumption during monitoring operations for vertical handover. In addition, a simple probability inequality is applied to detect SINR fluctuations with high precision and achieve low computational complexity. The effectiveness of the proposed monitoring method was verified through computer simulations and the results showed that the averaged SINR could be measured to an accuracy of about 1 dB while maintaining sleep mode at about 30%. <s> BIB014 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Mobile devices people carry in their pockets every day can use various means to connect to data services all around the Internet, e.g., 2G, 3G and WLAN. This has been an important development towards an easily accessible and always-on the Internet. While radio connectivity, bits rates in particular, has developed tremendously during the recent years, battery technology and electronics has not. Thus, the more we use Internet services on mobile phones, the faster the battery of the device runs out, even within a few hours. 
This paper analysis various radio technologies found in modern mobile phones, and characterize their power consumption with different uplink and downlink data transfers. We are interested to understand how much energy is needed per bit of user data when sending or receiving data over various wireless links. <s> BIB015 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The power consumption of wireless access networks will become an important issue in the coming years. In this paper, the power consumption of base stations for mobile WiMAX, HSPA, and LTE is modelled. This power consumption is related to the coverage of the base station. The considered technologies are compared according to their energy efficiency for different bit rates at a bandwidth of 5 MHz. For this particular case and based on the assumptions of parameters of the specifications, HSPA is the least energy-efficient technology. Until a bit rate of 11 Mbps LTE is the most energy-efficient while for higher bit rates mobile WiMAX performs the best. Furthermore the influence of MIMO is investigated. A decrease of about 80 % for mobile WiMAX and about 74 % for HSPA and LTE for the power consumption per covered area is found for a 4×4 MIMO system compared to a SISO system. The introduction of MIMO has thus a positive influence on the energy efficiency of the considered technologies. The power consumption and coverage model for base stations is then used to develop a prediction tool for power consumption in wireless access networks. <s> BIB016 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Context information brings new opportunities for efficient and effective system resource management of mobile devices. In this work, we focus on the use of context information to achieve energy-efficient, ubiquitous wireless connectivity. Our field-collected data show that the energy cost of network interfaces poses a great challenge to ubiquitous connectivity, despite decent availability of cellular networks. We propose to leverage the complementary strengths of Wi-Fi and cellular interfaces by automatically selecting the most efficient one based on context information. We formulate the selection of wireless interfaces as a statistical decision problem. The challenge is to accurately estimate Wi-Fi network conditions without powering up the network interface. We explore the use of different context information, including time, history, cellular network conditions, and device motion, to statistically estimate Wi-Fi network conditions with negligible overhead. We evaluate several context-based algorithms for the estimation and prediction of current and future network conditions. Simulations using field-collected traces show that our network estimation algorithms can improve the average battery lifetime of a commercial mobile phone for an ECG reporting application by 40 percent, very close to the estimated theoretical upper bound of 42 percent. Furthermore, our most effective algorithm can predict Wi-Fi availability for one and ten hours into the future with 95 and 90 percent accuracy, respectively. 
<s> BIB017 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Current mobile devices are equipped with multi-standard interfaces to fully utilize both the existing and the imminent Public Land Mobile Network infrastructure. This flexibility comes with significant energy consumption overheads at the mobile terminal side and thus, the adopted interface / network selection scheme, a.k.a. Vertical Handover mechanism, should be based on both Quality of Service oriented and energy-efficiency criteria. This paper summarizes the current state of the art on energy-centric vertical handover algorithms, discusses weak aspects of existing approaches and presents a novel context-aware vertical handover framework towards energy-efficiency. Rather than a pure energy-centric vertical handover scheme, this framework is a context-awareness enabler for energy-centric vertical handover decision making. <s> BIB018 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The growing presence of concurrent heterogeneous wireless access networks, together with the increasing service demands from the end-users, require re-thinking of current access selection polices and appropriate management mechanisms, namely concerning quality of service, energy efficiency, etc. The recent IEEE 802.21 standard introduces link layer intelligence as well as related network information to upper layers in order to optimise handovers between networks of different technologies, such as WiMAX, Wi-Fi and 3GPP. With the massification of mobile terminals with multiple wireless interfaces it is important to efficiently manage those interfaces not only to appropriately provide the requested services to the user, but also to do that in an energy efficient way in order to allow higher mobility to the user by extending the battery life of its terminal. The study the IEEE 802.21 standard is briefly introduced, presented in the signalling in a handover between WiMAX and Wi-Fi, and exploited through an implementation in ns-2 introducing a simple, but effective, energy-saving approach. <s> BIB019 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Mobile terminals in heterogeneous wireless networks continuously undergo network selection within the initial access process and handover process. In order for a mobile terminal to be connected to a network in the best possible way in terms of QoS performance and energy consumption, this paper presents a novel method that takes into account user preferences, network conditions, QoS and energy consumption requirements in order to select the optimal network which achieves the best balance between performance and energy consumption. The proposed network selection method incorporates the use of fuzzy logic because of the available sources of information from different radio access technology (RAT) are qualitatively interpreted and heterogeneous in nature, and adopts different energy consumption metrics for real-time and non-real-time applications. Finally, simulations confirm our scheme's suitability and effectiveness. 
<s> BIB020 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> In this paper, we propose an advanced model, called e-Aware, for estimating how application layer protocol properties affect the energy consumption of mobile devices, operating in 3G (WCDMA) and WLAN (802.11) networks. The main motivation for the model is to facilitate designing energy-efficient networking solutions, by reducing the need for time-consuming measurements with real-life networks and devices. The model makes a distinction between signaling and media transfers due to their different energy consumption characteristics, and takes into account the fundamentals of radio interface properties, such as different energy states and timers controlling them. The model is fine-tuned using device-specific coefficients that are defined according to real-world measurements with actual devices. We have implemented the model and simulated it in Matlab environment. The correct functionality is verified by comparing the results with real-life measurements in identical networking scenarios. <s> BIB021 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Energy efficiency in wireless networks is one of the major issues for users as mobile devices rely on their batteries. In this context, Wireless Network Interface Cards (WNICs) have to be taken into consideration carefully as it consumes a significant portion of the overall system energy. In this paper, we aim to reduce the energy consumption of wireless mobile devices performing specific solutions, such as reducing the overhead of channel scanning by proposing a smart selective channel scanning method during the handover preparation phase and associating with a Point of Attachment (PoA) that is expected to consume the least amount of energy among all PoAs. Power consumption prediction of a station is made by using the channel scanning results, switching costs, Channel Busy Times (CBTs), traffic class of the station, the number of stations deployed in each PoA, and the power consumption of each WNIC. Stations performing the proposed scheme can fairly coexist with the other stations in the network. In the proposed scheme, each station makes use of its local information and the information provided by the IEEE 802.21 Media Independent Handover (MIH) Information Server (IS). Performance of the proposed scheme has been investigated by numerical analyses and extensive simulations. The results illustrate that the proposed scheme reduces the energy consumption of mobile stations and provides longer lifetime under a wide range of contention and signal strength levels. <s> BIB022 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> To address the challenging issues of energy-efficiency and seamless connectivity in heterogeneous networks, 3GPP and IEEE have recently incorporated several architectural and functional enhancements to the baseline operation of their standards for cellular and wireless local area network access, respectively. 
Based on the 3GPP Access Network Discovery and Selection Function (ANDSF) and the advanced measurement capabilities provided by the IEEE 802.11-2012 and the 3GPP Long Term Evolution - Advanced (LTE-A) Standards, we propose an ANDSF-assisted energy-efficient vertical handover decision algorithm for the heterogeneous IEEE 802.11-2012 / LTE-A network. The proposed algorithm enables a multi-mode mobile terminal to select and associate with the network point of attachment that minimizes its average overall power consumption and guarantees a minimum supported quality of service for its ongoing connections. System-level simulation is used to evaluate the performance of the proposed algorithm and compare it to that of other competing solutions. <s> BIB023 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Along with the widespread usage of mobile devices supporting multiple radio interfaces, users can connect to the mobile operator's network using different access technologies such as UMTS, WLAN and WiMAX/LTE. Depending on the network conditions and the QoS requirements of users, the usage of the right technology at the right time can provide more effective networking with higher data rate, lower cost, longer battery duration, and higher coverage area. To enable this networking effectiveness, a seamless handover mechanism to minimize service disruption is required. IEEE 802.21, which defines a set of standards for media independent handover, is capable of realizing both horizontal and vertical handovers. In this paper, a new algorithm is proposed to provide energy efficient handovers among UMTS, WiMAX, and WLAN networks. The proposed algorithm is implemented within the IEEE 802.21 framework and its performance analysis is performed using the ns-2.29 simulator. Simulation results show that the proposed algorithm significantly increases the service duration and hence decreases packet losses by dynamically changing the video downloading rate according to handover decisions and the remaining battery power levels. <s> BIB024 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The continuing growth in video content exchanged by mobile users creates challenges for the network service providers in terms of supporting seamless multimedia delivery at high quality levels, especially given the existing wireless network resources. A solution that deals with this mobile broadband data growth is to make use of multiple networks supported by diverse radio access technologies. This multiaccess solution requires innovative network selection mechanisms to keep the mobile users always best connected anywhere and anytime. Additionally, there is a need to develop energy efficient techniques in order to reduce power consumption in next-generation wireless networks, while meeting user quality expectations. In this context, this paper proposes an enhanced power-friendly access network selection solution (E-PoFANS) for multimedia delivery over heterogeneous wireless networks. E-PoFANS enables the battery of the mobile device to last longer, while performing multimedia content delivery, and maintains an acceptable user perceived quality by selecting the network that offers the best energy-quality tradeoff. 
Based on real test-bed measurements, the proposed solution is modeled and validated through simulations. The results show how by using E-PoFANS, the users achieve up to 30% more energy savings with insignificant degradation in quality, in comparison with another state-of-the-art energy efficient network selection solution. <s> BIB025 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The growing energy consumption driven by dramatic increases in data usage becomes the key issue for the users in meeting demands on longer battery life. Thus, managing energy efficiency should be the first priority for mobile terminals. In this paper, our aim is to improve the users' energy efficiency in heterogeneous network environments. To this end, we propose a network selection mechanism that exploits both user and network context information in order to efficiently utilize the radio wireless resources in heterogeneous network. The mechanism enables the multi-mode mobile terminal to select and connect to the network of attachment that maximizes its average energy efficiency and guarantees a supported quality of service for its connections. The performance of the proposed mechanism is compared against the maximum battery lifetime scheme and priority LTE access scheme in terms of the energy consumption, the throughput and the energy efficiency. According to simulation results, the system that employs the proposed network selection mechanism achieves better performance improvement in improving the user average throughput and energy efficiency. <s> BIB026 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> As wireless communications evolve towards heterogeneous networks, mobile terminals have been enabled to handover seamlessly from one network to another. At the same time, the continuous increase in the terminal power consumption has resulted in an ever-decreasing battery lifetime. To that end, the network selection is expected to play a key role on how to minimize the energy consumption, and thus to extend the terminal lifetime. Hitherto, terminals select the network that provides the highest received power. However, it has been proved that this solution does not provide the highest energy efficiency. Thus, this paper proposes an energy efficient vertical handover algorithm that selects the most energy efficient network that minimizes the uplink power consumption. The performance of the proposed algorithm is evaluated through extensive simulations and it is shown to achieve high energy efficiency gains compared to the conventional approach. <s> BIB027 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The ever increasing demand for ubiquitous wireless connectivity and the inherently limited power resources at mobile devices highlight the need for an energy efficient operation of these devices. The paper addresses this need by means of an appropriate handover policy. According to it, a handover is initiated whenever the energy efficiency of the user equipment falls below a given threshold. 
The handover target is selected among the candidates so as to maximize the achievable energy efficiency. For networks employing Proportionally Fair access, it is shown that the achievable energy efficiency can be calculated by means of a simple expression, requiring only a limited amount of network and terminal status information. In particular, no network load status is required. Simulation results demonstrate that the proposed handover policy can lead to a significantly improved (up to 15%) energy efficiency without compromising throughput performance. <s> BIB028 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> Mobile devices such as personal digital assistants (PDAs) and smartphones are widely used not only in our everyday lives but also in various industrial fields. Most of these mobile devices have multiple wireless network interfaces, such as Bluetooth, 3G, and Wi-Fi. A considerable amount of energy is consumed to transfer the data through wireless communication. Moreover, most of these mobile devices operate on limited battery power. In industrial environments, changes in the communication environment are severe due to significant noise sources and due to distortion in the transceiver circuitry of strong motors, static frequency changers, electrical discharge devices, and other devices. It is necessary to select the network interface efficiently in order to extend the lifetimes of mobile devices and their applications. Therefore, in this paper, we propose an energy-efficient adaptive wireless network interface-selection scheme (AWNIS). Our scheme is based on mathematical modeling of energy consumption and data transfer delay patterns. Our scheme selects the best wireless network interface in terms of energy consumption by considering the link quality and adapting a dynamic network interface-selection interval according to the network environment. The simulation results show that the proposed scheme effectively improves the energy efficiency while guaranteeing a certain level of data transfer delay. <s> BIB029 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> Existing Energy Efficient Handover Approaches <s> The coverage area and the capacity of existing cellular network systems are not sufficient to meet the growing demand for high data rates in wireless communications. Heterogeneous Networks (HetNets) based on Long Term Evolution (LTE) can be a possible solution to enhance indoor coverage, deliver high bandwidths and off-load traffic from the macro base stations. However, this technology is still under development and several open issues still have to be investigated, such as interference coordination, power consumption, resource management and handover techniques. The aim of this work is to guarantee the reduction of power consumption using a new handover algorithm based on a green policy. In addition, the proposed scheme guarantees the minimization of unnecessary handovers. The simulation campaigns have been conducted through the open-source Network Simulator 3 (NS-3). The preliminary results demonstrate that an efficient use of the green approach improves HetNet performance in terms of power saving and energy efficiency, and reduces the number of unnecessary handovers. <s> BIB030
There have been many works proposed in the literature that focus on energy-efficient interface/network selection. These works are either network-assisted or initiated using only information obtained locally by the mobile device. Chamodrakas et al. propose an energy-efficient interface/network selection approach based on a modified fuzzy version of the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) that takes into account network conditions (with the help of the MIH protocol), user preferences, QoS and energy consumption requirements. Additionally, the authors in BIB022 BIB023 aim to decrease the energy consumption of mobile devices by making use of a smart selective channel scanning approach and associating with the PoA that is expected to consume the least amount of energy among all PoAs. In these works, the expected amount of energy consumption is obtained by using the channel scanning results, channel busy times (CBTs), RSS and SINR values, the traffic class of the station, switching costs, the number of stations deployed in each PoA, and the power consumption of each WNIC. While an IEEE 802.21 MIH-assisted interface/network selection between 3G and WiFi is targeted in BIB022 , an ANDSF-assisted interface/network selection between LTE-A and WiFi is targeted in BIB023 . Unlike the aforementioned algorithms, Coskun et al. in BIB024 BIB025 utilize specific parameters (e.g., user mobility, user preferences, application requirements, and network conditions) and propose an energy-efficient MIH-assisted network selection procedure for multimedia delivery over wireless heterogeneous networks. The proposed method increases the battery lifetime of mobile devices by selecting the network that offers the best energy-quality trade-off while performing multimedia content delivery. In BIB019 , a geo-referenced network selection that aims to increase the mobility of mobile devices is proposed. The proposed scheme makes use of GPS, the power consumption values of each NIC, the list of available PoAs and the IEEE 802.21 protocol to decide when and where to handover. There have also been works BIB003 BIB026 in which, whenever the number of packets in the radio network controller's buffer reaches a certain threshold, the WLAN interface is re-activated by using the existing paging of WWANs. Additionally, Zhang et al. in BIB026 propose an energy management mechanism that increases users' energy efficiency in non-saturated wireless heterogeneous networks by making use of both a central server and the ANDSF protocol. The proposed method provides energy efficiency, balancing the user preferences and their energy requirements. Apart from the works that utilize either VHO standards or central servers, other proposed solutions BIB027 BIB028 BIB029 BIB007 BIB002 aim to associate with the most energy-efficient interface/network using an expected energy consumption model (a minimal sketch of this idea is given at the end of this subsection). For instance, Pons et al. in BIB027 dynamically estimate the network/interface that is expected to consume the least amount of energy for the uplink traffic between WLAN and LTE networks. In BIB028 , it is shown that the achievable energy efficiency can be calculated by means of a simple expression, requiring only a limited amount of local and network-related information (e.g., data rate, throughput, channel fading and network load), for networks employing Proportionally Fair Access (PFA). Kim et al. in [61] also propose a network/interface selection method called AWNIS that is based on mathematical modeling of energy consumption and data transfer delay patterns.
The proposed method chooses a PoA by taking the link quality into account and adjusting a dynamic network/interface selection interval according to the network environment. Similar to BIB003 , Seo et al. in BIB007 and Lee et al. in [63] also propose an interface selection method that turns the WLAN interface completely off, without any periodic wake-up, during the idle state to save energy. In the proposed method, the existing out-of-band paging channels (PCHs) of cellular networks are exploited within the mobile stations. These schemes may reduce the total energy consumption dramatically if the duration of each idle period is known beforehand. However, predicting the exact idle time of a station is not an easy task, and hence the station may stay in long transmission/receiving states under the proposed method. Additionally, this method is effective only for tightly-coupled systems that make the WLAN appear to the 3G core network as another 3G access network. Furthermore, Choi et al. in BIB004 and BIB008 propose an energy-efficient network-scanning algorithm for integrated IEEE 802.16e/802.11 networks. In order to achieve energy efficiency, 802.16e Base Stations (BSs) periodically broadcast information about the density of 802.11 APs within their cell coverage. In this context, the proposed scheme forecasts the effective scanning probability during a given scanning time. The authors in BIB005 propose a multiple-criteria decision method to estimate the expected lifetime of stations in a heterogeneous wireless environment (CDMA, WiBro, WLAN). The proposed method makes use of the Analytic Hierarchy Process (AHP) and Grey Relational Analysis (GRA) and takes the bandwidth, BER, jitter, delay, cost, QoS, and battery lifetime as input parameters. Petander et al. in BIB009 consider the handover operation between WLAN and UMTS networks on an Android mobile phone and examine the energy consumption values. The results indicate that the energy consumption of UMTS is approximately equal to that of WLAN as a function of transfer time. However, for bulk transfers, the results indicate that transferring a byte of data using UMTS may require much more energy (over a hundred times more) than using WLAN. In this context, the proposed approach makes use of traffic load estimations according to the Signal-to-Noise Ratio (SNR) and the network load provided by the Home Agent (HA). The proposed scheme uses the aforementioned information to compute a threshold for the UMTS-to-WLAN handover operation. Moreover, a handover from WLAN to UMTS is automatically initiated once the station leaves the coverage area of a WLAN. Additionally, Yang et al. in BIB010 and BIB012 propose an energy-efficient interface selection for integrated WiMAX-WLAN networks making use of geographic mobility awareness. The proposed method initiates handover candidate selection based on historical handover geographic patterns, utilizing the RSS of the networks and the velocity of the station. Additionally, Desset et al. in BIB011 propose an energy-efficient handover decision strategy for both uplink and downlink data transmission between WLAN and WiMAX networks. In this context, the authors first examine related metrics, such as channel fading fluctuations, extraction of MAC-level behavior, packet error rates, and the overall power consumption in each state. Then, the authors present a handover controller to find the network that has the lowest expected power consumption for the required transmission rate. Rahmati et al.
in BIB017 express the selection of wireless interfaces/networks as a statistical decision problem. In this context, the authors explore various context information metrics, such as the time, history, cellular network conditions, and device motion, to statistically estimate Wi-Fi network conditions without powering up the network interface. Xenakis et al. in BIB018 propose an energy-efficient interface/network selection algorithm that makes use of parameters such as the network congestion, the SINR level, the offered QoS on the target PoA, the remaining battery lifetime at the mobile station, the energy consumption on the current PoA, the charging policy and user preferences. Nevertheless, this work mainly uses the same analytical power consumption estimation for different radio access technologies, which results in imprecise computations. In BIB020 , Fan et al. propose an energy-efficient interface selection strategy for real-time and non-real-time applications based on fuzzy logic that considers network conditions, user preferences and QoS requirements. All the aforementioned energy-efficient vertical handover approaches are mainly network-assisted approaches, initiated by utilizing information obtained remotely from the networks. However, there are also some approaches BIB013 BIB014 BIB001 BIB030 BIB021 that are initiated using only the local information obtained by the mobile station itself. For instance, in BIB013 , Kanno et al. propose an energy-efficient interface selection scheme based on the traffic type of the application running on the mobile station, as the energy requirements of different traffic types differ. For instance, a non-real-time application, such as a file download, consumes energy until the end of its process, mainly staying in the receiving state. In contrast, a real-time application, such as voice communication, consumes energy in the transmitting, receiving and idle states, as it does not always have a frame in its queue to transmit or receive. Additionally, Ikeda et al. in BIB014 propose a new way of measuring the signal-to-interference-and-noise ratio (SINR) at a low level of power consumption for vertical handover. In this scheme, the SINR values of the other RANs in the vicinity are measured at a certain interval while communicating with the existing RAN. In BIB001 , energy is saved by a method that activates the network interfaces through location-based wireless network discovery, instead of keeping them "alive" continuously. However, the energy saved using this method is inversely proportional to the frequency of activation of the interfaces. Moreover, GPS solutions are not very practical in indoor or urban environments. In BIB030 , Araniti et al. focus on green interface selection policies and aim to guarantee an efficient management of the power consumed by base stations (BSs) and to reduce unnecessary handovers. In this context, the proposed scheme rejects inbound handover requests from stations with high mobility and allows only the handovers that do not increase the overall transmitted power of the target BS. Finally, in BIB021 , Harjula et al. propose an approach, referred to as e-Aware, to estimate the impact of application-layer protocol properties on the energy consumption of mobile devices operating in 3G and WLAN networks. The proposed energy consumption model is a mathematical model that estimates the energy consumption of network operations, such as signaling and media transfers.
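To make the expected-energy idea shared by several of the schemes above (e.g., BIB027 BIB028 ) concrete, the following minimal sketch estimates the energy each candidate PoA would spend on a pending transfer and selects the minimum. It is not taken from any of the surveyed papers; all names, structure and figures are illustrative assumptions.

```python
# Hypothetical expected-energy interface selection: estimate the energy each
# candidate PoA would spend on a pending transfer and pick the minimum.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str             # e.g., "WiFi", "LTE"
    tx_power_w: float     # average radio power while transferring (W), assumed known
    goodput_bps: float    # goodput estimated from RSS/SINR/load (bit/s), assumed known
    switch_cost_j: float  # one-time energy cost of handing over to this PoA (J)

def expected_energy_j(c: Candidate, pending_bits: float, current: str) -> float:
    """Energy ~ power * transfer time, plus the switching cost if a handover is needed."""
    transfer_time_s = pending_bits / c.goodput_bps
    switch = 0.0 if c.name == current else c.switch_cost_j
    return c.tx_power_w * transfer_time_s + switch

def select_interface(cands, pending_bits, current):
    # Associate with the PoA that minimizes the expected energy for the pending traffic.
    return min(cands, key=lambda c: expected_energy_j(c, pending_bits, current))

if __name__ == "__main__":
    cands = [
        Candidate("WiFi", tx_power_w=0.8, goodput_bps=20e6, switch_cost_j=2.0),
        Candidate("LTE",  tx_power_w=1.5, goodput_bps=12e6, switch_cost_j=3.5),
    ]
    best = select_interface(cands, pending_bits=400e6, current="LTE")
    print(f"Associate with {best.name}")  # WiFi wins here: 18 J vs. 50 J
```

The surveyed algorithms differ mainly in how the goodput and power terms are estimated (CBTs, SINR, network load, traffic class, etc.), but this cost-minimization skeleton is common to most of them.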
Apart from the energy-efficient network/interface selection approaches, there are also some works that examine the total amount of energy consumed by mobile devices from various angles, such as the architecture, the operating system, the available resources, etc. For example, in BIB006 , the authors examine the energy consumption characteristics of two approaches to tight-coupling architectures. In the first approach, only one interface is active at a time, whereas in the second approach both interfaces may be concurrently active. Additionally, Wang et al. in BIB015 present the results of real-time measurements of the uplink and downlink power consumption of EDGE, HSPA and 802.11 radio interfaces. In this regard, the authors suggest that data must be transmitted/received in bursts to keep the interfaces in the low-power-consumption mode for longer (a rough illustration is sketched after this paragraph). Furthermore, the power consumption of base stations for mobile WiMAX, HSPA, and LTE is modeled, based on the coverage of the base station, in BIB016 .
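The burst-transfer recommendation of BIB015 can be illustrated with a rough back-of-the-envelope model. The power levels, tail time and rates below are illustrative assumptions rather than measured values, and the post-transfer "tail" (the time the radio lingers in its high-power state) is crudely modeled at full active power.

```python
# Illustrative trickle-vs-burst energy comparison for one transfer period.
def energy_j(total_bits, rate_bps, active_power_w, idle_power_w,
             tail_time_s, n_bursts, period_s):
    """Energy over one period: active transfer + per-burst tail + residual idle time."""
    active_s = total_bits / rate_bps
    tail_s = min(n_bursts * tail_time_s, period_s - active_s)  # tails cannot exceed the period
    idle_s = period_s - active_s - tail_s
    return active_power_w * active_s + active_power_w * tail_s + idle_power_w * idle_s

# Same 60 MB over a 60 s period on a cellular link: 60 small bursts vs. 1 big burst.
common = dict(total_bits=60e6 * 8, rate_bps=20e6,
              active_power_w=1.2, idle_power_w=0.02, period_s=60.0)
trickle = energy_j(**common, tail_time_s=3.0, n_bursts=60)
burst   = energy_j(**common, tail_time_s=3.0, n_bursts=1)
print(f"trickle: {trickle:.1f} J, single burst: {burst:.1f} J")  # ~72 J vs. ~33 J
```

Even with these crude assumptions, batching the same payload into one burst roughly halves the energy, because the interface pays the high-power tail once instead of sixty times.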
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> EVALUATION OF ENERGY-EFFICIENT VERTICAL HANDOVER PARAMETERS AND APPROACHES <s> The proliferation of WLANs and the ubiquitous coverage of cellular networks have resulted in several integration proposals towards 4G networks. Among them, the tight-coupled WLAN/UMTS architectures promise seamless service continuity to users and enhanced network performance. Most of these solutions assume that only one interface is active at a time, while much fewer consider the concurrent use of both WLAN and UMTS interfaces as this is expected to consume more energy. This paper presents a detailed description of power consumption for the two different tight-coupled WLAN/UMTS approaches based on the states of the wireless devices. A simple analytical model is provided for estimating the power consumption in each approach, while a simulation model measures the power needs for more complicated cases. Moreover, the enhancement due to a power-saving mechanism in WLAN is also assumed in the system and useful deductions are provided about the average power consumption per mobile terminal. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> EVALUATION OF ENERGY-EFFICIENT VERTICAL HANDOVER PARAMETERS AND APPROACHES <s> Nowadays, wireless access networks are a large contributor to the CO2 emissions of ICT. Today, ICT is responsible for 4% of the annual energy consumption and this number is expected to grow drastically in the coming years. The power consumption of these wireless access networks will thus become an important issue in the coming years. In this paper, the power consumption of wireless base stations for mobile WiMAX, HSPA and LTE is modelled and compared for a future scenario. For our research, we assume a suburban area and a physical bit rate of 10 Mbps. We compare the wireless technologies for a SISO and three MIMO systems. For each case, we give a ranking of the wireless technologies as a function of their power consumption, range and energy efficiency. Based on these results, we cover a specified area with each technology and determine which technology is the best solution for the specified area. We also compare the power consumption of the wireless access networks with the power consumption of the wired access networks. <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> EVALUATION OF ENERGY-EFFICIENT VERTICAL HANDOVER PARAMETERS AND APPROACHES <s> Recent studies have shown that there are currently more than 1.08 billion smartphones in the world, with around 89% of them used throughout the day. On average each of these users transfers more ... <s> BIB003 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> EVALUATION OF ENERGY-EFFICIENT VERTICAL HANDOVER PARAMETERS AND APPROACHES <s> Energy efficiency in wireless networks is one of the major issues for users as mobile devices rely on their batteries. In this context, Wireless Network Interface Cards (WNICs) have to be taken into consideration carefully as they consume a significant portion of the overall system energy.
In this paper, we aim to reduce the energy consumption of wireless mobile devices by applying specific solutions, such as reducing the overhead of channel scanning through a smart selective channel scanning method during the handover preparation phase, and associating with the Point of Attachment (PoA) that is expected to consume the least amount of energy among all PoAs. The power consumption prediction for a station is made by using the channel scanning results, switching costs, Channel Busy Times (CBTs), the traffic class of the station, the number of stations deployed in each PoA, and the power consumption of each WNIC. Stations performing the proposed scheme can fairly coexist with the other stations in the network. In the proposed scheme, each station makes use of its local information and the information provided by the IEEE 802.21 Media Independent Handover (MIH) Information Server (IS). The performance of the proposed scheme has been investigated by numerical analyses and extensive simulations. The results illustrate that the proposed scheme reduces the energy consumption of mobile stations and provides a longer lifetime under a wide range of contention and signal strength levels. <s> BIB004
Previous works from the literature compare the power consumption of pairs of networks, such as WiFi-3G BIB004 , WiFi-LTE BIB003 BIB001 , 3G-LTE [82] and LTE-WiMAX BIB002 . To the best of our knowledge, there is no single work in the literature that compares the power consumption of all four aforementioned RATs. Although these comparisons are not performed by a single work and the results may vary due to different test-beds and simulation environments, the generally accepted opinion is that a station connected to a WiFi network consumes the least power, provided that the network is not highly loaded and has a good signal strength. Additionally, the works presented in [82] and BIB002 show that LTE and WiMAX interfaces consume similar amounts of power to transfer the same throughput, whereas the 3G interface mainly consumes less power than both of these interfaces. However, it should be noted that many factors may affect the amount of power consumption, such as the received signal strength, RAT interference, bit error rate, channel utilization, number of connected stations, etc. Hence, a station may even save power by switching from WiFi to a 3G, LTE or WiMAX network. In this regard, a comparison is provided in Table 2 to summarize the features and the amount of energy savings of the algorithms presented in Sect. 3. Table 2 shows that a high amount of energy can be saved by utilizing as many parameters and protocols as possible, provided that the additional message exchanges resulting from these parameters and protocols do not harm the ongoing network operations (e.g., through additional delay, power consumption, memory and CPU requirements, etc.). In other words, vertical handover approaches may utilize a large set of local and network-related parameters. Nevertheless, the higher network overhead resulting from additional parameters and protocol support may lead to increases in delay, handover duration and processing power, and finally to more energy consumption. Considering a small set of parameters might improve the energy efficiency, but at the cost of handover accuracy. Thus, a balanced number of parameters needs to be considered to maintain a good trade-off between energy efficiency and handover accuracy. As an example, IEEE 802.21 protocol support enables stations to save a considerable amount of energy, as the protocol broadcasts an up-to-date network coverage map and the list of available PoAs in return for a limited number of message exchanges. In contrast, the energy saving might be low (or even negative) if GPS is used to locate stations, since GPS itself consumes a high amount of energy. Table 2 . A brief comparison of the proposed energy-efficient network/interface selection algorithms.
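To complement the qualitative ranking discussed above, the following sketch illustrates the energy-per-bit reasoning behind it. The power and throughput figures are placeholders chosen only to reproduce the qualitative behavior reported in the cited works, not measurements.

```python
# Hypothetical energy-per-bit comparison across RATs: a lightly loaded WiFi
# network usually wins, but a congested one may lose to 3G/LTE/WiMAX.
def energy_per_bit_nj(power_w: float, throughput_bps: float) -> float:
    return power_w / throughput_bps * 1e9  # nJ/bit

rats = {
    # name: (assumed radio power in W, currently achievable throughput in bit/s)
    "WiFi (light load)": (0.9, 25e6),
    "WiFi (congested)":  (1.1, 1.5e6),
    "3G":                (1.0, 3e6),
    "LTE":               (1.6, 12e6),
    "WiMAX":             (1.7, 12e6),
}
for name, (p, thr) in sorted(rats.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:18s} {energy_per_bit_nj(p, thr):7.1f} nJ/bit")
```

With these placeholder numbers, lightly loaded WiFi is cheapest per bit, LTE and WiMAX land close together, and congested WiFi becomes the most expensive option, which is exactly why a switch away from WiFi can sometimes save power.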
Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> RECOMMENDATIONS ON HOW TO SAVE ENERGY BEFORE, DURING AND AFTER HANDOVER <s> This paper presents a power management scheme that maximizes energy saving in wireless ad hoc networks while still meeting the required quality of service (QoS). We assume that battery-powered devices can be remotely activated by a wake-up signal using a simple circuit based on RF tag technology. In this way, devices that are not currently active may enter a sleep state and power up only when they have pending traffic. Radio devices select different time-out values, so-called sleep patterns, to enter various sleep states depending on their battery status and quality of service. The performance of the proposed policy is derived by simulation for a simple ad hoc network scenario. Results show the achieved tradeoff between power saving and traffic delay. <s> BIB001 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> RECOMMENDATIONS ON HOW TO SAVE ENERGY BEFORE, DURING AND AFTER HANDOVER <s> A composition for use in the treatment for developing seedless fleshy berry of grapes. By treating the flower bunches of a grape tree with a composition containing gibberellin and cyclic 3',5'-adenylic acid in the form of an aqueous solution, it became possible to make seedless fleshy berry from grape trees belonging to varieties other than Delaware, namely belonging to Campbell-Arley, Berry A, Niagara, Kyoho, etc., from which seedless fleshy berry cannot be made by the conventional treatment with gibberellin. <s> BIB002 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> RECOMMENDATIONS ON HOW TO SAVE ENERGY BEFORE, DURING AND AFTER HANDOVER <s> Power-saving is a critical issue for almost all kinds of portable devices. In this paper, we consider the design of power-saving protocols for mobile ad hoc networks (MANETs) that allow mobile hosts to switch to a low-power sleep mode. The MANETs being considered in this paper are characterized by unpredictable mobility, multi-hop communication, and no clock synchronization mechanism. In particular, the last characteristic would complicate the problem since a host has to predict when another host will wake up to receive packets. We propose three power management protocols, namely dominating-awake-interval, periodically-fully-awake-interval, and quorum-based protocols, which are directly applicable to IEEE 802.11-based MANETs. As far as we know, the power management problem for multihop MANETs has not been seriously addressed in the literature. Existing standards, such as IEEE 802.11, HIPERLAN, and Bluetooth, all assume that the network is fully connected or there is a clock synchronization mechanism. Extensive simulation results are presented to verify the effectiveness of the proposed protocols. <s> BIB003 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> RECOMMENDATIONS ON HOW TO SAVE ENERGY BEFORE, DURING AND AFTER HANDOVER <s> A set of novel PHY-MAC mechanisms based on a cross-layer dialogue has been proposed, and their performance has been analyzed.
System efficiency improvement is achieved by means of automatic transmission rate adaptation, trading off bit rate for power, with resulting energy saving features in a generic packet-switched CDMA access network. The rate adaptation mechanism improves spectrum efficiency while keeping packet delay minimized. On the other hand, power-dependent strategies reduce power consumption and intercell interference. Simulation results show that the benefits obtained are very encouraging, so the proposed schemes could be used in future communication systems. <s> BIB004 </s> Energy-Efficient Vertical Handover Parameters, Classification and Solutions over Wireless Heterogeneous Networks: A Comprehensive Survey <s> RECOMMENDATIONS ON HOW TO SAVE ENERGY BEFORE, DURING AND AFTER HANDOVER <s> WiFi-based phones are becoming increasingly popular due to the ubiquitous presence of wireless LANs and the use of unlicensed spectrum. These phones use VoIP techniques over wireless LANs. In addition to the spectral efficiency and security issues, energy consumption is a vital issue in making the usage of these phones widespread. Efforts are underway in improving energy conservation in these phones and thus increasing the duration between recharging the battery. This paper provides a detailed anatomy of the energy consumption by various components of WiFi-based phones. Through a measurement-based study of WiFi-based phones, we have analyzed the energy consumption for various workloads at various components. The impact of scan operations and related issues on the energy consumption of WiFi phones is quantified through actual measurements. Several inferences and derived guidelines are provided for improving the power conservation approaches in WiFi-based phones from this first-of-its-kind experimental study.
In order to perform an energy-efficient vertical handover, rather than considering a full information set, a limited set consisting of the information that provides the best performance-versus-energy-efficiency trade-off must be gathered and transferred to the decision phase. In this context, mobile devices must first seek available networks (network discovery) to detect whether there is a PoA in the vicinity to associate with. In addition to the network discovery, convenient network-related parameters must be advertised to the mobile devices. Local information, such as speed, battery status, resources, service class, historical information, accelerometer and GPS readings, etc., could be collected as well. Finally, all the above-mentioned information, along with the user preferences, needs to be transferred to the decision phase. Consequently, there are five possible stages to save energy before the handover execution (throughout the information gathering and decision phases): (1) network discovery, (2) network-side assisting, (3) mobile-side assisting, (4) user preferences and (5) handover decision. The frequency of information gathering is crucial for an energy-efficient handover (a minimal sketch of an adaptive gathering interval is given at the end of this section). Some approaches initiate the information gathering or the discovery process only in case the network is no longer able to handle the current traffic or, in other words, only when the measured RSSI is below a certain threshold. In this way, as long as the channel allows mobile devices to be connected and to communicate, these devices only perform their regular actions, which means there is no extra processing time or additional energy consumption. At first sight, this procedure seems energy efficient. However, there might be other PoAs in the vicinity that would let the device consume less power if it associated with one of them. The device does not perform a discovery process, since the measured RSSI is not below the threshold, and thus it will keep consuming more power as long as it stays associated with its old PoA. Therefore, this procedure may not always be energy efficient. In contrast to the first approach, some approaches continuously or periodically seek available networks and collect related information to let mobile devices perform fast and accurate handovers. This is not an energy-efficient approach either, as continuous or periodic channel scanning may cause mobile devices to consume additional energy and interrupt their regular operation, and hence the overall throughput of the device decreases. In this regard, a dynamic algorithm that increases or decreases the frequency of information gathering can provide optimal energy efficiency. In this context, the algorithm must increase the frequency in case the device is moving or the channel condition changes rapidly. On the contrary, the algorithm must decrease the frequency in case the device is stationary and the channel condition is fixed or changes slowly. It is possible for mobile devices to obtain a large amount of network-related information through network-side assisting, and it is highly possible for them to make better predictions using this information. However, in order to collect this information, mobile devices may need to transmit additional frames (requests). These additional frames may also incur significant time (one round-trip time per piece of information) and processing overhead for mobile devices.
Consequently, the device may be too late to handover while waiting for network-related information, or it may consume an important portion of unnecessary power. Therefore, gathering only the related and convenient information lets mobile devices achieve a fast and energy-efficient handover. Making use of mobile-side assisting, mobile devices can process their local information and transfer it to the decision stage. Since these devices process only local information, there are no message exchanges between devices and the network in mobile-side assisting. Gathering this information usually takes a very short time and consumes a small amount of power (unless the information is obtained through additional hardware support, such as GPS, accelerometer, etc.) compared to the time and power consumption of network-side assisting. Therefore, for an optimal energy-efficient handover opportunity, the full set of local information supported by the mobile device can be processed and transferred to the decision stage. If maximization of the communication time is an important metric for users, an important portion of energy consumption can also be reduced through the definition of user preferences. All the gathered information is transferred to the decision stage along with the information on user preferences. Making use of the user preferences, decision algorithms increase the weight of the energy priority, and hence association with an energy-efficient PoA would be performed for the device in a possible handover scenario. The various network interface selection methods (fuzzy-logic, context-aware, etc.) used in the decision stage may also result in different amounts of power consumption for mobile devices, although, as previously seen, the total energy consumed in the decision stage is not as high as that in the information-gathering stage. As mentioned earlier, the handover execution phase performs the handover (mainly hard or soft handover) itself. In both hard and soft handover, executions are performed in a small amount of time, with only the required message exchanges and processing overheads. Therefore, these two handover execution methods consume similarly small amounts of power, which is even negligible compared to the power consumed in the information-gathering phase. Consequently, by using the aforementioned stages efficiently, maximization of the communication time with minimized energy consumption can be achieved not only before the handover (as only convenient parameters are collected, keeping energy efficiency in mind) but also after the handover (associating with the most energy-efficient network means the device will consume the least amount of energy for wireless access after the handover, until the channel condition changes and the device decides to hand over again). Last but not least, while one of the wireless radio interfaces of a mobile device is active, some amount of energy consumption can also be reduced by utilizing transmission power control (TPC) BIB003 , frame size adaptation BIB005 , and data compression and aggregation methods . Modifying TPC can be achieved by using directional antennas BIB002 , location- or RSSI-based low-power transmission tuning BIB001 , or bit-rate-per-frame adaptation in CDMA-based devices BIB004 .
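As a concrete illustration of the adaptive information-gathering interval recommended above, the following minimal sketch shrinks the scanning interval under mobility or channel change and grows it when conditions are calm. It is our own illustration: the thresholds, the bounds and the multiplicative update factors are assumptions, not values taken from the surveyed works.

```python
# Minimal sketch of a dynamic scanning-interval policy: scan rarely while the
# terminal is static and the channel is stable, scan often otherwise.
MIN_INTERVAL_S, MAX_INTERVAL_S = 2.0, 120.0

def next_scan_interval(current_s: float, speed_mps: float,
                       rssi_delta_dbm: float) -> float:
    """Shrink the interval under mobility / channel change, grow it when calm."""
    moving = speed_mps > 1.0              # rough walking-speed threshold (assumed)
    channel_changing = abs(rssi_delta_dbm) > 5.0  # RSSI swing threshold (assumed)
    if moving or channel_changing:
        current_s /= 2.0                  # react quickly: scan twice as often
    else:
        current_s *= 1.5                  # back off gently while conditions are calm
    return max(MIN_INTERVAL_S, min(MAX_INTERVAL_S, current_s))

# Example: a calm period followed by the user starting to walk.
interval = 30.0
for speed, d_rssi in [(0.0, 1.0), (0.0, 0.5), (1.5, 8.0), (1.4, 6.0)]:
    interval = next_scan_interval(interval, speed, d_rssi)
    print(f"next scan in {interval:.1f} s")
```

The multiplicative-decrease/gentle-increase shape keeps scanning overhead negligible in stable conditions while still reacting within a couple of intervals once the device starts moving.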
Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> Within the last 20 years at least 200 supercarriers have been lost due to severe weather conditions. In many cases the cause of the accidents is believed to be 'rogue waves', which are individual waves of exceptional wave height or abnormal shape. In situ measurements of extreme waves are scarce and most observations are reported by ship masters after the encounter. In this paper a global set of synthetic aperture radar (SAR) images is used to detect extreme ocean wave events. The data were acquired aboard the European remote sensing satellite ERS-2 every 200 km along the track. As the data are not available as a standard product of the European Space Agency (ESA), the radar raw data were focused to complex SAR images using the processor BSAR developed by the German Aerospace Center. The entire SAR data set covers 27 days, representing 34000 SAR imagettes with a size of 5 km × 10 km. Complex SAR data contain information on ocean wave height, propagation direction and grouping, as well as on ocean surface winds. Combining all of this information allows one to extract and locate extreme waves from complex SAR images on a global basis. Special algorithms have been developed to retrieve the following parameters from the SAR data: wind speed and direction, significant wave height, wave direction, wave groups and their individual heights. The satellite ENVISAT, launched in March 2002, acquires SAR data with an even higher sampling rate (every 100 km). It is expected that a long-term analysis of ERS and ENVISAT data will give new insight into the physical processes responsible for rogue wave generation. Furthermore, the identification of hot spots will contribute to the optimization of ship routes. <s> BIB001 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> We present an introduction to positon theory, almost never covered in the Russian scientific literature. Positons are long-range analogues of solitons and are slowly decreasing, oscillating solutions of nonlinear integrable equations of the KdV type. Positon and soliton–positon solutions of the KdV equation, first constructed and analyzed about a decade ago, were then constructed for several other models: for the mKdV equation, the Toda chain, the NS equation, as well as the sinh-Gordon equation and its lattice analogue. Under a proper choice of the scattering data, the one-positon and multipositon potentials have a remarkable property: the corresponding reflection coefficient is zero, but the transmission coefficient is unity (as is known, the latter does not hold for the standard short-range reflectionless potentials). <s> BIB002 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> We present a statistical analysis of some of the largest waves occurring during 793 h of surface elevation measurements collected during 14 severe storms in the North Sea. These data contain 104 freak waves. It is found that the probability of occurrence of freak waves is only weakly dependent on the significant wave height, significant wave steepness and spectral bandwidth.
The probability does show a slightly stronger dependency on the skew and kurtosis of the surface elevation data, but on removing the contribution to these measures from the presence of the freak waves themselves, this dependency largely disappears. Distributions of extreme waves are modelled by fitting Generalised Pareto distributions, and extreme value distributions and return periods are given for freak waves in terms of the empirical fitted parameters. It is shown by comparison with these fits that both the Rayleigh distribution and the fit of Nerzic and Prevosto severely under-predict the probability of occurrence of extreme waves. For the most extreme freak wave in our data, the Rayleigh distribution over-predicts the return period by about 300 times when compared to the fitted model. <s> BIB003 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> The concept of rogue waves in an optical system is investigated by utilizing a new real-time detection technique to study a system that exposes extremely steep, large optical waves as rare outcomes from injection of a population of almost-identical optical pulses. Analysis of these results finds that the optical rogue waves arise when random noise perturbs the initially smooth pulses with a certain frequency shift and within a well-defined time window. <s> BIB004 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> Safety of shipping is an ever-growing concern. In a summary, Faulkner investigated the causes of shipping casualties (2002, "Shipping Safety: A Matter of Concern," Ingenia, The Royal Academy of Engineering, Marine Matters, pp. 13-20) and concluded that the numbers of unexplained accidents are far too high in comparison to other means of transport. From various sources, including insurers' data, over 30% of the casualties are due to bad weather (a fact that ships should be able to cope with) and a further 25% remain completely unexplained. The European project MaxWave aimed at investigating ship and platform accidents due to severe weather conditions using different radars and in situ sensors and at suggesting improved design and new safety measures. Heavy sea states and severe weather conditions have caused the loss of more than 200 large cargo vessels within the 20 years between 1981 and 2000 (Table I in Faulkner). In many cases, single "rogue waves" of abnormal height as well as groups of extreme waves have been reported by crew members of such ships. The European Project MaxWave deals with both theoretical aspects of extreme waves and new techniques to observe these waves using different remote sensing techniques. The final goal is to improve the understanding of the physical processes responsible for the generation of extreme waves and to identify geophysical conditions in which such waves are most likely to occur. Two-dimensional sea surface elevation fields are derived from marine radar and spaceborne synthetic aperture radar data. Individual wave parameters, such as maximum-to-significant wave height ratios and wave steepness, are derived from the sea surface topography. Several ship and offshore platform accidents are analyzed and the impact on ship and offshore design is discussed. Tank experiments are performed to test the impact of designed extreme waves on ships and offshore structures.
This article gives an overview of the different work packages on observation of rogue waves, explanations, and consequences for design. <s> BIB005 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> Oceanic rogue waves are surface gravity waves whose wave heights are much larger than expected for the sea state. The common operational definition requires them to be at least twice as large as the significant wave height. In most circumstances, the properties of rogue waves and their probability of occurrence appear to be consistent with second-order random-wave theory. There are exceptions, although it is unclear whether these represent measurement errors or statistical flukes, or are caused by physical mechanisms not covered by the model. A clear deviation from second-order theory occurs in numerical simulations and wave-tank experiments, in which a higher frequency of occurrence of rogue waves is found in long-crested waves owing to a nonlinear instability. <s> BIB006 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> The appearance of rogue waves is well known in oceanography, optics, and cold matter systems. Here we show a possibility for the existence of atmospheric rogue waves. <s> BIB007 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> We have numerically calculated chaotic waves of the focusing nonlinear Schrödinger equation (NLSE), starting with a plane wave modulated by relatively weak random waves. We show that the peaks with the highest amplitude of the resulting wave composition (rogue waves) can be described in terms of exact solutions of the NLSE in the form of the collision of Akhmediev breathers. <s> BIB008 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> We present a method for finding the hierarchy of rational solutions of the self-focusing nonlinear Schrödinger equation and present explicit forms for these solutions from first to fourth order. We also explain their relation to the highest amplitude part of a field that starts with a plane wave perturbed by random small amplitude radiation waves. Our work can elucidate the appearance of rogue waves in the deep ocean and can be applied to the observation of rogue light pulse waves in optical fibers. <s> BIB009 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> The Peregrine soliton — a wave localized in both space and time — is now observed experimentally for the first time by using femtosecond pulses in an optical fibre. The results give some insight into freak waves that can appear out of nowhere before simply disappearing. <s> BIB010 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> It is shown that the electrostatic surface plasma rogue waves can be excited and propagate along a plasma-vacuum interface due to the nonlinear coupling between high-frequency surface plasmons and low-frequency ion oscillations. The nonlinear pulse propagation condition and its behavior are discussed. The nonlinear structures may be useful for controlling and maximizing plasmonic energy along the plasma surface.
<s> BIB011 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> N.A. is grateful to the Alexander von Humboldt Foundation as this work was prepared while he was visiting Germany. A.A. acknowledges the support of the Australian Research Council (Discovery Project DP0985394). J.M.S.C. acknowledges support from the Spanish Ministerio de Ciencia e Innovacion under contracts FIS2006-03376 and FIS2009-09895. J.M.D. thanks the Institut Universitaire de France and the French Agence Nationale de la Recherche projects MANUREVA ANR-08-SYSC-019 and IMFINI ANR-09-BLAN-0065 for support. <s> BIB012 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> Climate and climate variability; marine weather phenomena; models for the marine environment; how to determine long-term changes in marine climate; past and future changes in wind, wave, and storm surge climates. <s> BIB013 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> We present, analytically, self-similar rogue wave solutions (rational solutions) of the inhomogeneous nonlinear Schrödinger equation (NLSE) via a similarity transformation connected with the standard NLSE. Then we discuss the propagation behaviors of controllable rogue waves under dispersion and nonlinearity management. In an exponentially dispersion-decreasing fiber, the postponement, annihilation and sustainment of self-similar rogue waves are modulated by the exponential parameter σ. Finally, we investigate the nonlinear tunneling effect for self-similar rogue waves. Results show that rogue waves can tunnel through the nonlinear barrier or well with increasing, unchanged or decreasing amplitudes via the modulation of the ratio of the amplitudes of rogue waves to the barrier or well height. <s> BIB014 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> Being considered as a prototype for the description of oceanic rogue waves, the Peregrine breather solution of the nonlinear Schrödinger equation has been recently observed and intensely investigated experimentally, in particular within the context of water waves. Here, we report the experimental results showing the evolution of the Peregrine solution in the presence of wind forcing in the direction of wave propagation. The results show the persistence of the breather evolution dynamics even in the presence of strong wind and the chaotic wave field generated by it. Furthermore, we have shown that the characteristic spectrum of the Peregrine breather persists even at the highest values of the generated wind velocities, thus making it a viable characteristic for the prediction of rogue waves. <s> BIB015 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> Introduction <s> We consider the paradigmatic Brusselator model for the study of dissipative structures in far-from-equilibrium systems. In two dimensions, we show the occurrence of a self-replication phenomenon leading to the fragmentation of a single localized spot into four daughter spots. This instability affects the new spots and leads to splitting behavior until the system reaches a hexagonal stationary pattern. This phenomenon occurs in the absence of delay feedback.
In addition, we incorporate a time-delayed feedback loop in the Brusselator model. In one dimension, we show that the delay feedback induces extreme events in a chemical reaction diffusion system. We characterize their formation by computing the probability distribution of the pulse height. The long-tailed statistical distribution, which is often considered as a signature of the presence of rogue waves, appears for sufficiently strong feedback intensity. The generality of our analysis suggests that the feedback-induced instability leading to the spontaneous formation of rogue waves in a controllable way is a universal phenomenon. <s> BIB016
Anomalous waves, or "rogue waves", represent a rare phenomenon at sea, which occurs on multiple occasions yearly BIB001 BIB005 and causes millions of dollars of loss of cargo and loss of lives annually . Rogue waves are abnormally elevated waves, with a height 2-3× that of the average wave and unusually steep shapes BIB003 . Rogue waves were recorded for the first time in 1995 during a winter storm in the North Sea, when the "New Year's Wave" hit the Draupner platform with a wave height of 27 m, 2.25× the average wave height . The laser installation on the deck, which regularly records the elevation of the platform over the sea bed, registered the solitary giant wave with its elevation 15.4 m above and 11.6 m below the zero-level . The shape of the wave was symmetrical (Figure 1), with a Gaussian bell shape and a particularly narrow wavelength. This shape and behavior of anomalous waves are conserved across several observations made in the last 25 years, including the rogue wave that hit the North Alwyn platform in November 1997 BIB006 , the Gorm platform in 1984 BIB006 and from Storm 172 on the North Alwyn field 100 miles east of Shetland BIB003 . The latter was particularly unusual, with a height 3.19× the average (Figure 1). Rogue waves are known to have sunk over 20 supercarriers since 1970 BIB006 and carry a force 16-20× (100 metric tons/m²) that of a 12-m wave; they can easily break ship structures, which are designed to withstand far lower impact forces (6 metric tons/m²) . Rogue waves are an imminent threat to shipping and naval activities and increase in prevalence with climate-change weather patterns BIB013 . In this context, the insurance sector has searched for new models for predicting rogue waves and for fortifying naval structures , as off-shore installations, shipping and cruise-ships alike have been increasingly exposed to rogue waves in the last few decades BIB006 . This development has also sparked the project "Max Wave" BIB005 , which has contributed new models and algorithms for predicting rogue waves by the use of satellite observation data.
Rogue waves also occur in optical systems BIB004 , in the atmosphere BIB007 , in plasma BIB011 , as well as in molecular systems during chemical reactions BIB016 . Earlier mathematical models and derived algorithms that were used to predict wave patterns were originally developed by using the linear Gaussian random model, and rogue phenomena at sea were largely disregarded as superstition. The linear Gaussian model is essentially a superposition of elementary waves and predicts the occurrence of a rogue event with a very low probability. This low probability is, however, inconsistent with the laser readings made in the last two decades at off-shore installations. Non-linear models, which show a better agreement with the frequency of rogue events at sea BIB002 , are therefore gradually replacing the Gaussian model used in the insurance industry. Non-linear models have been studied by several groups and include the modified non-linear Schrödinger equation (NLSE) BIB006 , the Peregrine soliton model BIB010 , the Levi-Civita and Nekrasov models , the Davey-Stewartson model , the fourth-order partial differential equation of Kadomtsev-Petviashvili, the one-dimensional Korteweg-de Vries equation for shallow water surfaces, the second-order Zakharov partial differential equation and the fully-nonlinear potential equations. Other systems have recently been developed and are herein reviewed in detail given their relevance to rogue wave ocean phenomena, including the inhomogeneous non-linear Schrödinger equation BIB014 , the Akhmediev model BIB008 BIB009 BIB012 BIB015 and the recent models developed by Cousins and Sapsis .
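For reference, in one standard dimensionless normalization the focusing NLSE and its Peregrine soliton (the rational solution most often invoked as a rogue-wave prototype BIB010 ) read:

```latex
% Focusing NLSE in a common dimensionless normalization:
\begin{equation}
  i\,\frac{\partial \psi}{\partial t}
  + \frac{1}{2}\,\frac{\partial^2 \psi}{\partial x^2}
  + |\psi|^2 \psi = 0 ,
\end{equation}
% and its Peregrine soliton on a unit plane-wave background:
\begin{equation}
  \psi_P(x,t) = \left[ 1 - \frac{4\,(1 + 2 i t)}{1 + 4 x^2 + 4 t^2} \right] e^{\,i t} .
\end{equation}
```

The Peregrine solution is localized in both space and time and its amplitude reaches exactly three times the unit background at its peak, |ψ_P(0,0)| = 3, consistent with the 2-3× elevation reported for observed rogue waves.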
Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> We study the stability of steady nonlinear waves on the surface of an infinitely deep fluid [1, 2]. In section 1, the equations of hydrodynamics for an ideal fluid with a free surface are transformed to canonical variables: the shape of the surface η(r, t) and the hydrodynamic potential ψ(r, t) at the surface are expressed in terms of these variables. By introducing canonical variables, we can consider the problem of the stability of surface waves as part of the more general problem of nonlinear waves in media with dispersion [3,4]. The results of the rest of the paper are also easily applicable to the general case. <s> BIB001 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> We consider a nonlinear, passive optical system contained in an appropriate cavity, and driven by a coherent, plane wave, stationary beam. Under suitable conditions, diffraction gives rise to an instability which leads to the emergence of a stationary spatial dissipative structure in the transverse profile of the transmitted beam. <s> BIB002 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Ultrashort pulse propagation in high gain optical fiber amplifiers with normal dispersion is studied by self-similarity analysis of the nonlinear Schrödinger equation with gain. An exact asymptotic solution is found, corresponding to a linearly chirped parabolic pulse which propagates self-similarly subject to simple scaling rules. The solution has been confirmed by numerical simulations and experiments studying propagation in a Yb-doped fiber amplifier. Additional experiments show that the pulses remain parabolic after propagation through standard single mode fiber with normal dispersion. The establishment of self-similarity is a key element in the understanding of many widely differing nonlinear physical phenomena, including the propagation of thermal waves in nuclear explosions, the formation of fractures in elastic solids, and the scaling properties of turbulent flow [1]. In particular, the presence of self-similarity can be exploited to obtain asymptotic solutions to partial differential equations describing a physical system by using the mathematical technique of symmetry reduction to reduce the number of degrees of freedom [2]. Although the powerful mathematical techniques associated with the analysis of self-similar phenomena have been extensively applied in certain areas of physics such as hydrodynamics, their application in optics has not been widespread. However, some important results have been obtained, with previous theoretical studies considering asymptotic self-similar behavior in radial pattern formation [3], stimulated Raman scattering [4], and in the nonlinear propagation of ultrashort pulses with parabolic intensity profiles in optical fibers with normal dispersion [5]. This latter case has also been studied using numerical simulations, with results suggesting that parabolic pulses are generated in the amplification of ultrashort pulses in nonlinear optical fiber amplifiers with normal dispersion [6].
To date, however, there has been no experimental demonstration of self-similar parabolic pulse propagation either in optical fibers or in nonlinear optical amplifiers. In this Letter we present results of calculations using self-similarity methods [1– 4] to analyze pulse propagation in an optical fiber amplifier described by the nonlinear Schrodinger equation (NLSE) with gain and normal dispersion. These calculations show that parabolic pulses are, in fact, exact asymptotic solutions of the NLSE with gain, and propagate in the amplifier self-similarly subject to exponential scaling of amplitude and temporal width. In addition, the pulses possess a strictly linear chirp. Our theoretical results are confirmed both by numerical simulations and by experiments which have taken advantage of the current availability of high gain optical fiber amplifiers and of recent developments in methods of ultrashort pulse characterization. In particular, the use of the pulse characterization technique of frequency-resolved optical gating (FROG) [7] has allowed us to measure the intensity and chirp of parabolic pulses generated in a Yb-doped fiber amplifier, and to compare these experimental results directly with theoretical predictions. Additional experiments have demonstrated that the pulses remain parabolic in profile during propagation in normally dispersive fiber, confirming the self-similar nature of propagation in this regime. <s> BIB003 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Self-similarity techniques are used to study pulse propagation in a normal-dispersion optical fiber amplifier with an arbitrary longitudinal gain profile. Analysis of the nonlinear Schrödinger equation that describes such an amplifier leads to an exact solution in the high-power limit that corresponds to a linearly chirped parabolic pulse. The self-similar scaling of the propagating pulse in the amplifier is found to be determined by the functional form of the gain profile, and the solution is confirmed by numerical simulations. The implications for achieving chirp-free pulses after compression of the amplifier output are discussed. <s> BIB004 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Abstract Rogue waves are rare “giant”, “freak”, “monster” or “steep wave” events in nonlinear deep water gravity waves which occasionally rise up to surprising heights above the background wave field. Holes are deep troughs which occur before and/or after the largest rogue crests. The dynamical behavior of these giant waves is here addressed as solutions of the nonlinear Schrodinger equation in both 1+1 and 2+1 dimensions. We discuss analytical results for 1+1 dimensions and demonstrate numerically, for certain sets of initial conditions, the ubiquitous occurrence of rogue waves and holes in 2+1 spatial dimensions. A typical wave field evidently consists of a background of stable wave modes punctuated by the intermittent upthrusting of unstable rogue waves . 
<s> BIB005 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> A broad class of exact self-similar solutions to the nonlinear Schrödinger equation (NLSE) with distributed dispersion, nonlinearity, and gain or loss has been found. Appropriate solitary wave solutions applying to propagation in optical fibers and optical fiber amplifiers with these distributed parameters have also been studied. These solutions exist for physically realistic dispersion and nonlinearity profiles in a fiber with anomalous group velocity dispersion. They correspond either to compressing or spreading solitary pulses which maintain a linear chirp or to chirped oscillatory solutions. The stability of these solutions has been confirmed by numerical simulations of the NLSE. <s> BIB006 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> A broad class of exact self-similar solutions to the nonlinear Schrödinger equation (NLSE) with distributed dispersion, nonlinearity, and gain or loss has been found describing both periodic and solitary waves. Appropriate solitary wave solutions applying to propagation in optical fibers and optical fiber amplifiers with these distributed parameters have also been studied in detail. These solutions exist for physically realistic dispersion and nonlinearity profiles. They correspond either to compressing or spreading solitary pulses which maintain a linear chirp or to chirped oscillatory solutions. The stability of these solutions has been confirmed by numerical simulations of the NLSE with perturbed initial conditions. These self-similar propagation regimes are expected to find practical application in both optical fiber amplifier systems and in fiber compressors. <s> BIB007 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Novel soliton solutions for the nonautonomous nonlinear Schrödinger equation models with linear and harmonic oscillator potentials substantially extend the concept of classical solitons and generalize it to the plethora of nonautonomous solitons that interact elastically and generally move with varying amplitudes, speeds, and spectra adapted both to the external potentials and to the dispersion and nonlinearity variations. The nonautonomous soliton concept can be applied to different physical systems, from hydrodynamics and plasma physics to nonlinear optics and matter waves, and offer many opportunities for further scientific studies. <s> BIB008 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> We present a method for finding the hierarchy of rational solutions of the self-focusing nonlinear Schrodinger equation and present explicit forms for these solutions from first to fourth order. We also explain their relation to the highest amplitude part of a field that starts with a plane wave perturbed by random small amplitude radiation waves. Our work can elucidate the appearance of rogue waves in the deep ocean and can be applied to the observation of rogue light pulse waves in optical fibers. 
<s> BIB009 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> The Peregrine soliton — a wave localized in both space and time — is now observed experimentally for the first time by using femtosecond pulses in an optical fibre. The results give some insight into freak waves that can appear out of nowhere before simply disappearing. <s> BIB010 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> By means of the similarity transformation connecting with the solvable stationary cubic-quintic nonlinear Schrodinger equation (CQNLSE), we construct explicit chirped and chirp-free self-similar cnoidal wave and solitary wave solutions of the generalized CQNLSE with spatially inhomogeneous group velocity dispersion (GVD) and amplification or attenuation. As an example, we investigate their propagation dynamics in a soliton control system. <s> BIB011 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> We discover and analytically describe self-similar pulses existing in homogeneously broadened amplifying linear media in a vicinity of an optical resonance. We demonstrate numerically that the discovered pulses serve as universal self-similar asymptotics of any near-resonant short pulses with sharp leading edges, propagating in coherent linear amplifiers. We show that broadening of any low-intensity seed pulse in the amplifier has a diffusive nature: Asymptotically the pulse width growth is governed by the simple diffusion law. We also compare the energy gain factors of short and long self-similar pulses supported by such media. <s> BIB012 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> We investigate exact nonlinear matter wave functions with odd and even parities in the framework of quasi-two-dimensional Bose-Einstein condensates (BECs) with spatially modulated cubic-quintic nonlinearities and harmonic potential. The existence condition for these exact solutions requires that the minimum energy eigenvalue of the corresponding linear Schrodinger equation with harmonic potential is the cutoff value of the chemical potential A. The competition between two-body and three-body interactions influences the energy of the localized state. For attractive two-body and three-body interactions, the larger the matter wave order number n, the larger the energy of the corresponding localized state. A linear stability analysis and direct simulations with initial white noise demonstrate that, for the same state (fixed n), increasing the number of atoms can add stability. A quasi-stable ground-state matter wave is also found for repulsive two-body and three-body interactions. We also discuss the experimental realization of these results in future experiments. These results are of particular significance to matter wave management in higher-dimensional BECs. (C) 2011 Elsevier Inc. All rights reserved. 
<s> BIB013 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Abstract We present, analytically, self-similar rogue wave solutions (rational solutions) of the inhomogeneous nonlinear Schrodinger equation (NLSE) via a similarity transformation connected with the standard NLSE. Then we discuss the propagation behaviors of controllable rogue waves under dispersion and nonlinearity management. In an exponentially dispersion-decreasing fiber, the postponement, annihilation and sustainment of self-similar rogue waves are modulated by the exponential parameter σ . Finally, we investigate the nonlinear tunneling effect for self-similar rogue waves. Results show that rogue waves can tunnel through the nonlinear barrier or well with increasing, unchanged or decreasing amplitudes via the modulation of the ratio of the amplitudes of rogue waves to the barrier or well height. <s> BIB014 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Optical bistability (OB) and optical multistability (OM) behavior in molecular magnets is theoretically studied. It is demonstrated that the OB of the system can be controlled via adjusting the magnetic field intensity. In addition, it is shown that the frequency detuning of probe and coupling fields, as well as the cooperation parameter, has remarkable effects on the OB behavior of the system. Also, we find that OB can be converted to OM through the magnitude of control-field detuning. Our results can be used as a guideline for optimizing and controlling the switching process in the crystal of molecular magnets. <s> BIB015 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> The pioneering paper 'Optical rogue waves' by Solli et al (2007 Nature 450 1054) started the new subfield in optics. This work launched a great deal of activity on this novel subject. As a result, the initial concept has expanded and has been enriched by new ideas. Various approaches have been suggested since then. A fresh look at the older results and new discoveries has been undertaken, stimulated by the concept of 'optical rogue waves'. Presently, there may not by a unique view on how this new scientific term should be used and developed. There is nothing surprising when the opinion of the experts diverge in any new field of research. After all, rogue waves may appear for a multiplicity of reasons and not necessarily only in optical fibers and not only in the process of supercontinuum generation. We know by now that rogue waves may be generated by lasers, appear in wide aperture cavities, in plasmas and in a variety of other optical systems. Theorists, in turn, have suggested many other situations when rogue waves may be observed. The strict definition of a rogue wave is still an open question. For example, it has been suggested that it is defined as 'an optical pulse whose amplitude or intensity is much higher than that of the surrounding pulses'. This definition (as suggested by a peer reviewer) is clear at the intuitive level and can be easily extended to the case of spatial beams although additional clarifications are still needed. 
An extended definition has been presented earlier by N Akhmediev and E Pelinovsky (2010 Eur. Phys. J. Spec. Top. 185 1-4). Discussions along these lines are always useful and all new approaches stimulate research and encourage discoveries of new phenomena. Despite the potentially existing disagreements, the scientific terms 'optical rogue waves' and 'extreme events' do exist. Therefore coordination of our efforts in either unifying the concept or in introducing alternative definitions must be continued. From this point of view, a number of the scientists who work in this area of research have come together to present their research in a single review article that will greatly benefit all interested parties of this research direction. Whether the authors of this 'roadmap' have similar views or different from the original concept, the potential reader of the review will enrich their knowledge by encountering most of the existing views on the subject. Previously, a special issue on optical rogue waves (2013 J. Opt. 15 060201) was successful in achieving this goal but over two years have passed and more material has been published in this quickly emerging subject. Thus, it is time for a roadmap that may stimulate and encourage further research. <s> BIB016 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> We demonstrate a way to generate a two-dimensional rogue waves in two types of broad area nonlinear optical systems subject to time-delayed feedback: in the generic Lugiato-Lefever model and in model of a broad-area surface-emitting laser with saturable absorber. The delayed feedback is found to induce a spontaneous formation of rogue waves. In the absence of delayed feedback, spatial pulses are stationary. The rogue waves are exited and controlled by the delay feedback. We characterize their formation by computing the probability distribution of the pulse height. The long-tailed statistical contribution which is often considered as a signature of the presence of rogue waves appears for sufficiently strong feedback. The generality of our analysis suggests that the feedback induced instability leading to the spontaneous formation of two-dimensional rogue waves is an universal phenomenon. <s> BIB017 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Non-Linear Schrödinger Equation in the Prediction of Rogue Waves <s> Driven nonlinear optical cavities can exhibit complex spatiotemporal dynamics. We consider the paradigmatic Lugiato-Lefever model describing driven nonlinear optical resonator. This model is one of the most-studied nonlinear equations in optics. It describes a large spectrum of nonlinear phenomena from bistability, to periodic patterns, localized structures, self-pulsating localized structures and to a complex spatiotemporal behavior. The model is considered also as prototype model to describe several optical nonlinear devices such as Kerr media, liquid crystals, left handed materials, nonlinear fiber cavity, and frequency comb generation. We focus our analysis on a spatiotemporal chaotic dynamics in one-dimension. We identify a route to spatiotemporal chaos through an extended quasiperiodicity. We have estimated the Kaplan-Yorke dimension that provides a measure of the strange attractor complexity. 
Likewise, we show that the Lugiato-Leferver equation supports rogues waves in two-dimensional settings. We characterize rogue-wave formation by computing the probability distribution of the pulse height. <s> BIB018
Rogue waves occur in oceans as well as in optical systems BIB010 and in other wave systems (see above). In fiber optic systems, rogue waves are normally entirely one-dimensional; however, two-dimensional rogue waves have recently been documented, in the form of two-dimensional dissipative rogue waves BIB017. These recently discovered optical rogue waves occur when a delayed feedback is generated in the transverse plane of the cavity, forming an overlap of counter-directional fiber optic signals, which leads to a rogue amplitude BIB017. These two-dimensional signals in optical systems are described by their own form of PDE, the Lugiato-Lefever equation BIB002, which allows two-dimensional rogue wave solutions to be modeled without collapse dynamics. This model is also used for describing a large spectrum of nonlinear phenomena in optical systems, such as bistability, localized structures, self-pulsating localized structures and complex spatiotemporal behavior through an extended quasi-periodicity BIB018. Rogue waves formed in fiber optic systems have also recently been considered a new field of research in optics, given their anharmonic and nonlinear properties, with potential future applications in optical technologies BIB016.

In particular, a class of rogue waves with potential for application in optical technologies are the self-similar pulses BIB011 BIB012 BIB006 BIB007. Self-similar pulses are wave amplitudes measured in fiber amplifiers BIB003, which experience an optical gain together with a Kerr nonlinearity BIB015. During the induction of the self-similar impulse in a solid, a fluid or any other wave-carrying medium, the shape of the resulting rogue wave no longer depends on the shape or duration of the seed pulses, but only on the seed pulse energy (chirping). This creates a large effect on the amplitude, which is largely independent of the initial conditions of the wave pattern. This event, the formation of a rogue component in the wave train, has also been observed in ocean wave systems and has led various groups to develop prediction methods using variations of the non-linear Schrödinger equation (NLSE) BIB014 BIB011 BIB003 BIB004 BIB005. One group in particular developed the variable coefficient inhomogeneous nonlinear Schrödinger equation (vci-NLSE) for optical signals BIB014, which derives from the Zakharov equation BIB001 and can be written as:

iψ_x + (β(x)/2)ψ_tt + χ(x)|ψ|²ψ + α(x)t²ψ = iγ(x)ψ,   (1)

Here, ψ(t, x) is the complex function for the electrical (wave) field, and x and t are respectively the propagation distance function and the retarded time function. The parameter α(x) defines the normalized loss rate, and the function α(x)t² accounts for the chirping effects (which indicates that the initial chirping parameter is the square of the normalized growth rate). The parameter β(x) defines the group-velocity dispersion (i.e., for an entire wave train), while χ(x) defines the non-linearity parameters, and γ(x) defines the loss or gain effects of the wave signal. This equation is adaptable both for oceanic waves and for optical non-linear wave guides.
Equation (1) is essentially the same as the generalized Gross-Pitaevskii equation with harmonic oscillator potentials in Bose-Einstein condensates BIB008 and can be solved by applying the similarity transformation BIB013, replacing ψ(t, x) in Equation (1) with:

ψ(t, x) = ρ(x) e^{iφ(t,x)} Ψ(T(t, x), X(x)),   (2)

where ρ(x) is the amplitude, T and X represent the differential functions describing the original propagation distance and the similarity variable, and φ(t, x) is the linear variable function of the exponential term, all of which must be chosen with care to avoid singularity of the system ψ(t, x) BIB014. T and X are fixed by the transformation and integrability conditions of BIB014 (Equations (3) and (4)), and hence, the similarity transformation gives:

iΨ_X + (1/2)Ψ_TT + |Ψ|²Ψ = 0,   (5)

which is the standard non-linear Schrödinger equation. The transformation and integrability conditions derived by BIB014 show that the factors of the wave system, such as effective wave propagation distance, central position, amplitude, and the width and phase of the pulse, are ultimately dependent on the group-velocity dispersion and on the non-linearity parameters of the system (α, β, γ, χ). The "self-similar" solution found in the process of transforming the variable coefficient inhomogeneous nonlinear Schrödinger equation into the standard nonlinear Schrödinger equation can ultimately be controlled under dispersion and non-linearity management BIB014. Once transformed from the iNLSE, the solutions to the NLSE are derived by the derivation of polynomial conjugates to the root exponential function. This process is reviewed in detail here from the studies by BIB009.
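As a quick consistency check of (5), the unit-amplitude plane wave Ψ = e^{iX}, the background on which the rogue-wave solutions of the next section grow, satisfies the standard NLSE identically. The following SymPy sketch is our own illustration, not code from the cited works; it verifies this symbolically:

```python
import sympy as sp

X, T = sp.symbols('X T', real=True)

# Unit-amplitude plane wave: the background ("seed") of the rogue-wave hierarchy
Psi = sp.exp(sp.I * X)

# Residual of the standard focusing NLSE (5): i*Psi_X + (1/2)*Psi_TT + |Psi|^2 * Psi
residual = (sp.I * sp.diff(Psi, X)
            + sp.Rational(1, 2) * sp.diff(Psi, T, 2)
            + Psi * sp.conjugate(Psi) * Psi)

print(sp.simplify(residual))  # prints 0: e^{iX} solves (5)
```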
The Solutions to the NLSE
The NLSE has been solved by various groups, including BIB007 BIB010 BIB002. Following the recent work of BIB008 BIB007 in particular, the steps for deriving exact solutions to the NLSE are defined by identifying rational solutions BIB008 for the homogeneous nonlinear system in Equation (5) by using the Darboux transformation [43]. This method is often used to derive rational solutions for non-linear systems and is adaptable to specific optical rogue waves, as well as ocean rogue waves, when represented by the NLSE. The main definition of a rogue event is that the wave "appears from nowhere and vanishes without a trace", a feature partly related to the behavior of solitons, which are independent waves that self-propagate and exit a collision unchanged. The notion of the soliton arises from the first observation of a single solitary wave, made in 1834 by J. S. Russell in the Union Canal in Scotland; Russell later reproduced the solitary wave in a tank. Since then, solitons have been studied mainly in optical systems and are represented as solutions to several types of nonlinear PDEs, including the NLSE, the Korteweg-de Vries equation and the sine-Gordon equation. This type of rogue behavior is described by the rational solutions derived from the NLSE BIB007, which describe an instability induced on top of a plane wave that grows to a highest amplitude and then decays exponentially towards zero BIB008. This behavior is represented by Ma solitons and by Akhmediev breathers or "Akhmediev solitons" BIB008 BIB007 BIB003 BIB004 BIB006. The difference between these two soliton models lies in the initial conditions: the Ma solitons originate from the initial conditions, while the Akhmediev solitons arise during the evolution of the system through modulation instability BIB003 BIB001.

When solving the NLSE according to the Akhmediev scheme BIB007, the modeled envelope function (ψ) is described as a solution ranked into an order of hierarchy, starting from the first and progressing to the second, third or fourth order BIB007. The difference between each order is the increasing amplitude of the rogue wave (first order: lowest amplitude; fourth order: sharpest peak and highest amplitude). The envelope function ψ is expressed as a ratio of polynomials multiplied by the complex exponential root function e^{ix}. The polynomials, which are functions of the variables x and t, are identified by performing the Darboux transformation on the NLSE system BIB007. Akhmediev and colleagues furthermore applied a compatibility check between the root function e^{ix} and the reference state for two specified column matrix elements, which define initial conditions for the NLSE. These matrix elements (vectors) are given by Akhmediev and colleagues BIB007 as two differential equations, (6) and (7), which are split into real and imaginary parts before being simplified and solved to fit into the modified Darboux scheme BIB007 [43], giving two linear differential forms, (8) and (9). The two vectors (8) and (9) are used in the Darboux scheme to find ψ_j, where j is the order of the hierarchy. The general solution to the NLSE derived from this scheme BIB007 is given by the following general form (for any order in the hierarchy):

ψ_j(x, t) = [(−1)^j + 4(G_j(x, t) + i x H_j(x, t))/D_j(x, t)] e^{ix},   (10)

where G, H and D are the polynomials of the two variables x and t (mentioned above).
The first-order solution BIB007 has the polynomials G = 1, H = 2 and D = 1 + 4t² + 4x², which give the following envelope function (shown in Figure 2), equivalent up to a global phase to the Peregrine soliton:

ψ₁(x, t) = [1 − 4(1 + 2ix)/(1 + 4x² + 4t²)] e^{ix}.   (11)

Figure 2. The plot of ψ₁, the first-order solution to the standard NLSE BIB007. Real and imaginary parts shown respectively above and below. Plotted with SageMATH.

A minimal numerical sketch of (11) is given at the end of this section. For the second-order solution, BIB007 identify the vectors r₂ and s₂ by solving Equations (6) and (7) using the form of ψ given in (11). This gives the second-order solution, which is shown in Figure 3. The third- and fourth-order rational solutions are furthermore calculated and given in BIB007. The same hierarchy dependency is given in the approach by BIB012 for the transformed vci-NLSE, who define the general solutions for the NLSE in the hierarchical n-th order in an analogous polynomial form, where each factor is given for the first- and second-order rational solutions in BIB012. Similarly to the hierarchy solutions of Akhmediev BIB007, increasing order gives higher and higher rogue waves compared to their surrounding waves. The first- and second-order rational solutions given in BIB012 reflect respectively a 3× and 5× rogue wave height compared to the surrounding waves. For plots of these, refer to BIB012.

The similarity between this hierarchy and (10) is striking, and both retain the basic form of a complex polynomial multiplied by a complex exponential root function giving soliton solutions. The root functions of the two families are shown in their generic form in Figure 4, which depicts the distinction between the seed pulse for the regular NLSE and for the vci-NLSE, as studied respectively by BIB007 and BIB012 for the rogue wave problem. The root function for the vci-NLSE (Figure 4) shows a specific pattern of wave accumulation, which is similar to the formation of wave packets. This pattern is consistent with the physical behavior of rogue wave formation, where the rogue wave forms during a focusing phase.

Figure 4. (Top) The seed impulse used in the solutions to the standard NLSE, e^{ix} BIB007. (Bottom) A generic form of the seed impulse used in the solutions of the inhomogeneous variable coefficient NLSE BIB012, f(x) = e^{i(1−x²/2)+x}. Real part (blue) and imaginary part (red).

Other approaches to solving the NLSE have been given by BIB005, who used the inverse scattering transform, a generalization of Fourier analysis. Their solutions differ from the methods discussed above: they are periodic and are described by a complex envelope function for the deep-water wave train with added higher-order terms from the perturbation procedure BIB005. One of these solutions, a variant of the Osborne models that is periodic in space and derived from the general form given in BIB005, is shown in Figure 5. The disadvantage of this system, compared to the single-peak models derived from BIB007, lies in its periodicity and multiple peaks, while the rational solutions behind the single-peak models of BIB008 BIB007 are the first in general to serve as prototypes for rogue waves.
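The first-order solution (11) is straightforward to evaluate numerically. The following numpy/matplotlib sketch is an illustration of ours, not the SageMATH code behind Figure 2; it reproduces the threefold amplification of the unit background and the localization of the peak in both variables:

```python
import numpy as np
import matplotlib.pyplot as plt

def psi1(x, t):
    """First-order rational solution (11) of the focusing NLSE (Peregrine-type)."""
    D = 1.0 + 4.0 * x**2 + 4.0 * t**2
    return (1.0 - 4.0 * (1.0 + 2.0j * x) / D) * np.exp(1j * x)

x = np.linspace(-3, 3, 401)
t = np.linspace(-3, 3, 401)
X, T = np.meshgrid(x, t)
A = np.abs(psi1(X, T))

# The peak at (x, t) = (0, 0) is |1 - 4| = 3: three times the unit background
print(A.max())

fig, ax = plt.subplots()
pc = ax.pcolormesh(X, T, A, shading='auto')
ax.set_xlabel('x (propagation)'); ax.set_ylabel('t (retarded time)')
fig.colorbar(pc, label='|psi_1|')
plt.show()
```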
The Korteweg-de Vries Equation
Wave systems defined by higher-order nonlinear PDEs can also be solved by the bilinearization technique. This technique involves transforming the differential equation into a more tractable form by replacing the unknown time- and position-dependent envelope function with a new form. After this replacement has been performed, the bilinearization technique applies Hirota bilinear operators in a modified Bäcklund transformation technique, which assists in rewriting the original PDE as a simplified PDE composed of bilinear operators, from which exact soliton solutions can be identified. The most suitable example for the application of the bilinearization technique is the Korteweg-de Vries (KdV) equation:

ψ_t + 6ψψ_x + ψ_xxx = 0,   (15)

where the boundary conditions are that ψ → 0 as |x| → ∞, and the real wavefunction is differentiated with respect to the spatial and temporal dimensions as denoted. In the bilinearization technique, a transformation of the wavefunction to another form is the first step, where an ideal steady-state (one-soliton) form is proposed:

ψ = (p²/2) sech²(η/2),   (16)

where:

η = p x − p³ t + η₀,   (17)

and η₀ and p are arbitrary constants. By the bilinearization technique, one can rewrite (16) in the form:

ψ = 2 ∂²/∂x² log f(x, t),   (18)

which is converted to its functional form (19) with f = 1 + e^η. The bilinearization technique substitutes (18) into the original KdV Equation (15) and integrates it with respect to x:

f f_xt − f_x f_t + f f_xxxx − 4 f_x f_xxx + 3 f_xx² = 0,   (20)

which is the original version of the bilinearized variant of the Korteweg-de Vries Equation (15) as derived by Hirota. The solution to (20), f = 1 + e^η, is defined as a more fundamental quantity than ψ in Equation (18) for the structure of the original nonlinear PDE. In the method of bilinearization, the Hirota bilinear operators are introduced. These are defined by the following definition:

D_t^m D_x^n (a · b) = (∂/∂t − ∂/∂t′)^m (∂/∂x − ∂/∂x′)^n a(x, t) b(x′, t′) |_{x′=x, t′=t},   (21)

with m and n being arbitrary positive integers. At this stage, the converted form of the KdV Equation (20) is rewritten as a PDE composed of Hirota operators:

D_x(D_t + D_x³) f · f = 0,   (22)

which is a simplified form for the identification of exact solutions using the Bäcklund transformation for the original nonlinear PDE. The exact solution structure for this Hirota-operator based form of the KdV Equation (15) is given by:

f = 1 + e^{η₁} + e^{η₂} + e^{η₁+η₂+A₁₂},   (23)

which represents the two-soliton solution to the original KdV Equation. Here, η₁ and η₂ are the phase functions of the independent variables x and t, defined as in (17) for each of the solitons (writing η_i = p_i x + Ω_i t + η_i⁰ with Ω₁ = −p₁³ and Ω₂ = −p₂³), and e^{A₁₂} = ((p₁ − p₂)/(p₁ + p₂))² describes the interaction of the two solitons.

The KdV Equation (15) has also been solved by Matveev by identifying positon solutions BIB001, which exhibit the same behavior as solitons, such as a conserved shape after collision and elastic collision behavior. The positon differs from the soliton in that it has infinite energy and is therefore not a strong model for oceanic or optical rogue waves. Positons do however tend to represent smoother solutions to the KdV equation than solitons and can have very high peaks compared to the wave normal. The KdV equation has also been solved by a nonlinear Fourier method, which represents the solution as a superposition of nonlinear oscillatory modes of the wave spectrum. This model, developed by Osborne, has the capacity to include a large number of non-linear oscillatory patterns, also known as multi-quasi-cnoidal waves, which are used to form the rogue wave by superposition in constructive phases.
These solutions to the original KdV Equation (15) include several solitons, depending on the number of degrees of freedom selected for the numerical simulation of the KdV equation. This yields a 3D wave complex composed of solitons and radiation components in the simulated wave train .
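The transformation (18) and the soliton form (16) can be verified symbolically. The following SymPy sketch is our own check, using the KdV normalization of (15); it confirms that ψ = 2∂²/∂x² log(1 + e^η) with η = px − p³t + η₀ solves (15) and coincides with the sech² soliton:

```python
import sympy as sp

x, t, p, eta0 = sp.symbols('x t p eta0', real=True)

eta = p * x - p**3 * t + eta0      # phase (17), with Omega = -p^3
f = 1 + sp.exp(eta)                # one-soliton tau-function f = 1 + e^eta
u = 2 * sp.diff(sp.log(f), x, 2)   # transformation (18)

# Residual of the KdV Equation (15): u_t + 6*u*u_x + u_xxx
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual))       # prints 0

# u coincides with the sech^2 soliton (16)
diff = (u - p**2 / 2 * sp.sech(eta / 2)**2).rewrite(sp.exp)
print(sp.simplify(diff))           # prints 0
```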
The Extended Dysthe Equation
In 1979, Dysthe BIB001 developed a modification of the perturbation-based NLSE by adding a term to the third-order perturbation variant originally developed by Longuet-Higgins. Dysthe's method gave an NLSE variant, known as the extended Dysthe Equation (24), which was shown to have better agreement with the mean flow response to non-uniformities in deep-water waves. The inhomogeneous component of (24) is the fourth-order perturbation term defined by Dysthe BIB001, which couples the wave envelope to the wave-induced mean flow. Dysthe transformed this equation to the standard NLSE using dimensionless variables and added a perturbation to the general (uniform wave train) solution:

ψ = ψ₀(1 + α) e^{iθ},   (25)

where α and θ are small real perturbations of the amplitude and phase, respectively. After insertion of (25) in the dimensionless form of (24) and linearizing, Dysthe obtained a simplified system of two PDEs, whose respective plane-wave solutions, (26) and (27), are harmonic perturbations in the transverse coordinates and time with wavenumbers λ and µ and frequency Ω, where K = λ² + µ² and λ, µ and Ω are parameters that satisfy a set of dispersion relations given by Dysthe BIB001. The stability analysis of these solutions shows that the Dysthe equation represents a more realistic model than the NLSE, given that it does not predict a maximum growth rate for all wavevectors, but only for some wavevectors. This displays that the fourth-order perturbation term added to the NLSE gives a considerable improvement to the results relating to the stability of the finite amplitude wave. It is particularly the first derivatives with respect to the transformed variables in the x and z dimensions in Equation (24) that contribute to the excellent results of Dysthe. Dysthe and Trulsen BIB002 further developed this equation by including linear dispersive terms up to the fifth order in the derivatives of the wave amplitude, and successfully simulated BIB003 the New Year's wave using the extended Dysthe equation BIB001 BIB003.
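The sideband instability that Dysthe analyzed can be illustrated numerically at the leading (cubic NLSE) order of the hierarchy. The following split-step Fourier sketch is our own illustration, not Dysthe's procedure; the grid, step size and sideband wavenumber are arbitrary illustrative choices. It propagates a weakly perturbed uniform wave train and tracks the Benjamin-Feir growth of a 1% sideband up to a breather-like peak:

```python
import numpy as np

# Cubic NLSE, i*psi_x + (1/2)*psi_tt + |psi|^2*psi = 0, via Strang splitting
N, L = 1024, 32 * np.pi
t = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi = (1.0 + 0.01 * np.cos(0.5 * t)).astype(complex)  # sideband at kappa = 0.5
dx = 0.005
half = np.exp(-0.5j * (dx / 2) * k**2)                # half dispersive step

peak = 1.0
for _ in range(8000):                                 # propagate to x = 40
    psi = np.fft.ifft(half * np.fft.fft(psi))         # dispersion (exact)
    psi *= np.exp(1j * dx * np.abs(psi)**2)           # Kerr/nonlinear phase
    psi = np.fft.ifft(half * np.fft.fft(psi))
    peak = max(peak, np.abs(psi).max())

# Benjamin-Feir growth: the 1% sideband amplifies to a breather-like peak ~3
print(peak)
```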
The MMT Model
The MMT equation is a one-dimensional nonlinear dispersion equation, which was originally proposed by Majda, McLaughlin and Tabak BIB001 . The MMT equation gives soliton-like solutions, which have been analyzed in detail by Zakharov BIB003 , and gives four-wave resonant interaction between waves, which, when coupled with large-scale forcing and small-scale damping, yields a family of solutions that exhibit direct and inverse cascades. The MMT equation is given by iψ_t = |∂_x|^α ψ + λ|∂_x|^(−β/4)( ||∂_x|^(−β/4)ψ|² |∂_x|^(−β/4)ψ ) + iDψ, (28) where ψ is a complex scalar and |∂_x|^α is the pseudodifferential operator defined on the real axis through the Fourier transform, (|∂_x|^α ψ)^(k) = |k|^α ψ̂(k). The last term in (28) is the dissipation term, which is tuned to fit ocean waves through the Laplacian-type operator Dψ, defined in Fourier space. This dissipation term is similar to other dissipation models used by Komen and colleagues, who have developed concrete models for simulating large wave groups with focusing and defocusing effects. λ is the nonlinearity coefficient and corresponds to the focusing phase when λ < 0 and to the defocusing phase when λ > 0. The MMT Equation (28) differs from the standard NLSE in that its family of solutions develops in a more exponential pattern, rather than the Gaussian bell-shaped pattern observed for the solutions of the NLSE. The interesting aspect of this pattern of the spectrum of solutions of the MMT equation is the mode of formation of the rogue wave, whose energy is transferred from and to the surrounding waves. In other words, the solutions are induced by the intermittent formation of a localized rogue event arising out of the regular Gaussian background and collapsing into the surrounding waves. The energy of the rogue wave is transferred to the surroundings and experiences a complete zero-point state, merging completely into the background. The MMT model also shows the formation of quasisolitons, which appear in triple-wave packets, as modeled by Zakharov and Pushkarev, and differ from regular solitons in that they radiate energy backwards towards the preceding amplitudes. This behavior of the solutions may be particularly compatible with the simulation of rogue wave events occurring in regions with strong counter-wind currents, such as the Agulhas Current or the Irish Sea BIB004 , regions that are heavily populated by rogue events when the warm waters of the Gulf Stream encounter the frequent low-pressure systems over the Irish Sea with opposing winds. The quasibreathers or quasisolitons have a root function similar to the Dysthe-type solutions given in BIB005 . Zakharov and Pushkarev approach the solutions in the travelling-wave form ψ(x, t) = e^(iΩt) φ(x − Vt), with Fourier amplitudes φ_k at wavenumber k, where Ω and V are constants (Ω < 0 and V > 0); this is an approximate form of the soliton-like solution of the MMT model. In this approximation, Zakharov and Pushkarev give φ_k an explicit form that yields quasi-soliton solutions BIB002 to the MMT equation. This form of the solutions to the MMT equation radiates energy backwards to the preceding amplitudes and therefore represents an energy focusing that is rather dissimilar from the focusing effects modeled by others for rogue patterns (vide supra). It is interesting to note that backward radiation also plays a central role in the dynamics of the quasi-solitons, and not only in their energy accumulation profile.
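Since the fractional dispersion enters the MMT equation only through the Fourier symbol |k|^α, the pseudodifferential operator is straightforward to apply numerically on a periodic grid. The sketch below is a minimal illustration of this definition (the grid size, domain length, and test mode are arbitrary choices for the example, not values from the cited papers); α = 1/2 corresponds to the water-wave-like dispersion considered in BIB001 :

```python
import numpy as np

def frac_deriv(psi, alpha, length):
    """Apply |d/dx|^alpha on a periodic grid via its Fourier symbol:
    the transform of |d/dx|^alpha psi is |k|^alpha times psi-hat."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # grid wavenumbers
    return np.fft.ifft(np.abs(k) ** alpha * np.fft.fft(psi))

# Single Fourier mode: the operator should just multiply by |k|^alpha.
length, n = 100.0, 1024
x = np.linspace(0.0, length, n, endpoint=False)
k3 = 2.0 * np.pi * 3.0 / length
psi = np.exp(1j * k3 * x)
out = frac_deriv(psi, 0.5, length)
assert np.allclose(out, (k3 ** 0.5) * psi)
```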
Using the MMT model, Zakharov and Pushkarev also developed a model for collapses of the rogue event by using self-similar solutions, and modeled the formation of the wave wedge in the appearing and vanishing state, given by a Fourier-space distribution of the wave function. Zakharov and Pushkarev have also used the MMT model to develop a turbulence-based solution for the localized rogue event, using an initial condition in the form of an NLSE soliton (a sech-shaped envelope), which shows a conserved action and momentum and an "inner turbulence" localized both in the real and Fourier spaces of the solutions to the modeled envelope function. This "intrinsic turbulence" is described by the authors as affecting the form of its wave spectra, which are irregular and behave stochastically. This model of the rogue wave shows quasi-periodic oscillations with slowly diminishing amplitudes over time, caused by the destruction of the rogue wave by the surrounding interference, which the authors denoted as a "quasi-breather".
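The conserved action and momentum mentioned above are the standard NLSE-type integrals N = ∫|ψ|² dx and P = Im ∫ ψ* ψ_x dx, and tracking them is the usual sanity check in simulations of such quasi-breathers. A minimal sketch (the grid and the sech-shaped initial profile are illustrative assumptions, not parameters from the cited work):

```python
import numpy as np

def action_and_momentum(psi, dx):
    """Discrete NLSE invariants N = int |psi|^2 dx and
    P = Im int conj(psi) psi_x dx, with a spectral derivative."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    psi_x = np.fft.ifft(1j * k * np.fft.fft(psi))
    action = np.sum(np.abs(psi) ** 2) * dx
    momentum = np.imag(np.sum(np.conj(psi) * psi_x)) * dx
    return action, momentum

# Sech-shaped NLSE-soliton initial condition, as in the "inner turbulence"
# experiments described above (amplitude and velocity invented).
length, n = 80.0, 1024
x = np.linspace(-length / 2, length / 2, n, endpoint=False)
psi0 = 1.0 / np.cosh(x) * np.exp(0.25j * x)
print(action_and_momentum(psi0, length / n))   # ~ (2.0, 0.5)
```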
Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> A very simple exact analytic solution of the nonlinear Schroedinger equation is found in the class of periodic solutions. It describes the time evolution of a wave with constant amplitude on which a small periodic perturbation is superimposed. Expressions are obtained for the evolution of the spectrum of this solution, and these expressions are analyzed qualitatively. It is shown that there exists a certain class of periodic solutions for which the real and imaginary parts are linearly related, and an example of a one-parameter family of such solutions is given. <s> BIB001 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> We present a method for finding the hierarchy of rational solutions of the self-focusing nonlinear Schrodinger equation and present explicit forms for these solutions from first to fourth order. We also explain their relation to the highest amplitude part of a field that starts with a plane wave perturbed by random small amplitude radiation waves. Our work can elucidate the appearance of rogue waves in the deep ocean and can be applied to the observation of rogue light pulse waves in optical fibers. <s> BIB002 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> The Hirota equation is a modified nonlinear Schrodinger equation (NLSE) that takes into account higher-order dispersion and time-delay corrections to the cubic nonlinearity. In describing wave propagation in the ocean and optical fibers, it can be viewed as an approximation which is more accurate than the NLSE. We have modified the Darboux transformation technique to show how to construct the hierarchy of rational solutions of the Hirota equation. We present explicit forms for the two lower-order solutions. Each one is a regular nonsingular rational solution with a single maximum that can describe a rogue wave in this model. Numerical simulations reveal the appearance of these solutions in a chaotic field generated from a perturbed continuous wave solution. <s> BIB003 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> We investigate exact nonlinear matter wave functions with odd and even parities in the framework of quasi-two-dimensional Bose-Einstein condensates (BECs) with spatially modulated cubic-quintic nonlinearities and harmonic potential. The existence condition for these exact solutions requires that the minimum energy eigenvalue of the corresponding linear Schrodinger equation with harmonic potential is the cutoff value of the chemical potential A. The competition between two-body and three-body interactions influences the energy of the localized state. For attractive two-body and three-body interactions, the larger the matter wave order number n, the larger the energy of the corresponding localized state. A linear stability analysis and direct simulations with initial white noise demonstrate that, for the same state (fixed n), increasing the number of atoms can add stability. A quasi-stable ground-state matter wave is also found for repulsive two-body and three-body interactions. We also discuss the experimental realization of these results in future experiments.
These results are of particular significance to matter wave management in higher-dimensional BECs. <s> BIB004 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> The determinant representation of the n-fold Darboux transformation of the Hirota equation is given. Based on our analysis, the 1-soliton, 2-soliton, and breathers solutions are given explicitly. Further, the first order rogue wave solutions are given by a Taylor expansion of the breather solutions. In particular, the explicit formula of the rogue wave has several parameters, which is more general than earlier reported results and thus provides a systematic way to tune experimentally the rogue waves by choosing different values for them. <s> BIB005 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> We introduce a mechanism for generating higher order rogue waves (HRWs) of the nonlinear Schrödinger (NLS) equation: the progressive fusion and fission of n degenerate breathers associated with a critical eigenvalue λ_0, creates an order n HRW. By adjusting the relative phase of the breathers at the interacting area, it is possible to obtain different types of HRWs. The value λ_0 is a zero point of the eigenfunction of the Lax pair of the NLS equation and it corresponds to the limit of the period of the breather tending to infinity. By employing this mechanism we prove two conjectures regarding the total number of peaks, as well as a decomposition rule in the circular pattern of an order n HRW. <s> BIB006 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Hirota Equation <s> Optical bistability (OB) and optical multistability (OM) behavior in molecular magnets is theoretically studied. It is demonstrated that the OB of the system can be controlled via adjusting the magnetic field intensity. In addition, it is shown that the frequency detuning of probe and coupling fields, as well as the cooperation parameter, has remarkable effects on the OB behavior of the system. Also, we find that OB can be converted to OM through the magnitude of control-field detuning. Our results can be used as a guideline for optimizing and controlling the switching process in the crystal of molecular magnets. <s> BIB007
Multisolitons and breathers for rogue waves have also been successfully modeled BIB005 by applying the Darboux transformation to the Hirota equation. In their approach, Tao and He BIB005 developed the Lax pair of the Hirota equation using the AKNS procedure, writing the linear system as Φ_x = UΦ, Φ_t = VΦ, whose compatibility condition U_t − V_x + [U, V] = 0 reproduces the Hirota equation; this gives rise to the extended matrix representation of the operators in the Hirota equation as described by Tao and He BIB005 . Tao and He further applied the Darboux transformation [43] to the Lax-represented system through the simple gauge transformation Φ^[1] = TΦ for spectral problems, where T is a polynomial in the spectral parameter λ of the Lax pair and φ is the seed function. Tao and He BIB005 argue, however, that the regular seed solution φ = e^(ix) described in the previous sections is too special and makes the rogue wave model insufficiently universal. Tao and He therefore develop a more extended form of the seed function than that of Akhmediev and colleagues BIB002 BIB001 , starting from a zero seed solution and a periodic seed solution to construct the complete breather and soliton solutions. At zero seed and with the parameter λ from the Lax pair, they set a Hermitian pair of seed eigenfunctions and insert it back into the Darboux transformation to obtain the one-soliton solution. Tao and He BIB005 further report the model for the two-soliton solution and finally give the form of the one-soliton breather solution, in which the quantities d_1, d_2, σ are given in BIB005 . Tao and He finally construct the rogue wave solutions of the original Hirota equation by a Taylor expansion of the breather solutions. The Taylor expansion is carried out in the η variable of the breather solution, which is given in BIB005 , and yields the general form of the first-order rogue wave of the Hirota equation, whose polynomials k_1, k_2, k_3 are given by Tao and He BIB005 . The rogue wave model resulting from this form is more general than the model given by Akhmediev and colleagues BIB003 for the Hirota equation. This difference is caused by the appearance of several free parameters related to the eigenvalues of the Lax pairs, which make it possible to tune the model more finely to rogue wave experiments. This advantage of the model of Tao and He increases the ability to control how precisely a rogue wave is reproduced in calculations. Tao and He's method also makes it possible to calculate higher-order rogue wave solutions of the Hirota equation through the determinant representation of the Darboux transformation, which was carried out in a subsequent work BIB006 .
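For reference, the Hirota equation discussed here is commonly written as a third-order extension of the focusing NLSE. The normalization below is one standard choice (coefficient conventions differ between papers, so this is indicative rather than the exact form used in BIB005 ), shown together with the generic zero-curvature condition that any AKNS-type Lax pair must satisfy:

```latex
\[
  i q_t + \alpha \left( q_{xx} + 2\,|q|^2 q \right)
        + i \beta \left( q_{xxx} + 6\,|q|^2 q_x \right) = 0 ,
\]
\[
  \Phi_x = U \Phi , \qquad \Phi_t = V \Phi , \qquad
  U_t - V_x + [\,U , V\,] = 0 .
\]
```

Setting β = 0 recovers the focusing NLSE, while the iβ term carries the higher-order dispersion and the time-delay correction to the cubic nonlinearity described in BIB003 .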
Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> The self-dual Yang-Mills equations play a central role in the study of integrable systems. In this paper we develop a formalism for deriving a four dimensional integrable hierarchy of commuting nonlinear flows containing the self-dual Yang-Mills flow as the first member. We show that upon appropriate reduction and suitable choice of gauge group it produces virtually all well known hierarchies of soliton equations in 1+1 and 2+1 dimensions and can be considered as a "universal" integrable hierarchy. Prototypical examples of reductions to classical soliton equations are presented and related issues such as recursion operators, symmetries, and conservation laws are discussed. <s> BIB001 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> We have numerically calculated chaotic waves of the focusing nonlinear Schrödinger equation (NLSE), starting with a plane wave modulated by relatively weak random waves. We show that the peaks with highest amplitude of the resulting wave composition (rogue waves) can be described in terms of exact solutions of the NLSE in the form of the collision of Akhmediev breathers. <s> BIB002 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> We present a method for finding the hierarchy of rational solutions of the self-focusing nonlinear Schrodinger equation and present explicit forms for these solutions from first to fourth order. We also explain their relation to the highest amplitude part of a field that starts with a plane wave perturbed by random small amplitude radiation waves. Our work can elucidate the appearance of rogue waves in the deep ocean and can be applied to the observation of rogue light pulse waves in optical fibers. <s> BIB003 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> N.A. is grateful to Alexander von Humboldt Foundation as this work was prepared while he was visiting Germany. A.A. acknowledges the support of the Australian Research Council (Discovery Project DP0985394). J.M.S.C. acknowledges support from the Spanish Ministerio de Ciencia e Innovacion under contracts FIS2006-03376 and FIS2009-09895. J.M.D. thanks the Institut Universitaire de France and the French Agence Nationale de la Recherche projects MANUREVA ANR-08-SYSC-019 and IMFINI ANR-09-BLAN-0065, for support. <s> BIB004 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> A new integrable nonlocal nonlinear Schrödinger equation is introduced. It possesses a Lax pair and an infinite number of conservation laws and is PT symmetric. The inverse scattering transform and scattering data with suitable symmetries are discussed. A method to find pure soliton solutions is given. An explicit breathing one soliton solution is found. Key properties are discussed and contrasted with the classical nonlinear Schrödinger equation.
<s> BIB005 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> A nonlocal nonlinear Schrodinger (NLS) equation was recently introduced and shown to be an integrable infinite dimensional Hamiltonian evolution equation. In this paper a detailed study of the inverse scattering transform of this nonlocal NLS equation is carried out. The direct and inverse scattering problems are analyzed. Key symmetries of the eigenfunctions and scattering data and conserved quantities are obtained. The inverse scattering theory is developed by using a novel left–right Riemann–Hilbert problem. The Cauchy problem for the nonlocal NLS equation is formulated and methods to find pure soliton solutions are presented; this leads to explicit time-periodic one and two soliton solutions. A detailed comparison with the classical NLS equation is given and brief remarks about nonlocal versions of the modified Korteweg–de Vries and sine-Gordon equations are made. <s> BIB006 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> A nonlocal nonlinear Schrödinger (NLS) equation was recently found by the authors and shown to be an integrable infinite dimensional Hamiltonian equation. Unlike the classical (local) case, here the nonlinearly induced "potential" is PT symmetric, thus the nonlocal NLS equation is also PT symmetric. In this paper, new reverse space-time and reverse time nonlocal nonlinear integrable equations are introduced. They arise from remarkably simple symmetry reductions of general AKNS scattering problems where the nonlocality appears in both space and time or time alone. They are integrable infinite dimensional Hamiltonian dynamical systems. These include the reverse space-time, and in some cases reverse time, nonlocal nonlinear Schrödinger, modified Korteweg-deVries (mKdV), sine-Gordon, (1+1) and (2+1) dimensional three-wave interaction, derivative NLS, "loop soliton", Davey-Stewartson (DS), partially PT symmetric DS and partially reverse space-time DS equations. Linear Lax pairs, an infinite number of conservation laws, inverse scattering transforms are discussed and one soliton solutions are found. Integrable reverse space-time and reverse time nonlocal discrete nonlinear Schrödinger type equations are also introduced along with few conserved quantities. Finally, nonlocal Painlevé type equations are derived from the reverse space-time and reverse time nonlocal NLS equations. <s> BIB007 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> In 2013 a new nonlocal symmetry reduction of the well-known AKNS scattering problem was found; it was shown to give rise to a new nonlocal PT symmetric and integrable Hamiltonian nonlinear Schrödinger (NLS) equation. Subsequently, the inverse scattering transform was constructed for the case of rapidly decaying initial data and a family of spatially localized, time periodic one soliton solutions were found. In this paper, the inverse scattering transform for the nonlocal NLS equation with nonzero boundary conditions at infinity is presented in the four cases when the data at infinity have constant amplitudes. The direct and inverse scattering problems are analyzed.
Specifically, the direct problem is formulated, and the analytic properties of the eigenfunctions and scattering data and their symmetries are obtained. The inverse scattering problem is developed via a left-right Riemann-Hilbert problem in terms of a suitable uniformization variable and the time dependence of the scattering data is obtained. This leads to a method to linearize/solve the Cauchy problem. Pure soliton solutions are discussed, and an explicit 1-soliton solution and two 2-soliton solutions are provided for three of the four different cases corresponding to two different signs of nonlinearity and two different values of the phase difference between plus and minus infinity. In the one other case there are no solitons. <s> BIB008 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> Rogue waves in the nonlocal PT-symmetric nonlinear Schrodinger (NLS) equation are studied by Darboux transformation. Three types of rogue waves are derived, and their explicit expressions in terms of Schur polynomials are presented. These rogue waves show a much wider variety than those in the local NLS equation. For instance, the polynomial degrees of their denominators can be not only n(n+1), but also n(n−1)+1 and n^2, where n is an arbitrary positive integer. Dynamics of these rogue waves is also examined. It is shown that these rogue waves can be bounded for all space and time or develop collapsing singularities, depending on their types as well as values of their free parameters. In addition, the solution dynamics exhibits rich patterns, most of which have no counterparts in the local NLS equation. <s> BIB009 </s> Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions <s> The Ablowitz-Musslimani Models: Non-Local Rogue Waves <s> Starting from a discrete spectral problem, we derive a hierarchy of nonlinear discrete equations which include the Ablowitz-Ladik (AL) equation. We analytically study the discrete rogue-wave (DRW) solutions of the AL equation with three free parameters. The trajectories of peaks and depressions of profiles for the first- and second-order DRWs are produced by means of analytical and numerical methods. In particular, we study the solutions with dispersion in parity-time (PT) symmetric potential for the Ablowitz-Musslimani equation. And we consider the non-autonomous DRW solutions, parameters controlling and their interactions with variable coefficients, and predict the long-living rogue wave solutions. Our results might provide useful information for potential applications of synthetic PT symmetric systems in nonlinear optics and condensed matter physics. <s> BIB010
Another critical method for modeling rogue waves was developed by Ablowitz and Musslimani BIB005 BIB006 BIB007 and uses nonlocal integrable models of the NLSE (1) and KdV (15) equations, where the resulting wave is derived by reverse space-time symmetry. The model establishes integrability through an infinite number of constants of motion or an infinite number of conservation laws. On this basis, the method pairs the nonlinear integrable equation with a compatible pair of linear equations (similar to the Lax pair of the Hirota equation in (35)). The method of Ablowitz and Musslimani differs from the Hirota method in that the pair of linear equations represents the scattering problem and the evolution of the scattering data BIB005 . Furthermore, the method of Ablowitz and Musslimani differs from others in that it constructs an inverse scattering problem, also known as a linear Riemann-Hilbert problem, which gives the time-dependent solution of the nonlinear PDE. The approach of Ablowitz and Musslimani BIB005 starts from the equation to be linearized, iq_t(x, t) = q_xx(x, t) ± 2q(x, t)² q*(−x, t), (42) where one can immediately observe the existence of a Hermitian pair with reverse directional variables. This form, where reversed variables are used, defines the nonlocal property of the equation and has the advantage that the equation remains invariant in time and space after the complex conjugate is taken. Hence, the nonlocal equation is parity- and time-symmetric (PT-symmetric), which prevents the equation from yielding different results through its self-induced potential. An exemplary Lax pair is given in the AKNS form v_x = Xv, v_t = Tv, where v is the two-component vector, k is the spectral parameter, and A and B are complex functions appearing in the time-evolution matrix. Ablowitz and Musslimani use at this step specific compatibility conditions BIB001 (i.e., equality of the mixed derivatives, v_xt = v_tx) to transform the original PDE in (42) and gain the simplified PDE pair, which yields the original form in Equation (42). Ablowitz and Musslimani further define the nonlocality by using the specific symmetry reduction r(x, t) = ∓q*(−x, t). This step is particularly characteristic of Ablowitz-Musslimani models BIB005 BIB006 BIB007 , from which the new class of nonlocal integrable evolution equations with the nonlocal NLSE hierarchy is directly derived. The aforementioned property of conserved quantities and conservation laws is also characteristic of Ablowitz-Musslimani models BIB005 BIB007 . Here, they define a set of eigenfunctions that obey specific boundary conditions BIB005 . The eigenfunctions are very similar to the seed functions used by other groups and, when inserted in the Lax pair, yield Riccati-type relations for the conserved quantities. This yields the global conservation laws given in BIB005 , which are real integrable Hamiltonians. Ablowitz and Musslimani BIB005 furthermore derive local conservation laws, which are used to develop the framework for the direct scattering problem and the inverse scattering problem, where the scattering data are given by specific scattering matrices. The same symmetry present in the potential and in the eigenfunctions leads naturally to the same symmetry relation in the scattering matrices, in which Λ is a 2 × 2 matrix with zeros on the diagonal and ±1 as the lower and upper off-diagonal entries, respectively.
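The nonlocality of Equation (42) is easy to make concrete numerically: the self-induced potential felt at position x is V(x, t) = q(x, t)q*(−x, t), so evaluating the nonlinear term requires the field at the mirrored point −x. A small sketch on a grid symmetric about x = 0 (the grid layout and field values are invented for illustration):

```python
import numpy as np

def nonlocal_term(q, sign=1.0):
    """Nonlinear term of the nonlocal NLSE, sign * 2 q(x)^2 conj(q(-x)),
    on a grid symmetric about x = 0 (q[::-1] samples the field at -x)."""
    return sign * 2.0 * q ** 2 * np.conj(q[::-1])

# Symmetric grid: x[i] and x[-1 - i] are mirror points.
n = 8
x = np.linspace(-3.5, 3.5, n)
q = (1.0 + 0.3j) * np.exp(-x ** 2)        # illustrative complex field
V = q * np.conj(q[::-1])                  # self-induced potential
# PT symmetry of the potential: V(x) = conj(V(-x)).
assert np.allclose(V, np.conj(V[::-1]))
```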
For the inverse scattering problem, Ablowitz and Musslimani BIB005 account for the symmetry condition by considering the set of basis terms as a left-scattering problem and supplement these terms with the equivalent right-scattering problem, from which they formulate the Riemann-Hilbert problem and find the linear integral equations that govern the eigenfunctions M(x, k) and M̄(x, k). In these equations, R(k) and R̄(k) are the reflection coefficients, and the terms B_l and B̄_l are the conservation-law Hamiltonians applied symmetrically. From this stage, Ablowitz and Musslimani BIB005 derive a linear algebraic integral system of equations that solves the inverse problem for the eigenfunctions M̄(x, k) and M(x, k). The resulting soliton solutions of the Ablowitz-Musslimani model hence assume the form of Equation (54): a family of solutions defined by four independent parameters, which have a dynamic relationship with the time variable and which gradually develop a singularity at x = 0 in a finite time t_s, given by Equation (55). Equation (55) is a critical form of the time variable, which distinguishes the method of Ablowitz-Musslimani BIB005 BIB006 BIB007 BIB001 from other rogue wave models and adds a non-linear evolution of the rogue wave. Ablowitz and Musslimani have also most recently developed a new model that includes nonlocal rogue waves with a nonzero background, which provides a more realistic view of the rogue wave, which focuses energy from neighboring waves BIB008 . Solutions to the Ablowitz-Musslimani model (42) were also developed by Yang and Yang BIB009 , who used the Darboux transformation method on the PDE coupled with the Bäcklund transformation on the potential functions, identifying three types of rogue waves from the Ablowitz-Musslimani picture. Yang and Yang expressed the solutions in terms of Schur polynomials BIB009 . This analysis of the Ablowitz-Musslimani model showed greater variation in the rogue waves compared to the regular NLSE, where the variations were represented by the terms in the denominators of the soliton solutions. The parity-time-symmetric potential of the Ablowitz-Musslimani equations has also been studied very recently by Yu BIB010 , who obtained discrete rogue wave solutions with three free parameters (refer to Equation (54) for similarities). Yu studied in particular the effect that the dispersion of the parity-time symmetry has on the solutions, as well as the effect of the coefficients and the parameters. Yu BIB010 used the Darboux transformation method in a similar fashion to Yang and Yang BIB009 to derive different forms of solitons with different heights, defined by two of the three free parameters in the solution (η and η̄ in Equation (54)). Yu BIB010 also assessed the stability of rogue waves over a specific period of time and included a modulation instability coefficient that allows the modeling of several discrete solutions representing various stages of rogue wave formation (appearing suddenly and disappearing suddenly), a property of rogue waves also reported by Akhmediev and colleagues BIB002 BIB003 BIB004 . Finally, Yu modeled rogue waves that appear rapidly and do not disappear. This latter model may be particularly relevant for describing rogue events during low-pressure systems at open sea, which have been reported in several cases to give stable rogue waves with a long lifetime (i.e., the rogue waves reported in the study by Munk).
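The finite-time singularity is also easy to probe numerically. The snippet below is a generic diagnostic, not the closed-form t_s of BIB005 : near a blow-up growing like 1/(t_s − t), the reciprocal of max|q| decays linearly to zero, so extrapolating it gives an estimate of the singularity time:

```python
import numpy as np

def estimate_blowup_time(times, peaks, n_fit=5):
    """Estimate a finite blow-up time t_s from samples of max|q|(t):
    if max|q| ~ C / (t_s - t), then 1/max|q| is linear in t and
    vanishes at t_s, so fit its last n_fit samples and extrapolate."""
    t = np.asarray(times)[-n_fit:]
    inv = 1.0 / np.asarray(peaks)[-n_fit:]
    slope, intercept = np.polyfit(t, inv, 1)
    return -intercept / slope

# Synthetic check: peak amplitude growing as 1/(2.0 - t).
times = np.linspace(0.0, 1.8, 50)
peaks = 1.0 / (2.0 - times)
print(estimate_blowup_time(times, peaks))   # ~ 2.0
```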
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> A novel system for the location of people in an office environment is described. Members of staff wear badges that transmit signals providing information about their location to a centralized location service, through a network of sensors. The paper also examines alternative location techniques, system design issues and applications, particularly relating to telephone call routing. Location systems raise concerns about the privacy of an individual and these issues are also addressed. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> As part of a larger study we established a remote physiological monitoring network to investigate cardiorespiratory function during sleep in 400 infants in their homes. The objective of the study was to link measurements made at three weeks and three months of age with detailed measurements of maternal and fetal nutrition during pregnancy and subsequent outcome (growth, development and cardiorespiratory diseases). Five infants per night were monitored anywhere within a radius of 10 miles (16 km) of the remote hub. Of 800 overnight studies on 64 (of 400) infants, 99 were completed. Options allowed downloading data from monitor memory as well as review (on- and off-line) from the infants' or nurses' homes, the hub and the Oxford telemonitoring centre. The reasons for these measurements being made at home were to ensure physiological accuracy and to reduce cost. The network was effectively run by local community nurses; with the exception of the size and cost of the monitors, it had all the elements of a remote primary-care clinical service. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> Despite extraordinary advances in Global Positioning System (GPS) technology, millions of square meters of indoor space are out of reach of Navstar satellites. Their signals, originating high above the Earth, are not designed to penetrate most construction materials, and no amount of technical wizardry is likely to help. So the greater part of the world's commerce, being conducted indoors, cannot be followed by GPS satellites. Here, the authors describe how tracking people and assets indoors has now moved from the realm of science fiction to reality, thanks to a radiofrequency identification technique now being introduced to the market. <s> BIB003 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> Advanced new platform technologies are critical to the realization of the Earth Science Vision in the 2020 timeframe. Examples of the platform technology challenges and current state-of-the-art capabilities are presented. <s> BIB004 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> The developments in the space technologies are enabling the realization of deep space scientific missions such as Mars exploration. InterPlaNetary (IPN) Internet is expected to be the next step in the design and development of deep space networks as the Internet of the deep space planetary networks. However, there exist significant challenges to be addressed for the realization of this objective. Many researchers and several international organizations are currently engaged in defining and addressing these challenges and developing the required technologies for the realization of the InterPlaNetary Internet. 
In this paper, the current status of the research efforts to realize the InterPlaNetary Internet objective is captured. The communication architecture is presented, and the challenges posed by the several aspects of the InterPlaNetary Internet are introduced. The existing algorithms and protocols developed for each layer and the other related work are explored, and their shortcomings are pointed out along with the open research issues for the realization of the InterPlaNetary Internet. The objective of this survey is to motivate the researchers around the world to tackle these challenging problems and help to realize the InterPlaNetary Internet. <s> BIB005 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> We develop distributed algorithms for self-organizing sensor networks that respond to directing a target through a region. The sensor network models the danger levels sensed across its area and has the ability to adapt to changes. It represents the dangerous areas as obstacles. A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas. We give the analysis to the protocol and report on hardware experiments using a physical sensor network consisting of Mote sensors. <s> BIB006 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> All emphasize low-cost components operating on shoestring power budgets for years at a time in potentially hostile environments without hope of human intervention. <s> BIB007 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> Habitat Monitoring with Sensor Networks. By Robert Szewczyk, Eric Osterweil, Joseph Polastre, Michael Hamilton, Alan Mainwaring, and Deborah Estrin. These networks deliver to ecologists data on localized environmental conditions at the scale of individual organisms to help settle large-scale land-use issues affecting animals, plants, and people. Historically, the study of microclimate and habitat utilization has been largely observational, with climatic and behavioral variables being extrapolated from a few or even individual measurement sites. Today, densely deployed sensor networks are being scaled to the size of the organisms under study, sampling phenomena at frequencies the organisms encounter, and dispersed in patterns that capture the full range of environmental exposures to provide the fine-grain information needed for accurate modeling and prediction. Ranging in size from tens to potentially thousands of nodes within a habitat patch, these networks are beginning to provide a view of often subtle changes in a given landscape at unprecedented spatial and temporal resolution. The technological challenges for developing and deploying them are daunting. They must be unobtrusive yet durable under a range of environmental stresses, including damage caused by the organisms themselves. They must be so energy efficient that they can remain in situ with little human interaction and be maintenance-free for years at a time. They must also reliably interconnect with a cyber infrastructure that permits frequent network access for data upload, device programming, and management.
Here, we survey the components of a complete habitat-monitoring system, from miniature data-collection sensor nodes to data-processing backends containing millions of observations, showing how they fit into a unified architecture, deriving our data and conclusions from several case studies (see the sidebar "Sensing the Natural Environment"). Few themes permeate basic and applied ecological research to such an extent as the relationship of microclimate and ecological patterns, processes, physiology, and biological diversity. Microclimate can be defined as the climate close to surfaces, upon and beneath soils, under snow, or in water, on living things (such as trees), or even on individual animals. Individuals may disperse across broad areas, but persistence, growth, and reproduction depend on the existence of narrow ranges of key environmental conditions that vary over narrow spatial gradients. For example, we see only the stand of trees that reached the right microclimate as seeds but not the tens of thousands of seeds that perished or simply failed to take root because they germinated in areas outside the range of their tolerance. Through their presence and activity, organisms alter their surroundings in important ways. Tree shape, physiology, and canopy structure can produce <s> BIB008 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> Ch 1 Intro. Ch 2 Canonical Problem: Localization and Tracking Ch 3 Networking Sensor Networks Ch 4 Synchronization and Localization Ch 5 Sensor Tasking and Control Ch 6 Sensor Network Database Ch 7 Sensor Network Platforms and Tools Ch 8 Application and Future Direction <s> BIB009 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Introduction <s> Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB010
The recent advances in micro-electro-mechanical systems technology have expedited the development of tiny, low-cost, low-power, and multifunctional sensing devices, which are capable of performing tasks such as sensing, data processing, and communication BIB007 . A wireless sensor network (WSN) is a distributed network consisting, in general, of a large number of sensor nodes, which are densely deployed over a wide geographical region to track a certain physical phenomenon. The positions of wireless sensor nodes need not be engineered or predetermined. This enables random deployment in inaccessible terrains or during disaster relief operations, and it implies a need for wireless sensor network protocols and algorithms with self-organizing capabilities. Another unique feature of wireless sensor networks is the collaborative effort of sensor nodes to perform tasks such as data fusion, detection, and measurement. Instead of sending the raw data to the destination node, sensor nodes use their own processing abilities to locally perform simple computations and transmit only the required and partially processed data. In other words, data from each sensor is collected to produce a single meaningful result. Wireless sensor networks can be applied to a wide range of applications in domains as diverse as medical BIB002 , industrial, military, environmental BIB008 , scientific BIB005 BIB004 , and home networks BIB002 [17] BIB006 BIB001 BIB003 . Specifically, WSNs enable doctors to identify predefined symptoms by monitoring the physiological data of patients remotely. As a military application, WSNs can be used to detect nuclear, biological, and chemical attacks and the presence of hazardous materials, prevent enemy attacks by means of alerts when enemy aircraft are spotted, and monitor friendly forces, equipment, and ammunition. Moreover, WSNs are also conducive to monitoring forest fires, observing ecological and biological habitats, and detecting floods and earthquakes. In terms of civilian applications of WSNs, it is possible to determine spot availability in a public parking lot, track active badges at the workplace, observe security in public places such as banks and shopping malls, and monitor highway traffic at a given time. Additionally, WSNs can meet the needs of scientific applications such as space and interplanetary exploration, high-energy physics, and deep undersea exploration BIB010 . Since the sensors in a wireless sensor network operate independently, their local clocks may not be synchronized with one another. This can cause difficulties when trying to integrate and interpret information sensed at different nodes. For instance, if a moving car is detected at two different times along a road, before we can even tell in what direction the car is going, the detection times have to be compared meaningfully. In addition, we must be able to transform the two time readings into a common frame of reference before estimating the speed of the vehicle. Estimating time differences across nodes accurately is also important in node localization. For example, many localization algorithms use ranging technologies to estimate internode distances; in these technologies, synchronization is needed for time-of-flight measurements, which are then transformed into distances by multiplying by the propagation speed in the medium of the type of signal used, such as radio frequency or ultrasound.
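The ranging remark above is easy to make concrete: a time-of-flight reading is only as accurate as the synchronization between the transmitter's and receiver's clocks. A toy illustration (the signal speed is that of sound in air; the offset value is invented):

```python
# Time-of-flight ranging: distance = propagation_speed * (t_rx - t_tx).
SPEED_ULTRASONIC = 343.0    # m/s in air; an RF signal would use ~3e8 m/s

def estimated_distance(t_tx, t_rx, speed=SPEED_ULTRASONIC):
    return speed * (t_rx - t_tx)

true_tof = 0.0145           # s of ultrasonic travel, i.e. ~5 m
offset = 0.002              # 2 ms of undetected clock offset at the receiver

print(estimated_distance(0.0, true_tof))            # ~4.97 m (correct)
print(estimated_distance(0.0, true_tof + offset))   # ~5.66 m (offset error)
```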
There are additional examples where cooperative sensing requires the nodes involved to agree on a common time frame, such as configuring a beam-forming array and setting a TDMA (Time Division Multiple Access) radio schedule BIB009 . These situations mandate one common notion of time in wireless sensor networks. Therefore, there is currently great interest in developing energy-efficient clock synchronization protocols to provide such a common notion of time.
Clock Synchronization in Wireless Sensor Networks: An Overview <s> General Clock Model <s> Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> General Clock Model <s> All emphasize low-cost components operating on shoestring power budgets for years at a time in potentially hostile environments without hope of human intervention. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> General Clock Model <s> Ch 1 Intro. Ch 2 Canonical Problem: Localization and Tracking Ch 3 Networking Sensor Networks Ch 4 Synchronization and Localization Ch 5 Sensor Tasking and Control Ch 6 Sensor Network Database Ch 7 Sensor Network Platforms and Tools Ch 8 Application and Future Direction <s> BIB003 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> General Clock Model <s> Abstract Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB004
Computer clocks are, in general, based on crystal oscillators, which provide a local time for each network node. The time in a computer clock is just a counter that gets incremented by the crystal oscillator and is referred to as the software clock. The interrupt handler must increment the software clock by one every time an interrupt occurs. Most hardware oscillators are not so precise, because the frequency that makes time increase is never exactly right. Even a frequency deviation of only 0.001% would bring a clock error of about one second per day. Considering the physical clock synchronization in a distributed system to UTC (Coordinated Universal Time), the computer clock shows time C(t), which may or may not be the same as t, at any point of real time t. For a perfect clock, the derivative dC(t)/dt should be equal to 1. This term is referred to as the clock skew. The clock skew can actually vary over time due to environmental conditions, such as humidity and temperature, but we assume that it stays bounded and close to 1, so that 1 − ρ ≤ dC(t)/dt ≤ 1 + ρ BIB002 , where ρ denotes the maximum skew rate. A typical value of the maximum skew specified by the manufacturer for today's hardware is 10^−6. We note that even if the clocks of two nodes are synchronized to one common time at some point in time, they do not stay synchronized in the future due to clock skew. Even if there is no skew, the clocks of different nodes may not be the same. Time differences caused by the lack of a common time origin are called clock phase offsets. Fig. 1 shows the behavior of fast, slow, and perfect clocks with respect to UTC BIB004 BIB003 . We next review some definitions related to clock terminology that have been widely adopted in the literature. The time of a clock in a sensor node A is defined to be the function C_A(t), where C_A(t) = t for a perfect clock, and the clock frequency is the rate at which a clock progresses. Clock offset is the difference between the times reported by the clocks at two sensor nodes; namely, the offset of clock C_A relative to C_B at time t is given by C_A(t) − C_B(t). Clock skew is defined to be the difference in the frequencies of two clocks, C′_A(t) − C′_B(t), i.e., the rate of variation or derivative of the clock offset. Another clock term is clock drift, which is the second derivative of the clock offset with regard to time. Mathematically, the drift of clock C_A relative to C_B at time t is C″_A(t) − C″_B(t) BIB004 . In order for sensor nodes to be able to synchronize, they have to share, for a period of time, a communication channel where the message delays between nodes can be reliably estimated. However, the enemy of accurate network time synchronization is the non-determinism of the delay estimation process. The estimation of latencies in the channel is confounded by random events that bring about asymmetric round-trip message delivery delays, which causes synchronization errors. The latency in the channel can be decomposed into four distinct components BIB001 . Figure 2 illustrates the decomposition of packet delay when the packet travels over a wireless channel. • Send Time: This is the time spent by the sender to construct the message, including kernel protocol processing and variable delays introduced by the operating system, e.g., context switches and system call overhead incurred by the synchronization application. This time also accounts for the time needed to transfer the message from the host to its network interface.
• Access Time: This is the delay incurred while waiting for access to the transmission channel. The access time is very MAC (Medium Access Control)-specific. Contention-based MACs must wait for the channel to be clear before transmitting, and must retransmit if a collision occurs. Wireless RTS/CTS schemes such as those in 802.11 networks require an exchange of control packets before data transmission. TDMA channels require the sender to wait for its slot before transmitting. • Propagation Time: This is the time for the message to travel through the channel from the sender to the destination node once it has left the sender. When the sender and the receiver share access to the same physical medium (e.g., neighbors in an ad-hoc wireless network or a LAN), this time is very small, as it is simply the physical propagation time of the message through the medium. In contrast, Propagation Time dominates the delay in wide-area networks, where it includes the queuing delay and switching delay at each router as the message transits through the network. • Receive Time: This is the time for the network interface on the receiver side to get the message and notify the host of its arrival. This is typically the time required for the network interface to generate a message reception signal. If the arrival time is time-stamped at a low enough level in the host's operating system kernel (e.g., inside of the network driver's interrupt handler), Receive Time does not include the overhead of system calls, context switches, or even the transfer of the message from the network interface to the host, and so can be kept small.
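The clock terminology defined above translates directly into a toy model: neglecting drift, a hardware clock can be approximated as C(t) = (1 + skew)·t + offset, and the relative offset and skew of two nodes follow from the definitions. A minimal sketch with invented parameter values:

```python
from dataclasses import dataclass

@dataclass
class SimpleClock:
    """Linear clock model C(t) = (1 + skew) * t + offset; drift neglected."""
    skew: float     # rate deviation from 1 (dimensionless), e.g. 1e-6
    offset: float   # phase offset at t = 0, in seconds

    def read(self, t: float) -> float:
        return (1.0 + self.skew) * t + self.offset

a = SimpleClock(skew=4e-5, offset=0.010)    # illustrative values
b = SimpleClock(skew=-1e-5, offset=0.000)
t = 3600.0                                  # one hour of real time
print(a.read(t) - b.read(t))                # offset C_A(t) - C_B(t): 0.19 s
print(a.skew - b.skew)                      # relative skew C'_A - C'_B: 5e-05
```

The four delay components just listed are exactly what pairwise synchronization must average out. In the classical two-way exchange (used, for example, by the sender-receiver protocols surveyed in the next section), a request is timestamped at T1 (sender) and T2 (receiver) and the reply at T3 (receiver) and T4 (sender); assuming the unknown one-way delays are symmetric, the offset and round-trip delay follow directly. A sketch with made-up timestamps:

```python
def two_way_estimate(t1, t2, t3, t4):
    """Two-way time transfer: t1/t4 on the sender's clock, t2/t3 on the
    receiver's clock. Assumes symmetric one-way delays; any asymmetry
    from Send/Access/Receive time variability leaks into the offset."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # receiver minus sender clock
    delay = (t4 - t1) - (t3 - t2)            # total round-trip propagation
    return offset, delay

# Made-up exchange: true offset 5 ms, 2 ms per one-way trip, 1 ms turnaround.
t1 = 100.000
t2 = t1 + 0.002 + 0.005      # + propagation + receiver's clock offset
t3 = t2 + 0.001              # receiver turnaround time
t4 = t3 - 0.005 + 0.002      # back onto the sender's clock, + propagation
print(two_way_estimate(t1, t2, t3, t4))      # -> (0.005, 0.004)
```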
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> The network time protocol (NTP), which is designed to distribute time information in a large, diverse system, is described. It uses a symmetric architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchical configuration synchronizes local clocks within the subnet and to national time standards via wire, radio, or calibrated atomic clock. The servers can also redistribute time information within a network via local routing algorithms and time daemons. The NTP synchronization system, which has been in regular operation in the Internet for the last several years, is described, along with performance data which show that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers, or networks. > <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Presents and analyzes a new probabilistic clock synchronization algorithm that can guarantee a much smaller bound on the clock skew than most existing algorithms. The algorithm is probabilistic in the sense that the bound on the clock skew that it guarantees has a probability of invalidity associated with it. However, the probability of invalidity may be made extremely small by transmitting a sufficient number of synchronization messages. It is shown that an upper bound on the probability of invalidity decreases exponentially with the number of synchronization messages transmitted. A closed-form expression that relates the probability of invalidity to the clock skew and the number of synchronization messages is also derived. > <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Continuous clock synchronization avoids unpredictable instantaneous corrections of clock values. This is usually achieved by spreading the clock correction over the synchronization interval. In the context of wireless real time applications, a protocol achieving continuous clock synchronization must tolerate message losses and should have a low overhead in terms of the number of messages. The paper presents a clock synchronization protocol for continuous clock synchronization in wireless real time applications. It extends the IEEE 802.11 standard for wireless local area networks. It provides continuous clock synchronization, improves the precision by exploiting the tightness of the communication medium, and tolerates message losses. Continuous clock synchronization is achieved with an advanced algorithm adjusting the clock rates. We present the design of the protocol, its mathematical analysis, and measurements of a driver level implementation of the protocol on Windows NT. <s> BIB003 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is a critical piece of infrastructure in any distributed system, but wireless sensor networks make particularly extensive use of synchronized time. 
Almost any form of sensor data fusion or coordinated actuation requires synchronized physical time for reasoning about events in the physical world. However, while the clock accuracy and precision requirements are often stricter in sensor networks than in traditional distributed systems, energy and channel constraints limit the resources available to meet these goals. New approaches to time synchronization can better support the broad range of application requirements seen in sensor networks, while meeting the unique resource constraints found in such systems. We first describe the design principles we have found useful in this problem space: tiered and multi-modal architectures are a better fit than a single solution forced to solve all problems; tunable methods allow synchronization to be more finely tailored to the problem at hand; peer-to-peer synchronization eliminates the problems associated with maintaining a global timescale. We propose a new service model for time synchronization that provides a much more natural expression of these techniques: explicit timestamp conversions. We describe the implementation and characterization of several synchronization methods that exemplify our design principles. Reference-Broadcast Synchronization achieves high precision at low energy cost by leveraging the broadcast property inherent to wireless communication. A novel multi-hop algorithm allows RBS timescales to be federated across broadcast domains. Post-Facto Synchronization can make systems significantly more efficient by relaxing the traditional constraint that clocks must be kept in continuous synchrony. Finally, we describe our experience in applying our new methods to the implementation of a number of research and commercial sensor network applications. <s> BIB004 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals. We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions.
<s> BIB005 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Amino acid fermentation is conducted by fermenting bacterial cells in a culture medium in a fermentor and separating fermentation solution withdrawn from the fermentor into a solution containing said bacterial cells and a solution not containing bacterial cells by a cell separator. The solution containing said bacterial cells being circulated from said cell separator to said fermenter by circulating means to perform amino acid fermentation continuously, and bubbles being removed from said fermentation solution by a bubble separator before said fermentation solution is fed to said circulating means and said cell separator. <s> BIB006 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network and then a pair wise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20ms. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. <s> BIB007 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. 
The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms. <s> BIB008 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> Abstract Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB009 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Synchronization Protocols for Wireless Sensor Networks <s> In the near future, small intelligent devices will be deployed in homes, plantations, oceans, rivers, streets, and highways to monitor the environment. These devices require time synchronization, so voice and video data from different sensor nodes can be fused and displayed in a meaningful way at the sink. Instead of time synchronization between just the sender and receiver or within a local group of sensor nodes, some applications require the sensor nodes to maintain a similar time within a certain tolerance throughout the lifetime of the network. The Time-Diffusion Synchronization Protocol (TDP) is proposed as a network-wide time synchronization protocol. It allows the sensor network to reach an equilibrium time and maintains a small time deviation tolerance from the equilibrium time. In addition, it is analytically shown that the TDP enables time in the network to converge. Also, simulations are performed to validate the effectiveness of TDP in synchronizing the time throughout the network and balancing the energy consumed by the sensor nodes. <s> BIB010
Clock synchronization in wireless sensor networks has attracted a lot of attention in recent years. The development of post-facto synchronization by Elson and Estrin was a pioneering work BIB004 . In this method, unlike in conventional synchronization approaches such as NTP, the local clocks of sensor nodes normally run unsynchronized at their own pace and are synchronized only when synchronization is needed. The local timestamps of two sensor nodes at the occurrence time of an event are synchronized after the fact by extrapolating backwards to estimate the offset between the clocks at a previous time. This synchronization method laid the groundwork for the RBS (Reference Broadcast Synchronization) protocol. Since most sensor networks are closely tied to the applications they serve, the protocols used for synchronization differ from one another in some aspects and resemble one another in others. In BIB009 , synchronization protocols are classified according to two kinds of features: synchronization-related issues and application-dependent characteristics.

Synchronization issues

- Master-slave versus peer-to-peer synchronization
  - Master-slave: This protocol assigns one node as the master and the other nodes as slaves. The slave nodes regard the local clock reading of the master node as the reference time and try to synchronize with the master. The representative examples in this class are the protocol of Mock et al. BIB003 and Ping's protocol BIB006 .
  - Peer-to-peer: Any node can communicate directly with other nodes in the network. Such an approach removes the risk of master-node failure. This class of protocols is therefore more flexible, but also more uncontrollable. RBS BIB005 and the time diffusion protocol (TDP) BIB010 assume peer-to-peer configurations.
- Internal synchronization versus external synchronization
  - Internal synchronization: A global time base is not available from within the system, and therefore the protocol attempts to minimize the maximum difference between the readings of the local clocks of the sensors. The protocol of Mock et al. BIB003 belongs to this scheme.
  - External synchronization: A standard time such as UTC (Coordinated Universal Time) is available and is used as a reference time. The local clocks of the sensors seek to synchronize to this reference time. NTP BIB001 is the representative example.
- Probabilistic versus deterministic synchronization
  - Probabilistic synchronization: This method gives a probabilistic guarantee on the maximum clock offset, with a failure probability that can be bounded or determined. In a wireless environment where energy is scarce, this can be very expensive. The protocol of PalChaudhuri et al. is a probabilistic variation of RBS BIB005 .
  - Deterministic synchronization: Arvind BIB002 defined deterministic algorithms as those guaranteeing an upper bound on the clock offset with certainty. Most protocols are deterministic, and so are RBS and TDP.
- Sender-to-receiver versus receiver-to-receiver versus receiver-only synchronization
  - Sender-to-receiver synchronization (SRS): The sender node periodically sends a message with its local time as a timestamp to the receiver, and the receiver then synchronizes with the sender using the timestamp received from the sender.
  - Receiver-to-receiver synchronization (RRS): This method uses the property that if any two receivers receive the same message in a single-hop transmission, they receive it at approximately the same time.
Receivers exchange the times at which they received the same message and compute their offset based on the difference in reception times.
  - Receiver-only synchronization (ROS): A group of nodes can be simultaneously synchronized by only listening to the message exchanges of a pair of nodes.
- Clock correction versus untethered clocks
  - Clock correction: The local clocks of the nodes participating in the network are corrected either instantaneously or continually to keep the entire network synchronized. The timing-sync protocol for sensor networks (TPSN) BIB007 uses this approach.
  - Untethered clocks: Every node maintains its own clock as it is and keeps a time-translation table relating its clock to the clocks of the other nodes. Local timestamps are compared using the table. A global timescale is maintained in this way with the clocks untethered. RBS belongs to this approach.
- Pairwise synchronization versus network-wide synchronization
  - Pairwise synchronization: The protocols are primarily designed to synchronize two nodes, although they usually can be extended to deal with the synchronization of a group of nodes.
  - Network-wide synchronization: The protocols are mainly designed to synchronize a large number of nodes in the network.

Application-dependent features

- Single-hop versus multi-hop networks
  - Single-hop communication: A sensor node can directly communicate and exchange messages with any other sensor in a single-hop network. The protocol of Mock et al. BIB003 is a representative example. However, it can be extended to multi-hop communication.
- Stationary versus mobile networks
  - Mobile networks: Sensors have the ability to move, and they connect with other sensors only when entering the geographical scope of those sensors. The changing topology is often a problem, because it requires resynchronization of the nodes and re-computation of the neighborhoods or clusters.
- MAC-layer-based approach versus standard approach
  - RBS does not depend on MAC protocols, so as to avoid a tight integration of the application with the MAC layer. On the other hand, the protocols proposed by Ganeriwal et al. BIB007 and Mock et al. BIB003 rely on the CSMA/CA protocol for the MAC layer.

Next we will summarize various synchronization protocols, discuss their advantages and disadvantages, and explain the techniques for clock offset and skew estimation in several representative clock synchronization protocols. Given that sensor networks are generally closely related to the real-world environment that they monitor, different networks present different characteristics impacting the synchronization requirements. For the rest of this section, we will describe the synchronization schemes explicitly designed and proposed for wireless sensor networks. We will specifically consider the following protocols:

- Reference Broadcast Synchronization (RBS) BIB005
- Timing-Sync Protocol for Sensor Networks (TPSN) BIB007
- Delay Measurement Time Synchronization for Wireless Sensor Networks (DMTS) BIB006
- Flooding Time Synchronization Protocol (FTSP) BIB008
- Probabilistic clock synchronization service in sensor networks
- Time Diffusion Synchronization Protocol (TDP) BIB010

3.1. Reference Broadcast Synchronization BIB005

Elson et al. proposed a synchronization protocol for sensor networks referred to as Reference Broadcast Synchronization (RBS), which is based on receiver-receiver synchronization.
The fundamental property of RBS is that a broadcast message is used only to synchronize a set of receivers with one another, in contrast with traditional protocols that synchronize the sender of a message with its receiver. By doing this, it removes the Send Time and Access Time from the critical path, as shown in Figure 3. This is a significant advantage for synchronization in a LAN, where the Send Time and Access Time are typically the biggest contributors to the nondeterminism in the latency. An RBS broadcast is always used as a relative time reference, and never to communicate an absolute time value. It is exactly this property that eliminates the error caused by the Send Time and Access Time: each receiver synchronizes to a reference packet that was injected into the physical channel at the same instant. The message itself does not include a timestamp generated by the sender, nor is it important exactly when it is sent. As a matter of fact, the broadcast does not even need to be a dedicated time synchronization packet. Almost any extant broadcast can be used to recover timing information; for instance, ARP packets in Ethernet, or the broadcast control traffic in wireless networks (e.g., RTS/CTS exchanges or route discovery packets). As mentioned above, RBS removes the effect of the error sources of Send Time and Access Time altogether; the two remaining factors are Propagation Time and Receive Time. The authors in BIB005 consider the Propagation Time to be effectively 0. The propagation speed of electromagnetic signals through air is close to c (about 1 nsec/foot), and through copper wire about 2/3 c. For a LAN or ad-hoc network spanning tens of feet, the propagation time is at most tens of nanoseconds, which does not contribute significantly to the μsec-scale error budget. Moreover, RBS is only sensitive to the difference in propagation time between a pair of receivers, as shown in Figure 3. In order for a receiver node to interpret a message at all, it must be synchronized to the incoming message within one bit time. Latency caused by processing in the receiver electronics is irrelevant as long as it is deterministic, since the RBS scheme is only sensitive to differences in the receive times of messages within a set of receivers. Additionally, the system clock can easily be read at interrupt time when a packet arrives; this eliminates delays due to receiver protocol processing, context switches, and interface-to-host transfer from the critical path. The simplest form of RBS broadcasts a single pulse to two receiver nodes, enabling them to estimate their relative clock offsets. In other words, a transmitter first broadcasts a reference packet to two receivers (i and j), and then each receiver records the time at which the reference packet was received, according to its own local clock. Finally, the receivers exchange their observation data. Based on this single broadcast alone, the receivers have sufficient information to form a local or relative timescale. This basic RBS scheme can be extended by allowing synchronization between n receivers with a single packet, where n may be larger than two, or by increasing the number of reference packets to obtain higher precision. Reference BIB005 shows via numerical simulation results that, in the simplest case of two receivers, 30 reference broadcasts can improve the precision from 11 μsec to 1.6 μsec, after which there is a point of diminishing returns. The authors also make use of this redundancy for estimating clock skews.
Instead of averaging the phase offsets from multiple observations, they performed a least-squares linear regression. This offers a fast, closed-form method for finding the best-fit line through the phase-error observations over time. The clock offset and skew of the local node with respect to the remote node can be recovered from the intercept and slope of the line. Of course, fitting a line to observation data implicitly assumes that the frequency is stable, i.e., that the phase error is changing at a constant rate. The frequency of real oscillators changes over time due to environmental effects. In general, network time synchronization algorithms (for example, NTP) correct a clock's phase and its oscillator's frequency error, but do not try to model its frequency instability. In other words, frequency adjustments are made continuously based on a recent window of observations relating the local oscillator to a reference. The RBS system adopts a similar scheme: it models oscillators as having high short-term frequency stability by ignoring data that is more than a few minutes old. To test RBS, the authors implemented it on two different hardware platforms to assess its precision performance. The first platform is the Berkeley Motes, one of the most widely used sensor node architectures, on which RBS achieved a synchronization precision within 11 μsec. The other platform is commodity hardware, Compaq iPAQs running Linux kernel v2.4, connected with an 11 Mbps 802.11 wireless network. The achieved precision of RBS on this platform was 6.29 ± 6.45 μsec. The advantages of RBS are as follows BIB005 :

- The largest sources of nondeterministic latency can be eliminated from the critical path by using the broadcast channel to synchronize receivers with one another. This leads to significantly better synchronization precision than algorithms that measure round-trip delay.
- Multiple broadcasts enable tighter synchronization because residual errors tend to follow well-behaved distributions, and they also allow estimation of clock skew and extrapolation of past phase offsets.
- Outliers and lost packets are handled gracefully; the best-fit line can be drawn even if some points are missing.
- RBS allows nodes to construct local timescales. This is useful for sensor networks and other applications that require synchronized time but may not have an absolute time reference available.

On the contrary, RBS presents the following disadvantages BIB009 BIB005 :

- The protocol is not applicable to point-to-point networks; a broadcasting medium is needed.
- For a single-hop network of n nodes, RBS requires O(n^2) message exchanges, which is computationally expensive in the case of large-scale networks.
- Convergence time, which is the time taken to synchronize the network, can be high because of the large number of message exchanges.
- The reference node is left unsynchronized in this protocol. In some sensor networks, if the reference node needs to be synchronized, this will result in a considerable waste of energy.

Now we take a look at a method for joint estimation of the clock offset and clock skew in RBS.
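Before turning to that method, the regression just described, and the "untethered clocks" style of timestamp translation it enables, can be sketched in a few lines. The following is a minimal illustration, not code from BIB005: the timestamps are simulated, and all parameter values and names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reception timestamps (seconds) recorded by receivers i and j
# for the same 30 reference broadcasts; receiver j runs 40 ppm fast, starts
# 2.5 ms ahead, and each reception suffers roughly 5 us of jitter.
t_beacon = np.sort(rng.uniform(0.0, 60.0, 30))
t_i = t_beacon + rng.normal(0.0, 5e-6, t_beacon.size)
t_j = (1.0 + 40e-6) * t_beacon + 2.5e-3 + rng.normal(0.0, 5e-6, t_beacon.size)

# Least-squares line through the phase-error observations (t_j - t_i):
# the slope estimates the relative skew, the intercept the offset at t = 0.
skew, offset = np.polyfit(t_i, t_j - t_i, deg=1)

def to_timescale_j(timestamp_i: float) -> float:
    """Translate a local timestamp of receiver i into receiver j's timescale,
    without ever correcting either clock (untethered clocks)."""
    return timestamp_i + offset + skew * timestamp_i

print(f"offset ~ {offset:.3e} s, skew ~ {skew:.2e}")
print(f"t_i = 10.0 s corresponds to t_j ~ {to_timescale_j(10.0):.6f} s")
```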
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of clock offset and clock skew <s> Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of clock offset and clock skew <s> This letter proposes an energy-efficient clock synchronization scheme for Wireless Sensor Networks (WSNs) based on a novel time synchronization approach. Within the proposed synchronization approach, a subset of sensor nodes are synchronized by overhearing the timing message exchanges of a pair of sensor nodes. Therefore, a group of sensor nodes can be synchronized without sending any extra messages. This paper brings two main contributions: 1. Development of a novel synchronization approach which can be partially or fully applied for implementation of new synchronization protocols and for improving the performance of existing time synchronization protocols. 2. Design of a time synchronization scheme which significantly reduces the overall network-wide energy consumption without incurring any loss of synchronization accuracy compared to other well-known schemes. <s> BIB002
As mentioned before, RBS is based on the receiver-receiver synchronization (RRS) scheme, an approach that synchronizes a set of children nodes which receive the beacon messages from a common parent node. Reference BIB002 suggested the maximum likelihood estimator of the relative clock offset, which is equivalent to the estimator presented in BIB001 . The estimation of the clock offset and skew in BIB002 proceeds along the following lines. Consider a parent node P and arbitrary nodes A and B, which are located within the communication range of the parent node P, and assume, as illustrated in Figure 4, that both node A and node B receive the i-th beacon from node P at the time instants

T_{A,i} = t_i + d_A + θ_A + X_{A,i},   (2)
T_{B,i} = t_i + d_B + θ_B + X_{B,i},   (3)

where t_i denotes the transmission time of the i-th beacon, d_A and d_B the fixed portions of the delays, θ_A and θ_B the clock offsets of nodes A and B, and X_{A,i} and X_{B,i} the random portions of the delays. Subtracting (3) from (2), we obtain the following equation:

T_{A,i} - T_{B,i} = θ_AB + (X_{A,i} - X_{B,i}),   (4)

where θ_AB = θ_A - θ_B denotes the relative clock offset between nodes A and B (the fixed delay difference d_A - d_B is absorbed into θ_AB). Collecting the observations for i = 1, ..., N, we can write the set of observation data in matrix form as follows:

z = H β + w,

where z = [T_{A,1} - T_{B,1}, ..., T_{A,N} - T_{B,N}]^T is the vector of timestamp differences, β collects the unknown relative clock offset and skew, H is the corresponding known regression matrix (a column of ones, plus a column of timestamps when the relative skew is also estimated), and w is the vector of random delay differences.
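To make the matrix formulation concrete, here is a minimal sketch (the simulated timestamps, parameter values, and variable names are ours, not from BIB002) that stacks the beacon-timestamp differences and solves z = Hβ + w by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
t = np.sort(rng.uniform(0.0, 30.0, N))     # unknown beacon emission times

theta_ab, omega_ab, sigma = 1.2e-3, 20e-6, 3e-6
T_B = t + rng.normal(0.0, sigma, N)        # node B taken as the reference
T_A = (1.0 + omega_ab) * t + theta_ab + rng.normal(0.0, sigma, N)

# Stacked linear model z = H @ beta + w, with beta = [theta_AB, omega_AB];
# T_B serves as a proxy for the unknown beacon times in the regressor column.
z = T_A - T_B
H = np.column_stack([np.ones(N), T_B])
beta_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
print(f"theta_AB ~ {beta_hat[0]:.3e} s, omega_AB ~ {beta_hat[1]:.2e}")
```

With i.i.d. Gaussian delay differences, this least-squares solution coincides with the maximum likelihood estimate of the relative offset and skew.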
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Timing-Sync Protocol for Sensor Networks [29] <s> Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network and then a pair wise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20ms. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Timing-Sync Protocol for Sensor Networks [29] <s> In large-scale systems, such as Internet-based distributed systems, classical clock-synchronization solutions become impractical or poorly performing, due to the number of nodes and/or the distance among them. We present a global time service for world-wide systems, based on an innovative clock synchronization scheme, named CesiumSpray. The service exhibits high precision and accuracy; it is virtually indefinitely scalable; and it is fault-tolerant. It is deterministic for real-time machinery in the local area, which makes it particularly well-suited for, though not limited to, large-scale real-time systems. The main features of our clock synchronization scheme can be summarized as follows: hybrid external/internal synchronization protocol improves effectiveness of synchronization; heterogeneous failure semantics for clocks and processors improves previous lower bounds on processors; two-level hierarchy improves scalability. The root of the hierarchy is the GPS satellite constellation, which “sprays” its reference time over a set of nodes provided with GPS receivers, one per local network. The second level of the hierarchy performs internal synchronization, further “spraying” the external time inside the local network. <s> BIB002
Ganeriwal et al. presented a network-wide clock synchronization protocol for sensor networks referred to as the Timing-sync Protocol for Sensor Networks (TPSN) BIB001 , which relies on the traditional approach of sender-receiver synchronization. TPSN relies on the two-way message exchange scheme shown in Figure 5 to achieve synchronization between two nodes. The authors argue that, for sensor networks, the classical approach of implementing a handshake between a pair of nodes is better than synchronizing a set of receivers BIB002 . This observation is a consequence of timestamping the packets at the moment when they are sent, namely at the MAC layer, which is indeed feasible for sensor networks. The authors compared the performance of TPSN with that of RBS, which is based on the receiver-receiver synchronization approach, and showed that TPSN provides about two times better performance, in terms of accuracy, than RBS on the Berkeley motes platform. They also illustrated that TPSN can synchronize a pair of motes to an average accuracy of less than 20 μsec and a worst-case accuracy of around 50 μsec. The first step of the TPSN protocol is to create a hierarchical topology in the network. Each node is assigned a level in this hierarchical structure. A node belonging to level i can communicate with at least one node belonging to level i - 1. Only one node is assigned level 0, and it is referred to as the root node. This is done in the level discovery phase. Once the hierarchical tree structure is established, the root node initiates the second stage of the protocol, called the synchronization phase. In this second phase, a node with level i synchronizes to a node with level i - 1. In the end, every node is synchronized to the root node with level 0, and TPSN achieves network-wide time synchronization. Next we will describe the two phases of TPSN in some more detail.
Clock Synchronization in Wireless Sensor Networks: An Overview <s>  Synchronization Phase <s> The network time protocol (NTP), which is designed to distribute time information in a large, diverse system, is described. It uses a symmetric architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchical configuration synchronizes local clocks within the subnet and to national time standards via wire, radio, or calibrated atomic clock. The servers can also redistribute time information within a network via local routing algorithms and time daemons. The NTP synchronization system, which has been in regular operation in the Internet for the last several years, is described, along with performance data which show that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers, or networks. > <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s>  Synchronization Phase <s> Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network and then a pair wise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20ms. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s>  Synchronization Phase <s> Abstract Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. 
In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB003
Pairwise synchronization is performed in this phase along the edges of the hierarchical structure constructed in the level discovery phase. As mentioned above, the classical approach of sender-receiver synchronization for implementing the handshake between a pair of nodes is used along each edge of the hierarchical tree. Consider a two-way message exchange between node A and node B, as shown in Figure 5. At time T_{1,i} (according to its local clock), node A sends a synchronization_pulse packet, which contains the level of node A and the value of T_{1,i}, to node B. Node B receives this packet at time T_{2,i} and replies at time T_{3,i} with an acknowledgement packet containing its own level and the values of T_{1,i}, T_{2,i}, and T_{3,i}; node A receives this acknowledgement at time T_{4,i}. Assuming that the clock offset Δ and the propagation delay d are constant in this small span of time, node A can calculate them as illustrated by equation (9) below, and synchronize itself to the clock of node B. This represents a sender-initiated approach, where the sender synchronizes its clock to that of the receiver:

Δ = ((T_{2,i} - T_{1,i}) - (T_{4,i} - T_{3,i})) / 2,   d = ((T_{2,i} - T_{1,i}) + (T_{4,i} - T_{3,i})) / 2.   (9)

This message exchange begins with the root node initiating the synchronization phase by broadcasting a time_sync packet. Upon receiving this packet, nodes with level 1 wait for some random time before initiating the two-way message exchange with the root node, so as to avoid contention in the medium access. After receiving back an acknowledgement, these nodes adjust their clocks to the clock of the root node. Nodes with level 2 overhear this message exchange and then back off for some random time, in order to ensure that the nodes with level 1 have completed their synchronization, after which they initiate the message exchange with nodes with level 1. This process eventually enables all nodes to be synchronized to the root node. The advantages of TPSN are as follows BIB003 BIB002 :

- It is scalable, and the synchronization precision does not deteriorate significantly as the size of the network increases.
- Network-wide synchronization is computationally less expensive in comparison with protocols such as NTP BIB001 .

On the other hand, TPSN has the following disadvantages BIB003 BIB002 :

- Energy conservation is not so effective, since a physical clock correction needs to be performed on the local clocks of the sensors while achieving synchronization.
- The protocol is not suitable for applications with highly mobile nodes, because it requires a hierarchical infrastructure.
- TPSN does not support multi-hop communication.

Now let us take a look at some methods for estimating the clock offset and clock skew in a two-way message exchange model.
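Before doing so, equation (9) itself is easy to sanity-check in code. The sketch below uses hypothetical timestamps, and the sign convention (the offset is how far node B's clock is ahead of node A's) is our assumption:

```python
def tpsn_offset_delay(t1: float, t2: float, t3: float, t4: float):
    """Apply equation (9) to one two-way exchange.

    t1: node A sends synchronization_pulse   (A's clock)
    t2: node B receives it                   (B's clock)
    t3: node B sends the acknowledgement     (B's clock)
    t4: node A receives the acknowledgement  (A's clock)
    Returns (offset, delay), where offset is how far node B's clock
    is ahead of node A's.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Hypothetical timestamps: B is 150 us ahead, propagation delay is 1 ms.
offset, delay = tpsn_offset_delay(0.0, 0.001150, 0.001650, 0.002500)
print(offset, delay)   # -> 0.00015 s and 0.001 s
```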
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> Introduction The Accuracy of a Sample Mean Random Samples and Probabilities The Empirical Distribution Function and the Plug-In Principle Standard Errors and Estimated Standard Errors The Bootstrap Estimate of Standard Error Bootstrap Standard Errors: Some Examples More Complicated Data Structures Regression Models Estimates of Bias The Jackknife Confidence Intervals Based on Bootstrap "Tables" Confidence Intervals Based on Bootstrap Percentiles Better Bootstrap Confidence Intervals Permutation Tests Hypothesis Testing with the Bootstrap Cross-Validation and Other Estimates of Prediction Error Adaptive Estimation and Calibration Assessing the Error in Bootstrap Estimates A Geometrical Representation for the Bootstrap and Jackknife An Overview of Nonparametric and Parametric Inference Further Topics in Bootstrap Confidence Intervals Efficient Bootstrap Computations Approximate Likelihoods Bootstrap Bioequivalence Discussion and Further Topics Appendix: Software for Bootstrap Computations References <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> We discuss the problem of detecting errors in measurements of the total delay experienced by packets transmitted through a wide-area network. We assume that we have measurements of the transmission times of a group of packets sent from an originating host, A, and a corresponding set of measurements of their arrival times at their destination host, B, recorded by two separate clocks. We also assume that we have a similar series of measurements of packets sent from B to A (as might occur when recording a TCP connection), but we do not assume that the clock at A is synchronized with the clock at B, nor that they run at the same frequency. We develop robust algorithms for detecting abrupt adjustments to either clock, and for estimating the relative skew between the clocks. By analyzing a large set of measurements of Internet TCP connections, we find that both clock adjustments and relative skew are sufficiently common that failing to detect them can lead to potentially large errors when analyzing packet transit times. We further find that synchronizing clocks using a network time protocol such as NTP does not free them from such errors. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> We study the performance of a class of time-offset estimation algorithms for synchronization of master-slave nodes based on asynchronous transfer of timing cells when GPS is not used. We implement a synchronization control mechanism based on cell acknowledgment time-out (TO) with wait or no wait options. We analyze the mechanism reliability and performance parameters over symmetric links using an exponential cell delay variation model. We show that the maximum-likelihood offset estimator does not exist for the exponential likelihood function. We analytically provide RMS error result comparisons for five ad hoc offset estimation algorithms: the median round delay, the minimum round delay, the minimum link delay (MnLD), the median phase, and the average phase. We show that the MnLD algorithm achieves the best accuracy over symmetric links without having to impose a strict TO control, which substantially speeds up the algorithm. We also discuss an open-loop estimation updating mechanism based on standard clock models. 
<s> BIB003 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network and then a pair wise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20ms. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. <s> BIB004 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> A recently proposed estimator of the offset between two clocks in a data communications network is based on an exchange of timing messages between the clocks. It is well known that different distributions of the transmission delays in the two directions associated with the exchanged messages cause the estimator to be biased. We use the bootstrap methodology to obtain a closed-form estimator of the bias and then form a new bias-corrected estimator of the clock offset. We show that for common distribution assumptions for the transmission delays, the bias-corrected estimator has smaller mean squared error (MSE) than the uncorrected estimator. We also derive the order statistic-based best linear unbiased estimator (o-BLUE) of the clock offset under the assumption that transmission delays are exponentially distributed. Several studies on network delay characteristics show that no single distribution adequately characterizes delays. Not only are delays highly dependent on the nature of traffic, they are also ti... <s> BIB005 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> Estimating and correcting the offset between two or more clocks is an important problem in data communication networks. 
For example, Internet telephony depends on network routers having a common notion of time, and cellular networks provide a higher quality of service by using transmission protocols that depend on neighboring base stations knowing the offset that exists between their local clocks. In previous work it was shown that bootstrap bias correction of Paxson's well-known estimator of clock offset produces an estimator with improved bias and mean squared error (MSE) properties. In addition, the ordered-BLUE (o-BLUE) under an exponential distribution for network delays was derived, and its bootstrap bias-corrected form was shown to have lower bias and MSE than the bootstrap bias-corrected form of Paxson's estimator when network delays follow lognormal, gamma, and Weibull distributions. The inferred robustness of the bias-corrected o-BLUE to the assumed distribution of network delays is an attractiv... <s> BIB006 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> Recently, a few efficient timing synchronization protocols for wireless sensor networks (WSNs) have been proposed with the goal of maximizing the accuracy and minimizing the power utilization. This paper proposes novel clock skew estimators assuming different delay environments to achieve energy-efficient network-wide synchronization for WSNs. The proposed clock skew correction mechanism significantly increases the re-synchronization period, which is a critical factor in reducing the overall power consumption. The proposed synchronization scheme can be applied to the conventional protocols without additional overheads. Moreover, this paper derives the Cramer-Rao lower bounds and the maximum likelihood estimators under different delay models and assumptions. These analytical metrics serves as good benchmarks for the thus far reported experimental results <s> BIB007 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> Clock synchronization represents a crucial element in the operation of wireless sensor networks (WSNs). For any general time synchronization protocol involving a two-way message exchange mechanism, e.g., timing synch protocol for sensor networks (TPSN) [see S. Ganeriwal, R. Kumar, and M. B. Srivastava, "Timing Synch Protocol for Sensor Networks," in Proceedings of the First International Conference on Embedded Network Sensor Systems," 2003, pp. 138-149], the maximum likelihood estimate (MLE) for clock offset under the exponential delay model was derived in [D. R. Jeske, ";On the Maximum Likelihood Estimation of Clock Offset," IEEE Transactions on Communications, vol. 53, no. 1, pp. 53-54, January 2005] assuming no clock skew between the nodes. Since all practical clocks are running at different rates with respect to each other, the skew correction becomes important for achieving long term synchronization since it results in the reduction of the number of message exchanges and hence minimization of power consumption. In this paper, the joint MLE of clock offset and skew under the exponential delay model for a two way timing message exchange mechanism and the corresponding algorithms for finding these estimates are presented. Since any time synchronization protocol involves real time message exchanges between the sensor nodes, ML estimates for other synchronization protocols can be derived by employing a similar procedure. 
In addition, due to the computational complexity of the MLE, a simple, computationally efficient and easy to implement algorithm is presented as an alternative to the ML estimator which particularly suits the low power demanding regime of wireless sensor networks. <s> BIB008 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Estimation of Clock Offset <s> Wireless sensor networks have become an important and promising research area during the last recent years. Clock synchronization is one of the areas that play a crucial role in the design, implementation, and operation of wireless sensor networks. Under the assumption that there is no clock skew between sensor nodes, the Maximum Likelihood Estimate (MLE) of clock offset was proved by [1] for clock synchronization protocols that assume exponential random delays and a two-way message exchange mechanism such as TPSN (Timing-sync Protocol for Sensor Networks [2]). This MLE is asymptotically unbiased. However, the estimator is biased in the presence of a finite number of samples and much more biased in asymmetric random delay models, where the upstream delay characteristics are different from the downstream delay characteristics, and thus its performance is deteriorated. This paper proposes clock offset estimators based on the bootstrap bias correction approach, which estimates and corrects the bias of the MLE in the exponential delay model, and hence it results in better performances in mean squared error (MSE). <s> BIB009
Modeling of network delays in WSNs seems to be a challenging task. Several probability distribution function (PDF) models for random queuing delays have been proposed so far, the most widely used being the Gaussian, exponential, gamma, and Weibull distributions. By the Central Limit Theorem (CLT), the PDF of the sum of a large number of independent and identically distributed (i.i.d.) random variables approaches that of a Gaussian RV. This model is appropriate if the delays are thought of as the sum of numerous independent random processes. A Gaussian distribution for the clock offset errors was also reported by a few authors, such as BIB002 , based on laboratory tests. On the other hand, a single-server M/M/1 queue can fittingly represent the cumulative link delay for point-to-point hypothetical reference connections, where the random delays are independently modeled as exponential random variables BIB003 . In this paper, we limit our presentation mainly to the situations where the random portions of the delays are Gaussian or exponential random variables.

Noh et al. proposed the maximum likelihood estimator (MLE) of the clock offset in a two-way message exchange model BIB007 , which is discussed below. The authors suppose that the clock offsets of the two nodes remain equal during the synchronization period. Referring to the two-way exchange of Figure 5, define the observations U_i = T_{2,i} - T_{1,i} and V_i = T_{4,i} - T_{3,i}, which can be modeled as

U_i = d + θ + X_i,   V_i = d - θ + Y_i,   i = 1, ..., N,

where θ denotes the clock offset of node A with respect to node B, d the fixed portion of the delay, and X_i and Y_i the random portions of the uplink and downlink delays, respectively. When X_i and Y_i are i.i.d. Gaussian random variables, the MLE of the clock offset under the assumption that there is no clock skew is given by BIB007 :

θ̂ = (Ū - V̄) / 2,

where Ū and V̄ denote the sample means of the U_i and V_i. Thus, node A can be synchronized to node B by simply taking the difference of the averaged observations Ū and V̄. Noh et al. also proposed the joint MLE of the clock offset and clock skew under the assumption of Gaussian random delays BIB007 , a result which will not be detailed herein. For exponential random delays X_i and Y_i, Jeske proved that the maximum likelihood estimator of the clock offset exists when d is unknown and takes the same form as the estimator proposed in BIB002 , namely:

θ̂ = (min_{1≤i≤N} U_i - min_{1≤i≤N} V_i) / 2.

In the case of one round of message exchange (N = 1), the MLE of the clock offset for both the Gaussian and exponential delay models is (U_1 - V_1)/2, which is exactly the same as the estimator presented in BIB004 . Notice further that the extension of the MLE for the joint estimation of clock phase offset and skew in networks with exponential delays was recently reported by Chaudhari et al. in BIB008 .

In general, the delay distribution in the upstream, F_X, is not equal to that in the downstream, F_Y, because the node A → node B and node B → node A transmission paths through the network typically present different traffic characteristics, and thus the network delays in each path are potentially different. The minimum-based MLE above fits well the symmetric exponential delay model, where both the uplink and the downlink have the same exponential delay distribution. However, if this MLE is used in an asymmetric exponential delay model, there will be a bias in the clock offset estimate. Therefore, it is necessary to achieve a more accurate estimate of the clock offset by using an alternate approach. In BIB009 , Lee et al. proposed a clock offset estimator using the bootstrap technique and a bias correction method, which gives better performance than Jeske's MLE in the asymmetric exponential delay model. Specifically, bias-corrected estimators through non-parametric and parametric bootstrapping were proposed. The procedures of bootstrap bias correction follow BIB001 . These two bootstrap bias-corrected estimators require Monte Carlo resampling of the empirical distribution functions.
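Before detailing the bootstrap procedure, the two closed-form MLEs above are simple enough to check numerically. The sketch below uses hypothetical parameter values and symmetric exponential links:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
d, theta = 1e-3, 2e-4          # fixed delay and true clock offset (seconds)
lam = 5e-4                     # mean of the exponential delay portions

U = d + theta + rng.exponential(lam, N)    # uplink observations
V = d - theta + rng.exponential(lam, N)    # downlink observations

theta_gaussian = (U.mean() - V.mean()) / 2.0   # MLE under Gaussian delays
theta_exponential = (U.min() - V.min()) / 2.0  # MLE under exponential delays
print(theta_gaussian, theta_exponential)       # both should be close to 2e-4
```

Under symmetric exponential delays, the minimum-based estimator converges much faster: the minimum of N exponentials with mean λ has standard deviation λ/N, whereas the sample mean only achieves λ/√N.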
As far as the bootstrap bias correction method is concerned, in BIB005 Jeske had proposed a closed-form expression for the clock offset estimator obtained by the bootstrap bias correction approach based on the non-parametric technique, and had compared the estimator analytically with Paxson's estimator BIB002 , which was proved to be the MLE. Additionally, the effectiveness of bootstrap bias correction in the context of clock offset estimation was reported in BIB006 within the context of the Pareto distribution, which was suggested in recent Internet traffic modeling research. At first, let us take a look at clock offset estimation using bootstrap bias correction based on the non-parametric bootstrap method and the parametric bootstrap method, and then we will consider the estimation of the clock offset using the particle filtering approach.

- The non-parametric bootstrap principle: resamples of size n are drawn with replacement from the observed data, that is, from the empirical distribution function F̂ of the unknown distribution F, and the estimator of interest is recomputed on each resample.
- The parametric bootstrap principle: suppose that one has some partial information about F; for example, F is known to be the exponential distribution but with unknown mean μ. This suggests that we should draw a resample of size n from the exponential distribution with mean μ̂, where μ̂ is estimated from the data X rather than from a non-parametric estimate F̂ of F. We use the exponential distribution in the suggested bias correction approach through parametric bootstrapping. The parametric bootstrap principle is almost the same as the non-parametric bootstrap principle above, except that the resamples are drawn from the fitted parametric distribution.
- The bootstrap estimate of bias: let us suppose that an unknown probability distribution F has generated the observed data, and let θ̂ denote the estimate computed from the original sample. If θ̂*(b), b = 1, ..., B, denote the estimates computed from B bootstrap resamples, the bootstrap estimate of the bias is the average of the θ̂*(b) minus θ̂, and the bias-corrected estimator is obtained by subtracting this estimated bias from θ̂.

Figure 6 shows simulation results comparing the mean squared error performance of Jeske's MLE of the clock offset with those of the clock offset estimators based on the bootstrap bias correction methodology described above, in a two-way message exchange scheme under the assumption of asymmetric exponential random delays. The notations λ_1 and λ_2 denote the exponential delay parameters of the uplink and downlink delay distributions, respectively. MSE-MLE, MSE-NBC, and MSE-PBC denote the mean squared error (MSE) of Jeske's MLE (the MLE in the symmetric exponential delay model), the MSE of the bias-corrected estimator through non-parametric bootstrapping, and the MSE of the bias-corrected estimator through parametric bootstrapping, respectively. It is clear that the performances of the bias-corrected estimators are improved in the asymmetric exponential delay model, and that the bias-corrected estimator through the parametric bootstrapping method has the best performance for asymmetric exponential delay distributions.
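A minimal sketch of the non-parametric variant follows. The delay parameters and sample sizes are hypothetical, and offset_mle is our name for the minimum-based estimator; the MLE is recomputed on B resamples, the average excess over the original estimate is taken as the bias, and that bias is subtracted off:

```python
import numpy as np

rng = np.random.default_rng(3)
N, B = 25, 1000
d, theta = 1e-3, 2e-4
lam_up, lam_down = 8e-4, 2e-4              # asymmetric exponential links

U = d + theta + rng.exponential(lam_up, N)
V = d - theta + rng.exponential(lam_down, N)

def offset_mle(u, v):
    """Minimum-order-statistic MLE for exponential delays."""
    return (u.min() - v.min()) / 2.0

theta_hat = offset_mle(U, V)

# Non-parametric bootstrap: resample the observations with replacement and
# recompute the estimator on each resample.
boot = np.array([offset_mle(rng.choice(U, N, replace=True),
                            rng.choice(V, N, replace=True))
                 for _ in range(B)])

bias_hat = boot.mean() - theta_hat         # bootstrap estimate of the bias
theta_bc = theta_hat - bias_hat            # bias-corrected estimator
print(theta_hat, theta_bc)                 # true value is 2e-4
```

The parametric variant would differ only in how the resamples are generated: exponential means would be fitted to U - min(U) and V - min(V), and the resamples would then be drawn from the fitted exponential distributions instead of from the empirical ones.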
Clock Offset Estimation via Particle Filtering
In Figure 5, the k-th uplink and downlink observations corresponding to the k-th timing message exchange are assumed to be corrupted by random delays, which may follow any distribution such as Gaussian, exponential, Gamma, or Weibull. Given the observation samples [u_k, v_k] collected from the two-way message exchanges, the goal is to estimate the clock offset θ. Since the clock offset value is constant, it is assumed to obey a Gauss-Markov dynamic state-space model BIB002

θ_k = θ_{k−1} + v_k, (23)

where v_k denotes the process noise. Under the Bayesian framework, an emergent technique for obtaining the posterior probability density function (PDF) is known as particle filtering (PF). PF is based on Monte Carlo simulations with sequential importance sampling (SIS). These methods allow for a complete representation of the posterior distribution of the states using sequential importance sampling and resampling for the various probability densities. Since the true posterior PDF embodies all the available statistical information about the unknown offset, PF is optimal in the sense that all the available information has been used.

By computing the filtering density p(θ_k | y_{1:k}) recursively, we do not need to keep track of the complete history of the states. Therefore, from a storage point of view, the filtering density is more parsimonious than the full posterior density function. If we know the filtering density, we can easily derive various estimates of the system's states, including means, modes, medians, and confidence intervals. We now show how the filtering density may be approximated using sequential importance sampling techniques.

The filtering density is estimated recursively in two stages, prediction and update (correction), as illustrated in Figure 7. In the prediction step, the filtering density is propagated into the future via the transition density:

p(θ_k | y_{1:k−1}) = ∫ p(θ_k | θ_{k−1}) p(θ_{k−1} | y_{1:k−1}) dθ_{k−1}.

The transition density is defined in terms of the probabilistic model governing the states' evolution (23) and the process noise statistics. The update stage involves the application of Bayes' rule when the new datum y_k is observed:

p(θ_k | y_{1:k}) ∝ p(y_k | θ_k) p(θ_k | y_{1:k−1}).

Since these recursions rarely admit closed-form solutions, SIS draws particles from a so-called proposal distribution q(θ_k | θ_{0:k−1}, y_{1:k}), a probability distribution from which we can easily sample, and updates the importance weight of each particle recursively as

w_k ∝ w_{k−1} p(y_k | θ_k) p(θ_k | θ_{k−1}) / q(θ_k | θ_{0:k−1}, y_{1:k}). (28)

The selection of the proposal function is one of the most critical design issues in importance sampling algorithms and is the source of the main concern. The closer the proposal is to the true posterior, the better the performance of the particle filter. It is often convenient to choose the proposal distribution to be the prior:

q(θ_k | θ_{0:k−1}, y_{1:k}) = p(θ_k | θ_{k−1}). (29)

Again, we choose the stochastic model given by (23) as our model for the proposal distribution. Although it does not incorporate the most recent observations, this is the most common choice of proposal distribution since it is intuitive and can be implemented easily. It has the effect of simplifying (28) to

w_k ∝ w_{k−1} p(y_k | θ_k). (30)

A common problem with the SIS particle filter is the degeneracy phenomenon, where after a few iterations all but one particle have negligible weights. It has been shown BIB001 that the variance of the importance weights can only increase over time, and thus it is impossible to avoid the degeneracy phenomenon. A large number of samples are then effectively removed from the sample set because their importance weights become numerically insignificant. To avoid this degeneracy, a resampling stage may be used to eliminate samples with low importance weights and multiply samples with high importance weights.
A common heuristic used to decide when resampling is needed is to calculate the effective sample size N_eff, defined as

N_eff = 1 / Σ_{i=1}^{N} (w_k^{(i)})^2,

where w_k^{(i)} denotes the normalized weight of the i-th particle; resampling is performed whenever N_eff drops below a preset threshold. We have so far explained how to compute the importance weights sequentially and how to improve the sample set by resampling. The essential structure of the PF for clock offset estimation using the proposal function (29) can now be summarized as follows.

Step 1) Prediction: draw each predicted particle via the state model (23).
Step 2) Measurement update: evaluate the weight of each particle according to the likelihood function as in (30), and normalize the weights.
Step 3) Resampling: if N_eff is below the threshold, draw N new particles, where particle i is selected with probability equal to its normalized weight.
Step 4) Output: compute the clock offset estimate, e.g., as the weighted mean of the particles.
Step 5) Continue: set k ← k + 1 and iterate from Step 1.

Finally, we introduce the PF with bootstrap sampling (BS) approach, which integrates the PF with the BS for estimating the clock offset. The basic idea is quite straightforward. In order to provide a large amount of observation data, we generate sampled observation data from the original observation data set by using the BS procedure, and then estimate the clock offset based on the PF. The important thing to check is how close the PDF of the sampled data is to the true PDF. When few observations are available, the performance limitation stems from the finite number of observations, and the goal is to overcome this limitation. BS draws additional data samples from the original data samples at random with replacement, and each bootstrap sample is treated as new data. Based on the BS, we enlarge the observation data set, and given the large number of new observations, we approximate the clock offset by using the PF; a sketch of this procedure is given below.
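The following Python sketch renders the steps above as runnable code. The Gaussian likelihood, the noise levels, and the initialization are illustrative assumptions (the text allows arbitrary delay distributions); the bootstrap-sampling variant simply runs the same filter on an observation set enlarged by resampling with replacement.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_clock_offset(obs, N=500, sigma_v=0.01, sigma_w=0.1, bootstrap_to=None):
    """SIS particle filter with resampling for a nearly constant clock offset,
    using the state-transition prior (29) as the proposal.

    obs          : scalar offset-bearing measurements y_k (assumed model:
                   y_k = theta_k + Gaussian noise, an illustrative choice)
    sigma_v      : process-noise std of the Gauss-Markov model (23)
    sigma_w      : measurement-noise std of the assumed likelihood
    bootstrap_to : if set, enlarge obs to this length by resampling with
                   replacement (the PF-with-BS variant described above)
    """
    obs = np.asarray(obs, dtype=float)
    if bootstrap_to is not None and bootstrap_to > len(obs):
        obs = rng.choice(obs, size=bootstrap_to, replace=True)
    particles = rng.normal(0.0, 1.0, N)      # initial particle cloud
    weights = np.full(N, 1.0 / N)
    estimates = []
    for y in obs:
        # Step 1) Prediction: propagate via theta_k = theta_{k-1} + v_k
        particles = particles + rng.normal(0.0, sigma_v, N)
        # Step 2) Measurement update: weights ~ likelihood p(y_k|theta_k), cf. (30)
        weights = weights * np.exp(-0.5 * ((y - particles) / sigma_w) ** 2)
        weights = weights + 1e-300           # guard against total underflow
        weights = weights / weights.sum()
        # Step 3) Resample when the effective sample size N_eff is small
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        # Step 4) Output: posterior-mean estimate of the offset
        estimates.append(float(np.dot(weights, particles)))
    return estimates
```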
Delay Measurement Time Synchronization for Wireless Sensor Networks (DMTS) [42]
DMTS relies on a master-slave synchronization, sender-receiver synchronization, and clock-correction approach. This protocol was developed out of the need for a time synchronization method that avoids round-trip time estimation. DMTS synchronizes the sender and multiple receivers at the same time and requires fewer message transfers than RBS. One of the characteristics of sensor networks is their self-organization and dynamic behavior. The self-organization feature implies that the network topology may change from time to time. DMTS therefore focuses on scalability and flexibility, i.e., being either adaptive or insensitive to changes in network topology. In this protocol, a leader is chosen as time master and broadcasts its time. All receivers measure the time transfer delay and set their time as the received master time plus the measured delay. As a result, all sensors receiving the time synchronization message can be synchronized with the leader. The time synchronization precision is bounded mainly by how well the delays along the path can be measured. The receiver measures the path delay as

t_e + (t_2 − t_1),

where t_e is the estimated time to transmit the preamble and start symbols, and t_1 and t_2 are receiver timestamps. Since a radio device has a fixed transmit rate (for instance, Mica radios transmit preamble and start symbols at the rate of 20 kbps), t_e is a fixed delay and is expressed as t_e = nτ, where n stands for the number of bits to transmit and τ denotes the time to transmit one bit over the radio. In the DMTS method, a time synchronization leader sends a time synchronization message carrying its timestamp t, which is added after the MAC delay, once a clear channel is detected. The receiver calculates the path delay and adjusts its local clock to

t_r = t + nτ + (t_2 − t_1).

The receiver is then synchronized with the leader. The lower bound of DMTS accuracy is the radio device synchronization precision, and the upper bound is the accuracy of the local clock. Since DMTS needs only a single time signal transfer to synchronize all nodes within one hop, it is energy efficient. It is also lightweight because no complex operations are involved. Multi-hop synchronization is also possible: if a node knows that it has children nodes, it broadcasts a time signal after it adjusts its own time; the node can then synchronize its children through single-hop time communication, acting as a known leader. To handle the situation where network nodes have no knowledge about their children, the concept of a time-source level is used to identify the network distance of a node from the master, which is selected by means of a leader selection algorithm. A time master assumes time-source level 0, and a node synchronized to a level-n node assumes time-source level n + 1. The root node broadcasts its time periodically, and the synchronized nodes do the same. On receiving a time signal, a node checks the time-source level: if the signal comes from a source of lower level than itself, the node accepts the time; otherwise, it discards the signal. In this way, DMTS guarantees that the master time is propagated to all network nodes with a number of broadcasts equal to the number of nodes. In addition, the algorithm guarantees the shortest path (the least number of hops) to the time master, because a node always selects the node nearest to the time leader as its parent.
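As a minimal sketch of the receiver-side arithmetic just described (the function and parameter names are hypothetical):

```python
def dmts_receiver_time(t_master, n_bits, tau, t1, t2):
    """DMTS receiver adjustment sketch: leader timestamp plus measured path
    delay, where t_e = n_bits * tau is the fixed preamble/start-symbol
    transmit time and (t2 - t1) is the locally measured receive interval."""
    t_e = n_bits * tau
    return t_master + t_e + (t2 - t1)

# Example: 48 preamble/start bits at 20 kbps (tau = 50 microseconds).
print(dmts_receiver_time(t_master=1.000000, n_bits=48, tau=50e-6,
                         t1=0.000120, t2=0.000180))
```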
DMTS exhibits the following advantages BIB002 BIB001:
 A user application interface is provided to monitor a wireless sensor network at run-time.
 Computational complexity is low and energy efficiency is quite high.
On the other hand, the disadvantages of the DMTS protocol are as follows BIB002 BIB001:
 DMTS can be applied only to low-resolution, low-frequency external clocks.
 Synchronization precision is traded off for low computational complexity and high energy efficiency.
Flooding Time Synchronization Protocol (FTSP) [46]
The aim of FTSP is to attain network-wide synchronization of the local clocks of the participating nodes by using multi-hop synchronization. It is assumed that each node has a local clock exhibiting the typical timing errors of crystals and can communicate over an unreliable but error-corrected wireless channel with its neighbor nodes. FTSP synchronizes the time of a sender to possibly multiple receivers by making use of a single radio message time-stamped at both the sender and the receiver sides. MAC-layer time-stamping can eliminate many of the errors, as shown in TPSN BIB002. However, accurate clock synchronization at discrete points in time is only a partial solution, and compensation for the clock drift of the nodes is necessary to obtain high precision between synchronization points and to keep the communication overhead low. Linear regression is used in this protocol to compensate for clock drift, as already suggested in RBS BIB001. As mentioned above, FTSP provides multi-hop synchronization. The root of the network (a single, dynamically elected node) keeps the global time, and all other nodes synchronize their clocks to that of the root. The nodes form an ad-hoc structure to transfer the global time from the root to all other nodes, as opposed to the fixed spanning-tree based approach proposed in BIB002. This saves the initial phase of establishing the tree and makes the protocol more robust against node and link failures and changes in network topology BIB003.
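As a minimal sketch of the linear-regression drift compensation idea (the exact regression bookkeeping in FTSP differs; the names here are hypothetical), a node can fit global time as an affine function of its local clock over recent reference points and use the fit between synchronization messages:

```python
import numpy as np

def fit_drift(local_times, global_times):
    """Least-squares fit global ~ a * local + b over recent reference
    pairs; a - 1 approximates the clock skew, b the offset component."""
    a, b = np.polyfit(local_times, global_times, deg=1)
    return lambda t_local: a * t_local + b

# Example: estimate global time between synchronization points.
to_global = fit_drift([100.0, 200.0, 300.0], [100.02, 200.05, 300.08])
print(to_global(250.0))
```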
Probabilistic Clock Synchronization [44]
This protocol is an extension of the deterministic RBS protocol that provides probabilistic clock synchronization. Arvind BIB001 defined a probabilistic clock synchronization protocol for wired networks; most synchronization protocols, however, are based exclusively on deterministic algorithms. Deterministic methods have the advantage that they usually guarantee an upper bound on the clock offset estimation error. However, when system resources are severely constrained, a hard guarantee on synchronization accuracy may require a large number of messages to be exchanged during synchronization. In such cases, probabilistic algorithms can provide reasonable synchronization precision with lower computational and network overhead than deterministic protocols. Elson et al. BIB002 found the distribution of the synchronization error among a set of receivers: multiple messages are sent from the sender to the receivers, and the differences in the actual reception times at the receivers are plotted. As each of these pulses is independently distributed, the difference in reception times follows a Gaussian distribution with zero mean. Given a Gaussian probability distribution for the synchronization error, it is possible to calculate the relationship between a given maximum synchronization error and the probability of actually synchronizing with an error less than that maximum. If e_max is the maximum error allowed between two synchronizing nodes and σ is the standard deviation of the error distribution, the probability of synchronizing with an error |e| ≤ e_max is given by

P(|e| ≤ e_max) = erf(e_max / (σ√2)). (34)

Therefore, as the limit e_max increases, the probability of failure 1 − P(|e| ≤ e_max) decreases exponentially. Based on equation (34), PalChaudhuri et al. derived expressions converting the specified maximum clock synchronization error (the service specification) into the number of messages and the synchronization overhead (the actual protocol parameters). Averaging over n synchronization messages reduces the standard deviation of the error by a factor of √n, so the probability of the achieved error being less than the maximum specified error becomes

P(|e| ≤ e_max) = erf(e_max √n / (σ√2)), (35)

where n stands for the minimum number of synchronization messages needed to guarantee the specified error bound and σ denotes the standard deviation of the distribution. The relationship between the synchronization period and the maximum specified clock skew is also described: given a maximum value for the clock skew, a time period is derived within which resynchronization must be done, where τ_max is the maximum allowable synchronization period at any point in time, T_sync is the time period between synchronization points for the Always On model (the time period of validity for the Sensor Initiated model), ρ is the maximum drift of the clock rate, and Δ_max is the maximum delay (after the synchronization procedure was started) for the time values of one receiver to reach another receiver; in essence, the larger the drift rate ρ, the shorter the allowable resynchronization period. This algorithm can possibly be extended to create a probabilistic clock synchronization service between receivers that are multiple hops away from a sender. This extension is in contrast to the multi-hop extension used in RBS BIB002, which assumes that all sensor nodes are always within a single hop of at least one sender. Moreover, the RBS algorithm requires the existence of a node within the broadcast region of both senders. The present algorithm makes no such assumptions: sensor nodes may be multiple hops away from a sender and still be synchronized with all other nodes within their transmission range.
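Following the reconstructed relation (35), the minimum message count for a target confidence can be computed directly; a short sketch (the function name and parameters are illustrative):

```python
from math import erf, sqrt

def messages_needed(e_max, sigma, p_target):
    """Smallest n with erf(e_max * sqrt(n) / (sigma * sqrt(2))) >= p_target,
    i.e., the message count suggested by (35); assumes 0 < p_target < 1."""
    n = 1
    while erf(e_max * sqrt(n) / (sigma * sqrt(2))) < p_target:
        n += 1
    return n

# Example: error bound of 1 time unit, error std of 2, 99% confidence.
print(messages_needed(e_max=1.0, sigma=2.0, p_target=0.99))
```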
The advantages of a probabilistic clock synchronization service in sensor networks are as follows BIB004:
 A probabilistic guarantee reduces both the number of messages exchanged among nodes and the computational load on each node.
 There is a tunable tradeoff between synchronization accuracy and resource cost.
 The protocol supports multi-hop networks, which may span several broadcast domains.
However, this method also presents disadvantages BIB004:
 For safety-critical applications (for example, nuclear plant monitoring), a merely probabilistic guarantee on accuracy may not be appropriate.
 The protocol is sensitive to message losses, yet it makes no provision for them.
Time Diffusion Synchronization Protocol (TDP) [43]
TDP is a network-wide time synchronization protocol proposed by Su et al. BIB001. Specifically, this protocol enables all the sensors in the network to maintain a local time within a small bounded deviation from a network-wide equilibrium time. The TDP architecture comprises several algorithms and procedures, which are used to autonomously synchronize the nodes, remove false tickers (clocks deviating from those of their neighbors), and balance the load required for time synchronization among the sensor nodes. In the beginning, the sensor nodes may receive an Initialize pulse from the sink, either through direct broadcast or multi-hop flooding. They then determine for themselves whether to become master nodes through the election/re-election of master/diffused leader nodes procedure (ERP), which is composed of the false ticker isolation algorithm (FIA) and the load distribution algorithm (LDA). At the end of the ERP procedure, the elected master nodes start the peer evaluation procedure (PEP), while the other nodes remain idle. PEP helps to prevent false tickers from becoming master nodes or diffused leader nodes. After PEP, the elected master nodes start the time diffusion procedure (TP), through which they diffuse timing information messages every τ seconds for a duration of δ seconds (τ and δ denote the protocol's diffusion period and duration parameters). Each neighbor node receiving these timing information messages self-determines whether to become a diffused leader node using the ERP procedure. Moreover, all neighbor nodes adjust their local clocks using the time adjustment algorithm (TAA) and the clock discipline algorithm (CDA) after waiting for δ seconds. The elected diffused leader nodes diffuse the timing information messages to their neighboring nodes located within their broadcast range. This diffusion procedure allows all nodes to be autonomously synchronized. Additionally, the master nodes are re-elected every τ seconds using the ERP procedure.
The following are the advantages of TDP BIB002:
 The protocol is tolerant to message losses.
 A network-wide equilibrium time is achieved across all nodes, and all nodes are involved in the synchronization process.
 The diffusion does not rely on static level-by-level transmissions, so it exhibits flexibility and fault tolerance.
 The protocol is geared towards mobility.
On the other hand, the disadvantages are as follows:
 The convergence time tends to be high when no external precise time servers are used.
 Clocks may run backward; this can happen whenever a clock value is suddenly adjusted to a lower value.
Low-Rank Matrix Completion: A Contemporary Survey

I. INTRODUCTION
In the era of big data, the low-rank matrix has become a useful and popular tool to express two-dimensional information. One well-known example is the rating matrix in recommendation systems, representing users' tastes on products [1]. Since users expressing similar ratings on multiple products tend to have the same interest in a new product, columns associated with users sharing the same interest are highly likely to be similar, resulting in the low-rank structure of the rating matrix (see Fig. 1). Another example is the Euclidean distance matrix formed by the pairwise distances of a large number of sensor nodes. Since the rank of a Euclidean distance matrix in the k-dimensional Euclidean space is at most k + 2 (if k = 2, then the rank is at most 4), this matrix can be readily modeled as a low-rank matrix BIB002 - BIB007.

A holy grail of the low-rank matrix is that the essential information in a matrix, expressed in terms of its degrees of freedom, is much smaller than the total number of entries. Therefore, even though the number of observed entries is small, we still have a good chance to recover the whole matrix. There are a variety of scenarios where the number of observed entries of a matrix is tiny. In recommendation systems, for example, users are asked to submit feedback in the form of a rating number, e.g., 1 to 5 for a purchased product. However, users often do not want to leave feedback, and thus the rating matrix will have many missing entries. Also, in the internet of things (IoT) network, sensor nodes have a limitation on the radio communication range or are under power outage, so that only a small portion of the entries of the Euclidean distance matrix is available.

When there is no restriction on the rank of a matrix, the problem of recovering the unknown entries of the matrix from partially observed entries is ill-posed. This is because any value can be assigned to an unknown entry, which in turn means that there is an infinite number of matrices that agree with the observed entries. As a simple example, consider the following 2 × 2 matrix with one unknown entry marked ? (the concrete numbers are illustrative):

M = [ 1  2
      5  ? ].

If M is full rank, i.e., the rank of M is two, then any value except 10 can be assigned to ?. Whereas, if M is a low-rank matrix (the rank is one in this trivial example), the two columns differ only by a constant factor, and hence the unknown element ? can be easily determined using the linear relationship between the two columns (? = 10). This example is obviously simple, but the fundamental principle used to recover a large dimensional matrix is not much different, and the low-rank constraint plays a pivotal role in recovering the unknown entries of the matrix. Before we proceed, we discuss a few notable applications where the underlying matrix is modeled as a low-rank matrix.

1) Recommendation system: In 2006, the online DVD rental company Netflix announced a contest to improve the quality of the company's movie recommendation system. The company released a training set of half a million customers. The training set contains ratings on more than ten thousand movies, each movie being rated on a scale from 1 to 5 [1]. The training data can be represented in a large dimensional matrix in which each column represents the ratings of a customer for the movies. The primary goal of the recommendation system is to estimate the users' interests in products using the sparsely sampled rating matrix.
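As a quick numerical check of the rank-one reasoning above (using the illustrative numbers from the 2 × 2 example), a short NumPy snippet:

```python
import numpy as np

# Rank-one completion of the toy matrix [[1, 2], [5, ?]]: under the rank-1
# constraint the second column must be a scalar multiple of the first
# (here 2x), which forces the missing entry to 5 * 2 = 10.
ratio = 2 / 1
missing = 5 * ratio
M = np.array([[1.0, 2.0], [5.0, missing]])
print(M)
print(np.linalg.matrix_rank(M))  # 1: the completed matrix is rank-one
```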
Often, users sharing the same interests in key factors such as the type, the price, and the appearance of a product tend to provide similar ratings on the movies. The ratings of those users might form a low-rank column space, resulting in the low-rank model of the rating matrix (see Fig. 1).

FIGURE 2. Localization via LRMC BIB007. The Euclidean distance matrix can be recovered with 92% of the distance errors below 0.5 m using 30% of the observed distances.

2) Phase retrieval: The problem of recovering a signal, not necessarily sparse, from magnitude observations is referred to as phase retrieval. Phase retrieval is an important problem in X-ray crystallography and quantum mechanics since only the magnitude of the Fourier transform is measured in these applications BIB004. Suppose the unknown time-domain signal m = [m_0 · · · m_{n−1}] is acquired in the form of the measured magnitude of the Fourier transform

z_ω = Σ_{t=0}^{n−1} m_t e^{−j2πωt/n}, ω ∈ Ω, (2)

where Ω is the set of sampled frequencies. Further, let M = mm^H, where m^H is the conjugate transpose of m, and let f_ω denote the discrete Fourier waveform so that z_ω = f_ω^H m. Then (2) can be rewritten as

|z_ω|^2 = ⟨M, F_ω⟩,

where F_ω = f_ω f_ω^H is the rank-1 matrix of the waveform f_ω. Using this simple transform, we can express the quadratic magnitude |z_ω|^2 as a linear measurement of M. In essence, the phase retrieval problem can be converted into the problem of reconstructing the rank-1 matrix M in the positive semi-definite (PSD) cone BIB006, BIB004:

min_X rank(X) subject to ⟨X, F_ω⟩ = |z_ω|^2, ω ∈ Ω, X ⪰ 0.

3) Localization in IoT networks: In recent years, internet of things (IoT) has received much attention for its plethora of applications such as healthcare, automatic metering, environmental monitoring (temperature, pressure, moisture), and surveillance BIB002, BIB005, BIB003. Since the actions in IoT networks, such as a fire alarm, command broadcasting, or an emergency request, are made primarily at the data center, the data center should figure out the location information of all devices in the network. Also, in wireless energy harvesting systems, accurate location information is crucial to improve the efficiency of wireless power transfer. In this scheme, called network localization (a.k.a. cooperative localization), each sensor node measures the distances to adjacent nodes and then sends them to the data center. The data center then constructs a map of the sensor nodes using the collected distance information BIB001. For various reasons, such as the power outage of a sensor node or the limitation of the radio communication range (see Fig. 2), only a small number of the distance measurements is available at the data center. Also, in vehicular networks, it is not easy to measure the distances to all adjacent vehicles when a vehicle is located in a dead zone. An example of the observed Euclidean distance matrix (an illustrative four-node case, with ? marking unobserved entries) is

M_o = [ 0        d_12^2   ?        d_14^2
        d_21^2   0        d_23^2   ?
        ?        d_32^2   0        d_34^2
        d_41^2   ?        d_43^2   0 ],

where d_ij is the pairwise distance between sensor nodes i and j. Since the rank of the Euclidean distance matrix M is at most k + 2 in the k-dimensional Euclidean space (k = 2 or k = 3) BIB006, BIB007, the problem of reconstructing M can be well modeled as an LRMC problem.

4) Image compression and restoration: When there is dirt or a scribble in a two-dimensional image (see Fig. 3), one simple solution is to replace the contaminated pixels with an interpolated version of the adjacent pixels. A better way is to exploit the intrinsic domination of a few singular values in an image. In fact, one can readily approximate an image by a low-rank matrix without perceptible loss of quality.
By using the clean (uncontaminated) pixels as observed entries, the original image can be recovered via low-rank matrix completion.
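The dominance of a few singular values is easy to verify numerically: by the Eckart-Young theorem, the truncated SVD gives the best rank-r approximation, which is why natural images compress well and why clean pixels can suffice for completion. A minimal sketch (the synthetic "image" below is illustrative):

```python
import numpy as np

def low_rank_approx(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young),
    computed by truncating the singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Example: a synthetic matrix whose energy concentrates in one component.
A = np.outer(np.arange(64.0), np.arange(64.0)) + 0.1 * np.random.randn(64, 64)
A_r = low_rank_approx(A, r=5)
print(np.linalg.norm(A - A_r) / np.linalg.norm(A))  # small relative error
```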
Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Knowledge of accurate and timely channel state information (CSI) at the transmitter is becoming increasingly important in wireless communication systems. While it is often assumed that the receiver (whether base station or mobile) needs to know the channel for accurate power control, scheduling, and data demodulation, it is now known that the transmitter (especially the base station) can also benefit greatly from this information. For example, recent results in multiantenna multiuser systems show that large throughput gains are possible when the base station uses multiple antennas and a known channel to transmit distinct messages simultaneously and selectively to many single-antenna users. In time-division duplex systems, where the base station and mobiles share the same frequency band for transmission, the base station can exploit reciprocity to obtain the forward channel from pilots received over the reverse channel. Frequency-division duplex systems are more difficult because the base station transmits and receives on different frequencies and therefore cannot use the received pilot to infer anything about the multiantenna transmit channel. Nevertheless, we show that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases. Thus, although the total amount of channel information increases with the number of antennas at the base station, the burden of learning this information at the base station paradoxically decreases. Thus, the advantages of having more antennas at the base station extend from having network gains to learning the channel information. We quantify our gains using linear analog modulation which avoids digitizing and coding the CSI and therefore can convey information very rapidly and can be readily analyzed. The old paradigm that it is not worth the effort to learn channel information at the transmitter should be revisited since the effort decreases and the gain increases with the number of antennas. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. 
<s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. ::: This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n). <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> A new algorithm, termed subspace evolution and transfer (SET), is proposed for solving the consistent matrix completion problem. In this setting, one is given a subset of the entries of a low-rank matrix, and asked to find one low-rank matrix consistent with the given observations. We show that this problem can be solved by searching for a column space that matches the observations. The corresponding algorithm consists of two parts — subspace evolution and subspace transfer. In the evolution part, we use a line search procedure to refine the column space. However, line search is not guaranteed to converge, as there may exist barriers along the search path that prevent the algorithm from reaching a global optimum. To address this problem, in the transfer part, we design mechanisms to detect barriers and transfer the estimated column space from one side of the barrier to the another. The SET algorithm exhibits excellent empirical performance for very low-rank matrices. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. 
We make connections with existing algorithms in the context of low-rank matrix completion and discuss relative usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with the state-of-the-art and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB007 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to... <s> BIB008 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. 
In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets. <s> BIB009 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Multiple-input multiple-output (MIMO) systems with a large number of base station antennas, often called massive MIMO, have received much attention in academia and industry as a means to improve the spectral efficiency, energy efficiency, and processing complexity of next generation cellular systems. The mobile communication industry has initiated a feasibility study of massive MIMO systems to meet the increasing demand of future wireless systems. Field trials of the proof-of-concept systems have demonstrated the potential gain of the Full-Dimension MIMO (FD-MIMO), an official name for the MIMO enhancement in the 3rd generation partnership project (3GPP). 3GPP initiated standardization activity for the seamless integration of this technology into current 4G LTE systems. In this article, we provide an overview of FD-MIMO systems, with emphasis on the discussion and debate conducted on the standardization process of Release 13. We present key features for FD-MIMO systems, a summary of the major issues for the standardization and practical system design, and performance evaluations for typical FD-MIMO scenarios. <s> BIB010 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes.
<s> BIB011 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> This paper proposes a new framework for the design of transmit and receive beamformers for interference alignment (IA) without symbol extensions in multi-antenna cellular networks. We consider IA in a $G$ cell network with $K$ users/cell, $N$ antennas at each base station (BS) and $M$ antennas at each user. The proposed framework is developed by recasting the conditions for IA as two sets of rank constraints, one on the rank of interference matrices, and the other on the transmit beamformers in the uplink. The interference matrix consists of all the interfering vectors received at a BS from the out-of-cell users in the uplink. Using these conditions and the crucial observation that the rank of interference matrices under alignment can be determined beforehand, this paper develops two sets of algorithms for IA. The first part of this paper develops rank minimization algorithms for IA by iteratively minimizing a weighted matrix norm of the interference matrix. Different choices of matrix norms lead to reweighted nuclear norm minimization (RNNM) or reweighted Frobenius norm minimization (RFNM) algorithms with significantly different per-iteration complexities. Alternately, the second part of this paper devises an alternating minimization (AM) algorithm where the rank-deficient interference matrices are expressed as a product of two lower-dimensional matrices that are then alternately optimized. Simulation results indicate that RNNM, which has a per-iteration complexity of a semidefinite program, is effective in designing aligned beamformers for proper-feasible systems with or without redundant antennas, while RFNM and AM, which have a per-iteration complexity of a quadratic program, are better suited for systems with redundant antennas. <s> BIB012 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified. <s> BIB013 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> In this paper, we present a flexible low-rank matrix completion (LRMC) approach for topological interference management (TIM) in the partially connected $K$ -user interference channel. No channel state information (CSI) is required at the transmitters except the network topology information. The previous attempt on the TIM problem is mainly based on its equivalence to the index coding problem, but so far only a few index coding problems have been solved. 
In contrast, in this paper, we present an algorithmic approach to investigate the achievable degrees-of-freedom (DoFs) by recasting the TIM problem as an LRMC problem. Unfortunately, the resulting LRMC problem is known to be NP-hard, and the main contribution of this paper is to propose a Riemannian pursuit (RP) framework to detect the rank of the matrix to be recovered by iteratively increasing the rank. This algorithm solves a sequence of fixed-rank matrix completion problems. To address the convergence issues in the existing fixed-rank optimization methods, the quotient manifold geometry of the search space of fixed-rank matrices is exploited via Riemannian optimization. By further exploiting the structure of the low-rank matrix varieties, i.e., the closure of the set of fixed-rank matrices, we develop an efficient rank increasing strategy to find good initial points in the procedure of rank pursuit. Simulation results demonstrate that the proposed RP algorithm achieves a faster convergence rate and higher achievable DoFs for the TIM problem compared with the state-of-the-art methods. <s> BIB014 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> The upcoming big data era is likely to demand tremendous computation and storage resources for communications. By pushing computation and storage to network edges, fog radio access networks (Fog-RAN) can effectively increase network throughput and reduce transmission latency. Furthermore, we can exploit the benefits of cache enabled architecture in Fog-RAN to deliver contents with low latency. Radio access units (RAUs) need content delivery from fog servers through wireline links whereas multiple mobile devices acquire contents from RAUs wirelessly. This work proposes a unified low-rank matrix completion (LRMC) approach to solving the content delivery problem in both wireline and wireless parts of Fog-RAN. To attain a low caching latency, we present a high precision approach with Riemannian trust-region method to solve the challenging LRMC problem by exploiting the quotient manifold geometry of fixed-rank matrices. Numerical results show that the new approach has a faster convergence rate, is able to achieve optimal results, and outperforms other state-of-art algorithms. <s> BIB015 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Low-rank matrices play a fundamental role in modeling and computational methods for signal processing and machine learning. In many applications where low-rank matrices arise, these matrices cannot be fully sampled or directly observed, and one encounters the problem of recovering the matrix given only incomplete and indirect observations. This paper provides an overview of modern techniques for exploiting low-rank structure to perform matrix recovery in these settings, providing a survey of recent advances in this rapidly-developing field. Specific attention is paid to the algorithms most commonly used in practice, the existing theoretical guarantees for these algorithms, and representative practical applications of these techniques. 
<s> BIB016 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency (RF) chain at the base station (BS) and mobile station (MS) is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival (AoA), angle of departure (AoD), and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e. the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a direct compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method. <s> BIB017
By exploiting hundreds of antennas at the base station (BS), massive MIMO can offer a large gain in capacity. In order to maximize the performance gain of massive MIMO systems, channel state information at the transmitter (CSIT) is required BIB010 . One way to acquire the CSIT is to let each user directly feed back its own pilot observation to the BS for the joint CSIT estimation of all users BIB011 . In this setup, the MIMO channel matrix $\mathbf{H}$ can be reconstructed in two steps: 1) finding the pilot observation matrix $\mathbf{Y}$ using the least squares (LS) estimation or the linear minimum mean square error (LMMSE) estimation and 2) reconstructing $\mathbf{H}$ using the model $\mathbf{Y} = \boldsymbol{\Psi} \mathbf{H}$, where each column of the pilot matrix $\boldsymbol{\Psi}$ is the pilot signal sent from one antenna at the BS BIB001 , BIB006 . Since the number of resolvable paths $P$ is limited in most cases, one can readily assume that $\text{rank}(\mathbf{H}) \leq P$ BIB011 . In massive MIMO systems, $P$ is often much smaller than the dimension of $\mathbf{H}$ due to the limited number of scattering clusters around the BS. Thus, the problem of recovering $\mathbf{H}$ at the BS can be solved via the rank minimization problem subject to the linear constraint $\mathbf{Y} = \boldsymbol{\Psi} \mathbf{H}$ BIB006 . Other than these, there are a bewildering variety of applications of LRMC in wireless communication, such as millimeter wave (mmWave) channel estimation BIB007 , BIB017 , topological interference management (TIM) BIB014 - BIB012 , and mobile edge caching in fog radio access networks (Fog-RAN) BIB013 , BIB015 . The paradigm of LRMC has received much attention ever since the works of Fazel , Candès and Recht BIB002 , and Candès and Tao BIB003 . Over the years, there have been many works on this topic BIB008 , BIB005 , BIB004 , BIB009 , but it might not be easy to grasp the essentials of LRMC from these studies. One reason is that many of these works are highly theoretical, building on random matrix theory, graph theory, manifold analysis, and convex optimization. Another reason is that most of these works propose a new LRMC technique, so it is difficult to extract a general idea and big picture of LRMC from them. The primary goal of this paper is to provide a contemporary survey on LRMC, a new paradigm to recover unknown entries of a low-rank matrix from partial observations. To provide a better view, insight, and understanding of the potentials and limitations of LRMC to researchers and practitioners in a friendly way, we present early scattered results in a structured and accessible way. Firstly, we classify the state-of-the-art LRMC techniques into two main categories and then explain each category in detail. Secondly, we present issues to be considered when using LRMC techniques. Specifically, we discuss the intrinsic properties required for low-rank matrix recovery and explain how to exploit a special structure, such as the positive semidefinite-based structure, the Euclidean distance-based structure, and the graph structure, in LRMC design. Thirdly, we compare the recovery performance and the computational complexity of LRMC techniques via numerical simulations. We conclude the paper by commenting on the choice of LRMC techniques and providing future research directions. Recently, there have been a few overview papers on LRMC. An overview of LRMC algorithms and their performance guarantees can be found in BIB016 . A survey with an emphasis on first-order LRMC techniques together with their computational efficiency is presented in . Our work is clearly distinct from the previous studies in several aspects.
Firstly, we categorize the state-of-the-art LRMC techniques into two classes and then explain the details of each class, which helps researchers easily identify the technique best suited to a given problem setup. Secondly, we provide a comprehensive survey of LRMC techniques and also provide extensive simulation results on the recovery quality and the running time complexity, from which one can easily see the pros and cons of each LRMC technique and also gain a better insight into the choice of LRMC algorithms. Finally, we discuss how to exploit a special structure of a low-rank matrix in the LRMC algorithm design. In particular, we introduce the CNN-based LRMC algorithm that exploits the graph structure of a low-rank matrix. We briefly summarize the notation used in this paper.
• For a vector $\mathbf{a} \in \mathbb{R}^n$, $\text{diag}(\mathbf{a}) \in \mathbb{R}^{n \times n}$ is the diagonal matrix formed by $\mathbf{a}$.
• For a matrix $\mathbf{A} \in \mathbb{R}^{n_1 \times n_2}$, $\mathbf{a}_i \in \mathbb{R}^{n_1}$ is the $i$-th column of $\mathbf{A}$.
• $\text{rank}(\mathbf{A})$ is the rank of $\mathbf{A}$.
• $\langle \mathbf{A}, \mathbf{B} \rangle = \text{tr}(\mathbf{A}^T \mathbf{B})$ and $\mathbf{A} \circ \mathbf{B}$ are the inner product and the Hadamard product (or element-wise multiplication) of two matrices $\mathbf{A}$ and $\mathbf{B}$, respectively, where $\text{tr}(\cdot)$ denotes the trace operator.
• $\|\mathbf{A}\|$, $\|\mathbf{A}\|_*$, and $\|\mathbf{A}\|_F$ stand for the spectral norm (i.e., the largest singular value), the nuclear norm (i.e., the sum of singular values), and the Frobenius norm of $\mathbf{A}$, respectively.
• $\mathbf{0}_{n_1 \times n_2}$ and $\mathbf{1}_{n_1 \times n_2}$ are the $n_1 \times n_2$-dimensional matrices with entries being zero and one, respectively.
• If $\mathbf{A}$ is a square matrix (i.e., $n_1 = n_2 = n$), $\text{diag}(\mathbf{A}) \in \mathbb{R}^n$ is the vector formed by the diagonal entries of $\mathbf{A}$.
• $\text{vec}(\mathbf{X})$ is the vectorization of $\mathbf{X}$.
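For readers who prefer running code to symbols, the following short numpy snippet (our illustration; the survey itself contains no code) maps the notation above to concrete operations:

```python
import numpy as np

# Illustrative numpy translation of the notation listed above.
A = np.arange(6.0).reshape(2, 3)
B = np.ones((2, 3))

inner = np.trace(A.T @ B)             # <A, B> = tr(A^T B)
hadamard = A * B                      # A o B (element-wise multiplication)
spectral = np.linalg.norm(A, 2)       # ||A||: largest singular value
nuclear = np.linalg.norm(A, "nuc")    # ||A||_*: sum of singular values
frobenius = np.linalg.norm(A, "fro")  # ||A||_F
vec_A = A.reshape(-1, order="F")      # vec(A): column-wise stacking
```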
Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> A primal-dual infeasible-interior-point path-following algorithm is proposed for solving semidefinite programming (SDP) problems. If the problem has a solution, then the algorithm is globally convergent. If the starting point is feasible or close to being feasible, the algorithm finds an optimal solution in at most $O(\sqrt{n}L)$ iterations, where n is the size of the problem and L is the logarithm of the ratio of the initial error and the tolerance. If the starting point is large enough, then the algorithm terminates in at most O(nL) steps either by finding a solution or by determining that the primal-dual problem has no solution of norm less than a given number. Moreover, we propose a sufficient condition for the superlinear convergence of the algorithm. In addition, we give two special cases of SDP for which the algorithm is quadratically convergent. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> An available pressurized air source, such as an inflated tire, connectable by appropriate conduit means, with flow control and pressure regulation provisions, through an air transmitter or face mask, to the breathing passages of a passenger in a submerged land vehicle to either provide emergency breathing air for the passenger, or to fill an inflatable and portable air pack which the passenger may leave the vehicle with, or both. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semi-definite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semi-definite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard. 
In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB007 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. 
The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB008
Since the rank minimization problem (10) is NP-hard , it is computationally intractable when the dimension of a matrix is large. One common trick to avoid this computational issue is to replace the non-convex objective function with its convex surrogate, that is, to convert the combinatorial search problem into a convex optimization problem. There are two clear advantages in solving the convex optimization problem: 1) a local optimum solution is globally optimal and 2) there are many efficient polynomial-time convex optimization solvers (e.g., the interior-point method BIB004 and semidefinite programming (SDP) solvers). In the LRMC problem, the nuclear norm $\|\mathbf{X}\|_*$, the sum of the singular values of $\mathbf{X}$, has been widely used as a convex surrogate of $\text{rank}(\mathbf{X})$ BIB006 :

$$\min_{\mathbf{X}} \ \|\mathbf{X}\|_* \quad \text{subject to} \quad P_{\Omega}(\mathbf{X}) = P_{\Omega}(\mathbf{M}). \tag{11}$$

Indeed, it has been shown that the nuclear norm is the convex envelope (the ''best'' convex approximation) of the rank function on the set $\{\mathbf{X} \in \mathbb{R}^{n_1 \times n_2} : \|\mathbf{X}\| \leq 1\}$ BIB008 . Note that the relaxation from the rank function to the nuclear norm is conceptually analogous to the relaxation from the $\ell_0$-norm to the $\ell_1$-norm in compressed sensing. Also, it has been shown that if the observed entries of a rank $r$ matrix $\mathbf{M} (\in \mathbb{R}^{n \times n})$ are suitably random and the number of observed entries satisfies $|\Omega| \geq C \mu_0 n^{1.2} r \log n$ for some positive constant $C$, where $\mu_0$ is the largest coherence of $\mathbf{M}$ (see the definition in Subsection III-A.2), then $\mathbf{M}$ is the unique solution of the NNM problem (11) with overwhelming probability (see Appendix B). It is worth mentioning that the NNM problem in (11) can also be recast as a semidefinite program (SDP) of the form

$$\min_{\mathbf{X}, \mathbf{W}_1, \mathbf{W}_2} \ \frac{1}{2}\left(\text{tr}(\mathbf{W}_1) + \text{tr}(\mathbf{W}_2)\right) \quad \text{subject to} \quad \begin{bmatrix} \mathbf{W}_1 & \mathbf{X} \\ \mathbf{X}^T & \mathbf{W}_2 \end{bmatrix} \succeq 0, \quad \langle \mathbf{A}_k, \mathbf{X} \rangle = b_k, \ k = 1, \ldots, |\Omega|, \tag{13}$$

where $\{\mathbf{A}_k\}_{k=1}^{|\Omega|}$ is the sequence of linear sampling matrices and $\{b_k\}_{k=1}^{|\Omega|}$ are the observed entries. The problem in (13) can be solved by off-the-shelf SDP solvers such as SDPT3 and SeDuMi BIB002 using interior-point methods BIB001 . It has been shown that the computational complexity of SDP techniques is $O(n^3)$ where $n = \max(n_1, n_2)$ . Also, it has been shown that under suitable conditions, the output $\widehat{\mathbf{M}}$ of SDP satisfies $\|\widehat{\mathbf{M}} - \mathbf{M}\|_F \leq \epsilon$ in at most $O(n^{\omega} \log(1/\epsilon))$ iterations, where $\omega$ is a positive constant BIB003 . Alternatively, one can reconstruct $\mathbf{M}$ by solving the equivalent nonconvex quadratic optimization form of the NNM problem BIB005 . Note that this approach has a computational benefit since the number of primal variables of NNM is reduced from $n_1 n_2$ to $r(n_1 + n_2)$ ($r \leq \min(n_1, n_2)$). Interested readers may refer to BIB005 for more details.
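As a concrete illustration of (11), the sketch below prototypes NNM with the cvxpy package; the package choice and the problem sizes are our own, and `normNuc` together with a masked equality constraint encodes $\|\mathbf{X}\|_*$ and $P_{\Omega}(\mathbf{X}) = P_{\Omega}(\mathbf{M})$:

```python
import cvxpy as cp
import numpy as np

# Hedged NNM sketch for problem (11) on a small synthetic instance.
rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
Omega = (rng.random((n, n)) < 0.5).astype(float)               # sampling mask

X = cp.Variable((n, n))
objective = cp.Minimize(cp.normNuc(X))                         # ||X||_*
constraints = [cp.multiply(Omega, X) == Omega * M]             # P_Omega match
cp.Problem(objective, constraints).solve()

print(np.linalg.norm(X.value - M, "fro") / np.linalg.norm(M, "fro"))
```

For matrices beyond a few hundred rows, such generic conic solvers become impractical, which is precisely what motivates the first-order methods discussed next.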
Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. 
<s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to... <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency (RF) chain at the base station (BS) and mobile station (MS) is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival (AoA), angle of departure (AoD), and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e. the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a direct compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method. <s> BIB005
While the solution of the NNM problem in (11) can be obtained by solving the SDP in (13), this procedure is computationally burdensome when the size of the matrix is large. As an effort to mitigate the computational burden, the singular value thresholding (SVT) algorithm has been proposed BIB002 . The key idea of this approach is to put a regularization term into the objective function of the NNM problem:

$$\min_{\mathbf{X}} \ \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X}\|_F^2 \quad \text{subject to} \quad P_{\Omega}(\mathbf{X}) = P_{\Omega}(\mathbf{M}), \tag{14}$$

where $\tau$ is the regularization parameter. In [33, Theorem 3.1], it has been shown that the solution to the problem (14) converges to the solution of the NNM problem as $\tau \to \infty$. Let $\mathcal{L}(\mathbf{X}, \mathbf{Y})$ be the Lagrangian function associated with (14), i.e.,

$$\mathcal{L}(\mathbf{X}, \mathbf{Y}) = \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X}\|_F^2 + \langle \mathbf{Y}, P_{\Omega}(\mathbf{M}) - P_{\Omega}(\mathbf{X}) \rangle,$$

where $\mathbf{Y}$ is the dual variable. Let $\widehat{\mathbf{X}}$ and $\widehat{\mathbf{Y}}$ be the primal and dual optimal solutions. Then, by the strong duality BIB001 , we have

$$\mathcal{L}(\widehat{\mathbf{X}}, \mathbf{Y}) \leq \mathcal{L}(\widehat{\mathbf{X}}, \widehat{\mathbf{Y}}) \leq \mathcal{L}(\mathbf{X}, \widehat{\mathbf{Y}}) \quad \text{for all } \mathbf{X} \text{ and } \mathbf{Y}.$$

The SVT algorithm finds $\widehat{\mathbf{X}}$ and $\widehat{\mathbf{Y}}$ in an iterative fashion. Specifically, starting with $\mathbf{Y}^0 = \mathbf{0}_{n_1 \times n_2}$, SVT updates $\mathbf{X}^k$ and $\mathbf{Y}^k$ as

$$\mathbf{X}^k = \arg\min_{\mathbf{X}} \ \mathcal{L}(\mathbf{X}, \mathbf{Y}^{k-1}), \tag{17a}$$
$$\mathbf{Y}^k = \mathbf{Y}^{k-1} + \delta_k \left( P_{\Omega}(\mathbf{M}) - P_{\Omega}(\mathbf{X}^k) \right), \tag{17b}$$

where $\{\delta_k\}_{k \geq 1}$ is a sequence of positive step sizes. Note that $\mathbf{X}^k$ can be expressed as

$$\begin{aligned} \mathbf{X}^k &= \arg\min_{\mathbf{X}} \ \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X}\|_F^2 - \langle \mathbf{Y}^{k-1}, P_{\Omega}(\mathbf{X}) \rangle \\ &\overset{(a)}{=} \arg\min_{\mathbf{X}} \ \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X}\|_F^2 - \langle P_{\Omega}(\mathbf{Y}^{k-1}), \mathbf{X} \rangle \\ &\overset{(b)}{=} \arg\min_{\mathbf{X}} \ \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X} - \mathbf{Y}^{k-1}\|_F^2, \end{aligned} \tag{18}$$

where (a) is because $\langle P_{\Omega}(\mathbf{A}), \mathbf{B} \rangle = \langle \mathbf{A}, P_{\Omega}(\mathbf{B}) \rangle$ and (b) is because $\mathbf{Y}^{k-1}$ vanishes outside of $\Omega$ (i.e., $P_{\Omega}(\mathbf{Y}^{k-1}) = \mathbf{Y}^{k-1}$) by (17b). Due to the inclusion of the nuclear norm, finding the solution $\mathbf{X}^k$ of (18) seems to be difficult. However, thanks to the intriguing result of Cai et al., we can easily obtain the solution.
Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy a restricted isometry property (RIP). Our method guarantees geometric convergence rate even in the presence of noise and requires strictly weaker assumptions on the RIP constants than the existing methods. We also introduce a Newton-step for our SVP framework to speed-up the convergence with substantial empirical gains. Next, we address a practically important application of ARMP - the problem of low-rank matrix completion, for which the defining affine constraints do not directly obey RIP, hence the guarantees of SVP do not hold. However, we provide partial progress towards a proof of exact recovery for our algorithm by showing a more restricted isometry property and observe empirically that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. 
We also demonstrate empirically that our algorithms outperform existing methods, such as those of [5, 18, 14], for ARMP and the matrix completion problem by an order of magnitude and are also more robust to noise and sampling schemes. In particular, results show that our SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> Matrices of low rank can be uniquely determined from fewer linear measurements, or entries, than the total number of entries in the matrix. Moreover, there is a growing literature of computationally efficient algorithms which can recover a low rank matrix from such limited information; this process is typically referred to as matrix completion. We introduce a particularly simple yet highly efficient alternating projection algorithm which uses an adaptive stepsize calculated to be exact for a restricted subspace. This method is proven to have near-optimal order recovery guarantees from dense measurement masks and is observed to have average case performance superior in some respects to other matrix completion algorithms for both dense measurement masks and entry measurements. In particular, this proposed algorithm is able to recover matrices from extremely close to the minimum number of measurements necessary. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> We discuss the analysis and design of an Environmental Monitoring Application.The application is reliable and maintenance-free, runs in multihop wireless network.We analyze the different alternatives and tradeoffs, using open source software.The application is validated in long-term outdoor deployments with good results.Related work does not analyze the software design with open source. We discuss the entire process for the analysis and design of an Environmental Monitoring Application for Wireless Sensor Networks, using existing open source components to create the application. We provide a thorough study of the different alternatives, from the selection of the embedded operating system to the different algorithms and strategies. The application has been designed to gather temperature and relative humidity data following the rules of quality assurance for environmental measurements, suitable for use in both research and industry. The main features of the application are: (a) runs in a multihop low-cost network based on IEEE 802.15.4, (b) improved network reliability and lifetimes, (c) easy management and maintenance-free, (d) ported to different platforms and (e) allows different configurations and network topologies. The application has been tested and validated in several long-term outdoor deployments with very good results and the conclusions are aligned with the experimental evidence. <s> BIB004
$\mathbf{Z} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T$, where $\boldsymbol{\Sigma} = \text{diag}(\{\sigma_i\})$ contains the singular values of $\mathbf{Z}$. Then, for each $\tau \geq 0$, the solution of $\min_{\mathbf{X}} \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X} - \mathbf{Z}\|_F^2$ is given by $\mathcal{D}_{\tau}(\mathbf{Z})$, where $\mathcal{D}_{\tau}$ is the singular value thresholding operator defined as

$$\mathcal{D}_{\tau}(\mathbf{Z}) = \mathbf{U} \, \text{diag}(\{(\sigma_i - \tau)_+\}) \, \mathbf{V}^T, \qquad (x)_+ = \max(x, 0).$$

To conclude, the update equations for $\mathbf{X}^k$ and $\mathbf{Y}^k$ are given by

$$\mathbf{X}^k = \mathcal{D}_{\tau}(\mathbf{Y}^{k-1}), \tag{21a}$$
$$\mathbf{Y}^k = \mathbf{Y}^{k-1} + \delta_k P_{\Omega}(\mathbf{M} - \mathbf{X}^k). \tag{21b}$$

One can notice from (21a) and (21b) that the SVT algorithm is computationally efficient since we only need a truncated SVD and elementary matrix operations in each iteration. Indeed, let $r_k$ be the number of singular values of $\mathbf{Y}^{k-1}$ being greater than the threshold $\tau$. Also, we suppose $\{r_k\}$ converges to the rank of the original matrix, i.e., $\lim_{k \to \infty} r_k = r$. Then the computational complexity of SVT is $O(r n_1 n_2)$. Note also that the number of iterations to achieve the $\epsilon$-approximation solution is $O(1/\sqrt{\epsilon})$ BIB001 . In Table 1 , we summarize the SVT algorithm. For the details of the stopping criterion of SVT, see [BIB001, Section 5] . Over the years, various SVT-based techniques have been proposed BIB003 , BIB002 . In , an iterative matrix completion algorithm using the SVT-based operator called the proximal operator has been proposed. Similar algorithms inspired by the iterative hard thresholding (IHT) algorithm in CS have also been proposed BIB003 , BIB002 .
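To make the iteration concrete, here is a minimal numpy sketch of SVT implementing (21a) and (21b). The parameter choices $\tau = 5n$ and $\delta_k = 1.2/p$ (with sampling ratio $p$) follow the heuristics suggested in BIB001 for illustration, and a full SVD stands in for the truncated SVD for brevity:

```python
import numpy as np

def svt(M_obs, Omega, tau, delta, n_iter=300):
    """Singular value thresholding sketch; M_obs is zero outside Omega."""
    Y = np.zeros_like(M_obs)
    for _ in range(n_iter):
        # (21a): X^k = D_tau(Y^{k-1}) via soft-thresholding of singular values
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # (21b): dual ascent step restricted to the observed entries
        Y = Y + delta * Omega * (M_obs - X)
    return X

rng = np.random.default_rng(2)
n, r, p = 100, 3, 0.4
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
Omega = (rng.random((n, n)) < p).astype(float)
M_hat = svt(Omega * M, Omega, tau=5 * n, delta=1.2 / p)
print(np.linalg.norm(M_hat - M, "fro") / np.linalg.norm(M, "fro"))
```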
Low-Rank Matrix Completion: A Contemporary Survey <s> 3) ITERATIVELY REWEIGHTED LEAST SQUARES (IRLS) MINIMIZATION <s> We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximatively low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best k-rank approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows to expedite the solution of the least squares problems required at each iteration. We present numerical experiments that confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) ITERATIVELY REWEIGHTED LEAST SQUARES (IRLS) MINIMIZATION <s> We discuss the analysis and design of an Environmental Monitoring Application.The application is reliable and maintenance-free, runs in multihop wireless network.We analyze the different alternatives and tradeoffs, using open source software.The application is validated in long-term outdoor deployments with good results.Related work does not analyze the software design with open source. We discuss the entire process for the analysis and design of an Environmental Monitoring Application for Wireless Sensor Networks, using existing open source components to create the application. We provide a thorough study of the different alternatives, from the selection of the embedded operating system to the different algorithms and strategies. The application has been designed to gather temperature and relative humidity data following the rules of quality assurance for environmental measurements, suitable for use in both research and industry. The main features of the application are: (a) runs in a multihop low-cost network based on IEEE 802.15.4, (b) improved network reliability and lifetimes, (c) easy management and maintenance-free, (d) ported to different platforms and (e) allows different configurations and network topologies. The application has been tested and validated in several long-term outdoor deployments with very good results and the conclusions are aligned with the experimental evidence. <s> BIB002
Yet another simple and computationally efficient way to solve the NNM problem is the iteratively reweighted least squares (IRLS) minimization technique BIB001 , . In essence, the NNM problem can be recast using the least squares minimization as

$$\min_{\mathbf{X}} \ \|\mathbf{W}^{1/2} \mathbf{X}\|_F^2 \quad \text{subject to} \quad P_{\Omega}(\mathbf{X}) = P_{\Omega}(\mathbf{M}), \tag{22}$$

where $\mathbf{W} = (\mathbf{X}\mathbf{X}^T)^{-\frac{1}{2}}$. It can be shown that (22) is equivalent to the NNM problem (11) since we have BIB001

$$\|\mathbf{W}^{1/2} \mathbf{X}\|_F^2 = \text{tr}(\mathbf{W} \mathbf{X} \mathbf{X}^T) = \text{tr}((\mathbf{X}\mathbf{X}^T)^{\frac{1}{2}}) = \|\mathbf{X}\|_*.$$

The key idea of the IRLS technique is to find $\mathbf{X}$ and $\mathbf{W}$ in an iterative fashion. The update expressions are

$$\mathbf{X}^k = \arg\min_{P_{\Omega}(\mathbf{X}) = P_{\Omega}(\mathbf{M})} \ \|(\mathbf{W}^{k-1})^{1/2} \mathbf{X}\|_F^2, \tag{24a}$$
$$\mathbf{W}^k = (\mathbf{X}^k (\mathbf{X}^k)^T)^{-\frac{1}{2}}. \tag{24b}$$

Note that the weighted least squares subproblem (24a) can be easily solved by updating each and every column of $\mathbf{X}^k$ BIB001 . In order to compute $\mathbf{W}^k$, we need a matrix inversion (24b). To avoid ill-behavior (i.e., some of the singular values of $\mathbf{X}^k$ approaching zero), an approach using a perturbation of the singular values has been proposed BIB001 , . (By the $\epsilon$-approximation, we mean $\|\widehat{\mathbf{M}} - \mathbf{M}^*\|_F \leq \epsilon$, where $\widehat{\mathbf{M}}$ is the reconstructed matrix and $\mathbf{M}^*$ is the optimal solution of SVT.) Similar to SVT, the computational complexity per iteration of the IRLS-based technique is $O(r n_1 n_2)$. Also, IRLS requires $O(\log(1/\epsilon))$ iterations to achieve the $\epsilon$-approximation solution. We summarize the IRLS minimization technique in Table 2 .
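The numpy sketch below illustrates one way to implement the IRLS iteration (22)-(24). The $\epsilon$-perturbed weight update and the closed-form column solve are our own illustrative choices, consistent with the perturbation idea mentioned above: minimizing $\mathbf{x}^T \mathbf{W} \mathbf{x}$ over the unobserved entries of a column $\mathbf{x}$ with the observed entries fixed gives $\mathbf{x}_u = -\mathbf{W}_{uu}^{-1} \mathbf{W}_{uo} \mathbf{x}_o$.

```python
import numpy as np

def irls_mc(M_obs, Omega, n_iter=50, eps=1e-2):
    """IRLS sketch for (22)-(24); M_obs is zero outside Omega."""
    n1, n2 = M_obs.shape
    X = Omega * M_obs
    for _ in range(n_iter):
        # (24b) with perturbation: W = (X X^T + eps*I)^(-1/2) via eigendecomposition
        lam, U = np.linalg.eigh(X @ X.T + eps * np.eye(n1))
        W = (U / np.sqrt(lam)) @ U.T
        # (24a): column-wise weighted least squares, observed entries kept fixed
        for j in range(n2):
            o = Omega[:, j] > 0          # observed rows of column j
            u = ~o                       # unobserved rows of column j
            if u.any():
                X[u, j] = -np.linalg.solve(W[np.ix_(u, u)],
                                           W[np.ix_(u, o)] @ M_obs[o, j])
            X[o, j] = M_obs[o, j]
    return X
```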
Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> An available pressurized air source, such as an inflated tire, connectable by appropriate conduit means, with flow control and pressure regulation provisions, through an air transmitter or face mask, to the breathing passages of a passenger in a submerged land vehicle to either provide emergency breathing air for the passenger, or to fill an inflatable and portable air pack which the passenger may leave the vehicle with, or both. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> Algorithms to construct/recover low-rank matrices satisfying a set of linear equality constraints have important applications in many signal processing contexts. Recently, theoretical guarantees for minimum-rank matrix recovery have been proven for nuclear norm minimization (NNM), which can be solved using standard convex optimization approaches. While nuclear norm minimization is effective, it can be computationally demanding. In this work, we explore the use of the PowerFactorization (PF) algorithm as a tool for rank-constrained matrix recovery. Empirical results indicate that incremented-rank PF is significantly more successful than NNM at recovering low-rank matrices, in addition to being faster. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> This paper describes gradient methods based on a scaled metric on the Grassmann manifold for low-rank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on ill-conditioned matrices, while maintaining established global convegence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled gradient descent procedure is also established. 
The proposed conjugate gradient method based on the scaled gradient outperforms several existing algorithms for matrix completion and is competitive with recently proposed methods. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a new algorithm for matrix completion that minimizes the least-square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach... <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> In this paper we develop a new framework that captures the common landscape underlying the common non-convex low-rank matrix problems including matrix sensing, matrix completion and robust PCA. In particular, we show for all above problems (including asymmetric cases): 1) all local minima are also globally optimal; 2) no high-order saddle points exists. 
These results explain why simple algorithms such as stochastic gradient descent have global converge, and efficiently optimize these non-convex objective functions in practice. Our framework connects and simplifies the existing analyses on optimization landscapes for matrix sensing and symmetric matrix completion. The framework naturally leads to new results for asymmetric matrix completion and robust PCA. <s> BIB007
In many applications such as localization in IoT networks, recommendation systems, and image restoration, we encounter the situation where the rank of a desired matrix is known in advance. As mentioned, the rank of a Euclidean distance matrix in a localization problem is at most $k + 2$ ($k$ is the dimension of the Euclidean space). In this situation, the LRMC problem can be formulated as a Frobenius norm minimization (FNM) problem:

$$\min_{\mathbf{X}} \ \frac{1}{2} \|P_{\Omega}(\mathbf{X}) - P_{\Omega}(\mathbf{M})\|_F^2 \quad \text{subject to} \quad \text{rank}(\mathbf{X}) \leq r. \tag{25}$$

Due to the inequality form of the rank constraint, an approach using approximate rank information (e.g., an upper bound of the rank) has been proposed BIB003 . The FNM problem has two main advantages: 1) the problem is well-posed in the noisy scenario and 2) the cost function is differentiable so that various gradient-based optimization techniques (e.g., gradient descent, conjugate gradient, Newton methods, and manifold optimization) can be used to solve the problem. Over the years, various techniques to solve the FNM problem in (25) have been proposed BIB003 - BIB004 , BIB005 . The performance guarantee of the FNM-based techniques has also been provided - . It has been shown that under suitable conditions on the sampling ratio $p = |\Omega|/(n_1 n_2)$ and the largest coherence $\mu_0$ of $\mathbf{M}$ (see the definition in Subsection III-A.2), gradient-based algorithms converge globally to $\mathbf{M}$ with high probability BIB007 . Well-known FNM-based LRMC techniques include greedy techniques BIB003 , alternating projection techniques BIB002 , and optimization over the Riemannian manifold BIB006 . In this subsection, we explain these techniques in detail.
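Before detailing these families, the sketch below gives perhaps the simplest concrete instance of the FNM approach: alternating least squares over the factorization $\mathbf{X} = \mathbf{U}\mathbf{V}^T$ with $\mathbf{U} \in \mathbb{R}^{n_1 \times r}$ and $\mathbf{V} \in \mathbb{R}^{n_2 \times r}$. This baseline and the small ridge term $\lambda$ (added for numerical stability) are our illustrations, not a specific algorithm from the survey:

```python
import numpy as np

def als_mc(M_obs, Omega, r, n_iter=100, lam=1e-6):
    """Alternating least squares sketch for the rank-r FNM problem."""
    n1, n2 = M_obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n1, r))
    V = rng.standard_normal((n2, r))
    for _ in range(n_iter):
        for i in range(n1):              # each row of U: small r x r solve
            o = Omega[i] > 0
            A = V[o].T @ V[o] + lam * np.eye(r)
            U[i] = np.linalg.solve(A, V[o].T @ M_obs[i, o])
        for j in range(n2):              # each row of V: symmetric update
            o = Omega[:, j] > 0
            A = U[o].T @ U[o] + lam * np.eye(r)
            V[j] = np.linalg.solve(A, U[o].T @ M_obs[o, j])
    return U @ V.T
```

Since each update reduces the Frobenius-norm cost over the observed entries and involves only $r \times r$ linear systems, fixing the rank turns the non-convex FNM formulation into a sequence of cheap convex subproblems.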
1) GREEDY TECHNIQUES
In recent years, greedy algorithms have been popularly used for LRMC due to their computational simplicity. In a nutshell, they solve the LRMC problem by making a heuristic decision at each iteration with the hope of finding the right solution in the end. Let r be the rank of the desired low-rank matrix M ∈ R^{n×n} and let M = UΣV^T be the singular value decomposition of M, where U, V ∈ R^{n×r}. Note that M can then be expressed as a linear combination of r rank-one matrices, M = Σ_{i=1}^r σ_i u_i v_i^T. The main task of greedy techniques is to investigate the atom set

A_M = {u_1 v_1^T, · · · , u_r v_r^T}

of M. Once the atom set A_M is found, the singular values σ_i(M) = σ_i can be computed easily by solving the least squares problem

min_{σ_1,··· ,σ_r} ||P_Ω(M) − Σ_{i=1}^r σ_i P_Ω(u_i v_i^T)||_F.   (27)

One popular greedy technique is atomic decomposition for minimum rank approximation (ADMiRA) BIB004, which can be viewed as an extension of the compressive sampling matching pursuit (CoSaMP) algorithm in CS BIB003 - BIB006. ADMiRA employs a strategy of addition as well as pruning to identify the atom set A_M. In the addition stage, ADMiRA identifies the 2r rank-one matrices best representing the residual and then adds these matrices to the pre-chosen atom set. Specifically, if X_{i−1} is the output matrix generated in the (i − 1)-th iteration and A_{i−1} is its atom set, then ADMiRA computes the residual R_i = P_Ω(M) − P_Ω(X_{i−1}) and adds the 2r leading principal components of R_i to A_{i−1}. In other words, the enlarged atom set Ψ_i is given by

Ψ_i = A_{i−1} ∪ {u_{R_i,j} v_{R_i,j}^T : 1 ≤ j ≤ 2r},

where u_{R_i,j} and v_{R_i,j} are the j-th principal left and right singular vectors of R_i, respectively. Note that Ψ_i contains at most 3r elements. In the pruning stage, ADMiRA refines Ψ_i into a set of r atoms. To be specific, if X_i is the best rank-3r approximation of M over the atoms in Ψ_i, i.e.,

X_i = arg min_{X ∈ span(Ψ_i)} ||P_Ω(M) − P_Ω(X)||_F,   (29)

then the refined atom set A_i is expressed as

A_i = {u_{X_i,j} v_{X_i,j}^T : 1 ≤ j ≤ r},

where u_{X_i,j} and v_{X_i,j} are the j-th principal left and right singular vectors of X_i, respectively. The computational complexity of ADMiRA is mainly due to two operations: the least squares operation in (29), whose solution can be computed in a similar way as in (27), and the SVD-based operation to find the leading atoms of the required matrix (e.g., R_k and X_{k+1}). First, since (29) involves the pseudo-inverse of a matrix of size |Ω| × O(r), its cost scales with |Ω|. Second, the computational cost of performing a truncated SVD of O(r) atoms is O(rn_1 n_2). Since |Ω| < n_1 n_2, the computational complexity of ADMiRA per iteration is O(rn_1 n_2). Also, the number of ADMiRA iterations to achieve an ε-approximation is O(log(1/ε)) BIB004. In Table 3, we summarize the ADMiRA algorithm. Yet another well-known greedy method is the rank-one matrix pursuit algorithm BIB007, an extension of the orthogonal matching pursuit algorithm in CS BIB002. In this approach, instead of choosing multiple atoms of a matrix, a single atom corresponding to the largest singular value of the residual matrix R_k is chosen at each iteration.
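As an illustration of the greedy principle, the sketch below implements a basic rank-one matrix pursuit loop in the spirit of BIB007: at each iteration it takes the top singular pair of the residual on Ω as a new atom and then refits the weights of all atoms by least squares on the observed entries. This is a simplified reference version (a full SVD is used for clarity where a truncated SVD would be used in practice), and the function name is ours:

```python
import numpy as np

def rank_one_pursuit(M_obs, mask, n_iter):
    """Sketch of rank-one matrix pursuit: grow the atom set one
    rank-one matrix at a time, refitting all weights each iteration."""
    n1, n2 = M_obs.shape
    atoms = []                                   # rank-one bases u v^T
    X = np.zeros((n1, n2))
    obs = mask.ravel()
    for _ in range(n_iter):
        R = np.where(mask, M_obs - X, 0.0)       # residual on Omega
        U, s, Vt = np.linalg.svd(R)              # top singular pair of R
        atoms.append(np.outer(U[:, 0], Vt[0]))
        # Least squares refit of all atom weights on observed entries.
        A = np.stack([a.ravel()[obs] for a in atoms], axis=1)
        theta, *_ = np.linalg.lstsq(A, M_obs.ravel()[obs], rcond=None)
        X = sum(t * a for t, a in zip(theta, atoms))
    return X
```

Running roughly r (or slightly more) iterations on a rank-r observed matrix typically drives the residual on Ω to near zero.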
2) ALTERNATING MINIMIZATION TECHNIQUES
Many LRMC algorithms BIB003, BIB005 require the computation of a (partial) SVD to obtain the singular values and vectors (at a cost of O(rn^2)). As an effort to further reduce the computational burden of the SVD, alternating minimization techniques have been proposed BIB004. The basic premise behind this approach is that a low-rank matrix M ∈ R^{n_1×n_2} of rank r can be factorized into tall and fat matrices, i.e., M = XY where X ∈ R^{n_1×r} and Y ∈ R^{r×n_2} (r ≪ n_1, n_2). The key idea is to find X and Y minimizing the residual (the difference between the original matrix and its estimate) on the sampling space. In other words, X and Y are recovered by solving

min_{X,Y} ||P_Ω(M) − P_Ω(XY)||_F^2.   (31)

Power factorization, a simple alternating minimization algorithm, finds the solution to (31) by updating X and Y alternately as BIB004

X_{i+1} = arg min_X ||P_Ω(M) − P_Ω(XY_i)||_F^2,
Y_{i+1} = arg min_Y ||P_Ω(M) − P_Ω(X_{i+1}Y)||_F^2.

Alternating steepest descent (ASD) is another alternating method to find the solution BIB007. The key idea of ASD is to update X and Y by applying the steepest gradient descent method to the objective function f(X, Y) = (1/2)||P_Ω(M) − P_Ω(XY)||_F^2. Specifically, ASD first computes the gradient of f(X, Y) with respect to X and then updates X along the steepest descent direction:

X_{i+1} = X_i − t_{x_i} ∇f_{Y_i}(X_i),

where the gradient descent direction ∇f_{Y_i}(X_i) and stepsize t_{x_i} are given by

∇f_{Y_i}(X_i) = −(P_Ω(M) − P_Ω(X_i Y_i)) Y_i^T,
t_{x_i} = ||∇f_{Y_i}(X_i)||_F^2 / ||P_Ω(∇f_{Y_i}(X_i) Y_i)||_F^2.

After updating X, ASD updates Y in a similar way:

Y_{i+1} = Y_i − t_{y_i} ∇f_{X_{i+1}}(Y_i),

where

∇f_{X_{i+1}}(Y_i) = −X_{i+1}^T (P_Ω(M) − P_Ω(X_{i+1} Y_i)),
t_{y_i} = ||∇f_{X_{i+1}}(Y_i)||_F^2 / ||P_Ω(X_{i+1} ∇f_{X_{i+1}}(Y_i))||_F^2.

The low-rank matrix fitting (LMaFit) algorithm finds the solution in a different way by solving

arg min_{X,Y,Z} ||XY − Z||_F^2   subject to   P_Ω(Z) = P_Ω(M).

With arbitrary inputs X_0 ∈ R^{n_1×r} and Y_0 ∈ R^{r×n_2} and Z_0 = P_Ω(M), the variables X, Y, and Z are updated in the i-th iteration as

X_{i+1} = Z_i Y_i^†,
Y_{i+1} = X_{i+1}^† Z_i,
Z_{i+1} = X_{i+1} Y_{i+1} + P_Ω(M − X_{i+1} Y_{i+1}),

where X^† is the Moore–Penrose pseudo-inverse of X. The running time of the alternating minimization algorithms is very short for two reasons: 1) the SVD computation is unnecessary, and 2) the matrices to be inverted are smaller than those appearing in greedy algorithms. While the pseudo-inversion of a huge matrix (of size |Ω| × O(r)) is required in greedy algorithms (see (29)), alternating minimization only requires the pseudo-inversion of X and Y (of size n_1 × r and r × n_2, respectively). Indeed, the computational complexity of this approach is O(r|Ω| + r^2 n_1 + r^2 n_2), which is much smaller than that of SVT and ADMiRA when r ≪ min(n_1, n_2). Also, the number of iterations of ASD and LMaFit to achieve an ε-approximation is O(log(1/ε)) BIB007. It has been shown that alternating minimization techniques are simple to implement and require only a small amount of memory BIB006. A major drawback of these approaches is that they may converge to a local optimum.
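The ASD update translates almost line-by-line into code. The sketch below is a minimal NumPy rendering of the update rules above; the small constant added to the stepsize denominators (to avoid division by zero early on) is our safeguard, not part of the original algorithm:

```python
import numpy as np

def asd(M_obs, mask, r, n_iter, rng):
    """Sketch of alternating steepest descent (ASD): exact-line-search
    gradient steps on f(X, Y) = (1/2)||P_Omega(M - XY)||_F^2,
    alternating between the factors X and Y."""
    n1, n2 = M_obs.shape
    X = rng.standard_normal((n1, r))
    Y = rng.standard_normal((r, n2))
    for _ in range(n_iter):
        R = np.where(mask, M_obs - X @ Y, 0.0)       # residual on Omega
        Gx = -R @ Y.T                                # gradient w.r.t. X
        tx = np.sum(Gx**2) / (np.sum(np.where(mask, Gx @ Y, 0.0)**2) + 1e-12)
        X = X - tx * Gx                              # steepest descent in X
        R = np.where(mask, M_obs - X @ Y, 0.0)
        Gy = -X.T @ R                                # gradient w.r.t. Y
        ty = np.sum(Gy**2) / (np.sum(np.where(mask, X @ Gy, 0.0)**2) + 1e-12)
        Y = Y - ty * Gy                              # steepest descent in Y
    return X @ Y
```

Note that each iteration touches only O(r|Ω| + r^2 n_1 + r^2 n_2) arithmetic when the masked products are computed sparsely; the dense NumPy version above trades this for brevity.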
3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD
In many applications where the rank of a matrix is known a priori (i.e., rank(M) = r), one can strengthen the constraint of (25) by defining the feasible set, denoted by F, as

F = {X ∈ R^{n_1×n_2} : rank(X) = r}.

Note that F is not a vector space, and thus conventional optimization techniques cannot be used to solve the problem defined over F. While this is bad news, a remedy is that F is a smooth Riemannian manifold BIB006, BIB002. Roughly speaking, a smooth manifold is a generalization of R^{n_1×n_2} on which a notion of differentiability exists. For a more rigorous definition, see, e.g., BIB004. A smooth manifold equipped with an inner product, often called a Riemannian metric, forms a smooth Riemannian manifold. Since a smooth Riemannian manifold is a differentiable structure equipped with an inner product, one can use all the ingredients needed to solve an optimization problem with a quadratic cost function, such as the Riemannian gradient, Hessian matrix, exponential map, and parallel translation BIB004. Therefore, optimization techniques in R^{n_1×n_2} (e.g., steepest descent, Newton method, conjugate gradient method) can be used to solve the FNM problem (25) over the smooth Riemannian manifold F. In recent years, many efforts have been made to solve the matrix completion problem over smooth Riemannian manifolds. These works are classified by their specific choice of Riemannian manifold structure. One well-known approach is to solve (25) over the Grassmann manifold of orthogonal matrices BIB005. In this approach, the feasible set can be expressed as F = {QR^T : Q^T Q = I, Q ∈ R^{n_1×r}, R ∈ R^{n_2×r}}, and thus solving (25) amounts to finding an n_1 × r orthonormal matrix Q satisfying

min_{Q: Q^T Q = I} min_{R ∈ R^{n_2×r}} ||P_Ω(M) − P_Ω(QR^T)||_F^2.   (39)

In BIB005, an approach to solve (39) over the Grassmann manifold has been proposed. Recently, it has been shown that the original matrix can be reconstructed by unconstrained optimization over the smooth Riemannian manifold F BIB007. Often, F is expressed using the singular value decomposition as

F = {UΣV^T : U ∈ R^{n_1×r}, V ∈ R^{n_2×r}, U^T U = V^T V = I_r, Σ = diag(σ_1, · · · , σ_r), σ_1 ≥ · · · ≥ σ_r > 0}.

The FNM problem (25) can then be reformulated as an unconstrained optimization over F:

min_{X ∈ F} ||P_Ω(X) − P_Ω(M)||_F^2.

One can easily obtain closed-form expressions of the ingredients such as the tangent spaces, Riemannian metric, Riemannian gradient, and Hessian matrix for this unconstrained optimization BIB002, BIB004. In fact, the major benefits of the Riemannian optimization-based LRMC techniques are their simplicity of implementation and fast convergence. Similar to ASD, the per-iteration computational complexity of these techniques is O(r|Ω| + r^2 n_1 + r^2 n_2), and they require O(log(1/ε)) iterations to achieve an ε-approximation solution BIB007.
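Implementing the full Riemannian conjugate gradient machinery of BIB007 is beyond a short sketch, but the flavor of fixed-rank optimization can be conveyed by a simplified iteration: a gradient step on the FNM cost followed by a "retraction" onto the rank-r set via truncated SVD. Strictly speaking, this is an SVP/IHT-style method rather than the manifold algorithm itself, and the constant stepsize (e.g., on the order of 1/p) is an assumption:

```python
import numpy as np

def fixed_rank_gradient(M_obs, mask, r, step, n_iter):
    """Simplified fixed-rank iteration: gradient step on the FNM cost,
    then projection (retraction) onto the rank-r set by truncated SVD.
    This is an SVP/IHT-style sketch, not the Riemannian CG of LRGeomCG."""
    X = np.zeros(M_obs.shape)
    for _ in range(n_iter):
        G = np.where(mask, X - M_obs, 0.0)           # gradient of FNM cost
        Z = X - step * G                             # gradient step
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]              # best rank-r approximation
    return X
```

The true manifold methods avoid the full SVD by working with thin factors and tangent-space representations, which is what yields the O(r|Ω| + r^2 n_1 + r^2 n_2) per-iteration cost quoted above.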
4) TRUNCATED NNM
Truncated NNM is a variation of the NNM-based technique that requires the rank information r. (Although truncated NNM is a variant of NNM, we put it into the second category since it exploits the rank information of a low-rank matrix.) While the NNM technique takes into account all the singular values of a desired matrix, truncated NNM considers only the n − r smallest singular values BIB006. Specifically, truncated NNM finds a solution to

min_X Σ_{i=r+1}^{n} σ_i(X)   subject to   P_Ω(X) = P_Ω(M).   (42)

Since we have

Σ_{i=r+1}^{n} σ_i(X) = ||X||_* − max_{AA^T = I_r, BB^T = I_r} tr(AXB^T),

where A, B ∈ R^{r×n}, the problem (42) can be reformulated as

min_X ||X||_* − max_{AA^T = I_r, BB^T = I_r} tr(AXB^T)   subject to   P_Ω(X) = P_Ω(M).   (45)

This problem can be solved in an iterative way. Specifically, starting from X_0 = P_Ω(M), truncated NNM updates X_i by solving BIB006

min_X ||X||_* − tr(U_{i−1}^T X V_{i−1})   subject to   P_Ω(X) = P_Ω(M),   (46)

where U_{i−1}, V_{i−1} ∈ R^{n×r} are the matrices of the r leading left and right singular vectors of X_{i−1}, respectively. We note that the approach in (46) has two main advantages: 1) the rank information of the desired matrix can be incorporated, and 2) various gradient-based techniques, including the alternating direction method of multipliers (ADMM) BIB004, BIB005, ADMM with an adaptive penalty (ADMMAP) BIB001, and the accelerated proximal gradient line search method (APGL) BIB003, can be employed. Note also that the dominant operation is the truncated SVD, whose complexity is O(rn_1 n_2); this is much smaller than that of the NNM technique (see Table 5) as long as r ≪ min(n_1, n_2). Similar to SVT, the iteration complexity of truncated NNM to achieve an ε-approximation is O(1/√ε) BIB006. Alternatively, an algorithm based on the difference of two convex functions (DC) can be used to solve (45) BIB007. In Table 4, we summarize the truncated NNM algorithm.
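As a rough illustration, the sketch below runs a penalized variant of the iteration (46): the outer loop fixes U_{i−1}, V_{i−1} from the current iterate, and the inner loop applies proximal gradient steps (soft-thresholding of singular values) to (1/2)||P_Ω(X − M)||_F^2 + λ(||X||_* − tr(U^T X V)). Replacing the equality constraint of (46) with this penalty, and the parameter choices, are our simplification in the spirit of TNNR-APGL:

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def truncated_nnm(M_obs, mask, r, lam=1.0, step=1.0, outer=10, inner=50):
    """Sketch of a penalized truncated-NNM scheme: outer loop updates
    U, V (r leading singular vectors of the iterate, as in (46)),
    inner loop runs proximal gradient on the penalized objective."""
    X = np.where(mask, M_obs, 0.0)                   # X_0 = P_Omega(M)
    for _ in range(outer):
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        UVt = U[:, :r] @ Vt[:r]                      # gradient of -tr(U^T X V)
        for _ in range(inner):
            G = np.where(mask, X - M_obs, 0.0) - lam * UVt
            X = svt(X - step * G, step * lam)        # prox step on ||X||_*
    return X
```

Each inner step costs one SVD, consistent with the O(rn_1 n_2) per-iteration figure when a truncated SVD is used instead of the full one shown here.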
1) SPARSITY OF OBSERVED ENTRIES
Sparsity expresses the idea that when a matrix has a low-rank property, it can be recovered using only a small number of observed entries. A natural question arising from this is: how many entries do we need to observe for the accurate recovery of the matrix? In order to answer this question, we need the notion of a degree of freedom (DOF). The DOF of a matrix is the number of freely chosen variables in the matrix. One can easily see that the DOF of the rank-one matrix in (1) is 3, since one entry can be determined after observing the other three. As another example, consider a 4 × 4 rank-one matrix M whose second column is three times its first column. One can easily see that if we observe all entries of one column and one row, then the rest can be determined by a simple linear relationship between these, since M is a rank-one matrix. Specifically, if we observe the first row and the first column, then the first and second columns differ by the factor of three, so as long as we know one entry in the second column, the rest will be recovered. Thus, the DOF of M is 4 + 4 − 1 = 7. The following lemma generalizes these observations.

Lemma 1: The DOF of a square n × n matrix with rank r is 2nr − r^2. Also, the DOF of an n_1 × n_2 matrix is (n_1 + n_2)r − r^2.

Proof: Since the rank of the matrix is r, we can freely choose values for all entries of the first r columns, resulting in nr degrees of freedom for these columns. Once r independent columns, say m_1, · · · , m_r, are constructed, each of the remaining n − r columns is expressed as a linear combination of the first r columns (e.g., m_{r+1} = α_1 m_1 + · · · + α_r m_r), so r linear coefficients (α_1, · · · , α_r) can be freely chosen in each of these columns. By adding nr and (n − r)r, we obtain the desired result. Generalization to an n_1 × n_2 matrix is straightforward.

This lemma says that if n is large and r is small enough (e.g., r = O(1)), the essential information in a matrix is just on the order of n, DOF = O(n), which is clearly much smaller than the total number of entries of the matrix. Interestingly, the DOF is the minimum number of observed entries required for the recovery of a matrix. If this condition is violated, that is, if the number of observed entries is less than the DOF (i.e., m < 2nr − r^2), no algorithm whatsoever can recover the matrix. In Fig. 5, we illustrate how to recover the matrix when the number of observed entries equals the DOF. In this figure, we assume that the blue colored entries are observed (since we observe the first r rows and columns, we have 2nr − r^2 observations in total). In a nutshell, the unknown entries of the matrix are found in a two-step process. First, we identify the linear relationship between the first r columns and the rest. For example, the (r + 1)-th column can be expressed as a linear combination of the first r columns. That is,

m_{r+1} = α_1 m_1 + · · · + α_r m_r.   (48)

Since the first r entries of m_1, · · · , m_{r+1} are observed (see Fig. 5(a)), we have r unknowns (α_1, · · · , α_r) and r equations, so we can identify the linear coefficients α_1, · · · , α_r with the computational cost O(r^3) of an r × r matrix inversion. Once these coefficients are identified, we can recover the unknown entries of m_{r+1} using the linear relationship in (48) (see Fig. 5(b)). By repeating this step for the rest of the columns, we can identify all unknown entries with O(rn^2) computational complexity. (For each unknown entry, we need r multiplications and r − 1 additions; since the number of unknown entries is (n − r)^2, this costs (2r − 1)(n − r)^2 operations. Recalling that O(r^3) is the cost of computing (α_1, · · · , α_r) in (48), the total cost is O(r^3 + (2r − 1)(n − r)^2) = O(rn^2).)

Now, an astute reader might notice that this strategy will not work if even one entry of a column (or row) is unobserved. As illustrated in Fig. 6, if only one entry in the r-th row, say the (r, l)-th entry, is unobserved, then one cannot recover the l-th column, simply because the matrix in Fig. 6 cannot be converted to the matrix form in Fig. 5(b). It is clear from this discussion that a measurement size equal to the DOF is not enough in most cases; in fact, it is just a necessary condition for the accurate recovery of a rank-r matrix. This may seem like depressing news. However, the DOF is in any case important since it is a fundamental limit (lower bound) on the number of observed entries ensuring exact recovery of the matrix. Recent results show that the DOF is not much different from the number of measurements ensuring the recovery of the matrix BIB001, BIB002.
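The two-step recovery procedure described above (and the O(r^3) cost of identifying the coefficients) can be verified in a few lines of NumPy. The snippet assumes, as in Fig. 5, that the first r rows and columns are observed, and that the top-left r × r block is invertible, which holds almost surely for a random rank-r matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r matrix

# Observe the first r rows and first r columns (2nr - r^2 = DOF entries).
X = np.full((n, n), np.nan)
X[:r, :] = M[:r, :]
X[:, :r] = M[:, :r]

# Step 1: for each unseen column l, find alpha in m_l = sum_j alpha_j m_j
# from its r observed entries (an r x r linear system, cost O(r^3)).
# Step 2: rebuild the column from the fully observed first r columns.
for l in range(r, n):
    alpha = np.linalg.solve(X[:r, :r], X[:r, l])
    X[:, l] = X[:, :r] @ alpha                 # fills the n - r unknowns

print(np.allclose(X, M))   # True (up to numerical precision)
```

Deleting a single observed entry from the r-th row, as in Fig. 6, breaks the corresponding linear system and the recovery fails, matching the discussion above.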
2) COHERENCE
If the nonzero elements of a matrix are concentrated in a certain region, we generally need a large number of observations to recover the matrix. On the other hand, if the entries of the matrix are spread out widely, then the matrix can be recovered from a relatively small number of entries. For example, consider the following two rank-one matrices in $\mathbb{R}^{n \times n}$:

$$M_1 = \begin{bmatrix} 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad M_2 = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}.$$

The matrix $M_1$ has only four nonzero entries at the top-left corner. Suppose $n$ is large, say $n = 1000$, and all entries but the four elements in the top-left corner are observed (99.99% of the entries are known). In this case, even though the rank of the matrix is just one, there is no way to recover this matrix since the information-bearing entries are missing. This tells us that even when the rank of a matrix is very small, one might not be able to recover it if the nonzero entries of the matrix are concentrated in a certain area. In contrast to the matrix $M_1$, one can accurately recover the matrix $M_2$ with only $2n - 1$ (= DOF of an $n \times n$ rank-one matrix) known entries; in other words, one row and one column are enough to recover $M_2$. In BIB002, it has been shown that the number of entries required to recover a matrix using nuclear-norm minimization is on the order of $n^{1.2}$ when the rank is $O(1)$. One can deduce from this example that the spread of the information-bearing entries is crucial for the identification of the unknown entries. In order to quantify this, we need a measure of the concentration of a matrix. Since a matrix has a two-dimensional structure, we need to check the concentration in both the row and column directions, which can be done by examining the left and right singular vectors. Recall that the SVD of a rank-$r$ matrix $M$ is

$$M = U \Sigma V^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T,$$

where $U = [u_1 \cdots u_r]$ and $V = [v_1 \cdots v_r]$ are the matrices constructed by the left and right singular vectors, respectively, and $\Sigma$ is the diagonal matrix whose diagonal entries are $\sigma_i$. From this decomposition, we see that the concentration in the vertical direction (concentration in a row) is determined by the $u_i$ and that in the horizontal direction (concentration in a column) is determined by the $v_i$. For example, if one of the standard basis vectors $e_i$, say $e_1 = [1\ 0\ \cdots\ 0]^T$, lies in the space spanned by $u_1, \cdots, u_r$ while the others ($e_2, e_3, \cdots$) are orthogonal to this space, then the nonzero entries of the matrix appear only in the first row. In this case, clearly one cannot infer the entries of the first row from samples of the other rows. That is, it is not possible to recover the matrix without observing the entire first row. The coherence, a measure of concentration in a matrix, is formally defined as BIB001

$$\mu(U) = \frac{n}{r} \max_{1 \le i \le n} \| P_U e_i \|_2^2,$$

where $e_i$ is the $i$-th standard basis vector and $P_U$ is the orthogonal projection onto the range space of $U$. Since the columns of $U = [u_1 \cdots u_r]$ are orthonormal (so that $P_U = U U^T$), we have

$$\sum_{i=1}^{n} \| P_U e_i \|_2^2 = \sum_{i=1}^{n} e_i^T P_U e_i = \sum_{j=1}^{r} \sum_{i=1}^{n} |u_{ij}|^2 = r,$$

where the first equality is due to the idempotency of $P_U$ (i.e., $P_U^T P_U = P_U$) and the last equality is because $\sum_{i=1}^{n} |u_{ij}|^2 = 1$ for each column of $U$. It follows that $\max_i \| P_U e_i \|_2^2 \ge \frac{r}{n}$, and since $\| P_U e_i \|_2^2 \le 1$, the coherence satisfies

$$1 \le \mu(U) \le \frac{n}{r}.$$

Coherence is maximized when the nonzero entries of a matrix are concentrated in a single row (or column). For example, consider a matrix whose nonzero entries lie only in the first row, e.g.,

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Then $U = [1\ 0\ 0]^T$, and thus $\| P_U e_1 \|_2^2 = 1$ and $\| P_U e_2 \|_2^2 = \| P_U e_3 \|_2^2 = 0$. As shown in Fig. 7(a), the standard basis vector $e_1$ lies in the space spanned by $U$ while the others are orthogonal to this space, so that the maximum coherence is achieved ($\max_i \| P_U e_i \|_2^2 = 1$ and $\mu(U) = 3$). Now consider instead a rank-one matrix whose entries are spread out evenly, e.g., the all-ones matrix, for which $U = \frac{1}{\sqrt{3}}[1\ 1\ 1]^T$. In this case, as illustrated in Fig.
7(b), $\| P_U e_i \|_2^2$ is the same for all standard basis vectors $e_i$, achieving the lower bound $\mu(U) \ge 1$ and thus the minimum coherence ($\max_i \| P_U e_i \|_2^2 = \frac{1}{3}$ and $\mu(U) = 1$). Indeed, the number of measurements required to recover a low-rank matrix is proportional to the coherence of the matrix BIB002, BIB004, BIB001.
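To make the definition concrete, the following NumPy sketch (our own illustration, not taken from any of the cited works; the helper name `coherence` is hypothetical) computes $\mu(U)$ from the SVD and reproduces the two extreme cases above. Since $P_U = U U^T$, the quantity $\| P_U e_i \|_2^2$ is simply the squared norm of the $i$-th row of $U$, which is what the sketch exploits.

```python
import numpy as np

def coherence(A, rank=None):
    """Coherence mu(U) = (n/r) * max_i ||P_U e_i||_2^2 of the column space of A."""
    U, s, Vt = np.linalg.svd(A)
    r = rank if rank is not None else int(np.sum(s > 1e-10 * s[0]))
    Ur = U[:, :r]                       # orthonormal basis of the column space
    leverage = np.sum(Ur**2, axis=1)    # ||P_U e_i||^2 = squared norm of i-th row of Ur
    return A.shape[0] / r * leverage.max()

A_max = np.array([[1., 2., 3.],
                  [0., 0., 0.],
                  [0., 0., 0.]])        # energy concentrated in a single row
A_min = np.ones((3, 3))                 # energy spread out evenly

print(coherence(A_max))                 # 3.0 (= n/r, maximum coherence)
print(coherence(A_min))                 # 1.0 (minimum coherence)
```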
Low-Rank Matrix Completion: A Contemporary Survey <s> B. WORKING WITH DIFFERENT TYPES OF LOW-RANK MATRICES <s> This paper deals with the Riemannian geometry of the set of symmetric positive semidefinite matrices of fixed rank. This set is studied as an embedded submanifold of the real matrices equipped with the usual Euclidean metric. With this structure, we derive expressions of the tangent space and geodesics of the manifold, suitable for efficient numerical computations. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. WORKING WITH DIFFERENT TYPES OF LOW-RANK MATRICES <s> Matrix completion models are among the most common formulations of recommender systems. Recent works have showed a boost of performance of these techniques when introducing the pairwise relationships between users/items in the form of graphs, and imposing smoothness priors on these graphs. However, such techniques do not fully exploit the local stationarity structures of user/item graphs, and the number of parameters to learn is linear w.r.t. the number of users and items. We propose a novel approach to overcome these limitations by using geometric deep learning on graphs. Our matrix completion architecture combines graph convolutional neural networks and recurrent neural networks to learn meaningful statistical graph-structured patterns and the non-linear diffusion process that generates the known ratings. This neural network system requires a constant number of parameters independent of the matrix size. We apply our method on both synthetic and real datasets, showing that it outperforms state-of-the-art techniques. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. WORKING WITH DIFFERENT TYPES OF LOW-RANK MATRICES <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB003
In many practical situations, the matrix to be recovered has an additional structure, and we would like to exploit this structure to improve the recovery performance and reduce the computational complexity. In this subsection, we discuss several such cases, including LRMC of the positive semidefinite (PSD) matrix BIB001, the Euclidean distance matrix BIB003, and the recommendation matrix BIB002, and describe how the special structure can be exploited in the algorithm design.
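As a simple illustration of the general idea (a minimal sketch under our own assumptions, not the algorithm of any particular cited work), the PSD and rank constraints of a rank-$k$ PSD matrix can be enforced automatically by the factorization $Y = Z Z^T$ with $Z \in \mathbb{R}^{n \times k}$, which also reduces the number of unknowns from $n^2$ to $nk$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
Z_true = rng.standard_normal((n, k))
M = Z_true @ Z_true.T                      # rank-k PSD matrix to be completed
mask = rng.random((n, n)) < 0.5
mask = np.triu(mask) | np.triu(mask).T     # symmetric sampling pattern Omega

# The factorization Y = Z Z^T enforces Y >= 0 and rank(Y) <= k by construction,
# so plain gradient descent on f(Z) = ||P_Omega(Z Z^T - M)||_F^2 suffices.
# Step size and iteration count are arbitrary illustrative choices.
Z = rng.standard_normal((n, k))
for _ in range(2000):
    R = mask * (Z @ Z.T - M)               # residual on the observed entries
    Z -= 1e-3 * 2 * (R + R.T) @ Z          # gradient of f with respect to Z

err = np.linalg.norm(mask * (Z @ Z.T - M)) / np.linalg.norm(mask * M)
print(f"relative error on observed entries: {err:.2e}")
```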
Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Contents: Matrix Eigenvalue Methods.- Double Bracket Isospectral Flows.- Singular Value Decomposition.- Linear Programming.- Approximation and Control.- Balanced Matrix Factorizations.- Invariant Theory and System Balancing.- Balancing via Gradient Flows.- Sensitivity Optimization.- Linear Algebra.- Dynamical Systems.- Global Analysis. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Optimization is the science of making a best choice in the face of conflicting requirements. Any convex optimization problem has geometric interpretation. If a given optimization problem can be transformed to a convex equivalent, then this interpretive benefit is acquired. That is a powerful attraction: the ability to visualize geometry of an optimization problem. Conversely, recent advances in geometry hold convex optimization within their proofs' core. This book is about convex optimization, convex geometry (with particular attention to distance geometry), geometrical problems, and problems that can be transformed into geometrical problems. Euclidean distance geometry is, fundamentally, a determination of point conformation from interpoint distance information; e.g., given only distance information, determine whether there corresponds a realizable configuration of points; a list of points in some dimension that attains the given interpoint distances. large black & white paperback <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> This paper deals with the Riemannian geometry of the set of symmetric positive semidefinite matrices of fixed rank. This set is studied as an embedded submanifold of the real matrices equipped with the usual Euclidean metric. 
With this structure, we derive expressions of the tangent space and geodesics of the manifold, suitable for efficient numerical computations. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Location awareness, providing ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of key ingredients for internet of things (IoT). In order to make a proper reaction to the collected information from devices, location information of things should be available at the data center. One challenge for the massive IoT networks is to identify the location map of whole sensor nodes from partially observed distance information. This is especially important for massive sensor networks, relay-based and hierarchical networks, and vehicular to everything (V2X) networks. The primary goal of this paper is to propose an algorithm to reconstruct the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in Riemannian manifold in which a notion of differentiability can be defined, we are able to solve the low-rank matrix completion problem efficiently using a modified conjugate gradient algorithm. From the analysis and numerical experiments, we show that the proposed method, termed localization in Riemannian manifold using conjugate gradient (LRM-CG), is effective in recovering the Euclidean distance matrix for both noiseless and noisy environments. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB006
Low-rank Euclidean distance matrix completion arises in the localization problem (e.g., sensor node localization in IoT networks). Let $\{z_i\}_{i=1}^{n}$ be the sensor locations in the $k$-dimensional Euclidean space ($k = 2$ or $k = 3$). Then, the Euclidean distance matrix $M = (m_{ij}) \in \mathbb{R}^{n \times n}$ of the sensor nodes is defined as $m_{ij} = \| z_i - z_j \|_2^2$. It is obvious that $M$ is symmetric with diagonal elements being zero (i.e., $m_{ii} = 0$). As mentioned, the rank of the Euclidean distance matrix $M$ is at most $k + 2$ (i.e., $\mathrm{rank}(M) \le k + 2$). Also, one can show that a matrix $D \in \mathbb{R}^{n \times n}$ is a Euclidean distance matrix if and only if $D = D^T$ and BIB002

$$-\frac{1}{2}\Big( I - \frac{1}{n} h h^T \Big) D \Big( I - \frac{1}{n} h h^T \Big) \succeq 0,$$

where $h = [1\ 1\ \cdots\ 1]^T \in \mathbb{R}^n$. Using these properties, the problem to recover the Euclidean distance matrix $M$ from the observed entries can be formulated as

$$\min_{D} \ \mathrm{rank}(D) \quad \text{s.t.} \quad P_{\Omega}(D) = P_{\Omega}(M), \quad D = D^T, \quad -\frac{1}{2}\Big( I - \frac{1}{n} h h^T \Big) D \Big( I - \frac{1}{n} h h^T \Big) \succeq 0. \tag{57}$$

Let $Y = Z Z^T$ where $Z = [z_1 \cdots z_n]^T \in \mathbb{R}^{n \times k}$ is the matrix of sensor locations. Then, since $m_{ij} = \| z_i \|_2^2 + \| z_j \|_2^2 - 2 z_i^T z_j$, one can easily check that

$$M = \mathrm{diag}(Y) h^T + h\, \mathrm{diag}(Y)^T - 2Y, \tag{58}$$

where $\mathrm{diag}(Y) \in \mathbb{R}^n$ denotes the vector of diagonal entries of $Y$. Thus, by letting $g(Y) = \mathrm{diag}(Y) h^T + h\, \mathrm{diag}(Y)^T - 2Y$, the problem in (57) can be equivalently formulated as

$$\min_{Y} \ \| P_{\Omega}(g(Y)) - P_{\Omega}(M) \|_F^2 \quad \text{s.t.} \quad Y \in \{ Y \in \mathbb{R}^{n \times n} : Y = Y^T \succeq 0, \ \mathrm{rank}(Y) = k \}. \tag{59}$$

Since the feasible set associated with the problem in (59) is a smooth Riemannian manifold BIB001, BIB004, an extension of the Euclidean space on which a notion of differentiation can be defined BIB003, various gradient-based optimization techniques such as the steepest descent, Newton, and conjugate gradient methods can be applied to solve (59) BIB005, BIB006, BIB003.
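As a quick numerical sanity check (our own sketch; none of the variable names come from the cited works), the following NumPy snippet builds a Euclidean distance matrix from random sensor locations and verifies the rank bound, the identity (58), and the positive semidefiniteness condition:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 2
Z = rng.standard_normal((n, k))                    # sensor locations z_1, ..., z_n

# Euclidean distance matrix: m_ij = ||z_i - z_j||_2^2
M = np.sum((Z[:, None, :] - Z[None, :, :])**2, axis=2)
print(np.linalg.matrix_rank(M))                    # 4, i.e., at most k + 2

# The map g(Y) = diag(Y) h^T + h diag(Y)^T - 2Y with Y = Z Z^T reproduces M
Y = Z @ Z.T
h = np.ones((n, 1))
d = np.diag(Y).reshape(-1, 1)
g = d @ h.T + h @ d.T - 2 * Y
print(np.allclose(g, M))                           # True

# EDM test: -0.5 * J M J is PSD for a valid EDM, where J = I - hh^T/n
J = np.eye(n) - (h @ h.T) / n
eigvals = np.linalg.eigvalsh(-0.5 * J @ M @ J)
print(eigvals.min() >= -1e-9)                      # True
```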