A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> The microdot is a means of concealing messages (steganography)1 that was developed by Professor Zapp and used by German spies in the Second World War to transmit secret information2. A microdot (“the enemy's masterpiece of espionage”2) was a greatly reduced photograph of a typewritten page that was pasted over a full stop in an innocuous letter2. We have taken the microdot a step further and developed a DNA-based, doubly steganographic technique for sending secret messages. A DNA-encoded message is first camouflaged within the enormous complexity of human genomic DNA and then further concealed by confining this sample to a microdot. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Recent research has considered DNA as a medium for ultra-scale computation and for ultra-compact information storage. One potential key application is DNA-based, molecular cryptography systems. We present some procedures for DNA-based cryptography based on one-time-pads that are in principle unbreakable. Practical applications of cryptographic systems based on one-time-pads are limited in conventional electronic media by the size of the one-time-pad; however DNA provides a much more compact storage medium, and an extremely small amount of DNA suffices even for huge one-time-pads. We detail procedures for two DNA one-time-pad encryption schemes: (i) a substitution method using libraries of distinct pads, each of which defines a specific, randomly generated, pair-wise mapping; and (ii) an XOR scheme utilizing molecular computation and indexed, random key strings. These methods can be applied either for the encryption of natural DNA or for artificial DNA encoding binary data. In the latter case, we also present a novel use of chip-based DNA micro-array technology for 2D data input and output. Finally, we examine a class of DNA steganography systems, which secretly tag the input DNA and then hide it within collections of other DNA. We consider potential limitations of these steganographic techniques, proving that in theory the message hidden with such a method can be recovered by an adversary. We also discuss various modified DNA steganography methods which appear to have improved security. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> The paper presents the principles of bio molecular computation (BMC) and several algorithms for DNA (deoxyribonucleic acid) steganography and cryptography: One-Time-Pad (OTP), DNA XOR OTP and DNA chromosomes indexing. It represents a synthesis of our work in the field, sustained by former referred publications. Experimental results obtained using Matlab Bioinformatics Toolbox and conclusions are ending the work. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Incredible improvements in the field of nano-technologies have enabled nano-scale machines that promise new solutions for several applications in biomedical, industry and military fields. 
Some of these applications require or might exploit the potential advantages of communication and hence cooperative behavior of these nano-scale machines to achieve a common and challenging objective that exceeds the capabilities of a single device. Extensions to known wireless communication mechanisms as well as completely novel approaches have been investigated. Examples include RF radio communication in the terahertz band or molecular communication based on transmitter molecules. Yet, one question has not been considered so far and that is nano-communication security, i.e., how we can protect such systems from manipulation by malicious parties? Our objective in this paper is to provide some first insights into this new field and to highlight some of the open research challenges. We start from a discussion of classical security objectives and their relevance in nano-networking. Looking at the well-understood field of sensor networks, we derive requirements and investigate if and how available solutions can be applied to nano-communication. Our main observation is that, especially for molecular communication, existing security and cryptographic solutions might not be applicable. In this context, we coin the new term biochemical cryptography that might open a completely new research direction and lead to significant improvements in the field of molecular communication. We point out similarities with typical network architectures where they exist but also highlight completely new challenges where existing solutions do not apply. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Nano communication is one of the fastest growing emerging research fields. In recent years, much progress has been achieved in developing nano machines supporting our needs in health care and other scenarios. However, experts agree that only the interaction among nano machines allows to address the very complex requirements in the field. Drug delivery and environmental control are only two of the many interesting application domains, which, at the same time, pose many new challenging problems. Very relevant communication concepts have been investigated such as RF radio communication in the terra hertz band or molecular communication based on transmitter molecules. Yet, one question has not been considered so far and that is nano communication security, i.e., will it be possible to protect such systems from manipulation by malicious parties? Our objective is to provide some first insights into the security challenges and to highlight some of the open research challenges in this field. The main observation is that especially for molecular communication existing security and cryptographic solutions might not be applicable. In this context, we coin the term biochemical cryptography that might lead to significant improvements in the field of molecular communication. We also point to relevant problems that have similarities with typical network architectures but also completely new challenges. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Molecular Communication (MC) is an emerging and ::: promising communication paradigm for several multi-disciplinary ::: domains like bio-medical, industry and military. 
Differently to the traditional communication paradigm, the information is encoded on the molecules, that are then used as carriers of information. Novel approaches related to this new communication paradigm have been proposed, mainly focusing on architectural aspects and categorization of potential applications. So far, security and privacy aspects related to the molecular communication systems have not been investigated at all and represent an open question that need to be addressed. The main motivation of this paper lies on providing some first insights about security and privacy aspects of MC systems, by highlighting the open issues and challenges and above all by outlining some specific directions of potential solutions. Existing cryptographic methods and security approaches are not suitable for MC systems since do not consider the specific issues and challenges, that need ad-hoc solutions. We will discuss directions in terms of potential solutions by trying to highlight the main advantages and potential drawbacks for each direction considered. We will try to answer to the main questions: 1) why this solution can be exploited in the MC field to safeguard the system and its reliability? 2) which are the main issues related to the specific approach? <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> The emergence of molecular communication has provided an avenue for developing biological nanonetworks. Synthetic biology is a platform that enables reprogramming cells, which we refer to as Bio-NanoThings, that can be assembled to create nanonetworks. In this paper, we focus on specific Bio-NanoThings, i.e, bacteria, where engineering their ability to emit or sense molecules can result in functionalities, such as cooperative target localization. Although this opens opportunities, e.g., for novel healthcare applications of the future, this can also lead to new problems, such as a new form of bioterrorism. In this paper, we investigate the disruptions that malicious Bio-NanoThings (M-BNTs) can create for molecular nanonetworks. In particular, we introduce two types of attacks: 1) blackhole and 2) sentry attacks. In blackhole attack M-BNTs emit attractant chemicals to draw-in the legitimate Bio-NanoThings (L-BNTs) from searching for their target, while in the sentry attack, the M-BNTs emit repellents to disperse the L-BNTs from reaching their target. We also present a countermeasure that L-BNTs can take to be resilient to the attacks, where we consider two forms of decision processes that includes Bayes’ rule as well as a simple threshold approach. We run a thorough set of simulations to assess the effectiveness of the proposed attacks as well as the proposed countermeasure. Our results show that the attacks can significantly hinder the regular behavior of Bio-NanoThings, while the countermeasures are effective for protecting against such attacks. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Eavesdroppers are notoriously difficult to detect and locate in traditional wireless communication systems, especially if they are silent.
We show that in molecular communications, where information molecules undergo random walk (RW) propagation, eavesdropper detection and localization are possible if the eavesdropper is an absorbing receiver. This is due to the fact that the RW process has a finite return probability, and the eavesdropper is a detectable energy sink of which its location can be reverse estimated. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Molecular communication in nanonetworks is an emerging communication paradigm that uses molecules as information carriers. Achieving a secure information exchange is one of the practical challenges that need to be considered to address the potential of molecular communications in nanonetworks. In this article, we have introduced secure channel into molecular communications to prevent eavesdropping. First, we propose a Diffie Hellman algorithm-based method by which communicating nanomachines can exchange a secret key through molecular signaling. Then, we use this secret key to perform ciphering. Also, we present both the algorithm for secret key exchange and the secured molecular communication system. The proposed secured system is found effective in terms of energy consumption. <s> BIB009
For MC networks, traditional cryptography-based methods need to be replaced by so-called biochemical crypto techniques, in which both attacks and countermeasures are defined by the chemical reactions between molecules BIB004, BIB005. Various bio-inspired approaches to secure MC systems are proposed in BIB006, and different attacks are classified according to the five layers of an MC system in Table IV. The table shows that, besides the classical attacks, numerous novel attacks are possible. Two kinds of attacks are discussed in BIB007: blackhole attacks, where malicious bio-nano things emit chemoattractants to draw legitimate bio-nano things towards themselves and thereby prevent them from localizing their target, and sentry attacks, where malicious bio-nano things in the vicinity of the target cells emit chemo-repellents so that the legitimate bio-nano things cannot reach their target. References BIB008 and BIB009 consider situations in which an eavesdropper appears and disrupts communication; the corresponding solutions are also discussed and evaluated. Additionally, in vesicle-based molecular transport, vesicles act like keys in MC networks and thus inherently support secure communication. Recently, cryptography researchers have worked extensively on DNA-inspired cryptography BIB001 [245] BIB002 BIB003, the crux of which is that DNA computing is a computationally hard problem of biological origin, just as Heisenberg's uncertainty principle is a hard problem of physical origin; this hardness can therefore be exploited for cryptographic purposes.
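To make the DNA XOR one-time-pad idea of BIB002 and BIB003 concrete, the following is a minimal sketch, not the cited authors' procedure: it assumes a hypothetical two-bit encoding of the nucleotides and XORs a DNA-encoded message base-by-base with a random key strand of equal length; repeating the XOR with the same pad recovers the message.

```python
import secrets

# Hypothetical 2-bit encoding of nucleotides; the cited works do not fix a mapping.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def random_pad(length: int) -> str:
    """Generate a random DNA key strand (the one-time pad)."""
    return "".join(secrets.choice("ACGT") for _ in range(length))

def xor_strands(message: str, pad: str) -> str:
    """XOR two DNA strands base-by-base via their 2-bit values."""
    if len(message) != len(pad):
        raise ValueError("one-time pad must be as long as the message")
    return "".join(
        BITS_TO_BASE[BASE_TO_BITS[m] ^ BASE_TO_BITS[k]]
        for m, k in zip(message, pad)
    )

if __name__ == "__main__":
    plaintext = "ACGTACGTTTGA"               # DNA-encoded message (example only)
    pad = random_pad(len(plaintext))          # random key strand
    ciphertext = xor_strands(plaintext, pad)
    recovered = xor_strands(ciphertext, pad)  # XOR is its own inverse
    assert recovered == plaintext
    print(plaintext, pad, ciphertext, sep="\n")
```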
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET <s> Data dissemination in dynamic environments such as vehicular networks has been a critical challenge. One of the key characteristics of vehicular networks is the high intermittent connectivity. Recent studies have investigated and proven the feasibility of a content-centric networking paradigm for vehicular networks. Content-centric information dissemination has a potential number of applications in vehicular networking, including advertising, traffic and parking notifications and emergency announcements. It is clear and evident that knowledge about the type of content and its relevance can enhance the performance of data dissemination in VANETs. In this paper we address the problem of information dissemination in vehicular network environments and propose a model and solution based on a content-centric approach of networking. We leverage the expansion properties of interacting nodes in a cluster to be interpreted in terms of social connections among nodes and perform a selective random network coding approach. We compare the reliability performance of our method with a conventional random network coding approach and comment on the complexity of the proposed solution. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET <s> Traditionally, the vehicle has been the extension of the man's ambulatory system, docile to the driver's commands. Recent advances in communications, controls and embedded systems have changed this model, paving the way to the Intelligent Vehicle Grid. The car is now a formidable sensor platform, absorbing information from the environment (and from other cars) and feeding it to drivers and infrastructure to assist in safe navigation, pollution control and traffic management. The next step in this evolution is just around the corner: the Internet of Autonomous Vehicles. Pioneered by the Google car, the Internet of Vehicles will be a distributed transport fabric capable to make its own decisions about driving customers to their destinations. Like other important instantiations of the Internet of Things (e.g., the smart building), the Internet of Vehicles will have communications, storage, intelligence, and learning capabilities to anticipate the customers' intentions. The concept that will help transition to the Internet of Vehicles is the Vehicular Cloud, the equivalent of Internet cloud for vehicles, providing all the services required by the autonomous vehicles. In this article, we discuss the evolution from Intelligent Vehicle Grid to Autonomous, Internet-connected Vehicles, and Vehicular Cloud. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET <s> The peculiarities of the vehicular environment, characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, call into the question the capabilities of existing IP-based networking solutions to support the wide set of initially conceived and emerging vehicular applications. The research community is currently exploring groundbreaking approaches to transform the Internet. Among them, the Information-Centric Networking (ICN) paradigm appears as a promising solution to tackle the aforementioned challenges. 
By leveraging innovative concepts, such as named content, name-based routing, and in-network content caching, ICN well suits scenarios in which applications specify what they search for and not where they expect it to be provided and all that is required is a localized communication exchange. In this chapter, solutions are presented that rely on Content-Centric Networking (CCN), the most studied ICN approach for vehicular networks. The potential of ICN as the key enabler of the emerging vehicular cloud computing paradigm is also discussed. <s> BIB003
The research community and academia have explored and successfully deployed VANETs over the last two decades. VANETs are currently gaining further popularity due to the rapid increase in the number of vehicles worldwide. They enter our daily life by equipping vehicles and related objects, such as roadside units (RSUs), with communication resources that enable vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), infrastructure-to-vehicle (I2V), and, more generically, vehicle-to-everything (V2X) communications. Sensors in vehicles allow the transmission of collected information, such as live video streams, between vehicles, V2X application servers, RSUs, and pedestrians' cellphones. Vehicles may extend their view beyond their own network and obtain a more general picture of the local environment. Remote driving allows a V2X application to operate a remote vehicle located in a hazardous zone, or to assist people who are unable to drive. For scenarios where routes are predictable, such as public transportation, cloud-based driving may be used; in such a scenario, a platform whose access is based on cloud services can be considered. VANET-related applications have several advantages: for instance, they reduce accidents and harm to people and vehicles BIB003, and they save time by providing traffic-related data, such as reports of busy and congested roads BIB002. ICN is an appealing candidate solution for vehicular communications due to its numerous benefits. First, it fits well with the nature of typical VANET applications, such as route reports and accident messages, which are likely to benefit from in-network content caching strategies. Second, data caching speeds up content retrieval by caching contents at different nodes BIB001. In vehicles, caching may typically be deployed at fairly low cost, as the energy demands of ICN nodes are likely to be a small fraction of a vehicle's overall energy use, allowing high-level computation, uninterrupted data processing, and ample caching space in vehicles. In addition, ICN naturally supports asynchronous information sharing among end users. Besides these advantages, VANETs also face numerous challenges, such as vehicle mobility and network disruptions; for example, if two vehicles are connected in a grid and later change their routes, maintaining their communication becomes a challenging task. To overcome such issues, a new clean-slate model is required so that user quality of experience can be preserved. ICN is believed to be the most suitable paradigm for smooth data transmission without extra retrieval delay in VANET environments.
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> There have been many recent papers on data-oriented or content-centric network architectures. Despite the voluminous literature, surprisingly little clarity is emerging as most papers focus on what differentiates them from other proposals. We begin this paper by identifying the existing commonalities and important differences in these designs, and then discuss some remaining research issues. After our review, we emerge skeptical (but open-minded) about the value of this approach to networking. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> The current Internet architecture was founded upon a host-centric communication model, which was appropriate for coping with the needs of the early Internet users. Internet usage has evolved however, with most users mainly interested in accessing (vast amounts of) information, irrespective of its physical location. This paradigm shift in the usage model of the Internet, along with the pressing needs for, among others, better security and mobility support, has led researchers into considering a radical change to the Internet architecture. In this direction, we have witnessed many research efforts investigating Information-Centric Networking (ICN) as a foundation upon which the Future Internet can be built. Our main aims in this survey are: (a) to identify the core functionalities of ICN architectures, (b) to describe the key ICN proposals in a tutorial manner, highlighting the similarities and differences among them with respect to those core functionalities, and (c) to identify the key weaknesses of ICN proposals and to outline the main unresolved research challenges in this area of networking research. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> Vehicular Ad-hoc Networks (VANETs) are seen as the key enabling technology of Intelligent Transportation Systems (ITS). In addition to safety, VANETs also provide a cost-effective platform for numerous comfort and entertainment applications. A pragmatic solution of VANETs requires synergistic efforts in multidisciplinary areas of communication standards, routings, security and trust. Furthermore, a realistic VANET simulator is required for performance evaluation. There have been many research efforts in these areas, and consequently, a number of surveys have been published on various aspects. In this article, we first explain the key characteristics of VANETs, then provide a meta-survey of research works. We take a tutorial approach to introducing VANETs and gradually discuss intricate details. Extensive listings of existing surveys and research projects have been provided to assess development efforts. The article is useful for researchers to look at the big picture and channel their efforts in an effective way. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> Information-centric networking (ICN) is a new communication paradigm that focuses on content retrieval from a network regardless of the storage location or physical representation of this content. In ICN, securing the content itself is much more important than securing the infrastructure or the endpoints. 
To achieve the security goals in this new paradigm, it is crucial to have a comprehensive understanding of ICN attacks, their classification, and proposed solutions. In this paper, we provide a survey of attacks unique to ICN architectures and other generic attacks that have an impact on ICN. It also provides a taxonomy of these attacks in ICN, which are classified into four main categories, i.e., naming, routing, caching, and other miscellaneous related attacks. Furthermore, this paper shows the relation between ICN attacks and unique ICN attributes, and that between ICN attacks and security requirements, i.e., confidentiality, integrity, availability, and privacy. Finally, this paper presents the severity levels of ICN attacks and discusses the existing ICN security solutions. <s> BIB004 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> In the connected vehicle ecosystem, a high volume of information-rich and safety-critical data will be exchanged by roadside units and onboard transceivers to improve the driving and traveling experience. However, poor-quality wireless links and the mobility of vehicles highly challenge data delivery. The IP address-centric model of the current Internet barely works in such extremely dynamic environments and poorly matches the localized nature of the majority of vehicular communications, which typically target specific road areas (e.g., in the proximity of a hazard or a point of interest) regardless of the identity/address of a single vehicle passing by. Therefore, a paradigm shift is advocated from traditional IP-based networking toward the groundbreaking information- centric networking. In this article, we scrutinize the applicability of this paradigm in vehicular environments by reviewing its core functionalities and the related work. The analysis shows that, thanks to features like named content retrieval, innate multicast support, and in-network data caching, information-centric networking is positioned to meet the challenging demands of vehicular networks and their evolution. Interoperability with the standard architectures for vehicular applications along with synergies with emerging computing and networking paradigms are debated as future research perspectives. <s> BIB005 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> Information-Centric Networking (ICN) treats content as a first-class entity — each content has a unique identity and ICN routers forward traffic based on content identity rather than the locations of the content. This provides benefits like dynamic request routing, caching and mobility support. The choice of naming schema (flat vs. hierarchical) is a fundamental design choice in ICN which determines the functional separation between the network layer and the application layer. With hierarchical names, the network layer is cognizant of the semantics of hierarchical names. Name space management is also part of network layer. ICN architectures using flat names leave these to the application layer. The naming schema affects the performance and scalability of the network in terms of forwarding efficiency, routing table size and name space size. This paper provides both qualitative and quantitative comparison on the two naming schemas using these metrics, noting that they are interdependent. 
We seek to understand which naming schema would be better for a high-performance, scalable ICN architecture. <s> BIB006
Existing papers, such as BIB003, BIB005, and [10], address ICN features in the context of VANET, which motivates this article. However, these papers cover VANET issues only in part, or collectively in terms of security, routing, mobility, and scalability. In this paper, we investigate ICN-based VANET challenges with respect to security, routing, mobility, naming, and caching, which are serious issues that must be resolved before ICN is deployed in the VANET environment. These modules have received tremendous interest from the VANET research community working towards ICN-based VANET deployment. Security, among other features, is the most vital aspect of a wireless network and can largely be addressed; however, there are still scenarios in which ICN may face problems. These issues can be categorized into several classes, such as denial-of-service (DoS) attacks, which may be further divided into attacks on (a) authentication, (b) availability, and (c) confidentiality. Authentication attacks are further divided into Sybil and impersonation attacks: in the former, the attacker uses multiple identities at the same time, whereas in the latter, the attacker poses as a genuine user. Availability is the most critical element in the VANET environment, and attackers therefore target this aspect; attacks on availability can directly endanger users' safety. This element has been treated rigorously in the literature, and interested readers are referred to [10] for an in-depth treatment. In the third module, confidentiality, information must be accessible only to the authorized group; in other words, it must be hidden from unauthorized users. Confidentiality is prone to attacks because information is exchanged over a public network. However, attacks on confidentiality can be largely avoided in ICN, as content requests are based on names rather than IP addresses. ICN routing mechanisms are categorized into two classes, i.e., name-based routing and name resolution. In name-based routing, a content request is forwarded on the basis of the content name, the related state is stored along the publisher-subscriber path, and the content itself is delivered to the subscriber along the reverse path. Name resolution, on the other hand, proceeds in two steps: first, the content name is mapped to a single IP address or a group of IP addresses; second, a shortest path through the network is computed using a protocol such as Open Shortest Path First (OSPF), and the subscriber's request is forwarded to the content publisher BIB004. The existing Internet was designed for fixed devices, where a node's IP address must belong to the subnet of the network to which it is attached. Nevertheless, the number of non-fixed nodes keeps growing: wireless traffic from 27 billion devices is expected to account for more than 63% of total IP traffic by 2021. Mobile devices can easily switch networks and change their IP addresses, and therefore offer novel transmission means based on opportunistic and intermittent connectivity BIB002. Nonetheless, such approaches do not provide uninterrupted connectivity, which has become an essential requirement. ICN naming may be divided into two classes, i.e., hierarchical and flat naming. In the hierarchical approach, a name is made of several hierarchical elements, where each element is a sequence of characters created by subscribers. Hierarchical names are easy to understand, but they are non-persistent BIB004. A flat naming approach, on the other hand, is considered more useful because a hash table can be used to identify the next hop for a content request BIB006. A further advantage of flat names over hierarchical names is that flat names can be subdivided, enabling parallel processing. ICN in-network caching can follow three principles, i.e., democratic, uniform, and pervasive BIB001. Under the democratic principle, all network nodes have equal rights to publish contents they have already cached. Under the uniform principle, a single routing protocol may be used for all contents or all network nodes, if required. The pervasive principle demands that a cached content be available to all nodes in the network.
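As an illustration of the two naming schemas discussed above, the sketch below contrasts a flat-name lookup through a hash table with a longest-prefix match over hierarchical name components; the FIB entries, content names, and face identifiers are invented for the example and are not taken from the cited works.

```python
import hashlib

# Illustrative FIB entries; names, faces, and prefixes are assumptions for the sketch.
FLAT_FIB = {}            # hash(content name) -> next-hop face
HIER_FIB = {             # hierarchical prefix -> next-hop face
    "/highway/a1": "face-rsu-3",
    "/highway/a1/traffic": "face-vehicle-7",
}

def flat_register(name: str, face: str) -> None:
    """Register a flat (hashed) name in the FIB."""
    FLAT_FIB[hashlib.sha256(name.encode()).hexdigest()] = face

def flat_lookup(name: str):
    """Flat naming: a single hash-table probe identifies the next hop."""
    return FLAT_FIB.get(hashlib.sha256(name.encode()).hexdigest())

def hierarchical_lookup(name: str):
    """Hierarchical naming: longest-prefix match over name components."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:i])
        if prefix in HIER_FIB:
            return HIER_FIB[prefix]
    return None

flat_register("/highway/a1/traffic/jam-report", "face-rsu-3")
print(flat_lookup("/highway/a1/traffic/jam-report"))          # face-rsu-3
print(hierarchical_lookup("/highway/a1/traffic/jam-report"))  # face-vehicle-7
```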
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> An Overview of Research Challenges <s> The current Internet architecture was founded upon a host-centric communication model, which was appropriate for coping with the needs of the early Internet users. Internet usage has evolved however, with most users mainly interested in accessing (vast amounts of) information, irrespective of its physical location. This paradigm shift in the usage model of the Internet, along with the pressing needs for, among others, better security and mobility support, has led researchers into considering a radical change to the Internet architecture. In this direction, we have witnessed many research efforts investigating Information-Centric Networking (ICN) as a foundation upon which the Future Internet can be built. Our main aims in this survey are: (a) to identify the core functionalities of ICN architectures, (b) to describe the key ICN proposals in a tutorial manner, highlighting the similarities and differences among them with respect to those core functionalities, and (c) to identify the key weaknesses of ICN proposals and to outline the main unresolved research challenges in this area of networking research. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> An Overview of Research Challenges <s> Content dissemination in Vehicular Ad-hoc Networks has a myriad of applications, ranging from advertising and parking notifications, to traffic and emergency warnings. This heterogeneity requires optimizing content storing, retrieval and forwarding among vehicles to deliver data with short latency and without jeopardizing network resources. In this paper, for a few reference scenarios, we illustrate how approaches that combine Content Centric Networking (CCN) and Floating Content (FC) enable new and efficient solutions to this issue. Moreover, we describe how a network architecture based on Software Defined Networking (SDN) can support both CCN and FC by coordinating distributed caching strategies, by optimizing the packet forwarding process and the availability of floating data items. For each scenario analyzed, we highlight the main research challenges open, and we describe a few possible solutions. <s> BIB002
In this section, we present the key ICN-based VANET challenges, shown in Figure 2, that need to be addressed and resolved before ICN can be deployed. The main goal is to draw the attention of the ICN, SDN, Edge, and VANET research communities to merging these attractive models on one platform. The taxonomy of these models is presented in Figure 3, which is based on the papers BIB002. In ICN, contents are forwarded hop-by-hop by in-network nodes, with each node holding three data structures: the Pending Interest Table (PIT), which records the interfaces through which content requests arrive; the Forwarding Information Base (FIB), which maps content names to output interfaces; and the Content Store (CS), which caches contents locally BIB001.
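A minimal sketch of this per-node forwarding state is given below; the class, the method names, and the simplistic prefix matching are assumptions made for illustration rather than a faithful NDN/CCN implementation.

```python
from collections import defaultdict

class ICNNode:
    """Sketch of per-node ICN forwarding state: CS, PIT, and FIB."""

    def __init__(self, fib):
        self.cs = {}                    # Content Store: name -> cached data
        self.pit = defaultdict(set)     # PIT: name -> faces awaiting the data
        self.fib = fib                  # FIB: name prefix -> output face

    def on_interest(self, name, in_face):
        """Handle an incoming Interest; return (action, face) or None."""
        if name in self.cs:                    # cache hit: answer from the CS
            return ("data", in_face)
        first_request = name not in self.pit   # aggregate duplicate Interests
        self.pit[name].add(in_face)
        if first_request:
            for prefix, out_face in self.fib.items():
                if name.startswith(prefix):    # simplistic prefix match
                    return ("forward", out_face)
        return None                            # PIT already holds this request

    def on_data(self, name, data):
        """Handle returning Data: cache it and satisfy all pending faces."""
        self.cs[name] = data
        return self.pit.pop(name, set())       # faces to which the Data is sent

node = ICNNode(fib={"/highway/a1": "face-uplink"})
print(node.on_interest("/highway/a1/accident-alert", "face-vehicle-2"))
print(node.on_data("/highway/a1/accident-alert", b"slow down"))
print(node.on_interest("/highway/a1/accident-alert", "face-vehicle-9"))
```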
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> SDN-Based Vehicular ICN <s> With the advances in telecommunications, more and more devices are connected to the Internet and getting smart. As a promising application scenario for carrier networks, vehicular communication has enabled many traffic-related applications. However, the heterogeneity of wireless infrastructures and the inflexibility in protocol deployment hinder the real world application of vehicular communications. SDN is promising to bridge the gaps through unified network abstraction and programmability. In this research, we propose an SDN-based architecture to enable rapid network innovation for vehicular communications. Under this architecture, heterogeneous wireless devices, including vehicles and roadside units, are abstracted as SDN switches with a unified interface. In addition, network resources such as bandwidth and spectrum can also be allocated and assigned by the logically centralized control plane, which provides a far more agile configuration capability. Besides, we also study several cases to highlight the advantages of the architecture, such as adaptive protocol deployment and multiple tenants isolation. Finally, the feasibility and effectiveness of the proposed architecture and cases are validated through traffic-trace-based simulation. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> SDN-Based Vehicular ICN <s> This paper provides an overview of software-defined “hardware” infrastructures (SDHI). SDHI builds upon the concept of hardware (HW) resource disaggregation. HW resource disaggregation breaks today’s physical server-oriented model where the use of a physical resource (e.g., processor or memory) is constrained to a physical server’s chassis. SDHI extends the definition of of software-defined infrastructures (SDI) and brings greater modularity, flexibility, and extensibility to cloud infrastructures, thus allowing cloud operators to employ resources more efficiently and allowing applications not to be bounded by the physical infrastructure’s layout. This paper aims to be an initial introduction to SDHI and its associated technological advancements. This paper starts with an overview of the cloud domain and puts into perspective some of the most prominent efforts in the area. Then, it presents a set of differentiating use-cases that SDHI enables. Next, we state the fundamentals behind SDI and SDHI, and elaborate why SDHI is of great interest today. Moreover, it provides an overview of the functional architecture of a cloud built on SDHI, exploring how the impact of this transformation goes far beyond the cloud infrastructure level in its impact on platforms, execution environments, and applications. Finally, an in-depth assessment is made of the technologies behind SDHI, the impact of these technologies, and the associated challenges and potential future directions of SDHI. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> SDN-Based Vehicular ICN <s> Information-Centric Networking (ICN) is an appealing architecture that has received a remarkable interest from the research community thanks to its friendly structure. Several projects have proposed innovative ICN models to cope with the Internet practice, which moves from host-centrism to receiver-driven communication. 
A worth mentioning component of these novel models is in-network caching, which provides flexibility and pervasiveness for the upturn of swiftness in data distribution. Because of the rapid Internet traffic growth, cache deployment and content caching have been unanimously accepted as conspicuous ICN issues to be resolved. In this article, a survey of cache management strategies in ICN is presented along with their contributions and limitations, and their performance is evaluated in a simulation network environment with respect to cache hit, stretch ratio, and eviction operations. Some unresolved ICN caching challenges and directions for future research in this networking area are also discussed. <s> BIB003
SDN brings a new idea to the Cloud BIB002 by introducing resource disaggregation, where all resources are pooled and supervised by software. Unlike the legacy physical server model, SDN treats hardware resources as discrete and flexible elements. With the deployment of SDN, the fixed client-server arrangement disappears; only information about the various hardware resources is encoded and processed. The administrator of a Cloud architecture then deploys a particular client-server environment, for example with logical clients and servers, which brings a high level of flexibility to the Cloud environment. However, this approach requires the underlying physical resources to be connected through a high-speed communication channel in order to cope with the physical disaggregation of resources BIB002. Moreover, SDN is an evolving idea that separates the data plane from the control plane, where the controller collects information from network nodes and provides an abstract view of the network. In addition, SDN applications can access the SDN controller and use it to deploy various network services. In the context of ICN-based VANET, SDN provides scalability, manageability, and a global view of the network. Among the different ICN modules, in-network caching is one of the most popular approaches BIB003, as it improves content availability and decreases content retrieval delay. SDN is a suitable choice for improving caching, as it helps distinguish between diverse kinds of cached contents. However, linking ICN and SDN in the VANET environment is a challenging task due to vehicle mobility. An SDN-based VANET scheme is proposed in BIB001, where wireless nodes are treated as SDN switches that share the network bandwidth under a centralized control plane. This technique suits SDN-based VANET, but it does not consider the most promising paradigm, i.e., ICN. Thus, an intelligent and flexible method is required for the ICN-based VANET communication paradigm to share network resources while supporting smooth content mobility and caching.
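The control/data-plane separation described above can be sketched as follows; the switch and controller classes, the reported state, and the rule format are assumptions made purely for illustration and do not correspond to an existing SDN API or to the scheme of BIB001.

```python
class VehicularSwitch:
    """Data plane: a vehicle or RSU abstracted as an SDN switch."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.rules = {}        # name prefix -> output face, installed by the controller

    def report_state(self):
        # A real switch would also report position, load, cache occupancy, etc.
        return {"id": self.node_id, "installed_prefixes": list(self.rules)}

    def install_rule(self, prefix, out_face):
        self.rules[prefix] = out_face


class SDNController:
    """Control plane: builds an abstract network view and pushes rules."""
    def __init__(self):
        self.view = {}         # node_id -> last reported state

    def collect(self, switches):
        for sw in switches:
            self.view[sw.node_id] = sw.report_state()

    def program(self, switches, prefix, out_face):
        # Naive policy for the sketch: install the same forwarding rule everywhere.
        for sw in switches:
            sw.install_rule(prefix, out_face)


switches = [VehicularSwitch("rsu-1"), VehicularSwitch("vehicle-42")]
ctrl = SDNController()
ctrl.collect(switches)
ctrl.program(switches, "/highway/a1/traffic", "face-uplink")
print(ctrl.view)
```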
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> We present a Selective Neighbor Caching (SNC) approach for enhancing seamless mobility in ICN architectures. The approach is based on proactively caching information requests and the corresponding items to a subset of proxies that are one hop away from the proxy a mobile is currently connected to. A key contribution of this paper is the definition of a target cost function that captures the tradeoff between delay and cache cost, and a simple procedure for selecting the appropriate subset of neighbors which considers the mobility behavior of users. We present investigations for the steady-state and transient performance of the proposed scheme which identify and quantify its gains compared to proactively caching in all neighbor proxies and to the case where no caching is performed. Moreover, our investigations show how these gains are affected by the delay and cache cost, and the mobility behavior. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> The Internet is straining to meet demands that its design never anticipated, such as supporting billions of mobile devices and transporting huge amounts of multimedia content. 
The publish-subscribe Internet (PSI) architecture, a clean slate information-centric networking approach to the future Internet, was designed to satisfy the current and emerging user demands for pervasive content delivery, which the Internet can no longer handle. This article provides an overview of the PSI architecture, explaining its operation from bootstrapping to information delivery, focusing on its support for network layer caching and seamless mobility, which make PSI an excellent platform for ubiquitous information delivery. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices. <s> BIB004 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Cloud computing datacenter hosts hundreds of thousands of servers that coordinate users' tasks in order to deliver highly available computing service. These servers consist of multiple memory modules, network cards, storage disks, processors etc…, each of these components while capable of failing. At such a large scale, hardware component failure is the norm rather than an exception. Hardware failure can lead to performance degradation to users and can result in losses to the business. Fault tolerant is one of efficient modules that keep hardware in operational mode as much as possible. In this paper, we survey the most famous fault tolerance technique in cloud computing, and list numerous FT methods proposed by the research experts in this field. <s> BIB005 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Migration can contribute to efficient resource management in cloud computing environment. Migration is used in many areas such as power reduction, load balancing, and fault tolerance in Cloud Dater Centers (CDCs). However most of the previous works concentrated on the implementation of migration technology itself. Thus, it need to consider metrics which may impact the migration performance and energy efficiency. In this paper we summarize and classify previous approaches of migration in CDCs. Furthermore, we conclude with a discussion of research problems in this area. In the future work, we will study on live migration mechanism to improve the live migration performance and energy efficiency in the variety of CDCs. 
<s> BIB006 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Recently, a series of innovative information-centric networking (ICN) architectures have been designed to better address the shift from host-centric end-to-end communication to requester-driven content retrieval. With the explosive increase of mobile data traffic, the mobility issue in ICN is a growing concern and a number of approaches have been proposed to deal with the mobility problem in ICN. Despite the potential advantages of ICN in mobile wireless environments, several significant research challenges remain to be addressed before its widespread deployment, including consistent routing, local cached content discovery, energy efficiency, privacy, security and trust, and practical deployment. In this paper, we present a brief survey on some of the works that have already been done to achieve mobile ICN, and discuss some research issues and challenges. We identify several important aspects of mobile ICN: overview, mobility enabling technologies, information-centric wireless mobile networks, and research challenges. <s> BIB007
Unlike data processing and execution in a local area network (LAN), Cloud computing is a concept in which computation relies on different resources shared over a wide area network (WAN). The Cloud computing architecture is usually based on large data centers connected together, through which users access their desired resources over the Internet BIB005. At present, companies such as Google, Microsoft, and Amazon operate Cloud data centers BIB001 to store huge amounts of records and host widely used service applications BIB006. Because of hardware and/or software constraints, or security considerations, it is indispensable to use data centers for storing records BIB001 BIB006; however, service provision in this setting becomes a crucial problem. Due to the exponential growth of Internet traffic, using vehicles as cloud nodes is an appealing idea. This is possible because many vehicles are equipped with caching and sensing capabilities that support driving safety as well as passenger infotainment. Using the cloud as an infotainment-only feature is easy in the current IP-based Internet paradigm; however, linking this feature with the ICN model is a challenging task due to vehicle mobility. Using the ICN naming concept, seamless mobility can be supported without the difficult network administration required in IP-based networks when the topological or physical locations of mobile nodes change BIB007. In the last few years, various strategies have been proposed to address this challenge from the viewpoints of publisher and subscriber mobility. Proposals for enabling subscriber mobility generally incorporate proactive caching BIB002 and prompt recovery of requests/replies BIB004. Basically, ICN is a publish-subscribe networking model in which subscribers are mainly interested in the actual contents rather than their locations BIB003; its primary focus is on content retrieval, allowing subscribers to obtain the requested information. Thus, it is a promising model for resolving several ubiquitous challenges faced by the IP-based infrastructure, for example mobility and security, among others BIB007. Mobile cloud services can be integrated with the ICN-based VANET to provide access to stored information. However, because ICN accesses content by name, a feasible connectivity approach is required for a smooth transition. Nevertheless, this facility raises several new challenges in the network model, which are discussed in Section 5.
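The proactive caching idea of BIB002 (Selective Neighbor Caching) revolves around a trade-off between handover delay and cache cost. The sketch below selects the subset of one-hop neighbor proxies worth pre-caching at, using an assumed per-proxy handover probability, delay saving, and cache cost; the criterion, numbers, and proxy names are purely illustrative and are not the cost function of the cited work.

```python
def select_neighbor_proxies(handover_prob, delay_saving, cache_cost):
    """Pre-cache at a neighbor proxy only if its expected delay saving
    outweighs the cache cost (a simplified SNC-style criterion)."""
    return [proxy for proxy, p in handover_prob.items()
            if p * delay_saving > cache_cost]

# Illustrative mobility profile of a vehicle approaching three RSU proxies.
handover_prob = {"rsu-east": 0.7, "rsu-west": 0.2, "rsu-north": 0.05}
selected = select_neighbor_proxies(handover_prob, delay_saving=100.0, cache_cost=15.0)
print(selected)   # ['rsu-east', 'rsu-west']
```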
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> In many aspects of human activity, there has been a continuous struggle between the forces of centralization and decentralization. Computing exhibits the same phenomenon; we have gone from mainframes to PCs and local networks in the past, and over the last decade we have seen a centralization and consolidation of services and applications in data centers and clouds. We position that a new shift is necessary. Technological advances such as powerful dedicated connection boxes deployed in most homes, high capacity mobile end-user devices and powerful wireless networks, along with growing user concerns about trust, privacy, and autonomy requires taking the control of computing applications, data, and services away from some central nodes (the "core") to the other logical extreme (the "edge") of the Internet. We also position that this development can help blurring the boundary between man and machine, and embrace social computing in which humans are part of the computation and decision making loop, resulting in a human-centered system design. We refer to this vision of human-centered edge-device based computing as Edge-centric Computing. We elaborate in this position paper on this vision and present the research challenges associated with its implementation. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> As a meaningful and typical application scenario of Internet of Things (IoT), Internet of Vehicles (IoV) has attracted a lot of attentions to solve the increasingly severe problem of traffic congestion and safety issues in smart city. Information-centric networks (ICN) is a main stream of next generation network because of its content based forwarding strategy and in-network caching properties. Many existing works have been done to introduce ICN in IoV because of IP-based network architecture's maladjustment of such extremely mobile and dynamic IoV environment. In contrast, ICN is able to sustain packet delivery in unreliable and extreme environment. However, the frequent mobility of the vehicles will consume ICN's network resources to incessantly update the Forward Information Base (FIB) which will further affect the aggregation processing. For example, geographic location based name schema aggregation will be affected dramatically by mobility problem. 
On the other hand, fog computing is an edge computing technology usually integrated with IoT by bringing computation and storage capacity near the underlying networks to provide low-latency and time sensitive services. In this paper, we integrate fog computing into information-centric IoV to provide mobility support by allocating different schema taking account of the data characteristic (e.g., user-shareable data, communication data). Moreover, we use the computation, storage and location-aware capabilities of fog to design a mobility support mechanism for data exchange and communication considering the feature of IoV service (e.g., alarm danger in local, updating traffic information, V2V communication etc.). We evaluate related performances of the proposed mechanism in high mobility environment, compared with original information-centric IoV. The result shows the advantages of a fog computing based information-centric IoV. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> Internet of Things (IoT) allows billions of physical objects to be connected to collect and exchange data for offering various applications, such as environmental monitoring, infrastructure management, and home automation. On the other hand, IoT has unsupported features (e.g., low latency, location awareness, and geographic distribution) that are critical for some IoT applications, including smart traffic lights, home energy management and augmented reality. To support these features, fog computing is integrated into IoT to extend computing, storage and networking resources to the network edge. Unfortunately, it is confronted with various security and privacy risks, which raise serious concerns towards users. In this survey, we review the architecture and features of fog computing and study critical roles of fog nodes, including real-time services, transient storage, data dissemination and decentralized computation. We also examine fog-assisted IoT applications based on different roles of fog nodes. Then, we present security and privacy threats towards IoT applications and discuss the security and privacy requirements in fog computing. Further, we demonstrate potential challenges to secure fog computing and review the state-of-the-art solutions used to address security and privacy issues in fog computing for IoT applications. Finally, by defining several open research issues, it is expected to draw more attention and efforts into this new architecture. <s> BIB004
Edge computing BIB001 is deployed to bring computing resources close to data subscribers. Its architecture is decentralized: network nodes jointly carry out a considerable amount of processing, data caching, and control BIB002 BIB004 . With the help of Edge nodes and their network connections, processing overhead is largely reduced and bandwidth restrictions are overcome for consolidated services . The appeal of Edge computing grows with the provision of on-demand services and the availability of resources near consumer devices, which results in low response times and greater consumer satisfaction . Data sharing in vehicular networks has been a growing concern in recent years. Three types of data sharing are deemed the most prominent practices in vehicular networks, i.e., warning messages about an accident, reminders for the prevention of vehicle crashes, and notices about road congestion BIB003 . Besides, the availability of infotainment content for passengers has also attracted the attention of vehicular research forums. In this regard, Edge computing, which brings storage capacity and computational processes near the customers so that content can be retrieved with minimum delay, is integrated with vehicular networks . However, integrating ICN with vehicular Edge computing is a challenging issue due to the constant mobility of vehicles. That is, content stored in an Edge node (a vehicle) and accessed by a passenger in a moving car can be reached relatively easily in the IP-based Internet model, whereas relating it to ICN, which relies on names rather than IP addresses, is considerably more complicated. This can be mitigated by replicating an accessed content at several RSUs, at the cost of redundancy. Yet, according to the Cisco Visual Networking Index (VNI) , mobile traffic by 2021, with 5.5 billion users, will reach seven times the current traffic, which further complicates vehicular ICN. In other words, the storage of RSUs would be exhausted in a moment, forcing the replacement of already-stored contents; in this scenario, the accessing node (vehicle) will no longer find the previously cached content. This motivates researchers to design intelligent and sophisticated mechanisms that pave the way for integrating Edge and ICN with vehicular networks.
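To make the RSU storage-exhaustion problem concrete, the following minimal Python sketch models an RSU that caches named content objects with a fixed capacity and least-recently-used (LRU) eviction; once the cache fills up, older items are evicted and a later request by name misses. The class and method names (RSUCache, put, get) and the tiny capacity are illustrative assumptions and do not correspond to any specific ICN implementation.

from collections import OrderedDict

class RSUCache:
    """Toy model of an RSU content store with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # content name -> content object

    def put(self, name, content):
        # Insert (or refresh) a named content object; evict the LRU entry if full.
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def get(self, name):
        # Return cached content by name, or None on a cache miss.
        if name not in self.store:
            return None
        self.store.move_to_end(name)
        return self.store[name]

# A burst of new contents can evict what a moving vehicle cached earlier:
rsu = RSUCache(capacity=2)
rsu.put("/highway7/km42/traffic", "congestion ahead")
rsu.put("/highway7/km42/weather", "fog")
rsu.put("/city/parking/lot3", "12 free slots")   # evicts the traffic item
print(rsu.get("/highway7/km42/traffic"))          # None -> cache miss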
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Research Opportunities <s> Vehicle-to-anything (V2X) communications refer to information exchange between a vehicle and various elements of the intelligent transportation system (ITS), including other vehicles, pedestrians, Internet gateways, and transport infrastructure (such as traffic lights and signs). The technology has a great potential of enabling a variety of novel applications for road safety, passenger infotainment, car manufacturer services, and vehicle traffic optimization. Today, V2X communications is based on one of two main technologies: dedicated short-range communications (DSRC) and cellular networks. However, in the near future, it is not expected that a single technology can support such a variety of expected V2X applications for a large number of vehicles. Hence, interworking between DSRC and cellular network technologies for efficient V2X communications is proposed. This paper surveys potential DSRC and cellular interworking solutions for efficient V2X communications. First, we highlight the limitations of each technology in supporting V2X applications. Then, we review potential DSRC-cellular hybrid architectures, together with the main interworking challenges resulting from vehicle mobility, such as vertical handover and network selection issues. In addition, we provide an overview of the global DSRC standards, the existing V2X research and development platforms, and the V2X products already adopted and deployed in vehicles by car manufactures, as an attempt to align academic research with automotive industrial activities. Finally, we suggest some open research issues for future V2X communications based on the interworking of DSRC and cellular network technologies. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Research Opportunities <s> The developments of connected vehicles are heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching, and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme. <s> BIB002
Generally, VANET communications rely on two key technologies, i.e., cellular networking and dedicated short-range communications (DSRC) BIB001 . Through these technologies, a vehicle obtains information from its own sensors and provides it to other vehicles or RSUs BIB002 . The current trend is to move computation, management, and content caching into the Edge, ICN, and Cloud. This shift, however, brings a number of new system requirements, which are presented in Figure 4 and discussed in the following subsections.
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Mobility <s> 'Where's' in a name? <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Mobility <s> Wireless network virtualization and information-centric networking (ICN) are two promising techniques in software-defined 5G mobile wireless networks. Traditionally, these two technologies have been addressed separately. In this paper we show that integrating wireless network virtualization with ICN techniques can significantly improve the end-to-end network performance. In particular, we propose an information- centric wireless network virtualization architecture for integrating wireless network virtualization with ICN. We develop the key components of this architecture: radio spectrum resource, wireless network infrastructure, virtual resources (including content-level slicing, network-level slicing, and flow-level slicing), and informationcentric wireless virtualization controller. Then we formulate the virtual resource allocation and in-network caching strategy as an optimization problem, considering the gain of not only virtualization but also in-network caching in our proposed information-centric wireless network virtualization architecture. The obtained simulation results show that our proposed information-centric wireless network virtualization architecture and the related schemes significantly outperform the other existing schemes. <s> BIB002
Mobility is a critical challenge for ICN deployment across all emerging technologies BIB001 . In ICN, when subscribers change their location, their connectivity moves from one node to another; however, since no IP address is used to route contents, this handover is transparent, in contrast to IP, where addresses must be changed BIB002 . In the VANET environment, content objects must pass through a centralized facilitator (a vehicle equipped with sensors) before reaching the actual subscriber. This is a crucial concern because contents in VANETs may then travel along a longer path rather than the best one. Mobility in ICN is achieved through the publish-subscribe Internet paradigm: interested subscribers request particular contents by sending request messages without knowing the location of the content, and the publisher responds with the actual content BIB002 . This decoupling of publisher and subscriber supports secure content distribution and has therefore caught the attention of the ICN and VANET communities for integrating these emerging models. However, providing mobility support in ICN-based VANETs requires smart and suitable techniques so that content is routed to a particular destination without extra retrieval delay.
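As a rough illustration of the receiver-driven, location-independent exchange described above, the Python sketch below shows a subscriber issuing a request for a content name and any node holding that content answering it, with no addresses involved. The class and function names (Node, express_interest) are hypothetical and greatly simplify real ICN forwarding.

class Node:
    """Toy ICN node: holds named contents and answers requests by name."""
    def __init__(self, label, contents=None):
        self.label = label
        self.contents = dict(contents or {})  # name -> data

    def on_interest(self, name):
        # Return (producer, data) if this node can satisfy the request.
        if name in self.contents:
            return (self.label, self.contents[name])
        return None

def express_interest(name, reachable_nodes):
    # The subscriber does not know *where* the content is; it only names
    # *what* it wants, and any reachable node may answer.
    for node in reachable_nodes:
        answer = node.on_interest(name)
        if answer is not None:
            return answer
    return None

rsu = Node("rsu-17", {"/road/a3/accident-alert": "lane 2 blocked"})
car = Node("vehicle-42")  # has nothing cached yet
print(express_interest("/road/a3/accident-alert", [car, rsu]))
# -> ('rsu-17', 'lane 2 blocked'), regardless of which node currently holds the data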
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Naming <s> In this paper we apply the Named Data Networking [8], a newly proposed Internet architecture, to networking vehicles on the run. Our initial design, dubbed V-NDN, illustrates NDN's promising potential in providing a unifying architecture that enables networking among all computing devices independent from whether they are connected through wired infrastructure, ad hoc, or intermittent DTN. This paper describes a prototype implementation of V-NDN and its preliminary performance assessment, and identifies remaining challenges. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Naming <s> In the connected vehicle ecosystem, a high volume of information-rich and safety-critical data will be exchanged by roadside units and onboard transceivers to improve the driving and traveling experience. However, poor-quality wireless links and the mobility of vehicles highly challenge data delivery. The IP address-centric model of the current Internet barely works in such extremely dynamic environments and poorly matches the localized nature of the majority of vehicular communications, which typically target specific road areas (e.g., in the proximity of a hazard or a point of interest) regardless of the identity/address of a single vehicle passing by. Therefore, a paradigm shift is advocated from traditional IP-based networking toward the groundbreaking information- centric networking. In this article, we scrutinize the applicability of this paradigm in vehicular environments by reviewing its core functionalities and the related work. The analysis shows that, thanks to features like named content retrieval, innate multicast support, and in-network data caching, information-centric networking is positioned to meet the challenging demands of vehicular networks and their evolution. Interoperability with the standard architectures for vehicular applications along with synergies with emerging computing and networking paradigms are debated as future research perspectives. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Naming <s> Information-centric networking (ICN) approaches have been considered as an alternative approach to TCP/IP. Contrary to the traditional IP, the ICN treats content as a first-class citizen of the entire network, where names are given through different naming schemes to contents and are used during the retrieval. Among ICN approaches, content centric networking (CCN) is one of the key protocols being explored for Internet of Things (IoT), names the contents using hierarchical naming. Moreover, CCN follows pull-based strategy and exhibits the communication loop problem because of its broadcasting mode. However, IoT requires both pull and push modes of communication with scalable and secured content names in terms of integrity. In this paper, we propose a hybrid naming scheme that names contents using hierarchical and flat components to support both push and pull communication and to provide both scalability and security, respectively. We consider an IoT-based smart campus scenario and introduce two transmission modes: 1) unicast mode and 2) broadcast mode to address loop problem associated with CCN. 
Simulation results demonstrate that proposed scheme significantly improves the rate of interest transmissions, number of covered hops, name aggregation, and reliability along with addressing the loop problem. <s> BIB003
Unlike the sender-receiver approach of the current Internet, contents in ICN are named, and names are either hierarchical or flat BIB003 . In ICN, the name of a content is decoupled from its location so that the content can be supplied to any requesting subscriber. Content retrieval therefore follows a receiver-driven approach, instead of the sender-controlled delivery of the IP-based architecture. In an ICN-based VANET, content may be discovered more easily than in an IP-based VANET, because ICN does not require the original server (publisher) to be reachable every time the content is requested. Furthermore, retrieving content from different publishers, for instance a map from a common RSU, becomes easier by aggregating requests that carry the same name, which simplifies data delivery for incoming requests BIB002 . ICN naming significantly assists vehicular communications by allowing forwarding vehicles to handle contents on the basis of application requirements. Named data transmission makes ICN-based VANETs robust to connection interruptions and hereby characterizes the vehicular Internet BIB001 .
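As an illustration of how hierarchical names can be built and aggregated, the sketch below composes an NDN-style name from application-level components and groups pending requests that share a common prefix, so one response can satisfy several of them. The name layout (/domain/road/segment/content-type) is an assumed example, not a standardized scheme.

def make_name(domain, road, segment, content_type):
    # Hierarchical, human-readable name, e.g. "/vanet/highway7/km42/traffic"
    return "/" + "/".join([domain, road, segment, content_type])

def aggregate_by_prefix(requests, prefix_len):
    # Group pending requests whose first `prefix_len` components match,
    # so a single cached response can satisfy all of them.
    groups = {}
    for name in requests:
        components = name.strip("/").split("/")
        prefix = "/" + "/".join(components[:prefix_len])
        groups.setdefault(prefix, []).append(name)
    return groups

pending = [
    make_name("vanet", "highway7", "km42", "traffic"),
    make_name("vanet", "highway7", "km42", "traffic"),   # duplicate request, aggregated
    make_name("vanet", "highway7", "km43", "map"),
]
print(aggregate_by_prefix(pending, prefix_len=3))
# {'/vanet/highway7/km42': [two traffic requests], '/vanet/highway7/km43': ['/vanet/highway7/km43/map']}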
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Caching <s> Information-Centric Networking (ICN) is an appealing architecture that has received a remarkable interest from the research community thanks to its friendly structure. Several projects have proposed innovative ICN models to cope with the Internet practice, which moves from host-centrism to receiver-driven communication. A worth mentioning component of these novel models is in-network caching, which provides flexibility and pervasiveness for the upturn of swiftness in data distribution. Because of the rapid Internet traffic growth, cache deployment and content caching have been unanimously accepted as conspicuous ICN issues to be resolved. In this article, a survey of cache management strategies in ICN is presented along with their contributions and limitations, and their performance is evaluated in a simulation network environment with respect to cache hit, stretch ratio, and eviction operations. Some unresolved ICN caching challenges and directions for future research in this networking area are also discussed. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Caching <s> The developments of connected vehicles are heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching, and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme. <s> BIB002
ICN caching is commonly divided into several categories, i.e., (a) off-path caching, which requires extra storage devices, (b) on-path caching, which is performed opportunistically along the delivery path, (c) homogeneous caching, where caching nodes cooperate with each other, and (d) heterogeneous caching, in which caching nodes do not cooperate with one another. For an in-depth treatment of these techniques, a recent survey BIB001 explains ICN caching strategies in detail along with their contributions and limitations. In vehicular networks, caching is performed either at RSUs or at other sensing devices such as vehicles. If an RSU acts as the caching node, a vehicle can acquire its cached data easily BIB002 . However, once the vehicle travels some distance and leaves the coverage area of that RSU, locating the real cache position becomes complicated. Similarly, if another vehicle acts as the caching node and moves in the opposite direction from the requesting vehicle, locating that caching vehicle is even more challenging. Moreover, cache updating is essential for avoiding traffic congestion and accidents that may be caused by outdated cached content. Therefore, a sound cache-update strategy accompanied by a fine-grained forwarding mechanism is a basic requirement of the ICN-based VANET environment.
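The sketch below illustrates, under simplifying assumptions, two of the points above: an opportunistic on-path caching decision (here a simple caching probability, loosely in the spirit of probabilistic caching) and a freshness check that discards stale entries so that outdated traffic information is not served. All parameters (CACHE_PROB, MAX_AGE_S) are invented for illustration.

import random
import time

CACHE_PROB = 0.5        # probability that a forwarding node caches passing content (assumed)
MAX_AGE_S = 30.0        # freshness lifetime for safety-related content (assumed)

content_store = {}       # content name -> (data, timestamp)

def maybe_cache_on_path(name, data):
    # Opportunistic on-path caching: cache passing content with some probability.
    if random.random() < CACHE_PROB:
        content_store[name] = (data, time.time())

def lookup_fresh(name):
    # Serve a cached copy only if it is still fresh; evict it otherwise.
    entry = content_store.get(name)
    if entry is None:
        return None
    data, ts = entry
    if time.time() - ts > MAX_AGE_S:
        del content_store[name]   # stale traffic info must not be served
        return None
    return data

maybe_cache_on_path("/highway7/km42/traffic", "congestion ahead")
print(lookup_fresh("/highway7/km42/traffic"))  # data or None, depending on caching and freshness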
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Sensors are distributed across the globe leading to an avalanche of data about our environment. It is possible today to utilize networks of sensors to detect and identify a multitude of observations, from simple phenomena to complex events and situations. The lack of integration and communication between these networks, however, often isolates important data streams and intensifies the existing problem of too much data and not enough knowledge. With a view to addressing this problem, the semantic sensor Web (SSW) proposes that sensor data be annotated with semantic metadata that will both increase interoperability and provide contextual information essential for situational knowledge. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Direct radio-based vehicle-to-vehicle communication can help prevent accidents by providing accurate and up-to-date local status and hazard information to the driver. In this paper, we assume that two types of messages are used for traffic safety-related communication: 1) Periodic messages (ldquobeaconsrdquo) that are sent by all vehicles to inform their neighbors about their current status (i.e., position) and 2) event-driven messages that are sent whenever a hazard has been detected. In IEEE 802.11 distributed-coordination-function-based vehicular networks, interferences and packet collisions can lead to the failure of the reception of safety-critical information, in particular when the beaconing load leads to an almost-saturated channel, as it could easily happen in many critical vehicular traffic conditions. In this paper, we demonstrate the importance of transmit power control to avoid saturated channel conditions and ensure the best use of the channel for safety-related purposes. We propose a distributed transmit power control method based on a strict fairness criterion, i.e., distributed fair power adjustment for vehicular environments (D-FPAV), to control the load of periodic messages on the channel. The benefits are twofold: 1) The bandwidth is made available for higher priority data like dissemination of warnings, and 2) beacons from different vehicles are treated with ldquoequal rights,rdquo and therefore, the best possible reception under the available bandwidth constraints is ensured. We formally prove the fairness of the proposed approach. Then, we make use of the ns-2 simulator that was significantly enhanced by realistic highway mobility patterns, improved radio propagation, receiver models, and the IEEE 802.11p specifications to show the beneficial impact of D-FPAV for safety-related communications. We finally put forward a method, i.e., emergency message dissemination for vehicular environments (EMDV), for fast and effective multihop information dissemination of event-driven messages and show that EMDV benefits of the beaconing load control provided by D-FPAV with respect to both probability of reception and latency. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Vehicular ad hoc networks play a critical role in enabling important active safety applications such as cooperative collision warning. 
These active safety applications rely on continuous broadcast of self-information by all vehicles, which allows each vehicle to track all its neighboring cars in real time. The most pressing challenge in such safety-driven communication is to maintain acceptable tracking accuracy while avoiding congestion in the shared channel. In this article we propose a transmission control protocol that adapts communication rate and power based on the dynamics of a vehicular network and safety-driven tracking process. The proposed solution uses a closed-loop control concept and accounts for wireless channel unreliability. Simulation results confirm that if packet generation rate and associated transmission power for safety messages are adjusted in an on-demand and adaptive fashion, robust tracking is possible under various traffic conditions. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Vehicle Safety Communications (VSC) is advancing rapidly towards product development and field testing. While a number of possible solutions have been proposed, the question remains open as how such a system will address the issue of scalability in its actual deployment. This paper presents a design methodology for congestion control in VSC as well as the description and evaluation of a resulting rate adaption oriented protocol named PULSAR. We start with a list of design principles reflecting the state of the art that define why and how vehicles should behave while responding to channel congestion in order to ensure fairness and support the needs of safety applications. From these principles, we derive protocol building blocks required to fulfill the defined objectives. Then, the actual protocol is described and assessed in detail, including a discussion on the intricate features of channel load assessment, rate adaptation and information sharing. A comparison with other state-of-the-art protocols shows that “details matter” with respect to the temporal and spatial dimensions of the protocol outcome. <s> BIB004
The deployment of ICN schemes in VANETs confronts highly dynamic and heterogeneous problems when the goals and requirements of such an integration are considered . Differences in techniques (with respect to actuators, sensors, end-to-end diversity, and their functionalities) and in the data collected and consumed in such scenarios would certainly lead to various concerns . For example, vehicular nodes share similar restrictions and prerequisites. One of the most vital problems is the use of technologies that can provide resourceful connectivity over an unreliable network such as a VANET. Another significant point in vehicular communications is how the different semantics of shared and stored contents affect content distribution; in reality, this is a challenging issue that is not covered within the ICN scope BIB001 . Given the ability of in-network nodes to store forwarded contents in order to serve upcoming requests, a question arises concerning the ICN framework as to whether it should participate in any process that enables the association or analysis of the various suppliers of information . In VANETs, the traffic on the wireless medium that results from periodic packet exchange needs to be carefully controlled so as to avoid a decline in the quality of safety-related data at reception time . For this reason, various strategies have been proposed in the literature, such as D-FPAV BIB002 , ATC BIB003 , and PULSAR BIB004 , which regulate traffic congestion under a strict fairness measure that must be achieved for safety as well as emergency messages. However, the exchange of control messages in vehicular ICN is a yet-to-be-resolved issue and needs careful consideration when designing adaptive strategies for fair and reliable transmissions.
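To give a flavor of the rate-adaptation idea behind such load-control schemes (without reproducing any specific protocol), the following sketch adjusts a vehicle's beacon rate from the measured channel busy ratio while keeping it within application bounds. The thresholds and rates are invented for illustration and do not come from D-FPAV, ATC, or PULSAR.

MIN_RATE_HZ = 1.0      # lowest beacon rate the safety application tolerates (assumed)
MAX_RATE_HZ = 10.0     # highest useful beacon rate (assumed)
TARGET_CBR = 0.6       # target channel busy ratio (assumed)

def adapt_beacon_rate(current_rate_hz, measured_cbr, step_hz=0.5):
    """Additively lower the rate when the channel is busier than the target,
    and additively raise it when the channel is under-used."""
    if measured_cbr > TARGET_CBR:
        new_rate = current_rate_hz - step_hz
    else:
        new_rate = current_rate_hz + step_hz
    return max(MIN_RATE_HZ, min(MAX_RATE_HZ, new_rate))

rate = 10.0
for cbr in [0.8, 0.75, 0.7, 0.55, 0.5]:   # sampled channel busy ratios
    rate = adapt_beacon_rate(rate, cbr)
    print(f"CBR={cbr:.2f} -> beacon rate {rate:.1f} Hz")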
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> In this paper, we present a service-driven mobility support architecture for Information Centric Networks that provides seamless mobility as an on-demand network service, which can be enabled/disabled based on network capabilities or resource availability. Proposed architecture relies on the ID/Locator split on ICN namespaces to support the use of persistent names and avoids name reconfiguration due to mobility. We implemented the proposed solution over a service-centric CCN platform, with multiple end-hosts running a video conferencing application acting as Consumers and Producers, and observed its capability to support seamless handover. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> This chapter presents a thorough investigation on current vehicular networking architectures (access technologies and overlay networks) and their (r)evolution towards the 5G era. The main driving force behind vehicular networking is to increase safety, with several other applications exploiting this ecosystem for traffic efficiency and infotainment provision. The most prominent existing candidates for vehicular networking are based on dedicated short range communications (DSRC) and cellular (4G) communications. In addition, the maturity of cloud computing has accommodated the invasion of vehicular space with cloud-based services. Nevertheless, current architectures can not meet the latency requirements of Intelligent Transport Systems (ITS) applications in highly congested and mobile environments. The future trend of autonomous driving pushes current networking architectures further to their limits with hard real-time requirements. Vehicular networks in 5G have to address five major challenges that affect current architectures: congestion, mobility management, backhaul networking, air interface and security. As networking transforms from simple connectivity provision, to service and content provision, fog computing approaches with caching and pre-fetching improve significantly the performance of the networks. The cloudification of network resources through software defined networking (SDN)/network function virtualization (NFV) principles, is another promising enabler for efficient vehicular networking in 5G. Finally, new wireless access mechanisms combined with current DSRC and 4G will enable to bring the vehicles in the cloud. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> The proposed 3GPP's 5G Next-generation (NextGen) Core architecture (5GC) enables the ability to introduce new user and control plane functions within the context of network slicing to allow greater flexibility in handling of heterogeneous devices and applications. In this paper, we discuss the integration of such architecture with future networking technologies by focusing on the information centric networking (ICN) technology. For that purpose, we first provide a short description of the proposed 5GC, which is followed by a discussion on the extensions to 5GC's control and user planes to support Protocol Data Unit (PDU) sessions from ICN. To illustrate the value of enabling ICN within 5GC, we focus on two important network services that can be enabled by ICN data networks. 
The first case targets mobile edge computing for a connected car use case, whereas the second case targets seamless mobility support for ICN sessions. We present these discussions in consideration with the procedures proposed by 3GPP's 23.501 and 23.502 technical specifications. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> The challenging requirements of 5G, from both the applications and architecture perspectives, motivate the need to explore the feasibility of delivering services over new network architectures. As 5G proposes application-centric network slicing, which enables the use of new data planes realizable over a programmable compute, storage, and transport infrastructure, we consider information- centric networking as a candidate network architecture to realize 5G objectives. This can coexist with end-to-end IP services that are offered today. To this effect, we first propose a 5G-ICN architecture and compare its benefits (i.e., innovative services offered by leveraging ICN features) to current 3GPP-based mobile architectures. We then introduce a general application-driven framework that emphasizes the flexibility afforded by network functions virtualization and software defined networking over which 5G-ICN can be realized. We specifically focus on the issue of how mobility as a service (MaaS) can be realized as a 5G-ICN slice, and give an in-depth overview on resource provisioning and inter-dependencies and coordination among functional 5G-ICN slices to meet the MaaS objectives. The article tries to show the flexibility of delivering services over ICN where virtualization of control and data plane can be used by applications to meet complex service logic execution while creating value to its end users. <s> BIB004
The 5G architecture proposed by 3GPP makes it possible to introduce new control-plane and user-plane functions within the context of network slicing, which offers greater flexibility in handling various applications and devices. ICN can therefore benefit 5G from the viewpoint of multi-access edge computing (MEC) in terms of edge computing, edge caching, and session mobility BIB003 . In addition, mobile nodes positioned at the network edge support various delay-sensitive applications, e.g., virtual and augmented reality (VR/AR) and autonomous driving BIB004 . This trend is useful both for low-latency, high-bandwidth applications such as VR/AR and for non-real-time applications such as IoT communications and video-on-demand (VoD) BIB003 . Furthermore, the caching feature of ICN assists both real-time and non-real-time applications whenever there are temporal or spatial associations among the data objects retrieved by edge subscribers BIB003 . This argument is strengthened by the study conducted in , which argues that vehicular named networking encodes geolocation data into names. This is important because request messages are forwarded toward the geolocations where contents are published, and it is entirely feasible that a request message hits a vehicle carrying the desired content before reaching that location. Moreover, existing mobile communication deployments handle session mobility through centralized routing techniques, which face severe problems when service demands are replicated. In contrast, ICN's separation of persistent, application-level names from locators allows node mobility to be handled effectively BIB001 . However, this remains quite challenging in environments with rapid mobility, such as vehicular communications BIB002 . Thus, substantial effort is demanded in this area from the ICN community working on vehicular communications. A summary of the existing ICN-based vehicular communication proposals is presented in Table 1 .
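As a small illustration of embedding geolocation into a content name (the general idea referenced above, not any particular proposal), the sketch below quantizes GPS coordinates into a grid cell and uses the cell identifier as a name component, so requests can be forwarded toward the producing region. The grid resolution and name layout are assumptions.

def geo_cell(lat, lon, cell_deg=0.01):
    # Quantize coordinates into a grid cell of roughly 1 km at mid latitudes (assumed resolution).
    return f"{int(lat / cell_deg)}_{int(lon / cell_deg)}"

def geo_name(lat, lon, content_type):
    # Encode the producing region into the name itself.
    return f"/vanet/geo/{geo_cell(lat, lon)}/{content_type}"

name = geo_name(45.0703, 7.6869, "traffic")
print(name)  # /vanet/geo/4507_768/traffic

# A forwarder can extract the target cell from the name and steer the request
# toward that region, or answer directly if a closer vehicle already holds the data.
target_cell = name.split("/")[3]
print(target_cell)  # 4507_768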
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Content-centric networking is a new paradigm conceived for future Internet architectures, where communications are driven by contents instead of host addresses. This paradigm has key potentialities to enable effective and efficient communications in the challenging vehicular environment characterized by short-lived connectivity and highly dynamic network topologies. We design CRoWN, a content-centric framework for vehicular ad-hoc networks, which is implemented on top of the IEEE 802.11p standard layers and is fully compliant with them. Performance comparison against the legacy IP-based approach demonstrates the superiority of CRoWN, thus paving the way for content-centric vehicular networking. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Recently, Information Centric Networking (ICN) has attracted much attention also for mobiles. Unlike host-based communication models, ICN promotes data names as the first-class citizen in the network. However, the current ICN name-based routing requires Interests be routed by name to the nearest replica, implying the Interests are flooded in VANET. This introduces large overhead and consequently degrades wireless network performance. In order to maintain the efficiency of ICN implementation in VANET, we propose an opportunistic geo-inspired content based routing method. Our method utilizes the last encounter information of each node to infer the locations of content holders. With this information, the Interests can be geo-routed instead of being flooded to reduce the congestion level of the entire network. The simulation results show that our proposed method reduces the scope of flooding to less than two hops and improves retrieval rate by 1.42 times over flooding-based methods. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Vehicular information network and Internet of Things (IoT) technologies have been receiving a lot of attention in recent years. As one of the most important and promising IoT areas, a vehicular information network aims to implement a myriad of applications related to vehicles, traffic information, drivers, passengers, and pedestrians. 
However, intervehicular communication (IVC) in a vehicular information network is still based on the TCP/IP protocol stack which is not efficient and scalable. To address the efficiency and scalability issues of the IVC, we leverage the named data networking (NDN) paradigm where the end user only cares about the needed content and pays no attention to the actual location of the content. The NDN model is highly suitable for the IVC scenario with its hierarchical content naming scheme and flexible content retrieval and caching support. We design a novel vehicular information network architecture based on the basic communication principle of NDN. Our proposed architecture aims to improve content naming, addressing, data aggregation, and mobility for IVC in the vehicular information network. In addition, the key parameter settings of the proposed schemes are analyzed in order to help guide their actual deployment. <s> BIB004 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> High-quality multimedia streaming services in Vehicular Ad-hoc Networks (VANETs) are severely hindered by intermittent host connectivity issues. The Information Centric Networking (ICN) paradigm could help solving this issue thanks to its new networking primitives driven by content names rather than host addresses. This unique feature, in fact, enables native support to mobility, in-network caching, nomadic networking, multicast, and efficient content dissemination. In this paper, we focus on exploring the potential social cooperation among vehicles in highways. An ICN-based COoperative Caching solution, namely ICoC, is proposed to improve the quality of experience (QoE) of multimedia streaming services. In particular, ICoC leverages two novel social cooperation schemes, namely partner-assisted and courier-assisted, to enhance information-centric caching. To validate its effectiveness, extensive ns-3 simulations have been executed, showing that ICoC achieves a considerable improvement in terms of start-up delay and playback freezing with respect to a state-of-the-art solution based on probabilistic caching. <s> BIB005
Table 1. Summary of existing ICN-based vehicular communication proposals.
Routing — Amadeo et al. BIB001 : collision avoidance; open issue: techniques to avoid the explosion of ICN data structures.
Routing — Amadeo et al. BIB002 : selective flooding scheme; open issue: selection of dynamic outgoing interfaces.
Naming — Yan et al. BIB004 : hierarchical naming schemes; open issue: agreement on common naming.
Naming — Quan et al. BIB005 : flat naming scheme; open issue: agreement on common naming.
Caching — Yu et al. BIB003 : caching unsolicited contents; open issue: smart scope-based caching strategies.
Secret-Sharing Schemes: A Survey <s> Introduction <s> Every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: If no faults occur, no set of size t n /2 of players gets any additional information (other than the function value), Even if Byzantine faults are allowed, no set of size t n /3 can either disrupt the computation or get additional information. Furthermore, the above bounds on t are tight! <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> Under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2 n /3 of the participants are honest. The secrecy achieved is unconditional. It does not rely on any assumption about computational intractability. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers which are protected and trustworthy, but may be outdated, and the data servers which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized, and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We show that verifiable secret sharing (VSS) and secure multi-party computation (MPC) among a set of n players can efficiently be based on any linear secret sharing scheme (LSSS) for the players, provided that the access structure of the LSSS allows MPC or VSS at all. Because an LSSS neither guarantees reconstructability when some shares are false, nor verifiability of a shared value, nor allows for the multiplication of shared values, an LSSS is an apparently much weaker primitive than VSS or MPC. ::: ::: Our approach to secure MPC is generic and applies to both the information-theoretic and the cryptographic setting. 
The construction is based on 1) a formalization of the special multiplicative property of an LSSS that is needed to perform a multiplication on shared values, 2) an efficient generic construction to obtain from any LSSS a multiplicative LSSS for the same access structure, and 3) an efficient generic construction to build verifiability into every LSSS (always assuming that the adversary structure allows for MPC or VSS at all). ::: ::: The protocols are efficient. In contrast to all previous information-theoretically secure protocols, the field size is not restricted (e.g, to be greater than n). Moreover, we exhibit adversary structures for which our protocols are polynomial in n while all previous approaches to MPC for non-threshold adversaries provably have super-polynomial complexity. <s> BIB005 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data, is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumesHierarchical Identity-Based Encryption (HIBE). <s> BIB006 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> Protocols for Generalized Oblivious Transfer(GOT) were introduced by Ishai and Kushilevitz [10]. They built it by reducing GOT protocols to standard 1-out-of-2 oblivious transfer protocols based on private protocols. In our protocols, we provide alternative reduction by using secret sharing schemes instead of private protocols. We therefore show that there exist a natural correspondence between GOT and general secret sharing schemes and thus the techniques and tools developed for the latter can be applied equally well to the former. <s> BIB007 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption, and decryption time scales linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. ::: ::: We present three constructions within our framework. Our first system is proven selectively secure under a assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear-Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions. <s> BIB008
Secret-sharing schemes are a tool used in many cryptographic protocols. A secret-sharing scheme involves a dealer who has a secret, a set of n parties, and a collection A of subsets of parties called the access structure. A secret-sharing scheme for A is a method by which the dealer distributes shares to the parties such that: (1) any subset in A can reconstruct the secret from its shares, and (2) any subset not in A cannot reveal any partial information on the secret. Originally motivated by the problem of secure information storage, secret-sharing schemes have found numerous other applications in cryptography and distributed computing, e.g., Byzantine agreement , secure multiparty computations BIB001 BIB002 BIB005 , threshold cryptography , access control BIB004 , attribute-based encryption BIB006 BIB008 , and generalized oblivious transfer BIB007 .
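As a concrete example of such a scheme for the widely used threshold access structure (all subsets of at least t of the n parties are authorized), the Python sketch below implements Shamir's classic construction over a small prime field: the dealer hides the secret in the constant term of a random polynomial of degree t - 1, and any t shares recover it by Lagrange interpolation at x = 0. The field size and helper names are chosen only for this toy illustration.

import random

P = 2**13 - 1  # a small prime field for illustration (8191)

def share(secret, n, t):
    """Shamir (t, n) threshold sharing: returns shares (i, f(i)) for i = 1..n."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=1234, n=5, t=3)
print(reconstruct(shares[:3]))   # 1234 — any 3 shares suffice
print(reconstruct(shares[2:5]))  # 1234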
Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Certain cryptographic keys, such as a number which makes it possible to compute the secret decoding exponent in an RSA public key cryptosystem,1,5 or the system master key and certain other keys in a DES cryptosystem,3 are so important that they present a dilemma. If too many copies are distributed one might go astray. If too few copies are made they might all be destroyed. A typical cryptosystem will have several volatile copies of an important key in protected memory locations where they will very probably evaporate if any tampering or probing occurs. Since an opponent may be content to disrupt the system by forcing the evaporation of all these copies it is useful to entrust one or more other nonvolatile copies to reliable individuals or secure locations. What must the nonvolatile copies of the keys, or nonvolatile pieces of information from which the keys are reconstructed, be guarded against? The answer is that there are at least three types of incidents: <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Secret Sharing from the perspective of threshold schemes has been well-studied over the past decade. Threshold schemes, however, can only handle a small fraction of the secret sharing functions which we may wish to form. For example, if it is desirable to divide a secret among four participants A, B, C, and D in such a way that either A together with B can reconstruct the secret or C together with D can reconstruct the secret, then threshold schemes (even with weighting) are provably insufficient.This paper will present general methods for constructing secret sharing schemes for any given secret sharing function. There is a natural correspondence between the set of "generalized" secret sharing functions and the set of monotone functions, and tools developed for simplifying the latter set can be applied equally well to the former set. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> In a secret sharing scheme, a dealer has a secret. The dealer gives each participant in the scheme a share of the secret. There is a set Γ of subsets of the participants with the property that any subset of participants that is in Γ can determine the secret. In a perfect secret sharing scheme, any subset of participants that is not in Γ cannot obtain any information about the secret. We will say that a perfect secret sharing scheme is ideal if all of the shares are from the same domain as the secret. Shamir and Blakley constructed ideal threshold schemes, and Benaloh has constructed other ideal secret sharing schemes. In this paper, we construct ideal secret sharing schemes for more general access structures which include the multilevel and compartmented access structures proposed by Simmons. <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Particular aqueous gels, epoxy resin compositions and optional additives such as diluents, retarders and accelerators are described which produce a practical composition and method for in situ sand consolidation and gravel packing by which a resin coated sand is positioned in a desired location and cured by an internal catalyst to form a porous permeable or plugged consolidated mass. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). 
<s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB005 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> A linear algebraic model of computation the span program, is introduced, and several upper and lower bounds on it are proved. These results yield applications in complexity and cryptography. The proof of the main connection, between span programs and counting branching programs, uses a variant of Razborov's general approximation method. > <s> BIB006 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> A secret sharing scheme permits a secret to be shared among participants of an n-element group in such a way that only qualified subsets of participants can recover the secret. If any nonqualified subset has absolutely no information on the secret, then the scheme is called perfect. The share in a scheme is the information that a participant must remember. ::: ::: In [3] it was proved that for a certain access structure any perfect secret sharing scheme must give some participant a share which is at least 50\percent larger than the secret size. We prove that for each n there exists an access structure on n participants so that any perfect sharing scheme must give some participant a share which is at least about $n/\log n$ times the secret size.^1 We also show that the best possible result achievable by the information-theoretic method used here is n times the secret size. ::: ::: ^1 All logarithms in this paper are of base 2. <s> BIB007 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> A secret sharing scheme permits a secret to be shared among participants in such a way that only qualified subsets of participants can recover the secret, but any nonqualified subset has absolutely no information on the secret. The set of all qualified subsets defines the access structure to the secret. Sharing schemes are useful in the management of cryptographic keys and in multiparty secure protocols. ::: ::: We analyze the relationships among the entropies of the sample spaces from which the shares and the secret are chosen. We show that there are access structures with four participants for which any secret sharing scheme must give to a participant a share at least 50% greater than the secret size. This is the first proof that there exist access structures for which the best achievable information rate (i.e., the ratio between the size of the secret and that of the largest share) is bounded away from 1. The bound is the best possible, as we construct a secret sharing scheme for the above access structures that meets the bound with equality. <s> BIB008 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Span programs provide a linear algebraic model of computation. Lower bounds for span programs imply lower bounds for formula size, symmetric branching programs, and contact schemes. 
Monotone span programs correspond also to linear secret-sharing schemes. We present a new technique for proving lower bounds for monotone span programs. We prove a lower bound of Ω(m2.5) for the 6-clique function. Our results improve on the previously known bounds for explicit functions. <s> BIB009 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data, is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumesHierarchical Identity-Based Encryption (HIBE). <s> BIB010 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption, and decryption time scales linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. ::: ::: We present three constructions within our framework. Our first system is proven selectively secure under a assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear-Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions. <s> BIB011
Public-key encryption is a powerful mechanism for protecting the confidentiality of stored and transmitted information. Nowadays, in many applications there is a provider that wants to share data according to some policy based on the user's credentials. In an attribute-based encryption system, presented by Sahai and Waters , each user has a set of attributes (i.e., credentials), and the provider will grant permission to decrypt the message if some predicate of the attributes holds (e.g., a user can decode an e-mail if she is a "FRIEND" and "IMPORTANT"). In BIB010 BIB011 , it is shown that if the predicate can be described by an access structure that can be implemented by an efficient linear secret-sharing scheme, then there is an efficient attribute-based encryption system for this predicate. Secret-sharing schemes were introduced by Blakley BIB001 and Shamir for the threshold case, that is, for the case where the subsets that can reconstruct the secret are all the sets whose cardinality is at least a certain threshold. Secret-sharing schemes for general access structures were introduced and constructed by Ito, Saito, and Nishizeki . More efficient schemes were presented in, e.g., BIB002 BIB004 BIB003 BIB006 . Specifically, Benaloh and Leichter BIB002 proved that if an access structure can be described by a small monotone formula then it has an efficient perfect secret-sharing scheme. This was generalized by Karchmer and Wigderson BIB006 , who showed that if an access structure can be described by a small monotone span program then it has an efficient scheme (a special case of this construction appeared earlier in BIB003 ). A major problem with secret-sharing schemes is that the share size in the best known secret-sharing schemes realizing general access structures is exponential in the number of parties in the access structure. Thus, the known constructions for general access structures are impractical. This is true even for explicit access structures (e.g., access structures whose characteristic function can be computed by a small uniform circuit). On the other hand, the best known lower bounds on the share size for sharing a secret with respect to an access structure (e.g., in BIB008 BIB007 ) are far from the above upper bounds. The best lower bound was proved by Csirmaz BIB007 , who showed that, for every n, there is an access structure with n parties such that sharing ℓ-bit secrets requires shares of length Ω(ℓn/log n). The question of whether there exist more efficient schemes, or whether there exists an access structure that does not have (space-)efficient schemes, remains open. The following is a widely believed conjecture (see, e.g., ): Conjecture 1. There exists an ε > 0 such that for every integer n there is an access structure with n parties, for which every secret-sharing scheme distributes shares of length exponential in the number of parties, that is, 2^{εn}. Proving (or disproving) this conjecture is one of the most important open questions concerning secret sharing. No major progress on proving or disproving this conjecture has been obtained in the last 16 years. It is not known how to prove that there exists an access structure that requires super-polynomial shares (even for an implicit access structure). Most previously known secret-sharing schemes are linear. In a linear scheme, the secret is viewed as an element of a finite field, and the shares are obtained by applying a linear mapping to the secret and several independent random field elements.
For example, the schemes of BIB001 BIB002 BIB004 BIB006 are all linear. For many applications, linearity is important, e.g., for secure multi-party computation, as will be described in Section 4. Thus, studying linear secret-sharing schemes and their limitations is important. Linear secret-sharing schemes are equivalent to monotone span programs, defined by BIB006. Super-polynomial lower bounds for monotone span programs, and therefore for linear secret-sharing schemes, were proved in BIB009.

In this survey we will present two unpublished results of Rudich. Rudich considered a Hamiltonian access structure: the parties in this access structure are edges in a complete undirected graph, and a set of edges (parties) is authorized if it contains a Hamiltonian cycle BIB005. Rudich proved that if NP ≠ coNP, then this access structure does not have a secret-sharing scheme in which the sharing of the secret can be done by a polynomial-time algorithm. As efficient sharing of secrets is essential in applications of secret sharing, Rudich's result implies that there is no practical scheme for the Hamiltonian access structure. Furthermore, Rudich proved that if one-way functions exist and the Hamiltonian access structure has a computational secret-sharing scheme (with efficient sharing and reconstruction), then efficient protocols for oblivious transfer exist. Thus, constructing a computational secret-sharing scheme for the Hamiltonian access structure would solve a major open problem in cryptography, i.e., using Impagliazzo's terminology, it would prove that Minicrypt = Cryptomania.
Definitions
In this section we define secret-sharing schemes. We supply two definitions and argue that they are equivalent. We start with a definition of secret sharing as given in BIB002 BIB001 BIB003.
Definition 2 (Secret Sharing). A distribution scheme Σ with domain of secrets K is a secret-sharing scheme realizing an access structure A if the following two requirements hold. Correctness: every authorized set B ∈ A can reconstruct the secret from its shares, that is, for every secret k ∈ K the parties in B can compute k from Π(k, r)_B with probability 1. Privacy: every unauthorized set T ∉ A learns nothing about the secret from its shares, that is, for every two secrets a, b ∈ K the distributions Π(a, r)_T and Π(b, r)_T are identical.
Remark 1. In the above definition, we required correctness with probability 1 and perfect privacy: for every two secrets a, b the distributions Π(a, r)_T and Π(b, r)_T are identical. We can relax these requirements and require that correctness holds with high probability and that the statistical distance between Π(a, r)_T and Π(b, r)_T is small. Schemes that satisfy these relaxed requirements are called statistical secret-sharing schemes. For example, such schemes are designed in .

We next present an alternative definition of secret-sharing schemes, originating in BIB001 BIB003; this definition uses the entropy function. For this definition we assume that there is some known probability distribution on the domain of secrets K. Any probability distribution on the domain of secrets, together with the distribution scheme Σ, induces, for any A ⊆ {p_1, . . . , p_n}, a probability distribution on the vector of shares of the parties in A. We denote the random variable taking values according to this probability distribution on the vector of shares of A by S_A, and by S the random variable denoting the secret. The privacy in the alternative definition requires that if T ∉ A, then the random variables S and S_T are independent.

As traditional in the secret-sharing literature, we formalize the above two requirements using the entropy function. The support of a random variable X is the set of all values x such that Pr[X = x] > 0. Given a random variable X, the entropy of X is defined as H(X) = −Σ_x Pr[X = x] log Pr[X = x], where the sum is taken over all values x in the support of X, i.e., all values x such that Pr[X = x] > 0. It holds that 0 ≤ H(X) ≤ log |SUPPORT(X)|. Intuitively, H(X) measures the amount of uncertainty in X, where H(X) = 0 if X is deterministic, i.e., there is a value x such that Pr[X = x] = 1, and H(X) = log |SUPPORT(X)| if X is uniformly distributed over SUPPORT(X). Given two random variables X and Y, we consider their concatenation XY and define the conditional entropy H(X|Y) = H(XY) − H(Y); two random variables X and Y are independent iff H(X|Y) = H(X), and the value of Y implies the value of X iff H(X|Y) = 0. For more background on the entropy function, the reader may consult BIB002.

Definition 3 (Secret Sharing - Alternative Definition). We say that a distribution scheme is a secret-sharing scheme realizing an access structure A with respect to a given probability distribution on the secrets, denoted by a random variable S, if the following conditions hold. Correctness: H(S | S_B) = 0 for every B ∈ A. Privacy: H(S | S_T) = H(S) for every T ∉ A.
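The entropy-based conditions of Definition 3 can be checked concretely on a toy example. The following is a minimal sketch (an illustration, not part of the survey) that verifies, for the 2-out-of-2 additive scheme over F_2 with a uniform one-bit secret, that the single party p_1 satisfies the privacy condition H(S | S_{p_1}) = H(S) while the pair {p_1, p_2} satisfies the correctness condition H(S | S_{{p_1,p_2}}) = 0; the helper names and the choice of scheme are assumptions of the sketch.

```python
import math
from collections import Counter
from itertools import product

def H(dist):
    """Shannon entropy of a distribution given as {value: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    """Marginal of the coordinates in idx from a joint distribution {tuple: prob}."""
    out = Counter()
    for outcome, p in joint.items():
        out[tuple(outcome[i] for i in idx)] += p
    return out

# Joint distribution of (secret, share of p1, share of p2):
# the secret k and the random bit r are uniform, p1 gets r, p2 gets r XOR k.
joint = Counter()
for k, r in product([0, 1], repeat=2):
    joint[(k, r, r ^ k)] += 1 / 4

HS = H(marginal(joint, [0]))
# Privacy: H(S | S_{p1}) = H(S, S_{p1}) - H(S_{p1}) = H(S).
assert abs(H(marginal(joint, [0, 1])) - H(marginal(joint, [1])) - HS) < 1e-9
# Correctness: H(S | S_{p1}, S_{p2}) = H(S, S_{p1}, S_{p2}) - H(S_{p1}, S_{p2}) = 0.
assert abs(H(joint) - H(marginal(joint, [1, 2]))) < 1e-9
```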
The Monotone Formulae Construction [14]
Benaloh and Leichter BIB001 describe a construction of secret-sharing schemes for any access structure based on monotone formulae. The construction of BIB001 generalizes the construction of and is more efficient. However, also in this scheme, for most access structures the length of the shares is exponential in the number of parties, even for a one-bit secret. The scheme of Benaloh and Leichter is recursive: it starts with schemes for simple access structures and constructs a scheme for a composition of the access structures.

Let A_1 and A_2 be two access structures. We assume that they have the same set of parties {p_1, . . . , p_n}. However, it is possible that some parties are redundant in one of the access structures, that is, there might be parties that do not belong to minimal authorized sets in one of the access structures. We define two new access structures, where B ∈ A_1 ∨ A_2 iff B ∈ A_1 or B ∈ A_2, and B ∈ A_1 ∧ A_2 iff B ∈ A_1 and B ∈ A_2. We assume that for i ∈ {1, 2} there is a secret-sharing scheme Σ_i realizing A_i, where the two schemes have the same domain of secrets K = {0, . . . , m − 1} for some m ∈ N. Furthermore, assume that for every 1 ≤ j ≤ n the share of p_j in the scheme Σ_i is an element in K^(a_{i,j}) for every i ∈ {1, 2}, and denote a_j = a_{1,j} + a_{2,j}. Then there exist secret-sharing schemes realizing A_1 ∨ A_2 and A_1 ∧ A_2 in which the domain of shares of p_j is K^(a_j); a code sketch of both compositions is given below.

- To share a secret k ∈ K for the access structure A_1 ∨ A_2, independently share k using each of the two schemes Σ_1 and Σ_2.
- To share a secret k ∈ K for the access structure A_1 ∧ A_2, choose k_1 ∈ K with uniform distribution and let k_2 = (k − k_1) mod m. Next, for i ∈ {1, 2}, independently share k_i using the scheme Σ_i (realizing A_i). For every set B ∈ A_1 ∧ A_2, the parties in B can reconstruct both k_1 and k_2 and compute k = (k_1 + k_2) mod m. On the other hand, for every set T ∉ A_1 ∧ A_2, the parties in T do not have any information on at least one k_i, hence they do not have any information on the secret k.

For example, given an access structure A whose minimal authorized sets are B_1, . . . , B_ℓ, we can write A = A_1 ∨ · · · ∨ A_ℓ, where A_i is the access structure whose only minimal authorized set is B_i; for every 1 ≤ i ≤ ℓ there is a scheme realizing A_i with a domain of secrets {0, 1}, where each p_j ∈ B_i gets a one-bit share. Applying the scheme of Benaloh and Leichter recursively, we get the scheme of Ito, Saito, and Nishizeki. The scheme of Benaloh and Leichter can efficiently realize a much richer family of access structures than the access structures that can be efficiently realized by the scheme of Ito, Saito, and Nishizeki.

To describe the access structures that can be efficiently realized by Benaloh and Leichter's scheme, it is convenient to view an access structure as a function. We describe each set B ⊆ {p_1, . . . , p_n} by its characteristic vector v_B ∈ {0, 1}^n, whose jth coordinate is 1 iff p_j ∈ B. With an access structure A, we associate the function f_A : {0, 1}^n → {0, 1}, where f_A(v_B) = 1 iff B ∈ A. We say that f_A describes A. As A is monotone, the function f_A is monotone. Furthermore, for two access structures A_1 and A_2, the function describing A_1 ∨ A_2 is the OR of f_{A_1} and f_{A_2}, and the function describing A_1 ∧ A_2 is their AND. Using this observation, the scheme of Benaloh and Leichter can efficiently realize every access structure that can be described by a small monotone formula.
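To make the two composition steps concrete, here is a minimal Python sketch (an illustration, not the survey's construction verbatim) of the OR and AND compositions over the domain Z_m. The callables share_A1 and share_A2 stand for arbitrary sub-schemes realizing A_1 and A_2, and additive_share is a hypothetical helper used only to instantiate the example; what follows the construction above is only how the secret is passed on, identical for OR and additively split for AND.

```python
import secrets

def share_or(k, m, share_A1, share_A2):
    """Realize A1 OR A2: the same secret is shared independently with both schemes."""
    return share_A1(k), share_A2(k)

def share_and(k, m, share_A1, share_A2):
    """Realize A1 AND A2: the secret is split additively, one summand per scheme."""
    k1 = secrets.randbelow(m)          # uniform in Z_m
    k2 = (k - k1) % m                  # k = (k1 + k2) mod m
    return share_A1(k1), share_A2(k2)

def additive_share(k, m, parties):
    """Toy sub-scheme: n-out-of-n additive sharing of k among the given parties."""
    rs = [secrets.randbelow(m) for _ in parties[:-1]]
    last = (k - sum(rs)) % m
    return dict(zip(parties, rs + [last]))

if __name__ == "__main__":
    m = 5
    sh1, sh2 = share_and(3, m,
                         lambda k: additive_share(k, m, ["p1", "p2"]),
                         lambda k: additive_share(k, m, ["p3", "p4"]))
    # An authorized set of A1 AND A2 reconstructs k1 and k2 and adds them mod m.
    k1 = sum(sh1.values()) % m
    k2 = sum(sh2.values()) % m
    assert (k1 + k2) % m == 3
```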
3.5 The Monotone Span Programs Construction BIB001 BIB002

All the above constructions are linear, that is, the distribution scheme is a linear mapping. More formally, in a linear secret-sharing scheme over a finite field F, the secret is an element of the field, the random string is a vector over the field such that each coordinate of this vector is chosen independently with uniform distribution from the field, and each share is a vector over the field such that each coordinate of this vector is some fixed linear combination of the secret and the coordinates of the random string.

Example 2. Consider the scheme for A_ustcon described in Section 3.2. This scheme is linear over the field with two elements F_2. In particular, the randomness is a vector r_2, . . . , r_{|V|−1} of |V| − 2 random elements in F_2, and the share of an edge (v_1, v_2), for example, is (k + r_2) mod 2, that is, it is the linear combination in which the coefficients of k and r_2 are 1 and all other coefficients are zero.

To model a linear scheme, we use monotone span programs, which are, basically, the matrices describing the linear mappings of linear schemes. The monotone span program also defines the access structure which the secret-sharing scheme realizes. In the rest of the paper, vectors are denoted by bold letters (e.g., r) and, according to the context, vectors are either row vectors or column vectors (i.e., if we write rM, then r is a row vector; if we write Mr, then r is a column vector).

We next prove that this scheme is private. If T ∉ A, then the rows of M_T do not span the vector e_1, i.e., the rank of the matrix obtained from M_T by appending the row e_1 is rank(M_T) + 1; from this fact one can show that, for every secret, the number of random strings consistent with the shares of T is the same, and hence the shares of T give no information on the secret.
Remark 2 (Historical Notes).
Brickell BIB001 in 1989 implicitly defined monotone span programs for the case where each party labels exactly one row, and proved Claim 2. Karchmer and Wigderson BIB002 in 1993 explicitly defined span programs and monotone span programs. They considered them as a computational model, and their motivation was proving lower bounds for modular branching programs. Karchmer and Wigderson showed that monotone span programs imply (linear) secret-sharing schemes. Beimel proved that linear secret-sharing schemes imply monotone span programs. Thus, linear secret-sharing schemes are equivalent to monotone span programs, and lower bounds on the size of monotone span programs imply the same lower bounds on the information ratio of linear secret-sharing schemes.

Example 4. We next describe the linear secret-sharing scheme for A_ustcon, presented in Section 3.2, as a monotone span program. In this access structure, we consider a complete graph with m vertices and n = m(m − 1)/2 edges, where each edge is a party. We construct a monotone span program over F_2, which has b = m − 1 columns and a = n rows. For every party (edge) (v_i, v_j), where 1 ≤ i < j ≤ m − 1, there is a unique row in the program labeled by this party; all entries in this row are zero, except for the ith and the jth entries, which are 1. Furthermore, for every party (edge) (v_i, v_m), where 1 ≤ i ≤ m − 1, there is a unique row in the program labeled by this party; all entries in this row are zero, except for the ith entry, which is 1 (this is equivalent to choosing r_m = 0 in Section 3.2). It can be proved that this monotone span program accepts a set of parties (edges) if and only if the set contains a path from v_1 to v_m. To construct a secret-sharing scheme from this monotone span program, we multiply the above matrix by a vector r = (k, r_2, . . . , r_{m−1}), and the share of party (v_i, v_j) is the row labeled by (v_i, v_j) in the matrix multiplied by r; that is, the share is as defined in the scheme for A_ustcon described above. A code sketch of this scheme is given below.

3.6 Multi-Linear Secret-Sharing Schemes BIB003

In the schemes derived from monotone span programs, the secret is one element from the field. This can be generalized to the case where the secret is some vector over the field. Such schemes, studied by BIB003, are called multi-linear and are based on a generalization of monotone span programs called multi-target monotone span programs. To share a secret (k_1, . . . , k_c), the dealer chooses random field elements r_{c+1}, . . . , r_b, sets r = (k_1, . . . , k_c, r_{c+1}, . . . , r_b), and computes the shares Mr. Any multi-target monotone span program is a monotone span program; however, using it to construct a multi-linear secret-sharing scheme may result in a scheme with better information ratio.

It was proved in BIB004 that in any secret-sharing scheme realizing the access structure whose minimal authorized sets are {p_1, p_2}, {p_2, p_3}, and {p_3, p_4}, the information ratio is at least 1.5. We present this lower bound and prove it in Theorem 1. By definition, the information ratio of a linear scheme is integral. We next present a multi-linear secret-sharing scheme realizing this access structure with information ratio 1.5. We first describe a linear scheme whose information ratio is 2. To share a bit k_1 ∈ F_2, the dealer independently chooses two random bits r_1 and r_2 with uniform distribution. The share of p_1 is r_1, the share of p_2 is r_1 ⊕ k_1, the share of p_3 is two bits, r_1 and r_2 ⊕ k_1, and the share of p_4 is r_2. Clearly, this scheme realizes this access structure, and its information ratio is 2, as the share of p_3 consists of two bits while the secret is a single bit.
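Returning to Examples 2 and 4, the following minimal sketch (an illustration under the conventions of the sketch, not code from the survey) computes the shares of the A_ustcon scheme as the product Mr over F_2, with the two kinds of matrix rows implicit in the two branches of the loop, and reconstructs the secret by XORing the shares of the edges along any path from v_1 to v_m.

```python
import secrets
from itertools import combinations

def share_ustcon(k, m):
    """Return {edge: share} for a secret bit k, i.e., the product M·r over F_2."""
    r = [k] + [secrets.randbelow(2) for _ in range(m - 2)]   # r = (k, r_2, ..., r_{m-1})
    shares = {}
    for i, j in combinations(range(1, m + 1), 2):
        if j < m:
            shares[(i, j)] = (r[i - 1] + r[j - 1]) % 2       # row with 1s at columns i and j
        else:
            shares[(i, j)] = r[i - 1]                        # edge touching v_m: single 1 at column i
    return shares

def reconstruct_along_path(shares, path):
    """XOR the shares of consecutive path edges; a path from v_1 to v_m yields k."""
    k = 0
    for a, b in zip(path, path[1:]):
        k ^= shares[(min(a, b), max(a, b))]
    return k

if __name__ == "__main__":
    m = 5
    shares = share_ustcon(1, m)
    assert reconstruct_along_path(shares, [1, 3, 2, 5]) == 1  # a path from v_1 to v_5
```

The reconstruction works because the intermediate random values telescope: the shares along a path sum (over F_2) to r_1 = k.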
Other Constructions
There are many other constructions of secret-sharing schemes for specific access structures, e.g., hierarchical access structures BIB001 BIB002 BIB004 BIB007, weighted threshold access structures BIB005, and more complicated compositions of access structures BIB003 BIB006.
Secret Sharing and Secure Multi-Party Computation
Secret-sharing schemes are a basic building block in the construction of many cryptographic protocols. In this section we demonstrate the use of secret-sharing schemes for secure multi-party computation of general functions. For simplicity, we concentrate on the case where the parties are honest-but-curious, that is, the parties follow the instructions of the protocol; however, at the end of the protocol some of them might collude and try to deduce information from the messages they got. The protocols that we describe are secure against an all-powerful adversary, that is, they supply information-theoretic security. We will first show a homomorphic property of Shamir's secret-sharing scheme. Using this property, we show how to use secret sharing to construct a protocol for securely computing the sum of secret inputs. Then, we will show how to securely compute the product of inputs. Combining these protocols, we get an efficient protocol for computing any function which can be computed by a small arithmetic circuit. Such protocols with information-theoretic security were first presented in BIB001 BIB002. The exact protocol we present here is from .

Claim (homomorphism of Shamir's scheme). If s_{1,1}, . . . , s_{1,n} and s_{2,1}, . . . , s_{2,n} are shares of the secrets k_1 and k_2, respectively, generated by Shamir's scheme with polynomials of degree at most t, then s_{1,1} + s_{2,1}, . . . , s_{1,n} + s_{2,n} are shares of the secret k_1 + k_2 in the same scheme.

Proof. Let Q_1 and Q_2 be the polynomials of degree at most t generating the shares s_{1,1}, . . . , s_{1,n} and s_{2,1}, . . . , s_{2,n}, respectively, that is, Q_i(0) = k_i and Q_i(α_j) = s_{i,j} for i ∈ {1, 2} and 1 ≤ j ≤ n (where α_1, . . . , α_n are defined in Section 3.1). Define Q(x) = Q_1(x) + Q_2(x). This is a polynomial of degree at most t such that Q(0) = k_1 + k_2 and Q(α_j) = s_{1,j} + s_{2,j} for every 1 ≤ j ≤ n; that is, Q generates the shares s_{1,1} + s_{2,1}, . . . , s_{1,n} + s_{2,n} given the secret k_1 + k_2. Similarly, the polynomial Q_1(x) · Q_2(x) is a polynomial of degree at most 2t generating the shares s_{1,1} · s_{2,1}, . . . , s_{1,n} · s_{2,n} given the secret k_1 · k_2.
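The additive homomorphism used for the addition gates can be illustrated with a short sketch of Shamir's scheme over a prime field (an illustrative implementation only: the prime P and the evaluation points α_j = j are arbitrary choices of the sketch). Each party adds its two shares locally, without interaction, and any t + 1 of the resulting values interpolate to the sum of the two secrets.

```python
import secrets

P = 2_147_483_647  # a prime; party p_j is associated with the point alpha_j = j

def shamir_share(k, t, n, p=P):
    """Shares of k: evaluations at 1..n of a random polynomial of degree <= t with Q(0) = k."""
    coeffs = [k] + [secrets.randbelow(p) for _ in range(t)]
    return [sum(c * pow(j, e, p) for e, c in enumerate(coeffs)) % p
            for j in range(1, n + 1)]

def reconstruct(points, p=P):
    """Lagrange interpolation at 0 from t+1 points (alpha_j, share_j)."""
    k = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for l, (xl, _) in enumerate(points):
            if l != j:
                num = num * (-xl) % p
                den = den * (xj - xl) % p
        k = (k + yj * num * pow(den, -1, p)) % p
    return k

if __name__ == "__main__":
    t, n = 2, 5
    s1 = shamir_share(10, t, n)
    s2 = shamir_share(32, t, n)
    local_sums = [(a + b) % P for a, b in zip(s1, s2)]     # computed locally, no interaction
    pts = [(j + 1, local_sums[j]) for j in range(t + 1)]   # any t+1 parties suffice
    assert reconstruct(pts) == 42
```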
Extensions to Other Models
The protocol we described above assumes that the corrupted parties are honest-but-curious. A more realistic assumption is that the parties can deviate from the protocol and send any messages that might help them. Such parties are called malicious. For example, in the multiplication protocol, a party that should share s_j can send shares that are not consistent with any secret. Furthermore, in the reconstruction step in the arithmetic circuit protocol, a party can send a "wrong" share. To cope with malicious behavior, the notion of verifiable secret sharing was introduced by Chor et al. Such schemes were constructed under various assumptions; see for a partial list of such constructions. We will not elaborate on verifiable secret sharing in this survey.

In the definition of secure computation we assumed that there is a parameter t, and an adversary can control any coalition of size at most t. This assumes that all parties are equally likely to be corrupted. Hirt and Maurer BIB001 considered a more general scenario in which there is an access structure, and the adversary can control any set of parties not in the access structure. That is, they require that any set not in the access structure cannot learn information not implied by the inputs of the parties in the set and the output of the function. Similarly to the requirement that 2t < n in the protocol we described above, secure computation against honest-but-curious parties is possible for general functions iff the union of every two sets not in the access structure does not cover the entire set of parties BIB001. For every such access structure A, Cramer et al. BIB002 showed that using any linear secret-sharing scheme realizing A, one can construct a protocol for computing any arithmetic circuit such that any set not in the access structure cannot learn any information; the complexity of the protocol is linear in the size of the circuit. Their protocol is similar to the protocol we described above, where for addition gates every party does local computation. Multiplication is also similar; however, the choice of the constants β_1, . . . , β_n is more involved. The protocol of Cramer et al. BIB002 shows the need for general secret-sharing schemes.
Lower Bounds on the Size of the Shares
The best known constructions of secret-sharing schemes for general access structures (e.g., BIB002 BIB003 BIB004 BIB005) have information ratio 2^(O(n)), where n is the number of parties in the access structure. As discussed in the introduction, we conjecture that this is the best possible. Lower bounds for secret-sharing schemes have been proved in, e.g., BIB001 BIB008 BIB006 BIB007. However, these lower bounds are far from the exponential upper bounds. The best lower bound was proved by Csirmaz BIB007, who proved that for every n there exists an n-party access structure such that every secret-sharing scheme realizing it has information ratio Ω(n/log n). In Sections 5.2-5.3, we review this proof. For linear secret-sharing schemes the situation is much better: for every n there exist access structures with n parties such that every linear secret-sharing scheme realizing them has super-polynomial, i.e., n^(Ω(log n)), information ratio BIB009. In Section 5.5, we present the lower bound proof of .
Stronger Lower Bounds
Starting from the works of Karnin et al. BIB001 and Capocelli et al. BIB004 , entropy has been used to prove lower bounds on the share size in secret-sharing schemes BIB002 BIB003 . That is, to prove lower bounds on the information ratio of secret-sharing schemes, we use the alternative definition of secret sharing via the entropy function, Definition 3. Towards proving lower bounds, we use properties of the entropy function as well as the correctness and privacy of secret-sharing schemes; this is summarized in Claim 5. To simplify notation, in the sequel we denote H(S_A) by H(A) for any set of parties A ⊆ {p_1, ..., p_n}.
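To make the notation H(A) and the role of correctness and privacy concrete, the following toy Python snippet (not part of the original survey; the helper names and the 2-out-of-2 XOR scheme are my own choices for illustration) tabulates the joint distribution of a uniformly random secret bit and its two additive shares, and evaluates the relevant conditional entropies.

```python
from itertools import product
from math import log2
from collections import defaultdict

# Toy 2-out-of-2 additive scheme over GF(2): the shares of a secret bit s are
# (r, s XOR r) for a uniformly random bit r.  We tabulate the joint
# distribution of (secret, share_1, share_2) and check the entropy conditions
# that the lower-bound arguments rely on: H(S | all shares) = 0 (correctness)
# and H(S | any single share) = H(S) (privacy).

def entropy(dist):
    """Shannon entropy (in bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, coords):
    """Marginal distribution of the given coordinate positions."""
    out = defaultdict(float)
    for outcome, p in joint.items():
        out[tuple(outcome[k] for k in coords)] += p
    return out

# Joint distribution over (secret s, share1, share2); s and r are uniform bits.
joint = defaultdict(float)
for s, r in product([0, 1], repeat=2):
    joint[(s, r, s ^ r)] += 1 / 4

def H(coords):
    return entropy(marginal(joint, coords))

print("H(S)               =", H([0]))                      # 1.0
print("H(S | share1)      =", H([0, 1]) - H([1]))          # 1.0  (privacy)
print("H(S | share2)      =", H([0, 2]) - H([2]))          # 1.0  (privacy)
print("H(S | both shares) =", H([0, 1, 2]) - H([1, 2]))    # 0.0  (correctness)
```

An unauthorized set (a single share) leaves the secret fully undetermined, while both shares determine it; these are exactly the constraints that are fed, together with the Shannon inequalities, into the lower-bound proofs.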
Secret-Sharing Schemes: A Survey <s> Limitations of Known Techniques for Lower Bounds <s> A secret sharing scheme permits a secret to be shared among participants of an n-element group in such a way that only qualified subsets of participants can recover the secret. If any nonqualified subset has absolutely no information on the secret, then the scheme is called perfect. The share in a scheme is the information that a participant must remember. ::: ::: In [3] it was proved that for a certain access structure any perfect secret sharing scheme must give some participant a share which is at least 50\percent larger than the secret size. We prove that for each n there exists an access structure on n participants so that any perfect sharing scheme must give some participant a share which is at least about $n/\log n$ times the secret size.^1 We also show that the best possible result achievable by the information-theoretic method used here is n times the secret size. ::: ::: ^1 All logarithms in this paper are of base 2. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Limitations of Known Techniques for Lower Bounds <s> The properties of the so-called basic information inequalities of Shannon's information measures are discussed. Do these properties fully characterize the entropy function? To make this question more precise, we view an entropy function as a 2n-1 dimensional vector where the coordinates are indexed by the subsets of the ground set (1, 2, ..., n). The main discovery of this paper is a new information inequality involving 4 discrete random variables which gives a negative answer to this fundamental problem of information theory. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Limitations of Known Techniques for Lower Bounds <s> When finite, Shannon entropies of all sub vectors of a random vector are considered for the coordinates of an entropic point in Euclidean space. A linear combination of the coordinates gives rise to an unconstrained information inequality if it is nonnegative for all entropic points. With at least four variables no finite set of linear combinations generates all such inequalities. This is proved by constructing explicitly an infinite sequence of new linear information inequalities and a curve in a special geometric position to the halfspaces defined by the inequalities. The inequalities are constructed recurrently by adhesive pasting of restrictions of polymatroids and the curve ranges in the closure of a set of the entropic points. <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Limitations of Known Techniques for Lower Bounds <s> An access structure specifying the qualified sets of a secret sharing scheme must have information rate less than or equal to one. The Vamos matroid induces two non-isomorphic access structures V"1 and V"6, which were shown by Marti-Farre and Padro to have information rates of at least 3/4. Beimel, Livne, and Padro showed that the information rates of V"1 and V"6 are bounded above by 10/11 and 9/10 respectively. Here we improve those upper bounds to 8/9 for V"1 and 17/19 for V"6. We also indicate a general method that allows one to read off an upper bound for the information rate of V"6 directly from the coefficients of any non-Shannon inequality with certain properties, properties that hold for all 4-variable non-Shannon inequalities known to the author. 
<s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Limitations of Known Techniques for Lower Bounds <s> The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is 2O(n), where n is the number of participants in the access structure. It is a long standing open problem to improve these schemes or prove that they cannot be improved. The best known lower bound is by Csirmaz, who proved that there exist access structures with n participants such that the size of the share of at least one party is n/logn times the secret size. Csirmaz's proof uses Shannon information inequalities, which were the only information inequalities known when Csirmaz published his result. On the negative side, Csirmaz proved that by only using Shannon information inequalities one cannot prove a lower bound of ω(n) on the share size. In the last decade, a sequence of non-Shannon information inequalities were discovered. In fact, it was proved that there are infinity many independent information inequalities even in four variables. This raises the hope that these inequalities can help in improving the lower bounds beyond n . However, we show that any information inequality with four or five variables cannot prove a lower bound of ω(n) on the share size. In addition, we show that the same negative result holds for all information inequalities with more than five variables that are known to date. <s> BIB005
Essentially all known lower bounds on the size of shares in secret-sharing schemes are implied by Claim 5. In other words, they only use the so-called Shannon information inequalities (i.e., the fact that the conditional mutual information is non-negative). Csirmaz BIB001 proved in 1994 that such proofs cannot yield a lower bound of ω(n) on the information ratio. That is, Csirmaz's lower bound is nearly optimal (up to a factor of log n) among proofs that use only Shannon inequalities. In 1998, new information inequalities were discovered by Zhang and Yeung BIB002 . Other information inequalities have been discovered since; see, e.g., . In particular, there are infinitely many independent information inequalities in 4 variables BIB003 . Such inequalities were used in BIB004 to prove lower bounds for secret-sharing schemes. Beimel and Orlov BIB005 proved that all information inequalities with 4 or 5 variables, as well as all known information inequalities with more than 5 variables, cannot prove a lower bound of ω(n) on the information ratio of secret-sharing schemes. Thus, new information inequalities with more than 5 variables should be found if we want to improve the lower bounds.
Secret-Sharing Schemes: A Survey <s> Lower Bounds for Linear Secret Sharing <s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Lower Bounds for Linear Secret Sharing <s> Monotone span programs are a linear-algebraic model of computation. They are equivalent to linear secret sharing schemes and have various applications in cryptography and complexity. A fundamental question is how the choice of the field in which the algebraic operations are performed effects the power of the span program. In this paper we prove that the power of monotone span programs over finite fields of different characteristics is incomparable; we show a super-polynomial separation between any two fields with different characteristics, answering an open problem of Pudlak and Sgall (1998). Using this result we prove a super-polynomial lower bound for monotone span programs for a function in uniform - /spl Nscr/;/spl Cscr/;/sup 2/ (and therefore in /spl Pscr/;), answering an open problem of Babai, Wigderson, and Gal (1999). Finally, we show that quasi-linear schemes, a generalization of linear secret sharing schemes introduced in Beimel and Ishai (2001), are stronger than linear secret sharing schemes. In particular, this proves, without any assumptions, that non-linear secret sharing schemes are more efficient than linear secret sharing schemes. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Lower Bounds for Linear Secret Sharing <s> Span programs provide a linear algebraic model of computation. Lower bounds for span programs imply lower bounds for formula size, symmetric branching programs, and contact schemes. Monotone span programs correspond also to linear secret-sharing schemes. We present a new technique for proving lower bounds for monotone span programs. We prove a lower bound of Ω(m2.5) for the 6-clique function. Our results improve on the previously known bounds for explicit functions. <s> BIB003
For linear secret-sharing schemes we can prove much stronger lower bounds than for general secret-sharing schemes. Recall that linear secret-sharing schemes are equivalent to monotone span programs, and we first state the results using monotone span programs. Lower bounds for monotone span programs were presented in BIB003 BIB002 ; the best known lower bound is n^{Ω(log n)}, as proved in . We present here an alternative proof of this bound. We start with a simple observation. Observation 1. Let A be a (monotone) access structure, let B ∈ A, and let C ⊆ {p_1, ..., p_n} be a set such that {p_1, ..., p_n} \ C ∉ A. Then B ∩ C ≠ ∅. The observation follows from the fact that if B ∩ C = ∅, then B ⊆ {p_1, ..., p_n} \ C, contradicting the fact that B ∈ A and {p_1, ..., p_n} \ C ∉ A. To prove the lower bound, Gál and Pudlák choose a subset of the unauthorized sets that satisfies certain properties, use this subset to construct a matrix D over F, and prove that rank_F(D) is a lower bound on the size of every monotone span program over F accepting A. Let B = {B_1, ..., B_ℓ} be the collection of minimal authorized sets in A, and consider a collection C = {(C_0^1, C_1^1), ..., (C_0^m, C_1^m)} of pairs of disjoint sets of parties. To prove the lower bound, Gál and Pudlák use a collection C such that, for every i and j, exactly one of two conditions relating B_i and the pair (C_0^j, C_1^j) holds (this is the unique intersection property for A used below).
Secret-Sharing Schemes: A Survey <s> For a set <s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> For a set <s> A linear algebraic model of computation the span program, is introduced, and several upper and lower bounds on it are proved. These results yield applications in complexity and cryptography. The proof of the main connection, between span programs and counting branching programs, uses a variant of Razborov's general approximation method. > <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> For a set <s> The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is 2O(n), where n is the number of participants in the access structure. It is a long standing open problem to improve these schemes or prove that they cannot be improved. The best known lower bound is by Csirmaz, who proved that there exist access structures with n participants such that the size of the share of at least one party is n/logn times the secret size. Csirmaz's proof uses Shannon information inequalities, which were the only information inequalities known when Csirmaz published his result. On the negative side, Csirmaz proved that by only using Shannon information inequalities one cannot prove a lower bound of ω(n) on the share size. In the last decade, a sequence of non-Shannon information inequalities were discovered. In fact, it was proved that there are infinity many independent information inequalities even in four variables. This raises the hope that these inequalities can help in improving the lower bounds beyond n . However, we show that any information inequality with four or five variables cannot prove a lower bound of ω(n) on the share size. In addition, we show that the same negative result holds for all information inequalities with more than five variables that are known to date. <s> BIB003
For a set A ⊆ U, denote by N(A) the set of vertices in V each of which is a neighbor of all vertices in A. A bipartite graph G = (U, V, E) satisfies the isolated neighbor property for t if for every two disjoint sets A, B ⊆ U with |A| = |B| = t there exists a vertex in V that is a neighbor of all vertices in A and is not a neighbor of any vertex in B. Let G = (U, V, E) be a bipartite graph satisfying the isolated neighbor property for t, where the vertices of the graph are parties, i.e., U ∪ V = {p_1, ..., p_n}. We define an access structure N_G with |U| + |V| parties whose minimal authorized sets are the sets A ∪ N(A), where A ⊂ U and |A| = t. Example 8. Consider the graph described in Figure 1. This is a trivial graph satisfying the isolated neighbor property for t = 2. For example, consider the disjoint sets {p_1, p_2} and {p_3, p_4}; vertex p_5 is a neighbor of all the vertices in the first set, while it is not a neighbor of any vertex in the second set. The access structure N_G defined for this graph is the access structure defined in Example 6. Lemma. Every monotone span program over a field F accepting N_G has size at least \binom{|U|}{t}. Proof. We prove the lemma using Theorem 4. We take C to be all the pairs (C_0, C_1) where C_0 ⊆ U with |C_0| = t and C_1 = {v ∈ V : (u, v) ∉ E for every u ∈ C_0}, that is, C_1 contains all vertices that are not neighbors of any vertex in C_0. We first claim that the collection C satisfies the unique intersection property for A: fix a pair (C_0, C_1) ∈ C and let T = {p_1, ..., p_n} \ (C_0 ∪ C_1). We need to show that T ∉ A, that is, T does not contain any minimal authorized set. Let A ⊆ U ∩ T be any set such that |A| = t. Then A and C_0 are disjoint sets of size t, so by the isolated neighbor property there is a vertex v ∈ V such that v ∈ N(A) and v ∈ C_1 (v is a neighbor of all vertices in A and of no vertex in C_0), that is, v ∉ T. In other words, T does not contain any minimal authorized set A ∪ N(A). Thus, by Theorem 4, the size of every monotone span program accepting A is at least rank_F(D), where D is the matrix whose rows are indexed by the minimal authorized sets and whose columns are indexed by the pairs in C. In this case, for every A, C_0 such that |A| = |C_0| = t, the entry corresponding to A ∪ N(A) and (C_0, C_1) is zero if A ∩ C_0 = ∅ and is one otherwise. That is, D is the (n, t)-disjointness matrix, which has full rank over every field (see, e.g., Example 2.12 in ). BIB003 The rank of D is, thus, the number of minimal authorized sets in A, namely \binom{|U|}{t}. As there exist graphs which satisfy the isolated neighbor property for t = Ω(log n), e.g., the Paley graph BIB001 , we derive the promised lower bound. Theorem 5. For every n, there exists an access structure N_n such that every monotone span program over any field accepting it has size n^{Ω(log n)}. As monotone span programs are equivalent to linear secret-sharing schemes BIB002 , the same lower bound applies to linear secret-sharing schemes.
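The heart of the argument is that the rank of the matrix D equals the number of minimal authorized sets. The sketch below (my own illustrative code, not the exact matrix, indexing, or field used above; the rank is computed over the rationals for concreteness) builds the disjointness matrix on the 2-subsets of a 6-element set and checks that it has full rank, i.e., rank equal to the number of t-subsets.

```python
from fractions import Fraction
from itertools import combinations

# Toy illustration of the rank argument: rows and columns are indexed by the
# t-subsets of U (standing in for the minimal sets A and the sets C_0), and
# the entry records whether the two subsets are disjoint.  We check that this
# disjointness matrix has full rank over the rationals, so the rank lower
# bound equals the number of t-subsets of U.

def rank_over_Q(matrix):
    """Rank of an integer matrix via exact Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rows):
            if r != rank and m[r][col] != 0:
                factor = m[r][col] / m[rank][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

U, t = range(6), 2                       # |U| = 6 vertices on one side, t = 2
subsets = list(combinations(U, t))       # the t-subsets; C(6, 2) = 15 of them
D = [[1 if set(A).isdisjoint(C0) else 0 for C0 in subsets] for A in subsets]

print("number of t-subsets:", len(subsets))    # 15
print("rank of D over Q:   ", rank_over_Q(D))  # 15, i.e., full rank
```

With t = Θ(log n), the number of t-subsets, and hence the rank lower bound, is n^{Θ(log n)}, which is where the super-polynomial bound of Theorem 5 comes from.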
Secret-Sharing Schemes: A Survey <s> Oblivious-Transfer Protocols from Secret-Sharing <s> Randomized protocols for signing contracts, certified mail, and flipping a coin are presented. The protocols use a 1-out-of-2 oblivious transfer subprotocol which is axiomatically defined. The 1-out-of-2 oblivious transfer allows one party to transfer exactly one secret, out of two recognizable secrets, to his counterpart. The first (second) secret is received with probability one half, while the sender is ignorant of which secret has been received. An implementation of the 1-out-of-2 oblivious transfer, using any public key cryptosystem, is presented. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Oblivious-Transfer Protocols from Secret-Sharing <s> We present strong evidence that the implication, “if one-way permutations exist, then secure secret key agreement is possible”, is not provable by standard techniques. Since both sides of this implication are widely believed true in real life, to show that the implication is false requires a new model. We consider a world where all parties have access to a black box for a randomly selected permutation. Being totally random, this permutation will be strongly one-way in a provable, information-theoretic way. We show that, if P = NP, no protocol for secret key agreement is secure in such a setting. Thus, to prove that a secret key agreement protocol which uses a one-way permutation as a black box is secure is as hard as proving P ≠ NP. We also obtain, as a corollary, that there is an oracle relative to which the implication is false, i.e., there is a one-way permutation, yet secret-exchange is impossible. Thus, no technique which relativizes can prove that secret exchange can be based on any one-way permutation. Our results present a general framework for proving statements of the form, “Cryptographic application X is not likely possible based solely on complexity assumption Y.” <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Oblivious-Transfer Protocols from Secret-Sharing <s> Much research in theoretical cryptography has been centered around finding the weakest possible cryptographic assumptions required to implement major primitives. Ever since Diffie and Hellman first suggested that modern cryptography be based on one-way functions (which are easy to compute, but hard to invert) and trapdoor functions (one-way functions which are, however, easy to invert given an associated secret), researchers have been busy trying to construct schemes that only require one of these general assumptions. For example, pseudorandom generators at first could only be constructed from a specific hard problem, such as discrete log IBM2]. Later it was shown how to construct pseudo-random generators given any one-way permutation [Y], and from other weak forms of one-way functions [Le, GKL]. Finally JILL] proved that the existence of any one-way function was a necessary and sufficient condition for the existence of pseudo-random generators. Similarly, the existence of trapdoor permutations can be shown to be necessary and sufficient for secure encryption schemes. However, progress on characterizing the requirements for secure digital signatures has been slower in coming. We will be interested in signature schemes which are secure agMnst existential forgery under adaptive chosen message attacks. This notion of security, as well as the first construction of digital signatures secure in this sense was provided by [GMR]. 
Their scheme was based on factoring, or more generally, the existence of clawfree pairs. More recently, signatures based on any trap*supported in p a r t b y a N a t i o n a l Science F o u n d a t i o n G r a d u a t e Fellowship, D A R P A c o n t r a c t N00014-80-C-0622, a n d Air Force G r a n t A F S O R - 8 6 - 0 0 7 8 <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Oblivious-Transfer Protocols from Secret-Sharing <s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Oblivious-Transfer Protocols from Secret-Sharing <s> We give a unified account of classical secret-sharing goals from a modern cryptographic vantage. Our treatment encompasses perfect, statistical, and computational secret sharing; static and dynamic adversaries; schemes with or without robustness; schemes where a participant recovers the secret and those where an external party does so. We then show that Krawczyk's 1993 protocol for robust computational secret sharing (RCSS) need not be secure, even in the random-oracle model and for threshold schemes, if the encryption primitive it uses satisfies only one-query indistinguishability (ind1), the only notion Krawczyk defines. Nonetheless, we show that the protocol is secure (in the random-oracle model, for threshold schemes) if the encryption scheme also satisfies one-query key-unrecoverability (key1). Since practical encryption schemes are ind1+key1 secure, our result effectively shows that Krawczyk's RCSS protocol is sound (in the random-oracle model, for threshold schemes). Finally, we prove the security for a variant of Krawczyk's protocol, in the standard model and for arbitrary access structures, assuming ind1 encryption and a statistically-hiding, weakly-binding commitment scheme. <s> BIB005
To appreciate the result presented below we start with some background. Cryptographic protocols are built based on assumptions. These assumptions can be specific (e.g., factoring is hard) or generic (e.g., there exist one-way functions or there exist trapdoor permutations). The minimal generic assumption is the existence of one-way functions. This assumption implies, for example, that pseudorandom generators and private-key encryption systems exist, and that digital signatures exist BIB003 . However, many other tasks are not known to follow from one-way functions. Impagliazzo and Rudich BIB002 showed that using black-box reductions one cannot construct oblivious-transfer protocols based on one-way functions. The next result of Rudich shows how to construct oblivious-transfer protocols based on one-way functions and an efficient secret-sharing scheme for A_ham. By Theorem 6, we cannot hope for a perfect secret-sharing scheme for A_ham. However, if one can construct computational secret-sharing schemes realizing A_ham based on one-way functions, then we get that one-way functions imply oblivious-transfer protocols. This would solve a major open problem in cryptography, i.e., using Impagliazzo's terminology , it would prove that Minicrypt = Cryptomania. As Rudich's result uses a non-black-box reduction, such a construction would bypass the impossibility result of BIB002 . Preliminaries. In this survey we will not define computational secret-sharing schemes; the definition can be found in BIB005 . In such schemes we require that the sharing and reconstruction are done in polynomial time in the secret length and the number of parties in the access structure. Furthermore, we require that a polynomial-time adversary controlling an unauthorized set cannot distinguish between shares of one secret and shares of another secret. Rudich considers schemes for A_ham, where the requirement on efficient reconstruction is quite weak: an authorized subset E can efficiently reconstruct the secret given that it knows a Hamiltonian cycle in E. Thus, this weaker requirement avoids problems arising from the NP-completeness of the Hamiltonian problem. Next, we recall the notion of 1-out-of-2 oblivious transfer BIB001 . This is a protocol between two parties: a sender holding two bits b_0, b_1 and a receiver holding an index i ∈ {0, 1}. At the end of the protocol, the receiver should hold b_i without gaining any knowledge on the other bit b_{1-i}. The sender should not be able to deduce any information on i. Intuitively, the sender sends exactly one of its bits to the receiver; however, it is oblivious to which bit it sends. As in Section 4, we consider honest-but-curious parties. As the result of BIB002 already applies to this setting, constructing oblivious-transfer protocols for honest-but-curious parties is already interesting. Furthermore, by a transformation of , any such protocol can be transformed into a protocol secure against malicious parties assuming that one-way functions exist. We are ready to state and prove Rudich's result. Theorem 7. If one-way functions exist and there is an efficient computational secret-sharing scheme realizing A_ham, then there is an oblivious-transfer protocol. Proof. Let Gen be a pseudorandom generator stretching ℓ bits to 2ℓ bits. By , if one-way functions exist, then such a Gen exists. Define the language L_Gen = {y : ∃x Gen(x) = y}. Clearly, L_Gen ∈ NP. Let f be a polynomial-time reduction from L_Gen to Hamiltonian, that is, f can be computed in polynomial time and y ∈ L_Gen iff G = f(y) ∈ Hamiltonian.
Such an f exists with the additional property that a witness for y can be efficiently translated into a witness for G = f(y); that is, given y ∈ L_Gen, a witness x for it (i.e., Gen(x) = y), and G = f(y), one can find in polynomial time a Hamiltonian cycle in G. The next protocol is an oblivious-transfer protocol (for honest-but-curious parties):
Receiver's input: i ∈ {0, 1} and security parameter 1^ℓ.
Sender's input: b_0, b_1 and security parameter 1^ℓ.
Instructions for the receiver:
- Choose at random x_i ∈ {0, 1}^ℓ and compute y_i = Gen(x_i).
- Choose at random y_{1-i} ∈ {0, 1}^{2ℓ}.
- For j ∈ {0, 1}, compute G_j = f(y_j) and send G_0, G_1 to the sender.
Instructions for the sender:
- Let G_0 = (V_0, E_0) and G_1 = (V_1, E_1) be the graphs that the receiver sends.
- For j ∈ {0, 1}, share the bit b_j using the scheme for the Hamiltonian access structure A_ham for the complete graph with |V_j| vertices, and send the shares of the parties corresponding to the edges in E_j to the receiver.
Instructions for the receiver: Compute a Hamiltonian cycle in G_i from x_i BIB004 and y_i, and reconstruct b_i from the shares of this cycle for the graph G_i.
The privacy of the receiver is protected since the sender cannot efficiently distinguish between a string sampled according to the uniform distribution in {0, 1}^{2ℓ} and an output of the generator on a string sampled uniformly in {0, 1}^ℓ. In particular, the sender cannot efficiently distinguish between the output of the reduction f on two such strings. The privacy of the sender is protected against an honest-but-curious receiver since with probability at least 1 - 1/2^ℓ the string y_{1-i} is not in the range of Gen, thus G_{1-i} has no Hamiltonian cycle, that is, E_{1-i} is an unauthorized set. In this case, the secret b_{1-i} cannot be efficiently computed from the shares of E_{1-i}. If we hope to construct an oblivious-transfer protocol using the approach of Theorem 7, then we should construct an efficient computational scheme for the Hamiltonian access structure based on the assumption that one-way functions exist. For feasibility purposes it would be interesting to construct a computational secret-sharing scheme for Hamiltonicity based on stronger cryptographic assumptions, e.g., that trapdoor permutations exist.
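The missing ingredient in the protocol above is an efficient scheme for A_ham. For tiny graphs one can fall back on a brute-force perfect scheme that shares the secret additively along every Hamiltonian cycle; the sketch below (my own toy code with illustrative names, exponential in the number of vertices) only demonstrates the access structure and the reconstruction-from-a-known-cycle requirement, not the efficient computational scheme the theorem asks for.

```python
import random
from functools import reduce
from itertools import combinations, permutations
from operator import xor

# Brute-force perfect scheme for A_ham on a tiny complete graph K_n: for every
# Hamiltonian cycle, the secret bit is split additively (XOR) among the edges
# of that cycle.  An edge set containing some Hamiltonian cycle (and knowing
# which one) can XOR the corresponding sub-shares to recover the secret; an
# edge set containing no Hamiltonian cycle misses at least one sub-share of
# every cycle, so the sub-shares it holds are uniform.

def hamiltonian_cycles(n):
    """All Hamiltonian cycles of K_n, each as a frozenset of edges."""
    cycles = set()
    for perm in permutations(range(1, n)):        # fix vertex 0 to factor out rotations
        order = (0,) + perm
        cycles.add(frozenset(frozenset({order[k], order[(k + 1) % n]})
                             for k in range(n)))
    return list(cycles)

def share(secret_bit, n, cycles, rng=random):
    """Give every edge of K_n one sub-share per Hamiltonian cycle."""
    shares = {frozenset(e): {} for e in combinations(range(n), 2)}
    for idx, cycle in enumerate(cycles):
        cycle_edges = list(cycle)
        sub = [rng.randrange(2) for _ in cycle_edges[:-1]]
        sub.append(secret_bit ^ reduce(xor, sub, 0))   # sub-shares of a cycle XOR to the secret
        for e, bit in zip(cycle_edges, sub):
            shares[e][idx] = bit
    return shares

def reconstruct(shares, cycle, idx):
    """XOR the sub-shares of a known Hamiltonian cycle (with index idx)."""
    return reduce(xor, (shares[e][idx] for e in cycle), 0)

n, secret = 4, 1
cycles = hamiltonian_cycles(n)                    # K_4 has 3 Hamiltonian cycles
all_shares = share(secret, n, cycles)
print(reconstruct(all_shares, cycles[0], 0) == secret)   # True
```

Since the number of Hamiltonian cycles grows exponentially with the number of vertices, this stand-in is useless beyond toy graphs, which is precisely why an efficient computational scheme for A_ham would be a significant result.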
Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Secret Sharing from the perspective of threshold schemes has been well-studied over the past decade. Threshold schemes, however, can only handle a small fraction of the secret sharing functions which we may wish to form. For example, if it is desirable to divide a secret among four participants A, B, C, and D in such a way that either A together with B can reconstruct the secret or C together with D can reconstruct the secret, then threshold schemes (even with weighting) are provably insufficient.This paper will present general methods for constructing secret sharing schemes for any given secret sharing function. There is a natural correspondence between the set of "generalized" secret sharing functions and the set of monotone functions, and tools developed for simplifying the latter set can be applied equally well to the former set. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> In a secret sharing scheme, a dealer has a secret. The dealer gives each participant in the scheme a share of the secret. There is a set Γ of subsets of the participants with the property that any subset of participants that is in Γ can determine the secret. In a perfect secret sharing scheme, any subset of participants that is not in Γ cannot obtain any information about the secret. We will say that a perfect secret sharing scheme is ideal if all of the shares are from the same domain as the secret. Shamir and Blakley constructed ideal threshold schemes, and Benaloh has constructed other ideal secret sharing schemes. In this paper, we construct ideal secret sharing schemes for more general access structures which include the multilevel and compartmented access structures proposed by Simmons. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> A linear algebraic model of computation the span program, is introduced, and several upper and lower bounds on it are proved. These results yield applications in complexity and cryptography. The proof of the main connection, between span programs and counting branching programs, uses a variant of Razborov's general approximation method. > <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> A secret-sharing scheme enables a dealer to distribute a secret among $n$ parties such that only some predefined authorized sets of parties will be able to reconstruct the secret from their shares. The (monotone) collection of authorized sets is called an access structure, and is freely identified with its characteristic monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$. A family of secret-sharing schemes is called efficient if the total length of the n shares is polynomial in n. Most previously known secret-sharing schemes belonged to a class of linear schemes, whose complexity coincides with the monotone span program size of their access structure. Prior to this work there was no evidence that nonlinear schemes can be significantly more efficient than linear schemes, and in particular there were no candidates for schemes efficiently realizing access structures which do not lie in NC. ::: The main contribution of this work is the construction of two efficient nonlinear schemes: (1) A scheme with perfect privacy whose access structure is conjectured not to lie in NC, and (2) a scheme with statistical privacy whose access structure is conjectured not to lie in P/poly. 
Another contribution is the study of a class of nonlinear schemes, termed quasi-linear schemes, obtained by composing linear schemes over different fields. While these schemes are (superpolynomially) more powerful than linear schemes, we show that they cannot efficiently realize access structures outside NC. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Monotone span programs are a linear-algebraic model of computation. They are equivalent to linear secret sharing schemes and have various applications in cryptography and complexity. A fundamental question is how the choice of the field in which the algebraic operations are performed effects the power of the span program. In this paper we prove that the power of monotone span programs over finite fields of different characteristics is incomparable; we show a super-polynomial separation between any two fields with different characteristics, answering an open problem of Pudlak and Sgall (1998). Using this result we prove a super-polynomial lower bound for monotone span programs for a function in uniform - /spl Nscr/;/spl Cscr/;/sup 2/ (and therefore in /spl Pscr/;), answering an open problem of Babai, Wigderson, and Gal (1999). Finally, we show that quasi-linear schemes, a generalization of linear secret sharing schemes introduced in Beimel and Ishai (2001), are stronger than linear secret sharing schemes. In particular, this proves, without any assumptions, that non-linear secret sharing schemes are more efficient than linear secret sharing schemes. <s> BIB005 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Secret sharing is a very important primitive in cryptography and distributed computing. In this work, we consider computational secret sharing (CSS) which provably allows a smaller share size (and hence greater efficiency) than its information-theoretic counterparts. Extant CSS schemes result in succinct share-size and are in a few cases, like threshold access structures, optimal. However, in general, they are not efficient (share-size not polynomial in the number of players n), since they either assume efficient perfect schemes for the given access structure (as in [10]) or make use of exponential (in n) amount of public information (like in [5]). In this paper, our goal is to explore other classes of access structures that admit of efficient CSS, without making any other assumptions. We construct efficient CSS schemes for every access structure in monotone P. As of now, most of the efficient information-theoretic schemes known are for access structures in algebraic NC 2. Monotone P and algebraic NC 2 are not comparable in the sense one does not include other. Thus our work leads to secret sharing schemes for a new class of access structures. In the second part of the paper, we introduce the notion of secret sharing with a semi-trusted third party, and prove that in this relaxed model efficient CSS schemes exist for a wider class of access structures, namely monotone NP. <s> BIB006 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Monotone span programs represent a linear-algebraic model of computation. They are equivalent to linear secret sharing schemes and have various applications in cryptography and complexity. A fundamental question regarding them is how the choice of the field in which the algebraic operations are performed affects the power of the span program. 
In this paper we prove that the power of monotone span programs over finite fields of different characteristics is incomparable; we show a superpolynomial separation between any two fields with different characteristics, solving an open problem of Pudlak and Sgall [Algebraic models of computation and interpolation for algebraic proof systems, in Proof Complexity and Feasible Arithmetic, DIMACS Ser. Discrete Math. Theoret. Comput. Sci. 39, P. W. Beame and S. Buss, eds., AMS, Providence, RI, 1998, pp. 279--296]. Using this result we prove a superpolynomial lower bound for monotone span programs for a function in uniform-${\cal N}C^2$ (and therefore in ${\cal P}$), solving an open problem of Babai, Gal, and Wigderson [Combinatorica, 19 (1999), pp. 301--319]. (All previous superpolynomial lower bounds for monotone span programs were for functions not known to be in ${\cal P}$.) Finally, we show that quasi-linear secret sharing schemes, a generalization of linear secret sharing schemes introduced in Beimel and Ishai [On the power of nonlinear secret-sharing, in Proceedings of the 16th Annual IEEE Conference on Computational Complexity, 2001, pp. 188--202], are stronger than linear secret sharing schemes. In particular, this proves, without any assumptions, that nonlinear secret sharing schemes are more efficient than linear secret sharing schemes. <s> BIB007 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> In a secret sharing scheme, a dealer has a secret key. There is a finite set P of participants and a set ? of subsets of P. A secret sharing scheme with ? as the access structure is a method which the dealer can use to distribute shares to each participant so that a subset of participants can determine the key if and only if that subset is in ?. The share of a participant is the information sent by the dealer in private to the participant. A secret sharing scheme is ideal if any subset of participants who can use their shares to determine any information about the key can in fact actually determine the key, and if the set of possible shares is the same as the set of possible keys. In this paper, we show a relationship between ideal secret sharing schemes and matroids. <s> BIB008
In this survey we consider secret-sharing schemes, a basic tool in cryptography. We showed several constructions of secret-sharing schemes, starting from the scheme of . We then described its generalization by BIB001 , showing that if an access structure can be described by a small monotone formula, then it has an efficient secret-sharing scheme (a small sketch of this construction is given at the end of this section). We also showed the construction of secret-sharing schemes from monotone span programs BIB002 BIB003 . Monotone span programs are equivalent to linear secret-sharing schemes and are equivalent to schemes where the reconstruction is linear . As every monotone formula can be transformed into a monotone span program of the same size, the monotone span program construction is a generalization of the construction of BIB001 . Furthermore, there are functions that have small monotone span programs and do not have small monotone formulae ; thus, this is a strict generalization. Finally, we presented the multi-linear construction of secret-sharing schemes. All the constructions presented in Section 3 are linear over a finite field (some of the schemes work also over finite groups, e.g., the scheme of Benaloh and Leichter). The linearity of a scheme is important in many applications, as we demonstrated in Section 4 for the construction of secure multiparty protocols for general functions. Thus, it is interesting to understand the access structures that have efficient linear secret-sharing schemes. The access structures that can be efficiently realized by linear and multi-linear secret-sharing schemes are characterized by functions that have polynomial-size monotone span programs or, more generally, multi-target monotone span programs. We would like to consider the class of access structures that can be realized by linear secret-sharing schemes with polynomial share length. As this discussion is asymptotic, we consider a sequence of access structures {A_n}_{n∈N}, where A_n has n parties. As linear algebra can be computed in NC (informally, NC is the class of problems that can be solved by parallel algorithms with polynomially many processors and poly-logarithmic running time), every sequence of access structures that has efficient linear secret-sharing schemes can be recognized by NC algorithms. For example, if P ≠ NC, then access structures recognized by monotone P-complete problems do not have efficient linear secret-sharing schemes. The limitations of linear secret-sharing schemes raise the question whether there are more powerful non-linear secret-sharing schemes. Beimel and Ishai BIB004 constructed efficient non-linear schemes for access structures that are not known to be in P (e.g., for an access structure related to the quadratic residuosity problem over N = pq). Thus, non-linear schemes are probably stronger than linear schemes. Furthermore, Beimel and Ishai defined quasi-linear schemes, which are compositions of linear schemes over different fields. Beimel and Weinreb BIB005 showed, without any assumptions, that quasi-linear schemes are stronger than linear schemes; that is, there exists an access structure that has quasi-linear schemes with constant information ratio while every linear secret-sharing scheme realizing this access structure has super-polynomial information ratio. However, Beimel and Ishai BIB004 proved that if an access structure has an efficient quasi-linear scheme, then it can be recognized by an NC algorithm. Thus, the class of access structures realized by efficient quasi-linear schemes is also limited.
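As a reminder of how the monotone-formula construction of BIB001 recalled above works in the simplest case, here is a minimal Python sketch (my own code and naming, for a one-bit secret over GF(2)): an OR gate hands the same value to both children and an AND gate splits it additively, so each party's share is the list of values at its leaves.

```python
import random

# Sharing a bit along a monotone formula: OR duplicates the incoming value,
# AND splits it into two values that XOR to the incoming value, and each
# leaf's value is appended to the share of the corresponding party.

def share_formula(formula, value, shares, rng=random):
    """formula is a party name (leaf) or a tuple ('or'|'and', left, right)."""
    if isinstance(formula, str):                    # a leaf labeled by a party
        shares.setdefault(formula, []).append(value)
    else:
        op, left, right = formula
        if op == 'or':                              # either child alone can recover the value
            share_formula(left, value, shares, rng)
            share_formula(right, value, shares, rng)
        elif op == 'and':                           # additive 2-out-of-2 split
            r = rng.randrange(2)
            share_formula(left, r, shares, rng)
            share_formula(right, value ^ r, shares, rng)

# (p1 AND p2) OR (p3 AND p4): either pair reconstructs by XORing its shares.
formula = ('or', ('and', 'p1', 'p2'), ('and', 'p3', 'p4'))
secret, shares = 1, {}
share_formula(formula, secret, shares)
print(shares)                                             # one sub-share per leaf here
print((shares['p1'][0] ^ shares['p2'][0]) == secret)      # True
print((shares['p3'][0] ^ shares['p4'][0]) == secret)      # True
```

The two values produced at an AND gate are individually uniform, which is what gives privacy against unauthorized sets; the share size grows with the number of leaves, so the construction is efficient exactly when the monotone formula is small.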
Another non-linear construction of secret-sharing schemes is an unpublished result of Yao (see also BIB006 ). Yao showed that if an access structure can be described by a small monotone circuit, then it has an efficient computational secret-sharing scheme. This generalizes the results of BIB001 showing that if an access structure can be described by a small monotone formula, then it has an efficient perfect secret-sharing scheme. We will not describe Yao's construction in this survey. An additional topic that we will not cover in this survey is ideal secret-sharing schemes. By Lemma 2, the size of the share of each party is at least the size of the secret. An ideal secret-sharing scheme is a scheme in which the size of the share of each party is exactly the size of the secret. For example, Shamir's scheme is ideal. An access structure is ideal if it has an ideal scheme over some finite domain of secrets. For example, threshold access structures are ideal, while the access structure described in Example 5 is not ideal. Brickell BIB002 considered ideal schemes and constructed ideal schemes for some access structures, e.g., for hierarchical access structures. Brickell and Davenport BIB008 showed an interesting connection between ideal access structures and matroids:
- If an access structure is ideal, then it is a matroid port.
- If an access structure is a matroid port of a representable matroid, then the access structure is ideal.
Following this work, many works have constructed ideal schemes and have studied ideal access structures and matroids. For example, Martí-Farré and Padró BIB007 showed that if an access structure is not a matroid port, then the information ratio of every secret-sharing scheme realizing it is at least 1.5 (compared to information ratio 1 for ideal schemes).
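To see what "ideal" means in the simplest case, the following sketch (toy parameters and my own code, included only as an illustration) implements Shamir's t-out-of-n scheme over a prime field: the secret and every share are single elements of the same field, so the share size equals the secret size.

```python
import random

P = 2**61 - 1          # a prime; all arithmetic is in the field GF(P)

def shamir_share(secret, t, n, rng=random):
    """Party j gets f(j) for a random degree-(t-1) polynomial f with f(0) = secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return {j: f(j) for j in range(1, n + 1)}

def shamir_reconstruct(points):
    """Lagrange interpolation at 0 from t points {x: f(x)}."""
    secret = 0
    for xj, yj in points.items():
        num, den = 1, 1
        for xm in points:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P   # den^(-1) mod P
    return secret

shares = shamir_share(secret=123456789, t=3, n=5)
print(shamir_reconstruct({j: shares[j] for j in [1, 3, 5]}))     # 123456789
```

Every share is one element of GF(P), exactly like the secret, which is the defining property of an ideal scheme; by Lemma 2 this is the best possible share size.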
Secret-Sharing Schemes: A Survey <s> Question 3. <s> Weighted threshold functions with positive weights are a natural generalization of unweighted threshold functions. These functions are clearly monotone. However, the naive way of computing them is adding the weights of the satisfied variables and checking if the sum is greater than the threshold; this algorithm is inherently non-monotone since addition is a non-monotone function. In this work we by-pass this addition step and construct a polynomial size logarithmic depth unbounded fan-in monotone circuit for every weighted threshold function, i.e., we show that weighted threshold functions are in mAC^1. (To the best of our knowledge, prior to our work no polynomial monotone circuits were known for weighted threshold functions.) Our monotone circuits are applicable for the cryptographic tool of secret sharing schemes. Using general results for compiling monotone circuits (Yao, 1989) and monotone formulae (Benaloh and Leichter, 1990) into secret sharing schemes, we get secret sharing schemes for every weighted threshold access structure. Specifically, we get: (1) information-theoretic secret sharing schemes where the size of each share is quasi-polynomial in the number of users, and (2) computational secret sharing schemes where the size of each share is polynomial in the number of users. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Question 3. <s> In this work we study linear secret sharing schemes for s-tconnectivity in directed graphs. In such schemes the parties are edges of a complete directed graph, and a set of parties (i.e., edges) can reconstruct the secret if it contains a path from node sto node t. We prove that in every linear secret sharing scheme realizing the st-con function on a directed graph with nedges the total size of the shares is i¾?(n1.5). This should be contrasted with s-tconnectivity in undirected graphs, where there is a scheme with total share size n. Our result is actually a lower bound on the size monotone span programs for st i¾? con, where a monotone span program is a linear-algebraic model of computation equivalent to linear secret sharing schemes. Our results imply the best known separation between the power of monotone and non-monotone span programs. Finally, our results imply the same lower bounds for matching. <s> BIB002
Prove that there exists an explicit access structure such that the information ratio of every linear secret-sharing scheme realizing it is 2^{Ω(n)}. In this survey, we described linear and multi-linear secret-sharing schemes. It is known that multi-linear schemes are more efficient than linear schemes for small access structures, e.g., . However, how much can be gained by using multi-linear schemes instead of linear schemes is open. There are interesting access structures for which we do not know whether they have efficient schemes. The first is the directed connectivity access structure, whose parties are edges of a complete directed graph and whose authorized sets are the sets of edges containing a path from v_1 to v_m. As there is a small monotone circuit for this access structure, by the result mentioned above it has an efficient computational scheme. It is not known whether this access structure can be described by a small monotone span program, and it is open whether it has an efficient perfect scheme. In BIB002 , it was proved that every monotone span program accepting the directed connectivity access structure has size Ω(n^{3/2}). In comparison, the undirected connectivity access structure has an efficient perfect scheme [15] (see Section 3.2). The second access structure for which we do not know whether it has an efficient scheme is the perfect matching access structure. The parties of this access structure are edges of a complete undirected graph and the authorized sets are the sets of edges containing a perfect matching. It is not even known whether this access structure has an efficient computational scheme, as every monotone circuit for perfect matching has super-polynomial size. We remark that an efficient scheme for this access structure implies an efficient scheme for the directed connectivity access structure. The third interesting family of access structures is weighted threshold access structures. In such an access structure each party has a weight and there is a threshold; a set of parties is authorized if the sum of the weights of the parties in the set is larger than the threshold. For these access structures there is an efficient computational scheme BIB001 and a perfect scheme with n^{O(log n)}-long shares . It is open whether these access structures have a perfect scheme with polynomial-size shares. Furthermore, it is open whether they can be described by polynomial-size monotone formulae.
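For concreteness, the snippet below (my own illustrative code and encodings, not from the survey) decides membership in the three access structures just discussed: directed s-t connectivity, containment of a perfect matching (checked by brute force for small vertex sets), and weighted threshold. Deciding authorization is easy; the open questions above concern only whether a secret can be shared for these structures efficiently.

```python
# Toy membership tests for the three access structures discussed above.
# They only decide whether a given set of parties is authorized.

def directed_connectivity_authorized(edges, s, t):
    """Parties are directed edges; authorized iff they contain a directed path from s to t."""
    reachable, frontier = {s}, [s]
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            if a == u and b not in reachable:
                reachable.add(b)
                frontier.append(b)
    return t in reachable

def contains_perfect_matching(edges, vertices):
    """Parties are undirected edges; authorized iff they contain a perfect matching on vertices."""
    vertices = sorted(vertices)
    if not vertices:
        return True
    v, rest = vertices[0], vertices[1:]
    return any(frozenset({v, u}) in edges and
               contains_perfect_matching(edges, [w for w in rest if w != u])
               for u in rest)

def weighted_threshold_authorized(parties, weights, threshold):
    """Authorized iff the total weight of the parties is larger than the threshold."""
    return sum(weights[p] for p in parties) > threshold

print(directed_connectivity_authorized({(1, 2), (2, 3)}, s=1, t=3))                 # True
print(contains_perfect_matching({frozenset({0, 1}), frozenset({2, 3})}, range(4)))  # True
print(weighted_threshold_authorized({'a', 'c'}, {'a': 3, 'b': 1, 'c': 2}, 4))       # True
```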
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> There is an urgent need to reduce the growing backlog of forensic examinations in Digital Forensics Laboratories (DFLs). Currently, DFLs routinely create forensic duplicates and perform in-depth forensic examinations of all submitted media. This approach is rapidly becoming untenable as more cases involve increasing quantities of digital evidence. A more efficient and effective three-tiered strategy for performing forensic examinations will enable DFLs to produce useful results in a timely manner at different phases of an investigation, and will reduce unnecessary expenditure of resources on less serious matters. The three levels of forensic examination are described along with practical examples and suitable tools. Realizing that this is not simply a technical problem, we address the need to update training and establish thresholds in DFLs. Threshold considerations include the likelihood of missing exculpatory evidence and seriousness of the offense. We conclude with the implications of scaling forensic examinations to the investigation. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Digital triage is a pre-digital-forensic phase that sometimes takes place as a way of gathering quick intelligence. Although effort has been undertaken to model the digital forensics process, little has been done to date to model digital triage. This work discuses the further development of a model that does attempt to address digital triage the Partially-automated Crime Specific Digital Triage Process model. The model itself will be presented along with a description of how its automated functionality was implemented to facilitate model testing. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories. The time it takes to complete these stages is quickly becoming a serious problem. Digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations with the current methodologies. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can assist the examiner with intelligence that can be used to prioritize and lead the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> In enterprise environments, digital forensic analysis generates data volumes that traditional forensic methods are no longer prepared to handle. Triaging has been proposed as a solution to systematically prioritize the acquisition and analysis of digital evidence. We explore the application of automated triaging processes in such settings, where reliability and customizability are crucial for a successful deployment. 
We specifically examine the use of GRR Rapid Response (GRR) - an advanced open source distributed enterprise forensics system - in the triaging stage of common incident response investigations. We show how this system can be leveraged for automated prioritization of evidence across the whole enterprise fleet and describe the implementation details required to obtain sufficient robustness for large scale enterprise deployment. We analyze the performance of the system by simulating several realistic incidents and discuss some of the limitations of distributed agent based systems for enterprise triaging. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> In many police investigations today, computer systems are somehow involved. The number and capacity of computer systems needing to be seized and examined is increasing, and in some cases it may be necessary to quickly find a single computer system within a large number of computers in a network. To investigate potential evidence from a large quantity of seized computer system, or from a computer network with multiple clients, triage analysis may be used. In this work we first define triage based on the medical definition. From this definition, we describe a PXE-based client-server environment that allows for triage tasks to be conducted over the network from a central triage server. Finally, three real world cases are described in which the proposed triage solution was used. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Recently, digital evidence has been playing an increasingly important role in criminal cases. The seizure of Hard Disk Drives (HDDs) and creation of images of entire disk drives have become a best practice by law enforcement agencies. In most criminal cases, however, the incriminatory information found on an HDD is only a small portion of the entire HDD and the remaining information is not relevant to the case. For this reason, demands for the regulation of excessive search and seizure of defendants' innocuous information have been increasing and gaining strength. 
Some courts have even ruled out inadmissible digital evidence gathered from sites where the scope of a warrant has been exceeded, considering it to be a violation of due process. In order to protect the privacy of suspects, a standard should be made restricting excessive search and seizure. There are, however, many difficulties in selectively identifying and collecting digital evidence at a crime scene, and it is not realistic to expect law enforcement officers to search and collect completely only case-relevant evidence. Too much restriction can cause severe problems in investigations and may result in law enforcement authorities missing crucial evidence. Therefore, a model needs to be established that can assess and regulate excessive search and seizure of digital evidence in accordance with a reasonable standard that considers practical limitations. Consequently, we propose a new approach that balances two conflicting values: human rights protection versus the achievement of effective investigations. In this new approach, a triage model is derived from an assessment of the limiting factors of on-site search and seizure. For the assessment, a survey that provides information about the level of law enforcement, such as the available labor, equipment supply, technical limitations, and time constraints, was conducted using current field officers. A triage model that can meet the legal system's demand for privacy protection and which supports decision making by field officers that can have legal effects was implemented. Since the demands of each legal system and situation of law enforcement vary from country to country, the triage model should be established individually for each legal system. Along with experiment of our proposed approach, this paper presents a new triage model that is designed to meet the recent requirements of the Korean legal system for privacy protection from, specifically, a Korean perspective. <s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Digital forensic triage is poorly defined and poorly understood. The lack of clarity surrounding the process of triage has given rise to legitimate concerns. By trying to define what triage actually is, one can properly engage with the concerns surrounding the process. This paper argues that digital forensic triage has been conducted on an informal basis for a number of years in digital forensic laboratories, even where there are legitimate objections to the process. Nevertheless, there are clear risks associated with the process of technical triage, as currently practised. The author has developed and deployed a technical digital forensic previewing process that negates many of the current concerns regarding the triage process and that can be deployed in any digital forensic laboratory at very little cost. This paper gives a high-level overview of how the system works and how it can be deployed in the digital forensic laboratory. <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. 
Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The investigation of fraud in business has been a staple for the digital forensics practitioner since the introduction of computers in business. Much of this fraud takes place in the retail industry. When trying to stop losses from insider retail fraud, triage, i.e. the quick identification of sufficiently suspicious behaviour to warrant further investigation, is crucial, given the amount of normal, or insignificant behaviour. <s> BIB010 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. Typically, internet of things (IoT) is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications. The interconnection of these embedded devices including smart objects, is expected to usher in automation in nearly all fields, while also enabling advanced applications like a Smart Grid. The main research challenge in Internet of things (IoT) for the forensic investigators is based size of the objects of forensic interest, relevancy, blurry network boundaries and edgeless networks, especially on method for conducting the investigation. The aim of this paper is to identify the best approach by designing a novel model to conduct the investigation situations for digital forensic professionals and experts. There was existing research works which introduce models for identifying the objects of forensics interest in investigations, but there were no rigorous testing for accepting the approach. Currently in this work, an integrated model is designed based on triage model and 1-2-3 zone model for volatile based data preservation. <s> BIB011 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The BitTorrent client application is a popular utility for sharing large files over the Internet. Sometimes, this powerful utility is used to commit cybercrimes, like sharing of illegal material or illegal sharing of legal material. In order to help forensics investigators to fight against these cybercrimes, we carried out an investigation of the artifacts left by the BitTorrent client. We proposed a methodology to locate the artifacts that indicate the BitTorrent client activity performed. Additionally, we designed and implemented a tool that searches for the evidence left by the BitTorrent client application in a local computer running Windows. The tool looks for the four files holding the evidence. The files are as follows: *.torrent, dht.dat, resume.dat, and settings.dat. The tool decodes the files, extracts important information for the forensic investigator and converts it into XML format. 
The results are combined into a single result file. <s> BIB012 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by a same modus operandi or same source, and thus support forensic intelligence efforts. Inspired by previous research work about digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise reproducibility and comparability of images. Different filters and comparison metrics have been evaluated and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters or their combination to extract profiles from images, and then the comparison of profiles with a Canberra distance-based metric provides the most accurate classification of documents. The method appears also to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first fast triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II). (C) 2016 Elsevier Ireland Ltd. All rights reserved. <s> BIB013
The volume of data subject to forensic investigation is constantly growing. This is a result of continuing technological development: the scale and bounds of the Internet change rapidly, and social networks have come into everyday use. Storage capacity expands into new areas as smartphones join the set of Internet-connected devices and cloud storage services are offered. The digital forensic process is very time-consuming because it requires the examination of all available data volumes collected from the cybercrime scene. The digital forensic process commences with the collection, duplication, and authentication of every piece of digital media prior to examination. Moreover, every action taken has to adhere to the rules of legitimacy so that the obtained digital evidence can be presented in court. However, situations arise in which information about a possible cybercrime has to be obtained as promptly as possible without waiting for lengthy legal scrutiny. Of course, the information obtained in such a way cannot be used directly in court; however, quick access to such knowledge can speed up the subsequent digital forensic process and, in some situations, can even save somebody's life. Therefore, such actions are justifiable. A process that takes place prior to the standard forensic methodology is called digital triage. It can provide valuable intelligence without subjecting digital evidence to a full examination. This quick intelligence can be used in the field to guide search and seizure, and in the laboratory to determine whether a medium is worth examining. The term "triage" comes from the field of medicine, where it refers to situations in which, because resources are limited, injured people are ranked according to how urgently they need treatment. Such ranking minimizes the harm to patients when resources are limited BIB004 . Rogers et al. , the authors of the first field triage model in computer forensics, define triage as a process of ranking objects in terms of importance or priority. Casey et al. BIB001 define triage in digital forensics as part of the forensic examination process. The forensic examination is described as a three-tier strategy consisting of three levels: (i) survey/triage forensic inspection, (ii) preliminary forensic examination, and (iii) in-depth forensic examination. The first stage, in which many potential sources of digital evidence are reviewed for specific information, is alternatively referred to as survey or triage. The same idea, that triage is part of the forensic examination, is supported in later works BIB012 BIB005 . Casey underlines that triage is effective for prioritizing, but it is not a substitute for a more thorough review. Casey argues that triage is a technical process that can be performed outside a laboratory by professionals with basic training and limited oversight. Categorizing digital triage as a technical process makes it clearer that the information has not undergone rigorous quality assessment and that its legitimacy has not been evaluated. There are many other definitions of triage, which differ slightly depending on the attributed qualities BIB005 BIB002 BIB009 BIB006 . The diversity of triage definitions reflects the variety of views and indicates the immaturity of the field. However, this is not the main problem. The focus should be on deciding whether digital triage is a forensic process. As Cantrell et al.
BIB003 state, "Digital triage is not a forensic process by definition". It is not clear to which definition Cantrell et al. BIB003 refer. It is possible to suppose that it is the definition by Rogers et al. . However, other definitions exist, and the statement is not true in all cases BIB005 BIB006 BIB007 . Koopman and James BIB006 , and Roussev et al. BIB005 use the term "digital forensic triage". If digital triage is not a forensic process, then the term "forensic" cannot be used together with the term "digital triage", because it is misleading. Hong et al. BIB007 introduce a triage model that is adapted to the requirements of the Korean legal system. Consequently, the proposed triage model adheres to the rules of the forensic process. Moreover, Hong et al. BIB007 suggest establishing a triage model individually for the legal system of each specific country. To summarize the diversity of views on digital triage, we stress the following features: 1. Digital triage is a technical process that provides information for the forensic examination, but does not involve the evaluation of digital evidence. 2. The goal of digital triage is to rapidly review many potential sources of digital evidence for specific information and to prioritize the digital media so as to make the subsequent analysis easier. 3. The term "forensic" cannot be used together with the term "digital triage" if the process of digital triage does not adhere to the rules of the forensic process specific to the country. Digital triage comes in two forms: live and post-mortem. The post-mortem form of triage, which is conducted on a digital image, is not always recognized as triage. We consider both forms of digital triage to be equally important. Live triage raises many concerns because it is conducted on the live system, so destruction of likely evidence is possible. However, live digital triage has several advantages: 1. It enables rapid extraction of intelligence that can be used for suspect interrogation. 2. It can capture data that would be lost if the computer were shut down. The primary concern inherent to both forms of digital triage is that evidential data can remain unnoticed BIB008 . Pollitt argues that the process of digital triage in the context of forensics is an admission of failure. However, he recognizes that, for now, a better approach does not exist. Moreover, the term "triage" has become the common word for the initial, rapid step in different areas of forensic investigation. For example, it is used in the retail industry BIB010 , in the Internet of Things BIB011 , and in investigations of fraudulent identity and travel documents BIB013 . We review the research works related to digital triage. We divide the review into four sections: live triage, post-mortem triage, mobile device triage, and triage tools. The largest section is on triage tools; such an abundance of research works highlights the practical need for them. In the next section, we review the models and methods of live triage.
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> This paper describes the Advanced Forensic Format (AFF), which is designed as an alternative to current proprietary disk image formats. AFF offers two significant benefits. First, it is more flexible because it allows extensive metadata to be stored with images. Second, AFF images consume less disk space than images in other formats (e.g., EnCase images). This paper also describes the Advanced Disk Imager, a new program for acquiring disk images that compares favorably with existing alternatives. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories. The time it takes to complete these stages is quickly becoming a serious problem. Digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations with the current methodologies. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can assist the examiner with intelligence that can be used to prioritize and lead the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> Recently, digital evidence has been playing an increasingly important role in criminal cases. The seizure of Hard Disk Drives (HDDs) and creation of images of entire disk drives have become a best practice by law enforcement agencies. In most criminal cases, however, the incriminatory information found on an HDD is only a small portion of the entire HDD and the remaining information is not relevant to the case. For this reason, demands for the regulation of excessive search and seizure of defendants' innocuous information have been increasing and gaining strength. Some courts have even ruled out inadmissible digital evidence gathered from sites where the scope of a warrant has been exceeded, considering it to be a violation of due process. In order to protect the privacy of suspects, a standard should be made restricting excessive search and seizure. There are, however, many difficulties in selectively identifying and collecting digital evidence at a crime scene, and it is not realistic to expect law enforcement officers to search and collect completely only case-relevant evidence. Too much restriction can cause severe problems in investigations and may result in law enforcement authorities missing crucial evidence. Therefore, a model needs to be established that can assess and regulate excessive search and seizure of digital evidence in accordance with a reasonable standard that considers practical limitations. Consequently, we propose a new approach that balances two conflicting values: human rights protection versus the achievement of effective investigations. 
In this new approach, a triage model is derived from an assessment of the limiting factors of on-site search and seizure. For the assessment, a survey that provides information about the level of law enforcement, such as the available labor, equipment supply, technical limitations, and time constraints, was conducted using current field officers. A triage model that can meet the legal system's demand for privacy protection and which supports decision making by field officers that can have legal effects was implemented. Since the demands of each legal system and situation of law enforcement vary from country to country, the triage model should be established individually for each legal system. Along with experiment of our proposed approach, this paper presents a new triage model that is designed to meet the recent requirements of the Korean legal system for privacy protection from, specifically, a Korean perspective. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> This paper addresses the increasing resources overload being experienced by law enforcement digital forensics units with the proposal to introduce triage template pipelines into the investigative process, enabling devices and the data they contain to be examined according to a number of prioritised criteria. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> Digital Forensics is being actively researched and performed in various areas against changing IT environment such as mobile phone, e-commerce, cloud service and video surveillance. Moreover, it is necessary to research unified digital evidence management for correlation analysis from diverse sources. Meanwhile, various triage approaches have been developed to cope with the growing amount of digital evidence being encountered in criminal cases, enterprise investigations and military contexts. Despite of debating over whether triage inspection is necessary or not, it will be essential to develop a framework for managing scattered digital evidences. 
This paper presents a framework with unified digital evidence management for appropriate security convergence, which is based on triage investigation. Moreover, this paper describes a framework in network video surveillance system to shows how it works as an unified evidence management for storing diverse digital evidences, which is a good example of security convergence. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> The volume of digital forensic evidence is rapidly increasing, leading to large backlogs. In this paper, a Digital Forensic Data Reduction and Data Mining Framework is proposed. Initial research with sample data from South Australia Police Electronic Crime Section and Digital Corpora Forensic Images using the proposed framework resulted in significant reduction in the storage requirements — the reduced subset is only 0.196 percent and 0.75 percent respectively of the original data volume. The framework outlined is not suggested to replace full analysis, but serves to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations. Agencies that can undertake rapid assessment of seized data can more effectively target specific criminal matters. The framework may also provide a greater potential intelligence gain from analysis of current and historical data in a timely manner, and the ability to undertake research of trends over time. <s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> We present a new approach to digital forensic evidence acquisition and disk imaging called sifting collectors that images only those regions of a disk with expected forensic value. Sifting collectors produce a sector-by-sector, bit-identical AFF v3 image of selected disk regions that can be mounted and is fully compatible with existing forensic tools and methods. In our test cases, they have achieved an acceleration of >3× while collecting >95% of the evidence, and in some cases we have observed acceleration of up to 13×. Sifting collectors challenge many conventional notions about forensic acquisition and may help tame the volume challenge by enabling examiners to rapidly acquire and easily store large disks without sacrificing the many benefits of imaging. <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> In recent years the capacity of digital storage devices has been increasing at a rate that has left digital forensic services struggling to cope. There is an acknowledgement that current forensic tools have failed to keep up. The workload is such that a form of 'administrative triage' takes place in many labs where perceived low priority jobs are delayed or dropped without reference to the data itself. In this paper we investigate the feasibility of first responders performing a fast initial scan of a device by sampling on the device itself. A Bloom filter is used to store the block hashes of large collections of contraband data. We show that by sampling disk clusters, we can achieve 99.9% accuracy scanning for contraband data in minutes. Even under the constraints imposed by low specification legacy equipment, it is possible to scan a device for contraband with a known and controllable margin of error in a reasonable time. 
We conclude that in this type of case it is feasible to boot the device into a forensically sound environment and do a pre-imaging scan to prioritise the device for further detailed investigation. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> The sharp rise in consumer computing, electronic and mobile devices and data volumes has resulted in increased workloads for digital forensic investigators and analysts. The number of crimes involving electronic devices is increasing, as is the amount of data for each job. This is becoming unscaleable and alternate methods to reduce the time trained analysts spend on each job are necessary.This work leverages standardised knowledge representations techniques and automated rule-based systems to encapsulate expert knowledge for forensic data. The implementation of this research can provide high-level analysis based on low-level digital artefacts in a way that allows an understanding of what decisions support the facts. Analysts can quickly make determinations as to which artefacts warrant further investigation and create high level case data without manually creating it from the low-level artefacts. Extraction and understanding of users and social networks and translating the state of file systems to sequences of events are the first uses for this work.A major goal of this work is to automatically derive 'events' from the base forensic artefacts. Events may be system events, representing logins, start-ups, shutdowns, or user events, such as web browsing, sending email. The same information fusion and homogenisation techniques are used to reconstruct social networks. There can be numerous social network data sources on a single computer; internet cache can locate?Facebook, LinkedIn, Google Plus caches; email has address books and copies of emails sent and received; instant messenger has friend lists and call histories. Fusing these into a single graph allows a more complete, less fractured view for an investigator.Both event creation and social network creation are expected to assist investigator-led triage and other fast forensic analysis situations. <s> BIB010 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> Due to budgetary constraints and the high level of training required, digital forensic analysts are in short supply in police forces the world over. This inevitably leads to a prolonged time taken between an investigator sending the digital evidence for analysis and receiving the analytical report back. In an attempt to expedite this procedure, various process models have been created to place the forensic analyst in the field conducting a triage of the digital evidence. By conducting triage in the field, an investigator is able to act upon pertinent information quicker, while waiting on the full report.The work presented as part of this paper focuses on the training of front-line personnel in the field triage process, without the need of a forensic analyst attending the scene. The premise has been successfully implemented within regular/non-digital forensics, i.e., crime scene investigation. In that field, front-line members have been trained in specific tasks to supplement the trained specialists. 
The concept of front-line members conducting triage of digital evidence in the field is achieved through the development of a new process model providing guidance to these members. To prove the model's viability, an implementation of this new process model is presented and evaluated. The results outlined demonstrate how a tiered response involving digital evidence specialists and non-specialists can better deal with the increasing number of investigations involving digital evidence. <s> BIB011 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> This paper discusses the challenges of performing a forensic investigation against a multi-node Hadoop cluster and proposes a methodology for examiners to use in such situations. The procedure's aim of minimising disruption to the data centre during the acquisition process is achieved through the use of RAM forensics. This affords initial cluster reconnaissance which in turn facilitates targeted data acquisition on the identified DataNodes. To evaluate the methodology's feasibility, a small Hadoop Distributed File System (HDFS) was configured and forensic artefacts simulated upon it by deleting data originally stored inźthe cluster. RAM acquisition and analysis was then performed on the NameNode in order to test the validity of the suggested methodology. The results are cautiously positive in establishing that RAM analysis of the NameNode can be used to pinpoint the data blocks affected by the attack, allowing a targeted approach to the acquisition of data from the DataNodes, provided that the physical locations can be determined. A full forensic analysis of the DataNodes was beyond the scope of this project. <s> BIB012 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> An issue that continues to impact digital forensics is the increasing volume of data and the growing number of devices. One proposed method to deal with the problem of "big digital forensic data": the volume, variety, and velocity of digital forensic data, is to reduce the volume of data at either the collection stage or the processing stage. We have developed a novel approach which significantly improves on current practice, and in this paper we outline our data volume reduction process which focuses on imaging a selection of key files and data such as: registry, documents, spreadsheets, email, internet history, communications, logs, pictures, videos, and other relevant file types. When applied to test cases, a hundredfold reduction of original media volume was observed. When applied to real world cases of an Australian Law Enforcement Agency, the data volume further reduced to a small percentage of the original media volume, whilst retaining key evidential files and data. The reduction process was applied to a range of real world cases reviewed by experienced investigators and detectives and highlighted that evidential data was present in the data reduced forensic subset files. A data reduction approach is applicable in a range of areas, including: digital forensic triage, analysis, review, intelligence analysis, presentation, and archiving. In addition, the data reduction process outlined can be applied using common digital forensic hardware and software solutions available in appropriately equipped digital forensic labs without requiring additional purchase of software or hardware. 
The process can be applied to a wide variety of cases, such as terrorism and organised crime investigations, and the proposed data reduction process is intended to provide a capability to rapidly process data and gain an understanding of the information and/or locate key evidence or intelligence in a timely manner. <s> BIB013
Rogers et al. introduce a model for the field triage process in computer forensics and name it the Cyber Forensic Field Triage Process Model (CFFTPM). The CFFTPM has six phases: planning, triage, usage/user profiles, chronology/timeline, internet activity, and case-specific evidence. Each phase has several sub-tasks and considerations that vary according to the specifics of the case and the operating system under investigation. The CFFTPM originates from child pornography cases. Nevertheless, it is general enough to be applicable to other cases; however, the model cannot be considered the ultimate solution for every case. It is also important to note that the proposed model does not preclude transporting the system to a laboratory environment for a more thorough investigation. Cantrell et al. BIB002 discuss a proposed model for digital triage. The proposed model is a linear framework, except for the preservation phase, which is an investigative principle maintained throughout all the phases. The first phase is planning and readiness, which occurs before the onsite investigation. The next phase is live forensics; it is included as an optional step, depending on the need and expertise, and it must occur prior to the following phases because volatile memory can be lost very quickly. The middle three phases (the computer profile phase, the crime potential phase, and the presentation phase) are intended to be an automated process, coded as a computer program or script using existing tools. The last phase, the triage examination phase, is optional depending on the need. The triage examination should be an automated process that is guided by the examiner using predefined templates specific to each case. Hong et al. BIB003 propose a theoretical framework for implementing a triage model. The requirement for the triage model is to consider the limiting factors of the onsite search and seizure. The framework consists of three phases: assessment, triage model, and reassessment. The proposed framework is based on the assumption that reassessments are performed periodically according to the changes in the search and the conditions of the onsite seizure. To establish a triage model, a questionnaire consisting of 48 questions, which are provided in the paper, was prepared; it was answered by 58 respondents in total. The paper presents an extensive discussion of the results. After assessing the results of the questionnaire, a new triage model is proposed. The triage process is divided into four steps: planning, execution, categorization, and decision. Whether the information is collected properly depends mostly on the execution step. The execution step prioritizes the file types for the search according to three types of crime: personal general crime, personal high-tech crime, and corporate general crime. Next, the file search is conducted in the following order: timeline of interest; filename- or contents-based keyword search; and file/directory path-based search (a minimal sketch of such a prioritised search is given after this paragraph). Another important procedure in the execution step is the detection of suspicious files. The proposed triage model can be applied only to personal computers, and it is tailored to the Korean legal system's requirements for privacy protection. Overill et al. BIB004 propose an attractive idea of introducing triage template pipelines into the investigative process for the most popular types of digital crimes, enabling digital evidence to be examined according to a number of prioritised criteria.
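The prioritised search of the execution step in Hong et al.'s model can be illustrated with a short sketch. It is a minimal, hypothetical example: the crime-type priority lists, the keyword list, and the time window are placeholders invented for illustration, not the prioritisation defined in the original model.

```python
import os
from datetime import datetime, timezone

# Hypothetical priority lists per crime type (illustrative only).
FILE_PRIORITIES = {
    "personal_general": [".doc", ".docx", ".xls", ".pdf", ".jpg", ".png"],
    "personal_hightech": [".exe", ".dll", ".py", ".log", ".pcap"],
    "corporate_general": [".xlsx", ".eml", ".pst", ".pdf", ".docx"],
}

def triage_search(root, crime_type, keywords, start, end):
    """Rank files by (file-type priority, timeline match, filename keyword match)."""
    wanted = FILE_PRIORITIES[crime_type]
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower()
            if ext not in wanted:
                continue                        # file-type prioritisation
            mtime = datetime.fromtimestamp(os.path.getmtime(path), timezone.utc)
            in_window = start <= mtime <= end   # timeline of interest
            kw_hit = any(k.lower() in name.lower() for k in keywords)  # filename keywords
            score = (wanted.index(ext), not in_window, not kw_hit)
            hits.append((score, path))
    return [p for _, p in sorted(hits)]         # best candidates first
```

Running such a scan returns a ranked list of candidate files, which mirrors the idea of prioritising file types before applying timeline-, keyword-, and path-based filters.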
In Overill et al.'s approach, each specific digital crime has its own template of prioritised devices and data, based on the cost-effectiveness criteria of front-loading probative value and back-loading resource utilisation. The authors declare that about 80% of all digital crimes in Hong Kong are accounted for by just five types of crime; however, they do not enumerate these types. The authors state that "the work this far has addressed the set of five digital crime templates"; however, examples of templates for only two digital crimes are provided, namely the Distributed Denial of Service (DDoS) template diagram and the Peer-to-Peer (P2P) template diagram. Moreover, the construction of these example templates is not discussed in detail. An advantage of the triage template pipeline approach over triage tools is that the evidential recovery process can be terminated as soon as it becomes apparent that the probative value criterion has been fulfilled. Therefore, the triage time can be shorter in some cases. The essence of the proposed triage template pipelines is formalized common sense. Roussev et al. BIB005 analyze forensic triage as a real-time computation problem with a limited allotment of time and resources. One hour is considered to be an acceptable time limit for triage. The authors assume that an increase in performance can be achieved if the acquisition and the processing start and complete at almost the same time, which means that the processing should be as fast as the data cloning. The suitability of the most common open-source implementations and of the most common forensic procedures to fit into the time constraints is investigated experimentally. The authors state that the triage investigation can be carried out both in the field and in the laboratory. For the fieldwork, they consider an 8-core workstation, and for the laboratory, a 48-core server. The obtained results show that only a few basic methods, such as file metadata extraction, crypto-hashing, and registry extraction, can fit into the time budget in the workstation triage. To increase the performance of file acquisition, Roussev et al. BIB005 implement a Latency-Optimized Target Acquisition (LOTA) scheme. The main idea of this scheme is that the filesystem metadata is parsed to build an inverse map from blocks to files before cloning the target. This allows the blocks to be scanned sequentially while the files are reconstructed (a minimal sketch of this inverse mapping is given after this paragraph). The LOTA scheme yields an improvement of a factor of two for files larger than 1 MB and a factor of 100 for smaller files. It is recommended to use the scheme routinely in the forensic environment. The authors advocate employing parallel computations to obtain higher processing rates. Lim and Lee BIB006 describe a unified evidence container, XeBag, for storing diverse digital evidence from different sources. The XeBag can be used for selective evidence collection and searching on the live system. The file structure of XeBag is based on the well-known compression file formats PKZip and WinRAR. To record forensic metadata, an Extensible Markup Language (XML) document is additionally included for each stored object. XML is a popular data exchange format and therefore enables easy access to the data. The authors describe a video surveillance system to show how its digital evidence is stored in and can be retrieved from the unified evidence container XeBag.
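The inverse block-to-file mapping at the heart of the LOTA scheme can be sketched as follows. This is a minimal illustration assuming a hypothetical, already-parsed metadata structure (a dictionary from file name to its ordered list of block numbers); it is not Roussev et al.'s implementation and it ignores details such as partial final blocks and fragmented metadata.

```python
BLOCK_SIZE = 4096

def build_inverse_map(file_table):
    """file_table: {filename: [block numbers in file order]} -> {block: (filename, index)}."""
    inverse = {}
    for name, blocks in file_table.items():
        for idx, block in enumerate(blocks):
            inverse[block] = (name, idx)
    return inverse

def acquire_and_reconstruct(device_path, file_table):
    """Read the device strictly sequentially and place each block into its file."""
    inverse = build_inverse_map(file_table)
    files = {name: [None] * len(blocks) for name, blocks in file_table.items()}
    with open(device_path, "rb") as dev:
        block_no = 0
        while True:
            data = dev.read(BLOCK_SIZE)
            if not data:
                break
            if block_no in inverse:             # this block belongs to a file of interest
                name, idx = inverse[block_no]
                files[name][idx] = data
            block_no += 1
    # Return only files for which every block was recovered.
    return {name: b"".join(chunks) for name, chunks in files.items() if None not in chunks}
```

Because the device is read strictly from the first block to the last, the raw throughput of the drive is preserved while the files of interest are assembled on the fly.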
Grier and Richard III BIB008 introduce a new approach, called sifting collectors, for imaging selected regions of disk drives. The sifting collectors create a sector-by-sector, bit-for-bit exact image of the disk regions that have forensic value. The forensic image is produced in the Advanced Forensics Format v3 BIB001 , and it is fully compatible with existing forensic tools. The selection of the regions that have forensic value is based on profiles. The authors do not expect that examiners can prepare the profiles themselves; therefore, the profiles must be created in advance and stored in a library. The sifting collectors first collect the metadata according to the defined profile. Then they interpret the metadata, determine the sectors of interest, and assemble them in disk order. As a result, the method is not suitable for unknown filesystems. If profiles cannot be defined, the proposed alternative is to include a person in the scanning loop to decide what is relevant. The implemented prototype targets the New Technology File System (NTFS) and uses the Master File Table as its primary source. The conducted experiment shows a speed-up of 3 to 13 times in comparison to the forensic image acquisition tool Sleuthkit [24] for the test cases. The absolute values of the runtimes are not provided. The accuracy of the region selection is between 54% and 95% for the considered test cases; faster image acquisition yields lower accuracy. One important limitation of sifting collectors is their susceptibility to steganography and anti-forensics. Penrose et al. BIB009 present an approach for fast contraband file detection on the device itself. The approach is based on scanning clusters, calculating their hashes, and comparing them against a database. The cluster size is 4 KiB. A Bloom filter is used to store the cluster hashes of the contraband files. The Bloom filter reduces the size of the database of block-level Message-Digest Algorithm 5 (MD5) hashes by an order of magnitude, at the cost of a small false positive rate. The designed Bloom filter is 1 GiB in size and uses eight hash functions (a minimal sketch of this block-hash lookup is given after this paragraph); a larger Bloom filter enables faster access to the hashes of the contraband files. The performed experiment shows that the approach achieves 99.9% accuracy when scanning for contraband files in minutes. Some false positives are encountered; however, the results are positive for the existence of all contraband files. The experiment was conducted in a legitimate computing environment. The authors conclude that this type of case can then be investigated further in a forensically sound environment. Turnbull and Randhawa BIB010 describe an ontology-based approach to assist examiner-led triage. The purpose of the approach is to enable a less technically skilled user to run a triage tool. This is implemented by collecting low-level artifacts and inferring hypotheses from the collected facts. The approach is oriented towards automatically deriving events from the base forensic artefacts. A Resource Description Framework (RDF) is used as the basis of the ontology. A distinctive feature of the approach is that multiple layered ontologies are designed over the same dataset. The description of the ontologies used is vague. The authors find some advantages in RDF; however, they recognize that the Web Ontology Language (OWL) could provide more possibilities.
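A minimal sketch of the block-hash lookup described above is given below. The 4 KiB cluster size, the 1 GiB filter, and the eight hash-derived positions mirror the parameters reported by Penrose et al., but the code itself is only an assumed illustration of the technique, not the authors' tool; for a quick experiment the filter size can be reduced.

```python
import hashlib

CLUSTER = 4096                      # 4 KiB clusters, as reported above
FILTER_BITS = 8 * (1 << 30)         # 1 GiB bit array (reduce for experimentation)
NUM_INDEXES = 8                     # eight positions derived per cluster hash

def positions(md5_digest):
    """Derive NUM_INDEXES bit positions from one cluster's MD5 digest."""
    out = []
    for i in range(NUM_INDEXES):
        h = hashlib.md5(md5_digest + bytes([i])).digest()
        out.append(int.from_bytes(h[:8], "big") % FILTER_BITS)
    return out

class BlockBloomFilter:
    def __init__(self):
        self.bits = bytearray(FILTER_BITS // 8)

    def _set(self, pos):
        self.bits[pos // 8] |= 1 << (pos % 8)

    def _test(self, pos):
        return (self.bits[pos // 8] & (1 << (pos % 8))) != 0

    def add_contraband_file(self, path):
        """Hash every 4 KiB cluster of a known contraband file into the filter."""
        with open(path, "rb") as f:
            while cluster := f.read(CLUSTER):
                for p in positions(hashlib.md5(cluster).digest()):
                    self._set(p)

    def cluster_is_known(self, cluster):
        """True if a sampled cluster probably belongs to a contraband file."""
        return all(self._test(p) for p in positions(hashlib.md5(cluster).digest()))
```

Sampling a subset of clusters from the seized device and testing them against such a filter is what makes a pre-imaging scan feasible within minutes, with the false positive rate controlled by the filter size.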
Turnbull and Randhawa further suggest that the approach is applicable to the extraction of information from social networks, though no evidence of such an application can be found in the paper. The system implemented as a proof of concept consists of a knowledge base, data ingestors, reasoners, and a visualiser. The visualiser is hardcoded into the ontology used. Neither test nor real cases are provided. To conclude, the idea of the approach is attractive; however, the description and the development are immature. Hitchcock et al. BIB011 introduce a Digital Field Triage (DFT) model to offload some of the initial tasks performed in the field by forensic examiners to non-digital-evidence specialists. The primary goals of the model are twofold: (i) to increase the efficiency of an investigation by providing digital evidence in a timely manner; and (ii) to decrease the backlog of files at a forensic laboratory. The proposed model is based on Rogers et al. and has four phases: planning, assessment, reporting, and threshold. The DFT model has inherent risks associated with it, namely management, training, and supporting tools. Management and ongoing training are integral parts of the success of the DFT model, and the tools must support the management. For the DFT to work, there are three fundamental concepts: 1. DFT must work with a supervising examiner. 2. DFT must maintain the forensic integrity of the digital evidence. 3. A DFT assessment does not replace the forensic analysis. Therefore, the DFT model is not a replacement for full analysis, but is part of the overall strategy of handling digital evidence. The first version of the DFT model was implemented in Canada six years ago. The implementation achieved the goals pursued by the model; however, persistent attention needs to be paid to the risks associated with the model. Leimich et al. BIB012 propose a variation of cloud forensic methodology tailored to live analysis of Random-Access Memory (RAM) for the Hadoop Distributed File System (HDFS). The aim of the methodology is to minimize the disruption to the data center after a data breach. Hadoop is a Java-implemented system developed for UNIX-based operating systems. It is a master/slave distributed architecture for storing and processing big data. The HDFS consists of DataNodes (slaves), which store the data, and a NameNode (master), which manages the DataNodes. The methodology is oriented towards the acquisition of the NameNode contents in order to pinpoint the affected DataNodes. The forensic analysis of the DataNodes is out of the scope of the proposed methodology. The methodology contains nine phases: preparation, live acquisition of the NameNode, initial cluster reconnaissance, checkpointing via a forensic workstation, live artefact analysis, establishing 'suspect' transactions and mapping them to data blocks, targeted dead acquisition of the DataNodes, data reconstruction, and reporting. To test the validity of the methodology, a small HDFS cluster with one master and three slaves was configured with a single scenario of deleted data. The data reconstruction phase is not carried out. The experiment confirms that the methodology enables locating the deleted data blocks. Leimich et al. BIB012 also discuss the ability to implement the proposed methodology in a forensic tool in compliance with the National Institute of Standards and Technology (NIST) Computer Forensic Tool Testing criteria.
Montasari extends Rogers et al.'s model by dividing all phases into two stages and introducing new sub-tasks into the phases. The single planning activity is assigned to the first stage; the planning should be carried out before attending the site. Montasari considers many models of the forensic process, not just triage models, because, according to the author, only the model proposed by Rogers et al. exists for the onsite triage process. The author selects activities that would be appropriate for the triage process from other models. Therefore, several sub-tasks are added to the model of the forensic field triage process, and the model is presented in a more detailed and categorized way. Additionally, the model is extended by a set of investigative principles grouped under the name of "Overriding Principles", which are an additional contribution of the paper. These principles are as follows: 1. To preserve the chain of custody. 2. To maintain an accurate audit trail. 3. To maintain restricted access control. 4. To maintain effective case management. 5. To maintain the information flow. Peersman et al. present an approach that incorporates artificial intelligence and machine learning techniques (support vector machines) to automatically label new Child Sexual Abuse (CSA) media. The approach employs two stages for labelling unknown CSA files. The first stage uses text categorization techniques to determine whether a file contains CSA content based on its filename. The text categorization applies the following features: predefined keywords, forms of explicit language use, and expressions relating to children and family relations in English, French, German, Italian, Dutch, and Japanese. Additionally, all patterns of two, three, and four consecutive characters are extracted from the filenames (a minimal sketch of this first stage is given after this paragraph). The second stage receives the files from the first stage and examines the visual content of images and the content of audio files. The second stage bases its decision on multi-modal features, which consist of the following representations: colour correlograms, skin features, visual words and visual pyramids, and audio words for audio files. The conducted experiment shows a false positive rate of 20.3% after the first stage. The second stage reduces the false positive rate to 7.9% for images and 4.3% for videos. The approach is implemented in the iCOP toolkit [30] , which performs live forensic analysis on a P2P network. Therefore, the proposed approach is designed for proactive monitoring. To label the most pertinent candidates for CSA media, an examiner can log in to the iCOP canvas, which automatically arranges the results. Additionally, the approach can be adapted to the identification of new CSA media during a reactive investigation. The approach is implemented in the Gnutella P2P network. Quick and Choo BIB013 develop the idea of data reduction introduced in BIB007 . The authors present a methodology to reduce the data volume using selective imaging. The methodology suggests selecting only the key files and data. Windows, Apple, and Linux operating systems and their filesystems are considered. A forensic examiner makes the decision to include or exclude particular file types; the decision is based on the relevance of the data contained in these file types to the case. The other possibility considered for reducing the data volume is thumbnailing of video, movie, and picture files. Thumbnailing significantly reduces large image files.
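The filename-based first stage can be approximated with scikit-learn's character n-gram vectoriser and a linear SVM, as sketched below. The training filenames and labels are harmless placeholders invented for illustration, and the real feature set (multilingual keyword and expression lists) is considerably richer than plain character n-grams; this is therefore only an assumed, simplified view of the technique.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical, innocuous training data; real training sets are curated by investigators.
train_names = ["holiday_beach_2014.jpg", "family_dinner.mov", "suspicious_name_1.avi",
               "innocent_report.pdf", "suspicious_name_2.mpg", "lecture_notes.docx"]
train_labels = [0, 0, 1, 0, 1, 0]     # 1 = candidate for the content-based second stage

# Character 2- to 4-grams over the filename, mirroring the n-gram features described above.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4), lowercase=True),
    LinearSVC(),
)
model.fit(train_names, train_labels)

new_files = ["party_photo.png", "suspicious_name_3.avi"]
flagged = [name for name, label in zip(new_files, model.predict(new_files)) if label == 1]
print(flagged)   # files passed on to the content-based second stage
```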
Once the file types are selected and any thumbnails are loaded into the forensic software, the logical image file is created. The presented methodology can be applied using common digital forensic tools. The methodology is applied to test data as well as real-world data, and many experimental results that illustrate its viability are provided. In general, collecting a logical image and processing it in the Internet Evidence Finder takes 14 min on average, whereas processing a full forensic image takes 8 h 4 min on average. The presented methodology can be applied either to write-blocked physical media or to a forensic image.
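The selective-imaging idea can be illustrated with the following sketch, which walks a write-blocked mount point, keeps only examiner-selected file types, and records them with their hashes in a logical evidence container. The extension list and the ZIP-plus-manifest layout are assumptions made for illustration; they are not the exact subset or container format defined by Quick and Choo.

```python
import hashlib
import json
import os
import zipfile

# Examiner-selected key file types (illustrative subset only).
KEY_TYPES = {".docx", ".xlsx", ".pdf", ".eml", ".jpg", ".png", ".sqlite", ".log"}

def selective_image(mount_point, container_path):
    """Copy only key file types from a write-blocked mount into a logical container."""
    manifest = []
    with zipfile.ZipFile(container_path, "w", zipfile.ZIP_DEFLATED) as container:
        for dirpath, _, filenames in os.walk(mount_point):
            for name in filenames:
                if os.path.splitext(name)[1].lower() not in KEY_TYPES:
                    continue                    # excluded file type: not imaged
                src = os.path.join(dirpath, name)
                rel = os.path.relpath(src, mount_point)
                try:
                    with open(src, "rb") as f:
                        data = f.read()
                except OSError:
                    continue                    # unreadable file: skip
                container.writestr(rel, data)
                manifest.append({"path": rel, "size": len(data),
                                 "md5": hashlib.md5(data).hexdigest()})
        container.writestr("manifest.json", json.dumps(manifest, indent=2))
    return manifest
```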
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> Forensic study of mobile devices is a relatively new field, dating from the early 2000s. The proliferation of phones (particularly smart phones) on the consumer market has caused a growing demand for forensic examination of the devices, which could not be met by existing Computer Forensics techniques. As a matter of fact, Law enforcement are much more likely to encounter a suspect with a mobile device in his possession than a PC or laptop and so the growth of demand for analysis of mobiles has increased exponentially in the last decade. Early investigations, moreover, consisted of live analysis of mobile devices by examining phone contents directly via the screen and photographing it with the risk of modifying the device content, as well as leaving many parts of the proprietary operating system inaccessible. The recent development of Mobile Forensics, a branch of Digital Forensics, is the answer to the demand of forensically sound examination procedures of gathering, retrieving, identifying, storing and documenting evidence of any digital device that has both internal memory and communication ability [1]. Over time commercial tools appeared which allowed analysts to recover phone content with minimal interference and examine it separately. By means of such toolkits, moreover, it is now possible to think of a new approach to Mobile Forensics which takes also advantage of "Data Mining" and "Machine Learning" theory. This paper is the result of study concerning cell phones classification in a real case of pedophilia. Based on Mobile Forensics "Triaging" concept and the adoption of self-knowledge algorithms for classifying mobile devices, we focused our attention on a viable way to predict phone usage's classifications. Based on a set of real sized phones, the research has been extensively discussed with Italian law enforcement cyber crime specialists in order to find a viable methodology to determine the likelihood that a mobile phone has been used to commit the specific crime of pedophilia, which could be very relevant during a forensic investigation. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> Over the past few years, the number of crimes related to the worldwide diffusion of digital devices with large storage and broadband network connections has increased dramatically. In order to better address the problem, law enforcement specialists have developed new ideas and methods for retrieving evidence more effectively. In accordance with this trend, our research aims to add new pieces of information to the automated analysis of evidence according to Machine Learning-based “post mortem” triage. The scope consists of some copyright infringement court cases coming from the Italian Cybercrime Police Unit database. We draw our inspiration from this “low level” crime which is normally sat at the bottom of the forensic analyst's queue, behind higher priority cases and dealt with the lowest priority. The present work aims to bring order back in the analyst's queue by providing a method to rank each queued item, e.g. a seized device, before being analyzed in detail. The paper draws the guidelines for drive-under-triage classification (e.g. 
hard disk drive, thumb drive, solid state drive etc.), according to a list of crime-dependent features such as installed software, file statistics and browser history. The model, inspired by the theory of Data Mining and Machine Learning, is able to classify each exhibit by predicting the problem dependent variable (i.e. the class) according to the aforementioned crime-dependent features. In our research context the “class” variable identifies with the likelihood that a drive image may contain evidence concerning the crime and, thus, the associated item must receive an high (or low) ranking in the list. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> A novel concept for improving the trustworthiness of results obtained from digital investigations is presented. Case Based Reasoning Forensic Auditor (CBR-FA) is a method by which results from previous digital forensic examinations are stored and reused to audit current digital forensic investigations. CBR-FA provides a method for evaluating digital forensic investigations in order to provide a practitioner with a level of reassurance that evidence that is relevant to their case has not been missed. The structure of CBR-FA is discussed as are the methodologies it incorporates as part of its auditing functionality. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> The global diffusion of smartphones and tablets, exceeding traditional desktops and laptops market share, presents investigative opportunities and poses serious challenges to law enforcement agencies and forensic professionals. Traditional Digital Forensics techniques, indeed, may be no longer appropriate for timely analysis of digital devices found at the crime scene. Nevertheless, dealing with specific crimes such as murder, child abductions, missing persons, death threats, such activity may be crucial to speed up investigations. Motivated by this, the paper explores the field of Triage, a relatively new branch of Digital Forensics intended to provide investigators with actionable intelligence through digital media inspection, and describes a new interdisciplinary approach that merges Digital Forensics techniques and Machine Learning principles. The proposed Triage methodology aims at automating the categorization of digital media on the basis of plausible connections between traces retrieved (i.e. digital evidence) and crimes under investigation. As an application of the proposed method, two case studies about copyright infringement and child pornography exchange are then presented to actually prove that the idea is viable. The term ''feature'' will be regarded in the paper as a quantitative measure of a ''plausible digital evidence'', according to the Machine Learning terminology. In this regard, we (a) define a list of crime-related features, (b) identify and extract them from available devices and forensic copies, (c) populate an input matrix and (d) process it with different Machine Learning mining schemes to come up with a device classification. We perform a benchmark study about the most popular mining algorithms (i.e. Bayes Networks, Decision Trees, Locally Weighted Learning and Support Vector Machines) to find the ones that best fit the case in question. 
Obtained results are encouraging as we will show that, triaging a dataset of 13 digital media and 45 copyright infringement-related features, it is possible to obtain more than 93% of correctly classified digital media using Bayes Networks or Support Vector Machines while, concerning child pornography exchange, with a dataset of 23 cell phones and 23 crime-related features it is possible to classify correctly 100% of the phones. In this regards, methods to reduce the number of linearly independent features are explored and classification results presented. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> The evolution of modern digital devices is outpacing the scalability and effectiveness of Digital Forensics techniques. Digital Forensics Triage is one solution to this problem as it can extract evidence quickly at the crime scene and provide vital intelligence in time critical investigations. Similarly, such methodologies can be used in a laboratory to prioritize deeper analysis of digital devices and alleviate examination backlog. Developments in Digital Forensics Triage methodologies have moved towards automating the device classification process and those which incorporate Machine Learning principles have proven to be successful. Such an approach depends on crime-related features which provide a relevant basis upon which device classification can take place. In addition, to be an accepted and viable methodology it should be also as accurate as possible. Previous work has concentrated on the issues of feature extraction and classification, where less attention has been paid to improving classification accuracy through feature manipulation. In this regard, among the several techniques available for the purpose, we concentrate on feature weighting, a process which places more importance on specific features. A twofold approach is followed: on one hand, automated feature weights are quantified using Kullback-Leibler measure and applied to the training set whereas, on the other hand, manual weights are determined with the contribution of surveyed digital forensic experts. 
Experimental results of manual and automatic feature weighting are described which conclude that both the techniques are effective in improving device classification accuracy in crime investigations. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> A sharp increase in malware and cyber-attacks has been observed in recent years. Analysing cyber-attacks on the affected digital devices falls under the purview of digital forensics. The Internet is the main source of cyber and malware attacks, which sometimes result in serious damage to the digital assets. The motive behind digital crimes varies – such as online banking fraud, information stealing, denial of services, security breaches, deceptive output of running programs and data distortion. Digital forensics analysts use a variety of tools for data acquisition, evidence analysis and presentation of malicious activities. This leads to device diversity posing serious challenges for investigators. For this reason, some attack scenarios have to be examined repeatedly, which entails tremendous effort on the part of the examiners when analysing the evidence. To counter this problem, Muhammad Shamraiz Bashir and Muhammad Naeem Ahmed Khan at the Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, Pakistan propose a novel triage framework for digital forensics. <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> Criminal investigations invariably involve the triage or cursory examination of relevant electronic media for evidentiary value. Legislative restrictions and operational considerations can result in investigators having minimal time and resources to establish such relevance, particularly in situations where a person is in custody and awaiting interview. Traditional uninformed search methods can be slow, and informed search techniques are very sensitive to the search heuristic's quality. This research introduces Monte-Carlo Filesystem Search, an efficient crawl strategy designed to assist investigators by identifying known materials of interest in minimum time, particularly in bandwidth constrained environments. This is achieved by leveraging random selection with non-binary scoring to ensure robustness. The algorithm is then expanded with the integration of domain knowledge. 
A rigorous and extensive training and testing regime conducted using electronic media seized during investigations into online child exploitation proves the efficacy of this approach. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Methods of Post-Mortem Triage <s> Computer forensics faces a range of challenges due to the widespread use of computing technologies. Examples include the increasing volume of data and devices that need to be analysed in any single case, differing platforms, use of encryption and new technology paradigms (such as cloud computing and the Internet of Things). Automation within forensic tools exists, but only to perform very simple tasks, such as data carving and file signature analysis. Investigators are responsible for undertaking the cognitively challenging and time-consuming process of identifying relevant artefacts. Due to the volume of cyber-dependent (e.g., malware and hacking) and cyber-enabled (e.g., fraud and online harassment) crimes, this results in a large backlog of cases. With the aim of speeding up the analysis process, this paper investigates the role that unsupervised pattern recognition can have in identifying notable artefacts. A study utilising the Self-Organising Map (SOM) to automatically cluster notable artefacts was devised using a series of four cases. Several SOMs were created - a File List SOM containing the metadata of files based upon the file system, and a series of application level SOMs based upon metadata extracted from files themselves (e.g., EXIF data extracted from JPEGs and email metadata extracted from email files). A total of 275 sets of experiments were conducted to determine the viability of clustering across a range of network configurations. The results reveal that more than 93.5% of notable artefacts were grouped within the rank-five clusters in all four cases. The best performance was achieved by using a 10ź×ź10 SOM where all notables were clustered in a single cell with only 1.6% of the non-notable artefacts (noise) being present, highlighting that SOM-based analysis does have the potential to cluster notable versus noise files to a degree that would significantly reduce the investigation time. Whilst clustering has proven to be successful, operationalizing it is still a challenge (for example, how to identify the cluster containing the largest proportion of notables within the case). The paper continues to propose a process that capitalises upon SOM and other parameters such as the timeline to identify notable artefacts whilst minimising noise files. Overall, based solely upon unsupervised learning, the approach is able to achieve a recall rate of up to 93%. <s> BIB010
Marturana and Tacconi BIB004 summarize the research works BIB001 BIB002 delivered at conferences and present a model intended for both live and post-mortem triage using machine learning techniques. The presented model consists of the following four steps: forensic acquisition, feature extraction and normalization, context and priority definition, and data classification. Such a model faces two main challenges: the definition of crime-related features and the collection of a consistent set of classified samples related to the investigated crimes. The crime-related features are defined for two case studies, copyright infringement and child pornography exchange. Guidelines for using the classifiers are provided. The experiment is mostly directed at comparing the classifiers used at the last stage of the model, and no conclusion is made as to which classifier is best suited for the investigated cases. The presented statistical approach has proven to be valid for ranking the digital evidence related to copyright infringement and child pornography exchange. However, for this approach to be viable, it is necessary to have a deep understanding of the possible relations between the crime under investigation and the potential digital evidence. McClelland and Marturana BIB006 extend the research presented by Marturana and Tacconi BIB004 and investigate the impact of feature manipulation on the accuracy of the classification. Weights are assigned to the features using two approaches, automatic and manual. The automated feature weights are quantified using the Kullback-Leibler measure, while the manual weights are determined on the basis of the surveyed digital forensic experts' contribution. The Naïve Bayes classifier is used for the experiment. The only improvement is achieved in the child pornography case. Horsman et al. BIB007 extend the ideas presented in BIB003 and discuss a Case-Based Reasoning Forensic Triager (CBR-FT), a method for retrieving evidential data based on the locations where digital evidence was found in past cases. The CBR-FT maintains a knowledge base that gathers this previous experience. Each location on the system stored in the knowledge base is assigned an evidence relevance rating (ERR), which is used as the prior probability in a Bayesian model to determine the priority of a particular location for searching. The model enables calculating a primary relevance figure (PRF) for each location. The search is carried out in two stages: in the first stage, only locations with a PRF above 0.5 are used, while the second stage is optional. If the examiner suspects that additional evidence may exist, s/he proceeds to the second stage, which focuses on identifying similar patterns in cases stored in the CBR-FT knowledge base. The CBR-FT knowledge base must cover enough cases to reflect its target population correctly; that is the first restriction on the application of the method. The study focuses on fraud offences and constructs a fraud knowledge base from 47 prior investigations. The experiment, evaluated in terms of precision and recall rates, shows that the CBR-FT is more effective than the commercial application EnCase Portable [38]. However, an additional shortcoming of this study is that it focuses only on offences of fraud.
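The location-ranking idea behind CBR-FT can be illustrated with a small sketch. The sketch below is not the authors' implementation: the locations, ERR values, and the way the prior is combined with case similarity are all hypothetical, and serve only to show how prior-experience ratings could drive a two-stage search.

```python
# Hypothetical sketch of CBR-FT-style location prioritization.
# ERR values and locations are illustrative, not taken from the CBR-FT paper.

# Evidence relevance ratings (ERR) learnt from past fraud cases: the fraction
# of prior cases in which each location held relevant evidence.
err_prior = {
    r"Users\*\Documents": 0.72,
    r"Users\*\AppData\Local\Temp": 0.55,
    r"Users\*\Downloads": 0.48,
    r"Windows\Prefetch": 0.21,
    r"Program Files": 0.08,
}

# Similarity of the current case to the past cases that produced each rating
# (e.g., same offence type, similar suspect profile); again purely illustrative.
case_similarity = {
    r"Users\*\Documents": 0.9,
    r"Users\*\AppData\Local\Temp": 0.8,
    r"Users\*\Downloads": 0.95,
    r"Windows\Prefetch": 0.6,
    r"Program Files": 0.5,
}

def primary_relevance(location):
    """Combine the prior ERR with case similarity into a PRF-like score."""
    hit = err_prior[location] * case_similarity[location]
    miss = (1.0 - err_prior[location]) * (1.0 - case_similarity[location])
    return hit / (hit + miss)

ranked = sorted(err_prior, key=primary_relevance, reverse=True)
stage_one = [loc for loc in ranked if primary_relevance(loc) > 0.5]
stage_two = [loc for loc in ranked if loc not in stage_one]

print("Stage 1 (search first):", stage_one)
print("Stage 2 (optional):    ", stage_two)
```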
Bashir and Khan BIB008 suggest a triage framework oriented towards analyzing and resolving an attack. The framework contains the usual steps that belong to a general investigative process, and the term "triage" refers to a certain part of it. The main idea of the triage framework is to create a blacklist database that contains a list of previously known attacks together with details on how to resolve them. Every attack is characterized by six attributes: identifier, name, description, status, signature, and countermeasures. The key attribute is the signature, a placeholder that stores unique signatures of cyber-attacks in the form of MD5 hashes. If the signature of any of the affected files is found in the blacklist database, then the attack is known and the database already holds the answer on how to resolve it. However, if the attack is unknown, there is no triage process; a detailed analysis follows. The blacklist database is updated periodically on the basis of new knowledge and new attacks. Dalins et al. BIB009 introduce a crawl and search method that can be used for digital triage. The proposed method adapts the Monte Carlo Tree Search strategy known from game playing to the filesystem, and is therefore called Monte Carlo Filesystem Search (MCFS). The original random selection is combined with non-binary scoring to keep the search guided. Three file scoring methods are introduced, each built on the previous one: a simple scorer, a type-of-interest scorer, and a similarity-based scorer. Other customizations are made to deliver better performance: integration of domain knowledge to enhance the guided search, use of the proprietary Microsoft PhotoDNA algorithm to measure the similarity of images, and skin tone detection to identify exposed skin, which is a usual component of child pornography. The experiment is carried out on real data obtained from the Australian Federal Police. The data, provided as forensic images, are related to the possession and online trading of child pornography. The experiment shows that the proposed MCFS is an effective method for large and complex tree structures of the filesystem hierarchy. The search efficiency can be improved by around a third compared to an uninformed depth-first search. However, the integration of domain knowledge and the skin tone detection scoring showed lower results than expected, and additional investigation is necessary to improve these customizations. In general, the improved proposed method is promising, since many performance limitations arise due to the complicated filesystem design BIB005 . Fahdi et al. BIB010 investigate the possibility of utilizing the Self-Organising Map (SOM) technique to automatically cluster notable artefacts that are relevant to the case. A SOM is a neural network that generates a mapping from high-dimensional input data onto a regular two-dimensional array of nodes based upon their similarity in an unsupervised manner. The approach is based on using metadata from several sources, such as the file system, email, and the Internet, as the input to the SOM clustering. Moreover, the approach is oriented towards the investigation of the suspects' systems rather than the victims' systems. Several pre-processing options are employed before the application of the approach: creation of the file list, expanding compound files, data carving, an entropy test for encryption, and a known file search. The results of data carving are not included in the file list of the SOM.
Data carving should not be deployed during triage, since it tends to generate a lot of data due to high false positive rates BIB005 . The experiment shows that using the approach as a triage step to verify the existence of notable files allows identifying 38.6% of the notable files at a cost of 1.3% of noise files. It is possible to expand the network size to increase the percentage of notable files identified, however, at the cost of picking up more noise files. Most of the analysis takes a relatively trivial amount of time for small data sets (several GB); however, it takes an hour on average to process a large data set (0.5 TB). The appeal of the approach is that the only examiner interaction required in the process is the selection of the crime category. With further research and refinement, the approach can become a building block of a triage tool for investigating the simpler and technically more trivial cases that represent a large proportion of forensic examiners' daily activities.
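To make the clustering step more concrete, the sketch below trains a very small self-organising map on hypothetical file-metadata feature vectors and reports which map cell each file falls into. It is only a conceptual illustration of SOM-based grouping, implemented directly in NumPy; the feature encoding is invented for the example and is far simpler than the File List and application-level SOMs used by Fahdi et al. BIB010.

```python
import numpy as np

# Hypothetical file-metadata features, scaled to [0, 1]:
# [normalised size, normalised last-modified age, is_image, is_document]
files = {
    "report.docx":  [0.10, 0.20, 0.0, 1.0],
    "invoice.pdf":  [0.08, 0.25, 0.0, 1.0],
    "holiday.jpg":  [0.40, 0.90, 1.0, 0.0],
    "IMG_0042.jpg": [0.45, 0.85, 1.0, 0.0],
    "setup.tmp":    [0.90, 0.05, 0.0, 0.0],
}
data = np.array(list(files.values()))

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 3, 3, data.shape[1]
weights = rng.random((grid_w * grid_h, dim))      # one weight vector per map node
coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)])

def best_matching_unit(v):
    """Index of the map node whose weight vector is closest to v."""
    return int(np.argmin(np.linalg.norm(weights - v, axis=1)))

# Classic online SOM training: pull the winner and its neighbours towards the sample.
for epoch in range(200):
    lr = 0.5 * (1.0 - epoch / 200)                # decaying learning rate
    radius = 1.5 * (1.0 - epoch / 200) + 0.5      # decaying neighbourhood radius
    for v in data[rng.permutation(len(data))]:
        bmu = best_matching_unit(v)
        grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
        influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        weights += lr * influence[:, None] * (v - weights)

# Files mapped to the same cell end up in the same cluster.
for name, v in files.items():
    bmu = best_matching_unit(np.array(v))
    print(f"{name:12s} -> cell {tuple(coords[bmu])}")
```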
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Triage of Mobile Devices <s> The increasing number of mobile devices being submitted to Digital Forensic Laboratories (DFLs) is creating a backlog that can hinder investigations and negatively impact public safety and the criminal justice system. In a military context, delays in extracting intelligence from mobile devices can negatively impact troop and civilian safety as well as the overall mission. To address this problem, there is a need for more effective on-scene triage methods and tools to provide investigators with information in a timely manner, and to reduce the number of devices that are submitted to DFLs for analysis. Existing tools that are promoted for on-scene triage actually attempt to fulfill the needs of both on-scene triage and in-lab forensic examination in a single solution. On-scene triage has unique requirements because it is a precursor to and distinct from the forensic examination process, and may be performed by mobile device technicians rather than forensic analysts. This paper formalizes the on-scene triage process, placing it firmly in the overall forensic handling process and providing guidelines for standardization of on-scene triage. In addition, this paper outlines basic requirements for automated triage tools. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Triage of Mobile Devices <s> We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don't match between call logs and address book entries on the same phone. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Triage of Mobile Devices <s> Forensic study of mobile devices is a relatively new field, dating from the early 2000s. The proliferation of phones (particularly smart phones) on the consumer market has caused a growing demand for forensic examination of the devices, which could not be met by existing Computer Forensics techniques. As a matter of fact, Law enforcement are much more likely to encounter a suspect with a mobile device in his possession than a PC or laptop and so the growth of demand for analysis of mobiles has increased exponentially in the last decade. Early investigations, moreover, consisted of live analysis of mobile devices by examining phone contents directly via the screen and photographing it with the risk of modifying the device content, as well as leaving many parts of the proprietary operating system inaccessible. 
The recent development of Mobile Forensics, a branch of Digital Forensics, is the answer to the demand of forensically sound examination procedures of gathering, retrieving, identifying, storing and documenting evidence of any digital device that has both internal memory and communication ability [1]. Over time commercial tools appeared which allowed analysts to recover phone content with minimal interference and examine it separately. By means of such toolkits, moreover, it is now possible to think of a new approach to Mobile Forensics which takes also advantage of "Data Mining" and "Machine Learning" theory. This paper is the result of study concerning cell phones classification in a real case of pedophilia. Based on Mobile Forensics "Triaging" concept and the adoption of self-knowledge algorithms for classifying mobile devices, we focused our attention on a viable way to predict phone usage's classifications. Based on a set of real sized phones, the research has been extensively discussed with Italian law enforcement cyber crime specialists in order to find a viable methodology to determine the likelihood that a mobile phone has been used to commit the specific crime of pedophilia, which could be very relevant during a forensic investigation. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Triage of Mobile Devices <s> Bulk data analysis eschews file extraction and analysis, common in forensic practice today, and instead processes data in "bulk," recognizing and extracting salient details ("features") of use in the typical digital forensics investigation. This article presents the requirements, design and implementation of the bulk_extractor, a high-performance carving and feature extraction tool that uses bulk data analysis to allow the triage and rapid exploitation of digital media. Bulk data analysis and the bulk_extractor are designed to complement traditional forensic approaches, not replace them. The approach and implementation offer several important advances over today's forensic tools, including optimistic decompression of compressed data, context-based stop-lists, and the use of a "forensic path" to document both the physical location and forensic transformations necessary to reconstruct extracted evidence. The bulk_extractor is a stream-based forensic tool, meaning that it scans the entire media from beginning to end without seeking the disk head, and is fully parallelized, allowing it to work at the maximum I/O capabilities of the underlying hardware (provided that the system has sufficient CPU resources). Although bulk_extractor was developed as a research prototype, it has proved useful in actual police investigations, two of which this article recounts. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Triage of Mobile Devices <s> When forensic triage techniques designed for feature phones are applied to smart phones, these recovery techniques return hundreds of thousands of results, only a few of which are relevant to the investigation. We propose the use of relevance feedback to address this problem: a small amount of investigator input can efficiently and accurately rank in order of relevance, the results of a forensic triage tool. We present LIFTR, a novel system for prioritizing information recovered from Android phones. 
We evaluate LIFTR's ranking algorithm on 13 previously owned Android smart phones and three recovery engines -- DEC0DE, Bulk Extractor, and Strings? using a standard information retrieval metric, Normalized Discounted Cumulative Gain (NDCG). LIFTR's initial ranking improves the NDCG scores of the three engines from 0.0 to an average of 0.73; and with as little as 5 rounds of feedback, the ranking score in- creases to 0.88. Our results demonstrate the efficacy of relevance feedback for quickly locating useful information among the large amount of irrelevant data returned by current recovery techniques. Further, our empirical findings show that a significant amount of important user information persists for weeks or even months in the expired space of a phone's memory. This phenomenon underscores the importance of using file system agnostic recovery techniques, which are the type of techniques that benefit most from LIFTR. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Triage of Mobile Devices <s> Commercial mobile forensic vendors continue to use and rely upon outdated physical acquisition techniques in their products. As new mobile devices are introduced and storage capacities trend upward, so will the time it takes to perform physical forensic acquisitions, especially when performed over limited bandwidth means such as Universal Serial Bus (USB). We introduce an automated differential forensic acquisition technique and algorithm that uses baseline datasets and hash comparisons to limit the amount of data sent from a mobile device to an acquisition endpoint. We were able to produce forensically validated bit-for-bit copies of device storage in significantly reduced amounts of time compared to commonly available techniques. For example, using our technique, we successfully achieved an average imaging rate of under 7źmin per device for a corpus of actively used, real-world 16źGB Samsung Galaxy S3 smartphones. Current commercially available mobile forensic kits would typically take between one to 3źh to yield the same result. Details of our differential forensic imaging technique, algorithm, testing procedures, and results are documented herein. <s> BIB006
Mislan et al. BIB001 discuss the onsite triage process for mobile devices. The following steps are suggested for an on-scene triage investigation of mobile devices:
1. Initiate the chain of custody
2. Isolate the device from the network
3. Disable the security features
4. Extract the limited data
5. Review the extracted data
6. Preview the removable storage media
All the steps are discussed in detail. The investigation process should be well documented in order to validate the results. The onsite triage should be performed by mobile device technicians, who are less experienced than technical examiners. The basic requirements for automated onsite triage tools are outlined; in short, they are simplicity of use, an audit trail, and access control. The legal allowances for examining mobile devices in the United States are considered as well. Walls et al. BIB002 introduce an investigative tool, DEC0DE, for recovering information from mobile phones with unknown storage formats. The main idea is that the data formats of known phone models can be leveraged for recovering information from new phone models. The evaluation focuses on feature phones, i.e., phones with less capability than smartphones. DEC0DE takes the physical image of a mobile phone as input; this is the first limitation of the tool, because image acquisition is outside its scope. The second limitation is the assumption that the owner of the phone has left the data in plaintext format. A further shortcoming is that the extracted results are limited to address books and call log records. The contribution of the paper is a technique for empirical mobile phone data analysis. The technique consists of two steps: removal of known data and recovery of information from the remaining data. The latter step is called the inference process. Block hash filtering accomplishes the first step (a simplified sketch of this filtering step is given below). The second step adapts techniques from natural language processing, namely context-free grammars, and uses probabilistic finite state machines to encode typical data structures. The Viterbi algorithm is applied to the created finite state machines twice. Finally, a decision tree classifier is used to remove potential false positives. The development is based on the following four models: Nokia 3200B, LG G4015, Motorola v551, and Samsung SGH-T309. The performance of DEC0DE's inference engine is evaluated against two metrics, recall and precision. The experiment conducted on phones that have not been seen previously shows an average recall of 93% and precision of 52% for address books, and an average recall of 97% and precision of 80% for call logs. Marturana et al. BIB003 discuss the application of machine learning algorithms to the digital triage of mobile phones. The triage stage is introduced between the stages of acquisition and analysis. The extracted data are first preprocessed in order to clean the data, remove redundant attributes, and normalize values. Several classification algorithms are used to show the ability to classify whether a mobile phone was used to commit a pedophilia crime. The attention is devoted to the performance of the classification algorithms. The research is a first step towards the post-mortem forensic triage of mobile phones.
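The known-data removal step of DEC0DE mentioned above can be sketched as follows. The block size, the way the library of known hashes is built, and the file handling are assumptions made for the example rather than details taken from BIB002; the point is only to show how hashing fixed-size blocks lets an examiner discard content that also appears in a reference library of known phone images.

```python
import hashlib

BLOCK_SIZE = 1024  # bytes per block; an assumption for the sketch, not DEC0DE's value

def block_hashes(data: bytes, block_size: int = BLOCK_SIZE):
    """Yield (offset, MD5 hex digest) for each fixed-size block of the input."""
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        yield offset, hashlib.md5(block).hexdigest()

def build_known_library(reference_images):
    """Hash every block of the reference (known) phone images into a set."""
    known = set()
    for image in reference_images:
        known.update(digest for _, digest in block_hashes(image))
    return known

def filter_unknown_blocks(target_image: bytes, known_library: set):
    """Keep only the blocks of the target image that do not appear in the library."""
    return [
        (offset, target_image[offset:offset + BLOCK_SIZE])
        for offset, digest in block_hashes(target_image)
        if digest not in known_library
    ]

if __name__ == "__main__":
    # Toy data standing in for raw phone memory images.
    reference = [b"\x00" * 4096, b"FIRMWARE" * 512]
    target = b"\x00" * 2048 + b"Alice 555-0100;" + b"\x00" * 2048
    library = build_known_library(reference)
    survivors = filter_unknown_blocks(target, library)
    print(f"{len(survivors)} unknown block(s) passed to the inference stage")
```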
Varma et al. BIB005 present a system, called LIFTR, for prioritizing the information recovered from Android phones. The initial data for the system is a forensic image processed by a recovery engine. Three recovery engines - DEC0DE BIB002 , Bulk Extractor BIB004 , and Strings, a common UNIX utility for identifying strings of printable characters in a file - are used as the suppliers of the recovered content. Therefore, LIFTR operates in concert with a recovery engine, as it augments the results obtained by the engine. The basic idea is that the recovery engine returns many items unrelated to the investigated crime, since it does not consider the semantics behind the recovered content. Varma et al. BIB005 explore the filesystem of Android phones and learn the rules by which information is stored. These learnt rules and the feedback from the examiner form the basis for prioritizing the information. The examiner labels the relevant information units of the investigated crime at the page level; the labeling is performed iteratively over several rounds. All the information is ranked based on a combination of the examiner's feedback, the actual content, and the storage system locality information. To test the validity of the approach, an open-source prototype of the LIFTR system is implemented. The LIFTR ranking algorithm is evaluated against 13 previously owned Android smart phones. The set includes nine phones with the Yaffs filesystem [46] ; to improve the results, the authors wrote a special Yaffs parser to identify the expired pages that are important to information relevance. The experiment shows that the LIFTR ranking improves the score of the standard information retrieval metric, Normalized Discounted Cumulative Gain (NDCG), from 0.0 to an average of 0.88. Guido et al. BIB006 introduce a differential acquisition technique that can be used for forensic image acquisition of mobile devices for triage purposes. The advantage of the introduced technique is its runtime, which is several times shorter than that of the compared commercial tools and techniques. The main idea is to use precomputed baseline hashes, so that only the blocks that do not match the baseline are sent to the server. A prototype named Hawkeye is implemented. Hawkeye uses the MD5 algorithm for hashing. Several other improvements are implemented to reduce the runtime, namely threading (10 threads by default) and a zero-block comparison function. Hawkeye runs on Android devices in the recovery mode. The experiment is performed with a 16 GB Samsung Galaxy S3 smartphone (Samsung, Seoul, South Korea). The acquisition techniques of the tool can be applied to other platforms, such as iOS (Apple Inc., Cupertino, CA, USA), as well.
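A minimal sketch of the differential idea behind Hawkeye is shown below. The block size, transport, and baseline handling are assumptions made for the illustration; the real tool runs on the device in recovery mode and streams data to an acquisition endpoint, which is not modelled here.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not necessarily Hawkeye's

def md5(block: bytes) -> str:
    return hashlib.md5(block).hexdigest()

def compute_baseline(baseline_image: bytes):
    """Precompute per-block hashes of a known baseline image (e.g., a factory image)."""
    return [md5(baseline_image[i:i + BLOCK_SIZE])
            for i in range(0, len(baseline_image), BLOCK_SIZE)]

def differential_acquire(device_image: bytes, baseline_hashes):
    """Return only the blocks whose hashes differ from the baseline.

    Blocks that match the baseline can be reconstructed on the server side
    from the baseline dataset, so they never need to cross the USB link.
    """
    transferred = {}
    for index, offset in enumerate(range(0, len(device_image), BLOCK_SIZE)):
        block = device_image[offset:offset + BLOCK_SIZE]
        if index >= len(baseline_hashes) or md5(block) != baseline_hashes[index]:
            transferred[offset] = block
    return transferred

if __name__ == "__main__":
    baseline = bytes(BLOCK_SIZE) * 8                              # 8 empty blocks
    device = bytearray(baseline)
    device[3 * BLOCK_SIZE:3 * BLOCK_SIZE + 11] = b"user photo!"   # one modified block
    changed = differential_acquire(bytes(device), compute_baseline(baseline))
    print(f"blocks sent: {len(changed)} of 8")                    # -> blocks sent: 1 of 8
```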
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> This paper explores the use of purpose-built functions and cryptographic hashes of small data blocks for identifying data in sectors, file fragments, and entire files. It introduces and defines the concept of a ''distinct'' disk sector-a sector that is unlikely to exist elsewhere except as a copy of the original. Techniques are presented for improved detection of JPEG, MPEG and compressed data; for rapidly classifying the forensic contents of a drive using random sampling; and for carving data based on sector hashes. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> Digital triage is a pre-digital-forensic phase that sometimes takes place as a way of gathering quick intelligence. Although effort has been undertaken to model the digital forensics process, little has been done to date to model digital triage. This work discuses the further development of a model that does attempt to address digital triage the Partially-automated Crime Specific Digital Triage Process model. The model itself will be presented along with a description of how its automated functionality was implemented to facilitate model testing. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories. The time it takes to complete these stages is quickly becoming a serious problem. Digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations with the current methodologies. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can assist the examiner with intelligence that can be used to prioritize and lead the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> The ever growing capacity of hard drives poses a severe problem to forensic practitioners who strive to deal with digital investigations in a timely manner. Therefore, the on-the-spot digital investigation paradigm is emerging as a new standard to select only that evidence which is important for the case being investigated. In the light of this issue, we propose an incident response tool which is able to speed up the investigation by finding crime-related evidence in a faster way compared with the traditional state-of-the-art post-mortem analysis tools. The tool we have implemented is called Live Data Forensic System (LDFS). LDFS is an on-the-spot live forensic toolkit, which can be used to collect and analyze relevant data in a timely manner and to perform a triage of a Microsoft Windows-based system. 
Particularly, LDFS demonstrates the ability of the tool to automatically gather evidence according to general categories, such as live data, Windows Registry, file system metadata, instant messaging services clients, web browser artifacts, memory dump and page file. In addition, unified analysis tools of ELF provide a fast and effective way to obtain a picture of the system at the time the analysis is done. The result of the analysis from different categories can be easily correlated to provide useful clues for the sake of the investigation. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> The number of forensic examinations being performed by digital forensic laboratories is rising, and the amount of data received for each examination is increasing significantly. At the same time, because forensic investigations are results oriented, the demand for timely results has remained steady, and in some instances has increased. In order to keep up with these growing demands, digital forensic laboratories are being compelled to rethink the overall forensic process. This work dismantles the barriers between steps in prior digital investigation process models and concentrates on supporting key decision points. In addition to increasing efficiency of forensic processes, one of the primary goals of these efforts is to enhance the comprehensiveness and investigative usefulness of forensic results. The purpose of honing digital forensic processes is to empower the forensic examiner to focus on the unique and interesting aspects of their work, allowing them to spend more time addressing the probative questions in an investigation, enabling them to be decision makers rather than tool runners, and ultimately increase the quality of service to customers. This paper describes a method of evaluating the complete forensic process performed by examiners, and applying this approach to developing tools that recognize the interconnectivity of examiner tasks across a digital forensic laboratory. Illustrative examples are provided to demonstrate how this approach can be used to increase the overall efficiency and effectiveness of forensic examination of file systems, malware, and network traffic. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> Bulk data analysis eschews file extraction and analysis, common in forensic practice today, and instead processes data in "bulk," recognizing and extracting salient details ("features") of use in the typical digital forensics investigation. This article presents the requirements, design and implementation of the bulk_extractor, a high-performance carving and feature extraction tool that uses bulk data analysis to allow the triage and rapid exploitation of digital media. Bulk data analysis and the bulk_extractor are designed to complement traditional forensic approaches, not replace them. The approach and implementation offer several important advances over today's forensic tools, including optimistic decompression of compressed data, context-based stop-lists, and the use of a "forensic path" to document both the physical location and forensic transformations necessary to reconstruct extracted evidence. 
The bulk_extractor is a stream-based forensic tool, meaning that it scans the entire media from beginning to end without seeking the disk head, and is fully parallelized, allowing it to work at the maximum I/O capabilities of the underlying hardware (provided that the system has sufficient CPU resources). Although bulk_extractor was developed as a research prototype, it has proved useful in actual police investigations, two of which this article recounts. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> In many police investigations today, computer systems are somehow involved. The number and capacity of computer systems needing to be seized and examined is increasing, and in some cases it may be necessary to quickly find a single computer system within a large number of computers in a network. To investigate potential evidence from a large quantity of seized computer system, or from a computer network with multiple clients, triage analysis may be used. In this work we first define triage based on the medical definition. From this definition, we describe a PXE-based client-server environment that allows for triage tasks to be conducted over the network from a central triage server. Finally, three real world cases are described in which the proposed triage solution was used. <s> BIB007
Cantrell and Dampier BIB002 present the implementation of the automated phases of the partially-automated digital triage process model BIB003 . The implementation is carried out as a series of scripts comprised of original and open source tools written in Perl. The Linux distribution CAINE [49] installed to a USB drive is chosen as the development and testing environment in order to provide some form of boot media and to incorporate full onsite capability. The Windows registry is parsed using the open source tool RegRipper [50] . The final report is provided in the form of HyperText Markup Language (HTML) pages. The tool is implemented to search the Web browser history of Internet Explorer only. The initial testing is done on a series of 300 GB drives; the runtimes are not provided. Lim et al. BIB004 introduce a Live Data Forensic System (LDFS) designed to collect and analyze live data on Microsoft Windows-based systems. The LDFS consists of two separate tools, LDFS collection and LDFS analysis. The LDFS collection system gathers volatile and non-volatile data such as the memory dump, page file, web browser artifacts, instant messaging clients, Windows Registry, and file system metadata. A distinctive feature of the LDFS collection system is that it can decode the encoded chat logs of the BuddyBuddy, Yahoo, and MissLee messenger clients. The physical memory dump and the dump of all active processes are performed by means of third-party applications, which are chosen so as to make the fewest changes to the investigated system. The XML collection report holds all the collected items with their MD5 and Secure Hash Algorithm 1 (SHA1) hash values. The LDFS collection system is tested against five different types of Windows OSs (Microsoft, Redmond, WA, USA). Several experiments are conducted to test the performance of the system; the largest collection time does not exceed 49 min. The LDFS analysis module is intended to analyze all the collected data; however, it has not been fully implemented yet. Lim et al. BIB004 argue that the input data and its trustworthiness are of paramount importance in live forensic analysis. However, it is not clear whether any defense against subversion of the collection process is implemented in the LDFS collection system. Casey et al. BIB005 discuss the need for and possibilities of honing the digital forensic processes to obtain timely results. Many tasks in the forensic processes are not resource limited, and rethinking the overall organization of the forensic processes can assure greater improvements than considering the tasks separately. Therefore, improving the complete forensic process is oriented towards two areas, namely, dismantling the barriers between the tasks of the forensic process and providing useful information to support the key decisions. The efforts discussed in this paper focus on processing data from three primary sources: (i) filesystems, (ii) malware, and (iii) network traffic. Many triage tools analyze filesystems. The analysis reveals that the main bottleneck in this process is the disk Input/Output (I/O) speed. Using the results of the analysis, Casey et al. BIB005 provide the following guidelines for triage or forensic data extraction tools to improve efficiency:
1. A tool can simultaneously deliver data into multiple extraction operations and create the forensic duplicate
2. A tool can store extracted information in both the XML format and an SQLite database
3. A tool should provide a user-friendly interface to facilitate the viewing, sorting, and classification of files
Additionally, tool developers have to consult their customers about each step of the development. For malware, the main suggestion is that the tool should first determine whether the file has been seen before. The automatic malware processing tool developed by the Defense Cyber Crime Center (DC3) is presented as an illustrative example. However, no suggestions are provided for network traffic tools; the suite of tools PCAPFAST, developed by DC3, is provided as an example of a suitable network traffic tool. Garfinkel BIB006 extends the research work presented by Garfinkel et al. BIB001 and introduces a forensic tool, bulk_extractor, devoted to the initial part of an investigation. The bulk_extractor is based on the analysis of bulk data: it scans raw disk images or any data dump for useful patterns (emails, credit card numbers, Internet Protocol (IP) addresses, etc.). It uses multiple scanners tailored to the respective patterns, together with heuristics to reduce false positive results and noise. The identified patterns are stored in feature files. When processing is complete, the bulk_extractor creates a feature histogram for each feature file. To improve the speed of processing, the bulk_extractor takes advantage of available multi-core capabilities. It also detects and decompresses compressed data; a lot of attention is devoted to decompression, which is unusual for triage tools because it consumes a lot of processing time, but it is very useful for a forensic tool. The performance of the bulk_extractor is compared to the commercial tool EnCase. The results indicate that the bulk_extractor extracts email addresses from a 42 GB forensic disk image 10 times faster than EnCase, taking 44 min. The processing time of the bulk_extractor is between 1 and 8 h per piece of media, depending on the size and complexity of the subject data, which does not meet the triage requirements. The bulk_extractor is successfully applied to 250 GB hard disk drives in two real cases; the processing time is 2.5 h for the first case and 2 h for the second. In general, the bulk_extractor is nice to have; however, it is not a triage tool. Koopmans and James BIB007 introduce an automated network triage (ANT) solution designed for a client-server environment. The purpose of the solution is to sort the analyzed systems by their likely relevance to the investigated case. The ANT is developed on the basis of the Preboot eXecution Environment (PXE) protocol and is composed of a network server that runs various services, and the clients, which are the systems to be analyzed, in a physically isolated network. The ANT server boots a suspected computer via the network. The authors provide many technical details that explain the specific steps, i.e., what software to use and how to boot the seized computers. The interface is developed in the PHP programming language. The data used for triage are as follows:
1. A list of keywords to search for
2. A list of preferred file names or extensions
3. A list of preferred directories
4. A hash database that contains the hashes of files of interest
5. A hash database index file
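How such inputs could be combined into a per-machine relevance score can be sketched as follows. The scoring weights, file walk, and hash handling below are illustrative assumptions, not the ANT implementation; the sketch only shows the general idea of ranking booted client systems by keyword, filename, directory, and hash-database hits.

```python
import hashlib
from pathlib import Path

# Illustrative triage inputs (the five kinds of data listed above).
KEYWORDS = [b"invoice", b"password"]
INTERESTING_EXTENSIONS = {".jpg", ".docx", ".pst"}
INTERESTING_DIRS = {"Downloads", "Documents"}
KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # example MD5 values

# Illustrative weights for each type of hit.
WEIGHTS = {"keyword": 1.0, "extension": 0.5, "directory": 0.5, "hash": 5.0}

def score_file(path: Path) -> float:
    """Score a single file by the triage criteria it matches."""
    score = 0.0
    if path.suffix.lower() in INTERESTING_EXTENSIONS:
        score += WEIGHTS["extension"]
    if any(part in INTERESTING_DIRS for part in path.parts):
        score += WEIGHTS["directory"]
    try:
        data = path.read_bytes()
    except OSError:
        return score
    if any(keyword in data for keyword in KEYWORDS):
        score += WEIGHTS["keyword"]
    if hashlib.md5(data).hexdigest() in KNOWN_BAD_HASHES:
        score += WEIGHTS["hash"]
    return score

def score_machine(mount_point: str) -> float:
    """Sum the file scores over a mounted client filesystem."""
    root = Path(mount_point)
    if not root.exists():
        return 0.0
    return sum(score_file(p) for p in root.rglob("*") if p.is_file())

# Machines (mounted over the network by the triage server) sorted by relevance.
machines = {"pc-01": "/mnt/pc-01", "pc-02": "/mnt/pc-02"}
ranking = sorted(machines, key=lambda m: score_machine(machines[m]), reverse=True)
print("Examine first:", ranking)
```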
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Remote live forensics has recently been increasingly used in order to facilitate rapid remote access to enterprise machines. We present the GRR Rapid Response Framework (GRR), a new multi-platform, open source tool for enterprise forensic investigations enabling remote raw disk and memory access. GRR is designed to be scalable, opening the door for continuous enterprise wide forensic analysis. This paper describes the architecture used by GRR and illustrates how it is used routinely to expedite enterprise forensic investigations. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> In enterprise environments, digital forensic analysis generates data volumes that traditional forensic methods are no longer prepared to handle. Triaging has been proposed as a solution to systematically prioritize the acquisition and analysis of digital evidence. We explore the application of automated triaging processes in such settings, where reliability and customizability are crucial for a successful deployment. We specifically examine the use of GRR Rapid Response (GRR) - an advanced open source distributed enterprise forensics system - in the triaging stage of common incident response investigations. We show how this system can be leveraged for automated prioritization of evidence across the whole enterprise fleet and describe the implementation details required to obtain sufficient robustness for large scale enterprise deployment. We analyze the performance of the system by simulating several realistic incidents and discuss some of the limitations of distributed agent based systems for enterprise triaging. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Digital forensic triage is poorly defined and poorly understood. The lack of clarity surrounding the process of triage has given rise to legitimate concerns. By trying to define what triage actually is, one can properly engage with the concerns surrounding the process. This paper argues that digital forensic triage has been conducted on an informal basis for a number of years in digital forensic laboratories, even where there are legitimate objections to the process. Nevertheless, there are clear risks associated with the process of technical triage, as currently practised. The author has developed and deployed a technical digital forensic previewing process that negates many of the current concerns regarding the triage process and that can be deployed in any digital forensic laboratory at very little cost. This paper gives a high-level overview of how the system works and how it can be deployed in the digital forensic laboratory. 
<s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Considering that a triage related task may essentially make-or-break a digital investigation and the fact that a number of triage tools are freely available online but there is currently no mature framework for practically testing and evaluating them, in this paper we put three open source triage tools to the test. In an attempt to identify common issues, strengths and limitations we evaluate them both in terms of efficiency and compliance to published forensic principles. Our results show that due to the increased complexity and wide variety of system configurations, the triage tools should be made more adaptable, either dynamically or manually (depending on the case and context) instead of maintaining a monolithic functionality. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Bulk data analysis eschews file extraction and analysis, common in forensic practice today, and instead processes data in "bulk," recognizing and extracting salient details ("features") of use in the typical digital forensics investigation. This article presents the requirements, design and implementation of the bulk_extractor, a high-performance carving and feature extraction tool that uses bulk data analysis to allow the triage and rapid exploitation of digital media. Bulk data analysis and the bulk_extractor are designed to complement traditional forensic approaches, not replace them. The approach and implementation offer several important advances over today's forensic tools, including optimistic decompression of compressed data, context-based stop-lists, and the use of a "forensic path" to document both the physical location and forensic transformations necessary to reconstruct extracted evidence. The bulk_extractor is a stream-based forensic tool, meaning that it scans the entire media from beginning to end without seeking the disk head, and is fully parallelized, allowing it to work at the maximum I/O capabilities of the underlying hardware (provided that the system has sufficient CPU resources). Although bulk_extractor was developed as a research prototype, it has proved useful in actual police investigations, two of which this article recounts. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. 
Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> This paper describes a five-phase, multi-threaded bootable approach to digital forensic triage, which is implemented in a product called Forensics2020. The first phase collects metadata for every logical file on the hard drive of a computer system. The second phase collects EXIF camera data from each image found on the hard drive. The third phase analyzes and categorizes each file based on its header information. The fourth phase parses each executable file to provide a complete audit of the software applications on the system; a signature is generated for every executable file, which is later checked against a threat detection database. The fifth and final phase hashes each file and records its hash value. All five phases are performed in the background while the first responder interacts with the system. This paper assesses the forensic soundness of Forensics2020. The tool makes certain changes to a hard drive that are similar to those made by other bootable forensic examination environments, although the changes are greater in number. The paper also describes the lessons learned from developing Forensics2020, which can help guide the development of other forensic triage tools. <s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Purpose – The purpose of this paper is to propose a novel approach that automates the visualisation of both quantitative data (the network) and qualitative data (the content) within emails to aid the triage of evidence during a forensics investigation. Email remains a key source of evidence during a digital investigation, and a forensics examiner may be required to triage and analyse large email data sets for evidence. Current practice utilises tools and techniques that require a manual trawl through such data, which is a time-consuming process. Design/methodology/approach – This paper applies the methodology to the Enron email corpus, and in particular one key suspect, to demonstrate the applicability of the approach. Resulting visualisations of network narratives are discussed to show how network narratives may be used to triage large evidence data sets. Findings – Using the network narrative approach enables a forensics examiner to quickly identify relevant evidence within large email data sets. Within... <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. 
A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Imagine the following scenario: an inexperienced law enforcement officer enters a crime scene and – on finding a USB key on a potential suspect – inserts it into a nearby Windows desktop computer hoping to find some information which may help an ongoing investigation. The desktop crashes and all data on the USB key and on the Windows desktop has now been potentially compromised. However, the law enforcement officer in question is using a Virtual Crime Scene Simulator and has just learned a valuable lesson. This paper discusses the development and initial user evaluation of a Virtual Crime Scene Simulator that includes the ability to interact with and perform live triage of commonly-found digital devices. Based on our experience of teaching digital evidence handling, we aimed to create a realistic virtual environment that integrates many different aspects of the digital and physical crime scene processing, such as physical search activities, triage of digital devices, note taking and form filling, interaction with suspects at the scene, as well as search team training. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Abstract Large email data sets are often the focus of criminal and civil investigations. This has created a daunting task for investigators due to the extraordinary size of many of these collections. Our work offers an interactive visual analytic alternative to the current, manually intensive methodology used in the search for evidence in large email data sets. These sets usually contain many emails which are irrelevant to an investigation, forcing investigators to manually comb through information in order to find relevant emails, a process which is costly in terms of both time and money. To aid the investigative process we combine intelligent preprossessing, a context aware visual search, and a results display that presents an integrated view of diverse information contained within emails. This allows an investigator to reduce the number of emails that need to be viewed in detail without the current tedious manual search and comb process. <s> BIB010
Three real cases, in which the likelihood that the suspicious computers actually pose a threat is assessed, are investigated very successfully; the runtimes of the three cases are within 10 min. The runtimes are very short; however, it is not clear why they are so short, and an explanation is not provided. Moreover, Horsman et al. BIB006 state that hashing and keyword searching approaches can limit the effectiveness of digital triage because they are too restrictive. The limitations of the ANT solution are the following: there is no possibility to boot from an external source, and encrypted data cannot be analysed. Moser and Cohen BIB002 discuss the use of triage in quite a different context than the traditional criminal case investigation, namely incident response. The authors consider the use of the GRR Rapid Response (GRR) system, an agent-based, open source, distributed enterprise forensics system. Moser and Cohen BIB002 overview the components of the GRR system; a more detailed description of the GRR system is available elsewhere BIB001 . This method lowers the total time cost of triage analysis by distributing the task to the system agents. The main attention is directed towards the reliability of the agents: constant monitoring of the used resources, memory and central processing unit (CPU), ensures their reliability (a simple sketch of such agent self-monitoring is given below). The investigation consists of three phases: planning, collection, and analysis. The experiment is carried out on many corporate workstations and laptops on which the GRR agents are installed. The goal of the experiment is to examine representative cases of a typical enterprise investigation performed by an incident response team. Four cases are analyzed. The majority of agents pick up artifacts in the first few minutes after the start. Nevertheless, the GRR continues running for up to 24 h so that, if missing machines come back online later, the artifacts will still be detected. The case of the autorun key comparison required an extensive manual analysis; therefore, improvement is necessary for such cases. Shaw and Browne BIB003 argue that digital forensic triage has been conducted on an informal basis for several years. The authors introduce the concepts of administrative and technical triage. Administrative triage assesses the circumstances of a new case before starting an examination of the evidence. Shaw and Browne BIB003 discuss and summarize the weaknesses of digital triage, and suggest enhanced previewing as an alternative. The Linux forensic distribution CAINE [49] installed on a compact disc (CD) is chosen as a base for the implementation. The bootable CD is remastered to include the existing open source forensic tools and to add new analysis software. A high-level overview of how the system works is presented, and the possibilities to deploy the enhanced previewing in the digital forensic laboratory are analyzed. The weaknesses of the enhanced previewing are as follows: the case management becomes more complicated and the system is not suitable for field use at all. The authors doubt "whether the Enhanced Previewing process is a subset of technical triage or whether it is a distinct process only loosely related to technical triage". We are inclined to state that the enhanced previewing is not a subset of technical triage, because the processing time of the enhanced previewing would be quite long. We base our conclusion on the provided description of the system.
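The following sketch illustrates the kind of resource self-monitoring described above for keeping agents reliable. It uses the third-party psutil package, and the thresholds and the reaction to exceeding them are assumptions made for the example; GRR's actual client-side limits are implemented differently.

```python
import psutil  # third-party package: pip install psutil

# Illustrative limits; an over-limit agent aborts its task instead of
# degrading the workstation it runs on.
MEMORY_LIMIT_BYTES = 256 * 1024 * 1024
CPU_LIMIT_PERCENT = 25.0

class ResourceExceeded(Exception):
    """Raised when the collection task would exceed its resource budget."""

def check_budget(process: psutil.Process) -> None:
    rss = process.memory_info().rss
    cpu = process.cpu_percent(interval=0.1)  # percent over a short sampling window
    if rss > MEMORY_LIMIT_BYTES:
        raise ResourceExceeded(f"memory budget exceeded: {rss} bytes")
    if cpu > CPU_LIMIT_PERCENT:
        raise ResourceExceeded(f"CPU budget exceeded: {cpu:.1f}%")

def collect_artifacts(artifact_names):
    """Collect artifacts one by one, checking the resource budget between items."""
    me = psutil.Process()
    results = {}
    for name in artifact_names:
        check_budget(me)
        # Placeholder for the real collection work (registry keys, file lists, ...).
        results[name] = f"collected {name}"
    return results

if __name__ == "__main__":
    try:
        print(collect_artifacts(["autorun_keys", "prefetch_files", "browser_history"]))
    except ResourceExceeded as err:
        print("aborting collection:", err)
```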
BIB004 review three open source triage tools and suggest ways to improve them. The TriageIR, TR3Secure, and Kludge tools are tested on various Microsoft Windows versions. There is currently no mature framework for practically testing and evaluating triage tools; the authors do not suggest such a framework and instead evaluate the tools in the best way they can devise. The first principle to assess is the access to volatile data. The next principle to assess is the adherence of the tools to forensic principles, which ensures the admissibility of the collected evidence in court. An experiment shows that no single tool considered is better than the others. All the tools have their strengths and weaknesses. The preferable solution is to have several tools and maintain a profile of the tool capabilities. The recommendations for improving the tools are as follows: 1. The tools should be made more adaptable, either dynamically or manually 2. Disabling Prefetch on Windows systems will result in fewer system alterations 3. The tools should record and undo all registry changes that they perform on the examined system 4. The tools should collect the Internet activity artifacts that belong to all known browsers. Woods et al. present open source software for automated analysis and visualization of disk images created as part of the BitCurator project [57] . The goal of the presented software is to assist in triage tasks. The data for analysis is obtained from the open source forensic tools fiwalk [58] and bulk_extractor BIB005 . The fiwalk tool recognizes and interprets the content of filesystems that are contained in disk images, and produces an XML report. The bulk_extractor tool reads the raw contents of the disk image and reports on various features. The BitCurator reporting tools produce Portable Document Format (PDF) reports on the filesystem and for each feature separately. If the input datasets are large, it is possible to configure the reporting tools to produce the report for a subset of the filesystem or a subset of features. The time required to process a given disk image with the forensic tools fiwalk and bulk_extractor is within the range of tens of minutes. The limiting factor in terms of time is the BitCurator reporting tools, which may have to process an extremely large XML filesystem report and text feature reports. The BitCurator project freely distributes these reporting tools in a variety of ways for practitioners and researchers to use. Baggili et al. BIB007 present a five-phase, multi-threaded bootable tool, Forensics2020, for forensic triage. The tool is loaded from a bootable Windows Pre-installation Environment using a USB stick. Phases proceed in sequence; however, while the tool is working, the examiner can interact with it to see the results up to that point and to request certain types of data. The first phase collects logical files and their metadata. The second phase analyses every image for Exchangeable Image File Format (EXIF) data. The third phase explores and classifies each file based on its header. The fourth phase parses executable files for audit and threat purposes. The fifth phase hashes each file and takes the longest time of all the phases. An experiment is carried out to assess the efficacy and forensic soundness of Forensics2020. In total, 26.33 TB of data from 57 computers are analyzed. The total time required to complete the process is 10,356 s.
The tool makes certain changes to the hard drive; however, the changes are greater in number than those of similar Linux-based tools. Two lessons can be learned from the development of Forensics2020. Firstly, a multi-threaded, multi-stage tool allows the examiner to interact with the evidence while the system is performing the forensic processing. Secondly, the mounting of the hard drive by a bootable tool influences the perception of its forensic soundness. Haggerty et al. BIB008 propose an approach to automate the visualization of quantitative and qualitative email data to assist the triage of digital evidence during a forensic investigation. The quantitative information, which is retrieved from the emails, refers to the network events and actor relationships. The qualitative information refers to the body of the emails themselves. The authors have developed the TagSNet software to implement the proposed approach. The software provides two views: a network of the actors and a tag cloud of keywords that are found in the email bodies. Both views are interactive in that the forensic examiner may move the actors and text around. The experiment is carried out on the Enron email data. The average time to process and visualize email data is about 10 min. However, the visualization is not aimed at answering the investigative questions; it only helps the forensic examiner triage email data more quickly than in the manual mode (an illustrative sketch of the kind of data such a visualization consumes is given below). Vidas et al. describe a free forensic tool, OpenLV, which can be deployed in the field and in the laboratory. It is noteworthy that over the past years it has been used under the name "LiveView". The interface of the tool is oriented towards examiners with little training. OpenLV asks for configuration details and creates a virtual machine out of a forensic image or physical disk. The virtual machine enables booting up the image and provides an interactive environment without modifying the underlying image. The tool natively supports only the dd/raw image format; other formats require third party software that can be integrated into the tool. The tool is Windows centric, and limited Linux support has been added. Additionally, OpenLV helps remove the barrier of Windows user passwords. The authors claim that "OpenLV aims to meet the demand for an easy-to-use triage tool"; however, neither an example nor a reference is provided for how OpenLV is used for triage purposes. Conway et al. BIB009 discuss the development of a Virtual Crime Scene Simulator (VCSS) that can perform a live triage of digital devices. Training is important for law enforcement officers; therefore, the tool has a clear field of application. The VCSS is an open source project, and it is implemented as a game, with Unity3D [63] chosen as the base platform. The virtual environment includes a three-dimensional (3D) representation of a house with four rooms, a hallway, and outside scenery. The crime scene has a set of the following items: furniture, various hardware devices, and an avatar for interrogation. The following in-game actions are possible: live examination of the various digital devices, interrogation of the avatar, and other actions related to the crime scene. The full device interaction is implemented for the Windows version only. The trainer can add new logic by modifying the existing JavaScript. Law enforcement officers from a developing country used the VCSS for training, and the participants rated the educational value of the application highly.
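To make the email-visualization idea of Haggerty et al. BIB008 more concrete, the snippet below sketches how the quantitative (actor-relationship) and qualitative (keyword) data that such a tool consumes might be extracted from an mbox corpus. This is not the TagSNet implementation; the file name, the keyword list and the parsing choices are hypothetical, and only the Python standard library is used.

```python
import mailbox
import re
from collections import Counter

def extract_email_features(mbox_path, keywords):
    """Collect actor-relationship counts and keyword frequencies from an mbox file."""
    edges = Counter()          # (sender, recipient) -> number of emails between them
    keyword_hits = Counter()   # keyword -> number of occurrences in email bodies
    patterns = {kw: re.compile(re.escape(kw), re.IGNORECASE) for kw in keywords}

    for msg in mailbox.mbox(mbox_path):
        sender = (msg.get('From') or '').strip()
        recipients = [r.strip() for r in (msg.get('To') or '').split(',') if r.strip()]
        for recipient in recipients:
            edges[(sender, recipient)] += 1

        body = msg.get_payload()
        if isinstance(body, list):           # multipart message: join its parts
            body = ' '.join(str(part) for part in body)
        for kw, pattern in patterns.items():
            keyword_hits[kw] += len(pattern.findall(str(body)))

    return edges, keyword_hits

# Hypothetical usage on a corpus subset:
# edges, hits = extract_email_features('enron_subset.mbox', ['contract', 'transfer', 'meeting'])
# print(edges.most_common(10), hits.most_common(10))
```

The resulting edge counts and keyword frequencies are precisely the kind of input that an interactive actor-network view and a keyword tag cloud would render.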
Hegarty and Haggerty present the SlackStick approach to identify files of interest for the forensic examiner on a live system. The approach is based on file signatures. To create the signature of a file, a block within the original file is chosen, which may be from anywhere within the file except for the first and last blocks. Several predetermined bytes are chosen to represent the file. The number of bytes can be chosen by balancing the tradeoff between false positives and false negatives; a higher number of bytes decreases the likelihood of false positives. The SlackStick software, written in Python under the Slax operating system, runs from an external device. SlackStick reads the memory blocks on the target machine sequentially to generate block signatures for comparison with the signature library. If a match is found, a report that includes the matched signature and the physical location of the file in the storage media is generated. The authors conducted an experiment in which it took about a dozen seconds to analyze a 1 GB partition containing 2194 JPEG images. Signatures are generated by selecting 11 bytes within the second block of each target file. Neither false positives nor false negatives are found, and as the number of signatures increases, no measurable impact on performance is observed. (A brief sketch of this signature idea is given at the end of this subsection.) Further, van Beek et al. describe the development of the distributed digital forensic system HANSKEN [67] , the successor of the operational digital forensic system XIRAF . The goal of HANSKEN is to speed up the processing of big data. The three forensic drivers for the system are as follows: minimization of the case lead time, maximization of the trace coverage, and specialization of the people involved. These drivers justify the building of the distributed big data forensic platform. To mitigate the threats associated with a big data platform, the development of the system HANSKEN is based on eight design principles. They are enumerated in order of priority: 1. Security, 2. Privacy, 3. Transparency, 4. Multi-tenancy, 5. Future proof, 6. Data retention, 7. Reliability, and 8. High availability. The first three principles are sociological, while the other five are business principles and define the system boundaries. The system uses its own forensic image format. The authors justify the need for its own format; however, this could be a limitation of the system, especially for future development. The system HANSKEN stores the data compressed and encrypted; the encryption of data ensures restricted access to it. The process of extracting data from a forensic image starts as soon as the first bits of the image are uploaded to the system. Such an approach acknowledges the right organization of the forensic processes to improve the efficiency of the forensic investigation. The authors admit that triage is a valuable approach for ordering the processing of images, not for leaving images unprocessed. Such a form of triage is planned to be included in the system HANSKEN. The system is implemented on the Hadoop realization of MapReduce. The system HANSKEN was planned to be put into production at the end of the year 2015. Koven et al. BIB010 further explore and develop the idea of email data visualization BIB008 . The authors present a visual email search tool, InVEST. Firstly, the tool preprocesses the email data to create indexes for various email fields. Duplicate information and junk data are excluded from indexing.
Next, the user starts the search process with defined keywords. The search results are presented in five different visual views. The visual views enable better understanding and interpreting of the search results as well as finding the relationships between the search entities. The diverse views show different relationships between search entities and present the contextual information found within these results. All the views support the possibility to refine the search results using filtering and expanding. The process of filtering and expanding is iterative until the search is successful. An experiment is carried out on the Enron email data set. Two case studies are successfully investigated. Koven et al. BIB010 used the term "triage" in the title of the paper. The term "triage" is used in the sense of a tool, which allows selecting a subset of the emails that are related to a particular subject from the whole email set. However, the time spent to select can be quite long. The process of selecting the subset of the email is interactive heavily involving the user. The authors present an example that "the time to make the discovery and exploration including the skimming of at least 30 of the discovered emails was approximately 1 h". Therefore, the use of the tool in triage process is quite unlikely, unless the data captured is only in form of email.
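Returning to the SlackStick approach of Hegarty and Haggerty, the block-signature idea can be sketched briefly. The code below is an illustration rather than the published tool: the block size, the sampled byte offsets and the usage example are assumptions, and a real deployment would read the raw device sequentially rather than ordinary files.

```python
BLOCK_SIZE = 4096   # assumed filesystem block size
# Eleven sampled byte offsets within a block (hypothetical values).
OFFSETS = (7, 131, 256, 512, 777, 1024, 1500, 2048, 2600, 3100, 4000)

def file_signature(path):
    """Signature = bytes sampled at fixed offsets within the file's second block."""
    with open(path, 'rb') as f:
        f.seek(BLOCK_SIZE)                 # skip the first block
        block = f.read(BLOCK_SIZE)
    if len(block) < BLOCK_SIZE:
        return None                        # file too small to have a full second block
    return bytes(block[o] for o in OFFSETS)

def scan_image(image_path, signatures):
    """Read the image block by block and report blocks matching any known signature."""
    matches = []
    with open(image_path, 'rb') as img:
        block_no = 0
        while True:
            block = img.read(BLOCK_SIZE)
            if len(block) < BLOCK_SIZE:
                break
            probe = bytes(block[o] for o in OFFSETS)
            if probe in signatures:
                matches.append((block_no, signatures[probe]))
            block_no += 1
    return matches

# Hypothetical usage:
# sigs = {}
# for path in ['target1.jpg', 'target2.jpg']:
#     sig = file_signature(path)
#     if sig:
#         sigs[sig] = path
# print(scan_image('partition.dd', sigs))   # block numbers of candidate matches
```

Sampling only a handful of byte positions keeps each comparison cheap, which is consistent with the reported scan of a 1 GB partition completing in seconds; the tradeoff, as the authors note, is the balance between false positives and false negatives.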
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Forensic study of mobile devices is a relatively new field, dating from the early 2000s. The proliferation of phones (particularly smart phones) on the consumer market has caused a growing demand for forensic examination of the devices, which could not be met by existing Computer Forensics techniques. As a matter of fact, Law enforcement are much more likely to encounter a suspect with a mobile device in his possession than a PC or laptop and so the growth of demand for analysis of mobiles has increased exponentially in the last decade. Early investigations, moreover, consisted of live analysis of mobile devices by examining phone contents directly via the screen and photographing it with the risk of modifying the device content, as well as leaving many parts of the proprietary operating system inaccessible. The recent development of Mobile Forensics, a branch of Digital Forensics, is the answer to the demand of forensically sound examination procedures of gathering, retrieving, identifying, storing and documenting evidence of any digital device that has both internal memory and communication ability [1]. Over time commercial tools appeared which allowed analysts to recover phone content with minimal interference and examine it separately. By means of such toolkits, moreover, it is now possible to think of a new approach to Mobile Forensics which takes also advantage of "Data Mining" and "Machine Learning" theory. This paper is the result of study concerning cell phones classification in a real case of pedophilia. Based on Mobile Forensics "Triaging" concept and the adoption of self-knowledge algorithms for classifying mobile devices, we focused our attention on a viable way to predict phone usage's classifications. Based on a set of real sized phones, the research has been extensively discussed with Italian law enforcement cyber crime specialists in order to find a viable methodology to determine the likelihood that a mobile phone has been used to commit the specific crime of pedophilia, which could be very relevant during a forensic investigation. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don't match between call logs and address book entries on the same phone. 
<s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> A novel concept for improving the trustworthiness of results obtained from digital investigations is presented. Case Based Reasoning Forensic Auditor (CBR-FA) is a method by which results from previous digital forensic examinations are stored and reused to audit current digital forensic investigations. CBR-FA provides a method for evaluating digital forensic investigations in order to provide a practitioner with a level of reassurance that evidence that is relevant to their case has not been missed. The structure of CBR-FA is discussed as are the methodologies it incorporates as part of its auditing functionality. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Over the past few years, the number of crimes related to the worldwide diffusion of digital devices with large storage and broadband network connections has increased dramatically. In order to better address the problem, law enforcement specialists have developed new ideas and methods for retrieving evidence more effectively. In accordance with this trend, our research aims to add new pieces of information to the automated analysis of evidence according to Machine Learning-based “post mortem” triage. The scope consists of some copyright infringement court cases coming from the Italian Cybercrime Police Unit database. We draw our inspiration from this “low level” crime which is normally sat at the bottom of the forensic analyst's queue, behind higher priority cases and dealt with the lowest priority. The present work aims to bring order back in the analyst's queue by providing a method to rank each queued item, e.g. a seized device, before being analyzed in detail. The paper draws the guidelines for drive-under-triage classification (e.g. hard disk drive, thumb drive, solid state drive etc.), according to a list of crime-dependent features such as installed software, file statistics and browser history. The model, inspired by the theory of Data Mining and Machine Learning, is able to classify each exhibit by predicting the problem dependent variable (i.e. the class) according to the aforementioned crime-dependent features. In our research context the “class” variable identifies with the likelihood that a drive image may contain evidence concerning the crime and, thus, the associated item must receive an high (or low) ranking in the list. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. 
We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> This paper addresses the increasing resources overload being experienced by law enforcement digital forensics units with the proposal to introduce triage template pipelines into the investigative process, enabling devices and the data they contain to be examined according to a number of prioritised criteria. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The global diffusion of smartphones and tablets, exceeding traditional desktops and laptops market share, presents investigative opportunities and poses serious challenges to law enforcement agencies and forensic professionals. Traditional Digital Forensics techniques, indeed, may be no longer appropriate for timely analysis of digital devices found at the crime scene. Nevertheless, dealing with specific crimes such as murder, child abductions, missing persons, death threats, such activity may be crucial to speed up investigations. Motivated by this, the paper explores the field of Triage, a relatively new branch of Digital Forensics intended to provide investigators with actionable intelligence through digital media inspection, and describes a new interdisciplinary approach that merges Digital Forensics techniques and Machine Learning principles. The proposed Triage methodology aims at automating the categorization of digital media on the basis of plausible connections between traces retrieved (i.e. digital evidence) and crimes under investigation. As an application of the proposed method, two case studies about copyright infringement and child pornography exchange are then presented to actually prove that the idea is viable. The term ''feature'' will be regarded in the paper as a quantitative measure of a ''plausible digital evidence'', according to the Machine Learning terminology. In this regard, we (a) define a list of crime-related features, (b) identify and extract them from available devices and forensic copies, (c) populate an input matrix and (d) process it with different Machine Learning mining schemes to come up with a device classification. We perform a benchmark study about the most popular mining algorithms (i.e. Bayes Networks, Decision Trees, Locally Weighted Learning and Support Vector Machines) to find the ones that best fit the case in question. Obtained results are encouraging as we will show that, triaging a dataset of 13 digital media and 45 copyright infringement-related features, it is possible to obtain more than 93% of correctly classified digital media using Bayes Networks or Support Vector Machines while, concerning child pornography exchange, with a dataset of 23 cell phones and 23 crime-related features it is possible to classify correctly 100% of the phones. In this regards, methods to reduce the number of linearly independent features are explored and classification results presented. 
<s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The volume of digital forensic evidence is rapidly increasing, leading to large backlogs. In this paper, a Digital Forensic Data Reduction and Data Mining Framework is proposed. Initial research with sample data from South Australia Police Electronic Crime Section and Digital Corpora Forensic Images using the proposed framework resulted in significant reduction in the storage requirements — the reduced subset is only 0.196 percent and 0.75 percent respectively of the original data volume. The framework outlined is not suggested to replace full analysis, but serves to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations. Agencies that can undertake rapid assessment of seized data can more effectively target specific criminal matters. The framework may also provide a greater potential intelligence gain from analysis of current and historical data in a timely manner, and the ability to undertake research of trends over time. <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The evolution of modern digital devices is outpacing the scalability and effectiveness of Digital Forensics techniques. Digital Forensics Triage is one solution to this problem as it can extract evidence quickly at the crime scene and provide vital intelligence in time critical investigations. Similarly, such methodologies can be used in a laboratory to prioritize deeper analysis of digital devices and alleviate examination backlog. Developments in Digital Forensics Triage methodologies have moved towards automating the device classification process and those which incorporate Machine Learning principles have proven to be successful. Such an approach depends on crime-related features which provide a relevant basis upon which device classification can take place. In addition, to be an accepted and viable methodology it should be also as accurate as possible. Previous work has concentrated on the issues of feature extraction and classification, where less attention has been paid to improving classification accuracy through feature manipulation. In this regard, among the several techniques available for the purpose, we concentrate on feature weighting, a process which places more importance on specific features. 
A twofold approach is followed: on one hand, automated feature weights are quantified using Kullback-Leibler measure and applied to the training set whereas, on the other hand, manual weights are determined with the contribution of surveyed digital forensic experts. Experimental results of manual and automatic feature weighting are described which conclude that both the techniques are effective in improving device classification accuracy in crime investigations. <s> BIB010 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> When forensic triage techniques designed for feature phones are applied to smart phones, these recovery techniques return hundreds of thousands of results, only a few of which are relevant to the investigation. We propose the use of relevance feedback to address this problem: a small amount of investigator input can efficiently and accurately rank in order of relevance, the results of a forensic triage tool. We present LIFTR, a novel system for prioritizing information recovered from Android phones. We evaluate LIFTR's ranking algorithm on 13 previously owned Android smart phones and three recovery engines -- DEC0DE, Bulk Extractor, and Strings? using a standard information retrieval metric, Normalized Discounted Cumulative Gain (NDCG). LIFTR's initial ranking improves the NDCG scores of the three engines from 0.0 to an average of 0.73; and with as little as 5 rounds of feedback, the ranking score in- creases to 0.88. Our results demonstrate the efficacy of relevance feedback for quickly locating useful information among the large amount of irrelevant data returned by current recovery techniques. Further, our empirical findings show that a significant amount of important user information persists for weeks or even months in the expired space of a phone's memory. This phenomenon underscores the importance of using file system agnostic recovery techniques, which are the type of techniques that benefit most from LIFTR. <s> BIB011 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> This paper describes a five-phase, multi-threaded bootable approach to digital forensic triage, which is implemented in a product called Forensics2020. The first phase collects metadata for every logical file on the hard drive of a computer system. The second phase collects EXIF camera data from each image found on the hard drive. The third phase analyzes and categorizes each file based on its header information. The fourth phase parses each executable file to provide a complete audit of the software applications on the system; a signature is generated for every executable file, which is later checked against a threat detection database. The fifth and final phase hashes each file and records its hash value. All five phases are performed in the background while the first responder interacts with the system. This paper assesses the forensic soundness of Forensics2020. The tool makes certain changes to a hard drive that are similar to those made by other bootable forensic examination environments, although the changes are greater in number. The paper also describes the lessons learned from developing Forensics2020, which can help guide the development of other forensic triage tools. 
<s> BIB012 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Purpose – The purpose of this paper is to propose a novel approach that automates the visualisation of both quantitative data (the network) and qualitative data (the content) within emails to aid the triage of evidence during a forensics investigation. Email remains a key source of evidence during a digital investigation, and a forensics examiner may be required to triage and analyse large email data sets for evidence. Current practice utilises tools and techniques that require a manual trawl through such data, which is a time-consuming process. Design/methodology/approach – This paper applies the methodology to the Enron email corpus, and in particular one key suspect, to demonstrate the applicability of the approach. Resulting visualisations of network narratives are discussed to show how network narratives may be used to triage large evidence data sets. Findings – Using the network narrative approach enables a forensics examiner to quickly identify relevant evidence within large email data sets. Within... <s> BIB013 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> We present a new approach to digital forensic evidence acquisition and disk imaging called sifting collectors that images only those regions of a disk with expected forensic value. Sifting collectors produce a sector-by-sector, bit-identical AFF v3 image of selected disk regions that can be mounted and is fully compatible with existing forensic tools and methods. In our test cases, they have achieved an acceleration of >3× while collecting >95% of the evidence, and in some cases we have observed acceleration of up to 13×. Sifting collectors challenge many conventional notions about forensic acquisition and may help tame the volume challenge by enabling examiners to rapidly acquire and easily store large disks without sacrificing the many benefits of imaging. <s> BIB014 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The sharp rise in consumer computing, electronic and mobile devices and data volumes has resulted in increased workloads for digital forensic investigators and analysts. The number of crimes involving electronic devices is increasing, as is the amount of data for each job. This is becoming unscaleable and alternate methods to reduce the time trained analysts spend on each job are necessary.This work leverages standardised knowledge representations techniques and automated rule-based systems to encapsulate expert knowledge for forensic data. The implementation of this research can provide high-level analysis based on low-level digital artefacts in a way that allows an understanding of what decisions support the facts. Analysts can quickly make determinations as to which artefacts warrant further investigation and create high level case data without manually creating it from the low-level artefacts. Extraction and understanding of users and social networks and translating the state of file systems to sequences of events are the first uses for this work.A major goal of this work is to automatically derive 'events' from the base forensic artefacts. Events may be system events, representing logins, start-ups, shutdowns, or user events, such as web browsing, sending email. 
The same information fusion and homogenisation techniques are used to reconstruct social networks. There can be numerous social network data sources on a single computer; internet cache can locate?Facebook, LinkedIn, Google Plus caches; email has address books and copies of emails sent and received; instant messenger has friend lists and call histories. Fusing these into a single graph allows a more complete, less fractured view for an investigator.Both event creation and social network creation are expected to assist investigator-led triage and other fast forensic analysis situations. <s> BIB015 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> An issue that continues to impact digital forensics is the increasing volume of data and the growing number of devices. One proposed method to deal with the problem of "big digital forensic data": the volume, variety, and velocity of digital forensic data, is to reduce the volume of data at either the collection stage or the processing stage. We have developed a novel approach which significantly improves on current practice, and in this paper we outline our data volume reduction process which focuses on imaging a selection of key files and data such as: registry, documents, spreadsheets, email, internet history, communications, logs, pictures, videos, and other relevant file types. When applied to test cases, a hundredfold reduction of original media volume was observed. When applied to real world cases of an Australian Law Enforcement Agency, the data volume further reduced to a small percentage of the original media volume, whilst retaining key evidential files and data. The reduction process was applied to a range of real world cases reviewed by experienced investigators and detectives and highlighted that evidential data was present in the data reduced forensic subset files. A data reduction approach is applicable in a range of areas, including: digital forensic triage, analysis, review, intelligence analysis, presentation, and archiving. In addition, the data reduction process outlined can be applied using common digital forensic hardware and software solutions available in appropriately equipped digital forensic labs without requiring additional purchase of software or hardware. The process can be applied to a wide variety of cases, such as terrorism and organised crime investigations, and the proposed data reduction process is intended to provide a capability to rapidly process data and gain an understanding of the information and/or locate key evidence or intelligence in a timely manner. <s> BIB016 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Computer forensics faces a range of challenges due to the widespread use of computing technologies. Examples include the increasing volume of data and devices that need to be analysed in any single case, differing platforms, use of encryption and new technology paradigms (such as cloud computing and the Internet of Things). Automation within forensic tools exists, but only to perform very simple tasks, such as data carving and file signature analysis. Investigators are responsible for undertaking the cognitively challenging and time-consuming process of identifying relevant artefacts. 
Due to the volume of cyber-dependent (e.g., malware and hacking) and cyber-enabled (e.g., fraud and online harassment) crimes, this results in a large backlog of cases. With the aim of speeding up the analysis process, this paper investigates the role that unsupervised pattern recognition can have in identifying notable artefacts. A study utilising the Self-Organising Map (SOM) to automatically cluster notable artefacts was devised using a series of four cases. Several SOMs were created - a File List SOM containing the metadata of files based upon the file system, and a series of application level SOMs based upon metadata extracted from files themselves (e.g., EXIF data extracted from JPEGs and email metadata extracted from email files). A total of 275 sets of experiments were conducted to determine the viability of clustering across a range of network configurations. The results reveal that more than 93.5% of notable artefacts were grouped within the rank-five clusters in all four cases. The best performance was achieved by using a 10ź×ź10 SOM where all notables were clustered in a single cell with only 1.6% of the non-notable artefacts (noise) being present, highlighting that SOM-based analysis does have the potential to cluster notable versus noise files to a degree that would significantly reduce the investigation time. Whilst clustering has proven to be successful, operationalizing it is still a challenge (for example, how to identify the cluster containing the largest proportion of notables within the case). The paper continues to propose a process that capitalises upon SOM and other parameters such as the timeline to identify notable artefacts whilst minimising noise files. Overall, based solely upon unsupervised learning, the approach is able to achieve a recall rate of up to 93%. <s> BIB017 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Abstract Large email data sets are often the focus of criminal and civil investigations. This has created a daunting task for investigators due to the extraordinary size of many of these collections. Our work offers an interactive visual analytic alternative to the current, manually intensive methodology used in the search for evidence in large email data sets. These sets usually contain many emails which are irrelevant to an investigation, forcing investigators to manually comb through information in order to find relevant emails, a process which is costly in terms of both time and money. To aid the investigative process we combine intelligent preprossessing, a context aware visual search, and a results display that presents an integrated view of diverse information contained within emails. This allows an investigator to reduce the number of emails that need to be viewed in detail without the current tedious manual search and comb process. <s> BIB018
To summarize the field of live triage, the noteworthy research focuses are as follows:
1. The framing of triage as a real-time computation problem with limited allotted time and resources, presented by Roussev et al. BIB005 . The idea is that an increase in performance can be achieved if acquisition and processing start and complete at almost the same time. The implementation of the forensic system HANSKEN proves the appropriateness of the presented idea
2. The selective imaging approaches to reduce data volume, presented by Grier and Richard III BIB014 and Quick and Choo BIB016 BIB008 . The difference between the approaches is in selecting the regions that have forensic value. Grier and Richard III BIB014 state that the profiles must be created and stored in a library. Moreover, Quick and Choo BIB008 suggest the idea of thumbnailing video, movie, and picture files
3. The introduction of triage template pipelines into the investigative process for the most popular types of digital crimes, presented by Overill et al. BIB006 . However, the authors do not enumerate these types of crimes and provide only the DDoS and P2P template diagrams without discussing the details
4. The artificial intelligence approaches presented by Turnbull and Randhawa BIB015 and Peersman et al. . Turnbull and Randhawa BIB015 describe an approach to assist a less technically skilled user in running a triage tool. Peersman et al. present an approach to automatically label new child sexual abuse media

To summarize the field of post-mortem triage, the noteworthy research focuses are as follows:
1. Storing and using the knowledge of past cases, presented by Horsman et al. BIB009 BIB003 and Bashir and Khan [39]
2. The use of machine learning techniques, presented by Marturana and Tacconi BIB007 BIB001 BIB004 , McClelland and Marturana BIB010 , and Fahdi et al. BIB017 (an illustrative sketch of this idea follows these summaries). The trend is promising because such techniques are indeed valuable in many research areas; however, the presented research works are immature

To summarize the field of triage of mobile devices, there is only a single noteworthy research achievement:
1. The information recovery engine DEC0DE, offered by Walls et al. BIB002 , and the information prioritization system LIFTR, which uses the data obtained from DEC0DE, offered by Varma et al. BIB011

To summarize the field of triage tools, the noteworthy research achievements are as follows:
1. The method of similarity digests, offered by Roussev and Quates
2. The online GRR Rapid Response system used for incident response, offered by Moser and Cohen [1]
3. The multi-threaded bootable tool Forensics2020, which allows the examiner to interact with the evidence while the tool is processing data, offered by Baggili et al. BIB012
4. The visualization of email data, offered by Haggerty et al. BIB013 . Koven et al. BIB018 presented an approach to email data visualization as well; however, the provided runtimes are quite long and, therefore, the tool is not suitable for triage purposes
5. The SlackStick approach to identify files of interest, in which several predetermined bytes are chosen to represent the file, offered by Hegarty and Haggerty [64]
6. The distributed digital forensic system HANSKEN that works on a big data platform, offered by van Beek et al.
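As an illustration of the machine-learning triage approaches summarized above, the following schematic example casts device triage as a supervised classification problem over crime-related features. The feature matrix, labels and models are hypothetical and are not taken from any of the cited studies; the snippet only shows the general workflow of training and cross-validating a classifier that ranks seized devices.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per seized device, columns are
# crime-related counts (e.g., P2P clients installed, number of media files,
# suspicious browser-history hits).
X = np.array([
    [2, 1500, 40],
    [0,   30,  0],
    [3, 4200, 75],
    [0,   10,  1],
    [1,  900, 12],
    [0,   55,  2],
])
# 1 = device likely contains evidence (high triage priority), 0 = unlikely.
y = np.array([1, 0, 1, 0, 1, 0])

for name, model in [('Naive Bayes', GaussianNB()),
                    ('Linear SVM', SVC(kernel='linear'))]:
    scores = cross_val_score(model, X, y, cv=3)
    print(f'{name}: mean cross-validated accuracy {scores.mean():.2f}')
```

The cited works additionally explore feature selection and feature weighting, which they report to matter considerably for classification accuracy.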
Network service orchestration standardization: A technology survey <s> Requirements <s> Operator interviews and anecdotal evidence suggest that an operator's ability to manage a network decreases as the network becomes more complex. However, there is currently no way to systematically quantify how complex a network's design is nor how complexity may impact network management activities. In this paper, we develop a suite of complexity models that describe the routing design and configuration of a network in a succinct fashion, abstracting away details of the underlying configuration languages. Our models, and the complexity metrics arising from them, capture the difficulty of configuring control and data plane behaviors on routers. They also measure the inherent complexity of the reachability constraints that a network implements via its routing design. Our models simplify network design and management by facilitating comparison between alternative designs for a network. We tested our models on seven networks, including four university networks and three enterprise networks. We validated the results through interviews with the operators of five of the networks, and we show that the metrics are predictive of the issues operators face when reconfiguring their networks. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Telecom providers struggle with low service flexibility, increasing complexity and related costs. Although "cloud" has been an active field of research, there is currently little integration between the vast networking assets and data centres of telecom providers. UNIFY considers the entire network, from home networks up to data centre, as a "unified production environment" supporting virtualization, programmability and automation and guarantee a high level of agility for network operations and for deploying new, secure and quality services, seamlessly instantiatable across the entire infrastructure. UNIFY focuses on the required enablers and will develop an automated, dynamic service creation platform, leveraging fine-granular service chaining. A service abstraction model and a proper service creation language and a global orchestrator, with novel optimization algorithms, will enable the automatic optimal placement of networking, computing and storage components across the infrastructure. New management technologies based on experience from DCs, called Service Provider DevOps, will be developed and integrated into the orchestration architecture to cope with the dynamicity of services. The applicability of a universal node based on commodity hardware will be evaluated in order to support both network functions and traditional data centre workloads, with an investigation of the need of hardware acceleration. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> New and emerging use cases, such as the interconnection of geographically distributed data centers (DCs), are drawing attention to the requirement for dynamic end-to-end service provisioning, spanning multiple and heterogeneous optical network domains. This heterogeneity is, not only due to the diverse data transmission and switching technologies, but also due to the different options of control plane techniques. 
In light of this, the problem of heterogeneous control plane interworking needs to be solved, and in particular, the solution must address the specific issues of multi-domain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints. In this article, some of the recent activities regarding the Software-Defined Networking (SDN) orchestration are reviewed to address such a multi-domain control plane interworking problem. Specifically, three different models, including the single SDN controller model, multiple SDN controllers in mesh, and multiple SDN controllers in a hierarchical setting, are presented for the DC interconnection network with multiple SDN/ OpenFlow domains or multiple OpenFlow/ Generalized Multi-Protocol Label Switching (GMPLS) heterogeneous domains. In addition, two concrete implementations of the orchestration architectures are detailed, showing the overall feasibility and procedures of SDN orchestration for the end-to-end service provisioning in multi-domain data center optical networks. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Network Function Virtualization (NFV) enables to implement network functions in software, high-speed packet processing functions which traditionally are dominated by hardware implementations. Virtualized Network Functions (NFs) may be deployed on generic-purpose servers, e.g., in datacenters. The latter enables flexibility and scalability which previously were only possible for web services deployed on cloud platforms. The merit of NFV is challenged by control challenges related to the selection of NF implementations, discovery and reservation of sufficient network and server resources, and interconnecting both in a way which ful fills SLAs related to reliability and scalability. This paper details the role of a scalable orchestrator in charge of finding and reserving adequate resources. The latter will steer network and cloud control and management platforms to actually reserve and deploy requested services. We highlight the role of involved interfaces, propose elements of algorithmic components, and will identify major blocks in orchestration time in a proof of concept prototype which accounts for most functional parts in the considered architecture. Based on these evaluations, we propose several architectural enhancements in order to implement a highly scalable network orchestrator for carrier and cloud networks. <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Software Defined Networking (SDN) and Network Function Virtualization (NFV) provide an alluring vision of how to transform broadcast, contribution and content distribution networks. In our laboratory we assembled a multi-vendor, multi-layer media network environment that used SDN controllers and NFV-based applications to schedule, coordinate, and control media flows across broadcast and contribution network infrastructure. — This paper will share our experiences of investigating, designing and experimenting in order to build the next generation broadcast and contribution network. We will describe our experience of dynamic workflow automation of high-bandwidth broadcast and media services across multi-layered optical network environment using SDN-based technologies for programmatic forwarding plane control and orchestration of key network functions hosted on virtual machines. 
Finally, we will outline the prospects for the future of how packet and optical technologies might continue to scale to support the transport of increasingly growing broadcast media. <s> BIB005 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Network virtualization is an emerging technique that enables multiple tenants to share an underlying physical infrastructure, isolating the traffic running over different virtual infrastructures/tenants. This technique aims to improve network utilization, while reducing the complexities in terms of network management for operators. Applied to this context, software-defined networking (SDN) paradigm can ease network configurations by enabling network programability and automation, which reduces the amount of operations required from both service and infrastructure providers. SDN techniques are decreasing vendor lock-in issues due to specific configuration methods or protocols. Application-based network operations (ABNO) are a toolbox of key network functional components with the goal of offering application-driven network management. Service provisioning using ABNO may involve direct configuration of data plane elements or delegate it to several control plane modules. We validate the applicability of ABNO to multitenant virtual networks in multitechnology optical domains based on two scenarios, in which multiple control plane instances are orchestrated by the architecture. Congestion detection and failure recovery are chosen to demonstrate fast recalculation and reconfiguration, while hiding the configurations in the physical layer from the upper layer. <s> BIB006
Service orchestration is a complex, high-level control function, and relevant research efforts have proposed a wide range of goals for a service orchestrator. We identify the following functional properties:

Coordination: Operator infrastructures comprise a wide range of network and computation systems providing a diverse set of resources, including network bandwidth, CPU and storage. Effective deployment of a network service depends on their coordinated configuration. The network manager must provision network resources and modify the forwarding policy of the network to ensure ordering and connectivity between the service NFs. This process becomes complex when considering the different control capabilities and interfaces across network technologies found in the metropolitan, access and wide area layers of the operator network. Furthermore, the network manager must configure the devices that will host the service NFs, either in software or hardware. The service orchestrator is responsible for abstracting the management and configuration heterogeneity of the different technologies and administrative domains BIB006 BIB002 .

Automation: Existing infrastructures incur a significant operational workload for the configuration, troubleshooting and management of network services. Network technologies typically provide different configuration interfaces in each network layer and require manual and repetitive configuration by network managers to deploy a network service BIB001 . In addition, vertical integration of network devices requires extensive human intervention to deploy and manage a network service in a multi-vendor and multi-technology environment. A key goal for service orchestration is to minimize human intervention during the deployment and management of network services. Efforts in programmable network and NFV control, like SDN, ABNO and ETSI NFV MANO, provide low-level automation capabilities, which can be exploited by the service orchestrator to synthesize high-level automated service deployment and management mechanisms BIB003 .

Resource Provision and Monitoring: The specifications of network services contain complex SLA guarantees, which complicate network management. For example, allocating resources that meet service delivery guarantees is an NP-hard problem from the perspective of the operator, and the re-optimization of a large network can take days. In parallel, existing service deployment approaches rely on static resource allocations and require resource provision for the worst-case service load scenarios. A key goal for service orchestration is to enable dynamic and flexible resource control and monitoring mechanisms, which converge resource control across the underlying technologies and abstract their heterogeneity [10, BIB004 ].

Efforts towards service orchestration are still limited. Relevant architecture and interface specifications define mechanisms for effective automation and programmability of individual resource types, like the SDN and ABNO paradigms for network resources and the NFV MANO for compute and storage resources. Nonetheless, these architectures remain low-level and provide only partial control over the infrastructure towards service orchestration.
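To make the coordination and resource-provisioning requirements more concrete, the sketch below shows one hypothetical way a service request spanning network, compute and storage resources could be modelled and checked against available capacity. The data model and the first-fit placement are purely illustrative; they are not taken from any cited architecture, and a production orchestrator faces an NP-hard placement problem rather than this greedy simplification.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NFRequirement:
    name: str           # e.g. "firewall", "BBU"
    cpu_cores: int
    storage_gb: int

@dataclass
class ServiceRequest:
    nfs: List[NFRequirement]    # ordered service chain
    bandwidth_mbps: int         # end-to-end bandwidth guarantee
    max_latency_ms: float       # SLA latency bound

@dataclass
class DatacenterSite:
    name: str
    free_cpu: int
    free_storage_gb: int

def greedy_placement(request: ServiceRequest, sites: List[DatacenterSite]) -> Dict[str, str]:
    """Naive first-fit placement of each NF onto a site with spare capacity."""
    placement = {}
    for nf in request.nfs:
        for site in sites:
            if site.free_cpu >= nf.cpu_cores and site.free_storage_gb >= nf.storage_gb:
                site.free_cpu -= nf.cpu_cores
                site.free_storage_gb -= nf.storage_gb
                placement[nf.name] = site.name
                break
        else:
            raise RuntimeError(f'No site can host {nf.name}; request must be rejected or re-planned')
    return placement

# Hypothetical usage:
# request = ServiceRequest(nfs=[NFRequirement('firewall', 4, 20), NFRequirement('DPI', 8, 50)],
#                          bandwidth_mbps=500, max_latency_ms=10.0)
# sites = [DatacenterSite('edge-1', 16, 200), DatacenterSite('core-1', 64, 1000)]
# print(greedy_placement(request, sites))
```

Even this toy version shows why dynamic resource control is needed: a rejected request can only be re-planned if the orchestrator is able to monitor current utilization and reshape earlier allocations.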
Service orchestration initiatives from network operators and vendors BIB005 propose the development of a new orchestration layer above and beyond the existing individual control mechanisms, which will capitalize on their low-level automation and flexibility capabilities to support a service-oriented control abstraction exposed to the OSS/BSS, as depicted in Figure 1 . In terms of network control, the service orchestrator can access low-level forwarding interfaces, as well as high-level control interfaces implementing standardized forwarding control mechanisms, like Segment Routing and Service Function Chaining, through the network controller. In parallel, NF management across the operator datacenters can be achieved through a dual-layer control and management stack, as suggested by relevant NF management architectures. The lower layer contains the Virtual Infrastructure Manager (VIM), which manages and configures the virtualization policy of compute and storage resources. The top layer contains the VNF Manager (VNFM), responsible for the configuration, control and monitoring of individual NFs. The service orchestrator will operate on top of these two management services (network and IT, see Figure 1 ) and will be responsible for exploiting their functionality to provide network service delivery, given the policy of the operator, channeled through the OSS. The effectiveness of the service orchestrator highly depends on the granularity and flexibility of the underlying control interfaces. This paper surveys standardization efforts for infrastructure control in an effort to discuss the existing opportunities and challenges towards service orchestration.
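The layered arrangement described above can be illustrated with a hypothetical deployment routine in which the service orchestrator drives a network controller, a VIM and a VNFM through abstract client interfaces. All class and method names are assumptions made for this sketch (it reuses the hypothetical ServiceRequest structure from the previous example); they do not correspond to the APIs of any specific controller, VIM or VNFM product.

```python
class ServiceOrchestrator:
    """Toy orchestrator sitting above a network controller, a VIM and a VNFM."""

    def __init__(self, network_ctrl, vim, vnfm):
        self.network_ctrl = network_ctrl   # programs forwarding (e.g. service chains)
        self.vim = vim                     # allocates compute/storage for VNFs
        self.vnfm = vnfm                   # instantiates, configures and monitors VNFs

    def deploy_service(self, service):
        # 1. Reserve compute and storage for every NF in the chain (via the VIM).
        vms = [self.vim.allocate(nf.cpu_cores, nf.storage_gb) for nf in service.nfs]

        # 2. Instantiate and configure the NFs through the VNF manager.
        instances = [self.vnfm.instantiate(nf.name, vm) for nf, vm in zip(service.nfs, vms)]

        # 3. Stitch the instances together by updating the forwarding policy.
        self.network_ctrl.create_chain(instances, bandwidth=service.bandwidth_mbps)

        return instances

    def sla_violations(self, instances, service):
        # Report instances breaching the latency bound so the OSS/BSS can react.
        return [i for i in instances
                if self.vnfm.latency_ms(i) > service.max_latency_ms]
```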
Network service orchestration standardization: A technology survey <s> Radio Access Network (RAN) <s> The cellular industry is evaluating architectures to distribute the signal processing in radio access networks. One of the options is to process the signals of all base stations on a shared pool of compute resources in a central location. In this centralized architecture, the existing base stations will be replaced with just the antennas and a few other active RF components, and the remainder of the digital processing including the physical layer will be carried out in a central location. This model has potential benefits that include a reduction in the cost of operating the network due to fewer site visits, easy upgrades, and lower site lease costs, and an improvement in the network performance with joint signal processing techniques that span multiple base stations. Further there is a potential to exploit variations in the processing load across base stations, to pool the base stations into fewer compute resources, thereby allowing the operator to either reduce energy consumption by turning the remaining processors off or reducing costs by provisioning fewer compute resources. We focus on this aspect in this paper. Specifically, we make the following contributions in the paper. Based on real-world data, we characterise the potential savings if shared homogeneous compute resources are used to process the signals from multiple base stations in the centralized architecture. We show that the centralized architecture can potentially result in savings of at least 22 % in compute resources by exploiting the variations in the processing load across base stations. These savings are achievable with statistical guarantees on successfully processing the base station's signals. We also design a framework that has two objectives: (i) partitioning the set of base stations into groups that are simultaneously processed on a shared homogeneous compute platform for a given statistical guarantee, and (ii) scheduling the set of base stations allocated to a platform in order to meet their real-time processing requirements. This partitioning and scheduling framework saves up to 19 % of the compute resources for a probability of failure of one in 100 million. We refer to this solution as CloudIQ. Finally we implement and extensively evaluate the CloudIQ framework with a 3GPP compliant implementation of 5 MHz LTE. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Radio Access Network (RAN) <s> Driven by the need to cope with exponentially growing mobile data traffic and to support new traffic types from massive numbers of machine-type devices, academia and industry are thinking beyond the current generation of mobile cellular networks to chalk a path towards fifth generation (5G) mobile networks. Several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloud RANs, application of SDN principles, exploiting new and unused portions of spectrum, use of massive MIMO and full-duplex communications. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems. Towards this end, we present OpenAirInterface (OAI) as a suitably flexible platform. 
In addition, we discuss the use of OAI in the context of several widely mentioned 5G research directions. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Radio Access Network (RAN) <s> Past years have witnessed the surge of mobile broadband Internet traffic due to the broad adoption of a number of major technical advances in new wireless technologies and consumer electronics. In this respect, mobile networks have greatly increased their availability to meet the exponentially growing capacity demand of modern mobile applications and services. The upcoming scenario in the near future lays down the possibility of a continuum of communications thanks also to the deployment of so called small cells. Conventional cellular networks and the small cells will form the foundation of this pervasive communication system. Therefore, future wireless systems must carry the necessary scalability and seamless operation to accommodate the users and integrate the macro cells and small cells together. In this work we propose the V-Cell concept and architecture. V-Cell is potentially leading to a paradigm shift when approaching the system designs that allows to overcome most of the limitations of physical layer techniques in conventional wireless networks. <s> BIB003
The 3G standards split the mobile RAN into two functional blocks: the Remote Radio Head (RRH), which receives and transmits the wireless signal and applies the appropriate signal transformations and amplification, and the Base Band Unit (BBU), which runs the MAC protocol and coordinates neighboring cells. The channel between these two entities has high bandwidth and ultra-low latency requirements and the two systems are typically co-located in production deployments. Nonetheless, this design choice increases the operator cost to deploy and operate its RAN. BBUs are expensive components which increase the overall acquisition cost of a base station, while the BBU cooling requirements make the RAN a significant contributor to the aggregate power consumption of the operator . Recent trends in RAN design separate the two components by moving the BBU to the central office of the operator; an architectural paradigm commonly termed Cloud-RAN (C-RAN). C-RAN significantly reduces deployment and operational costs and improves the elasticity and resilience of the RAN. In parallel, the centralization of multiple RRHs under the control of a single BBU improves resource utilization and cell handovers, and minimizes cell interference. Currently, multiple interfaces, architectures and testbeds provide the technological capabilities to run and test C-RAN systems BIB001 BIB002 , while vendors already provide production-ready virtualized BBU appliances [17] . In addition, novel control abstractions can converge RAN control with underlying transport technologies and enable flexible deployment strategies BIB003 . A challenge for C-RAN architectures is the multi-Gb/s bandwidth requirement and the strict sub-millisecond latency and jitter demands for the links between the RRH and the datacenter [19] . These connectivity requirements exhibit significant variability (from a few Mb/s to 30 Gb/s) within the course of a day, reflecting the varying load of mobile cells, as well as the signal modulation and channel configuration. To provide flexible and on-demand front-haul connectivity with strong latency guarantees, operators require novel orchestration mechanisms supporting dynamic and multi-technology resource management. In addition, effective RAN virtualization requires a framework for the management and monitoring of BBU instances to provide service resiliency. The service orchestrator can monitor the performance of the BBU VNF instances and adjust the compute resource allocation, the VNF replication degree and the load distribution policy. In parallel, the orchestrator can improve front-haul efficiency by mapping the connectivity requirements between the BBU and the RRH into the network resource allocation policy. The 3rd Generation Partnership Project (3GPP) is actively exploring the applicability of NFV technologies on a range of mobile network use-cases, like fault-management and performance monitoring, and has defined a set of management requirements for the RAN, the Mobile Core Network and the IP Multimedia Subsystem (IMS) . In parallel, the 5G Public Private Partnership (5G PPP), within its effort to standardize the technologies and protocols for the next-generation communication network, defines end-to-end network service orchestration as a core design goal .
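As a minimal sketch of the orchestration task discussed above, the following example scales the number of BBU VNF replicas to the monitored cell load and derives per-RRH front-haul bandwidth reservations. The capacity figure, head-room factor and overhead are illustrative assumptions rather than values taken from any standard or cited testbed.

```python
import math

# Hypothetical dimensioning figures; real BBU sizing depends on the radio
# configuration (channel bandwidth, MIMO layers, modulation) reported by monitoring.
BBU_CAPACITY_MBPS = 10_000     # assumed processing capacity per BBU VNF replica
TARGET_UTILISATION = 0.7       # keep head-room for load spikes

def required_bbu_replicas(cell_loads_mbps):
    """Number of BBU VNF replicas needed for the aggregate RRH load."""
    total = sum(cell_loads_mbps)
    return max(1, math.ceil(total / (BBU_CAPACITY_MBPS * TARGET_UTILISATION)))

def fronthaul_reservations_mbps(cell_loads_mbps, overhead=1.2):
    """Per-RRH bandwidth to request from the transport controller (incl. overhead)."""
    return [round(load * overhead, 1) for load in cell_loads_mbps]

# Example load snapshot (Mb/s per RRH) from the monitoring subsystem.
loads = [120.0, 850.5, 2300.0, 40.2]
print(required_bbu_replicas(loads))         # -> 1
print(fronthaul_reservations_mbps(loads))
```

The orchestrator would re-evaluate such a plan periodically, since the front-haul demand varies widely over the course of a day.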
Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> Content distribution networks (CDNs) are a mechanism to deliver content to end users on behalf of origin Web sites. Content distribution offloads work from origin servers by serving some or all of the contents of Web pages. We found an order of magnitude increase in the number and percentage of popular origin sites using CDNs between November 1999 and December 2000.In this paper we discuss how CDNs are commonly used on the Web and define a methodology to study how well they perform. A performance study was conducted over a period of months on a set of CDN companies employing the techniques of DNS redirection and URL rewriting to balance load among their servers. Some CDNs generally provide better results than others when we examine results from a set of clients. The performance of one CDN company clearly improved between the two testing periods in our study due to a dramatic increase in the number of distinct servers employed in its network. More generally, the results indicate that use of a DNS lookup in the critical path of a resource retrieval does not generally result in better server choices being made relative to client response time in either average or worst case situations. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> Today a spectrum of solutions are available for istributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on-demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals leading to a win-win situation for both ISP and CDN. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> The demand for rich multimedia services over mobile networks has been soaring at a tremendous pace over recent years. However, due to the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access networks and the backhaul network cannot practically cope with the explosive growth in mobile traffic. 
Recently, we have observed the emergence of promising mobile content caching and delivery techniques, by which popular contents are cached in the intermediate servers (or middleboxes, gateways, or routers) so that demands from users for the same content can be accommodated easily without duplicate transmissions from remote servers; hence, redundant traffic can be significantly eliminated. In this article, we first study techniques related to caching in current mobile networks, and discuss potential techniques for caching in 5G mobile networks, including evolved packet core network caching and radio access network caching. A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed. Using trace-driven simulations, we evaluate the performance of the proposed scheme and validate the various advantages of the utilization of caching content in 5G mobile networks. Furthermore, we conclude the article by exploring new relevant opportunities and challenges. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> We propose joint bandwidth provisioning and base station caching for video delivery in software-defined PONs. Performance evaluation via custom simulation models reveals 30% increase in served video requests and 50% reduction in service response delays. <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> High quality video streaming has become an essential part of many consumers' lives.We designed OpenCache; an OpenFlow-assisted in-network caching service.OpenCache benefits last mile environments by improving network utilisation.OpenCache increases the Quality of Experience for the end-user.OpenCache was evaluated on a large pan-European OpenFlow testbed with clear benefits. High quality online video streaming, both live and on-demand, has become an essential part of many consumers' lives. The popularity of video streaming, however, places a burden on the underlying network infrastructure. This is because it needs to be capable of delivering significant amounts of data in a time-critical manner to users. The Video-on-Demand (VoD) distribution paradigm uses a unicast independent flow for each user request. This results in multiple duplicate flows carrying the same video assets that only serve to exacerbate the burden placed upon the network. In this paper we present OpenCache: a highly configurable, efficient and transparent in-network caching service that aims to improve the VoD distribution efficiency by caching video assets as close to the end-user as possible. OpenCache leverages Software Defined Networking technology to benefit last mile environments by improving network utilisation and increasing the Quality of Experience for the end-user. Our evaluation on a pan-European OpenFlow testbed uses adaptive bitrate video to demonstrate that with the use of OpenCache, streaming applications play back higher quality video and experience increased throughput, higher bitrate, and shorter start up and buffering times. <s> BIB005
CDN services provide efficient distribution of static content on behalf of third-party Internet applications BIB001 . They rely on a well-provisioned and highly-available network of cache servers and allow end-users to retrieve static content with low latency by automatically redirecting them to an appropriate cache server, based on the user location, the caching policy and cache load. CDN traffic currently constitutes a large portion of the operator traffic volumes and providers, like Akamai, serve 15-30% of the global Internet traffic . The CDN service chain is simple and consists of a load-balancing function and a cache function, as depicted in Figure 2. The greatest challenge in the deployment of such a service is the aggregate network data volume of the service and the large number of network end-points. As a result, temporal variations in CDN traffic patterns can have a dramatic effect on the traffic matrix of the operator and affect Internet service delivery. In parallel, CDN-ISP integration lacks support for dynamic resource provisioning, which is needed to gracefully manage the dynamic traffic patterns. Connectivity relies on fixed-capacity peering relationships through popular IXPs or CDN-operated peering locations , which must be provisioned for the worst-case scenario. The current design of CDN services introduces an interesting joint optimization problem between operators and CDN service providers. A CDN service can bring content closer to the user, enable dynamic deployment of caching NFs in the central offices of the operator and enforce network resource guarantees. The service can provide sufficient elasticity for the CDN caching layer, while the ISP can reduce core network load. Similar approaches have been proposed in the context of mobile operators, where mobile CDNs have emerged to provide faster access to mobile apps, facilitate mobile video streaming and support dynamic content BIB003 BIB004 . In parallel, new network control architectures based on SDN and NFV principles enable CDN services to localize users and offload the redirection task to the network forwarding policy BIB005 BIB002 . These approaches provide an innovative environment to improve CDN functionality, but require a flexible control mechanism to integrate CDN services and infrastructures. A service orchestrator can autonomously adapt the CDN service deployment plan to the CDN load characteristics, using a policy specification from the CDN provider. In parallel, the orchestrator can monitor traffic volumes to infer content locality and hotspot development and deploy NF caches close to the end-user to improve latency and network efficiency.
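The hotspot-driven cache placement sketched below illustrates this orchestration logic in a few lines of Python; the input format, thresholds and site names are assumptions made for the example and do not correspond to any cited interface.

```python
from collections import Counter

# Assumed input: request counters per (central-office site, content item),
# e.g. exported by the operator's traffic monitoring pipeline.
def plan_cache_placement(request_log, min_requests=1000, max_caches=5):
    """Return the central-office sites where deploying a cache NF pays off."""
    per_site = Counter()
    for site, content_id, count in request_log:
        per_site[site] += count
    return [site for site, total in per_site.most_common(max_caches)
            if total >= min_requests]

log = [("co-london", "video-42", 900), ("co-london", "video-7", 600),
       ("co-leeds", "video-42", 300)]
print(plan_cache_placement(log))   # -> ['co-london']
```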
Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> Active networks are a novel approach to network architecture in which the switches (or routers) of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. The authors discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities. They illustrate how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them. These active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> This document describes the use of RSVP (Resource Reservation Protocol), including all the necessary extensions, to establish label-switched paths (LSPs) in MPLS (Multi-Protocol Label Switching). Since the flow along an LSP is completely identified by the label applied at the ingress node of the path, these paths may be treated as tunnels. A key application of LSP tunnels is traffic engineering with MPLS as specified in RFC 2702. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> The routers in an Autonomous System (AS) must distribute the information they learn about how to reach external destinations. Unfortunately, today's internal Border Gateway Protocol (iBGP) architectures have serious problems: a "full mesh" iBGP configuration does not scale to large networks and "route reflection" can introduce problems such as protocol oscillations and persistent loops. Instead, we argue that a Routing Control Platform (RCP) should collect information about external destinations and internal topology and select the BGP routes for each router in an AS. RCP is a logically-centralized platform, separate from the IP forwarding plane, that performs route selection on behalf of routers and communicates selected routes to the routers using the unmodified iBGP protocol. RCP provides scalability without sacrificing correctness. In this paper, we present the design and implementation of an RCP prototype on commodity hardware. Using traces of BGP and internal routing data from a Tier-1 backbone, we demonstrate that RCP is fast and reliable enough to drive the BGP routing decisions for a large network. We show that RCP assigns routes correctly, even when the functionality is replicated and distributed, and that networks using RCP can expect comparable convergence delays to those using today's iBGP architectures. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> This document specifies the Path Computation Element (PCE) ::: Communication Protocol (PCEP) for communications between a Path ::: Computation Client (PCC) and a PCE, or between two PCEs. Such ::: interactions include path computation requests and path computation ::: replies as well as notifications of specific states related to the use ::: of a PCE in the context of Multiprotocol Label Switching (MPLS) and ::: Generalized MPLS (GMPLS) Traffic Engineering. 
PCEP is designed to be ::: flexible and extensible so as to easily allow for the addition of ::: further messages and objects, should further requirements be expressed ::: in the future. [STANDARDS-TRACK] <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> Peer-to-peer applications, such as file sharing, real-time ::: communication, and live media streaming, use a significant amount of ::: Internet resources. Such applications often transfer large amounts of ::: data in direct peer-to-peer connections. However, they usually have ::: little knowledge of the underlying network topology. As a result, they ::: may choose their peers based on measurements and statistics that, in ::: many situations, may lead to suboptimal choices. This document ::: describes problems related to optimizing traffic generated by peer-to- ::: peer applications and associated issues such optimizations raise in ::: the use of network-layer information. <s> BIB005 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> The Network Configuration Protocol (NETCONF) defined in this document ::: provides mechanisms to install, manipulate, and delete the ::: configuration of network devices. It uses an Extensible Markup ::: Language (XML)-based data encoding for the configuration data as well ::: as the protocol messages. The NETCONF protocol operations are realized ::: as remote procedure calls (RPCs). This document obsoletes RFC 4741. ::: [STANDARDS-TRACK] <s> BIB006 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> We present our experiences to date building ONOS (Open Network Operating System), an experimental distributed SDN control platform motivated by the performance, scalability, and availability requirements of large operator networks. We describe and evaluate two ONOS prototypes. The first version implemented core features: a distributed, but logically centralized, global network view; scale-out; and fault tolerance. The second version focused on improving performance. Based on experience with these prototypes, we identify additional steps that will be required for ONOS to support use cases such as core network traffic engineering and scheduling, and to become a usable open source, distributed network OS platform that the SDN community can build upon. <s> BIB007 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> Networking is undergoing a transformation throughout our industry. The shift from hardware driven products with ad hoc control to Software Defined Networks is now well underway. In this paper, we adopt the perspective of the Promise Theory to examine the current state of networking technologies so that we might see beyond specific technologies to principles for building flexible and scalable networks. Today's applications are increasingly distributed planet-wide in cloud-like hosting environments. Promise Theory's bottom-up modelling has been applied to server management for many years and lends itself to principles of self-healing, scalability and robustness. 
<s> BIB008 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> As IP networks grow more complicated, these networks require a new interaction mechanism between customers and their networks based on intent rather than detailed specifics. An intent-based language is needed to enable customers to easily describe their diverse intent for network connectivity to the network management systems. This document describes the problem the Intent-Based NEtwork Modelling (IB-NEMO) language is trying to solve, a summary of the use cases that demonstrate this problem, and a proposed scope of work. Part of the scope is the validation of the language as a minimal (or reduced) subset. The IB-NEMO language consists of commands exchanged between an application and a network manager/controller. Some would call this boundary between the application and the network management system a northbound interface (NBI). IB-NEMO focuses on creating a minimal subset of the total possible intent-based commands to pass across this NBI. By creating a minimal subset (about 20% of the total possible) of all intent commands, IB-NEMO can be a simple intent interface for most applications (hopefully 80%). Part of the validation of this command language is to provide test cases where a set of commands are used to convey information for a use case which results in a particular data model in the network controller. <s> BIB009
SDN is a recent network paradigm aiming for automated, flexible and user-controlled network forwarding and management. SDN is motivated by earlier network programmability efforts, including Active Networks BIB001 , ForCES , RCP BIB003 and Tempest . Unlike most earlier network programmability architectures, which explored clean-slate designs of data plane protocols, SDN maintains backwards compatibility with existing network technologies. SDN design is driven by four major design goals: i) network control and data plane separation; ii) logical control centralization; iii) open and flexible interfaces between control layers; and iv) network programmability. SDN standardization efforts are primarily driven by the Open Networking Foundation (ONF), while the IRTF SDNRG WG explores complementary standards for the higher control layers. Similar standardization activities take place within various SDOs, namely the Broadband Forum (broadband network applications) and the International Telecommunication Union (ITU) study groups (SG) 11 (SDN signaling), SG 13 (SDN applications in future networks), SG 15 (transport network applications of SDN) and SG 17 (applications of SDN for secure services), but efforts in these SDOs are currently in early stages and provide initial problem statements and requirements analysis. Figure 3 presents an architectural model of an SDN control stack. The architecture separates the control functionalities into three distinct layers. The data plane is the bottom layer and contains all the network devices of the infrastructure. Data plane devices are designed to efficiently perform a restricted set of low-level traffic monitoring and packet manipulation functions and have limited control intelligence. Each device implements one or more Southbound Interfaces (SBIs) which enable control of the forwarding and resource allocation policy from external entities. SBIs can be categorized into control interfaces, like OpenFlow and PCE BIB004 , designed to manipulate the device forwarding policy, and management interfaces, like NETCONF BIB006 and OF-CONFIG , designed to provide remote device configuration, monitoring and fault management. SDN functionality is not limited to networks supporting new clean-slate programmable interfaces and includes SBIs based on existing control protocols, like routing protocols. The control plane is the middle layer of the architecture and contains the Network Operating System (NOS), a focal point of the architecture. A NOS aggregates and centralizes control of multiple data plane devices and synthesizes new high-level Northbound Interfaces (NBIs) for management applications. For example, existing NOS implementations provide topology monitoring and resource virtualization services and enable high-level policy specification languages, among other functionalities. Furthermore, a NOS aggregates control policy requirements from management applications and provides them with accurate network state information. The NOS is responsible for analyzing policy requests from individual management applications, ensuring conformance with the administrative domain policy, detecting and mitigating policy conflicts between management applications, and translating these requests into appropriate data plane device configurations.
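To make the notion of an NBI more concrete, the sketch below shows how a management application might push a simple match/action policy to a NOS over a REST-style northbound interface. The endpoint, payload schema and port are invented for illustration and do not correspond to the actual ONOS or OpenDaylight APIs, which define their own resource paths and data models.

```python
import json
import urllib.error
import urllib.request

# Hypothetical controller endpoint; real NOSes expose their own REST/RESTCONF paths.
CONTROLLER = "http://127.0.0.1:8181/nbi/v1/policies"

def push_policy(match, action, priority=100):
    """Ask the NOS to enforce a match/action policy; the NOS maps it to SBI rules."""
    payload = {"match": match, "action": action, "priority": priority}
    req = urllib.request.Request(
        CONTROLLER, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status

if __name__ == "__main__":
    try:
        # Steer HTTP traffic from a tenant subnet through a monitoring function.
        status = push_policy(match={"ipv4_src": "10.0.1.0/24", "tcp_dst": 80},
                             action={"output": "monitor-port"})
        print("controller replied:", status)
    except urllib.error.URLError:
        print("no controller listening at", CONTROLLER)
```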
A key element for the scalability of the architecture is the logical centralization of network control; a control plane can consist of multiple NOS instances, each controlling an overlapping network segment, which use synchronization mechanisms, typically termed eastbound and westbound interfaces, to converge on a common network-wide view of the network state and policy between NOS instances. This way, an SDN control domain can recover from multiple NOS instance failures and the control load can be distributed across the remaining instances. Finally, the application plane is the top layer of the architecture and contains specialized applications that use NBIs to implement high-level NFs, like load balancing and resource management. A detailed presentation of the standardization, research and implementation efforts in the SDN community is provided in . For the rest of this section we focus on NBI standardization efforts. NBIs are crucial for service orchestration, since they enable control and monitoring of service connectivity and network resource utilization, as well as flexible fault-management. Nonetheless, NBI standardization is limited and existing control interface and mechanism design is driven by NOS development efforts. NBIs can be organized into two broad categories. The first category contains low-level information modeling NBIs. Information models converge the state representation of data plane devices and abstract the heterogeneity of SBIs. Network information models had been developed before the introduction of the SDN paradigm by multiple SDOs, like the ITU and the Distributed Management Task Force (DMTF) . Relevant to the SDN paradigm is the ONF information modeling working group (WG), which develops the Common Information Model (CoreModel) specifications. The CoreModel is hierarchical and includes a core model, which provides a basic abstraction for data plane forwarding elements, as well as technology-specific forwarding and application-specific models, which evolve the core model abstraction. CoreModel specifications exploit object inheritance and allow control applications to acquire abstract network connectivity information and, in parallel, access technology-specific attributes of individual network devices. CoreModel adoption is limited and existing NOSes employ custom information models. The second NBI category contains high-level and innovative control abstractions, exploring interfaces beyond the typical match-action-forward model. These interfaces are typically implemented as NOS management applications, use the information model to implement their control logic and are consumed by external entities, like the Operations Support System (OSS), the service orchestrator and other control applications. Effectively, these interfaces manifest the reference points between the Network and Service Orchestrator components (Figure 1). For the rest of this section we elaborate on NBI formal specifications, as well as NBI designs developed in production NOSes. We elaborate on legacy control interfaces implemented in SDN environments, as well as interfaces supported by the ONOS BIB007 and OpenDaylight (ODL) projects, the most popular and mature open-source NOS implementations. Path Computation. The Path Computation Element (PCE) is a control technology which addresses resource and forwarding control limitations in label-switched technologies. Generalized Multi-Protocol Label Switching (GMPLS) and Multi-Protocol Label Switching (MPLS) technologies follow a distributed approach for path establishment.
Switches use traffic engineering extensions to routing protocols, like OSPF-TE , to collect network resource and topology information. A path request triggers a label switch to compute an end-to-end path to the destination network using its topology information and to provision the path using signaling protocols, like RSVP-TE BIB002 . A significant limitation in MPLS path computation is the increased computational requirements for the co-processor of edge label switches in large networks, while limited visibility between network layers or across administrative domains can lead to sub-optimal path selections. PCE proposes a centralized path computation architecture and defines a protocol which allows the network controller to receive path requests from the NMS and to configure paths across individual network forwarding elements. PCE control can be used by the service orchestrator to provision connectivity between the NF nodes. The ONOS PCEP project 1 enables ONOS to serve Path Computation Client (PCC) requests and to manage label switched paths (LSPs) and MPLS-TE tunnels. In addition, the PCEP project develops a path computation mechanism for the ONOS tunneling subsystem and provides tunnels as a system resource. Tunnel establishment support, both as L2 and L3 VPNs, is available to applications through a RESTful NBI, and applications are distinguished between tunnel providers and tunnel consumers. LSP computation relies on network topology information, stored in a traffic engineering database (TED) and populated by an Interior Gateway Protocol (IGP). This information remains local within an Autonomous System (AS), limiting path computation to a single administrative domain. The IETF Inter-Domain Routing WG defines a mechanism to share link-state information across domains using the Network Layer Reachability Information (NLRI) field of the BGP protocol, standardized in the BGP-LS protocol extensions . The ONOS BGP-LS project introduces support for the BGP-LS protocol (peering and link-state information support) as an SBI to complement the ONOS PCEP project . The BGP-LS/PCEP module 2 of the ODL project implements support for the aforementioned protocols as a control application. Furthermore, the module supports additional PCE extensions, like stateful-PCE , PCEP for segment routing ( § 5.4), and secure transport for PCEP (PCEPS) . Stateful-PCE introduces time, sequence and resource usage synchronization within and across PCEP sessions, allowing dynamic LSP management. Furthermore, PCEPS adds security extensions to the control channel of the PCE protocol. ALTO. Application-Layer Traffic Optimization (ALTO) BIB005 is an IETF WG developing specifications that allow end-user applications to access accurate network performance information. Distributed network applications, like peer-to-peer and content distribution, can improve their peer-selection logic using network path information towards alternative service end-points. This better-than-random decision improves the performance of bandwidth-intensive or latency-sensitive applications, while the network provider can improve link utilization across its network. The ALTO protocol enables a service orchestrator to monitor the network of the operator and make informed service deployment decisions. ODL provides an ALTO server module 2 with a RESTful ALTO NBI. Virtual Tenant Networks. Virtual Tenant Networks (VTN) is a network virtualization architecture developed by NEC.
VTN develops an abstraction that logically disassociates the specification of virtual overlay networks from the topology of the underlying network infrastructure. Effectively, users can define any network topology and the VTN management system will map the requested topology over the physical topology. VTN enables seamless service deployment for the service orchestrator, by decoupling the deployment plan from the underlying infrastructure. The VTN abstraction is extensively supported by the ODL project 2 . Locator/ID Separation. The IETF Locator/ID Separation Protocol (LISP) is a network architecture addressing the scalability problems of routing systems at Internet scale. LISP proposes a dual addressing mechanism, which decouples the location of a host from its unique identifier. LISP-aware end-hosts require only a unique destination end-point identifier (EID) to transmit a packet, while intermediate routing nodes use a distributed mapping service to translate EIDs to Routing Locators (RLOCs), identifiers of the network of the destination host. A packet is sent to an edge LISP router in the EID domain, where a LISP header with the RLOC address of the destination network is added. The packet is then routed across the underlay network to the destination EID domain. The LISP architecture provides a scalable mechanism for NF connectivity and mobility. ODL provides a LISP flow mapping module 2 . The module uses an SBI to acquire RLOC and EID information from the underlying network and exposes this information through a RESTCONF NBI. In addition, the NBI allows applications, like load balancers, to create custom overlay networks. The module is currently compatible with the Service Function Chain (SFC) ( § 5.3) functionality and holds future integration plans with group-based policy mechanisms. Real-time media. The ONF currently has a dedicated WG exploring standardization requirements for SDN NBIs. At the time of writing, the group has released an NBI specification for a Real Time Media control protocol, in collaboration with the International Multimedia Telecommunication Consortium (IMTC). The protocol allows end-user applications to communicate with the local network controller, discover available resources and assign individual flows to specific quality of experience (QoE) classes, through a RESTful API. The ONF is currently developing a proof-of-concept implementation of the API as part of the ASPEN project . Intent-based networking. Intent-based networking is a popular SDN NBI approach exploring the applicability of declarative policy languages in network management. Unlike traditional imperative policy languages, Intent-based policies describe to the NOS the set of acceptable network states and leave low-level network configuration and adaptation to the NOS. As a result, Intents are invariant to network parameters like link outages and vendor variance, because they lack any implementation details. In addition, Intents are portable across controllers, thus simplifying application integration and run-time complexity, but require a common NBI across platforms, which is currently an active goal for multiple SDO WGs. The IETF has adopted the NEMO specifications BIB009 , an Intent-based networking policy language. NEMO is a Domain Specific Language (DSL), following the declarative programming paradigm. NEMO applications do not define the underlying mechanisms for data storage and manipulation, but rather describe their goals.
The language defines three major abstractions: an end-point, which describes a network end-point; a connection, which describes connectivity requirements between network end-points; and an operation, which describes packet operations. Huawei is currently leading an implementation initiative, based on ODL and the OPNFV project . In parallel, the ONF has recently organized a WG to standardize a common Intent model. The group aims to fulfill two objectives: i) describe the architecture and requirements of Intent implementations across controllers and define portable intent expressions, and ii) develop a community-approved information model which unifies Intent interfaces across controllers. The respective standard is coupled with the development of the Boulder framework , an open-source and portable Intent framework which can integrate with all major SDN NOSes. Boulder organizes intents through a grammar which consists of subjects, predicates and targets. The language can be extended to include constraints and conditions. The reference Boulder implementation has established compatibility with ODL through the Network Intent Composition (NIC) project, while ONOS support is currently under development. Group-Based Policy (GBP) is an alternative Intent-based networking paradigm, developed by the ODL project. Based upon promise theory BIB008 , GBP separates application concerns and simplifies dependency mapping, thus allowing greater automation during the consolidation and deployment of multiple policy specifications. The GBP abstraction models policy using the notions of end-points and end-point groups and provides language primitives to control the communication between them. Developers can specify through GBP their application requirements and the relationship between different tiers of their application, while remaining agnostic to the topology and capabilities of the underlying network. The ODL GBP module provides an NBI 2 which leverages the low-level control of several network virtualization technologies, like OpenStack Neutron and SFC ( § 5.3).
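The following sketch illustrates the declarative flavor of such interfaces: a hypothetical intent object, loosely inspired by the subject/predicate/target grammar of Boulder, is naively compiled into imperative match/action rules. The structure and field names are illustrative and are not the syntax of NEMO, Boulder or GBP.

```python
# Hypothetical intent structure inspired by subject/predicate/target grammars.
intent = {
    "subject": {"group": "web-clients", "prefix": "10.0.1.0/24"},
    "predicate": "allow",
    "target": {"group": "web-servers", "prefix": "10.0.2.0/24", "port": 443},
    "constraints": {"max_latency_ms": 20},
}

def compile_intent(intent):
    """Naively translate a declarative intent into imperative match/action rules.
    A real NOS would also select paths that honour the latency constraint."""
    rules = []
    if intent["predicate"] == "allow":
        rules.append({"match": {"ipv4_src": intent["subject"]["prefix"],
                                "ipv4_dst": intent["target"]["prefix"],
                                "tcp_dst": intent["target"].get("port")},
                      "action": "forward"})
        # Reverse direction for return traffic of established sessions.
        rules.append({"match": {"ipv4_src": intent["target"]["prefix"],
                                "ipv4_dst": intent["subject"]["prefix"]},
                      "action": "forward"})
    return rules

for rule in compile_intent(intent):
    print(rule)
```

The point of the abstraction is that the intent itself never mentions devices, ports or vendors; the NOS is free to re-compile it whenever the topology changes.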
Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Services such as content distribution, distributed databases, or ::: inter-data center connectivity place a set of new requirements on the ::: operation of networks. They need on-demand and application-specific ::: reservation of network connectivity, reliability, and resources (such ::: as bandwidth) in a variety of network applications (such as point-to- ::: point connectivity, network virtualization, or mobile back-haul) and ::: in a range of network technologies from packet (IP/MPLS) down to ::: optical. An environment that operates to meet these types of ::: requirements is said to have Application-Based Network Operations ::: (ABNO). ABNO brings together many existing technologies and may be ::: seen as the use of a toolbox of existing components enhanced with a ::: few new elements. This document describes an architecture and ::: framework for ABNO, showing how these components fit together. It ::: provides a cookbook of existing technologies to satisfy the ::: architecture and meet the needs of the applications. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Transport networks provide reliable delivery of data between two end points. Today's most advanced transport networks are based on Wavelength Switching Optical Networks (WSON) and offer connections of 10Gbps up to 100Gbps. However, a significant disadvantage of WSON is the rigid bandwidth granularity because only single, large chunks of bandwidth can be assigned matching the available fixed wavelengths resulting in considerable waste of network resources. Elastic Optical Networks (EON) provides spectrum-efficient and scalable transport by introducing flexible granular grooming in the optical frequency domain. EON provides arbitrary contiguous concatenation of optical spectrum that allows creation of custom-sized bandwidth. The allocation is performed according to the traffic volume or user request in a highly spectrum-efficient and scalable manner. The Adaptive Network Manager (ANM) concept appears as a necessity for operators to dynamically configure their infrastructure based on user requirements and network conditions. This work introduces the ANM and defines ANM use cases, and its requirements, and proposes an architecture for ANM that is aligned with solutions being developed by the industry. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> ABNO architecture is proposed in IETF as a framework which enables network automation and programmability thanks to the utilization of standard protocols and components. This work not only justifies the architecture but also presents the first experimental demonstration. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Huge amount of algorithmic research is being done in the field of optical networks, including Routing and Spectrum Allocation (RSA), elastic operations, spectrum defragmentation, and other re-optimization algorithms. Frequently, those algorithms are developed and executed on simulated environments, where many assumptions are done about network control and management issues. Those issues are relevant, since they might prevent algorithms to be deployed in real scenarios. 
To completely validate network-related algorithms, we developed an extensible control and management plane test-bed, named as iONE, for single layer and multilayer flexgrid-based optical networks. iONE is based on the Applications-Based Network Operations (ABNO) architecture currently under standardization by the IETF. iONE enables deploying and assessing the designed algorithms by defining workflows. This paper presents the iONE test-bed architecture, describes its components, and experimentally demonstrates its operation with a specific use-case. <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Traditionally, routing systems have implemented routing and signaling ::: (e.g., MPLS) to control traffic forwarding in a network. Route ::: computation has been controlled by relatively static policies that ::: define link cost, route cost, or import and export routing policies. ::: Requirements have emerged to more dynamically manage and program ::: routing systems due to the advent of highly dynamic data-center ::: networking, on-demand WAN services, dynamic policy-driven traffic ::: steering and service chaining, the need for real-time security threat ::: responsiveness via traffic control, and a paradigm of separating ::: policy-based decision-making from the router itself. These ::: requirements should allow controlling routing information and traffic ::: paths and extracting network topology information, traffic statistics, ::: and other network analytics from routing systems. This document ::: proposes meeting this need via an Interface to the Routing System ::: (I2RS). <s> BIB005 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Abstract As current traffic growth is expected to strain capacity of today׳s metro network, novel content distribution architectures where contents are placed closer to the users are being investigated. In that regard, telecom operators can deploy datacenters (DCs) in metro areas, thus reducing the impact of the traffic between users and DCs. In this paper, a hierarchical content distribution architecture for the telecom cloud is investigated: core DCs placed in geographically distributed locations, are interconnected through permanent “per content provider” (CP) virtual network topologies (CP-VNT); additionally, metro DCs need to be interconnected with the core DCs. CP׳s data is replicated in the core DCs through the CP-VNTs, while metro-to-core (M2C) anycast connections are established periodically for content synchronization. Since network failures might disconnect the CP-VNTs, recovery mechanisms are proposed to reconnect both topologies and anycast connections. Topology creation, anycast provisioning, and recovery problems are first formally stated and modeled as Integer Linear Programs (ILP) and heuristic algorithms are proposed. Exhaustive simulation results show significant improvements in both supported traffic and restorability. Workflows to implement the algorithms within the Applications-based Network Operations (ABNO) architecture and extensions for PCEP are proposed. Finally, the architecture is experimentally validated in UPC's SYNERGY test-bed running our ABNO-based iONE architecture. <s> BIB006
The evolution of the SDN paradigm has highlighted that clean-slate design approaches are prone to protocol and interface proliferation, which can limit the evolvability and interoperability of a deployment. ABNO BIB001 is an alternative modular control architecture standard, published as an Area-Director-sponsored RFC document, which reuses existing standards to provide connectivity services. ABNO provides, by design, network orchestration capabilities for multi-technology and multi-domain environments, since it relies on production protocols developed and adopted to fulfill these requirements. The architecture enables network applications to automatically provision network paths and access network state information, controlled by an operator-defined network policy. ABNO consists of eight functional blocks, presented in Figure 4 along with their interfaces, but production deployments are not required to implement all the components. A core element of the architecture is the ABNO controller. The controller allows applications and the NMS/OSS to specify end-to-end path requirements and access path state information. A path request triggers the controller to inspect the current network connectivity and resource allocations, and to provision a path which fulfills the resource requirements and does not violate the network policy. In addition, the controller is responsible for re-optimizing paths at run-time, taking into consideration other path requests, routing state and network errors. The architecture contains an OAM handler to collect network errors from all network layers. The OAM handler monitors the network and collects error notifications from network devices, using interfaces like IPFIX and NETCONF, which are correlated in order to synthesize high-level error reports for the ABNO controller and the NMS. In addition, the ABNO architecture integrates with the network routing policy through an Interface to the Routing System (I2RS) client. I2RS BIB005 is an IETF WG that develops an architecture for real-time and event-based application interaction with the routing system of network devices. Furthermore, the WG has developed a detailed information model that allows external applications to monitor the RIB of a forwarding device. As a result, the I2RS client of the ABNO architecture aggregates information from network routers in order to adapt its routing policy, while it can also modify routing tables to reflect path availability in the routing policy. Path selection is provided by a PCE controller, while a provisioning manager is responsible for path deployment and configuration using existing control plane protocols, like OpenFlow and NETCONF. It is important to highlight that these functional blocks may be omitted in a production deployment and the architecture proposes multiple overlapping control channels. In addition, the architecture contains an optional Virtual Network Topology Manager (VNTM), which can provision connectivity in the network physical layer, like configuring virtual links in WDM networks. Topology discovery is a key requirement for the path selection algorithm of the PCE controller and the ABNO architecture uses multiple databases to store relevant information. The Traffic-Engineering Database (TED) is a required database for any ABNO architecture and contains the network topology along with link resource and capability information. The database is populated with information through traffic engineering extensions of the routing protocol.
Optionally, the architecture suggests support for an LSP database, which stores information for established paths, and a database to store associative information between physical links and network paths, for link capacity prediction during virtual link provisioning over optical technologies. A critical element for production deployment is the ability of the ABNO architecture to employ a common policy for all path selection decisions. The ABNO architecture incorporates a Policy Agent which is controlled by the NMS/OSS. The policy agent authenticates requests, maintains accounting information and reflects policy restrictions in the path selection algorithm. The policy agent is a focal point in the architecture and any decision by the ABNO controller, the PCE controller and the ALTO server requires a check against the active network policy. In addition to the ABNO control interfaces, the architecture provides additional application interfaces which expose network state information through an ALTO server. The server uses the ALTO protocol to provide accurate path capacity and load information to applications and to assist the application server selection process and performance monitoring. A number of ABNO-based implementations exist, detailing how the architecture can be used to orchestrate resources in complex network environments, including iONE BIB004 for content distribution in the telecom cloud BIB006 and the Adaptive Network Manager BIB003 for coordinating operations in flex-grid optical and packet networks BIB002 . The large telecom vendor Infinera and the network operator Telefonica also provided a joint demonstration to orchestrate and provision bandwidth services in real time (Network as a Service, NaaS) across a multi-vendor IP/MPLS and optical transport network, using a variety of APIs .
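The sketch below condenses this path-request workflow into code: a policy check, a TED lookup, a PCE computation and a provisioning step. The component names follow the ABNO functional blocks, but the interfaces, data structures and policy rule are assumptions made for the example rather than definitions from the RFC.

```python
# Illustrative ABNO-style workflow: policy check -> TED lookup -> PCE computation
# -> provisioning. All interfaces below are assumed for the sake of the example.

class PolicyAgent:
    def authorize(self, request):
        return request.get("bandwidth_gbps", 0) <= 10   # assumed policy rule

class TED:
    def __init__(self, links):
        self.links = links                               # {(a, b): free capacity}
    def candidates(self, bandwidth):
        return {l: c for l, c in self.links.items() if c >= bandwidth}

class PCEController:
    def compute(self, ted_view, src, dst):
        # Trivial single-hop "computation", for illustration only.
        return [(src, dst)] if (src, dst) in ted_view else None

class ProvisioningManager:
    def deploy(self, path):
        print(f"provisioning path via NETCONF/OpenFlow: {path}")

def abno_controller(request, policy, ted, pce, prov):
    if not policy.authorize(request):
        raise PermissionError("request violates operator policy")
    view = ted.candidates(request["bandwidth_gbps"])
    path = pce.compute(view, request["src"], request["dst"])
    if path is None:
        raise RuntimeError("no feasible path")
    prov.deploy(path)
    return path

ted = TED({("pe1", "pe2"): 40, ("pe1", "pe3"): 2})
abno_controller({"src": "pe1", "dst": "pe2", "bandwidth_gbps": 5},
                PolicyAgent(), ted, PCEController(), ProvisioningManager())
```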
Network service orchestration standardization: A technology survey <s> Service Function Chain (SFC) <s> This document describes requirements for conveying information between Service Function Chaining (SFC) control elements (including management components) and SFC functional elements. Also, this document identifies a set of control interfaces to interact with SFC-aware elements to establish, maintain or recover service function chains. This document does not specify protocols nor extensions to existing protocols. This document exclusively focuses on SFC deployments that are under the responsibility of a single administrative entity. Inter-domain considerations are out of scope. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Service Function Chain (SFC) <s> This document defines a YANG data model that can be used to configure and manage Service Function Chains. <s> BIB002
SFC is a recently formed IETF WG which aims to define the architectural principles and protocols for the deployment and management of NF forwarding graphs. An SFC deployment operates as a network overlay, logically separating the control plane of the service from the control of the underlying network. The overlay functionality is implemented by specialized forwarding elements, using a new network header. Figure 6 presents an example deployment scenario of an SFC domain. An administrative network domain can contain one or more SFC domains. An SFC domain is a set of SFC-enabled network devices sharing a common information context. The information context contains state regarding the deployed service graphs, the available paths for each service graph and classification information mapping incoming traffic to a service path. An SFC-specific header is appended to all packets on the edges of the SFC domain by an SFC-Classifier. The SFC-Classifier assigns incoming traffic to a service path by appending an appropriate SFC header to each packet. For outgoing traffic, the SFC-Classifier is responsible for removing any SFC headers and forwarding each packet appropriately. Once the packet is within the SFC domain, it is forwarded by the classifier to a Service Function Forwarder (SFF), an element responsible for forwarding traffic to an SF according to the service function ordering. Finally, the architecture is designed to accommodate both SFC-aware and legacy NFs. The main difference between them is that SFC-aware NFs can parse and manipulate SFC headers. For legacy NFs, the architecture defines a specialized element to manipulate SFC headers on behalf of the service function, the SFC-Proxy. The network overlay of the SFC architecture is realized through a new protocol layer, the Network Service Header (NSH) [83] . The NSH contains information which defines the position of a packet in the service path, using service path and path index identifiers, and carries metadata between service functions regarding policy and post-service delivery. Highly relevant for service orchestration are the control and management interfaces of the SFC architecture. At the time of writing, the SFC WG is still exploring the SFC control channel requirements, and initial design goals BIB001 define four main control interfaces. C1 is the control channel of the SFC-Classifier and allows manipulation of the classification policy which assigns incoming traffic to specific service paths. This control interface can be used to load-balance traffic between service paths and optimize resource utilization. C2 is a control channel for the SFF forwarding policy and exposes monitoring information, like latency and load. C3 is the control protocol used to aggregate status, liveness and performance information from each SFC-aware service function. Finally, the controller can use the C4 protocol to configure SFC-Proxies with respect to NSH header manipulation before and after a packet traverses an SFC-unaware NF. In parallel, the WG has proposed a set of YANG models to implement the proposed control interfaces BIB002 . Furthermore, the WG has also specified a set of YANG models for the management interface of an SFC controller BIB001 . This interface provides information about the liveness of individual SFC paths, topological information for the underlying SFC infrastructure, performance counters and control of the fault and error management strategies.
In addition, the management interface allows external applications to re-optimize service paths and control the load-balancing policy. At the time of writing, multiple open-source platforms introduce SFC support. The Open vSwitch soft-switch has introduced SFC support both in the data plane and in the control plane (OpenFlow extensions). The OpenStack cloud management platform exploits the Open vSwitch SFC support and implements a high-level SFC control interface . Furthermore, the ONOS controller currently supports SFC functionality using VTN overlays, while ODL implements SFC support using LISP tunnels. In addition, the ONF has released recommendations for an L4-L7 SFC architecture which uses OpenFlow as the SBI of the SFC controller and explores the applicability of, and required extensions to, the OpenFlow abstraction to improve support for SFF elements.
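As a concrete illustration of the service path and index identifiers discussed above, the sketch below packs and parses an NSH-style service path header. The 24-bit Service Path Identifier and 8-bit Service Index layout follows the NSH specification, while the helper functions themselves are illustrative and omit the NSH base header and metadata fields.

```python
import struct

def pack_service_path_header(spi: int, si: int) -> bytes:
    """Pack the NSH service path header: 24-bit Service Path Identifier (SPI)
    followed by an 8-bit Service Index (SI)."""
    if not 0 <= spi < 2**24 or not 0 <= si < 2**8:
        raise ValueError("SPI must fit in 24 bits and SI in 8 bits")
    return struct.pack("!I", (spi << 8) | si)

def unpack_service_path_header(data: bytes):
    value, = struct.unpack("!I", data[:4])
    return value >> 8, value & 0xFF          # (spi, si)

# An SFF decrements the SI at each hop; an SFC-Proxy does so for legacy NFs.
hdr = pack_service_path_header(spi=42, si=255)
print(unpack_service_path_header(hdr))       # -> (42, 255)
```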
Network service orchestration standardization: A technology survey <s> Segment Routing (SR) <s> Network operators anticipate the offering of an increasing variety of cloud-based services with stringent Service Level Agreements. Technologies currently supporting IP networks however lack the flexibility and scalability properties to realize such evolution. In this article, we present Segment Routing (SR), a new network architecture aimed at filling this gap, driven by use-cases defined by network operators. SR implements the source routing and tunneling paradigms, letting nodes steer packets over paths using a sequence of instructions (segments) placed in the packet header. As such, SR allows the implementation of routing policies without per-flow entries at intermediate routers. This paper introduces the SR architecture, describes its related ongoing standardization efforts, and reviews the main use-cases envisioned by network operators. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Segment Routing (SR) <s> Segment Routing (SR) leverages the source routing paradigm. A node ::: steers a packet through an ordered list of instructions, called ::: segments. A segment can represent any instruction, topological or ::: service-based. A segment can have a semantic local to an SR node or ::: global within an SR domain. SR allows to enforce a flow through any ::: topological path while maintaining per-flow state only at the ingress ::: nodes to the SR domain. Segment Routing can be directly applied to the ::: MPLS architecture with no change on the forwarding plane. A segment is ::: encoded as an MPLS label. An ordered list of segments is encoded as a ::: stack of labels. The segment to process is on the top of the stack. ::: Upon completion of a segment, the related label is popped from the ::: stack. Segment Routing can be applied to the IPv6 architecture, with a ::: new type of routing header. A segment is encoded as an IPv6 address. ::: An ordered list of segments is encoded as an ordered list of IPv6 ::: addresses in the routing header. The active segment is indicated by ::: the Destination Address of the packet. The next active segment is ::: indicated by a pointer in the new routing header. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Segment Routing (SR) <s> Segment Routing can be applied to the IPv6 data plane using a new type ::: of Routing Extension Header called the Segment Routing Header. This ::: document describes the Segment Routing Header and how it is used by ::: Segment Routing capable nodes. <s> BIB003
Segment Routing (SR) BIB001 is an architecture for the instantiation of service graphs over a network infrastructure using source routing mechanisms, standardized by the IETF Source Packet Routing in Networking (SPRING) WG BIB002 . SR is a data plane technology and uses existing protocols to store the instructions (segments) describing the packet path in the packet header. SR segments can have local or global semantics, and the architecture defines three segment types: a node segment forwards a packet over the shortest path towards a network node, an adjacency segment forwards the packet through a specific router port, and a service segment introduces service differentiation on a service path. Currently, the SR architecture defines a set of extensions for the IPv6 BIB003 and MPLS protocols, which provide protocol-compliant mechanisms to store the segment stack and the active segment pointer in the protocol header. In addition, to enable dynamic adaptation of the forwarding policy, the architecture defines a set of control operations for forwarding elements to manipulate the packet segment list and to update established paths dynamically. The selection of the packet path is implemented on the edge routers of the SR domain. The architecture specifies multiple path selection mechanisms, including static configurations, distributed shortest-path selection algorithms and programmatic control of segment paths using SDN SBIs. The network IGP can be used to provide segment visibility between routers, and a YANG management interface is defined for SR segment information retrieval and SR routing entry control. SR provides a readily available framework to instantiate service forwarding graphs. A forwarding graph can be implemented as a segment stack, and existing VNFs can be integrated with the architecture by introducing appropriate support for the MPLS and IPv6 SR extensions. In comparison to the SFC architecture, SR is simpler and does not require the deployment of new network elements. Nonetheless, SFC provides wider protocol support and its architecture is designed to support different data plane technologies, while SR is closely aligned with MPLS technologies. SR support has been introduced in both major SDN NOSes. The ONOS project has introduced support for SR to implement CORD, a flexible central office architecture designed to simplify network service management. Similarly, ODL supports SR functionality using MPLS labels and the PCE SBI module. In parallel, Cisco has introduced SR support in recent IOS XR versions.
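The difference between the two data plane encodings mentioned above, an MPLS label stack that is popped segment by segment versus an IPv6 routing header whose pointer selects the active segment, can be illustrated with the toy models below. They are conceptual sketches only: the label values and IPv6 addresses are invented, and real SR deployments rely on the actual MPLS and SRv6 data planes rather than Python objects.

```python
from dataclasses import dataclass

@dataclass
class MplsSrPacket:
    """Toy SR-over-MPLS view: the segment list is a label stack and the
    active segment is the top label, which is popped once completed."""
    label_stack: list
    payload: bytes = b""

    def active_segment(self):
        return self.label_stack[0] if self.label_stack else None

    def complete_segment(self):
        return self.label_stack.pop(0)

@dataclass
class Srv6Packet:
    """Toy SR-over-IPv6 view: the segment list stays in the routing header
    (stored last-segment-first) and a pointer selects the active segment,
    which is copied into the destination address."""
    segment_list: list
    segments_left: int
    destination: str

    def advance(self):
        if self.segments_left == 0:
            raise ValueError("no segments left")
        self.segments_left -= 1
        self.destination = self.segment_list[self.segments_left]

# An ingress edge router encodes the service graph firewall -> cache -> egress.
mpls = MplsSrPacket(label_stack=[16001, 16002, 16003])
mpls.complete_segment()                      # firewall segment done, label popped

srv6 = Srv6Packet(segment_list=["fc00::3", "fc00::2", "fc00::1"],
                  segments_left=2, destination="fc00::1")
srv6.advance()                               # destination becomes fc00::2
```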
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Neurologically impaired infants have immature, damaged, or abnormally developed nervous systems that may cause abnormalities of sucking and swallowing, among other problems. Sucking abnormalities usually present as absence of the sucking response, weakness or incoordination of sucking and swallowing, or some combination of these problems. More investigation of the responses of these infants to various stimuli and training techniques is greatly needed. Although training neurologically impaired infants to breastfeed may present a challenge to even the most experienced neonatal nurse, physician, or therapist, most infants improve and can learn to suckle at the breast. If a mother has intended to nurse her infant, she should be encouraged to do so, even when the infant has abnormalities of sucking, except in the rare and most severely affected infants who remain dependent on gavage or gastrostomy feedings. Various techniques of stimulating, positioning, and progressive weaning to the breast can be helpful in teaching mother and infant to breastfeed. Encouraging support should be provided by all professionals involved with the mother and infant, as well as by a team experienced in helping with such problems. Most importantly, mother and staff must be patient, because the rewards for both the infant and mother are worth the effort. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> The sucking patterns of 42 healthy full-term and 44 preterm infants whose gestational age at birth was 30.9 +/- 2.1 weeks were compared using the Kron Nutritive Sucking Apparatus for a 5-minute period. The measured pressures were used to calculate six characteristics of the sucking response: maximum pressure generated, amount of nutrient consumed per suck, number of sucks per episode, the duration or width of each suck, the length of time between sucks, and the length of time between sucking episodes. The maximum pressure of the term infant (100.3 +/- 35) was higher, p less than .05, than the maximum pressure of the preterm infant (84 +/- 33). Term infants also consumed more formula per suck (45.3 +/- 20.3 vs. 37.6 +/- 15.9, p less than .05). In addition, they had more sucks/episode (13.6 +/- 8.7 vs. 7.7 +/- 4.1, p less than .001) and maintained the pressure longer for a wider suck width (0.49 +/- 0.1 vs. 0.45 +/- 0.08, p less than .05). Sucking profiles of the preterm infant are significantly different from the full-term infant. These sucking profiles can be developed as a clinically useful tool for nursing practice. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. 
The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> OBJECTIVE ::: To determine the prevalence and nature of feeding difficulties and oral motor dysfunction among a representative sample of 49 children with cerebral palsy (12 to 72 months of age). ::: ::: ::: STUDY DESIGN ::: A population survey was undertaken by means of a combination of interview and home observational measures. ::: ::: ::: RESULTS ::: Sucking (57%) and swallowing (38%) problems in the first 12 months of life were common, and 80% had been fed nonorally on at least one occasion. More than 90% had clinically significant oral motor dysfunction. One in three (36.2%) was severely impaired and therefore at high risk of chronic undernourishment. There was a substantial discrepancy between the lengthy duration of mealtimes reported by mothers and those actually observed in the home (mean, 19 minutes 21 seconds; range, 5 minutes 21 seconds to 41 minutes 39 seconds). In 60% of the children, severe feeding problems preceded the diagnosis of cerebral palsy. ::: ::: ::: CONCLUSIONS ::: Using a standardized assessment of oral motor function, we found the majority of children to have clinically significant oral motor dysfunction. Contrary to maternal report, mealtimes were relatively brief, and this, combined with the severity of oral motor dysfunction, made it difficult for some children to achieve a satisfactory nutritional intake. The study illustrates the importance of observing feeding, preferably in the home. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Abstract The American Academy of Pediatrics (AAP) recently released a policy statement on the issue of hospital discharge of the high-risk neonate., The statement has been developed, to the extent possible, on the basis of published, scientifically derived information. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Human newborns appear to regulate sucking pressure when bottle feeding by employing, with similar precision, the same principle of control evidenced by adults in skilled behavior, such as reaching (Lee et al., 1998a). In particular, the present study of 12 full-term newborn infants indicated that the intraoral sucking pressures followed an internal dynamic prototype – an intrinsic τ-guide. The intrinsic τ-guide, a recent hypothesis of general tau theory is a time-varying quantity, τg, assumed to be generated within the nervous system. 
It corresponds to some quantity (e.g., electrical charge), changing with a constant second-order temporal derivative from a rest level to a goal level, in the sense that τg equals τ of the gap between the quantity and its goal level at each time t. (τ of a gap is the time-to-closure of the gap at the current closure-rate.) According to the hypothesis, the infant senses τp, the τ of the gap between the current intraoral pressure and its goal level, and regulates intraoral pressure so that τp and τg remain coupled in a constant ratio, k; i.e., τp=kτg. With k in the range 0–1, the τ-coupling would result in a bell-shaped rate of change pressure profile, as was, in fact, found. More specifically, the high mean r2 values obtained when regressing τp on τg, for both the increasing and decreasing suction periods of the infants’ suck, supported a strong τ-coupling between τp and τg. The mean k values were significantly higher in the increasing suction period, indicating that the ending of the movement was more forceful, a finding which makes sense given the different functions of the two periods of the suck. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> As a consequence of the fragility of various neural structures, preterm infants born at a low gestation and/or birthweight are at an increased risk of developing motor abnormalities. The lack of a reliable means of assessing motor integrity prevents early therapeutic intervention. In this paper, we propose a new method of assessing neonatal motor performance, namely the recording and subsequent analysis of intraoral sucking pressures generated when feeding nutritively. By measuring the infant's control of sucking in terms of a new development of tau theory, normal patterns of intraoral motor control were established for term infants. Using this same measure, the present study revealed irregularities in sucking control of preterm infants. When these findings were compared to a physiotherapist's assessment six months later, the preterm infants who sucked irregularly were found to be delayed in their motor development. Perhaps a goal-directed behaviour such as sucking control that can be measured objectively at a very young age, could be included as part of the neurological assessment of the preterm infant. More accurate classification of a preterm infant's movement abnormalities would allow for early therapeutic interventions to be realised when the infant is still acquiring the most basic of motor functions. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Finding ways to consistently prepare preterm infants and their families for more timely discharge must continue as a focus for everyone involved in the care of these infants in the neonatal intensive care unit. The gold standards for discharge from the neonatal intensive care unit are physiologic stability (especially respiratory stability), consistent weight gain, and successful oral feeding, usually from a bottle. Successful bottle-feeding is considered the most complex task of infancy. Fostering successful oral feeding in preterm infants requires consistently high levels of skilled nursing care, which must begin with accurate assessment of feeding readiness and thoughtful progression to full oral feeding. 
This comprehensive review of the literature provides an overview of the state of the science related to feeding readiness and progression in the preterm infant. The theoretical foundation for feeding readiness and factors that appear to affect bottle-feeding readiness, progression, and success are presented in this article. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> The development of feeding and swallowing is the result of a complex interface between the developing nervous system, various physiological systems, and the environment. The purpose of this article is to review the neurobiology, development, and assessment of feeding and swallowing during early infancy. In recent years, there have been exciting advances in our understanding of the physiology and neurological control of feeding and swallowing. These advances may prove useful in furthering our understanding of the pathophysiology of dysphagia in infancy. Progress in developing standardized, reliable, and valid measures of oral sensorimotor and swallowing function in infancy has been slow. However, there have been significant advances in the instrumental analysis of feeding and swallowing disorders in infancy, including manometric analyses of sucking and swallowing, measures of respiration during feeding, videofluoroscopic swallow evaluations, ultrasonography, and flexible endoscopic examination of swallowing. Further efforts are needed to develop clinical evaluative measures of dysphagia in infancy. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. 
<s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Preterm infants often have difficulties in learning how to suckle from the breast or how to drink from a bottle. As yet, it is unclear whether this is part of their prematurity or whether it is caused by neurological problems. Is it possible to decide on the basis of how an infant learns to suckle or drink whether it needs help and if so, what kind of help? In addition, can any predictions be made regarding the relationship between these difficulties and later neurodevelopmental outcome? We searched the literature for recent insights into the development of sucking and the factors that play a role in acquiring this skill. Our aim was to find a diagnostic tool that focuses on the readiness for feeding or that provides guidelines for interventions. At the same time, we searched for studies on the relationship between early sucking behavior and developmental outcome. It appeared that there is a great need for a reliable, user-friendly and noninvasive diagnostic tool to study sucking in preterm and full-term infants. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Abstract Neonatal motor behavior predicts both current neurological status and future neurodevelopmental outcomes. For speech pathologists, the earliest observable patterned oromotor behavior is su... <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Preterm infants often display difficulty establishing oral feeding in the weeks following birth. This article aims to provide an overview of the literature investigating the development of feeding skills in preterm infants, as well as of interventions aimed at assisting preterm infants to develop their feeding skills. Available research suggests that preterm infants born at a lower gestational age and/or with a greater degree of morbidity are most at risk of early feeding difficulties. Respiratory disease was identified as a particular risk factor. Mechanisms for feeding difficulty identified in the literature include immature or dysfunctional sucking skills and poor suck–swallow–breath coordination. Available evidence provides some support for therapy interventions aimed at improving feeding skills, as well as the use of restricted milk flow to assist with maintaining appropriate ventilation during feeds. Further research is needed to confirm these findings, as well as to answer remaining clinical questions. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> BACKGROUND ::: One of the most challenging milestones for preterm infants is the acquisition of safe and efficient feeding skills. The majority of healthy full term infants are born with skills to coordinate their suck, swallow and respiration. 
However, this is not the case for preterm infants who develop these skills gradually as they transition from tube feeding to suck feeds. For preterm infants the ability to engage in oral feeding behaviour is dependent on many factors. The complexity of factors influencing feeding readiness has led some researchers to investigate the use of an individualised assessment of an infant's abilities. A limited number of instruments that aim to indicate an individual infant's readiness to commence either breast or bottle feeding have been developed. ::: ::: ::: OBJECTIVES ::: To determine the effects of using a feeding readiness instrument when compared to no instrument or another instrument on the outcomes of time to establish full oral feeding and duration of hospitalisation. ::: ::: ::: SEARCH METHODS ::: We used the standard methods of the Cochrane Neonatal Review Group, including a search of the Cochrane Central Register of Controlled Trials (The Cochrane Library 2010, Issue 2), MEDLINE via EBSCO (1966 to July 2010), EMBASE (1980 to July 2010), CINAHL via EBSCO (1982 to July 2010), Web of Science via EBSCO (1980 to July 2010) and Health Source (1980 to July 2010). Other sources such as cited references from retrieved articles and databases of clinical trials were also searched. We did not apply any language restriction. We updated this search in March 2012. ::: ::: ::: SELECTION CRITERIA ::: Randomised and quasi-randomised trials comparing a formal instrument to assess a preterm infant's readiness to commence suck feeds with either no instrument (usual practice) or another feeding readiness instrument. ::: ::: ::: DATA COLLECTION AND ANALYSIS ::: The standard methods of the Cochrane Neonatal Review Group were used. Two authors independently screened potential studies for inclusion. No studies were found that met our inclusion criteria. ::: ::: ::: MAIN RESULTS ::: No studies met the inclusion criteria. ::: ::: ::: AUTHORS' CONCLUSIONS ::: There is currently no evidence to inform clinical practice, with no studies meeting the inclusion criteria for this review. Research is needed in this area to establish an evidence base for the clinical utility of implementing the use of an instrument to assess feeding readiness in the preterm infant population. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> AIM ::: Early sucking and swallowing problems may be potential markers of neonatal brain injury and assist in identifying those infants at increased risk of adverse outcomes, but the relation between early sucking and swallowing problems and neonatal brain injury has not been established. The aim of the review was, therefore, to investigate the relation between early measures of sucking and swallowing and neurodevelopmental outcomes in infants diagnosed with neonatal brain injury and in infants born very preterm (<32wks) with very low birthweight (<1500g), at risk of neonatal brain injury. ::: ::: ::: METHOD ::: We conducted a systematic review of English-language articles using CINAHL, EMBASE, and MEDLINE OVID (from 1980 to May 2011). Additional studies were identified through manual searches of key journals and the works of expert authors. Extraction of data informed an assessment of the level of evidence and risk of bias for each study using a predefined set of quality indicators. ::: ::: ::: RESULTS ::: A total of 394 abstracts were generated by the search but only nine studies met the inclusion criterion. 
Early sucking and swallowing problems were present in a consistent proportion of infants and were predictive of neurodevelopmental outcome in infancy in five of the six studies reviewed. LIMITATIONS: The methodological quality of studies was variable in terms of research design, level of evidence (National Health and Medical Research Council levels II, III, and IV), populations studied, assessments used and the nature and timing of neurodevelopmental follow-up. CONCLUSIONS: Based upon the results of this review, there is currently insufficient evidence to clearly determine the relation between early sucking and swallowing problems and neonatal brain injury. Although early sucking and swallowing problems may be related to later neurodevelopmental outcomes, further research is required to delineate their value in predicting later motor outcomes and to establish reliable measures of early sucking and swallowing function. <s> BIB016
A recent report of the World Health Organization (WHO) describes how the rate of preterm births is increasing all over the world. This trend is particularly concerning, since prematurity is the leading cause of newborn deaths and premature newborns represent a large and ever-increasing population at high risk for chronic diseases and neurodevelopmental problems. Feeding support is one of the strategies reported to reduce deaths among premature infants. Such an intervention requires specifically designed tools to assess oral feeding ability, so as to provide clinicians with new devices that can be used for routine clinical monitoring and decision-making. Several studies BIB008 BIB014 BIB015 stress the importance of introducing oral feeding for preterm infants as early as the Neonatal Intensive Care Unit (NICU) stay, highlighting the need for evidence-based clinical tools for the assessment of infants' oral feeding readiness. The need for a reliable assessment of feeding ability is further highlighted by the American Academy of Pediatrics, which included the attainment of independent oral feeding as an essential criterion for hospital discharge BIB005 . The acquisition of efficient Nutritive Sucking (NS) skills is a fundamental and challenging milestone for newborns. It is essential during the first six months of life and requires the complex coordination of three different processes: sucking, swallowing and breathing. The development of such precocious motor skills depends on intact brainstem pathways and cranial nerves. Hence, the immaturity of the Central Nervous System (CNS) can affect oral motor functions BIB004 and/or cause the inability to successfully perform oral feeding BIB001 BIB002 BIB009 BIB003 . NS is one of the most precocious goal-directed actions evident in a newborn's movement repertoire, and it may provide an opportunity to investigate mechanisms of fine motor control in the neonate, as reported by Craig and Lee in BIB006 . For these reasons, sucking skills can provide valuable insights into the infant's neurological status and its future development BIB012 BIB013 BIB010 BIB016 BIB007 . Moreover, since sucking control involves oral motor structures similar to those required for coherent speech production, early sucking problems have also been suggested as predictors of significant delays in the emergence or development of speech-language skills. The importance of early sucking monitoring has been confirmed over the years, and the need for reliable instruments for neonatal sucking assessment is stressed in several works BIB008 BIB015 BIB016 BIB011 , even though no standardized instrumental assessment tools exist as yet. NS assessment is in fact part of the clinical evaluation, but it is not carried out objectively. With few objective criteria for the assessment of its progress in the hospital, and no organized home follow-up care, poor feeding skills may go undetected for too long. Notwithstanding the ongoing development of tools for the assessment of NS, there is no common approach to this issue, which causes variability in the measurements, as highlighted by several authors BIB009 BIB016 BIB011 . Such heterogeneity is one of the causes of the discrepant findings reported in the literature, and a major challenge in applying them to clinical practice, as reported by Slattery et al. in 2012 [15] .
The use of standard pre-discharge assessment tools may foster the development of common quantitative criteria useful to assist clinicians in planning clinical interventions. Such devices, or simplified versions of them, might also be adopted for patients' follow-up, e.g., for the remote monitoring of infants at home after discharge. Section 2 provides a detailed survey of the main quantities and indices measured and/or estimated to characterize sucking skills and their development. Section 3 presents the main characteristics of the technological sensing solutions adopted to measure the previously identified quantities and indices. Finally, we discuss the main functional specifications required of a proper feeding assessment device, and the main advantages and weaknesses of the adopted sensing systems, considering their application both to clinical practice and to at-home monitoring as post-discharge assessment tools.
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Preliminary Definitions <s> The sucking rhythms of infants with a benign perinatal course are compared to those of infants with a history of perinatal distress. The ontogenesis of sucking rhythms, and the sucking patterns of children with major congenital malformations of the brain and various metabolic disorders are described. The analysis of rhythms of non-nutritive sucking discriminates to a statistically significant degree between normal infants and infants with a history of perinatal distress who have no gross neurological signs. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Preliminary Definitions <s> Feeding by sucking is one of the first activities of daily life performed by infants. Sucking plays a fundamental role in neurological development and may be considered a good early predictor of neuromotor development. In this paper, a new method for ecological assessment of infants' nutritive sucking behavior is presented and experimentally validated. Preliminary data on healthy newborn subjects are first acquired to define the main technical specifications of a novel instrumented device. This device is designed to be easily integrated in a commercially available feeding bottle, allowing clinical method development for screening large numbers of subjects. The new approach proposed allows: 1) accurate measurement of intra-oral pressure for neuromotor control analysis and 2) estimation of milk volume delivered to the mouth within variation between estimated and reference volumes. <s> BIB002
Sucking is one of the first oromotor behaviors to occur in the womb. There are two basic forms of sucking: Non-Nutritive Sucking (NNS), when no nutrient is involved, and Nutritive Sucking (NS), when a nutrient such as milk is ingested from a bottle or breast. A nutritive suck is characterized by the rhythmic alternation of Suction (S), i.e., the creation of a negative Intraoral Pressure (IP) through the depression of the jaw and tongue, and Expression (E), i.e., the generation of a positive Expression Pressure (EP) through the compression of the nipple between the tongue and the hard palate. This S/E alternation allows the infant to generate the extraction pressure that draws the fluid contained in a vessel towards the oral cavity. From birth throughout the first 6 months of life, infants obtain their primary food through NS. During this process, the infant must control oral sucking pressures to optimize the milk flow from the feeding vessel into the mouth, and to move the expressed milk to the back of the mouth before it is swallowed. The amount of milk entering the mouth dictates the swallow event, which in turn interrupts breathing. Hence, during NS, Sucking (Sk), Swallowing (Sw) and Breathing (B) are closely dependent on each other. This dependence represents another strong difference between NS and NNS: during NNS, the demands on swallowing are minimal (the infant only has to handle its own secretions), and respiration can operate independently. Safety in NS implies a proper coordination of Sk, Sw and B to avoid aspiration, as the anatomical pathways for air and nutrients share the same pharyngeal tract. During the Sw phase, airflow falls to zero, where it remains for an average duration of 530 ms, to be rapidly restored after this time. This period of flow cessation between functionally significant airflows is usually referred to as "swallow apnea". In full-term healthy infants, the NS process is characterized by a burst-pause sucking pattern, where a burst consists of a series of suck events occurring at a typical frequency of 1 Hz BIB001 , and consecutive bursts are separated by pauses of at least 2 s. This burst-pause pattern evolves during feeding in three stages: continuous, intermittent and paused. At the beginning of a feeding period, infants suck vigorously and continuously, with a stable rhythm and long bursts (continuous sucking phase). This phase is generally followed by an intermittent phase, in which sucks are less vigorous, bursts are shorter and pauses are longer (intermittent sucking phase). The final paused phase is characterized by weak sucks and very short, sporadic bursts. Figure 1 reports a typical 10 s pressure burst: experimental data acquired on healthy subjects and reported in BIB002 showed that intraoral pressure lies in the range [−140, +15] mmHg. The bandwidth of the pressure signal was estimated by calculating its Power Spectral Density (PSD) by means of Welch's overlapped segment averaging method: it can be considered well below 20 Hz. Moreover, in a coordinated cycle of NS, a 1:1:1 relational pattern among sucking (S/E), swallowing and breathing is expected, creating a rhythmic unit in which breathing appears uninterrupted (no signs of asphyxia or choking).
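To make the burst-pause description above concrete, the sketch below segments a synthetic intraoral pressure trace into suck events and bursts using the quantities quoted in this section (a suck rate of about 1 Hz, pauses of at least 2 s between bursts, suction pressures on the order of −100 mmHg) and checks the sub-20 Hz bandwidth claim with Welch's method. The sampling rate, the −30 mmHg detection threshold and the minimum peak spacing are illustrative assumptions introduced here, not values prescribed by the reviewed studies.

```python
import numpy as np
from scipy.signal import find_peaks, welch

FS = 100.0  # sampling rate (Hz), comfortably above the <20 Hz signal bandwidth

def detect_sucks(pressure, fs=FS, threshold_mmhg=-30.0):
    """Detect suction events as negative intraoral-pressure peaks.
    The threshold and the 0.4 s minimum spacing are illustrative choices."""
    peaks, _ = find_peaks(-pressure, height=-threshold_mmhg, distance=int(0.4 * fs))
    return peaks / fs  # suck times in seconds

def group_bursts(suck_times, pause_s=2.0):
    """Group sucks into bursts: a gap of at least pause_s seconds
    (2 s in the burst-pause definition above) starts a new burst."""
    if len(suck_times) == 0:
        return []
    bursts, current = [], [suck_times[0]]
    for t_prev, t_curr in zip(suck_times, suck_times[1:]):
        if t_curr - t_prev >= pause_s:
            bursts.append(current)
            current = []
        current.append(t_curr)
    bursts.append(current)
    return bursts

# Synthetic 60 s recording: ~1 Hz sucks reaching about -120 mmHg,
# organised as 12 s bursts separated by 8 s pauses (three bursts in total).
t = np.arange(0, 60, 1 / FS)
envelope = ((t % 20) < 12).astype(float)
pressure = -120 * envelope * np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None)

sucks = detect_sucks(pressure)
bursts = group_bursts(sucks)
print(f"{len(sucks)} sucks grouped into {len(bursts)} bursts")

# Welch's overlapped segment averaging confirms the energy sits well below 20 Hz.
freqs, psd = welch(pressure, fs=FS, nperseg=1024)
idx_99 = np.searchsorted(np.cumsum(psd), 0.99 * psd.sum())
print(f"99% of the signal power lies below {freqs[idx_99]:.1f} Hz")
```

Indices such as the number of sucks per burst, burst duration and pause duration, which recur in the studies surveyed in the next section, follow directly from this kind of segmentation.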
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> In 100 bottle-fed preterm infants feeding efficiency was studied by quantifying the volume of milk intake per minute and the number of teat insertions per 10 ml of milk intake. These variables were related to gestational age and to number of weeks of feeding experience. Feeding efficiency was greater in infants above 34 weeks gestational age than in those below this age. There was a significant correlation between feeding efficiency and the duration of feeding experience at most gestational ages between 32 and 37 weeks. A characteristic adducted and flexed arm posture was observed during feeding: it changed along with feeding experience. A neonatal feeding score was devised that allowed the quantification of the early oral feeding behavior. The feeding score correlated well with some aspects of perinatal assessment, with some aspects of the neonatal neurological evaluation and with developmental assessment at 7 months of age. These findings are a stimulus to continue our study into the relationships between feeding behaviour and other aspects of early development, especially of neurological development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> OBJECTIVE ::: To describe the bottle-feeding histories of preterm infants and determine physical indices related to and predictive of bottle-feeding initiation and progression. ::: ::: ::: DESIGN ::: Ex post facto. ::: ::: ::: SETTING ::: Academic medical center. 
::: ::: ::: PARTICIPANTS ::: A convenience sample of 40 preterm infants without concomitant cardiac, gastrointestinal, or cognitive impairment. ::: ::: ::: MAIN OUTCOME MEASURES ::: Postconceptional age at first bottle-feeding, full bottle-feeding, and discharge. ::: ::: ::: RESULTS ::: The morbidity rating, using the Neonatal Medical Index (NMI), was most strongly correlated with postconceptional age at first bottle-feeding (r = .34, p < .05), full bottle-feeding (r = .65, p < .01), and discharge (r = .55, p < .05). The morbidity rating also accounted for 12%, 42%, and 30% of the variance in postconceptional age at first bottle-feeding, full bottle-feeding, and discharge, respectively. ::: ::: ::: CONCLUSIONS ::: The NMI may be a useful tool for predicting the initiation and progression of bottle-feeding in preterm infants. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. 
Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. 
::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. 
Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> The development of feeding and swallowing is the result of a complex interface between the developing nervous system, various physiological systems, and the environment. The purpose of this article is to review the neurobiology, development, and assessment of feeding and swallowing during early infancy. In recent years, there have been exciting advances in our understanding of the physiology and neurological control of feeding and swallowing. 
These advances may prove useful in furthering our understanding of the pathophysiology of dysphagia in infancy. Progress in developing standardized, reliable, and valid measures of oral sensorimotor and swallowing function in infancy has been slow. However, there have been significant advances in the instrumental analysis of feeding and swallowing disorders in infancy, including manometric analyses of sucking and swallowing, measures of respiration during feeding, videofluoroscopic swallow evaluations, ultrasonography, and flexible endoscopic examination of swallowing. Further efforts are needed to develop clinical evaluative measures of dysphagia in infancy. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> OBJECTIVE ::: This study examined the relationship between the number of sucks in the first nutritive suck burst and feeding outcomes in preterm infants. The relationships of morbidity, maturity, and feeding experience to the number of sucks in the first suck burst were also examined. ::: ::: ::: METHODS ::: A non-experimental study of 95 preterm infants was used. Feeding outcomes included proficiency (percent consumed in first 5 min of feeding), efficiency (volume consumed over total feeding time), consumed (percent consumed over total feeding), and feeding success (proficiency >or=0.3, efficiency >or=1.5 mL/min, and consumed >or=0.8). Data were analyzed using correlation and regression analysis. 
::: ::: ::: RESULTS AND CONCLUSIONS ::: There were statistically significant positive relationships between number of sucks in the first burst and all feeding outcomes-proficiency, efficiency, consumed, and success (r=0.303, 0.365, 0.259, and tau=0.229, P<.01, respectively). The number of sucks in the first burst was also positively correlated to behavior state and feeding experience (tau=0.104 and r=0.220, P<.01, respectively). Feeding experience was the best predictor of feeding outcomes; the number of sucks in the first suck burst also contributed significantly to all feeding outcomes. The findings suggest that as infants gain experience at feeding, the first suck burst could be a useful indicator for how successful a particular feeding might be. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 gnormalh infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven gnormalh infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. 
Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results: Ten 'normal' infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement (n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in 'normal' infants. Conclusions: The DLT method can be used to evaluate feeding performance without any special device. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> ABSTRACT: Objectives: The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated. Methods: The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30.
<s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB018 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> AIM ::: To obtain a better understanding of the changes in feeding behaviour from 1 to 6 months of age. By comparing breast- and bottle-feeding, we intended to clarify the difference in longitudinal sucking performance. ::: ::: ::: METHODS ::: Sucking variables were consecutively measured for 16 breast-fed and eight bottle-fed infants at 1, 3 and 6 months of age. ::: ::: ::: RESULTS ::: For breast-feeding, number of sucks per burst (17.8 +/- 8.8, 23.8 +/- 8.3 and 32.4 +/- 15.3 times), sucking burst duration (11.2 +/- 6.1, 14.7 +/- 8.0 and 17.9 +/- 8.8 sec) and number of sucking bursts per feed (33.9 +/- 13.9, 28.0 +/- 18.2 and 18.6 +/- 12.8 times) at 1, 3 and 6 months of age respectively showed significant differences between 1 and 6 months of age (p < 0.05). The sucking pressure and total number of sucks per feed did not differ among different ages. Bottle-feeding resulted in longer sucking bursts and more sucks per burst compared with breast-feeding in each month (p < 0.05). ::: ::: ::: CONCLUSION ::: The increase in the amount of ingested milk with maturation resulted from an increase in bolus volume per minute as well as the higher number of sucks continuously for both breast- and bottle-fed infants. <s> BIB019 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> AIM ::: Early sucking and swallowing problems may be potential markers of neonatal brain injury and assist in identifying those infants at increased risk of adverse outcomes, but the relation between early sucking and swallowing problems and neonatal brain injury has not been established. 
The aim of the review was, therefore, to investigate the relation between early measures of sucking and swallowing and neurodevelopmental outcomes in infants diagnosed with neonatal brain injury and in infants born very preterm (<32wks) with very low birthweight (<1500g), at risk of neonatal brain injury. ::: ::: ::: METHOD ::: We conducted a systematic review of English-language articles using CINAHL, EMBASE, and MEDLINE OVID (from 1980 to May 2011). Additional studies were identified through manual searches of key journals and the works of expert authors. Extraction of data informed an assessment of the level of evidence and risk of bias for each study using a predefined set of quality indicators. ::: ::: ::: RESULTS ::: A total of 394 abstracts were generated by the search but only nine studies met the inclusion criterion. Early sucking and swallowing problems were present in a consistent proportion of infants and were predictive of neurodevelopmental outcome in infancy in five of the six studies reviewed. ::: ::: ::: LIMITATIONS ::: The methodological quality of studies was variable in terms of research design, level of evidence (National Health and Medical Research Council levels II, III, and IV), populations studied, assessments used and the nature and timing of neurodevelopmental follow-up. ::: ::: ::: CONCLUSIONS ::: Based upon the results of this review, there is currently insufficient evidence to clearly determine the relation between early sucking and swallowing problems and neonatal brain injury. Although early sucking and swallowing problems may be related to later neurodevelopmental outcomes, further research is required to delineate their value in predicting later motor outcomes and to establish reliable measures of early sucking and swallowing function. <s> BIB020
The ability to suck nutritively is not always fully mature at birth and may require time to develop. For immature infants, the complexity of the feeding process can cause difficulties in initiating and progressing with bottle feeding, which is the most frequent indicator of discharge readiness used by healthcare personnel BIB003 . Bottle feeding has been widely investigated for this reason, and because it allows some feeding characteristics to be standardized or controlled across infants (e.g., liquid composition, nipple hole size, and hydrostatic pressure of the milk) BIB009 . For the same reasons, this review focuses on the tools adopted for assessing infants' NS skills during bottle feeding. The adoption of instrumental measures for this early assessment (as opposed to non-instrumental observational methods) is increasing, driven by the growing interest in standardized, reliable, and valid measures of oral sensorimotor function in infancy BIB011 . Indeed, instrumental measures of early oral feeding ability have been reported to be more sensitive and specific in predicting later neurodevelopmental outcomes than non-instrumental observational tools BIB020 , whose psychometric properties are still debated. The literature reports this instrumental assessment of NS behavior and of its development through a wide variety of indices that can be extracted from the measurements. Identifying the most significant indices may help future research focus on their investigation and on the establishment of normative data against which deviations from the norm can be identified. We have therefore focused on surveying the principal indices used for the assessment of NS behavior, as well as the quantities measured to extract them. Table 1 reports the most significant indices adopted for the instrumental assessment of infants' NS behavior during bottle feeding. The indices are grouped into three main categories according to the final objective of the assessment: (i) to evaluate the level of maturation of oral feeding skills in preterm infants BIB009 BIB005 BIB016 BIB006 BIB013 BIB010 BIB014 BIB002 BIB007 BIB001 ; (ii) to evaluate or characterize the level of maturation of oral feeding skills in term infants BIB018 BIB008 BIB019 BIB004 ; and (iii) to enable early detection of later neurodevelopmental outcomes BIB017 BIB012 BIB001 BIB015 . Table 2 lists the physical quantities that have been measured to monitor the NS process and from which the evaluation indices have been extracted. Both tables are organized so as to separate the different components of the NS process, i.e., sucking, swallowing, breathing, and nutrient consumption. Several indices have been adopted for assessing the maturation of preterm infants' sucking skills during bottle feeding. The organization of sucks into bursts and the establishment of a stable temporal pattern are important developmental steps in the maturation of the sucking component BIB006 . Descriptive parameters such as the number of sucks per burst and the percentage of sucks occurring in bursts are therefore important indices of this maturation. Moreover, the number of sucks composing the first burst has proven to be a useful indicator of the feeding outcome BIB013 .
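To illustrate how such descriptive burst parameters can be derived in practice, the sketch below segments a series of detected suck onset times into bursts and computes the number of sucks per burst, the percentage of sucks occurring in bursts, and the size of the first burst. It is not taken from any of the reviewed systems: the 2 s maximum inter-suck gap and the minimum of three sucks per burst mirror the 'run' criterion quoted later in this section, and both thresholds, the function names, and the example timestamps are illustrative assumptions.

```python
from typing import Dict, List

def segment_bursts(suck_times: List[float], max_gap: float = 2.0, min_sucks: int = 3) -> List[List[float]]:
    """Group suck onset times (in seconds) into bursts: runs of sucks whose
    consecutive onsets are separated by at most max_gap seconds; runs with
    fewer than min_sucks sucks are not counted as bursts (assumed thresholds)."""
    runs: List[List[float]] = []
    current: List[float] = []
    for t in suck_times:
        if not current or t - current[-1] <= max_gap:
            current.append(t)
        else:
            runs.append(current)
            current = [t]
    if current:
        runs.append(current)
    return [run for run in runs if len(run) >= min_sucks]

def descriptive_indices(suck_times: List[float]) -> Dict[str, object]:
    """Descriptive burst parameters discussed above."""
    bursts = segment_bursts(suck_times)
    sucks_in_bursts = sum(len(b) for b in bursts)
    return {
        "n_bursts": len(bursts),
        "sucks_per_burst": [len(b) for b in bursts],
        "pct_sucks_in_bursts": 100.0 * sucks_in_bursts / len(suck_times) if suck_times else 0.0,
        "first_burst_size": len(bursts[0]) if bursts else 0,
    }

# Invented suck onset times (seconds from the start of the feeding)
onsets = [0.0, 0.9, 1.8, 2.7, 6.5, 7.4, 8.2, 9.1, 10.0, 14.8]
print(descriptive_indices(onsets))
```

The same event-based representation can also feed the temporal indices discussed next.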
In addition to these descriptive parameters, several temporal parameters appear to be consistent indicators of preterm infants' maturation: sucking frequency (sucks per minute), burst duration, inter-burst width, inter-suck width, and an index of rhythmic stability referred to as the Coefficient of Variation of the sucking process (COV Sk). This index is adopted by several studies to assess maturational patterns in terms of rhythmicity BIB016 BIB006 BIB014 , and it is defined as follows:

COV X = SD(I) / mean(I) (1)

where SD is the standard deviation and I represents the time interval between two consecutive events of the considered process X (e.g., the interval between consecutive sucks). All these indices can be calculated from any measured quantity that allows sucking events to be identified and characterized in time, without distinguishing between the suction and expression components. For example, these parameters can be estimated even through measures of intranipple pressure or chin movements, as in BIB006 BIB013 . On the other hand, the specific measurement of the suction component (IP) is very frequently used for the assessment of NS skills. It allows the estimation of all the indices already mentioned, as well as of the maximum suction amplitude the infant is able to generate (IP amplitude), which is reported as an indicator of the preterm infant's suction maturation BIB009 BIB005 BIB016 BIB010 . However, the maturational process of preterm infants' oral-motor skills has been shown to progress through developmental stages defined according to indices of both the expression and suction components BIB005 . Preterm infants first develop and establish the expression component, then suction, and finally the rhythmic S/E alternation. Hence, measures of both sucking pressures (IP/EP) are needed to estimate some significant indicators of this maturational progress BIB005 BIB016 : S and E rhythmicity, the S:E ratio, the time interval between S and E (S-E interval), and the IP and EP amplitudes. The maturational level of sucking skills in term infants appears to be fully assessable through a set of descriptive and temporal indices that do not require the measurement of both sucking
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> In 100 bottle-fed preterm infants feeding efficiency was studied by quantifying the volume of milk intake per minute and the number of teat insertions per 10 ml of milk intake. These variables were related to gestational age and to number of weeks of feeding experience. Feeding efficiency was greater in infants above 34 weeks gestational age than in those below this age. There was a significant correlation between feeding efficiency and the duration of feeding experience at most gestational ages between 32 and 37 weeks. A characteristic adducted and flexed arm posture was observed during feeding: it changed along with feeding experience. A neonatal feeding score was devised that allowed the quantification of the early oral feeding behavior. The feeding score correlated well with some aspects of perinatal assessment, with some aspects of the neonatal neurological evaluation and with developmental assessment at 7 months of age. These findings are a stimulus to continue our study into the relationships between feeding behaviour and other aspects of early development, especially of neurological development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVE ::: To determine the prevalence and nature of feeding difficulties and oral motor dysfunction among a representative sample of 49 children with cerebral palsy (12 to 72 months of age). ::: ::: ::: STUDY DESIGN ::: A population survey was undertaken by means of a combination of interview and home observational measures. ::: ::: ::: RESULTS ::: Sucking (57%) and swallowing (38%) problems in the first 12 months of life were common, and 80% had been fed nonorally on at least one occasion. More than 90% had clinically significant oral motor dysfunction. One in three (36.2%) was severely impaired and therefore at high risk of chronic undernourishment. There was a substantial discrepancy between the lengthy duration of mealtimes reported by mothers and those actually observed in the home (mean, 19 minutes 21 seconds; range, 5 minutes 21 seconds to 41 minutes 39 seconds). In 60% of the children, severe feeding problems preceded the diagnosis of cerebral palsy. ::: ::: ::: CONCLUSIONS ::: Using a standardized assessment of oral motor function, we found the majority of children to have clinically significant oral motor dysfunction. Contrary to maternal report, mealtimes were relatively brief, and this, combined with the severity of oral motor dysfunction, made it difficult for some children to achieve a satisfactory nutritional intake. The study illustrates the importance of observing feeding, preferably in the home. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. 
However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. 
In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. 
The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. 
Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. 
Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVE ::: This study examined the relationship between the number of sucks in the first nutritive suck burst and feeding outcomes in preterm infants. The relationships of morbidity, maturity, and feeding experience to the number of sucks in the first suck burst were also examined. ::: ::: ::: METHODS ::: A non-experimental study of 95 preterm infants was used. Feeding outcomes included proficiency (percent consumed in first 5 min of feeding), efficiency (volume consumed over total feeding time), consumed (percent consumed over total feeding), and feeding success (proficiency >or=0.3, efficiency >or=1.5 mL/min, and consumed >or=0.8). Data were analyzed using correlation and regression analysis. ::: ::: ::: RESULTS AND CONCLUSIONS ::: There were statistically significant positive relationships between number of sucks in the first burst and all feeding outcomes-proficiency, efficiency, consumed, and success (r=0.303, 0.365, 0.259, and tau=0.229, P<.01, respectively). The number of sucks in the first burst was also positively correlated to behavior state and feeding experience (tau=0.104 and r=0.220, P<.01, respectively). Feeding experience was the best predictor of feeding outcomes; the number of sucks in the first suck burst also contributed significantly to all feeding outcomes. The findings suggest that as infants gain experience at feeding, the first suck burst could be a useful indicator for how successful a particular feeding might be. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 gnormalh infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven gnormalh infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. 
Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten gnormalh infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in gnormalh infants ( p Conclusions The DLT method can be used to evaluate feeding performance without any special device. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). 
The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> AIM ::: To obtain a better understanding of the changes in feeding behaviour from 1 to 6 months of age. By comparing breast- and bottle-feeding, we intended to clarify the difference in longitudinal sucking performance. ::: ::: ::: METHODS ::: Sucking variables were consecutively measured for 16 breast-fed and eight bottle-fed infants at 1, 3 and 6 months of age. ::: ::: ::: RESULTS ::: For breast-feeding, number of sucks per burst (17.8 +/- 8.8, 23.8 +/- 8.3 and 32.4 +/- 15.3 times), sucking burst duration (11.2 +/- 6.1, 14.7 +/- 8.0 and 17.9 +/- 8.8 sec) and number of sucking bursts per feed (33.9 +/- 13.9, 28.0 +/- 18.2 and 18.6 +/- 12.8 times) at 1, 3 and 6 months of age respectively showed significant differences between 1 and 6 months of age (p < 0.05). The sucking pressure and total number of sucks per feed did not differ among different ages. Bottle-feeding resulted in longer sucking bursts and more sucks per burst compared with breast-feeding in each month (p < 0.05). ::: ::: ::: CONCLUSION ::: The increase in the amount of ingested milk with maturation resulted from an increase in bolus volume per minute as well as the higher number of sucks continuously for both breast- and bottle-fed infants. <s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> BACKGROUND ::: Preterm infants often have difficulty in achieving a coordinated sucking pattern. To analyze the correlation between preterm infants with disorganized sucking and future development, weekly studies were performed of 27 preterm infants from initiation of bottle feeding until a normal sucking pattern was recognized. ::: ::: ::: METHODS ::: A total of 27 preterm infants without brain lesion participated in the present study. Neonatal Oral Motor Assessment Scale (NOMAS) was utilized to evaluate the sucking pattern. Infants who were initially assessed as having disorganized sucking on NOMAS and regained a normal sucking pattern by 37 weeks old were assigned to group I; infants with a persistent disorganized sucking pattern after 37 weeks were assigned to group II. The mental (MDI) and psychomotor (PDI) developmental indices of Bayley Scales of Infant Development, second edition were used for follow-up tests to demonstrate neurodevelopment at 6 months and 12 months of corrected age. 
::: ::: ::: RESULTS ::: At 6 months follow up, subjects in group I had a significantly higher PDI score than group II infants (P= 0.04). At 12 months follow up, group I subjects had a significantly higher score on MDI (P= 0.03) and PDI (P= 0.04). There was also a higher rate for development delay in group II at 6 months (P= 0.05). ::: ::: ::: CONCLUSION ::: NOMAS-based assessment for neonatal feeding performance could be a helpful tool to predict neurodevelopmental outcome at 6 and 12 months. Close follow up and early intervention may be necessary for infants who present with a disorganized sucking pattern after 37 weeks post-conceptional age. <s> BIB018 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> AIM ::: Early sucking and swallowing problems may be potential markers of neonatal brain injury and assist in identifying those infants at increased risk of adverse outcomes, but the relation between early sucking and swallowing problems and neonatal brain injury has not been established. The aim of the review was, therefore, to investigate the relation between early measures of sucking and swallowing and neurodevelopmental outcomes in infants diagnosed with neonatal brain injury and in infants born very preterm (<32wks) with very low birthweight (<1500g), at risk of neonatal brain injury. ::: ::: ::: METHOD ::: We conducted a systematic review of English-language articles using CINAHL, EMBASE, and MEDLINE OVID (from 1980 to May 2011). Additional studies were identified through manual searches of key journals and the works of expert authors. Extraction of data informed an assessment of the level of evidence and risk of bias for each study using a predefined set of quality indicators. ::: ::: ::: RESULTS ::: A total of 394 abstracts were generated by the search but only nine studies met the inclusion criterion. Early sucking and swallowing problems were present in a consistent proportion of infants and were predictive of neurodevelopmental outcome in infancy in five of the six studies reviewed. ::: ::: ::: LIMITATIONS ::: The methodological quality of studies was variable in terms of research design, level of evidence (National Health and Medical Research Council levels II, III, and IV), populations studied, assessments used and the nature and timing of neurodevelopmental follow-up. ::: ::: ::: CONCLUSIONS ::: Based upon the results of this review, there is currently insufficient evidence to clearly determine the relation between early sucking and swallowing problems and neonatal brain injury. Although early sucking and swallowing problems may be related to later neurodevelopmental outcomes, further research is required to delineate their value in predicting later motor outcomes and to establish reliable measures of early sucking and swallowing function. <s> BIB019 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVE ::: To examine the association between sucking patterns and the quality of fidgety movements in preterm infants. ::: ::: ::: STUDY DESIGN ::: We studied the sucking patterns and fidgety movements of 44 preterm infants (gestational age <35 weeks) longitudinally from 34 weeks' postmenstrual age up to 14 weeks postterm. We used the Neonatal Oral-Motor Assessment Scale during feeding and scored the sucking patterns as normal or abnormal. Abnormal sucking patterns were categorized into arrhythmic sucking and uncoordinated sucking. 
At 14 weeks postterm, we scored the quality of fidgety movements from videotapes as normal, abnormal, or absent. ::: ::: ::: RESULTS ::: The postmenstrual age at which sucking patterns became normal (median, 48 weeks; range, 34 to >50 weeks) was correlated with the quality of fidgety movements (Spearman ρ = -0.33; P = .035). The percentage per infant of normal and uncoordinated sucking patterns was also correlated with the quality of fidgety movements (ρ = 0.31; P = .048 and ρ = -0.33; P = .032, respectively). Infants with uncoordinated sucking patterns had a higher rate of abnormal fidgety movements (OR, 7.5; 95% CI, 1.4-40; P = .019). ::: ::: ::: CONCLUSION ::: The development of sucking patterns in preterm infants was related to the quality of fidgety movements. Uncoordinated sucking patterns were associated with abnormal fidgety movements, indicating that uncoordinated sucking, swallowing, and breathing may represent neurologic dysfunction. <s> BIB020
Table 2. Physical quantities measured to monitor the NS process, by component.
Sucking: IP BIB008 BIB016 ; EP/IP BIB010 BIB004 BIB014 BIB009 BIB006 ; intranipple pressure BIB005 BIB007 ; chin movements BIB012 ; throat-eye (S) and jaw-eye (E) distance BIB013 .
Swallowing: pharyngeal pressure BIB008 BIB005 BIB011 BIB007 ; hyoid bone movements BIB014 BIB009 ; swallow sounds.
Breathing: nasal airflow/thoracic movements BIB008 BIB005 ; thoracic movements BIB014 BIB009 BIB011 .
Nutrient Consumption: total transferred nutrient BIB008 BIB004 BIB006 BIB007 ; minute transferred volume BIB009 BIB001 ; transferred milk weight BIB014 .
pressures BIB016 BIB007 BIB017 . Almost all of these indices have already been mentioned. An additional one is introduced in BIB016 to quantify sucking variability through a measure of the suck-to-suck fluctuation in amplitude. The authors refer to it as the inconsistency index and define it as the SD of the ratios of amplitudes of successive sucks within bursts. Moreover, an index of sucking intensity is defined as the mean maximum sucking pressure divided by the mean suck duration, and appears to be correlated with the efficiency of the sucking pattern. However, as Table 1 reports, some significant indices for the assessment of oral feeding maturation also concern other components of the NS process. Immature NS reflects not only sucking ability, but also the coordination of sucking with swallowing and respiration. Among the principal indices adopted to evaluate these coordination skills in preterm infants are the coefficients of variation calculated from the breath-breath and swallow-swallow intervals (COV B and COV Sw), which allow the analysis of the feeding-related respiratory and swallowing rhythms BIB011 . Another significant index is the percentage of apneic swallows, i.e., the number of swallows occurring in series of at least three swallows without interposed breaths, divided by the total number of swallows. This index appears to be a clear indicator of maturation in bottle-fed preterm infants, as reported in BIB003 , stressing the importance of ventilatory control during feeding. However, the maturation of this aspect is not complete at term gestation; hence this index of deglutition apnea is an indicator for the assessment of term infants as well. Moreover, a safe coordination between Sw and B is reported as an important developmental achievement for immature preterm infants, and it is usually assessed as the percent occurrence of a specific Sw-B interface (e.g., Inspiration-Swallow-Expiration, I-S-E) BIB008 BIB014 BIB011 . This index is also an important indicator for the assessment of the term infant's feeding pattern, along with the sucking-breathing (Sk-B) interface and the Sk:Sw:B ratio. All these parameters for evaluating preterm infants' ability to establish a mature coordination between sucking, swallowing and breathing can be estimated from different measures of swallowing and breathing (see Table 2), which allow the detection of the relevant events (swallows, inspirations, expirations). For both preterm and term infants, oral feeding performance is usually assessed through indices of sucking efficiency (usually defined as the nutrient volume per suck) and the rate of nutrient intake (intake volume divided by feeding duration), calculated using the different measures of nutrient consumption reported in Table 2. An alternative definition of sucking efficiency that has been adopted is the average milk intake per suck divided by the average effect (pressure × duration) per suck.
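To make the rhythmic-stability and variability indices introduced above (Equation (1), the inconsistency index, and the sucking-intensity index) concrete, the following sketch implements their definitions from series of event times, per-suck peak pressures, and per-suck durations. It is an illustrative implementation only, not code from any of the reviewed systems; the variable names and the sample values are assumptions.

```python
import statistics as stats
from typing import Sequence

def coefficient_of_variation(event_times: Sequence[float]) -> float:
    """COV of Eq. (1): SD of the inter-event intervals divided by their mean."""
    intervals = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    return stats.stdev(intervals) / stats.mean(intervals)

def inconsistency_index(burst_amplitudes: Sequence[float]) -> float:
    """SD of the ratios of amplitudes of successive sucks within one burst."""
    ratios = [a2 / a1 for a1, a2 in zip(burst_amplitudes, burst_amplitudes[1:])]
    return stats.stdev(ratios)

def sucking_intensity(peak_pressures: Sequence[float], durations: Sequence[float]) -> float:
    """Mean maximum sucking pressure divided by mean suck duration."""
    return stats.mean(peak_pressures) / stats.mean(durations)

# Illustrative values: suck onsets (s), per-suck peak pressures (mmHg), suck durations (s)
suck_onsets = [0.0, 0.9, 1.8, 2.8, 3.7, 4.7]
peak_pressures = [95.0, 102.0, 98.0, 105.0, 99.0]
suck_durations = [0.45, 0.50, 0.48, 0.47, 0.49]

print(coefficient_of_variation(suck_onsets))           # COV Sk of this suck run
print(inconsistency_index(peak_pressures))
print(sucking_intensity(peak_pressures, suck_durations))
```

The same COV routine applies unchanged to the swallow-swallow (COV Sw) and breath-breath (COV B) intervals discussed above.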
The bolus size (volume per swallow) is another index of nutrient consumption that allows the assessment of feeding performance in relation to the swallowing pattern. Nutritive sucking has also been considered an early motor marker for the prediction of later neurodevelopmental outcomes in infants BIB019 . Some of the sucking indices already mentioned (Sk frequency, number of sucks per burst, IP amplitude) have proven to be predictors of later neurological outcomes BIB015 . Moreover, with measures of both suction and expression components (IP and EP), the newborn's sucking pattern has been classified according to the rhythmicity and amplitude of both components, allowing prediction of later neurological development BIB010 . S and E have also been assessed through measurements of throat and jaw movements BIB013 (see Section 3 for additional details). The eye-jaw and eye-throat distances have proven useful in identifying differences in feeding performance between healthy infants and infants with neurological disorders. Nutrient consumption is another important factor whose monitoring can allow the estimation of significant indices with predictive value. The newborn's feeding behavior, assessed through the milk intake rate (mL/min), has been shown to correlate with later neurodevelopmental assessment BIB001 . No measurements of the other components (breathing, swallowing) of the nutritive sucking process have been carried out to this end. Such predictive potential of sucking assessment was also confirmed by other authors BIB002 BIB018 BIB020 , whose studies are not reported in this work since they adopted non-instrumental tools for the assessment. The importance of the instrumental monitoring of NS has also been demonstrated in the case of neurologically impaired infants with Down's syndrome: the use of sucking pressure waveforms (IP and EP measures) can be helpful in examining the development of sucking behavior, intraoral movements and therapeutic effects BIB006 . Moreover, problems with sucking and swallowing can be observed in children with cerebral palsy (CP) within the first 12 months of life, and often precede the diagnosis BIB002 . These observations emphasize the importance of monitoring feeding behavior, preferably at home, and of taking a careful feeding history.
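As a complement, the sketch below illustrates how the consumption- and coordination-related indices reviewed above (rate of milk intake, sucking efficiency, bolus size, and percentage of apneic swallows) could be obtained once the events and the transferred volume have been extracted from the recordings. It is only an illustrative sketch under stated assumptions: the function names are hypothetical, the input values are invented, and the apneic-swallow criterion simply follows the "at least three swallows without interposed breaths" definition quoted earlier in this section.

```python
from typing import Sequence

def percent_apneic_swallows(breath_between: Sequence[bool]) -> float:
    """Percentage of swallows occurring in runs of >= 3 swallows with no
    interposed breath. breath_between[i] is True if at least one breath was
    detected between swallow i and swallow i+1 (assumed to be pre-computed
    from the respiratory trace)."""
    n_swallows = len(breath_between) + 1
    apneic, run = 0, 1
    for interposed in breath_between:
        if not interposed:
            run += 1
        else:
            if run >= 3:
                apneic += run
            run = 1
    if run >= 3:
        apneic += run
    return 100.0 * apneic / n_swallows

def intake_rate(volume_ml: float, duration_min: float) -> float:
    """Rate of milk intake (mL/min)."""
    return volume_ml / duration_min

def sucking_efficiency(volume_ml: float, n_sucks: int) -> float:
    """Nutrient volume per suck (mL/suck)."""
    return volume_ml / n_sucks

def bolus_size(volume_ml: float, n_swallows: int) -> float:
    """Average volume per swallow (mL/swallow)."""
    return volume_ml / n_swallows

# Invented example: 30 mL taken in 12 min with 150 sucks and 90 swallows
print(intake_rate(30.0, 12.0))        # 2.5 mL/min
print(sucking_efficiency(30.0, 150))  # 0.2 mL/suck
print(bolus_size(30.0, 90))           # ~0.33 mL/swallow
print(percent_apneic_swallows([True, False, False, True, False, False, False, True]))
```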
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. 
<s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB004
In the previous section we reported the complex set of indices and quantities used to objectively assess infant oral feeding. Measuring such a heterogeneous set of indices requires several technological solutions, which can be grouped into three categories: (i) measuring systems to monitor the sucking process (Table 3); (ii) measuring systems to monitor the swallowing and breathing processes (Table 4); and (iii) measuring systems to monitor nutrient consumption (Table 5). Swallowing and breathing are considered together because several authors BIB004 BIB002 BIB003 BIB001 demonstrated that oral feeding performance in preterm infants depends mainly on their coordination. Table 3. Overview of the measuring systems used to monitor the sucking process: measurands, sensors and measurement procedures.
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> The purpose of this investigation was to quantify normal nutritive sucking, using a microcomputer-based instrument which replicated the infant's customary bottle-feeding routine. 86 feeding sessions were recorded from infants ranging between 1.5 and 11.5 months of age. Suck height, suck area and percentage of time spent sucking were unrelated to age. 
Volume per suck declined with age, as did intersuck interval, which corresponded to a more rapid sucking rate. This meant that volume per minute of sucking time was fairly constant. The apparatus provided an objective description of the patterns of normal nutritive sucking in infants to which abnormal sucking patterns may be compared. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. 
Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> As a consequence of the fragility of various neural structures, preterm infants born at a low gestation and/or birthweight are at an increased risk of developing motor abnormalities. The lack of a reliable means of assessing motor integrity prevents early therapeutic intervention. In this paper, we propose a new method of assessing neonatal motor performance, namely the recording and subsequent analysis of intraoral sucking pressures generated when feeding nutritively. By measuring the infant's control of sucking in terms of a new development of tau theory, normal patterns of intraoral motor control were established for term infants. Using this same measure, the present study revealed irregularities in sucking control of preterm infants. When these findings were compared to a physiotherapist's assessment six months later, the preterm infants who sucked irregularly were found to be delayed in their motor development. 
Perhaps a goal-directed behaviour such as sucking control that can be measured objectively at a very young age, could be included as part of the neurological assessment of the preterm infant. More accurate classification of a preterm infant's movement abnormalities would allow for early therapeutic interventions to be realised when the infant is still acquiring the most basic of motor functions. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. 
It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. 
<s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 'normal' infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven 'normal' infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten 'normal' infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in 'normal' infants. Conclusions The DLT method can be used to evaluate feeding performance without any special device. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Abstract A dynamical systems approach to infant feeding problems is presented. A theoretically motivated analysis of coordination among sucking, swallowing, and breathing is at the heart of the approach. Current views in neonatology and allied medical disciplines begin their analysis of feeding problems with reference to descriptive phases of moving fluid from the mouth to the gut. By contrast, in a dynamical approach, sucking, swallowing, and breathing are considered as a synergy characterized by more or less stable coordination patterns. Research with healthy and at-risk groups of infants is presented to illustrate how coordination dynamics distinguish safe swallowing from patterns of swallowing and breathing that place premature infants at risk for serious medical problems such as pneumonia. Coordination dynamics is also the basis for a new medical device: a computer-controlled milk bottle that controls milk flow on the basis of the infant's coordination patterns. The device is designed so that infants... <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited.
Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB015
Suction:
- Intraoral Pressure. Sensor: PT. Measurement procedures:
  - PT embedded into a catheter and placed at the tip of the nipple BIB006 BIB014 BIB009 BIB002 BIB005
  - PT connected to a catheter, whose opposite tip ends into the oral cavity lumen BIB001 BIB011 BIB007 BIB010 BIB008 BIB012 BIB013 BIB004 BIB003
  - PT placed between the nipple and a flow-limiting device (restriction orifice or a capillary tube) BIB015
- Throat movements. Sensor: videocamera and markers (DLT). Measurement procedure: digital video camera at 1 m from the infant's face; markers placed on the lateral angle of the eye and on the throat BIB012
Expression:
- Expression Pressure. Sensor: PT. Measurement procedures:
  - PT connected to a polyethylene catheter, connected to a catheter of compressible silicone rubber, placed on the nipple BIB014 BIB009 BIB002 BIB005
  - PT connected to the lumen of the nipple by means of a silicone catheter; one-way valve placed between the nipple chamber and the nutrient reservoir BIB011 BIB008 BIB013
- Jaw movements. Sensor: videocamera and markers (DLT). Measurement procedure: digital video camera at 1 m from the infant's face; markers placed on the lateral angle of the eye and on the tip of the jaw. [44]
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Energetics and mechanics of sucking in preterm and term neonates were determined by simultaneous records of intraoral pressure, flow, volume, and work of individual sucks. Nine term infants (mean postconceptional age: 38.6 +/- 0.7 SD weeks; mean postnatal age: 18.4 +/- 6.1 SD days) and nine preterm infants (mean postconceptional age: 35.2 +/- 0.7 SD weeks; mean postnatal age: 21.9 +/- 5.4 SD days) were studied under identical feeding conditions. Preterm infants generated significantly lower peak pressure (mean values of 48.5 cm H2O compared with 65.5 cm H2O in term infants; P less than 0.01), and the volume ingested per suck was generally less than or equal to 0.5 mL. Term infants demonstrated a higher frequency of sucking, a well-defined suck-pause pattern, and a higher minute consumption of formula. Energy and caloric expenditure estimations revealed significantly lower work performed by preterm infants for isovolumic feeds (1190 g/cm/dL in preterm infants compared with 2030 g.cm/dL formula ingested in term infants; P less than 0.01). Furthermore, work performed by term infants was disproportionately higher for volumes greater than or equal to 0.5 mL ingested. This study indicates that preterm infants expend less energy than term infants to suck the same volume of feed and also describes an objective technique to evaluate nutritive sucking during growth and development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> The sucking patterns of 42 healthy full-term and 44 preterm infants whose gestational age at birth was 30.9 +/- 2.1 weeks were compared using the Kron Nutritive Sucking Apparatus for a 5-minute period. The measured pressures were used to calculate six characteristics of the sucking response: maximum pressure generated, amount of nutrient consumed per suck, number of sucks per episode, the duration or width of each suck, the length of time between sucks, and the length of time between sucking episodes. The maximum pressure of the term infant (100.3 +/- 35) was higher, p less than .05, than the maximum pressure of the preterm infant (84 +/- 33). Term infants also consumed more formula per suck (45.3 +/- 20.3 vs. 37.6 +/- 15.9, p less than .05). In addition, they had more sucks/episode (13.6 +/- 8.7 vs. 7.7 +/- 4.1, p less than .001) and maintained the pressure longer for a wider suck width (0.49 +/- 0.1 vs. 0.45 +/- 0.08, p less than .05). Sucking profiles of the preterm infant are significantly different from the full-term infant. These sucking profiles can be developed as a clinically useful tool for nursing practice. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions.
This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Incoordination of sucking, swallowing, and breathing might lead to the decreased ventilation that accompanies bottle feeding in infants, but the precise temporal relationship between these events has not been established. Therefore, we studied the coordination of sucks, swallows, and breaths in healthy infants (8 full-term and 5 preterm). Respiratory movements and airflow were recorded as were sucks and swallows (intraoral and intrapharyngeal pressure). Sucks did not interrupt breathing or decrease minute ventilation during nonnutritive sucking. Minute ventilation during bottle feedings was inversely related to swallow frequency, with elimination of ventilation as the swallowing frequency approached 1.4/s. Swallows were associated with a 600-ms period of decreased respiratory initiation and with a period of airway closure lasting 530 +/- 9.8 (SE) ms. Occasional periods of prolonged airway closure were observed in all infants during feedings. Respiratory efforts during airway closure (obstructed breaths) were common. The present findings indicate that the decreased ventilation observed during bottle feedings is primarily a consequence of airway closure associated with the act of swallowing, whereas the decreased ventilatory efforts result from respiratory inhibition during swallows. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. 
Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> The purpose of this investigation was to quantify normal nutritive sucking, using a microcomputer-based instrument which replicated the infant's customary bottle-feeding routine. 86 feeding sessions were recorded from infants ranging between 1.5 and 11.5 months of age. Suck height, suck area and percentage of time spent sucking were unrelated to age. Volume per suck declined with age, as did intersuck interval, which corresponded to a more rapid sucking rate. This meant that volume per minute of sucking time was fairly constant. The apparatus provided an objective description of the patterns of normal nutritive sucking in infants to which abnormal sucking patterns may be compared. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. 
<s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> To measure infant nutritive sucking reproducibly, nipple flow resistance must be controlled. Previous investigators have accomplished this with flow-limiting venturis, which has two limitations: flow resistance is highly dependent on fluid viscosity and older infants often reject the venturi nipple. This report describes the validation of calibrated-orifice nipples for the measurement of infant nutritive sucking. The flow characteristics of two infant formulas and water through these nipples were not different; those through venturi nipples were (analysis of variance; p < 0.0001). Flow characteristics did not differ among calibrated-orifice nipples constructed from three commercial nipple styles, indicating that the calibrated-orifice design is applicable to different types of baby bottle nipples. Among 3-month-old infants using calibrated-orifice nipples, acceptability was high, and sucking accounted for 85% of the variance in fluid intake during a feeding. We conclude that calibrated-orifice nipples are a valid and acceptable tool for the measurement of infant nutritive sucking. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. 
The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> As a consequence of the fragility of various neural structures, preterm infants born at a low gestation and/or birthweight are at an increased risk of developing motor abnormalities. The lack of a reliable means of assessing motor integrity prevents early therapeutic intervention. In this paper, we propose a new method of assessing neonatal motor performance, namely the recording and subsequent analysis of intraoral sucking pressures generated when feeding nutritively. By measuring the infant's control of sucking in terms of a new development of tau theory, normal patterns of intraoral motor control were established for term infants. Using this same measure, the present study revealed irregularities in sucking control of preterm infants. When these findings were compared to a physiotherapist's assessment six months later, the preterm infants who sucked irregularly were found to be delayed in their motor development. Perhaps a goal-directed behaviour such as sucking control that can be measured objectively at a very young age, could be included as part of the neurological assessment of the preterm infant. More accurate classification of a preterm infant's movement abnormalities would allow for early therapeutic interventions to be realised when the infant is still acquiring the most basic of motor functions. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. 
In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). 
Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. 
<s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Finding ways to consistently prepare preterm infants and their families for more timely discharge must continue as a focus for everyone involved in the care of these infants in the neonatal intensive care unit. The gold standards for discharge from the neonatal intensive care unit are physiologic stability (especially respiratory stability), consistent weight gain, and successful oral feeding, usually from a bottle. Successful bottle-feeding is considered the most complex task of infancy. Fostering successful oral feeding in preterm infants requires consistently high levels of skilled nursing care, which must begin with accurate assessment of feeding readiness and thoughtful progression to full oral feeding. This comprehensive review of the literature provides an overview of the state of the science related to feeding readiness and progression in the preterm infant. The theoretical foundation for feeding readiness and factors that appear to affect bottle-feeding readiness, progression, and success are presented in this article. <s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. 
A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB018 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 'normal' infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven 'normal' infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten 'normal' infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in 'normal' infants. Conclusions The DLT method can be used to evaluate feeding performance without any special device.
<s> BIB019 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage of apneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants ≤35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB020 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Abstract A dynamical systems approach to infant feeding problems is presented. A theoretically motivated analysis of coordination among sucking, swallowing, and breathing is at the heart of the approach. Current views in neonatology and allied medical disciplines begin their analysis of feeding problems with reference to descriptive phases of moving fluid from the mouth to the gut. By contrast, in a dynamical approach, sucking, swallowing, and breathing are considered as a synergy characterized by more or less stable coordination patterns. Research with healthy and at-risk groups of infants is presented to illustrate how coordination dynamics distinguish safe swallowing from patterns of swallowing and breathing that place premature infants at risk for serious medical problems such as pneumonia. Coordination dynamics is also the basis for a new medical device: a computer-controlled milk bottle that controls milk flow on the basis of the infant's coordination patterns. The device is designed so that infants... <s> BIB021 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited.
Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB022 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB023 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB024
The literature suggests several methods to monitor the Sk process. Such methods rely on Pressure Transducers (PTs), optical motion capture systems, and resistive strain gauges to monitor EP and IP, as well as chin, throat, and jaw movements (see Table 3 ). PTs are usually adopted to measure both IP and EP. In particular, IP is always measured with PTs, but with two different nutrient delivery systems. In the first one, a common bottle nipple is used (Figure 2 ): a catheter is applied to the tip of the nipple for IP measurement, while the nutrient flows from the lumen of the nipple to the infant's mouth through the orifice normally present on the nipple tip. This configuration has been adopted in several studies, which have used two different sensing solutions for IP measurement, depending on the position and type of the transducer, as illustrated in Figure 2 : a small catheter-tip pressure transducer (e.g., Millar Mikro-Tip SPR-524) is placed directly at the nipple tip BIB010 BIB022 BIB015 BIB005 BIB008 , or a semiconductor PT is connected to the end of a catheter whose tip is placed into the oral cavity BIB003 BIB018 BIB011 BIB016 BIB012 BIB019 BIB021 BIB007 BIB006 . Some studies BIB011 BIB007 BIB006 using the first configuration specify that the catheter is filled with fluid, which makes the pressure measurement more robust and less sensitive to artifacts; the other studies using the same configuration do not specify whether this is the case. The nipple can also be standardized and calibrated so that it responds to a given differential pressure (the difference between intranipple and intraoral pressure) with a known and acceptable milk flow rate, as in BIB018 BIB016 BIB012 BIB009 . In the second configuration (see Figure 3) , the nipple is modified to embed a tube for nutrient delivery within the nipple tip, and a second tube is connected to a PT to measure IP. In this case the nipple does not completely resemble an ordinary one: it is not filled with fluid, so expression movements cannot influence the release of nutrient. This configuration implies that nutrient flows only when the infant develops an appropriate IP. One of the earliest works studying Sk describes the use of a capillary tube as a flow meter. The system adopted in this study consists of a stoppered burette connected to a capillary tube and then to a nipple. To guarantee a constant delivery pressure equal to the atmospheric one, an opening on a side arm of the burette is always kept at the same height as the nipple. The flow-limiting capillary tube regulates the flow of nutrient and introduces a known linear relation between IP and flow throughout the range of infant sucking pressures (considered as 0 to −300 mmHg). Since such an arrangement may be considered a closed hydraulic system, any increase or decrease of pressure applied to the nipple is transmitted to every part of the connected system. In particular, since the capillary can be treated as a concentrated resistance, the main pressure drop occurs along it, and the pressure measured downstream of it equals the desired IP. Figure 3a shows this configuration, with the PT placed between the capillary and the nipple. A similar capillary system has also been adopted in later studies BIB002 BIB023 , where the PT is connected to the oral cavity by means of a second catheter inserted within the nipple, as Figure 3b illustrates.
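Whichever delivery system is used, the PT output is ultimately a sampled IP trace from which individual suction events must be segmented before indices such as suck count, suck rate, or suction amplitude can be computed. The following Python sketch illustrates one straightforward way of doing this; it is not taken from any of the cited studies, and the sampling rate, pressure threshold, and minimum inter-suck interval are purely illustrative assumptions.

```python
import numpy as np

def detect_sucks(ip_mmHg, fs=100.0, threshold=-20.0, min_interval_s=0.3):
    """Segment individual suction events from an intraoral pressure (IP) trace.

    ip_mmHg        : IP samples in mmHg (suction is negative)
    fs             : sampling frequency in Hz (assumed value)
    threshold      : pressure below which a suck is considered present (assumed value)
    min_interval_s : minimum time between two distinct sucks (assumed value)
    Returns the sample indices and peak amplitudes of the detected sucks.
    """
    ip = np.asarray(ip_mmHg, dtype=float)
    below = ip < threshold                            # samples inside a suction event
    edges = np.diff(below.astype(int))                # +1 = event onset, -1 = event offset
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    if below[0]:
        onsets = np.r_[0, onsets]
    if below[-1]:
        offsets = np.r_[offsets, len(ip)]

    peaks, amplitudes = [], []
    last_peak = -np.inf
    for on, off in zip(onsets, offsets):
        idx = on + np.argmin(ip[on:off])              # most negative sample = suck peak
        if (idx - last_peak) / fs >= min_interval_s:  # enforce a refractory period
            peaks.append(idx)
            amplitudes.append(ip[idx])
            last_peak = idx
    return np.array(peaks), np.array(amplitudes)

# Example: suck rate (sucks/s) and mean suction amplitude over a feeding segment
# peaks, amps = detect_sucks(ip_trace, fs=100.0)
# suck_rate = len(peaks) / (len(ip_trace) / 100.0)
# mean_amplitude = amps.mean() if len(amps) else 0.0
```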
In both systems, the nipple is stiffened with silicone rubber in order to prevent nutrient delivery through expression movements. With this arrangement, the nutrient flow rate through the tube can be calibrated, so that a given intraoral pressure produces a known flow rate and the consumption is proportional to the pressure-time integral. This calibration is possible thanks to a proper configuration of the feeding system, which eliminates two influencing factors: the hydrostatic pressure caused by the height of the nutrient level above the infant's mouth, and the gradual build-up of vacuum inside a sealed nutrient reservoir as the milk flows out. Particular attention has to be paid to limiting the effects of these two factors, which alter the net pressure forcing the liquid into the mouth and thus hamper the feeding performance BIB010 . Figure 3c shows another feeding apparatus adopting such expedients. An open reservoir is used to avoid the creation of vacuum, and the level of the nutrient is constantly maintained at the level of the catheter tip to eliminate any hydrostatic pressure. This measuring system, adopted in BIB001 , embeds two catheters: one for nutrient delivery into the oral cavity, and a second one ending in the same chamber as the nipple, which is in direct communication with the oral cavity through holes in the nipple tip. This nutrient delivery system establishes a linear flow rate in the catheter over a range of 1 to 100 cm H2O of infant sucking pressures. Lang and colleagues BIB024 developed a solution very similar to an ordinary feeding bottle, embedding a nutrient delivery tube, which enables a higher level of portability (see Figure 4a ). They use a modified commercial bottle (VentAire feeding bottle, produced by Playtex) in which a flow chamber is inserted between the milk reservoir and the outlet. The chamber has an inlet flow-restriction orifice and an anti-backflow valve. The diameter of the inlet is very small with respect to that of the outlet, so the inlet offers a higher resistance to milk flow. The pressure inside the milk reservoir is maintained at the atmospheric value thanks to a gas-permeable, fluid-impermeable membrane. The shape of the bottle reduces the effect of the hydrostatic pressure, allowing the level of milk to be easily adjusted to that of the infant's mouth. Such a system allows the suction pressure to be monitored by measuring the pressure changes inside the chamber. The system may be modeled by the equivalent electronic circuit reported in Figure 4b . The inlet and outlet diameters are represented by two electrical resistances, R IN and R OUT respectively; the PT measuring the pressure (voltage) inside the flow chamber with respect to the atmospheric pressure (GND) is modeled by a voltmeter connected to the measuring node M; finally, the sucking pressure is represented by a voltage generator (Sk). The voltage measured at node M will be V M = V Sk · R IN / (R IN + R OUT) BIB017 ; since R IN >> R OUT (due to the geometry), V M may be reasonably assumed to be equal to V Sk . For the EP measurement, a silicone rubber tube can be placed on the outer surface of the nipple of a feeding bottle BIB022 BIB015 BIB005 BIB008 (see Figure 5a) . One end of the catheter (the extremity inside the mouth) is closed, while the other end is connected to a PT by means of a polyethylene catheter. This measuring system has a limitation: a plateau is rapidly reached when the catheter becomes fully compressed.
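Once a linear pressure-flow relation has been established by calibration (through the capillary, or through the flow-restriction orifice in the bottle of Lang and colleagues), the nutrient consumption can be estimated directly from the IP recording as a pressure-time integral; this is also why the hydrostatic head and the reservoir vacuum must be eliminated, since they would add a pressure offset that the calibration constant cannot account for. The sketch below is a minimal illustration of this computation; the calibration constant and the sampling rate are assumed, illustrative values, not figures reported in the cited studies.

```python
import numpy as np

def estimate_intake(ip_mmHg, fs=100.0, k_ml_per_mmHg_s=0.005):
    """Estimate nutrient intake from an IP trace via the pressure-time integral.

    Assumes a calibrated linear relation  flow [mL/s] = k * |IP| [mmHg]
    over the physiological suction range, so that
        volume [mL] = k * sum(|IP|) * (1 / fs),
    where only the suction component (IP < 0) contributes to the flow.

    ip_mmHg          : intraoral pressure samples in mmHg (suction is negative)
    fs               : sampling frequency in Hz (assumed value)
    k_ml_per_mmHg_s  : calibration constant of the flow restrictor (assumed value)
    """
    ip = np.asarray(ip_mmHg, dtype=float)
    suction = np.clip(-ip, 0.0, None)        # keep only the suction component
    volume_ml = k_ml_per_mmHg_s * np.sum(suction) / fs
    return volume_ml

# Example: intake over a 4-min feeding segment sampled at 100 Hz
# intake_ml = estimate_intake(ip_trace, fs=100.0, k_ml_per_mmHg_s=0.005)
```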
Otherwise, a PT can be connected to the lumen of the nipple by a silicone catheter to measure EP. In particular, EP is measured using this configuration with the addition of a one-way valve between the nipple chamber and the nutrient reservoir BIB018 BIB012 BIB021 (see Figure 5b) . Such a valve isolates the interior of the nipple from the milk reservoir during the expression phase, ensuring that the nipple is always full. The same configuration without the valve allows the intra-nipple pressure changes due to sucking events to be monitored, with no S/E distinction BIB013 BIB020 BIB014 BIB004 . Moreover, McGowan et al. BIB006 estimate the net pressure forcing the nutrient out of the nipple as the difference between the intra-nipple pressure and the negative intraoral pressure measured outside the nipple (see Figure 6 ). The two sucking components can also be monitored through the measurement of throat and jaw movements. Such movements are assessed with two different technological solutions in BIB003 BIB019 . One includes the use of a strain-gauge transducer attached between the infant's forehead and chin BIB003 , to measure jaw movements associated with mouthing. In BIB019 the authors use a camera placed 1 m from the infant, at 90° with respect to the front of the baby's face. Three markers are placed on the infant's face (see Figure 7) : one on the lateral Eye Angle (EA), one on the Tip of the Jaw (TJ), and the last one on the Throat (T). To estimate the marker distances in the object plane, the Direct Linear Transformation (DLT) method is used. This method defines a linear transformation between the object space and the image-plane reference frame. Considering a point T in the object space, with coordinates (x, y, z), it is mapped onto the image plane and expressed in the image reference frame as (u, v, d). We can write: (u − u 0 , v − v 0 , d − d 0 ) = c R (x − x 0 , y − y 0 , z − z 0 ), where x 0 , y 0 , z 0 and u 0 , v 0 , d 0 represent the coordinates of the Projection Center (PC) in the object reference frame and in the image reference frame, respectively; c represents a scale factor, and the matrix R is a transformation matrix which allows the projection from one space to the other. The elements of this matrix are estimated through a preliminary calibration procedure, after which the camera must not be moved. The authors simultaneously recorded IP, EP and two anatomical distances, i.e., the eye-throat and the eye-jaw distance, and proved their correlation with suction and expression pressures, respectively.
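As a concrete illustration of the DLT step, the sketch below implements a planar (two-dimensional) variant of the method, which is sufficient when the markers move approximately in a single plane, as in a profile view of the face: a set of control points with known object-plane and image coordinates is used to estimate eight DLT parameters by least squares, after which marker image coordinates can be mapped back to the object plane and the eye-jaw or eye-throat distances computed. This is a generic textbook formulation, not code from BIB019 , and all names and values are illustrative.

```python
import numpy as np

def calibrate_dlt2d(obj_xy, img_uv):
    """Estimate the 8 parameters of a planar DLT from >= 4 control points.

    obj_xy : (N, 2) known coordinates of control points in the object plane
    img_uv : (N, 2) corresponding image coordinates
    Model:  u = (L1*x + L2*y + L3) / (L7*x + L8*y + 1)
            v = (L4*x + L5*y + L6) / (L7*x + L8*y + 1)
    """
    A, b = [], []
    for (x, y), (u, v) in zip(obj_xy, img_uv):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def image_to_object(L, u, v):
    """Map an image point back to object-plane coordinates with the planar DLT."""
    L1, L2, L3, L4, L5, L6, L7, L8 = L
    A = np.array([[L1 - u * L7, L2 - u * L8],
                  [L4 - v * L7, L5 - v * L8]])
    b = np.array([u - L3, v - L6])
    return np.linalg.solve(A, b)              # (x, y) in the object plane

# Example: eye-jaw distance in the object plane from two tracked markers
# L = calibrate_dlt2d(known_obj_points, known_img_points)  # done once, camera then fixed
# eye = image_to_object(L, *eye_marker_uv)
# jaw = image_to_object(L, *jaw_marker_uv)
# eye_jaw_distance = np.linalg.norm(eye - jaw)
```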
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Figure 7. <s> The purpose of this study was to examine the concurrent validity of the Whitney strain gage for the measurement of nutritive sucking in preterm infants. Ten preterm infants were studied continuously during at least one entire bottle feeding per week, from admission into the study until discharge from the nursery. Sucking was measured simultaneously by an adapted nipple and the Whitney gage. The two instruments were compared on the following measures: number of sucking bursts, number of sucks per burst, and duration of bursts and pauses between bursts. Total percent agreement for the occurrence of a sucking burst was 99.3% (K = .99). Sucks per burst varied from 2 to 113, with 89.3% of the pairs of sucking bursts differing by < or = 1 suck per burst. The mean absolute difference between the two instruments for the duration of sucking bursts and pauses was .64 s and .72 s, respectively. These results demonstrate the concurrent validity of the Whitney gage for measurement of sucking events in preterm infants. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Figure 7. <s> ABSTRACTPURPOSEThe purpose of this study was to examine the effect of prefeeding non-nutritive sucking (NNS) on breathing, nutritive sucking (NS), and behavioral characteristics of bottle feeding.SUBJECTSThe convenience sample was composed of 10 preterm infants who were 33 to 40 weeks postconceptual <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Figure 7. <s> OBJECTIVE ::: This study examined the relationship between the number of sucks in the first nutritive suck burst and feeding outcomes in preterm infants. The relationships of morbidity, maturity, and feeding experience to the number of sucks in the first suck burst were also examined. ::: ::: ::: METHODS ::: A non-experimental study of 95 preterm infants was used. Feeding outcomes included proficiency (percent consumed in first 5 min of feeding), efficiency (volume consumed over total feeding time), consumed (percent consumed over total feeding), and feeding success (proficiency >or=0.3, efficiency >or=1.5 mL/min, and consumed >or=0.8). Data were analyzed using correlation and regression analysis. ::: ::: ::: RESULTS AND CONCLUSIONS ::: There were statistically significant positive relationships between number of sucks in the first burst and all feeding outcomes-proficiency, efficiency, consumed, and success (r=0.303, 0.365, 0.259, and tau=0.229, P<.01, respectively). The number of sucks in the first burst was also positively correlated to behavior state and feeding experience (tau=0.104 and r=0.220, P<.01, respectively). Feeding experience was the best predictor of feeding outcomes; the number of sucks in the first suck burst also contributed significantly to all feeding outcomes. The findings suggest that as infants gain experience at feeding, the first suck burst could be a useful indicator for how successful a particular feeding might be. <s> BIB003
Position of the marker on the throat region, for DLT method application, is determined by first locating three facial markers: the external eye angle (A), the tip of the jaw (B) and the throat region (C). In other works BIB003 BIB001 , mercury-in-rubber strain gauges are used to monitor chin movements. Such sensors are connected to a plethysmograph that detects the changes in electrical resistance of the gauges as they are stretched by sucking activity. The strain gauge is kept under tension during measurements, being stretched by at least 10% to 20% beyond its resting length before application. Such a setup has proved reliable for sucking monitoring, even distinguishing chewing on the nipple and other non-sucking activity from true sucking BIB002 .
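Once individual sucks have been identified, whether from pressure, strain-gauge or marker-based recordings, burst-related indices such as the number of bursts, sucks per burst, and burst and pause durations can be derived from the sequence of suck times alone. The sketch below groups sucks into bursts using a run definition that recurs in the literature reviewed here (at least three sucks with no more than 2 s between suck peaks); the thresholds and the structure of the output are illustrative choices rather than a standard taken from any single study.

```python
import numpy as np

def segment_bursts(suck_times_s, max_gap_s=2.0, min_sucks=3):
    """Group suck times (in seconds) into bursts/runs.

    A burst is defined here as at least `min_sucks` consecutive sucks separated
    by no more than `max_gap_s` seconds (a run definition used in the literature
    reviewed here). Returns (start, end, n_sucks) tuples and inter-burst pauses.
    """
    t = np.sort(np.asarray(suck_times_s, dtype=float))
    if t.size == 0:
        return [], []

    bursts, current = [], [t[0]]
    for prev, cur in zip(t[:-1], t[1:]):
        if cur - prev <= max_gap_s:
            current.append(cur)
        else:
            if len(current) >= min_sucks:
                bursts.append((current[0], current[-1], len(current)))
            current = [cur]
    if len(current) >= min_sucks:
        bursts.append((current[0], current[-1], len(current)))

    pauses = [b2[0] - b1[1] for b1, b2 in zip(bursts[:-1], bursts[1:])]
    return bursts, pauses

# Example indices: number of bursts, mean sucks per burst, mean burst duration
# bursts, pauses = segment_bursts(suck_peak_times)
# n_bursts = len(bursts)
# mean_sucks_per_burst = np.mean([n for _, _, n in bursts]) if bursts else 0.0
# mean_burst_duration_s = np.mean([e - s for s, e, _ in bursts]) if bursts else 0.0
```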
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Incoordination of sucking, swallowing, and breathing might lead to the decreased ventilation that accompanies bottle feeding in infants, but the precise temporal relationship between these events has not been established. Therefore, we studied the coordination of sucks, swallows, and breaths in healthy infants (8 full-term and 5 preterm). Respiratory movements and airflow were recorded as were sucks and swallows (intraoral and intrapharyngeal pressure). Sucks did not interrupt breathing or decrease minute ventilation during nonnutritive sucking. Minute ventilation during bottle feedings was inversely related to swallow frequency, with elimination of ventilation as the swallowing frequency approached 1.4/s. Swallows were associated with a 600-ms period of decreased respiratory initiation and with a period of airway closure lasting 530 +/- 9.8 (SE) ms. Occasional periods of prolonged airway closure were observed in all infants during feedings. Respiratory efforts during airway closure (obstructed breaths) were common. The present findings indicate that the decreased ventilation observed during bottle feedings is primarily a consequence of airway closure associated with the act of swallowing, whereas the decreased ventilatory efforts result from respiratory inhibition during swallows. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. 
To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. 
Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> The coordination between swallowing and respiration is essential for safe feeding, and noninvasive feeding-respiratory instrumentation has been used in feeding and dysphagia assessment. Sometimes there are differences of interpretation of the data produced by the various respiratory monitoring techniques, some of which may be inappropriate for observing the rapid respiratory events associated with deglutition. Following a review of each of the main techniques employed for recording resting, pre-feeding, feeding, and post-feeding respiration on different subject groups (infants, children, and adults), a critical comparison of the methods is illustrated by simultaneous recordings from various respiratory transducers. As a result, a minimal combination of instruments is recommended which can provide the necessary respiratory information for routine feeding assessments in a clinical environment. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). 
The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. 
<s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. 
It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> The premature infant has limited ability to integrate the swallowing-breathing cycle during feeding. The aim of this study was to assess the pattern of swallowing between the period of tube-bottle (TBF) and bottle (BF) feeding by means of cervical auscultation in premature infants. Twenty-three premature infants were enrolled (mean gestational age 34.7 +/- 1.7 weeks). Audiosignal recordings were made during TBF and BF with a small microphone set in front of the cricoid cartilage. The following parameters were calculated for 2 min and reported at 1 min: the percentage of time involved in swallowing (ST), the numbers of swallows (SN) and swallowing bursts (SB) and swallowing groups (SG). Individual histograms were established to show the individual pattern of swallowing behaviour and the distribution of groups, bursts and swallows over 2 min. Mean (STm), (SNm), (SBm), (SGm) values were calculated (+/- S.D.). Statistical analysis was used to compare the means and to establish correlations between parameters and curves. (STm), (SNm) and (SBm) increased significantly during BF compared with TBF for all premature infants and during follow-up. The histograms showed that in BF the groups were high in bursts. These findings and the histograms for each infant will allow determination of transition to bottle feeding without risk corresponding to the stage of maturation of swallowing function. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. 
<s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Abstract A dynamical systems approach to infant feeding problems is presented. A theoretically motivated analysis of coordination among sucking, swallowing, and breathing is at the heart of the approach. Current views in neonatology and allied medical disciplines begin their analysis of feeding problems with reference to descriptive phases of moving fluid from the mouth to the gut. By contrast, in a dynamical approach, sucking, swallowing, and breathing are considered as a synergy characterized by more or less stable coordination patterns. Research with healthy and at-risk groups of infants is presented to illustrate how coordination dynamics distinguish safe swallowing from patterns of swallowing and breathing that place premature infants at risk for serious medical problems such as pneumonia. Coordination dynamics is also the basis for a new medical device: a computer-controlled milk bottle that controls milk flow on the basis of the infant's coordination patterns. The device is designed so that infants... <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB014
The evaluation indices for swallowing and breathing concern rhythmicity and their interface, rather than their "extent" (see Table 1 ). Solutions of different complexity have been adopted to monitor these two processes. Sw events are often detected by monitoring the pharyngeal pressure with a PT connected to a tube inserted transnasally as far as the pharynx BIB009 BIB007 BIB012 BIB008 BIB003 BIB001 . Otherwise, a PT connected to a small drum placed on the hyoid region of the infant's neck is used to monitor the hyoid bone movements related to swallowing BIB014 BIB010 BIB004 : the upward movement of this bone, caused by Sw, results in a biphasic pressure wave in the drum, and the peak pressure deflection is generally used as a marker of Sw. Other authors BIB002 BIB013 use microphones placed on the infant's throat to record swallow sounds. Such microphones need to be small enough to be effectively applied to the infant, and their bandwidth should cover a range from 100 Hz to 8 kHz . Moreover, they need to be shielded from external noise, and possible sources of interference, such as babbling, have to be filtered out. The acoustic technique for recording swallowing in premature infants is also investigated in depth in BIB011 . The breathing process is monitored by measuring nasal airflow and/or respiratory movements. The different sensing solutions for respiratory monitoring during feeding in infants are shown in Figure 8 . Nasal thermistors or thermocouples below the nostrils are used to measure air flow. However, they are not sufficient to distinguish between inspiration and expiration. One possibility to address this issue is the use of additional sensors. To clearly identify the flow direction, a PT connected to the nostrils by a soft catheter can be used BIB002 . The catheter and thermistor are embedded in a rigid tool (see Figure 9b) , which is kept in the infant's nostrils during feeding and is capable of recording very low airflows (discrimination threshold below 0.5 L/min) without adding any significant resistance to the flow. Another solution that allows both the airflow and its direction to be measured is a miniaturized pneumotachograph connected to a pressure transducer and placed in a nostril. Such a nasal flowmeter has turned out to be suitable for preterm infants BIB005 , because of its low dead space (less than 0.11 mL), low resistance (0.1 mm H2O/mL·s), light weight (0.2 g) and compact design. Airflow monitoring is also widely performed with thermistors because of their rapid response to flow changes BIB002 BIB009 BIB007 BIB008 BIB013 ; however, they are prone to artifacts caused by temperature equilibration when airflow stops BIB006 . To avoid this problem, many authors monitor breathing by measuring thoracic movements (see Figure 9a) . Such measures pertain to changes in lung inflation reflected by chest and abdominal movements; they make it possible to determine the precise timing of the end of inspiration and expiration, but do not allow quantitative measures such as tidal volume or minute ventilation. Mercury-in-rubber or piezo-resistive strain gauges (respiratory bands) have been used to measure chest movements BIB009 BIB007 BIB012 BIB003 , as well as PTs connected to a drum taped at the thoraco-abdominal junction BIB014 BIB010 . Figure 9 . Devices used for breathing monitoring.
(a) Nasal thermistor or thermocouple applied below the nostrils for nasal airflow measurement; pressure drum or strain gauge band on the chest for the measurement of respiratory movements; (b) Rigid tool applied into the nostrils: the thermistor and the PT are used to assess the air flow and its direction, respectively.
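Because most of the swallowing/breathing indices describe how swallows are placed within the respiratory cycle, the timing information provided by the sensors above (swallow markers from pharyngeal pressure or sound, inspiration and expiration onsets from the thoracic signal) is usually sufficient to compute them. The following sketch labels each swallow with the respiratory phase in which it falls and computes the percentage of apneic swallows, here taken as swallows belonging to runs of at least three swallows without an interposed breath, in line with the definition quoted in this section; the data structures and thresholds are illustrative assumptions.

```python
import numpy as np

def swallow_respiration_indices(swallow_times, insp_onsets, exp_onsets):
    """Relate swallow times to the respiratory cycle.

    swallow_times : times (s) of detected swallows
    insp_onsets   : times (s) at which inspiration starts (from the thoracic signal)
    exp_onsets    : times (s) at which expiration starts
    Returns the respiratory phase of each swallow and the percentage of apneic
    swallows, defined here as swallows in runs of >= 3 without a breath onset
    in between.
    """
    sw = np.sort(np.asarray(swallow_times, dtype=float))
    insp = np.asarray(insp_onsets, dtype=float)
    exp_ = np.asarray(exp_onsets, dtype=float)
    breaths = np.sort(np.concatenate([insp, exp_]))

    phases = []
    for t in sw:
        last_insp = insp[insp <= t]
        last_exp = exp_[exp_ <= t]
        li = last_insp[-1] if last_insp.size else -np.inf
        le = last_exp[-1] if last_exp.size else -np.inf
        phases.append("inspiration" if li > le else "expiration")

    # group consecutive swallows with no breath onset in between
    apneic = np.zeros(sw.size, dtype=bool)
    run = [0]
    for i in range(1, sw.size):
        interposed = np.any((breaths > sw[i - 1]) & (breaths < sw[i]))
        if interposed:
            if len(run) >= 3:
                apneic[run] = True
            run = [i]
        else:
            run.append(i)
    if len(run) >= 3:
        apneic[run] = True

    pct_apneic = 100.0 * apneic.mean() if sw.size else 0.0
    return phases, pct_apneic

# Example: percentage of swallows occurring during inspiration
# phases, pct_apneic = swallow_respiration_indices(sw_times, insp_times, exp_times)
# pct_inspiratory = 100.0 * phases.count("inspiration") / len(phases) if phases else 0.0
```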
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Energetics and mechanics of sucking in preterm and term neonates were determined by simultaneous records of intraoral pressure, flow, volume, and work of individual sucks. Nine term infants (mean postconceptional age: 38.6 +/- 0.7 SD weeks; mean postnatal age: 18.4 +/- 6.1 SD days) and nine preterm infants (mean postconceptional age: 35.2 +/- 0.7 SD weeks; mean postnatal age: 21.9 +/- 5.4 SD days) were studied under identical feeding conditions. Preterm infants generated significantly lower peak pressure (mean values of 48.5 cm H2O compared with 65.5 cm H2O in term infants; P less than 0.01), and the volume ingested per such was generally less than or equal to 0.5 mL. Term infants demonstrated a higher frequency of sucking, a well-defined suck-pause pattern, and a higher minute consumption of formula. Energy and caloric expenditure estimations revealed significantly lower work performed by preterm infants for isovolumic feeds (1190 g/cm/dL in preterm infants compared with 2030 g.cm/dL formula ingested in term infants; P less than 0.01). Furthermore, work performed by term infants was disproportionately higher for volumes greater than or equal to 0.5 mL ingested. This study indicates that preterm infants expend less energy than term infants to suck the same volume of feed and also describes an objective technique to evaluate nutritive sucking during growth and development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. 
To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. 
Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> The purpose of this study was to compare the mechanics of sucking for 48 term infants with four different nipple units: Gerber Newborn (Gerber Products Company, Fremont, Mich.), Playtex (Playtex Products, Westport, Conn.), Evenflo (Evenflo Products Co., Canton, Ga.), and Gerber NUK. At 24 hours after birth, infants were assigned randomly to one of the nipple units and were studied twice with that nipple unit. A customized data acquisition system was used to measure and record the following variables: intraoral suction, sucking frequency, work, power, milk flow, milk volume per suck, and oxygen saturation. Although no statistically significant differences among the nipple units were noted for intraoral suction, sucking frequency, power, and oxygen saturation, the data revealed that the Playtex nipple unit was accompanied by higher peak milk flow and greater volume of milk per suck ( p <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. 
<s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. 
::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. 
::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Feeding by sucking is one of the first activities of daily life performed by infants. Sucking plays a fundamental role in neurological development and may be considered a good early predictor of neuromotor development. In this paper, a new method for ecological assessment of infants' nutritive sucking behavior is presented and experimentally validated. Preliminary data on healthy newborn subjects are first acquired to define the main technical specifications of a novel instrumented device. This device is designed to be easily integrated in a commercially available feeding bottle, allowing clinical method development for screening large numbers of subjects. The new approach proposed allows: 1) accurate measurement of intra-oral pressure for neuromotor control analysis and 2) estimation of milk volume delivered to the mouth within variation between estimated and reference volumes. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> The Sucking Efficiency (SEF) is one of the main parameters used to monitor and assess the sucking pattern development in infants. Since Nutritive Sucking (NS) is one of the earliest motor activities performed by infants, its objective monitoring may allow to assess neurological and motor development of newborns. This work proposes a new ecological and low-cost method for SEF monitoring, specifically designed for feeding bottles. The methodology, based on the measure of the hydrostatic pressure exerted by the liquid at the teat base, is presented and experimentally validated at different operative conditions. Results show how the proposed method allows to estimate the minimum volume an infant ingests during a burst of sucks with a relative error within the range of [3-7]% depending on the inclination of the liquid reservoir. <s> BIB012
Nutrient consumption is often estimated by measuring the residual nutrient volume at the end of the feeding session (total consumption) or at variable time intervals. This measure is frequently performed by observing the liquid level in a graduated reservoir BIB006 BIB008 or by using a balance BIB010 BIB004 . Many authors do not even mention this measure, despite reporting considerations about the total ingested nutrient volume BIB009 BIB007 . Measurements of nutrient consumption at very close time intervals have also been adopted BIB011 BIB003 BIB012 . The volume of delivered milk can be estimated by measuring the changes of the air pressure inside a closed bottle (vacuum build-up) by means of a PT while the liquid flows out, as reported in BIB011 . A PT can also be used to measure the hydrostatic pressure of the remaining liquid column in a cylindrical reservoir in order to estimate the residual volume of liquid, as presented in BIB012 . In this work, the PT was connected to an air-filled catheter ending at the base of an inverted bottle where the liquid column lay. This sensing system also included an accelerometer to estimate the bottle tilt and correct its influence on the hydrostatic pressure. The same principle was also adopted by Al-Sayed BIB003 , who measured the hydrostatic pressure of the residual volume of milk, but in a more controlled situation where the reservoir was fixed and could not be tilted. Such an approach made it possible to measure the residual nutrient volume in the bottle at variable time intervals, when there was no sucking activity. Measures of the nutrient flow have been performed using dedicated flow meters BIB002 BIB001 BIB005 . In BIB005 , the authors use an ultrasonic flow transducer to measure the liquid flow between the feeding bottle and the tip of the feeding nipple. In the other two studies the milk flow is estimated from measurements of the airflow entering the reservoir to fill the void left by the milk, using a pneumotachometer BIB001 or a thermistor BIB002 . In addition to all these methods, many studies, as already described in Section 3.1, estimated the milk flow rate using a calibration of the nutrient delivery system.
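To make the hydrostatic-pressure principle more concrete, the minimal sketch below estimates the residual volume in an ideally cylindrical reservoir from the pressure measured at its base, with a first-order tilt correction taken from an accelerometer. It is only an illustration of the underlying physics, not the implementation used in the cited works; the function names, the assumed milk density and the simplified geometry (a uniform cylinder, teat volume and free-surface wedge neglected) are our own assumptions.

import math

RHO_MILK = 1030.0   # kg/m^3, assumed density of infant formula
G = 9.81            # m/s^2, gravitational acceleration

def residual_volume_ml(p_base_pa, tilt_deg, radius_m):
    # Pressure at the base of a liquid column of length h along the bottle
    # axis is approximately p = rho * g * h * cos(tilt); invert for h.
    cos_tilt = math.cos(math.radians(tilt_deg))
    if cos_tilt <= 0:
        raise ValueError("tilt must be below 90 degrees")
    h = p_base_pa / (RHO_MILK * G * cos_tilt)
    # Volume of the cylindrical column, converted from m^3 to ml.
    return math.pi * radius_m ** 2 * h * 1e6

# Example: 300 Pa at the teat base, 20 degrees of tilt and a 2.5 cm internal
# radius give roughly 60 ml of residual milk.
print(round(residual_volume_ml(300.0, 20.0, 0.025)))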
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Abstract Two studies were performed to test the ability of the newborn human infant to separate the “suction” and “expression” components of the sucking response. An experimental nipple and nutrient delivery system were devised which permitted the delivery of nutrient as a function of the occurrence of either response component and which permitted independent measurement of the components. Thirty infants, 2 to 5 days old, were studied during two successive feedings, 4 hours apart. The results indicated that newborn infants were able to modify the components of the sucking response when performance of these components led to nutritive consequences. When Ss had to express at one of two different pressure levels in order to obtain nutrient they showed significant shifts in expression amplitudes. The results indicate that some learning might have occurred as a result of the experimental manipulations. However, the infant's ability to adapt his sucking behavior seems to be well established before the fifth day of life. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> In 100 bottle-fed preterm infants feeding efficiency was studied by quantifying the volume of milk intake per minute and the number of teat insertions per 10 ml of milk intake. These variables were related to gestational age and to number of weeks of feeding experience. Feeding efficiency was greater in infants above 34 weeks gestational age than in those below this age. There was a significant correlation between feeding efficiency and the duration of feeding experience at most gestational ages between 32 and 37 weeks. A characteristic adducted and flexed arm posture was observed during feeding: it changed along with feeding experience. A neonatal feeding score was devised that allowed the quantification of the early oral feeding behavior. The feeding score correlated well with some aspects of perinatal assessment, with some aspects of the neonatal neurological evaluation and with developmental assessment at 7 months of age. These findings are a stimulus to continue our study into the relationships between feeding behaviour and other aspects of early development, especially of neurological development. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The purpose of this investigation was to quantify normal nutritive sucking, using a microcomputer-based instrument which replicated the infant's customary bottle-feeding routine. 86 feeding sessions were recorded from infants ranging between 1.5 and 11.5 months of age. Suck height, suck area and percentage of time spent sucking were unrelated to age. Volume per suck declined with age, as did intersuck interval, which corresponded to a more rapid sucking rate. This meant that volume per minute of sucking time was fairly constant. The apparatus provided an objective description of the patterns of normal nutritive sucking in infants to which abnormal sucking patterns may be compared. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. 
One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The coordination between swallowing and respiration is essential for safe feeding, and noninvasive feeding-respiratory instrumentation has been used in feeding and dysphagia assessment. Sometimes there are differences of interpretation of the data produced by the various respiratory monitoring techniques, some of which may be inappropriate for observing the rapid respiratory events associated with deglutition. Following a review of each of the main techniques employed for recording resting, pre-feeding, feeding, and post-feeding respiration on different subject groups (infants, children, and adults), a critical comparison of the methods is illustrated by simultaneous recordings from various respiratory transducers. As a result, a minimal combination of instruments is recommended which can provide the necessary respiratory information for routine feeding assessments in a clinical environment. 
<s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The purpose of this study was to compare the mechanics of sucking for 48 term infants with four different nipple units: Gerber Newborn (Gerber Products Company, Fremont, Mich.), Playtex (Playtex Products, Westport, Conn.), Evenflo (Evenflo Products Co., Canton, Ga.), and Gerber NUK. At 24 hours after birth, infants were assigned randomly to one of the nipple units and were studied twice with that nipple unit. A customized data acquisition system was used to measure and record the following variables: intraoral suction, sucking frequency, work, power, milk flow, milk volume per suck, and oxygen saturation. Although no statistically significant differences among the nipple units were noted for intraoral suction, sucking frequency, power, and oxygen saturation, the data revealed that the Playtex nipple unit was accompanied by higher peak milk flow and greater volume of milk per suck ( p <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> To measure infant nutritive sucking reproducibly, nipple flow resistance must be controlled. Previous investigators have accomplished this with flow-limiting venturis, which has two limitations: flow resistance is highly dependent on fluid viscosity and older infants often reject the venturi nipple. This report describes the validation of calibrated-orifice nipples for the measurement of infant nutritive sucking. The flow characteristics of two infant formulas and water through these nipples were not different; those through venturi nipples were (analysis of variance; p < 0.0001). Flow characteristics did not differ among calibrated-orifice nipples constructed from three commercial nipple styles, indicating that the calibrated-orifice design is applicable to different types of baby bottle nipples. Among 3-month-old infants using calibrated-orifice nipples, acceptability was high, and sucking accounted for 85% of the variance in fluid intake during a feeding. We conclude that calibrated-orifice nipples are a valid and acceptable tool for the measurement of infant nutritive sucking. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. 
Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> An earlier study demonstrated that oral feeding of premature infants (<30 wk gestation) was enhanced when milk was delivered through a self-paced flow system. The aims of this study were to identify the principle(s) by which this occurred and to develop a practical method to implement the self-paced system in neonatal nurseries. Feeding performance, measured by overall transfer, duration of oral feedings, efficiency, and percentage of successful feedings, was assessed at three time periods, when infants were taking 1-2, 3-5, and 6-8 oral feedings/day. At each time period, infants were fed, sequentially and in a random order, with a self-paced system, a standard bottle, and a test bottle, the shape of which allowed the elimination of the internal hydrostatic pressure. In a second study, infants were similarly fed with the self-paced system and a vacuum-free bottle which eliminated both hydrostatic pressure and vacuum within the bottle. The duration of oral feedings, efficiency, and percentage of successful feedings were improved with the self-paced system as compared to the standard and test bottles. The results were similar in the comparison between the self-paced system and the vacuum-free bottle. Elimination of the vacuum build-up naturally occurring in bottles enhances the feeding performance of infants born <30 wk gestation as they are transitioned from tube to oral feeding. The vacuum-free bottle is a tool which caretakers can readily use in neonatal nurseries. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. 
We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> AIM ::: To obtain a better understanding of the changes in feeding behaviour from 1 to 6 months of age. By comparing breast- and bottle-feeding, we intended to clarify the difference in longitudinal sucking performance. ::: ::: ::: METHODS ::: Sucking variables were consecutively measured for 16 breast-fed and eight bottle-fed infants at 1, 3 and 6 months of age. ::: ::: ::: RESULTS ::: For breast-feeding, number of sucks per burst (17.8 +/- 8.8, 23.8 +/- 8.3 and 32.4 +/- 15.3 times), sucking burst duration (11.2 +/- 6.1, 14.7 +/- 8.0 and 17.9 +/- 8.8 sec) and number of sucking bursts per feed (33.9 +/- 13.9, 28.0 +/- 18.2 and 18.6 +/- 12.8 times) at 1, 3 and 6 months of age respectively showed significant differences between 1 and 6 months of age (p < 0.05). 
The sucking pressure and total number of sucks per feed did not differ among different ages. Bottle-feeding resulted in longer sucking bursts and more sucks per burst compared with breast-feeding in each month (p < 0.05). ::: ::: ::: CONCLUSION ::: The increase in the amount of ingested milk with maturation resulted from an increase in bolus volume per minute as well as the higher number of sucks continuously for both breast- and bottle-fed infants. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Perioral movements and sucking pattern during bottle feeding with a novel, experimental teat are similar to breastfeeding <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Feeding by sucking is one of the first activities of daily life performed by infants. Sucking plays a fundamental role in neurological development and may be considered a good early predictor of neuromotor development. In this paper, a new method for ecological assessment of infants' nutritive sucking behavior is presented and experimentally validated. Preliminary data on healthy newborn subjects are first acquired to define the main technical specifications of a novel instrumented device. This device is designed to be easily integrated in a commercially available feeding bottle, allowing clinical method development for screening large numbers of subjects. The new approach proposed allows: 1) accurate measurement of intra-oral pressure for neuromotor control analysis and 2) estimation of milk volume delivered to the mouth within variation between estimated and reference volumes. <s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The Sucking Efficiency (SEF) is one of the main parameters used to monitor and assess the sucking pattern development in infants. Since Nutritive Sucking (NS) is one of the earliest motor activities performed by infants, its objective monitoring may allow to assess neurological and motor development of newborns.
This work proposes a new ecological and low-cost method for SEF monitoring, specifically designed for feeding bottles. The methodology, based on the measure of the hydrostatic pressure exerted by the liquid at the teat base, is presented and experimentally validated at different operative conditions. Results show how the proposed method allows to estimate the minimum volume an infant ingests during a burst of sucks with a relative error within the range of [3-7]% depending on the inclination of the liquid reservoir. <s> BIB018
Oral feeding is a complex process requiring a mature sucking ability and, above all, a mature coordination of sucking with breathing and swallowing. The proposed overview of the scientific literature on this topic highlighted the absence of a unique technological sensing solution to assess such skills. In the literature, oral feeding behavior is assessed by monitoring sucking, breathing, swallowing, and nutrient consumption through a wide set of quantities and indices. Such monitoring has proven potentially useful for the assessment of oral feeding pattern maturation in preterm infants and in term infants, and as a prognostic tool for predicting later neurodevelopmental outcomes. To study preterm infants' feeding behavior, intraoral and expression pressures are two fundamental quantities. In particular, S/E coordination and rhythmicity are principally investigated and are significant for characterizing and assessing the development of sucking skills and, possibly, their immaturity. Moreover, the importance of these two components has also been confirmed in the case of neurodisabled infants, confirming the relevance of S/E monitoring for the assessment of immature sucking patterns. In contrast, sucking skills of term infants can be assessed without distinction between IP and EP measures. Regarding the prognostic value of infants' oral feeding skills, it seems that the assessment of the sucking pattern alone is sufficient to predict later neurodevelopmental outcomes. However, the reported studies dealing with this issue BIB013 BIB011 BIB005 are rather recent and encourage further research focused on breathing and swallowing processes as well BIB002 . For both preterm and term infants, Sw-B coordination represents a challenging milestone to attain in the development of feeding skills. Sw-B rhythmicity and its integration into the sucking process is fundamental, highlighting the importance of measuring systems for the detection of Sw and B events and their temporal pattern, rather than for their quantitative characterization. Volume consumption was monitored in most of the studies, and feeding efficiency, as well as feeding rate and bolus size, appear to be important indices to evaluate the changing feeding performance in both preterm and term infants. Such a heterogeneous set of quantities and indices has been assessed with different technological solutions. Given the differences related to the application field (preterm assessment; term assessment; prognostic tool), each solution should be carefully evaluated according to the specific application requirements, which should always aim at a balanced compromise between reliability, invasiveness and practicability. The distinction between the suction and expression components of sucking seems to be specific to the assessment of preterm infants. PTs are the most commonly adopted sensing solution for the measurement of intraoral and expression pressures. However, the simultaneous monitoring of both sucking components imposes special constraints. In particular, the measuring configuration including a tube for nutrient delivery (Figures 3 and 4 ) cannot be adopted, because it removes the contribution of the expression component to the extraction of nutrient and drives infants to modify their sucking response BIB001 , forcing a higher IP. This solution can be adopted if the only goal is to investigate suction ability, since it forces the newborn to rely on it.
Moreover, the specific application to preterm infants imposes additional constraints, due to the compromised motor system of this population. The presence of an additional resistance to the nutrient flow, caused by the tube restriction (see the configurations in Figure 3a,b and Figure 4 ), makes such measuring solutions unsuitable when applied to preterm infants: additional resistance implies additional sucking effort to extract the same amount of nutrient from the bottle. Furthermore, the acceptability of the feeding apparatus is essential for every application. A nutrient delivery tube within the nipple tip may compromise the nipple's mouth feel, even for full-term and/or healthy infants accustomed to feeding from standard nipples on commercial bottles: they often refuse anything other than their usual nipple style BIB008 . For this reason, the relative simplicity of common orifice nipples adapted for IP monitoring, as shown in Figure 2 , is more advantageous. With such a configuration, infants' expression movements can alter the flow rate through the nipple, and so the simultaneous measurement of EP, which is fundamental for the assessment of preterm infants, makes sense. Both sensing solutions illustrated in Figure 2 are applicable to every feeding apparatus and nipple, without requiring a particular design: they can be embedded in both a clinical and a portable domestic assessment tool. Besides, they can also be adopted for IP monitoring during breastfeeding BIB014 BIB016 . The sensing solution in which the pressure waveform is directed to the PT by means of a catheter (Figure 2a and Figure 3 ) should always adopt fluid-filled catheters (free of air bubbles), because air-filled lines do not respond to rapid pressure changes and underestimate peak negative pressures. By contrast, a PT placed directly into the infant's mouth (see Figure 2b ) is more advantageous in several respects (higher accuracy, no time delay, no motion artifacts) and also avoids the need for a fluid-filled system (greater ease of use), but it implies higher costs. Regarding the measurement of EP using PTs, both reported methods ( Figure 5 ) appear to be suitable for clinical and domestic use, as they can be incorporated in a standard feeding apparatus. However, the one measuring EP from the nipple lumen is recommended, since the other presents limitations due to a plateau in the system response corresponding to a full compression of the catheter. Optical motion capture systems may also be considered for E and S monitoring through jaw and throat movements. The advantage of such a monitoring approach is its complete non-invasiveness: any feeding apparatus can be adopted (depending on clinicians' or parents' decision), as no sensing elements are required (it can even be adopted for breastfeeding monitoring BIB016 ). Notwithstanding this advantage, its practicability is very low, as it would require specialized personnel, a structured environment and a precise calibration procedure, and it would be time consuming. These reasons make this kind of monitoring system impractical for both clinical and domestic post-discharge applications. Moreover, mouthing movements (jaw movements) are not directly linked to the effective nutrient expression, as infants could have an ineffective seal around the teat, which would prevent them from feeding properly. If no S/E distinction is necessary, sucking events can be monitored.
The intranipple pressure can be easily recorded by adopting a non-invasive and practical sensing solution, even embeddable in a common feeding apparatus. Sucking movements can also be recorded by placing mercury-in-rubber strain gauges on the infant's chin. The advantage of this latter solution is that it does not require any special sensors to be applied to the feeding apparatus, which can consequently be freely selected by the user (it could also be used for breastfeeding monitoring). However, it is moderately invasive, as the sensing element has to be placed on the infant's face. This may produce additional stress in the preterm newborn, who often shows hypersensitivity of the facial area due to frequent necessary aversive oral and/or nasal procedures BIB012 . Two additional aspects should be taken into consideration in the definition of the main requirements of a standardized assessment tool for feeding: the hydrostatic pressure and the gradual build-up of negative pressure inside the bottle. In most feeding apparatuses used to enable measures of feeding behavior, particular expedients were required to avoid these two factors, which might hamper the feeding performance of immature infants BIB009 , so as to permit the standardization of feeding across infants and the generalization of results. The adopted expedients often imply solutions with low practicability, requiring a structured environment and not suited for post-discharge use (see the schematic representation in Figure 3 ). More effort is required to design and develop instrumented feeding tools that are easy to use in the usual post-discharge environment as well. While the vacuum problem can be easily avoided using a commercial vented bottle, as in BIB015 (Figure 4 ) or in BIB017 , the hydrostatic pressure might also represent an important parameter to record in the case of daily home monitoring of sucking behavior, when the infant has to face this factor while bottle-feeding. Some of the described apparatuses BIB003 BIB018 can be adopted for such an application: they report sensing solutions based on PTs measuring the pressure at the base of the nutrient column. Moreover, these factors suggest that another important aspect in the design of a standardized feeding assessment tool is its shape BIB010 , as it can determine the extent of the influence of hydrostatic pressure on feeding (see Figure 4) . The Sw-B coordination is a challenging milestone in the development of oral feeding skills in term infants and even more so in preterm ones, given their greater immaturity and at-risk condition. Therefore, its careful assessment at discharge from the NICU is strongly recommended. Respiratory monitoring during feeding is quite thorny because of the fast response required of the sensors (breathing events are faster during NS) and because of movement artifacts. Moreover, it is essential, particularly in premature infants, that any respiratory measurement be acceptable to the subject without imposing additional stress. The main techniques to record respiratory events in a clinical environment during feeding were critically compared and analyzed in BIB006 , also taking into consideration the above-mentioned issues. The use of a PT in a nasal cannula just inside the nostrils, as shown in Figure 9b , and of an abdominal PT (Figure 9a ), can be considered suitable for clinical monitoring of respiratory events, because of their minimal invasiveness.
The latter can be considered preferable in the NICU, where the subject may also receive oxygen via a nasal cannula, so any measure relying on nasal airflow would be useless. Quantitative information can be obtained using a nasal thermistor embedded into a tube in which the flow stream has to be channeled. However, it does not provide information about the airflow direction and it may also impose additional stress, especially on a hypersensitive premature infant. A preferable solution can be the use of a pneumotachograph connected to a PT placed in a miniaturized cannula to be inserted in a nostril: it can measure both airflow and direction. However, as already said, these sensing solutions (measuring nasal airflow) may turn out to be impracticable in NICU applications. Considering a domestic post-discharge application requiring high ease of use and portability, none of the respiratory monitoring solutions reported in Section 3 would be easily practicable: none of them is embedded in the feeding apparatus, and they all imply the use of an additional dedicated apparatus. In a post-discharge setting, this may stress the user, who is generally less inclined than a clinician to apply any additional element to the infant's body. The same applies to the swallowing measuring systems described, as all of them imply the use of additional sensing tools. The use of PTs connected to the pharynx by means of a trans-nasal catheter has often been adopted, but it can be easily set up only in NICUs, because it is highly invasive. A less invasive sensing solution for swallowing monitoring is the use of a microphone or a pressure drum placed on the infant's neck. Nutrient consumption is mainly recorded at the beginning and at the end of feeding (or at specific time intervals), by weighing the bottle or checking the level of the nutrient in the graduated reservoir. Even if such methods are quite simple and non-invasive, their main drawback is that they provide a global estimation of the ingested volume but do not allow for continuous monitoring of milk volume intake. They exclude the energetic analysis of the sucking process. Moreover, as the rate of milk flow during bottle feeding plays a crucial role in feeding-related ventilatory changes of both term and preterm infants BIB004 , its assessment through reliable measures appears to be important both for infants' clinical evaluation and for post-discharge monitoring. To this aim, the use of air-flow sensors (thermistors or pneumotachometers) mounted on top of the inverted nutrient reservoir may represent a practicable sensing solution to be further investigated in order to increase the level of integration and develop a portable feeding tool (it also implies the resolution of the vacuum build-up problem). The solutions using PTs to measure nutrient consumption may also be easily adopted at home for remote continuous monitoring of the infant's development, but further research is suggested for their validation in the field BIB017 BIB018 . The same portability can be obtained adopting an ultrasonic flow sensor, as in BIB007 , but it represents an expensive solution. As described in Section 3, several systems adopted a calibration procedure in order to obtain a linear relation between the suction pressure and the consequent flow. However, they imply some particular expedients to eliminate hydrostatic pressure and vacuum in the nutrient reservoir, in order to make the flow depend solely on suction (see the configurations in Figure 3 ).
This affects the ease of use and portability of the system, so it does not represent a recommended solution when both sucking components (S/E) need to be monitored, as already discussed. In contrast, an interesting approach is the one based on estimating the net pressure causing the nutrient release by measuring the pressure gradient driving the flow through a calibrated teat. Two PTs easily embeddable into the feeding teat can be used for this purpose (see Figure 6 ), allowing both the measurement of sucking pressures and the estimation of milk flow. Such a sensing solution allows a calibration that does not require the absence of hydrostatic pressure or vacuum in the bottle. Moreover, it does not depend on the nutrient viscosity, which can vary among commercial formulas and breast milk, as the flow rate through an orifice is known to be independent of fluid viscosity.
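As an illustration of this calibrated-teat principle, the sketch below converts the pressure drop measured by two such PTs into an instantaneous flow estimate using the standard orifice relation, in which the flow scales with the square root of the pressure drop and is essentially independent of viscosity. The function names and the single lumped calibration constant are our own simplification; an actual device would use the empirical calibration curve of its specific teat.

import math

def teat_flow_ml_per_s(p_nipple_pa, p_intraoral_pa, k_cal):
    # Net pressure driving the outflow through the calibrated orifice.
    dp = p_nipple_pa - p_intraoral_pa
    if dp <= 0:
        return 0.0  # no outflow (back-flow through the teat is neglected)
    # Orifice relation: Q is proportional to sqrt(dp); k_cal (ml/s per Pa^0.5)
    # lumps together orifice area, discharge coefficient and liquid density,
    # and would be obtained by calibrating the teat with known pressure drops.
    return k_cal * math.sqrt(dp)

def ingested_volume_ml(p_nipple, p_intraoral, dt_s, k_cal):
    # Integrate the flow estimate over the sampled pressure signals.
    return sum(teat_flow_ml_per_s(pn, po, k_cal) * dt_s
               for pn, po in zip(p_nipple, p_intraoral))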
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Conclusions <s> Preterm infants often have difficulties in learning how to suckle from the breast or how to drink from a bottle. As yet, it is unclear whether this is part of their prematurity or whether it is caused by neurological problems. Is it possible to decide on the basis of how an infant learns to suckle or drink whether it needs help and if so, what kind of help? In addition, can any predictions be made regarding the relationship between these difficulties and later neurodevelopmental outcome? We searched the literature for recent insights into the development of sucking and the factors that play a role in acquiring this skill. Our aim was to find a diagnostic tool that focuses on the readiness for feeding or that provides guidelines for interventions. At the same time, we searched for studies on the relationship between early sucking behavior and developmental outcome. It appeared that there is a great need for a reliable, user-friendly and noninvasive diagnostic tool to study sucking in preterm and full-term infants. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Conclusions <s> Abstract Neonatal motor behavior predicts both current neurological status and future neurodevelopmental outcomes. For speech pathologists, the earliest observable patterned oromotor behavior is su... <s> BIB002
The acquisition of efficient nutritive sucking skills is a fundamental and challenging milestone for every newborn, and even more so for premature ones, as it requires the complex coordination of sucking, swallowing and breathing, which is usually not yet developed in premature infants at birth. Such skills and their development, if monitored, may provide objective parameters for the assessment of infants' well-being and health status, and allow predictions about later neurodevelopmental outcomes. A specifically designed tool to assess infants' oral feeding ability may provide clinicians with new devices for prognosis, diagnosis and routine clinical monitoring of newborn patients. However, such a standardized instrumental tool does not exist yet, and clinical evaluation of feeding ability is not carried out objectively, running the risk of ignoring poor feeding skills for too long. This work carries out a critical analysis of the main instrumental solutions adopted up to now for infant NS monitoring. The first step was to identify the main application fields where objective NS assessment may contribute to improving the level of healthcare assistance: preterm assessment, term and full-term assessment, and early diagnosis of later neurological dysfunctions. Different guidelines may be useful in the design and development of a measurement tool suitable for the two principal environments where it will be used: a clinical setting (the NICU in particular) and a domestic environment for post-discharge monitoring. In the latter, an instrument for monitoring NS and its development should meet two main functional requirements, besides offering reliable and valid measures: portability and ease of use. In addition, non-invasiveness is strongly required in a post-discharge environment, and always preferable when dealing with preterm infants as well. As previously discussed, some sensing solutions proposed for sucking monitoring that meet these requirements are based on the use of common PTs and thin catheters in a standard nipple. Sensing solutions adopting PTs also seemed suitable for estimating nutrient consumption, because of their simplicity. They deserve further investigation for their reliability in the field to be demonstrated. Further attention should also be devoted to nutrient consumption estimation by means of air-flow sensors, given the advantages previously discussed. There is a need for sensitive, quantitative, and efficient analyses of sucking skills, first of all among preterm infants in the NICU BIB001 . The analytical tools for suck assessment in most NICUs are based on subjective judgment. There are obvious limitations to this approach, including reliability within and between examiners and an inability to access the fine structure of pressure dynamics, variability of suck patterning, and developmental progression BIB002 . The requirements for application in clinical settings do not strictly include portability. However, the need for a univocal assessment instrument should promote the development of the cited sensing solutions for clinical applications as well, provided that the reliability and validity of the measuring instrument are guaranteed. Further research should focus on the integration of the proper set of sensors for sucking monitoring into a practical feeding apparatus, and on its validation even in the case of untrained users.
Concerning the monitoring of swallowing and breathing during NS, some of the sensing solutions described in the literature proved applicable for clinical monitoring. However, none of them seemed easily embeddable in a feeding tool for practical and easy use in a domestic setting, where the user is more demanding. This suggests orienting further research efforts toward the design of sensing solutions for breathing and swallowing recording that can be embedded in a simple feeding apparatus, or toward the analysis of the domestic practicability of some of the less invasive solutions proposed. The challenges and limitations discussed in this review warrant further studies to overcome them, in order to obtain a valid and objective tool for standardizing infants' oral feeding assessment. The use of standard pre-discharge assessment devices may foster the establishment of common quantitative criteria useful to assist clinicians in planning clinical interventions. Such devices, or a simplified version of them, might also be adopted for patients' follow-up, such as remote monitoring of infants at home after discharge, as everyday feeding problems can be an early symptom of disability. Besides the instrumental solution, the standardization of infants' oral feeding assessment will require considerable work to collect the amount of data necessary to define normative indices. Moreover, the interpretation of this huge amount of data will require further research to develop ad hoc algorithms for data analysis.
A survey on Hamilton cycles in directed graphs <s> Introduction <s> Abstract A theorem is proved that is, in a sense to be made precise, the best possible generalization of the theorems of Dirac, Posa, and Bondy that give successively weaker sufficient conditions for a graph to be Hamiltonian. Some simple corollaries are deduced concerning Hamiltonian paths, n -Hamiltonian graphs, and Hamiltonian bipartite graphs. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Introduction <s> The theory of directed graphs has developed enormously over recent decades, yet this book (first published in 2000) remains the only book to cover more than a small fraction of the results. New research in the field has made a second edition a necessity. Substantially revised, reorganised and updated, the book now comprises eighteen chapters, carefully arranged in a straightforward and logical manner, with many new results and open problems. As well as covering the theoretical aspects of the subject, with detailed proofs of many important results, the authors present a number of algorithms, and whole chapters are devoted to topics such as branchings, feedback arc and vertex sets, connectivity augmentations, sparse subdigraphs with prescribed connectivity, and also packing, covering and decompositions of digraphs. Throughout the book, there is a strong focus on applications which include quantum mechanics, bioinformatics, embedded computing, and the travelling salesman problem. Detailed indices and topic-oriented chapters ease navigation, and more than 650 exercises, 170 figures and 150 open problems are included to help immerse the reader in all aspects of the subject. Digraphs is an essential, comprehensive reference for undergraduate and graduate students, and researchers in mathematics, operations research and computer science. It will also prove invaluable to specialists in related areas, such as meteorology, physics and computational biology. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Introduction <s> In this paper we give an approximate answer to a question of Nash-Williams from 1970: we show that for every \alpha > 0, every sufficiently large graph on n vertices with minimum degree at least (1/2 + \alpha)n contains at least n/8 edge-disjoint Hamilton cycles. More generally, we give an asymptotically best possible answer for the number of edge-disjoint Hamilton cycles that a graph G with minimum degree \delta must have. We also prove an approximate version of another long-standing conjecture of Nash-Williams: we show that for every \alpha > 0, every (almost) regular and sufficiently large graph on n vertices with minimum degree at least $(1/2 + \alpha)n$ can be almost decomposed into edge-disjoint Hamilton cycles. <s> BIB003
The decision problem of whether a graph has a Hamilton cycle is NP-complete and so a satisfactory characterization of Hamiltonian graphs seems unlikely. Thus it makes sense to ask for degree conditions which ensure that a graph has a Hamilton cycle. One such result is Dirac's theorem, which states that every graph on n ≥ 3 vertices with minimum degree at least n/2 contains a Hamilton cycle. This is strengthened by Ore's theorem: if G is a graph with n ≥ 3 vertices such that every pair x ≠ y of non-adjacent vertices satisfies d(x) + d(y) ≥ n, then G has a Hamilton cycle. Dirac's theorem can also be strengthened considerably by allowing many of the vertices to have small degree: Pósa's theorem states that a graph on n ≥ 3 vertices has a Hamilton cycle if its degree sequence d_1 ≤ · · · ≤ d_n satisfies d_i ≥ i + 1 for all i < (n − 1)/2 and if additionally d_⌈n/2⌉ ≥ ⌈n/2⌉ when n is odd. Again, this is best possible - none of the degree conditions can be relaxed. Chvátal's theorem BIB001 is a further generalization. It characterizes all those degree sequences which ensure the existence of a Hamilton cycle in a graph: suppose that the degrees of the graph G are d_1 ≤ · · · ≤ d_n. If n ≥ 3 and d_i ≥ i + 1 or d_{n−i} ≥ n − i for all i < n/2, then G is Hamiltonian. This condition on the degree sequence is best possible in the sense that for any degree sequence d_1 ≤ · · · ≤ d_n violating this condition there is a corresponding graph with no Hamilton cycle whose degree sequence dominates d_1, . . . , d_n. These four results are among the most general and well-known Hamiltonicity conditions. There are many more - often involving additional structural conditions like planarity. The survey gives an extensive overview (which concentrates on undirected graphs). In this survey, we concentrate on recent progress for directed graphs. Though the problems are equally natural for directed graphs, it is usually much more difficult to obtain satisfactory results. Additional results beyond those discussed here can be found in the corresponding chapter of the monograph BIB002 . In Section 2, we discuss digraph analogues and generalizations of the above four results. The next section is devoted to oriented graphs - these are obtained from undirected graphs by orienting the edges (and thus are digraphs without 2-cycles). Section 4 is concerned with tournaments. Section 5 is devoted to several generalizations of the notion of a Hamilton cycle, e.g. pancyclicity and k-ordered Hamilton cycles. The final section is devoted to the concept of 'robust expansion'. This has been useful in proving many of the recent results discussed in this survey. We will give a brief sketch of how it can be used. In this paper, we also use this notion (and several results from this survey) to obtain a new result (Theorem 18) which gives further support to Kelly's conjecture on Hamilton decompositions of regular tournaments. In a similar vein, we use a result of BIB003 to deduce that the edges of every sufficiently dense regular (undirected) graph can be covered by Hamilton cycles which are almost edge-disjoint (Theorem 21).
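For readers who want to experiment with these conditions, the short sketch below checks Chvátal's degree-sequence condition (which subsumes Dirac's and Pósa's). It is an illustrative snippet of ours, not taken from the surveyed papers, and it only tests the sufficient condition: a False result does not mean the graph is non-Hamiltonian (the 5-cycle, for instance, fails the condition).

def satisfies_chvatal(degrees):
    # Chvatal's condition: with d_1 <= ... <= d_n and n >= 3, the graph is
    # Hamiltonian if for every i < n/2 either d_i >= i + 1 or d_{n-i} >= n - i.
    d = sorted(degrees)
    n = len(d)
    if n < 3:
        return False
    for i in range(1, (n + 1) // 2):  # all integers i with 1 <= i < n/2
        if d[i - 1] < i + 1 and d[n - i - 1] < n - i:
            return False
    return True

# A 3-regular degree sequence on 6 vertices satisfies the condition (Dirac-type),
# while the degree sequence of a path on 5 vertices does not.
print(satisfies_chvatal([3, 3, 3, 3, 3, 3]))  # True
print(satisfies_chvatal([1, 2, 2, 2, 1]))     # False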
A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in directed graphs <s> Diaphragms for electrolytic cells are prepared by depositing onto a cathode screen, discrete thermoplastic fibers. The fibers are highly branched, and which, when deposited form an entanglement or network thereof, which does not require bonding or cementing. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in directed graphs <s> This result is best possible for k = 3 since the Petersen graph is a nonhamiltonian, 2-connected, 3-regular graph on 10 vertices. It is essentially best possible for k > 4 since there exist non-hamiltonian, 2-connected, kregular graphs on 3k + 4 vertices for k even, and 3k + 5 vertices for all k. Examples of such graphs are given in [ 1, 3 1. The problem of determining the values of k for which all 2-connected, k-regular graphs on n vertices are hamiltonian was first suggested by G. Szekeres. Erdijs and Hobbs [ 3 ] proved that such graphs are hamiltonian if n < 2k + ck”*, where c is a positive constant. Subsequently, Bollobas and Hobbs [ 1 ] showed that G is hamiltonian if n < +k. We shall in fact prove a result slightly stronger than Theorem 1. <s> BIB002
2.1. Minimum degree conditions. For an analogue of Dirac's theorem in directed graphs it is natural to consider the minimum semidegree δ 0 (G) of a digraph G, which is the minimum of its minimum outdegree δ + (G) and its minimum indegree δ − (G). (Here a directed graph may have two edges between a pair of vertices, but in this case their directions must be opposite.) The corresponding result is a theorem of Ghouila-Houri BIB001 . Theorem 1 (Ghouila-Houri BIB001 ). Every strongly connected digraph on n vertices with δ + (G) + δ − (G) ≥ n contains a Hamilton cycle. In particular, every digraph with δ 0 (G) ≥ n/2 contains a Hamilton cycle. (When referring to paths and cycles in directed graphs we usually mean that these are directed, without mentioning this explicitly.) For undirected regular graphs, Jackson BIB002 showed that one can reduce the degree condition in Dirac's theorem considerably if we also impose a connectivity condition, i.e. every 2-connected d-regular graph on n vertices with d ≥ n/3 contains a Hamilton cycle. Hilbig improved the degree condition to n/3 − 1 unless G is the Petersen graph or another exceptional graph. The example in Figure 1 shows that the degree condition cannot be reduced any further. Clearly, the connectivity condition is necessary. We believe that a similar result should hold for directed graphs too. Conjecture 2. Every strongly 2-connected d-regular digraph on n vertices with d ≥ n/3 contains a Hamilton cycle. Replacing each edge in Figure 1 with two oppositely oriented edges shows that the degree condition cannot be reduced. Moreover, it is not hard to see that the strong 2-connectivity cannot be replaced by just strong connectivity.
A survey on Hamilton cycles in directed graphs <s> 2.2. <s> Abstract In this article we prove that a sufficient condition for an oriented strongly connected graph with n vertices to be Hamiltonian is: (1) for any two nonadjacent vertices x and y d + (x)+d − (x)+d + (y)+d − (y)⩽sn−1 . <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 2.2. <s> We describe a new type of sufficient condition for a digraph to be Hamiltonian. Conditions of this type combine local structure of the digraph with conditions on the degrees of non-adjacent vertices. The main difference from earlier conditions is that we do not require a degree condition on all pairs of non-adjacent vertices. Our results generalize the classical conditions by Ghouila-Houri and Woodall. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 2.2. <s> In \cite{suffcond} the following extension of Meyniels theorem was conjectured: If $D$ is a digraph on $n$ vertices with the property that $d(x)+d(y)\geq 2n-1$ for every pair of non-adjacent vertices $x,y$ with a common out-neighbour or a common in-neighbour, then $D$ is Hamiltonian. We verify the conjecture in the special case where we also require that $\min\{d^+(x)+d^-(y),d^-(x)+d^+(y)\}\geq n-1$ for all pairs of vertices $x,y$ as above. This generalizes one of the results in \cite{suffcond}. Furthermore we provide additional support for the conjecture above by showing that such a digraph always has a factor (a spanning collection of disjoint cycles). Finally we show that if $D$ satisfies that $d(x)+d(y)\geq\frac{5}{2}n-4$ for every pair of non-adjacent vertices $x,y$ with a common out-neighbour or a common in-neighbour, then $D$ is Hamiltonian. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> 2.2. <s> The theory of directed graphs has developed enormously over recent decades, yet this book (first published in 2000) remains the only book to cover more than a small fraction of the results. New research in the field has made a second edition a necessity. Substantially revised, reorganised and updated, the book now comprises eighteen chapters, carefully arranged in a straightforward and logical manner, with many new results and open problems. As well as covering the theoretical aspects of the subject, with detailed proofs of many important results, the authors present a number of algorithms, and whole chapters are devoted to topics such as branchings, feedback arc and vertex sets, connectivity augmentations, sparse subdigraphs with prescribed connectivity, and also packing, covering and decompositions of digraphs. Throughout the book, there is a strong focus on applications which include quantum mechanics, bioinformatics, embedded computing, and the travelling salesman problem. Detailed indices and topic-oriented chapters ease navigation, and more than 650 exercises, 170 figures and 150 open problems are included to help immerse the reader in all aspects of the subject. Digraphs is an essential, comprehensive reference for undergraduate and graduate students, and researchers in mathematics, operations research and computer science. It will also prove invaluable to specialists in related areas, such as meteorology, physics and computational biology. <s> BIB004
Ore-type conditions. Woodall proved the following digraph version of Ore's theorem, which generalizes Ghouila-Houri's theorem. d + (x) denotes the outdegree of a vertex x, and d − (x) its indegree. Theorem 3 (Woodall ). Let G be a strongly connected digraph on n ≥ 2 vertices. If d + (x) + d − (y) ≥ n for every pair x ≠ y of vertices for which there is no edge from x to y, then G has a Hamilton cycle. Woodall's theorem in turn is generalized by Meyniel's theorem, where the degree condition is formulated in terms of the total degree of a vertex. Here the total degree d(x) of x is defined as d(x) := d + (x) + d − (x). Theorem 4 (Meyniel BIB001 ). Let G be a strongly connected digraph on n ≥ 2 vertices. If d(x) + d(y) ≥ 2n − 1 for all pairs of non-adjacent vertices in G, then G has a Hamilton cycle. The following conjecture of Bang-Jensen, Gutin and Li BIB002 would strengthen Meyniel's theorem by requiring the degree condition only for dominated pairs of vertices (a pair of vertices is dominated if there is a vertex which sends an edge to both of them). Conjecture 5 (Bang-Jensen, Gutin and Li BIB002 ). Let G be a strongly connected digraph on n ≥ 2 vertices. If d(x) + d(y) ≥ 2n − 1 for all dominated pairs of non-adjacent vertices in G, then G has a Hamilton cycle. An extremal example F can be constructed as in Figure 2: let F be the digraph obtained from the complete digraph K = K ↔ n−3 and a complete digraph on 3 other vertices x, y, z as follows: remove the edge from x to z, add all edges in both directions between x and K and all edges from y to K. To see that F has no Hamilton cycle, note that every Hamilton path in F − x has to start at z. Also, note that the only non-adjacent (dominated) pairs of vertices are z together with a vertex u in K, and these satisfy d(z) + d(u) = 2n − 2. Some support for the conjecture is given e.g. by the following result of Bang-Jensen, Guo and Yeo BIB003 : if we also assume the degree condition for all pairs of non-adjacent vertices which have a common outneighbour, then G has a 1-factor, i.e. a union of vertex-disjoint cycles covering all the vertices of G. There are also a number of degree conditions which involve triples or 4-sets of vertices, see e.g. the corresponding chapter in BIB004 .
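For small orders the behaviour of the extremal example F can be checked directly. The following Python sketch (illustrative only; the brute-force search is feasible only for small n) builds F as described above and confirms that it has no Hamilton cycle for n = 7.

from itertools import permutations

def build_F(n):
    # Vertices 0..n-4 form the complete digraph K; the last three are x, y, z.
    K = list(range(n - 3))
    x, y, z = n - 3, n - 2, n - 1
    edges = set()
    for u in K:                        # complete digraph on K
        for v in K:
            if u != v:
                edges.add((u, v))
    for a in (x, y, z):                # complete digraph on {x, y, z} ...
        for b in (x, y, z):
            if a != b:
                edges.add((a, b))
    edges.discard((x, z))              # ... minus the edge from x to z
    for v in K:
        edges.add((x, v))              # all edges in both directions between x and K
        edges.add((v, x))
        edges.add((y, v))              # all edges from y to K
    return list(range(n)), edges

def has_hamilton_cycle(vertices, edges):
    first, rest = vertices[0], vertices[1:]
    for perm in permutations(rest):    # fix the starting vertex to avoid rotations
        cycle = (first,) + perm + (first,)
        if all((cycle[i], cycle[i + 1]) in edges for i in range(len(cycle) - 1)):
            return True
    return False

vertices, edges = build_F(7)
print(has_hamilton_cycle(vertices, edges))   # expected: False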
A survey on Hamilton cycles in directed graphs <s> 2.3. <s> Proof. Let G satisfy the hypothesis of Theorem 1. Clearly, G contains a circuit ; let C be the longest one . If G has no Hamiltonian circuit, there is a vertex x with x ~ C . Since G is s-connected, there are s paths starting at x and terminating in C which are pairwise disjoint apart from x and share with C just their terminal vertices x l, X2, . . ., x s (see [ 11, Theorem 1) . For each i = 1, 2, . . ., s, let y i be the successor of x i in a <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> The main subjects of this survey paper are Hamitonian cycles, cycles of prescirbed lengths, cycles in tournaments, and partitions, packings, and coverings by cycles. Several unsolved problems and a bibiligraphy are included. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> Abstract We give a survey of results and conjectures concerning sufficient conditions in terms of connectivity and independence number for which a graph or digraph has various path or cyclic properties, for example hamilton path/cycle, hamilton connected, pancyclic, path/cycle covers, 2-cyclic. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> We show that for each \eta>0 every digraph G of sufficiently large order n is Hamiltonian if its out- and indegree sequences d^+_1\le ... \le d^+_n and d^- _1 \le ... \le d^-_n satisfy (i) d^+_i \geq i+ \eta n or d^-_{n-i- \eta n} \geq n-i and (ii) d^-_i \geq i+ \eta n or d^+_{n-i- \eta n} \geq n-i for all i<n/2. This gives an approximate solution to a problem of Nash-Williams concerning a digraph analogue of Chv\'atal's theorem. In fact, we prove the stronger result that such digraphs G are pancyclic. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> We show that every sufficiently large oriented graph with minimum in- and outdegree at least (3n-4)/8 contains a Hamilton cycle. This is best possible and solves a problem of Thomassen from 1979. <s> BIB005 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> We show that for each $\beta > 0$, every digraph $G$ of sufficiently large order $n$ whose outdegree and indegree sequences $d_1^+ \leq \cdots \leq d_n^+$ and $d_1^- \leq \cdots \leq d_n^-$ satisfy $d_i^+, d_i^- \geq \min{\{i + \beta n, n/2\}}$ is Hamiltonian. In fact, we can weaken these assumptions to (i) $d_i^+ \geq \min{\{i + \beta n, n/2\}}$ or $d^-_{n - i - \beta n} \geq n-i$, (ii) $d_i^- \geq \min{\{i + \beta n, n/2\}}$ or $d^+_{n - i - \beta n} \geq n-i$, and still deduce that $G$ is Hamiltonian. This provides an approximate version of a conjecture of Nash-Williams from 1975 and improves a previous result of Kuhn, Osthus, and Treglown. <s> BIB006
Degree sequences forcing Hamilton cycles in directed graphs. Nash-Williams raised the question of a digraph analogue of Chvátal's theorem quite soon after the latter was proved: for a digraph G it is natural to consider both its outdegree sequence d_1^+ ≤ · · · ≤ d_n^+ and its indegree sequence d_1^- ≤ · · · ≤ d_n^-. Conjecture 6 (Nash-Williams ). Suppose that G is a strongly connected digraph on n ≥ 3 vertices such that for all i < n/2 (i) d_i^+ ≥ i + 1 or d_{n−i}^- ≥ n − i, and (ii) d_i^- ≥ i + 1 or d_{n−i}^+ ≥ n − i. Then G contains a Hamilton cycle. It is even an open problem whether the conditions imply the existence of a cycle through any pair of given vertices (see BIB002 ). The following example shows that the degree condition in Conjecture 6 would be best possible in the sense that for all n ≥ 3 and all k < n/2 there is a non-Hamiltonian strongly connected digraph G on n vertices which satisfies the degree conditions except that d_k^- ≥ k (rather than d_k^- ≥ k + 1) in the kth pair of conditions. To see this, take an independent set I of size k < n/2 and a complete digraph K of order n − k. Pick a set X of k vertices of K and add all possible edges (in both directions) between I and X. The digraph G thus obtained is strongly connected and not Hamiltonian, and the sequence consisting of k entries equal to k, followed by n − 2k entries equal to n − k − 1 and k entries equal to n − 1, is both the out- and indegree sequence of G. In contrast to the undirected case there exist examples with a similar degree sequence to the above but whose structure is quite different (see BIB004 and BIB006 ). This is one of the reasons which makes the directed case much harder than the undirected one. In BIB006 , the following approximate version of Conjecture 6 for large digraphs was proved. Theorem 7 (Christofides, Keevash, Kühn and Osthus BIB006 ). For every β > 0 there exists an integer n 0 = n 0 (β) such that the following holds. Suppose that G is a digraph on n ≥ n 0 vertices such that for all i < n/2 (i) d_i^+ ≥ min{i + βn, n/2} or d_{n−i−βn}^- ≥ n − i, and (ii) d_i^- ≥ min{i + βn, n/2} or d_{n−i−βn}^+ ≥ n − i. Then G contains a Hamilton cycle. This improved a recent result in BIB004 , where the degrees in the first parts of these conditions were not 'capped' at n/2. The earlier result in BIB004 was derived from a result in BIB005 on the existence of a Hamilton cycle in an oriented graph satisfying a certain expansion property. Capping the degrees at n/2 makes the proof far more difficult: the conditions of Theorem 7 only imply a rather weak expansion property and there are many types of digraphs which almost satisfy the conditions but are not Hamiltonian. The following weakening of Conjecture 6 was posed earlier by Nash-Williams . It would yield a digraph analogue of Pósa's theorem. Conjecture 8 (Nash-Williams ). Let G be a digraph on n ≥ 3 vertices such that d_i^+ ≥ i + 1 and d_i^- ≥ i + 1 for all i < (n − 1)/2 and such that additionally d_{⌈n/2⌉}^+ ≥ ⌈n/2⌉ and d_{⌈n/2⌉}^- ≥ ⌈n/2⌉ when n is odd. Then G contains a Hamilton cycle. The previous example shows the degree condition would be best possible in the same sense as described there. The assumption of strong connectivity is not necessary in Conjecture 8, as it follows from the degree conditions. Theorem 7 immediately implies a corresponding approximate version of Conjecture 8. In particular, for half of the vertex degrees (namely those whose value is n/2), the result matches the conjectured value. 2.4. Chvátal-Erdős type conditions. Another sufficient condition for Hamiltonicity in undirected graphs which is just as fundamental as those listed in the introduction is the Chvátal-Erdős theorem BIB001 : suppose that G is an undirected graph with n ≥ 3 vertices, for which the vertex-connectivity number κ(G) and the independence number α(G) satisfy κ(G) ≥ α(G); then G has a Hamilton cycle. Currently, there is no digraph analogue of this. Given a digraph G, let α 0 (G) denote the size of the largest set S so that S induces no edge and let α 2 (G) be the size of the largest set S so that S induces no cycle of length 2. So α 0 (G) ≤ α 2 (G).
α 0 (G) is probably the more natural extension of the independence number to digraphs. However, even the basic question of whether a Chvátal-Erdős type condition, i.e. a bound on the connectivity in terms of α 0 (G) or α 2 (G), guarantees a Hamilton cycle in a digraph (already discussed e.g. in BIB003 ) is still open.
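The two parameters are easy to compute by brute force on small examples. The following Python sketch (illustrative only; the helper name and the tiny example digraph are arbitrary choices) computes α 0 (G) and α 2 (G) and shows that the inequality α 0 (G) ≤ α 2 (G) can be strict.

from itertools import combinations

def alpha0_alpha2(vertices, edges):
    # alpha_0: largest set S inducing no edge at all;
    # alpha_2: largest set S inducing no cycle of length 2.
    def induces_edge(S):
        return any(u in S and v in S for (u, v) in edges)
    def induces_2cycle(S):
        return any(u in S and v in S and (v, u) in edges for (u, v) in edges)
    a0 = max(r for r in range(len(vertices) + 1)
             if any(not induces_edge(set(S)) for S in combinations(vertices, r)))
    a2 = max(r for r in range(len(vertices) + 1)
             if any(not induces_2cycle(set(S)) for S in combinations(vertices, r)))
    return a0, a2

# A 2-cycle 0 <-> 1 together with the edges 0 -> 2 and 1 -> 2:
# alpha_0 = 1 (every pair of vertices induces an edge), alpha_2 = 2 (e.g. {0, 2}).
print(alpha0_alpha2([0, 1, 2], {(0, 1), (1, 0), (0, 2), (1, 2)}))  # (1, 2)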
A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in oriented graphs <s> The screen or shade for the window is of flexible material so that when in open position it can be pleated. Opposite sides of the screen have tapes sewed on near the edges and male snap fasteners are spaced from each other on the tapes. These fasteners snap into holes in plastic glides which travel along plastic tracks. Either converted by screws or integral with the tracks are angle sealing strips, one flange being on the outside and arranged to prevent air movement from the inside of the screen to the outside, or vice versa, when the screen is closed. Either attached by screws to the track and sealing strip assembly or integral therewith is a mounting rail which is attached by screws to the building or support. A slanting roof is shown having several windows equipped with independently glided screens. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in oriented graphs <s> We show that for each \alpha>0 every sufficiently large oriented graph G with \delta^+(G),\delta^-(G)\ge 3|G|/8+ \alpha |G| contains a Hamilton cycle. This gives an approximate solution to a problem of Thomassen. In fact, we prove the stronger result that G is still Hamiltonian if \delta(G)+\delta^+(G)+\delta^-(G)\geq 3|G|/2 + \alpha |G|. Up to the term \alpha |G| this confirms a conjecture of H\"aggkvist. We also prove an Ore-type theorem for oriented graphs. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in oriented graphs <s> We show that every sufficiently large oriented graph with minimum in- and outdegree at least (3n-4)/8 contains a Hamilton cycle. This is best possible and solves a problem of Thomassen from 1979. <s> BIB003
Recall that an oriented graph is a directed graph with no 2-cycles. Results on oriented graphs seem even more difficult to obtain than results for the digraph case (the Caccetta-Häggkvist conjecture on the girth of oriented graphs of large minimum outdegree is a notorious example of this kind). In particular, most problems regarding Hamiltonicity of such graphs were open until recently and many open questions still remain. 3.1. Minimum degree conditions. Thomassen raised the natural question of determining the minimum semidegree that forces a Hamilton cycle in an oriented graph. Thomassen initially believed that the correct minimum semidegree bound should be n/3 (this bound is obtained by considering a 'blow-up' of an oriented triangle). However, Häggkvist BIB001 later gave a construction which gives a lower bound of ⌈(3n − 4)/8⌉ − 1: For n of the form n = 4m + 3 where m is odd, we construct G on n vertices as in Figure 3 . Since every path which joins two vertices in D has to pass through B, it follows that every cycle contains at least as many vertices from B as it contains from D. As |D| > |B| this means that one cannot cover all the vertices of G by disjoint cycles. This construction can be extended to arbitrary n (see BIB003 ). The following result exactly matches this bound and improves earlier ones of several authors, e.g. BIB001 . In particular, the proof builds on an approximate version which was proved in BIB002 . Theorem 12 (Keevash, Kühn and Osthus BIB003 ). There exists an integer n 0 so that any oriented graph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ (3n − 4)/8 contains a Hamilton cycle. Jackson conjectured that for regular oriented graphs one can significantly reduce the degree condition. Conjecture 13 (Jackson ). For each d > 2, every d-regular oriented graph on n ≤ 4d + 1 vertices contains a Hamilton cycle. The disjoint union of two regular tournaments on n/2 vertices shows that this would be best possible. Note that the degree condition is smaller than the one in Conjecture 2. We believe that it may actually be possible to reduce the degree condition even further if we assume that G is strongly 2-connected: is it true that for each d > 2, every d-regular strongly 2-connected oriented graph G on n ≤ 6d vertices has a Hamilton cycle? A suitable orientation of the example in Figure 1 shows that this would be best possible.
A survey on Hamilton cycles in directed graphs <s> 3.2. <s> The screen or shade for the window is of flexible material so that when in open position it can be pleated. Opposite sides of the screen have tapes sewed on near the edges and male snap fasteners are spaced from each other on the tapes. These fasteners snap into holes in plastic glides which travel along plastic tracks. Either converted by screws or integral with the tracks are angle sealing strips, one flange being on the outside and arranged to prevent air movement from the inside of the screen to the outside, or vice versa, when the screen is closed. Either attached by screws to the track and sealing strip assembly or integral therewith is a mounting rail which is attached by screws to the building or support. A slanting roof is shown having several windows equipped with independently glided screens. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 3.2. <s> Let D be an oriented graph of order n ≧ 9 and minimum degree n − 2. This paper proves that D is pancyclic if for any two vertices u and v, either uv ≅ A(D), or dD+(u) + dD−(v) ≧ n − 3. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 3.2. <s> We show that for each \alpha>0 every sufficiently large oriented graph G with \delta^+(G),\delta^-(G)\ge 3|G|/8+ \alpha |G| contains a Hamilton cycle. This gives an approximate solution to a problem of Thomassen. In fact, we prove the stronger result that G is still Hamiltonian if \delta(G)+\delta^+(G)+\delta^-(G)\geq 3|G|/2 + \alpha |G|. Up to the term \alpha |G| this confirms a conjecture of H\"aggkvist. We also prove an Ore-type theorem for oriented graphs. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> 3.2. <s> We show that for each \eta>0 every digraph G of sufficiently large order n is Hamiltonian if its out- and indegree sequences d^+_1\le ... \le d^+_n and d^- _1 \le ... \le d^-_n satisfy (i) d^+_i \geq i+ \eta n or d^-_{n-i- \eta n} \geq n-i and (ii) d^-_i \geq i+ \eta n or d^+_{n-i- \eta n} \geq n-i for all i<n/2. This gives an approximate solution to a problem of Nash-Williams concerning a digraph analogue of Chv\'atal's theorem. In fact, we prove the stronger result that such digraphs G are pancyclic. <s> BIB004
Ore-type conditions. Häggkvist BIB001 also made the following conjecture which is closely related to Theorem 12. Given an oriented graph G, let δ(G) denote the minimum degree of G (i.e. the minimum number of edges incident to a vertex) and set δ * (G) := δ(G) + δ + (G) + δ − (G). Conjecture 14 (Häggkvist BIB001 ). Every oriented graph G on n vertices with δ * (G) > (3n − 3)/2 contains a Hamilton cycle. (Note that this conjecture does not quite imply Theorem 12 as it results in a marginally greater minimum semidegree condition.) In BIB003 , Conjecture 14 was verified approximately, i.e. if δ * (G) ≥ (3/2 + o(1))n, then G has a Hamilton cycle (note this implies an approximate version of Theorem 12). The same methods also yield an approximate version of Ore's theorem for oriented graphs. Theorem 15 (Kelly, Kühn and Osthus BIB003 ). For every α > 0 there exists an integer n 0 = n 0 (α) such that the following holds: if G is an oriented graph of order n ≥ n 0 with d + (x) + d − (y) ≥ (3/4 + α)n whenever G does not contain an edge from x to y, then G contains a Hamilton cycle. The construction in Figure 3 shows that the bound is best possible up to the term αn. It would be interesting to obtain an exact version of this result. Song BIB002 proved that every oriented graph on n ≥ 9 vertices with δ(G) ≥ n − 2 and d + (x) + d − (y) ≥ n − 3 whenever G does not contain an edge from x to y is pancyclic (i.e. G contains cycles of all possible lengths). In BIB002 he also claims (without proof) that the condition is best possible for infinitely many n as G may fail to contain a Hamilton cycle otherwise. Note that Theorem 15 implies that this claim is false. 3.3. Degree sequence conditions and Chvátal-Erdős type conditions. In BIB004 a construction was described which showed that there is no satisfactory analogue of Pósa's theorem for oriented graphs: as soon as we allow a few vertices to have a degree somewhat below 3n/8, then one cannot guarantee a Hamilton cycle. The question of exactly determining all those degree sequences which guarantee a Hamilton cycle remains open though. It is also not clear whether there may be a version of the Chvátal-Erdős theorem for oriented graphs.
A survey on Hamilton cycles in directed graphs <s> Tournaments <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Tournaments <s> The so-called Kelly conjecture1 A proof of the Kelly conjecture for large k has been announced by R. Haggkvist at several conferences and in [5] but to this date no proof has been published.states that every regular tournament on 2k+1 vertices has a decomposition into k-arc-disjoint hamiltonian cycles. In this paper we formulate a generalization of that conjecture, namely we conjecture that every k-arc-strong tournament contains k arc-disjoint spanning strong subdigraphs. We prove several results which support the conjecture:If D = (V, A) is a 2-arc-strong semicomplete digraph then it contains 2 arc-disjoint spanning strong subdigraphs except for one digraph on 4 vertices.Every tournament which has a non-trivial cut (both sides containing at least 2 vertices) with precisely k arcs in one direction contains k arc-disjoint spanning strong subdigraphs. In fact this result holds even for semicomplete digraphs with one exception on 4 vertices.Every k-arc-strong tournament with minimum in- and out-degree at least 37k contains k arc-disjoint spanning subdigraphs H1, H2, . . . , Hk such that each Hi is strongly connected.The last result implies that if T is a 74k-arc-strong tournament with speci.ed not necessarily distinct vertices u1, u2, . . . , uk, v1, v2, . . . , vk then T contains 2k arc-disjoint branchings $$F^{ - }_{{u_{1} }} ,F^{ - }_{{u_{2} }} ,...,F^{ - }_{{u_{k} }} ,F^{ + }_{{v_{1} }} ,F^{ + }_{{v_{2} }} ,...,F^{ + }_{{v_{k} }}$$ where $$F^{ - }_{{u_{i} }}$$ is an in-branching rooted at the vertex ui and $$F^{ + }_{{v_{i} }}$$ is an out-branching rooted at the vertex vi, i=1,2, . . . , k. This solves a conjecture of Bang-Jensen and Gutin [3].We also discuss related problems and conjectures. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Tournaments <s> We show that every sufficiently large regular tournament can a lmost completely be decomposed into edge-disjoint Hamilton cycles. More precisely, for each � > 0 every regular tournament G of sufficiently large ordern contains at least (1/2 �)n edge-disjoint Hamilton cycles. This gives an approximate solution to a conjecture of Kelly from 1968. Our result also extends to almost regular tournaments. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Tournaments <s> In this paper we give an approximate answer to a question of Nash-Williams from 1970: we show that for every \alpha > 0, every sufficiently large graph on n vertices with minimum degree at least (1/2 + \alpha)n contains at least n/8 edge-disjoint Hamilton cycles. More generally, we give an asymptotically best possible answer for the number of edge-disjoint Hamilton cycles that a graph G with minimum degree \delta must have. We also prove an approximate version of another long-standing conjecture of Nash-Williams: we show that for every \alpha > 0, every (almost) regular and sufficiently large graph on n vertices with minimum degree at least $(1/2 + \alpha)n$ can be almost decomposed into edge-disjoint Hamilton cycles. <s> BIB004
A tournament is an orientation of a complete graph. It has long been known that tournaments enjoy particularly strong Hamiltonicity properties: Camion showed that we only need to assume strong connectivity to ensure that a tournament has a Hamilton cycle. Moon strengthened this by proving that every strongly connected tournament is even pancyclic. It is easy to see that a minimum semidegree of n/4 forces a tournament on n vertices to be strongly connected, leading to a better degree condition for Hamiltonicity than that of (3n − 4)/8 for the class of all oriented graphs. 4.1. Edge-disjoint Hamilton cycles and decompositions. A Hamilton decomposition of a graph or digraph G is a set of edge-disjoint Hamilton cycles which together cover all the edges of G. Not many examples of graphs with such decompositions are known. One can construct a Hamilton decomposition of a complete graph if and only if its order is odd (this was first observed by Walecki in the late 19th century). Tillson proved that a complete digraph G on n vertices has a Hamilton decomposition if and only if n ≠ 4, 6. The following conjecture of Kelly from 1968 (see Moon ) would be a far-reaching generalization of Walecki's result: Conjecture 16 (Kelly). Every regular tournament on n vertices can be decomposed into (n − 1)/2 edge-disjoint Hamilton cycles. In BIB003 we proved an approximate version of Kelly's conjecture. Moreover, the result holds even for oriented graphs G which are not quite regular and whose 'underlying' undirected graph is not quite complete. Theorem 17 (Kühn, Osthus and Treglown BIB003 ). For every η 1 > 0 there exist n 0 = n 0 (η 1 ) and η 2 = η 2 (η 1 ) > 0 such that the following holds. Suppose that G is an oriented graph on n ≥ n 0 vertices such that δ 0 (G) ≥ (1/2 − η 2 )n. Then G contains at least (1/2 − η 1 )n edge-disjoint Hamilton cycles. We also proved that the condition on the minimum semidegree can be relaxed to δ 0 (G) ≥ (3/8+η 2 )n. This is asymptotically best possible since the construction described in Figure 3 is almost regular. Some earlier support for Kelly's conjecture was provided by Thomassen [63], who showed that the edges of every regular tournament can be covered by at most 12n Hamilton cycles. In this paper, we improve this to an asymptotically best possible result. We will give a proof (which relies on Theorem 17) in Section 6.1. Theorem 18. For every ξ > 0 there exists an integer n 0 = n 0 (ξ) so that every regular tournament G on n ≥ n 0 vertices contains a set of (1/2 + ξ)n Hamilton cycles which together cover all the edges of G. Kelly's conjecture has been generalized in several ways, e.g. Bang-Jensen and Yeo BIB002 conjectured that every k-edge-connected tournament has a decomposition into k spanning strong digraphs. A bipartite version of Kelly's conjecture was also formulated by Jackson BIB001 . Thomassen made the following conjecture which replaces the assumption of regularity by high connectivity. Conjecture 19 (Thomassen ). For every k ≥ 2 there is an integer f (k) so that every strongly f (k)-connected tournament has k edge-disjoint Hamilton cycles. A conjecture of Erdős (see ) which is also related to Kelly's conjecture states that almost all tournaments G have at least δ 0 (G) edge-disjoint Hamilton cycles. Similar techniques as in the proof of the approximate version of Kelly's conjecture were used at the same time in BIB004 to prove approximate versions of two long-standing conjectures of Nash-Williams on edge-disjoint Hamilton cycles in (undirected) graphs.
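As a concrete special case related to Kelly's conjecture, for a prime number of vertices the rotational (circulant) tournament has a completely explicit Hamilton decomposition: for each j = 1, . . . , (n − 1)/2 the edges of 'jump' j form a single Hamilton cycle. The following Python sketch (illustrative only, and covering only this special construction rather than general regular tournaments) verifies this for n = 7.

def circulant_tournament(n):
    # Rotational tournament on Z_n (n odd): i -> i + j (mod n) for j = 1, ..., (n-1)/2.
    k = (n - 1) // 2
    return {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}

def jump_class_is_hamilton_cycle(n, j):
    # The edges {i -> i + j} form one cycle through all vertices iff gcd(j, n) = 1,
    # which holds for every j = 1, ..., (n-1)/2 when n is prime.
    seen, v = set(), 0
    while v not in seen:
        seen.add(v)
        v = (v + j) % n
    return len(seen) == n

n = 7  # any odd prime works here
edges = circulant_tournament(n)
cycles = [{(i, (i + j) % n) for i in range(n)} for j in range(1, (n - 1) // 2 + 1)]
assert all(jump_class_is_hamilton_cycle(n, j) for j in range(1, (n - 1) // 2 + 1))
assert set().union(*cycles) == edges and sum(len(c) for c in cycles) == len(edges)
print("Hamilton decomposition into", len(cycles), "cycles")   # (n-1)/2 = 3 for n = 7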
One of these results states that one can almost decompose any dense regular graph into Hamilton cycles. Theorem 20 (Christofides, Kühn and Osthus BIB004 ). For every η > 0 there is an integer n 0 = n 0 (η) so that every d-regular graph on n ≥ n 0 vertices with d ≥ (1/2 + η)n contains at least (d − ηn)/2 edge-disjoint Hamilton cycles. In Section 6.1 we deduce the following analogue of Theorem 18: Theorem 21. For every ξ > 0 there is an integer n 0 = n 0 (ξ) so that every d-regular graph G on n ≥ n 0 vertices with d ≥ (1/2 + ξ)n contains a set of at most (d + ξn)/2 Hamilton cycles which together cover all the edges of G.
A survey on Hamilton cycles in directed graphs <s> Counting Hamilton cycles in tournaments. <s> Solving an old conjecture of Szele we show that the maximum number of directed Hamiltonian paths in a tournament onn vertices is at mostc · n3/2· n!/2n−1, wherec is a positive constant independent ofn. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Counting Hamilton cycles in tournaments. <s> Let $P(n)$ and $C(n)$ denote, respectively, the maximum possible numbers of Hamiltonian paths and Hamiltonian cycles in a tournament on n vertices. The study of $P(n)$ was suggested by Szele [14], who showed in an early application of the probabilistic method that $P(n) \geq n!2^{-n+1}$, and conjectured that $\lim ( {P(n)}/ {n!} )^{1/n}= 1/2.$ This was proved by Alon [2], who observed that the conjecture follows from a suitable bound on $C(n)$, and showed $C(n) <O(n^{3/2}(n-1)!2^{-n}).$ Here we improve this to $C(n)<O\big(n^{3/2-\xi}(n-1)!2^{-n}\big),$ with $\xi = 0.2507$… Our approach is mainly based on entropy considerations. <s> BIB002
One of the earliest results on tournaments (and the probabilistic method) was obtained by Szele , who showed that the maximum number P(n) of Hamilton paths in a tournament on n vertices satisfies P(n) = O(n!/2^{3n/4}) and P(n) ≥ n!/2^{n−1} =: f(n). The lower bound is obtained by considering a random tournament. The best upper bound is due to Friedgut and Kahn BIB002 who showed that P(n) = O(n^c f(n)), where c is slightly less than 5/4. The best current lower bound is due to Wormald , who showed that P(n) ≥ (2.855 + o(1))f(n). So in particular, P(n) is not attained for random tournaments. Also, he conjectured that this bound is very close to the correct value. Similarly, one can define the maximum number C(n) of Hamilton cycles in a tournament on n vertices. Note that by considering a random tournament again, we obtain C(n) ≥ (n − 1)!/2^n =: g(n). Unsurprisingly, C(n) and P(n) are very closely related, e.g. we have P(n) ≥ nC(n). In particular, the main result in BIB002 states that C(n) = O(n^c g(n)), where c is the same as above. This implies the above bound on P(n), since Alon BIB001 observed that P(n) ≤ 4C(n + 1). Also, Wormald showed that C(n) ≥ (2.855 + o(1))g(n). (Note this also follows by combining Alon's observation with the lower bound on P(n) in .) Of course, in general it does not make sense to ask for the minimum number of Hamilton paths or cycles in a tournament. However, the question does make sense for regular tournaments. Friedgut and Kahn BIB002 asked whether the number of Hamilton cycles in a regular tournament is always at least Ω(g(n)). The best result towards this was recently obtained by Cuckler , who showed that every regular tournament on n vertices contains at least n!/(2 + o(1))^n Hamilton cycles. This also answers an earlier question of Thomassen. Asking for the minimum number of Hamilton paths in a tournament T also makes sense if we assume that T is strongly connected. Busch determined this number exactly by showing that an earlier construction of Moon is best possible. The related question on the minimum number of Hamilton cycles in a strongly 2-connected tournament is still open (see ).
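For orientation, the first-moment computation behind the lower bounds f(n) and g(n) above is the following standard argument (a folklore sketch, not taken from any particular reference cited here). In a random tournament every edge is oriented either way with probability 1/2, independently of all other edges, so

\[
\mathbb{E}\bigl[\#\{\text{Hamilton paths}\}\bigr] = \frac{n!}{2^{\,n-1}} = f(n),
\qquad
\mathbb{E}\bigl[\#\{\text{Hamilton cycles}\}\bigr] = \frac{(n-1)!}{2^{\,n}} = g(n),
\]

since there are n! orderings of the vertices (respectively (n − 1)! cyclic orderings), and a fixed ordering forms a directed Hamilton path (respectively cycle) precisely when its n − 1 (respectively n) edges are all oriented consistently. In particular, some tournament on n vertices contains at least f(n) Hamilton paths and some tournament contains at least g(n) Hamilton cycles.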
A survey on Hamilton cycles in directed graphs <s> 4.3. <s> We prove that with three exceptions, every tournament of order n contains each oriented path of order n. The exceptions are the antidirected paths in the 3-cycle, in the regular tournament on 5 vertices, and in the Paley tournament on 7 vertices. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 4.3. <s> Sumner@?s universal tournament conjecture states that any tournament on 2n-2 vertices contains a copy of any directed tree on n vertices. We prove an asymptotic version of this conjecture, namely that any tournament on (2+o(1))n vertices contains a copy of any directed tree on n vertices. In addition, we prove an asymptotically best possible result for trees of bounded degree, namely that for any fixed @D, any tournament on (1+o(1))n vertices contains a copy of any directed tree on n vertices with maximum degree at most @D. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 4.3. <s> Sumner's universal tournament conjecture states that any tournament on $2n-2$ vertices contains any directed tree on $n$ vertices. In this paper we prove that this conjecture holds for all sufficiently large $n$. The proof makes extensive use of results and ideas from a recent paper by the same authors, in which an approximate version of the conjecture was proved. <s> BIB003
Sumner's universal tournament conjecture. Sumner's universal tournament conjecture states that every tournament on 2n − 2 vertices contains every tree on n vertices. In BIB002 an approximate version of this conjecture was proved and subsequently in BIB003 , the conjecture was proved for all large trees (see e.g. BIB002 for a discussion of numerous previous results). The proof in BIB003 builds on several structural results proved in BIB002 . Theorem 22 (Kühn, Mycroft and Osthus BIB002 BIB003 ). There is an integer n 0 such that for all n ≥ n 0 every tournament G on 2n − 2 vertices contains any directed tree T on n vertices. While this result is not directly related to the main topic of the survey (i.e. Hamilton cycles), there are several connections. Firstly, just as with many of the new results in the other sections, the concept of a robust expander is crucial in the proof of Theorem 22. Secondly, the proof of Theorem 22 also makes direct use of the fact that a robust expander contains a Hamilton cycle (Theorem 30). Suitable parts of the tree T are embedded by considering a random walk on (the blow-up of) such a Hamilton cycle. In BIB002 , we also proved that if T has bounded maximum degree, then it suffices if the tournament G has (1 + α)n vertices. This is best possible in the sense that the 'error term' αn cannot be completely omitted in general. But it seems possible that it can be reduced to a constant which depends only on the maximum degree of T . If T is an orientation of a path, then the error term can be omitted completely: Havet and Thomassé BIB001 proved that every tournament on at least 8 vertices contains every possible orientation of a Hamilton path (for arbitrary orientations of Hamilton cycles, see Section 5.2).
A survey on Hamilton cycles in directed graphs <s> Generalizations <s> We show that for each \ell\geq 4 every sufficiently large oriented graph G with \delta^+(G), \delta^-(G) \geq \lfloor |G|/3 \rfloor +1 contains an \ell-cycle. This is best possible for all those \ell\geq 4 which are not divisible by 3. Surprisingly, for some other values of \ell, an \ell-cycle is forced by a much weaker minimum degree condition. We propose and discuss a conjecture regarding the precise minimum degree which forces an \ell-cycle (with \ell \geq 4 divisible by 3) in an oriented graph. We also give an application of our results to pancyclicity and consider \ell-cycles in general digraphs. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Generalizations <s> Abstract We prove that every digraph on n vertices with minimum out-degree 0.3465 n contains an oriented triangle. This improves the bound of 0.3532 n of Hamburger, Haxell and Kostochka. The main tool for our proof is the theory of flag algebras developed recently by Razborov. <s> BIB002
In this section, we discuss several natural ways of strengthening the notion of a Hamilton cycle. 5.1. Pancyclicity. Recall that a graph (or digraph) is pancyclic if it contains a cycle of every possible length. Dirac's theorem implies that a graph on n ≥ 3 vertices is pancyclic if it has minimum degree greater than n/2. (To see this, remove a vertex x and apply Dirac's theorem to the remaining subgraph to obtain a cycle of length n−1. Then consider the neighbourhood of x on this cycle to obtain cycles of all possible lengths through x.) Similarly, one can use Ghouila-Houri's theorem to deduce that every digraph on n vertices with minimum semidegree greater than n/2 is pancyclic. In both cases, the complete bipartite (di-)graph whose vertex class sizes are as equal as possible shows that the bound is best possible. More generally, the same trick also works for Meyniel's theorem: let G be a strongly connected digraph on n ≥ 2 vertices. If d(x) + d(y) ≥ 2n + 1 for all pairs of non-adjacent vertices x ≠ y in G, then G is pancyclic. (Indeed, the conditions imply that either G contains a strongly connected tournament or contains a vertex x with d(x) > n, in which case we can proceed as above.) If n is even, the bound 2n + 1 is best possible. If n is odd, it follows from a result of Thomassen [60] that one can improve it to 2n. For oriented graphs the minimum semidegree threshold which guarantees pancyclicity turns out to be (3n − 4)/8, i.e. the same threshold as for Hamiltonicity (see BIB001 ). The above trick of removing a vertex does not work here. Instead, to obtain 'long' cycles one can modify the proof of Theorem 12. A triangle is guaranteed by results on the Caccetta-Häggkvist conjecture, e.g. a very recent result of Hladký, Král and Norine BIB002 states that every oriented graph on n vertices with minimum semidegree at least 0.347n contains a 3-cycle. Short cycles of length ℓ ≥ 4 can be guaranteed by a result in BIB001 which states that for all n ≥ 10^{10}ℓ every oriented graph G on n vertices with δ 0 (G) ≥ ⌊n/3⌋ + 1 contains an ℓ-cycle. This is best possible for all those ℓ ≥ 4 which are not divisible by 3. Surprisingly, for some other values of ℓ, an ℓ-cycle is forced by a much weaker minimum degree condition. In particular, the following conjecture was made in BIB001 . Conjecture 23 (Kelly, Kühn and Osthus BIB001 ). Let ℓ ≥ 4 be a positive integer and let k be the smallest integer that is greater than 2 and does not divide ℓ. Then there exists an integer n 0 = n 0 (ℓ) such that every oriented graph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ ⌊n/k⌋ + 1 contains an ℓ-cycle. The extremal examples for this conjecture are always 'blow-ups' of cycles of length k. Possibly one can even weaken the condition by requiring only the outdegree of G to be large. It is easy to see that the only values of k that can appear in Conjecture 23 are of the form k = p^s with k ≥ 3, where p is a prime and s a positive integer.
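The 'neighbourhood of x on the cycle' step in the argument at the start of this subsection can be made explicit by a standard pigeonhole argument (a folklore sketch, included here only for completeness). Let G be a graph on n vertices with minimum degree greater than n/2, let C = v_1 v_2 · · · v_{n−1} be a Hamilton cycle of G − x and let N := {i : v_i ∈ N(x)}; since all neighbours of x lie on C we have |N| = d(x) > n/2. Given 3 ≤ ℓ ≤ n, consider N together with its shift N − (ℓ − 2) modulo n − 1. Then

\[
|N| + |N - (\ell - 2)| = 2d(x) > n > n-1,
\]

so the two sets intersect, i.e. there is an index i such that both v_i and v_{i+ℓ−2} are neighbours of x (indices modulo n − 1). Then x v_i v_{i+1} · · · v_{i+ℓ−2} x is a cycle of length ℓ through x.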
A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> Sufficient conditions are given for the existence of an oriented path with given end vertices in a tournament. As a consequence a conjecture of Rosenfeld is established. This states that if n is large enough, then every non-strongly oriented cycle of order n is contained in every tournament of order n. It is well known and easy to see that every tournament has a directed hamilton path. Rosenfeld [8] conjectured that if n is large enough, then any oriented path of order n is contained in any tournament of order n. This has been established for alternating paths by Griinbaum [5] and Rosenfeld [8], for paths with two blocks (a block being a maximal directed subpath) by Alspach and Rosenfeld [1] and Straight [10], for paths where the ith block has length at least i + 1 by Alspach and Rosenfeld [1] and, curiously, for all paths if n is a power of 2 by Forcade [4]. Reid and Wormald [7] have shown that every oriented path of order n is contained in every tournament of order 3n/2. It is easy to show that a tournament has a strongly oriented hamilton cycle if and only if it is strongly connected. Rosenfeld in [9] conjectured that any non-strongly oriented cycle of order n is contained in any tournament of order n, provided n is large enough. This has been verified for cycles with a block of length n 1 by Griinbaum, for alternating cycles by Rosenfeld [9] and Thomassen [11], and for cycles with just two blocks by Benhocine and Wojda [2]. (It has also been shown by Heydemann, Sotteau and Thomassen [6] that every digraph of order n with (n 1) (n 2) + 3 edges contains every non-strong oriented cycle of order n.) In this paper we prove both conjectures (the first is of course a consequence of the second). "Large enough" in this case means at least 2128, or about 1039. In fact it seems that the path conjecture is true for n > 8 (indeed there are probably just three pairs (P, T) with P t T) and that the cycle conjecture is true for n > 9. We stress that we make no attempt to give a small lower bound, but aim to establish the conjecture in the shortest time possible. The main result is Theorem 14, which rests on Lemmas 9, 11 and 13. Roughly speaking, Lemma 13 proves the conjecture if the cycle has two separate and fair sized blocks (as do most cycles), Lemma 9 proves it if the cycle has a huge block (as in Griinbaum's result) and Lemma 11 takes care of cycles with many small blocks (such as alternating cycles). The proof of the conjecture is in the third section. In the first two sections we establish results of independent interest concerning the existence of oriented paths with specified end vertices. Apart from the odd detail, they are as follows: let P Received by the editors August 30, 1982 and, in revised form, January 20, 1985 and July 5, 1985. 1980 Mathematics Subject Classification. Primary 05C20; Secondary 05C45. (?)1986 American Mathematical Society 0002-9947/86 $1.00 + $.25 per page <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We show that a directed graph of order n will contain n-cycles of every orientation, provided each vertex has indegree and outdegree at least (1/2 + n-1/6)n and n is sufficiently large. © 1995 John Wiley & Sons, Inc. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We prove that with three exceptions, every tournament of order n contains each oriented path of order n. 
The exceptions are the antidirected paths in the 3-cycle, in the regular tournament on 5 vertices, and in the Paley tournament on 7 vertices. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We prove that every tournament of order n?68 contains every oriented Hamiltonian cycle except possibly the directed one when the tournament is reducible. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We use a randomised embedding method to prove that for all \alpha>0 any sufficiently large oriented graph G with minimum in-degree and out-degree \delta^+(G),\delta^-(G)\geq (3/8+\alpha)|G| contains every possible orientation of a Hamilton cycle. This confirms a conjecture of H\"aggkvist and Thomason. <s> BIB005
As mentioned earlier, the most natural notion of a cycle in a digraph is to have all edges directed consistently. But it also makes sense to ask for Hamilton cycles where the edges are oriented in some prescribed way, e.g. to ask for an 'antidirected' Hamilton cycle where consecutive edges have opposite directions. Surprisingly, it turns out that both for digraphs and oriented graphs the minimum degree threshold which guarantees a 'consistent' Hamilton cycle is approximately the same as that which guarantees an arbitrary orientation of a Hamilton cycle. Theorem 24 (Häggkvist and Thomason BIB002 ). There exists an n 0 so that every digraph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ n/2 + n^{5/6} contains every orientation of a Hamilton cycle. In , they conjectured an analogue of this for oriented graphs, which was recently proved by Kelly. Theorem 25 (Kelly BIB005 ). For every α > 0 there exists an integer n 0 = n 0 (α) such that every oriented graph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ (3/8 + α)n contains every orientation of a Hamilton cycle. The proof of this result uses Theorem 12 as well as the notion of expanding digraphs. Interestingly, Kelly observed that the thresholds for various orientations do not coincide exactly: for instance, if we modify the example in Figure 3 so that all classes have the same odd size, then the resulting oriented graph has minimum semidegree (3n − 4)/8 but no antidirected Hamilton cycle. Thomason BIB001 showed that for large tournaments strong connectivity ensures every possible orientation of a Hamilton cycle. More precisely, he showed that for n ≥ 2^{128}, every tournament on n vertices contains all possible orientations of a Hamilton cycle, except possibly the 'consistently oriented' one. (Note that this also implies that every large tournament contains every orientation of a Hamilton path, i.e. a weaker version of the result in BIB003 mentioned earlier.) The bound on n was later reduced to 68 by Havet BIB004 . Thomason conjectured that the correct bound is n ≥ 9.
A survey on Hamilton cycles in directed graphs <s> k-ordered <s> A Hamiltonian graph $G$ of order $n$ is $k$-ordered, $2\leq k \leq n$, if for every sequence $v_1, v_2, \ldots ,v_k$ of $k$ distinct vertices of $G$, there exists a Hamiltonian cycle that encounters $v_1, v_2, \ldots , v_k$ in this order. In this paper, answering a question of Ng and Schultz, we give a sharp bound for the minimum degree guaranteeing that a graph is a $k$-ordered Hamiltonian graph under some mild restrictions. More precisely, we show that there are $\varepsilon, n_0> 0$ such that if $G$ is a graph of order $n\geq n_0$ with minimum degree at least $\lceil \frac{n}{2} \rceil + \lfloor \frac{k}{2} \rfloor - 1$ and $2\leq k \leq \eps n$, then $G$ is a $k$-ordered Hamiltonian graph. It is also shown that this bound is sharp for every $2\leq k \leq \lfloor \frac{n}{2} \rfloor$. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> k-ordered <s> For a positive integer k, a graph G is k-ordered hamiltonian if for every ordered sequence of k vertices there is a hamiltonian cycle that encounters the vertices of the sequence in the given order. It is shown that if G is a graph of order n with 3 ≤ k ≤ n-2, and deg(u) + deg(v) ≥ n + (3k - 9)-2 for every pair u, v of nonadjacent vertices of G, then G is k-ordered hamiltonian. Minimum degree conditions are also given for k-ordered hamiltonicity. © 2003 Wiley Periodicals, Inc. J Graph Theory 42: 199–210, 2003 <s> BIB002
Hamilton cycles. Suppose that we require our (Hamilton) cycle to visit several vertices in a specific order. More formally, we say that a graph G is k-ordered if for every sequence s 1 , . . . , s k of distinct vertices of G there is a cycle which encounters s 1 , . . . , s k in this order. G is k-ordered Hamiltonian if it contains a Hamilton cycle with this property. Kierstead, Sárközy and Selkow BIB001 determined the minimum degree which forces an (undirected) graph to be k-ordered Hamiltonian. Theorem 26 (Kierstead, Sárközy and Selkow BIB001 ). For all k ≥ 2, every graph on n ≥ 11k − 3 vertices of minimum degree at least ⌈n/2⌉ + ⌊k/2⌋ − 1 is k-ordered Hamiltonian. The extremal example consists of two cliques intersecting in k − 1 vertices if k is even and two cliques intersecting in k − 2 vertices if k is odd. The case when n is not too large compared to k is still open. The corresponding Ore-type problem was solved in BIB002 . Here the Ore-type result does not imply the Dirac-type result above. Many variations and stronger notions have been investigated (see e.g. again). Directed graphs form a particularly natural setting for this kind of question. The following result gives a directed analogue of Theorem 26. Theorem 27 (Kühn, Osthus and Young ). For every k ≥ 3 there is an integer n 0 = n 0 (k) such that every digraph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ ⌈(n + k)/2⌉ − 1 is k-ordered Hamiltonian. Note that if n is even and k is odd the bound on the minimum semidegree is slightly larger than in the undirected case. However, it is best possible in all cases. In fact, if the minimum semidegree is smaller, it turns out that G need not even be k-ordered. Again, the family of extremal examples turns out to be much richer than in the undirected case. Note that every Hamiltonian digraph is 2-ordered Hamiltonian, so the case when k ≤ 2 in Theorem 27 is covered by Ghouila-Houri's theorem. It would be interesting to obtain an Ore-type or an oriented version of Theorem 27.
A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> Our main aim is to show that for every e > 0 and k ∈ N there is an n(e, k) such that if T is a tournament of order n ≥n(e, k) and in T every vertex has indegree at least (14+e)n and at most (34−e)n then T contains the kth power of a Hamilton cycle. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> In this paper, the following theorem and some related problems are investigated.THEOREM. Let T be a 2-connected n-tournament with n ≥ 6. Then T contains two vertex-disjoint cycles of lengths k and n − k for any integer k with n − 3 ≥ k ≥ 3, unless T is isomorphic to the 7-tournament which contains no transitive 4-tournament. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> Let k be a positive integer. A strong digraph G is termed k-connected if the removal of any set of fewer than k vertices results in a strongly connected digraph. The purpose of this paper is to show that every k-connected tournament with at least 8k vertices contains k vertex-disjoint directed cycles spanning the vertex set. This result answers a question posed by Bollobas. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> Packing and decomposition of combinatorial objects such as graphs, digraphs, and hypergraphs by smaller objects are central problems in combinatorics and combinatorial optimization. Their study combines probabilistic, combinatorial, and algebraic methods. In addition to being among the most fascinating purely combinatorial problems, they are often motivated by algorithmic applications. There is a considerable number of intriguing fundamental problems and results in this area, and the goal of this paper is to survey the state-of-the-art. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> An oriented graph is a directed graph which can be obtained from a simple undirected graph by orienting its edges. In this paper we show that any oriented graph G on n vertices with minimum indegree and outdegree at least (1/2-o(1))n contains a packing of cyclic triangles covering all but at most 3 vertices. This almost answers a question of Cuckler and Yuster and is best possible, since for n = 3 mod 18 there is a tournament with no perfect triangle packing and with all indegrees and outdegrees (n-1)/2 or (n-1)/2 \pm 1. Under the same hypotheses, we also show that one can embed any prescribed almost 1-factor, i.e. for any sequence n_1,...,n_t with n_1+...+n_t < n-O(1) we can find a vertex-disjoint collection of directed cycles with lengths n_1,...,n_t. In addition, under quite general conditions on the n_i we can remove the O(1) additive error and find a prescribed 1-factor. <s> BIB005
Another natural way of generalizing Dirac's theorem is to ask for a certain set of vertex-disjoint cycles in G which together cover all the vertices of G (note this also generalizes the notion of pancyclicity). For large undirected graphs, Abbasi determined the minimum degree which guarantees k vertex-disjoint cycles in a graph G whose (given) lengths are n 1 , . . . , n k , where the n i sum up to n and where the order n of G is sufficiently large. As in the case of Hamilton cycles, the corresponding questions for directed and oriented graphs appear much harder than in the undirected case and again much less is known. Keevash and Sudakov BIB005 recently obtained the following result. Theorem 28 (Keevash and Sudakov BIB005 ). There exist positive constants c, C and an integer n 0 so that whenever G is an oriented graph on n ≥ n 0 vertices with minimum semidegree at least (1/2 − c)n and whenever n 1 , . . . , n t are so that n 1 + · · · + n t ≤ n − C, then G contains vertex-disjoint cycles of length n 1 , . . . , n t . In general, one cannot take C = 0. In the case of triangles (i.e. when all the n i = 3), they show that one can choose C = 3. This comes very close to proving a recent conjecture formulated independently by Cuckler and Yuster BIB004 , which states that every regular tournament on n = 6k + 3 vertices contains vertex-disjoint triangles covering all the vertices of the tournament. Similar questions were also raised earlier by Song BIB002 . For instance, given t, he asked for the smallest integer f (t) so that all but a finite number of strongly f (t)-connected tournaments T satisfy the following: Let n be the number of vertices of T and let n 1 + · · · + n t = n. Then T contains vertex-disjoint cycles of length n 1 , . . . , n t . Chen, Gould and Li BIB003 proved the weaker result that every sufficiently large t-connected tournament G contains t vertex-disjoint cycles which together cover all the vertices of G. This proved a conjecture of Bollobás. 5.5. Powers of Hamilton cycles. Komlós, Sárközy and Szemerédi showed that every sufficiently large graph G on n vertices with minimum degree at least kn/(k + 1) contains the kth power of a Hamilton cycle. Extremal examples are complete (k + 1)-partite graphs with classes of almost equal size. It appears likely that the situation for digraphs is similar. However, just as for ordinary Hamilton cycles, it seems that for oriented graphs the picture is rather different. (Both for digraphs and oriented graphs, the most natural definition of the kth power of a cycle is a cyclically ordered set of vertices so that every vertex sends an edge to the next k vertices in the ordering.) Conjecture 29 (Treglown [64] ). For every ε > 0 there is an integer n 0 = n 0 (ε) so that every oriented graph G on n ≥ n 0 vertices with minimum semidegree at least (5/12 + ε)n contains the square of a Hamilton cycle. A construction which shows that the constant 5/12 cannot be improved is given in Figure 4 (in the caption of Figure 4, an edge drawn between one of the sets and E indicates an orientation of the corresponding complete bipartite graph so that within each set, the in- and outdegrees of the vertices differ by at most one). We claim that the square of any Hamilton cycle would have to visit a vertex of B in between two visits of E. Since |B| < |E|, this shows that the graph does not contain the square of a Hamilton cycle. To prove the claim, suppose that F is a squared Hamilton cycle and consider a vertex e of F which lies in E. Then the predecessor of e lies in C or B, so without loss of generality we may assume that it is a vertex c 1 ∈ C.
Again, the predecessor of c 1 lies in C or B (since it must lie in the common inneighbourhood of C and E), so without loss of generality we may assume that it is a vertex c 2 ∈ C. The predecessor of c 2 can now lie in A, B or C. If it lies in B we are done again, if it is a vertex c 3 ∈ C, we consider its predecessor, which can again only lie in A, B or C. Since F must visit all vertices, it follows that we eventually arrive at a predecessor a ∈ A whose successor on F is some vertex c ∈ C. The predecessor of a on F must lie in the common inneighbourhood of a and c, so it must lie in B, as required. For the case of tournaments, the problem was solved asymptotically by Bollobás and Häggkvist BIB001 . Given a tournament T of large order n with minimum semidegree at least n/4 + εn, they proved that (for fixed k) T contains the kth power of a Hamilton cycle. So asymptotically, the semidegree threshold for an ordinary Hamilton cycle in a tournament is the same as that for the kth power of a Hamilton cycle.
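For ease of reference, the Bollobás-Häggkvist result just described can be written out formally; the following display is simply a restatement of BIB001 in the notation used above, added here for convenience.

% Bollobas-Haggkvist (BIB001), restated.
For every $\varepsilon > 0$ and every $k \in \mathbb{N}$ there is an $n_0 = n_0(\varepsilon, k)$
such that every tournament $T$ on $n \ge n_0$ vertices with minimum semidegree
$\delta^{0}(T) \ge (1/4 + \varepsilon)\, n$ contains the $k$th power of a Hamilton cycle.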
A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We prove that any strong tournament with minimum outdegree at least 3k+3 has at least 4 K k! distinct Hamilton circuits and that every regular tournament of order n can be covered by a collection of 12n Hamilton circuits. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We provide an NC algorithm for finding Hamilton cycles in directed graphs with a certain robust expansion property. This property captures several known criteria for the existence of Hamilton cycles in terms of the degree sequence and thus we provide algorithmic proofs of (i) an 'oriented' analogue of Dirac's theorem and (ii) an approximate version (for directed graphs) of Chvátal's theorem. Moreover, our main result is used as a tool <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that for each \alpha>0 every sufficiently large oriented graph G with \delta^+(G),\delta^-(G)\ge 3|G|/8+ \alpha |G| contains a Hamilton cycle. This gives an approximate solution to a problem of Thomassen. In fact, we prove the stronger result that G is still Hamiltonian if \delta(G)+\delta^+(G)+\delta^-(G)\geq 3|G|/2 + \alpha |G|. Up to the term \alpha |G| this confirms a conjecture of H\"aggkvist. We also prove an Ore-type theorem for oriented graphs. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> Let G be a simple graph on n vertices. A conjecture of Bollobas and Eldridge [5] asserts that if δ(G) ≥ (kn−1)/(k+1) then G contains any n vertex graph H with ∆(H) = k. We prove a strengthened version of this conjecture for bipartite, bounded degree H, for sufficiently large n. This is the first result on this conjecture for expander graphs of arbitrary (but bounded) degree. An important tool for the proof is a new version of the Blow-up Lemma. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that for each \eta>0 every digraph G of sufficiently large order n is Hamiltonian if its out- and indegree sequences d^+_1\le ... \le d^+_n and d^- _1 \le ... \le d^-_n satisfy (i) d^+_i \geq i+ \eta n or d^-_{n-i- \eta n} \geq n-i and (ii) d^-_i \geq i+ \eta n or d^+_{n-i- \eta n} \geq n-i for all i<n/2. This gives an approximate solution to a problem of Nash-Williams concerning a digraph analogue of Chvátal's theorem. In fact, we prove the stronger result that such digraphs G are pancyclic. <s> BIB005 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that every sufficiently large oriented graph with minimum in- and outdegree at least (3n-4)/8 contains a Hamilton cycle. This is best possible and solves a problem of Thomassen from 1979. <s> BIB006 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> In this paper we prove a sufficient condition for the existence of a Hamilton cycle, which is applicable to a wide variety of graphs, including relatively sparse graphs. In contrast to previous criteria, ours is based on only two properties: one requiring expansion of ``small'' sets, the other ensuring the existence of an edge between any two disjoint ``large'' sets. We also discuss applications in positional games, random graphs and extremal graph theory.
<s> BIB007 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that for each $\beta > 0$, every digraph $G$ of sufficiently large order $n$ whose outdegree and indegree sequences $d_1^+ \leq \cdots \leq d_n^+$ and $d_1^- \leq \cdots \leq d_n^-$ satisfy $d_i^+, d_i^- \geq \min{\{i + \beta n, n/2\}}$ is Hamiltonian. In fact, we can weaken these assumptions to (i) $d_i^+ \geq \min{\{i + \beta n, n/2\}}$ or $d^-_{n - i - \beta n} \geq n-i$, (ii) $d_i^- \geq \min{\{i + \beta n, n/2\}}$ or $d^+_{n - i - \beta n} \geq n-i$, and still deduce that $G$ is Hamiltonian. This provides an approximate version of a conjecture of Nash-Williams from 1975 and improves a previous result of Kuhn, Osthus, and Treglown. <s> BIB008
Roughly speaking, a graph is an expander if for every set S of vertices the neighbourhood N (S) of S is significantly larger than S itself. A number of papers have recently demonstrated that there is a remarkably close connection between Hamiltonicity and expansion (see e.g. BIB007 ). The following notion of robustly expanding (dense) digraphs was introduced in BIB005 . Let 0 < ν ≤ τ < 1. Given any digraph G on n vertices and S ⊆ V (G), the ν-robust outneighbourhood RN + ν,G (S) of S is the set of all those vertices x of G which have at least νn inneighbours in S. G is called a robust (ν, τ )-outexpander if |RN + ν,G (S)| ≥ |S| + νn for all S ⊆ V (G) with τ n < |S| < (1 − τ )n. As the name suggests, this notion has the advantage that it is preserved even if we delete some vertices and edges from G. We will also use the more traditional (and weaker) notion of a (ν, τ )-outexpander, which means |N + (S)| ≥ |S| + νn for all S ⊆ V (G) with τ n < |S| < (1 − τ )n. Theorem 30 (Kühn, Osthus and Treglown BIB005 ). Let n 0 be a positive integer and ν, τ, η be positive constants such that 1/n 0 ≪ ν ≤ τ ≪ η < 1. Let G be a digraph on n ≥ n 0 vertices with δ 0 (G) ≥ ηn which is a robust (ν, τ )-outexpander. Then G contains a Hamilton cycle. Theorem 30 is used in BIB005 to give a weaker version of Theorem 7 (i.e. without the degrees capped at n/2). In the same paper it is also applied to prove a conjecture of Thomassen regarding a weak version of Conjecture 16 (Kelly's conjecture). One can also use it to prove e.g. Theorem 15 and thus an approximate version of Theorem 12. (Indeed, as proved in BIB003 , the degree conditions of Theorem 15 imply expansion; the proof for robust expansion is similar.) As mentioned earlier, it is also used as a tool in the proof of Theorem 22. Finally, we will also use it in the next subsection to prove Theorem 18. In BIB005 , Theorem 30 was deduced from a result in BIB006 . The proof of the result in BIB006 (and a similar approach in BIB003 ) in turn relied on Szemerédi's regularity lemma and a (rather technical) version of the Blow-up lemma due to Csaba BIB004 . A (parallel) algorithmic version of Theorem 30 was also proved in BIB002 . Below, we give a brief sketch of a proof of Theorem 30 which avoids any use of the Blow-up lemma and is based on an approach in BIB008 . The density of a bipartite graph G with vertex classes A and B is defined to be d(A, B) = e(A, B)/(|A||B|), where e(A, B) denotes the number of edges between A and B. Given ε > 0, we say that G is ε-regular if for all subsets X ⊆ A and Y ⊆ B with |X| ≥ ε|A| and |Y | ≥ ε|B| we have that |d(X, Y ) − d(A, B)| < ε. We also say that G is (ε, d)-super-regular if it is ε-regular and furthermore every vertex a ∈ A has degree at least d|B| and similarly for every b ∈ B. These definitions generalize naturally to non-bipartite (di-)graphs. We also need the following result (Lemma 31): every sufficiently large (ε, d)-super-regular digraph contains a Hamilton cycle. Lemma 31 is a special case, e.g., of a result of Frieze and Krivelevich, who proved that an (ε, d)-super-regular digraph on n vertices has almost dn edge-disjoint Hamilton cycles if n is large. Here we also give a sketch of a direct proof of Lemma 31. We first prove that G contains a 1-factor. Consider the auxiliary bipartite graph whose vertex classes A and B are copies of V (G) with an edge between a ∈ A and b ∈ B if there is an edge from a to b in G. One can show that this bipartite graph has a perfect matching (by Hall's marriage theorem), which in turn corresponds to a 1-factor in G.
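The Hall-condition check behind this last step is short; the following sketch is added here for completeness (it is not part of the original proof) and assumes only the hierarchy 1/n ≪ ε ≪ d used throughout, applied to the auxiliary bipartite graph H with classes A and B.

% Hall's condition for H (A, B are copies of V(G); ab is an edge of H iff G has an edge from a to b).
% Case 1: 0 < |S| <= eps*n. Super-regularity gives every vertex of A at least dn outneighbours, so
\[
|N_H(S)| \;\ge\; d n \;\ge\; \varepsilon n \;\ge\; |S| .
\]
% Case 2: eps*n < |S| <= (1-eps)*n. If Y := B \setminus N_H(S) had |Y| >= eps*n, then d(S, Y) = 0,
% contradicting eps-regularity, since the density of H is at least d > eps. Hence
\[
|N_H(S)| \;\ge\; (1-\varepsilon) n \;\ge\; |S| .
\]
% Case 3: |S| > (1-eps)*n. Every b in B has at least dn > eps*n > n - |S| inneighbours,
% so it has one in S, i.e. N_H(S) = B.
% In all three cases |N_H(S)| >= |S|, so Hall's theorem yields a perfect matching of H,
% i.e. a 1-factor of G.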
It is now not hard to prove the lemma using the 'rotation-extension' technique: Choose a 1-factor of G. Now remove an edge of a cycle in this 1-factor and let P be the resulting path. If the final vertex of P has any outneighbours on another cycle C of the 1-factor, we can extend P into a longer path which includes the vertices of C (and similarly for the initial vertex of P ). We repeat this as long as possible (and one can always ensure that the extension step can be carried out at least once). So we may assume that all outneighbours of the final vertex of P lie on P and similarly for the initial vertex of P . Together with the ε-regularity this can be used to find a cycle with the same vertex set as P . Eventually, we arrive at a Hamilton cycle. Sketch proof of Theorem 30. Choose ε, d to satisfy 1/n 0 ≪ ε ≪ d ≪ ν. The first step is to apply a directed version of Szemerédi's regularity lemma to G. This gives us a partition of the vertices of G into clusters V 1 , . . . , V k and an exceptional set V 0 so that |V 0 | ≤ εn and all the clusters have size m. Now define a 'reduced' digraph R whose vertices are the clusters V 1 , . . . , V k and with an edge from V i to V j if the bipartite graph spanned by the edges from V i to V j is ε-regular and has density at least d. Then one can show (see Lemma 14 in [46] ) that R is still a (ν/2, 2τ )-outexpander (this is the point where we need the robustness of the expansion in G) with minimum semidegree at least ηk/2. This in turn can be used to show that R has a 1-factor F (using the same auxiliary bipartite graph as in the proof of Lemma 31). By removing a small number of vertices from the clusters, we can also assume that the bipartite subgraphs spanned by successive clusters on each cycle of F are super-regular, i.e. have high minimum degree. For simplicity, assume that the cluster size is still m. Moreover, since G is an expander, we can find a short path in G between clusters of different cycles of F and also between any pair of exceptional vertices. However, we need to choose such paths without affecting any of the useful structures that we have found so far. For this, we will consider paths which 'wind around' cycles in F before moving to another cycle. More precisely, a shifted walk from a cluster A to a cluster B is a walk W (A, B) of the form W (A, B) = X 1 C 1 X − 1 X 2 C 2 X − 2 . . . X t C t X − t X t+1 , where X 1 = A, X t+1 = B, C i is the cycle of F containing X i , and for each 1 ≤ i ≤ t, X − i is the predecessor of X i on C i and the edge X − i X i+1 belongs to R. (So between X i and X − i the walk winds around the whole cycle C i .) We say that W as above traverses t cycles (even if some C i appears several times in W ). We also say that the clusters X 2 , . . . , X t+1 are the entry clusters (as this is where W 'enters' a cycle C i ) and the clusters X − 1 , . . . , X − t are the exit clusters (as this is where W 'exits' a cycle C i ). Note that (i) whenever W traverses a cycle of F , it visits every cluster on this cycle exactly once, and (ii) for any clusters A and B there is a shifted walk from A to B which does not traverse too many cycles. Indeed, the expansion property implies that the number of clusters one can reach by traversing t cycles is at least tνk/2 as long as this is significantly less than the total number k of clusters. Now we will 'join up' the exceptional vertices using shifted walks. For this, write V 0 = {a 1 , . . . , a ℓ }. For each exceptional vertex a i choose a cluster T i so that a i has many outneighbours in T i . Similarly choose a cluster U i so that a i has many inneighbours in U i and so that (iii) no cluster appears too often as a T i or a U i . Given a cluster X, let X − be the predecessor of X on the cycle of F which contains X and let X + be its successor.
Form a 'walk' W on V 0 ∪ V (R) which starts at a 1 , then moves to T 1 , then follows a shifted walk from T 1 to U + 2 , then winds around the entire cycle of F containing U + 2 until it reaches U 2 . Then W moves to a 2 , and continues in the same way (via T 2 , a shifted walk to U + 3 , and so on) until it has visited all the exceptional vertices. Proceeding similarly, we can ensure that W has the following properties: (a) W is a closed walk which visits all of V 0 and all of V (R). (b) For any cycle of F , its clusters are visited the same number of times by W . (c) Every cluster appears at most m/10 times as an entry or exit cluster. (b) follows from (i) and (c) follows from (ii) and (iii). The next step towards a Hamilton cycle would be to find a cycle C in G which corresponds to W (i.e. each occurrence of a cluster in W is replaced by a distinct vertex of G lying in this cluster). Unfortunately, the fact that V 0 may be much larger than the cluster size m implies that there may be clusters which are visited more than m times by W , which makes it impossible to find such a C. So we will apply a 'short-cutting' technique to W which avoids 'winding around' the cycles of F too often. For this, we now fix edges in G corresponding to all those edges of W that do not lie within a cycle of F . These edges of W are precisely the edges in W at the exceptional vertices as well as all the edges of the form AB where A is used as an exit cluster by W and B is used as an entry cluster by W . For each edge a i T i at an exceptional vertex we choose an edge a i x, where x is an outneighbour of a i in T i . We similarly choose an edge ya i from U i to a i for each U i a i . We do this in such a way that all these edges are disjoint outside V 0 . For each occurrence of AB in W , where A is used as an exit cluster by W and B is used as an entry cluster, we choose an edge ab from A to B in G so that all these edges are disjoint from each other and from the edges chosen for the exceptional vertices (we use (c) here). Given a cluster A, let A entry be the set of all those vertices in A which are the final vertex of an edge of G fixed so far and let A exit be the set of all those vertices in A which are the initial vertex of an edge of G fixed so far. So A entry ∩ A exit = ∅. Let G A be the bipartite graph whose vertex classes are A \ A exit and A + \ A + entry and whose edges are all the edges from A \ A exit to A + \ A + entry in G. Since W consists of shifted walks, it is easy to see that the vertex classes of G A have equal size. Moreover, it is possible to carry out the previous steps in such a way that G A is super-regular (here we use (c) again). This in turn means that G A has a perfect matching M A . These perfect matchings (for all clusters A) together with all the edges of G fixed so far form a 1-factor C of G. It remains to transform C into a Hamilton cycle. We claim that for any cluster A, we can find a perfect matching M ′ A in G A so that if we replace M A in C with M ′ A , then all vertices of G A will lie on a common cycle in the new 1-factor C. To prove this claim we proceed as follows. For every a ∈ A + \ A + entry , we move along the cycle C a of C containing a (starting at a) and let f (a) be the first vertex on C a in A \ A exit . Define an auxiliary digraph J on A + \ A + entry in which ab is an edge whenever f (a)b is an edge of G A . Since G A is super-regular, it follows that J is also super-regular. By Lemma 31, J has a Hamilton cycle, which clearly corresponds to a perfect matching M ′ A in G A with the desired property.
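The correspondence invoked in the last sentence can be spelled out as follows; this is a reconstruction added for readability, consistent with the definitions of f, G A and J above, and not a quotation of the original argument.

% From a Hamilton cycle of J to the required matching M'_A.
\[
\text{If } a_1 a_2 \cdots a_s a_1 \text{ is a Hamilton cycle of } J
\ \ (\text{so } \{a_1, \dots, a_s\} = A^{+} \setminus A^{+}_{\mathrm{entry}}),
\quad \text{set} \quad
M'_A := \{\, f(a_i)\, a_{i+1} : 1 \le i \le s \,\},
\]
% indices taken modulo s. Each f(a_i) a_{i+1} is an edge of G_A by the definition of J.
% Moreover f is a bijection onto A \ A_exit: every vertex of A \ A_exit has its successor
% on C in A^+ \ A^+_entry (via M_A), so walking backwards from a vertex v of A \ A_exit one
% meets a vertex of A^+ \ A^+_entry before any other vertex of A \ A_exit; hence f is
% surjective, and the two classes have equal size. So M'_A is a perfect matching of G_A.
% After replacing M_A by M'_A, starting at a_i one follows C to f(a_i) and then the new
% matching edge to a_{i+1}; iterating over i shows that all vertices of G_A lie on one
% common cycle, as claimed.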
We now repeatedly apply the above claim to every cluster. Since A entry ∩ A exit = ∅ for each cluster A, this ensures that all vertices which lie in clusters on the same cycle of F will lie on the same cycle of the new 1-factor C. Since by (a) W visits all clusters, this in turn implies that all the non-exceptional vertices will lie in the same cycle of C. Since the exceptional vertices form an independent set in C, it follows that C is actually a Hamilton cycle. Proof of Theorem 18. Choose new constants η 1 , ν, τ such that 1/n 0 ≪ η 1 ≪ ν ≤ τ ≪ ξ. Consider any regular tournament G on n ≥ n 0 vertices. Apply Theorem 17 to G in order to obtain a collection C of at least (1/2 − η 1 )n edge-disjoint Hamilton cycles. Let F be the undirected graph consisting of all those edges of G which are not covered by the Hamilton cycles in C. Note that F is k-regular for some k ≤ 2η 1 n. By Vizing's theorem the edges of F can be coloured with at most ∆(F ) + 1 ≤ 3η 1 n colours and thus F can be decomposed into at most 3η 1 n matchings. Split each of these matchings into at most 1/√η 1 edge-disjoint matchings, each containing at most √η 1 n edges. So altogether this yields a collection M of at most 3√η 1 n matchings covering all edges of F . It is enough to show that for each M ∈ M there exists a Hamilton cycle of G which contains all the edges in M . So consider any M ∈ M. As observed in BIB005 (see the proof of Corollary 16 there), any regular tournament is a robust (ν, τ )-outexpander. Let D be the digraph obtained from G by 'contracting' all the edges in M , i.e. by successively replacing each edge xy ∈ M with a vertex v xy whose inneighbourhood is the inneighbourhood of x and whose outneighbourhood is the outneighbourhood of y. Using that M consists of at most √η 1 n edges, one can check that D is still a robust outexpander (with slightly adjusted parameters) and that its minimum semidegree is still linear in |D|. So Theorem 30 implies that D contains a Hamilton cycle, which corresponds to a Hamilton cycle of G containing all edges of M , as required. Note that we cannot simply apply Theorem 12 instead of Theorem 30 at the end of the proof, because D may not be an oriented graph. However, instead of using Theorem 30, one can also use the following result of Thomassen BIB001 : for every set E of n/24 independent edges in a regular tournament on n vertices, there is a Hamilton cycle which contains all edges in E. Theorem 21 can be proved in a similar way, using Ghouila-Houri's theorem instead of Theorem 30. Proof of Theorem 21. Choose a new constant η such that 1/n 0 ≪ η ≪ ξ and apply Theorem 20 to find a collection of at least (d − ηn)/2 edge-disjoint Hamilton cycles. Let F denote the subgraph of G consisting of all edges not lying in these Hamilton cycles. Then F is k-regular for some k ≤ ηn. Choose a collection M of matchings covering all edges of F as in the proof of Theorem 18. So each matching consists of at most √η n edges. As before, for each M ∈ M it suffices to find a Hamilton cycle of G containing all edges of M . Let D ′ be the digraph obtained from G by orienting each edge in M and replacing each edge in E(G) \ M with two edges, one in each direction. Let D be the digraph obtained from D ′ by 'contracting' the edges in M as in the proof of Theorem 18. Then D has minimum semidegree at least |D|/2 and thus contains a Hamilton cycle by Ghouila-Houri's theorem (Theorem 1). This Hamilton cycle corresponds to a Hamilton cycle in G containing all edges of M , as required.
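The arithmetic behind the matching decomposition in the proof of Theorem 18 can be checked in a few lines; the display below only restates the bounds already used above, assuming n is large and η 1 is small.

% F is k-regular with k <= 2*eta_1*n, so Vizing's theorem gives at most
\[
\Delta(F) + 1 \;=\; k + 1 \;\le\; 2\eta_1 n + 1 \;\le\; 3\eta_1 n
\]
% matchings. A matching of F has at most n/2 edges, so splitting it into pieces
% with at most sqrt(eta_1)*n edges each requires at most
\[
\Big\lceil \frac{n/2}{\sqrt{\eta_1}\, n} \Big\rceil \;\le\; \frac{1}{\sqrt{\eta_1}}
\]
% pieces. Hence there are at most 3*eta_1*n * (1/sqrt(eta_1)) = 3*sqrt(eta_1)*n matchings
% in total, each of size at most sqrt(eta_1)*n, exactly as used in the proof.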
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> INTRODUCTION <s> The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique. It is often used for software development when a hardware base is being altered. For example, if a programmer is developing software for some new special purpose (e.g., aerospace) computer X which is under construction and as yet unavailable, he will likely begin by writing a simulator for that computer on some available general-purpose machine G. The simulator will provide a detailed simulation of the special-purpose environment X, including its processor, memory, and I/O devices. Except for possible timing dependencies, programs which run on the “simulated machine X” can later run on the “real machine X” (when it is finally built and checked out) with identical effect. The programs running on X can be arbitrary — including code to exercise simulated I/O devices, move data and instructions anywhere in simulated memory, or execute any instruction of the simulated machine. The simulator provides a layer of software filtering which protects the resources of the machine G from being misused by programs on X. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> INTRODUCTION <s> Current cloud computing infrastructure typically assumes a homogeneous collection of commodity hardware, with details about hardware variation intentionally hidden from users. In this paper, we present our approach for extending the traditional notions of cloud computing to provide a cloud-based access model to clusters that contain a heterogeneous architectures and accelerators. We describe our ongoing work extending the Open Stack cloud computing stack to support heterogeneous architectures and accelerators, and our experiences running Open Stack on our local heterogeneous cluster test bed. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> INTRODUCTION <s> Cloud and heterogeneous computing solutions exist today for the emerging big data problems in biology <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> INTRODUCTION <s> Data analytics are key applications running in the cloud computing environment. To improve performance and cost-effectiveness of a data analytics cluster in the cloud, the data analytics system should account for heterogeneity of the environment and workloads. In addition, it also needs to provide fairness among jobs when multiple jobs share the cluster. In this paper, we rethink resource allocation and job scheduling on a data analytics system in the cloud to embrace the heterogeneity of the underlying platforms and workloads. To that end, we suggest an architecture to allocate resources to a data analytics cluster in the cloud, and propose a metric of share in a heterogeneous cluster to realize a scheduling scheme that achieves high performance and fairness. <s> BIB004
Since the early 2000s, high-performance computing (HPC) programmers and researchers have adopted a new computing paradigm that combines two architectures: namely multi-core processors with powerful and general-purpose cores and many-core accelerators, the leading example of which is graphics processing units (GPUs), with a massive number of simple cores that accelerate algorithms with a high degree of data parallelism. Despite an increasing number of cores, multi-core processor designs still aim at reducing latency in sequential programs by using sophisticated control logic and large cache memories. Conversely, GPUs seek to boost the execution throughput of parallel applications with thousands of simple cores and a high memory bandwidth architecture. Heterogeneous systems combining multi-core processors and GPUs can meet the diverse requirements of a wide range of high-performance computing applications with both control-intensive components and highly data-parallel components. The success of heterogeneous computing systems with GPUs is evident in the latest Top500 list , where more than 19% of supercomputers adopt both CPUs and GPUs. Cloud computing platforms can leverage heterogeneous compute nodes to reduce the total cost of ownership and achieve higher performance and energy efficiency BIB002 BIB003 . A cloud with heterogeneous compute nodes would allow users to deploy computationally intensive applications without the need to acquire and maintain large-scale clusters. In addition to this benefit, heterogeneous computing can offer better performance within the same power budget compared to systems based on homogeneous processors, as computational tasks can be placed on either conventional processors or GPUs depending on the degree of parallelism. These combined benefits have been motivating cloud service providers to equip their offerings with GPUs and heterogeneous programming environments BIB004 . A number of HPC applications can benefit from execution on heterogeneous cloud environments. These include particle simulation ], molecular dynamics simulation [Glaser et al.
2015], and computational finance, as well as two-dimensional (2D) and 3D graphics acceleration workloads, which exhibit high efficiency when exploiting GPUs. System virtualization is a key enabling technology for the Cloud. The virtualization software creates an elastic virtual computing environment, which is essential for improving resource utilization and reducing cost of ownership. Virtualization systems are invariably underpinned by methods of multiplexing system resources. Most system resources, including processors and peripheral devices, can be completely virtualized nowadays, and there is ample research in this field dating from the early 1960s BIB001 . However, virtualizing GPUs is a relatively new area of study and remains a challenging endeavor. A key barrier has been that GPU driver implementations are not open for modification, for intellectual property protection reasons. Furthermore, GPU architectures are not standardized, and GPU vendors have been offering architectures with vastly different levels of support for virtualization. For these reasons, conventional virtualization techniques are not directly applicable to virtualizing GPUs.
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> GPU Architecture <s> Programming Massively Parallel Processors. A Hands-on Approach David Kirk and Wen-mei Hwu ISBN: 978-0-12-381472-2 Copyright 2010 Introduction This book is designed for graduate/undergraduate students and practitioners from any science and engineering discipline who use computational power to further their field of research. This comprehensive test/reference provides a foundation for the understanding and implementation of parallel programming skills which are needed to achieve breakthrough results by developing parallel applications that perform well on certain classes of Graphic Processor Units (GPUs). The book guides the reader to experience programming by using an extension to C language, in CUDA which is a parallel programming environment supported on NVIDIA GPUs, and emulated on less parallel CPUs. Given the fact that parallel programming on any High Performance Computer is complex and requires knowledge about the underlying hardware in order to write an efficient program, it becomes an advantage of this book over others to be specific toward a particular hardware. The book takes the readers through a series of techniques for writing and optimizing parallel programming for several real-world applications. Such experience opens the door for the reader to learn parallel programming in depth. Outline of the Book Kirk and Hwu effectively organize and link a wide spectrum of parallel programming concepts by focusing on the practical applications in contrast to most general parallel programming texts that are mostly conceptual and theoretical. The authors are both affiliated with NVIDIA; Kirk is an NVIDIA Fellow and Hwu is principle investigator for the first NVIDIA CUDA Center of Excellence at the University of Illinois at Urbana-Champaign. Their coverage in the book can be divided into four sections. The first part (Chapters 1–3) starts by defining GPUs and their modern architectures and later providing a history of Graphics Pipelines and GPU computing. It also covers data parallelism, the basics of CUDA memory/threading models, the CUDA extensions to the C language, and the basic programming/debugging tools. The second part (Chapters 4–7) enhances student programming skills by explaining the CUDA memory model and its types, strategies for reducing global memory traffic, the CUDA threading model and granularity which include thread scheduling and basic latency hiding techniques, GPU hardware performance features, techniques to hide latency in memory accesses, floating point arithmetic, modern computer system architecture, and the common data-parallel programming patterns needed to develop a high-performance parallel application. The third part (Chapters 8–11) provides a broad range of parallel execution models and parallel programming principles, in addition to a brief introduction to OpenCL. They also include a wide range of application case studies, such as advanced MRI reconstruction, molecular visualization and analysis. The last chapter (Chapter 12) discusses the great potential for future architectures of GPUs. It provides commentary on the evolution of memory architecture, Kernel Execution Control Evolution, and programming environments. Summary In general, this book is well-written and well-organized. A lot of difficult concepts related to parallel computing areas are easily explained, from which beginners or even advanced parallel programmers will benefit greatly. 
It provides a good starting point for beginning parallel programmers who can access a Tesla GPU. The book targets specific hardware and evaluates performance based on this specific hardware. As mentioned in this book, approximately 200 million CUDA-capable GPUs have been actively in use. Therefore, the chances are that a lot of beginning parallel programmers can have access to a Tesla GPU. Also, this book gives clear descriptions of Tesla GPU architecture, which lays a solid foundation for both beginning parallel programmers and experienced parallel programmers. The book can also serve as a good reference book for advanced parallel computing courses. Jie Cheng, University of Hawaii Hilo <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> GPU Architecture <s> Haswell, Intel's fourth-generation core processor architecture, delivers a range of client parts, a converged core for the client and server, and technologies used across many products. It uses an optimized version of Intel 22-nm process technology. Haswell provides enhancements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set. <s> BIB002
GPUs adopt a fundamentally different design for executing parallel applications compared to conventional multi-core processors BIB001 . GPUs are based on a throughput-oriented design and offer thousands of simple cores and a high bandwidth memory architecture. This design enables maximizing the execution throughput of applications with a high degree of data parallelism, which are expected to be decomposable into a large number of threads operating on different points in the program data space. In this design, when some threads are waiting for the completion of arithmetic operations or memory accesses with long latency, other threads can be scheduled by the hardware scheduler to hide the latency ]. This mechanism may lengthen the respective execution time of individual threads but improve total execution throughput. On the contrary, the design of conventional processors is optimized for reducing the execution time of sequential code on each core, thus adding complexity to each core at the cost of offering fewer cores in the processor package. Conventional processors typically use sophisticated control logic and large cache memories to efficiently deal with conditional branches, pipeline stalls, and poor data locality. Modern GPUs also handle complex control flows, have large SRAM-based local memories, and adopt some additional features of conventional processors but preserve the fundamental properties of offering a higher degree of thread-level parallelism and higher memory bandwidth. Figure 1 shows the architecture of a traditional heterogeneous system equipping a discrete GPU. The GPU part is based on the Fermi architecture of NVIDIA but is not limited to NVIDIA architectures, as recent GPUs adopt a similar high-level design. A GPU has several streaming multiprocessors (SMs), each of which has 32 computing cores. Each SM also has an L1 data cache and a low latency shared memory. Each core has local registers, an integer arithmetic logic unit (ALU), a floating point unit (FPU), and several special function units (SFUs) that execute transcendental instructions such as sine and cosine operations. A GPU memory management unit (MMU) provides virtual address spaces for GPU applications. A GPU memory reference by an application is resolved into a physical address by the MMU using the application's own page table. Memory accesses from each application therefore cannot refer to other applications' address spaces. The host connects the discrete GPU using the PCI Express (PCIe) interface. The CPU in the host interacts with the GPU via memory mapped input/output (MMIO). The GPU registers and device memory can be accessed by the CPU through the MMIO interface. The MMIO region is configured at boot time based on the PCI base address registers (BARs), which are memory windows that can be used by the host for communication. GPU operations issued by an application are submitted into a ring buffer associated with the application's command submission channel, which is a GPU hardware unit and visible to the CPU via MMIO. Large data can be transferred between the host memory and the GPU device memory by the direct memory access (DMA) engine. The discrete GPU architecture shown in Figure 1 can cause data transfer overhead over the PCIe interface, because the maximum bandwidth that current PCIe can offer is low (i.e., 16GB/s) compared to the internal memory bandwidth of the GPU (i.e., hundreds of GB/s). 
Furthermore, the architecture imposes a significant programming burden, because the programmer must explicitly manage data that is manipulated by both the CPU and the GPU. To address these issues, GPUs have been integrated into the CPU chip. Intel's GPU architecture BIB002 and AMD's HSA architecture integrate the two processors on the same bus with shared system memory. These architectures enable a unified virtual address space and eliminate data copying between the devices. They can also reduce the programmer's burden of managing separate data address spaces.
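As an illustration of the explicit data management that a discrete GPU requires, the following minimal CUDA sketch was written for this survey rather than taken from any cited system; the kernel name, array size, and launch configuration are arbitrary choices, and error handling is omitted.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread processes one element: the archetypal data-parallel kernel.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);                   // host (CPU) memory
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc(&dev, bytes);                                // device memory behind the PCIe bus
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // explicit transfer to the GPU

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);          // enough threads to cover n elements
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // explicit transfer back

    printf("host[0] = %f\n", host[0]);
    cudaFree(dev);
    free(host);
    return 0;
}

On an integrated or HSA-style architecture with a unified virtual address space, the two explicit cudaMemcpy calls and the separate device allocation are precisely the steps that can be removed.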
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> GPU Applications <s> This paper presents and characterizes Rodinia, a benchmark suite for heterogeneous computing. To help architects study emerging platforms such as GPUs (Graphics Processing Units), Rodinia includes applications and kernels which target multi-core CPU and GPU platforms. The choice of applications is inspired by Berkeley's dwarf taxonomy. Our characterization shows that the Rodinia benchmarks cover a wide range of parallel communication patterns, synchronization techniques and power consumption, and has led to some important architectural insight, such as the growing importance of memory-bandwidth limitations and the consequent importance of data layout. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> GPU Applications <s> Scalable heterogeneous computing systems, which are composed of a mix of compute devices, such as commodity multicore processors, graphics processors, reconfigurable processors, and others, are gaining attention as one approach to continuing performance improvement while managing the new challenge of energy efficiency. As these systems become more common, it is important to be able to compare and contrast architectural designs and programming systems in a fair and open forum. To this end, we have designed the Scalable HeterOgeneous Computing benchmark suite (SHOC). SHOC's initial focus is on systems containing graphics processing units (GPUs) and multi-core processors, and on the new OpenCL programming standard. SHOC is a spectrum of programs that test the performance and stability of these scalable heterogeneous computing systems. At the lowest level, SHOC uses microbenchmarks to assess architectural features of the system. At higher levels, SHOC uses application kernels to determine system-wide performance including many system features such as intranode and internode communication among devices. SHOC includes benchmark implementations in both OpenCL and CUDA in order to provide a comparison of these programming models. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> GPU Applications <s> The Parboil benchmarks are a set of throughput computing applications useful for studying the performance of throughput computing architecture and compilers. The name comes from the culinary term for a partial cooking process, which represents our belief that useful throughput computing benchmarks must be “cooked”, or preselected to implement a scalable algorithm with fine-grained parallel tasks. But useful benchmarks for this field cannot be “fully cooked”, because the architectures and programming models and supporting tools are evolving rapidly enough that static benchmark codes will lose relevance very quickly. We have collected benchmarks from throughput computing application researchers in many different scientific and commercial fields including image processing, biomolecular simulation, fluid dynamics, and astronomy. Each benchmark includes several implementations. Some implementations we provide as readable base implementations from which new optimization efforts can begin, and others as examples of the current state-of-the-art targeting specific CPU and GPU architectures.
As we continue to optimize these benchmarks for new and existing architectures ourselves, we will also gladly accept new implementations and benchmark contributions from developers to recognize those at the frontier of performance optimization on each architecture. Finally, by including versions of varying levels of optimization of the same fundamental algorithm, the benchmarks present opportunities to demonstrate tools and architectures that help programmers get the most out of their parallel hardware. Less optimized versions are presented as challenges to the compiler and architecture research communities: to develop the technology that automatically raises the performance of simpler implementations to the performance level of sophisticated programmer-optimized implementations, or demonstrate any other performance or programmability improvements. We hope that these benchmarks will facilitate effective demonstrations of such technology. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> GPU Applications <s> Modern vehicles are evolving with more electronic components than ever before (In this paper, “vehicle” means “automotive vehicle.” It is also equal to “car.”) One notable example is graphical processing unit (GPU), which is a key component to implement a digital cluster. To implement the digital cluster that displays all the meters (such as speed and fuel gauge) together with infotainment services (such as navigator and browser), the GPU needs to be virtualized; however, GPU virtualization for the digital cluster has not been addressed yet. This paper presents a Virtualized Automotive DIsplay (VADI) system to virtualize a GPU and its attached display device. VADI manages two execution domains: one for the automotive control software and the other for the in-vehicle infotainment (IVI) software. Through GPU virtualization, VADI provides GPU rendering to both execution domains, and it simultaneously displays their images on a digital cluster. In addition, VADI isolates GPU from the IVI software in order to protect it from potential failures of the IVI software. We implement VADI with Vivante GC2000 GPU and perform experiments to ensure requirements of International Standard Organization (ISO) safety standards. The results show that VADI guarantees 30 frames per second (fps), which is the minimum frame rate for digital cluster mandated by ISO safety standards even with the failure of the IVI software. It also achieves 60 fps in a synthetic workload. <s> BIB004
GPU programs are categorized into graphics acceleration and general-purpose computing workloads. The former category includes 2D and 3D graphics workloads. The latter includes a wide range of general-purpose data parallel computations. Graphics acceleration workloads: 3DMark ] is a GPU benchmark test application developed by Futuremark Corporation for measuring the performance of 3D graphics rendering capabilities. 3DMark evaluates various Direct3D features including tessellation, compute shaders, and multi-threading. The Phoronix Test Suite (PTS) BIB004 ] is a set of open source benchmark applications developed by Phoronix Media. Phoronix performs comprehensive evaluation for measuring the performance of computing systems. GPU software developers usually utilize Phoronix for testing the performance of OpenGL games such as Doom 3, Nexuiz, and Enemy Territory. General-purpose computing workloads: Rodinia BIB001 ] is a benchmark suite focusing on the performance evaluation of compute-intensive applications implemented by CUDA, OpenMP, and OpenCL. Each application or kernel covers different types of behavior of compute-intensive applications, and the suite broadly covers the features of the Berkeley Seven Dwarfs [Asanovic et al. 2006] . The Scalable HeterOgeneous Computing (SHOC) Benchmark Suite BIB002 ] is a set of benchmark programs evaluating the performance and stability of GPGPU computing systems using CUDA and OpenCL applications. The suite supports the evaluation of both cluster-level parallelism with the Message Passing Interface (MPI) and node-level parallelism using multiple GPUs in a single node. The application scope of SHOC includes the Fast Fourier Transform (FFT), linear algebra, and molecular dynamics among others. Parboil BIB003 ] is a collection of compute-intensive applications implemented by CUDA, OpenMP, and OpenCL to measure the throughput of CPU and GPU architectures. Parboil provides collected benchmark applications from diverse scientific and commercial fields. They include bio-molecular simulation, fluid dynamics, image processing, and astronomy. The CUDA SDK benchmark suite ] is released as a part of CUDA Toolkit. It covers a diverse range of GPGPU applications performing data-parallel algorithms used in linear algebra operations, computational fluid dynamics (CFD), image convolution, and Black-Scholes & binomial option pricing.
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> System Virtualization <s> Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> System Virtualization <s> A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> System Virtualization <s> Implement a Hyper-V virtualization solution ::: ::: Microsoft Virtualization with Hyper-V shows you how to deploy Microsoft's next-generation hypervisor-based server virtualization technology in a corporate environment. You'll get step-by-step guidelines for getting Hyper-V up and running, followed by best practices for building a larger, fault-tolerant solution using System Center Virtual Machine Manager 2008. This hands-on guide explains how to migrate physical systems to the virtual environment; use System Center Operations Manager; and secure, back up, and restore your Hyper-V solution. 
Plan and implement a Hyper-V installation; configure Hyper-V components; install and configure System Center Virtual Machine Manager 2008; create and manage virtual machines; back up and restore virtual machines; monitor, back up, and restore the virtual solution; secure your Hyper-V environment; understand the virtual desktop infrastructure; use third-party virtualization tools for Hyper-V. Table of contents: 1 Virtualization Overview; 2 Planning and Installation; 3 Configuring Hyper-V Components; 4 Planning and Designing Systems Center Virtual Machine Manager 2008; 5 Installing and Configuring Systems Center Virtual Machine Manager 2008; 6 Configuring Systems Center Virtual Machine Manager 2008; 7 Creating and Managing Virtual Machines; 8 Managing Your Virtual Machines; 9 Backing Up, Restoring, and Disaster Recovery for Your Virtual Machines; 10 Monitoring Your Virtual Solution; 11 Hyper-V Security; 12 Virtual Desktop Infrastructure; A Third Party Virtualization Tools for Hyper-V; B Windows Server 2008 Hyper-V Command Line Reference <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> System Virtualization <s> General-purpose GPUs now account for substantial computing power on many platforms, but the management of GPU resources--cycles, memory, bandwidth--is frequently hidden in black-box libraries, drivers, and devices, outside the control of mainstream OS kernels. We believe that this situation is untenable, and that vendors will eventually expose sufficient information about cross-black-box interactions to enable whole-system resource management. In the meantime, we want to enable research into what that management should look like. We systematize, in this paper, a methodology to uncover the interactions within black-box GPU stacks. The product of this methodology is a state machine that captures interactions as transitions among semantically meaningful states. The uncovered semantics can be of significant help in understanding and tuning application performance. More importantly, they allow the OS kernel to intercept--and act upon--the initiation and completion of arbitrary GPU requests, affording it full control over scheduling and other resource management. While insufficiently robust for production use, our tools open whole new fields of exploration to researchers outside the GPU vendor labs. <s> BIB004
System virtualization allows several operating systems (OSs) to run simultaneously on a single physical machine, thus achieving effective sharing of system resources in personal and shared (e.g., cloud) computing platforms. The software for system virtualization includes a hypervisor, also known as a virtual machine monitor (VMM), and virtual machines (VMs). A hypervisor virtualizes physical resources in the system such as the CPU, memory, and I/O devices. A VM is composed of these virtualized resources and is provided to a guest OS. The guest OS can run on the VM as though the VM were a real physical machine. Popular hypervisors used widely for personal and cloud computing include VMware ESXi, KVM, Hyper-V BIB003 , and Xen BIB001 . System virtualization can be categorized into three major classes: full, para, and hardware-supported virtualization. Full virtualization completely emulates the CPU, memory, and I/O devices to provide a guest OS with an environment identical to the underlying hardware. Privileged instructions of a guest OS that modify the system state are trapped into the hypervisor by a binary translation technique that automatically inserts trapping operations in the binary code of the guest OS. The advantage of this approach is that guest OSs run in the virtualization environment without modification. However, full virtualization usually exhibits high performance penalties due to the cost of emulating the underlying hardware. Para virtualization addresses the performance limitations of full system virtualization by modifying the guest OS code to support more efficient virtualization. Privileged instructions of a guest OS are replaced with hypercalls, which provide a communication channel between the guest OS and the hypervisor. This optimization eliminates the need for binary translation. Para virtualization offers a guest OS an environment similar but not identical to the underlying hardware. The advantage of this approach is that it has lower virtualization overhead than full virtualization. The limitation is that it requires modification to guest OSs, which can be tedious when new versions of an OS kernel or device driver are released. Hardware-supported virtualization requires hardware capabilities such as Intel VT-x BIB002 to trap privileged instructions from guest OSs. These capabilities typically introduce two operating modes for virtualization: guest (for an OS) and root (for the hypervisor). When a guest OS executes a privileged instruction, the processor intervenes and transfers the control to the hypervisor executing in the root mode. The hypervisor then emulates the privileged instruction and returns the control to guest mode. The mode change from guest to root is called a VM Exit. The reverse action is called a VM Entry. The advantage of this approach is that it does not have to modify a guest OS and that it exhibits higher performance than full virtualization. Applied to GPUs, full virtualization uses a custom GPU driver based on the available documentation BIB004 to realize GPU virtualization at the driver level, while para virtualization slightly modifies the custom driver in the guest so that sensitive operations are delivered directly to the host.
• GPU scheduling: This indicates whether the solution provides GPU scheduling for fair or SLA-based sharing on GPUs. Details about GPU scheduling are discussed in Section 7.
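Returning to the three virtualization classes above, the VM Exit/Entry cycle of hardware-supported virtualization can be seen concretely through the Linux KVM API, where every return from KVM_RUN is a VM Exit that the user-space hypervisor must handle. The sketch below is a heavily abridged example written for illustration only (error handling is omitted, the guest code and memory layout follow the style of minimal KVM demos, and it is not taken from any of the surveyed systems).

#include <fcntl.h>
#include <linux/kvm.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main() {
    // Guest code (16-bit real mode): mov al,'H'; out dx,al; hlt
    const uint8_t code[] = {0xb0, 'H', 0xee, 0xf4};

    int kvm  = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

    // One page of guest "physical" memory at 0x1000 holding the code.
    void *mem = mmap(nullptr, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));
    struct kvm_userspace_memory_region region;
    memset(&region, 0, sizeof(region));
    region.slot = 0;
    region.guest_phys_addr = 0x1000;
    region.memory_size = 0x1000;
    region.userspace_addr = (uint64_t)mem;
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
    long mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0UL);
    struct kvm_run *run = (struct kvm_run *)mmap(nullptr, mmap_size,
                          PROT_READ | PROT_WRITE, MAP_SHARED, vcpufd, 0);

    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;       // flat real-mode code segment
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs;
    memset(&regs, 0, sizeof(regs));
    regs.rip = 0x1000; regs.rdx = 0x3f8; regs.rflags = 0x2;
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    for (;;) {                                      // each iteration is a VM Entry
        ioctl(vcpufd, KVM_RUN, 0UL);                // returns on a VM Exit
        if (run->exit_reason == KVM_EXIT_IO) {      // guest touched an I/O port:
            putchar(*((char *)run + run->io.data_offset));  // emulate it in user space
        } else if (run->exit_reason == KVM_EXIT_HLT) {
            puts("\nguest halted");
            break;
        }
    }
    return 0;
}

In full software virtualization the same interception would be achieved by binary translation, and in para virtualization the guest would issue a hypercall instead of being trapped by the hardware.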
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> API REMOTING <s> Modern graphics co-processors (GPUs) can produce high fidelity images several orders of magnitude faster than general purpose CPUs, and this performance expectation is rapidly becoming ubiquitous in personal computers. Despite this, GPU virtualization is a nascent field of research. This paper introduces a taxonomy of strategies for GPU virtualization and describes in detail the specific GPU virtualization architecture developed for VMware's hosted products (VMware Workstation and VMware Fusion). ::: ::: We analyze the performance of our GPU virtualization with a combination of applications and micro bench-marks. We also compare against software rendering, the GPU virtualization in Parallels Desktop 3.0, and the native GPU. We find that taking advantage of hardware acceleration significantly closes the gap between pure emulation and native, but that different implementations and host graphics stacks show distinct variation. The micro bench-marks show that our architecture amplifies the overheads in the traditional graphics API bottlenecks: draw calls, downloading buffers, and batch sizes. ::: ::: Our virtual GPU architecture runs modern graphics-intensive games and applications at interactive frame rates while preserving virtual machine portability. The applications we tested achieve from 86% to 12% of native rates and 43 to 18 frames per second with VMware Fusion 2.0. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> API REMOTING <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB002
Virtualizing GPUs has been regarded as more difficult than virtualizing I/O devices such as network cards or disks. Several reasons add complexity to multiplexing and sharing GPU resources between VMs. First, GPU vendors tend not to reveal the source code and implementation details of their GPU drivers for commercial reasons. Such technical specifications are essential for virtualizing GPUs at the driver level. Second, even when driver implementations are unveiled, for example, by reverse engineering methods BIB002 , GPU vendors still introduce significant changes with every new generation of GPUs to improve performance. As a consequence, specifications revealed by reverse engineering become unusable. Finally, some OS vendors provide proprietary GPU drivers for virtualization, but the proprietary drivers cannot be used across all OSs. In summary, there are no standard interfaces for accessing GPUs, which are required for virtualizing these devices. The API remoting approach overcomes the aforementioned limitations and is now the most prevalent approach to GPU virtualization. The premise of API remoting is to provide a guest OS with a wrapper library that has the same API as the original GPU library. The wrapper library intercepts GPU calls (e.g., OpenGL, Direct3D, CUDA, and OpenCL calls) from an application before the calls reach the GPU driver in the guest OS. The intercepted calls are redirected to the host OS in the same machine through shared memory or a remote machine with available GPUs. The redirected calls are processed remotely and only the results are delivered to the application through the wrapper library. The API remoting approach can emulate a GPU execution environment without exposing physical GPU devices in the guest OS. Figure 2 illustrates an example of a system that adopts the API remoting approach, which forwards GPU calls in the guest to the host in the same machine. The architecture adopts a split device model where the frontend and backend drivers are placed in the guest and host OSs, respectively. The wrapper library installed in the guest OS intercepts a GPU call from the application and delivers it to the frontend driver. The frontend packs the GPU operation with its parameters into a transferable message and sends the message to the backend in the host OS via shared memory. In the host OS, the backend driver parses the message and converts it into the original GPU call. The call handler executes the requested operation on the GPU through the GPU driver. The call handler returns the result back to the application via the reverse path. The key advantage of this approach is that it can support applications using GPUs without recompilation in most cases. The wrapper library can be dynamically linked to existing applications at runtime. In addition, it incurs negligible virtualization overhead as the virtualization architecture is simple and bypasses the hypervisor layer . Finally, as the virtualization layer is usually implemented in user space, this approach can be agnostic on underlying hypervisors , specifically if it does not use hypervisor-specific inter-VM communication methods. The limitation is that keeping the wrapper libraries updated can be a daunting task as new functions are gradually added to vendor GPU libraries BIB002 ]. In addition, as GPU requests bypass the hypervisor, it is difficult to implement basic virtualization features such as execution checkpointing, live migration, and fault tolerance BIB001 .
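The interception step at the heart of API remoting can be sketched with a pre-loaded wrapper library. The snippet below only logs a single CUDA runtime call and then forwards it locally, whereas a frontend like the one in Figure 2 would serialize the call and its arguments into a message for the backend; the file name, build line, choice of cudaMalloc, and the flow in the comments are illustrative assumptions rather than the implementation of any surveyed system.

// wrapper.cpp -- build: g++ -shared -fPIC wrapper.cpp -o libwrap.so -ldl
// use:            LD_PRELOAD=./libwrap.so ./some_cuda_app   (application name is hypothetical)
#define _GNU_SOURCE
#include <dlfcn.h>
#include <cstddef>
#include <cstdio>

// Minimal declaration so the wrapper does not need the CUDA headers.
typedef int cudaError_t;

extern "C" cudaError_t cudaMalloc(void **devPtr, size_t size) {
    // Locate the "real" implementation further down the library search order.
    typedef cudaError_t (*real_fn_t)(void **, size_t);
    static real_fn_t real_cudaMalloc =
        (real_fn_t)dlsym(RTLD_NEXT, "cudaMalloc");

    // A real API-remoting frontend would pack {opcode, size} into a message, hand it
    // to the frontend driver (e.g., over shared memory), and wait for the backend's
    // reply carrying a handle for the remote allocation.
    fprintf(stderr, "[wrapper] intercepted cudaMalloc(%zu bytes)\n", size);

    // Here we simply forward the call to the local library instead.
    return real_cudaMalloc(devPtr, size);
}

Because the wrapper exports the same symbol as the vendor library, existing applications pick it up at load time without recompilation, which is exactly the property highlighted above.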
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for Graphics Acceleration <s> Providing untrusted applications with shared and safe access to modern display hardware is of increasing importance. Our new display system, called Blink, safely multiplexes complex graphical content from multiple untrusted Virtual Machines onto a single Graphics Processing Unit (GPU). Blink does not allow clients to program the GPU directly, but instead provides a virtual processor abstraction which they can program. Blink executes virtual processor programs and controls the GPU on behalf of the client, in a manner that reduces processing and context switching overheads. Blink provides its own stored procedure abstraction for ecient hardware access, but also supports fast emulation of legacy OpenGL programs. To achieve performance and safety, Blink employs just-in-time compilation and simple program inspection. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for Graphics Acceleration <s> Security is an emerging topic in the field of mobile and embedded platforms. The Trusted Computing Group (TCG) has outlined one possible approach to mobile platform security by recently extending their set of Trusted Computing specifications with Mobile Trusted Modules (MTMs). The MTM specification [13] published by the TCG is a platform independent approach to Trusted Computing explicitly allowing for a wide range of potential implementations. ARM follows a different approach to mobile platform security, by extending platforms with hardware supported ARM TrustZone security [3] mechanisms. This paper outlines an approach to merge TCG-style Trusted Computing concepts with ARM TrustZone technology in order to build an open Linux-based embedded trusted computing platform. <s> BIB002
Chromium is an early example of API remoting. In the past, graphics processors could not be fully utilized by a number of applications in the same machine because hosts used slow serial interfaces to the graphics cards. The goal of Chromium is to aggregate GPU calls from different machines and to process them in a powerful cluster rendering system with multiple graphics accelerators. For this purpose, Chromium provides four OpenGL wrapper libraries that encapsulate frequently used operations: stream packing, stream unpacking, point-to-point connection-based networking abstractions, and complete OpenGL state tracker libraries. These libraries intercept OpenGL operations and transfer them to a rendering cluster. VMGL implements the API remoting approach for accelerating OpenGL applications in recent hypervisors including Xen and VMware. It provides hardware-accelerated rendering abilities to OpenGL applications in each VM. VMGL consists of the following three modules: the VMGL library, the VMGL stub, and the VMGL X server extension. The VMGL library is an OpenGL wrapper library that replaces standard implementations. The VMGL stub is created in the host for each VMGL library instance to receive and process GPU requests from the library. OpenGL commands are delivered by a network transport, which makes VMGL agnostic of the underlying hypervisor. The VMGL X server extension runs on the guest OS side and is used to register the size and visibility of OpenGL-enabled windows. Through the registered information, the VMGL stub processes only the region that is visible on the guest OS's desktop. VMGL additionally supports suspend and resume functionalities by keeping track of the entire OpenGL state in the guest and restoring the state in a new stub. Blink BIB001 offers accelerated OpenGL rendering abilities to applications inside a VM similarly to VMGL but focuses more on performance optimization. Blink provides the BlinkGL wrapper library for guest OSs, which is a superset of OpenGL. The wrapper library provides stored procedures, each of which is a sequence of serialized BlinkGL commands, to eliminate the overhead of repeatedly (de)serializing GL command streams during communication between the guest and the host. The Blink Server in the host interprets the transferred stored procedures by using a Just-In-Time (JIT) compiler. The host and guest OSs communicate with each other through shared memory to reduce the overhead of using a network transport for large texture or frame buffer objects. Parallels Desktop offers a proprietary GPU driver for guest OSs to offload their OpenGL and Direct3D operations onto remote devices with GPUs. The proprietary GPU driver can be installed only on Parallels products such as Parallels Desktop and Parallels Workstation. The server module in the remote device receives access requests from a number of remote VMs and chooses the next VM to use the GPU. The module then sends a token to the selected guest and allows it to occupy the GPU for a specific time interval. The guest OS and the remote OS use a remote procedure call (RPC) protocol for delivering GPU commands and the corresponding results. VADI [Lee et al. 2016] implements GPU virtualization for vehicles by multiplexing a GPU device used by the digital cluster of a car. VADI works on a proprietary hypervisor for vehicles called the Secure Automotive Software Platform (SASP).
This hypervisor exploits the ARM TrustZone technology BIB002 ], which can accommodate two guest OSs in the secure and normal "worlds", respectively. VADI implements the GL wrapper library and the V-Bridge-normal in the normal world and the GL stub and the V-Bridge-secure in the secure one. GPU commands executed in the normal world are intercepted by the wrapper library and processed in the secure world by the GL stub. Each V-Bridge is connected by shared memory and is responsible for communication between the two worlds.
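Several of the designs above (Blink's shared-memory channel, VADI's V-Bridges) and some of the GPGPU systems in the next subsection move marshalled GPU commands between a guest-side frontend and a host-side backend through a shared memory region. The following sketch shows one plausible shape of such a channel, a single-producer/single-consumer ring buffer; all names and sizes are invented, and a real channel would also need a doorbell or event mechanism instead of the polling implied here, as well as lock-free atomics so the layout is safe across processes.

    // shm_ring_sketch.cpp -- an illustrative single-producer/single-consumer ring
    // of the kind a frontend/backend pair might place in a shared memory region.
    #include <atomic>
    #include <cstdint>
    #include <cstring>

    constexpr size_t SLOT_SIZE = 256;   // one marshalled GPU command per slot
    constexpr size_t NUM_SLOTS = 64;

    struct ShmRing {
        std::atomic<uint32_t> head{0};  // next slot the guest-side frontend fills
        std::atomic<uint32_t> tail{0};  // next slot the host-side backend drains
        uint8_t slots[NUM_SLOTS][SLOT_SIZE];
    };

    // Frontend: enqueue a marshalled command; returns false when the ring is full.
    bool enqueue(ShmRing* ring, const void* cmd, size_t len) {
        if (len > SLOT_SIZE) return false;
        uint32_t head = ring->head.load(std::memory_order_relaxed);
        uint32_t next = (head + 1) % NUM_SLOTS;
        if (next == ring->tail.load(std::memory_order_acquire)) return false;  // full
        std::memcpy(ring->slots[head], cmd, len);
        ring->head.store(next, std::memory_order_release);  // publish to the backend
        return true;
    }

    // Backend: dequeue one command to hand to the real GPU driver; false when empty.
    bool dequeue(ShmRing* ring, void* out, size_t len) {
        if (len > SLOT_SIZE) len = SLOT_SIZE;
        uint32_t tail = ring->tail.load(std::memory_order_relaxed);
        if (tail == ring->head.load(std::memory_order_acquire)) return false;  // empty
        std::memcpy(out, ring->slots[tail], len);
        ring->tail.store((tail + 1) % NUM_SLOTS, std::memory_order_release);
        return true;
    }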
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> Recent embedded systems such as mobile phones provide various multimedia applications and execute them concurrently. Embedded systems are being developed to provide new unique solutions and services. This forces manufacturers of embedded systems to select a more productive and sophisticated operating system such as Linux. However, there are several limitations in using a general purpose operating system (GPOS) because of software instability and workload of transporting legacy real-time applications into the system. A good solution is to use a virtualization technology such as Xen, which allows many guest operating systems to be executed simultaneously and in a stable manner on one physical machine. If Xen is applied to embedded systems such as mobile phones, the Xen can execute the legacy real-time operating system for critical tasks and also GPOS executing user-friendly task. In this case, the Xen should provide the IPC mechanism between processes on each operating system on Xen. However, Xen doesn't provide a simple way for sharing data between processes running on different guest operating systems. In this paper, we propose a simple method for sharing data between the processes in different guest operating systems by using a mechanism provided by Xen. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> Advances in virtualization technology have focused mainly on strengthening the isolation barrier between virtual machines (VMs) that are co-resident within a single physical machine. At the same time, a large category of communication intensive distributed applications and software components exist, such as web services, high performance grid applications, transaction processing, and graphics rendering, that often wish to communicate across this isolation barrier with other endpoints on co-resident VMs. 
State of the art inter-VM communication mechanisms do not adequately address the requirements of such applications. TCP/UDP based network communication tends to perform poorly when used between co-resident VMs, but has the advantage of being transparent to user applications. Other solutions exploit inter-domain shared memory mechanisms to improve communication latency and bandwidth, but require applications or user libraries to be rewritten against customized APIs - something not practical for a large majority of distributed applications. In this paper, we present the design and implementation of a fully transparent and high performance inter-VM network loopback channel, called XenLoop, in the Xen virtual machine environment. XenLoop does not sacrifice user-level transparency and yet achieves high communication performance between co-resident guest VMs. XenLoop intercepts outgoing network packets beneath the network layer and shepherds the packets destined to co-resident VMs through a high-speed inter-VM shared memory channel that bypasses the virtualized network interface. Guest VMs using XenLoop can migrate transparently across machines without disrupting ongoing network communications, and seamlessly switch between the standard network path and the XenLoop channel. In our evaluation using a number of unmodified benchmarks, we observe that XenLoop can reduce the inter-VM round trip latency by up to a factor of 5 and increase bandwidth by a up to a factor of 6. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> Despite advances in high performance inter-domain communication for virtual machines (VM), data intensive applications developed for VMs based on traditional remote procedure call (RPC) mechanism still suffer from performance degradation due to the inherent inefficiency of data serialization/deserilization operation. This paper presents VMRPC, a light-weight RPC framework specifically designed for VMs that leverages heap and stack sharing to circumvent unnecessary data copying and serialization/deserilization, and achieve high performance. Our evaluation shows that the performance of VMRPC is an order of magnitude better than traditional RPC systems and existing alternative inter-domain communication mechanisms. We adopt VMRPC in a real system, and the experiment results exhibit that the performance of VMRPC is even competitive to native environment. <s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> Past research in virtualisation technology has mainly focused on increasing isolation of co-resident virtual machines. At the same time network intensive applications, such as web services or database applications are being consolidated onto a single physical platform. The isolation properties of virtualisation, however, demand a strict separation of the shared resources. Co-resident virtual machines are therefore forced to fallback to inefficient network emulation for communication. Many inter virtual machine communication methods proposed recently, introduced shared memory, customised libraries or APIs. This is not only unpractical but can also undermine a system’s integrity; moreover transparency and live migration is commonly neglected. Therefore in this paper we discuss the challenges and requirements for inter virtual machine communication and examine available solutions proposed by academia and industry. 
We also discuss how the current evolution of virtualisation and modern CPUs pose new challenges for inter virtual machine communication. Finally, we consider the possibility of utilising previously unused CPU capabilities to accommodate an inter virtual machine communication mechanism. <s> BIB005 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> In this work we present an hypervisor-independent GPU Virtualization Service named GVirtus. It instantiates virtual machines able to access to the GPU in a transparent way. GPUs allow to speed up calculations over CPUs. Therefore, virtualizing GPUs is a major trend and can be considered a revolutionary tool for HPC. To test the performances of GVirtus we used a fluid simulator. Morover to exploit the computational power of GPUs in cloud computing we virtualized three different plugins for GVirtus Framework : Cuda Runtime, Cuda Driver and OpenCL plugins. Our results show that the overhead introduced by virtualization is almost irrelevant. <s> BIB006 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods for GPGPU Computing <s> Virtualization technology, in the computer architecture domain, is characterized by the property of sharing system resources among the virtual machines (VMs) hosted on a physical server. Isolation is also a characteristic property of virtualization, which prevents the guest operating system running on these virtual machines to be aware of each other's presence on the same host. <s> BIB007
Following NVIDIA's launch of CUDA in 2006, general-purpose computing on GPUs (GPGPU) became more popular and practical. NVIDIA's CUDA conceals the underlying graphics hardware architecture from developers and allows programmers to write scalable programs without learning new programming languages. Research on GPU virtualization has focused more on GPGPUs since the introduction of CUDA to accelerate compute-intensive applications running in the Cloud. GViM implements GPU virtualization at the CUDA API level in the Xen hypervisor BIB001 ]. To enable a guest VM to access the GPU located in the host, GViM implements the Interposer CUDA Library for guest OSs, and the frontend and backend drivers for communication between the host and the guest. GViM focuses on efficient sharing of large data volumes between the guest and the host when a GPU application is data intensive. For this purpose, GViM furnishes shared memory allocated by Xenstore BIB002 between the frontend and backend, instead of using a network transport. It further develops the one-copy mechanism that maps the shared memory into the address space of the GPU application. This removes data copying from user space to kernel space in the guest OS and improves communication performance. vCUDA ] also implements GPU virtualization in the Xen hypervisor. vCUDA provides a CUDA wrapper library and virtual GPUs (vGPUs) in the guest and the vCUDA stub in the host. The wrapper library intercepts and redirects API calls from the guest to the host. vGPUs are created per application by the wrapper library and give a complete view of the underlying GPUs to applications. The vCUDA stub creates an execution context for each guest OS and executes remote GPU requests. For communication between VMs, vCUDA adopts XML-RPC [Cerami 2002 ], which supports high-level communication between the guest and the host. In the latest version , vCUDA is ported to KVM using VMRPC BIB004 with VMCHANNEL BIB007 . VMRPC utilizes a shared memory zone between the host OS and the guest OS to reduce the overhead of using XML-RPC transmission with TCP/IP. VMCHANNEL enables an asynchronous notification mechanism in KVM to reduce the latency of inter-VM communication. vCUDA also develops Lazy RPC that performs batching specific CUDA calls that can be delayed (e.g., cudaConfigureCall()). This prevents frequent context switching between the guest OS and the hypervisor occurred by repeated RPCs and improves communication performance. rCUDA ] focuses on remote GPU-based acceleration, which offloads CUDA computation parts onto GPUs located in a remote host. rCUDA recognizes that prior virtualization research based on emulating local devices is not appropriate for HPC applications because of unacceptable virtualization overhead. Instead of device emulation, rCUDA implements virtual CUDA-compatible devices by adopting remote GPU-based acceleration without the hypervisor layer. More concretely, rCUDA provides a CUDA API wrapper library to the client side, which intercepts and forwards GPU calls from the client to the GPU server, and the server daemon in the server side, which receives and executes the remote GPU calls. The client and the server communicate with each other using a TCP/IP socket. rCUDA points out network performance bottlenecks when several clients concurrently access the remote GPU cluster. To overcome this issue, rCUDA provides a customized application-level communication protocol . 
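As a rough illustration of the server side of such remote acceleration, the sketch below shows a daemon loop that unpacks forwarded requests and replays them on a local GPU through the real CUDA runtime. The wire format, the opcodes, and the receive/send helpers are invented for illustration and are not rCUDA's actual protocol.

    // remote_gpu_server_sketch.cpp -- a minimal server-side loop of the kind a
    // remote-acceleration daemon might run. Requires the CUDA toolkit for the
    // runtime API calls; the transport below is a placeholder.
    #include <cstdint>
    #include <cuda_runtime_api.h>

    enum class Op : uint32_t { Malloc = 1, MemcpyH2D = 2, Free = 3 };

    struct Request  { Op op; uint64_t arg0; uint64_t len; };
    struct Response { int32_t status; uint64_t value; };

    // Placeholder transport: a real daemon would read requests from a TCP or
    // InfiniBand connection; this stub simply reports "no more requests".
    static bool recv_request(Request*, void*, uint64_t) { return false; }
    static void send_response(const Response&) {}

    void serve_forever() {
        static uint8_t payload[1 << 20];          // staging buffer for host data
        Request req;
        while (recv_request(&req, payload, sizeof(payload))) {
            Response resp{0, 0};
            switch (req.op) {
            case Op::Malloc: {                     // replay the call on a local GPU
                void* dev = nullptr;
                resp.status = cudaMalloc(&dev, static_cast<size_t>(req.arg0));
                resp.value  = reinterpret_cast<uint64_t>(dev);
                break;
            }
            case Op::MemcpyH2D:                    // copy client-supplied data to the device
                resp.status = cudaMemcpy(reinterpret_cast<void*>(req.arg0), payload,
                                         static_cast<size_t>(req.len),
                                         cudaMemcpyHostToDevice);
                break;
            case Op::Free:
                resp.status = cudaFree(reinterpret_cast<void*>(req.arg0));
                break;
            }
            send_response(resp);                   // only results travel back to the client
        }
    }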
Current rCUDA supports EDR 100G InfiniBand using Mellanox adapters for providing higher network bandwidth . GVirtuS implements a CUDA wrapper library, the frontend and backend drivers, and communicators supporting various hypervisors including KVM, Xen, and VMware. The frontend and backend drivers are placed in the guest and the host, respectively. The two drivers communicate with each other by a communicator specific to each hypervisor. GVirtuS identifies that the performance of GPU virtualization depends on communication throughput between the frontend and the backend. To address this issue, GVirtuS implements pluggable communication components that utilize high performance communication channels provided by the hypervisors. The communicators for KVM, Xen, and VMware employ VMSocket, XenLoop BIB003 , and the VMware Communicator Interface (VMCI) BIB005 as communication channels, respectively. In the latest version, the VMShm communicator BIB006 , which leverages shared memory, was introduced for better communication performance. It allocates a POSIX shared memory chunk on the host OS and allows both the backend and frontend to map the memory for communication. GVirtuS also provides a TCP/IP-based communicator for remote GPU-based acceleration. GVM ] is based on a model that predicts the performance of GPU applications. GVM validates this model by introducing its own virtualization infrastructure, which consists of the user process APIs, the GPU Virtualization Manager (GVM), and the virtual shared memory. The user process APIs expose virtual GPU resources to programmers. The source code then needs to be modified to contain the APIs to utilize the virtual GPUs. GVM runs in the host OS and is responsible for initializing the virtual GPUs, receiving requests from guest OSs, and passing them to physical GPUs. The virtual shared memory is implemented as POSIX shared memory by which the guest and host OSs can communicate with each other. Pegasus ] advances its predecessor, GViM ], by managing virtualized accelerators as first class schedulable and shareable entities. For this purpose, Pegasus introduces the notion of an accelerator virtual CPU (aVCPU), which embodies the state of a VM executing GPU calls on accelerators, similarly to the concept of virtual CPUs (VCPUs). In Pegasus, an aVCPU is a basic schedulable entity and consists of a shared call buffer per domain, a polling thread in the host OS, and the CUDA runtime context. GPU requests from a guest OS are stored in the call buffer shared between the frontend and backend drivers. A polling thread selected by the GPU scheduler then fetches the GPU requests from the buffer and passes them to the actual CUDA runtime in the host OS. The scheduling methods Pegasus adopts will be introduced in Section 7.2.2. Shadowfax ] enhances its predecessor, Pegasus . Shadowfax tackles the problem that under Pegasus applications requiring significant GPU computational power are limited to using only local GPUs, although remote nodes may boast additional GPUs. To address this issue, Shadowfax presents the concept of GPGPU assemblies, which can configure diverse virtual platforms based on application demands. This virtual platform concept allows applications to run across node boundaries. For local GPU execution, Shadowfax adopts the GPU virtualization architecture of Pegasus. 
For remote execution, Shadowfax implements a remote server thread that creates a fake guest VM environment, which consists of a call buffer and a polling thread per VM in the remote machine. To reduce remote execution overhead, Shadowfax additionally batches GPU calls and their data. VOCL presents a GPU virtualization solution for OpenCL applications. Similarly to rCUDA, VOCL adopts remote GPU-based acceleration to provide virtual devices supporting OpenCL. VOCL provides an OpenCL wrapper library on the client side and a VOCL proxy process on the server side. The proxy process receives inputs from the library and executes them on remote GPUs. The wrapper library and the proxy process communicate via MPI. The authors claim that MPI provides a richer communication interface and can establish communication channels dynamically compared to other transport methods. DS-CUDA provides a remote GPU virtualization platform similar to rCUDA. It is composed of a compiler, which translates CUDA API calls into the respective wrapper functions, and a server, which receives GPU calls and their data via InfiniBand verbs or an RPC socket. Compared to other similar solutions, DS-CUDA implements redundant calculation to improve reliability: two different GPUs in the cluster perform the same calculation to ensure that the result returned by the cluster is correct.
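The redundant-calculation idea can be illustrated with ordinary CUDA host code: the same computation is launched on two devices and the results are cross-checked before being accepted. The sketch below is a standalone example written for illustration (it assumes two visible GPUs and omits error handling); it is not DS-CUDA's implementation.

    // redundant_exec_sketch.cu -- run the same computation on two GPUs and
    // cross-check the results, in the spirit of redundant calculation.
    #include <cstdio>
    #include <cstring>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void scale(float* data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    // Run the kernel on the given device and copy the result back to host memory.
    static void run_on_device(int device, const std::vector<float>& in, std::vector<float>& out) {
        cudaSetDevice(device);
        float* d = nullptr;
        size_t bytes = in.size() * sizeof(float);
        cudaMalloc(&d, bytes);
        cudaMemcpy(d, in.data(), bytes, cudaMemcpyHostToDevice);
        scale<<<(in.size() + 255) / 256, 256>>>(d, static_cast<int>(in.size()), 2.0f);
        cudaMemcpy(out.data(), d, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d);
    }

    int main() {
        std::vector<float> input(1 << 20, 1.5f), r0(input.size()), r1(input.size());
        run_on_device(0, input, r0);     // primary GPU
        run_on_device(1, input, r1);     // redundant GPU performing the same work
        bool match = std::memcmp(r0.data(), r1.data(), r0.size() * sizeof(float)) == 0;
        std::printf(match ? "results match\n" : "mismatch: recompute or flag an error\n");
        return match ? 0 : 1;
    }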
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Full Virtualization <s> Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Full Virtualization <s> The most popular I/O virtualization method today is paravirtual I/O. Its popularity stems from its reasonable performance levels while allowing the host to interpose, i.e., inspect or control, the guest's I/O activity. ::: ::: We show that paravirtual I/O performance still significantly lags behind that of state-of-the-art noninterposing I/O virtualization, SRIOV. Moreover, we show that in the existing paravirtual I/O model, both latency and throughput significantly degrade with increasing number of guests. This scenario is becoming increasingly important, as the current trend of multi-core systems is towards an increasing number of guests per host. ::: ::: We present an efficient and scalable virtual I/O system that provides all of the benefits of paravirtual I/O. Running host functionality on separate cores dedicated to serving multiple guest's I/O combined with a fine-grained I/O scheduling and exitless notifications our I/O virtualization system provides performance which is 1.2×-3× better than the baseline, approaching and in some cases exceeding non-interposing I/O virtualization performance. <s> BIB002
GPUvm ] implements both full and para virtualization in the Xen hypervisor by using a Nouveau driver [X.OrgFoundation 2011] in the guest OS side. To isolate multiple VMs on a GPU in full virtualization, GPUvm partitions both physical GPU memory and the MMIO region into several pieces and assigns each portion to an individual VM. A GPU shadow page table per VM enables access to the partitioned memory by translating the virtual GPU addresses to the physical GPU addresses of the partitioned memory. Each shadow page table is updated on TLB flush. In CPU virtualization, the hypervisor updates shadow page tables when page faults occur. However, GPUvm cannot deal with page faults from the GPU because of a limitation of the current NVIDIA GPU design. Therefore, GPUvm should scan the entire page table on every TLB flush. The partitioned MMIO region is configured as read-only so every GPU access from a guest can generate a page fault. The OS then intercepts and emulates the access in the driver domain of Xen. Because the number of command submission channels (explained in Section 2.1) is limited in hardware, GPUvm also virtualizes them by creating shadow channels and mapping a virtual channel to a shadow channel. Actually, this full virtualization technique shows poor performance for the following reasons: (1) the interception of every GPU access and (2) the scanning of the entire page table on every TLB flush. GPUvm addresses the first limitation with BAR Remap, which only intercepts GPU calls related to accesses to GPU channel descriptors. A possible isolation issue caused by passing through other GPU accesses is addressed by utilizing shadow page tables, which isolate BAR area accesses among VMs. For the second limitation, GPUvm suggests a para virtualization technique. Similarly to Xen BIB001 ], GPUvm constructs guest GPU page tables and allows VMs to use these page tables directly instead of shadow page tables. The guest driver issues hypercalls to GPUvm when its GPU page table needs to be updated. GPUvm then validates these requests for isolation between VMs. gVirt ] is based on its previous work, XenGT , and implements full GPU virtualization for Intel on-chip GPUs in the Xen hypervisor. It focuses on graphics acceleration rather than GPGPU computing. gVirt asserts that the frame and command buffers are the most performance-critical resources in GPUs. It allows each VM to access the two buffers directly (pass-through) without intervention from the hypervisor. For this purpose, the graphics memory resource is partitioned by the gVirt Mediator so each VM can have its own frame and command buffers in the partitioned memory. At the same time, privileged GPU instructions are trapped and emulated by the gVirt Mediator in the driver domain of Xen. This enables secure isolation among multiple VMs without significant performance loss. The whole process is called mediated pass-through. KVMGT ] is a ported version of gVirt for KVM and has been integrated into the mainline Linux kernel since version 4.10. gHyvi points out that its predecessor, gVirt, suffers from severe performance degradation when a GPU application in a VM performs frequent updates on guest GPU page tables. This modification causes excessive VM exits BIB002 , which are expensive operations in hardware-based virtualization. This is also known as the massive update issue. gHyvi introduces relaxed page table shadowing, which removes the write-protection of the page tables to avoid excessive trapping. 
The technique rebuilds the guest page tables at a later point in time, when rebuilding is required. This lazy reconstruction is possible because the modification to the guest page tables does not take effect before the relevant GPU operations are submitted to the GPU command buffer. gScale solves gVirt's scalability limitation. gVirt partitions the global graphics memory (2GB) into several fixed-size regions and allocates them to vGPUs. Due to the recommended memory allocation for each vGPU (e.g., 448MB in Linux), gVirt limits the total number of vGPUs to 4. gScale overcomes this limitation by making the GPU memory shareable. For the high graphics memory in Intel GPUs, gScale allows each vGPU to maintain its own private shadow graphics translation table (GTT). Each private GTT translates the vGPU's logical graphics addresses to any physical address in the high memory. On context switching, gScale copies the next vGPU's private GTT to the physical GTT to activate that vGPU's graphics address space. For the low memory, which is also accessible by the CPU, gScale introduces Ladder mapping combined with private shadow GTTs. As virtual CPUs and GPUs are scheduled asynchronously, a virtual CPU may access illegal memory if it refers to the currently active graphics address space. Ladder mapping modifies the Extended Page Table (EPT) used by the virtual CPU so it can bypass the graphics memory space. With these schemes, gScale can host up to 15 vGPUs for Linux VMs and 12 for Windows VMs. The scheduling method that gScale adopts is discussed in Section 7.2.2.
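The private shadow GTT mechanism can be pictured as follows: each vGPU owns a full-size translation table in host memory, and the mediator copies it onto the single hardware table whenever that vGPU is scheduled in. The sketch below is purely schematic; the types and the in-memory array standing in for the hardware GTT are invented, and real GTT updates are MMIO writes issued by the host-side driver.

    // gtt_switch_sketch.cpp -- schematic view of per-vGPU private shadow GTTs.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    constexpr size_t GTT_ENTRIES = 1 << 19;         // e.g., enough entries to cover 2GB

    struct VGpu {
        int id;
        std::vector<uint64_t> private_gtt;          // per-vGPU shadow translation table
        explicit VGpu(int i) : id(i), private_gtt(GTT_ENTRIES, 0) {}
    };

    // Stand-in for the single hardware table (in reality an MMIO-mapped region).
    static uint64_t physical_gtt[GTT_ENTRIES];

    // Called by the mediator when `next` is granted the GPU.
    void context_switch(VGpu* prev, VGpu* next) {
        if (prev) {
            // Preserve any entries the outgoing vGPU modified while it ran.
            std::copy(physical_gtt, physical_gtt + GTT_ENTRIES, prev->private_gtt.begin());
        }
        // Activate the incoming vGPU's graphics address space.
        std::copy(next->private_gtt.begin(), next->private_gtt.end(), physical_gtt);
    }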
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> SUMMARY This paper describes the LINPACK Benchmark and some of its variations commonly used to assess the performance of computer systems. Aside from the LINPACK Benchmark suite, the TOP500 and the HPL codes are presented. The latter is frequently used to obtained results for TOP500 submissions. Information is also given on how to interpret the results of the benchmark and how the results fit into the performance evaluation process. Copyright c � 2003 John Wiley & Sons, Ltd. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> PURPOSE ::: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). ::: ::: ::: METHODS ::: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDATM programming model (NVIDIA Corporation, Santa Clara, CA). ::: ::: ::: RESULTS ::: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. ::: ::: ::: CONCLUSIONS ::: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> Virtualization poses new challenges to I/O performance. The single-root I/O virtualization (SR-IOV) standard allows an I/O device to be shared by multiple Virtual Machines (VMs), without losing performance. We propose a generic virtualization architecture for SR-IOV-capable devices, which can be implemented on multiple Virtual Machine Monitors (VMMs). With the support of our architecture, the SR-IOV-capable device driver is highly portable and agnostic of the underlying VMM. Because the Virtual Function (VF) driver with SR-IOV architecture sticks to hardware and poses a challenge to VM migration, we also propose a dynamic network interface switching (DNIS) scheme to address the migration challenge. Based on our first implementation of the network device driver, we deployed several optimizations to reduce virtualization overhead. Then, we conducted comprehensive experiments to evaluate SR-IOV performance. The results show that SR-IOV can achieve a line rate throughput (9.48 Gbps) and scale network up to 60 VMs, at the cost of only 1.76% additional CPU overhead per VM, without sacrificing throughput and migration. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> The Westmere processor is implemented on a high-к metal-gate 32nm process technology [1] as a compaction of the Nehalem processor family [2]. Figure 5.1.1 shows the 6-core dual-socket server processor and the 2-core single-socket processor for mainstream client. 
This paper focuses on innovations and circuit optimizations made to the 6-core processor. The 6-core design has 1.17B transistors including the 12MB shared L3 Cache and fits in approximately the same die area as its 45nm 4-core 8MB-L3-cache Nehalem counterpart. The core supports new instructions for accelerating encryption/decryption algorithms, speeds up performance under virtualized environments, and contains a host of other targeted performance features. <s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> The usage and adoption of General Purpose GPUs (GPGPU) in HPC systems is increasing due to the unparalleled performance advantage of the GPUs and the ability to fulfill the ever-increasing demands for floating points operations. While the GPU can offload many of the application parallel computations, the system architecture of a GPU-CPU-InfiniBand server does require the CPU to initiate and manage memory transfers between remote GPUs via the high speed InfiniBand network. In this paper we introduce for the first time a new innovative technology--GPUDirect that enables Tesla GPUs to transfer data via InfiniBand without the involvement of the CPU or buffer copies, hence dramatically reducing the GPU communication time and increasing overall system performance and efficiency. We also explore for the first time the performance benefits of GPUDirect using Amber and LAMMPS applications. <s> BIB005 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> This paper presents a GPU-assisted version of the LIBSVM library for Support Vector Machines. SVMs are particularly popular for classification procedures among the research community, but for large training data the processing time becomes unrealistic. The modification that is proposed is porting the computation of the kernel matrix elements to the GPU, to significantly decrease the processing time for SVM training without altering the classification results compared to the original LIBSVM. The experimental evaluation of the proposed approach highlights how the GPU-accelerated version of LIBSVM enables the more efficient handling of large problems, such as large-scale concept detection in video. <s> BIB006
Amazon Elastic Compute Cloud (Amazon EC2) was the first cloud hosting service to support GPUs for cloud tenants by using Intel's pass-through technology (VT-d). In 2010, Amazon EC2 introduced Cluster GPU Instances (CGIs), which provide two NVIDIA Tesla GPUs per VM. CGIs can support HPC applications requiring massive parallel processing power by exposing native GPUs to each guest OS directly. One study explored the performance of a cluster of 32 CGIs in Amazon EC2. The authors tested the SHOC and Rodinia benchmark suites as synthetic kernels, NAMD and MC-GPU BIB002 as real-world applications in science and engineering, and the HPL benchmark as a widely used implementation of Linpack BIB001. They measured performance both in a virtualized setting using Amazon EC2 CGIs and in a native environment using their own cluster. The authors show that computationally intensive programs can generally take full advantage of GPUs in the cloud setting. However, memory-intensive applications can experience a small penalty because Amazon EC2 CGIs enable ECC memory error checking, which can limit memory bandwidth. Also, network-intensive GPU applications may suffer from virtualized network access, which reduces scalability. Another study implemented a GPU pass-through system using Xen and KVM and performed a performance analysis of CUDA applications. The authors explain how to enable GPU pass-through technically in both hypervisors and evaluate the performance of the CUDA SDK benchmark suite. They claim that GPU performance with pass-through in both hypervisors is similar to that of a native environment. Shea and Liu [2013] explored the performance of cloud gaming in a GPU pass-through environment. They found that some gaming applications perform poorly when they are deployed in a VM using a dedicated GPU. This is because the virtualized environment cannot secure enough memory bandwidth while transferring data between the host and the GPU compared with a native environment. The authors identify that the performance in KVM is less than 59% of that of their bare-metal system. Detailed profiling shows that some gaming applications generate frequent context switches between the VM and the hypervisor to process memory access requests during memory transfers, which brings memory bandwidth utilization down. Other researchers evaluated the performance of a Xen VM infrastructure using PCI pass-through and the SHOC benchmark. The authors found only a 1.2% performance penalty in the worst case in the Kepler K20m GPU-enabled VM, whereas the API remoting approach incurs performance overhead of up to 40%. In more recent work, they evaluated HPC workloads in a virtualized cluster using PCI pass-through with SR-IOV BIB003 and GPUDirect BIB005. SR-IOV is a hardware-assisted network virtualization technology that provides near-native bandwidth on 10-Gigabit connectivity within VMs. GPUDirect reduces the overhead of data transfers across GPUs by supporting direct RDMA between GPUs on an InfiniBand interconnect. For evaluation, they used two molecular dynamics (MD) applications: the large-scale atomic/molecular massively parallel simulator (LAMMPS) and highly optimized object-oriented molecular dynamics (HOOMD). The authors observe that the MD applications using MPI and CUDA can run at near-native performance, with only 1.9% and 1.5% overheads for LAMMPS and HOOMD, respectively, compared to their execution in a non-virtualized environment.
Another study characterized the performance of VMware ESXi, KVM, Xen, and Linux Containers (LXC) using the PCI pass-through mode. The authors tested the CUDA SHOC and OpenCL SDK benchmark suites as microbenchmarks and the LAMMPS molecular dynamics simulator, GPU-LIBSVM BIB006, and the LULESH shock hydrodynamics simulator as application benchmarks. They observe that KVM consistently yields near-native performance in all benchmark programs. VMware ESXi performs well on the Sandy Bridge microarchitecture but not on the Westmere microarchitecture BIB004. The authors speculate that VMware ESXi is optimized for more recent microarchitectures. Xen consistently shows average performance among the hypervisors. Finally, LXC performs closest to the native environment because LXC guests share a single Linux kernel. A further study tried to overcome the inability of GPU pass-through to support sharing of a single GPU between multiple VMs. In GPU pass-through environments, a GPU can be dedicated only to a single VM when the VM boots, and the GPU cannot be deallocated until the VM is shut down. The authors implemented coarse-grained sharing by utilizing the hot plug functionality of PCIe channels, which can install or remove GPU devices dynamically. To realize this implementation, a CUDA wrapper library is provided to VMs to monitor the activity of GPU applications. If an application requires a GPU, the wrapper library sends a GPU allocation request to Virtual Machine 0 (VM0), which is the host OS in KVM or domain 0 in Xen. The GPU-Admin in VM0 mediates this request and attaches an available GPU managed by the GPU pool to the VM. When the application finishes its execution, the wrapper library sends a de-allocation request to the GPU-Admin, which then returns the GPU to the GPU pool.
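The GPU-Admin and GPU-pool interaction described above can be summarized by a small allocator that hands out pass-through GPUs on request and reclaims them on release. In the sketch below the hot plug calls are placeholders that merely print what a real implementation would ask the hypervisor to do; the class and function names are invented for illustration.

    // gpu_pool_sketch.cpp -- a schematic GPU-Admin managing a pool of
    // pass-through GPUs and attaching them to VMs on demand via PCIe hot plug.
    #include <cstdio>
    #include <deque>
    #include <map>
    #include <optional>
    #include <string>

    // Placeholder hot plug hooks standing in for the hypervisor's device
    // attach/detach interface.
    static void hotplug_attach(const std::string& vm, const std::string& pci) {
        std::printf("attach %s to %s\n", pci.c_str(), vm.c_str());
    }
    static void hotplug_detach(const std::string& vm, const std::string& pci) {
        std::printf("detach %s from %s\n", pci.c_str(), vm.c_str());
    }

    class GpuAdmin {
        std::deque<std::string> free_gpus_;                 // PCI addresses of idle GPUs
        std::map<std::string, std::string> assigned_;       // VM name -> PCI address
    public:
        explicit GpuAdmin(std::deque<std::string> gpus) : free_gpus_(std::move(gpus)) {}

        // Called when a wrapper library reports that a GPU application has started.
        std::optional<std::string> allocate(const std::string& vm) {
            if (assigned_.count(vm)) return assigned_[vm];  // VM already holds a GPU
            if (free_gpus_.empty()) return std::nullopt;    // pool exhausted
            std::string gpu = free_gpus_.front();
            free_gpus_.pop_front();
            hotplug_attach(vm, gpu);                        // device appears inside the VM
            assigned_[vm] = gpu;
            return gpu;
        }

        // Called when the application exits; the GPU returns to the pool.
        void release(const std::string& vm) {
            auto it = assigned_.find(vm);
            if (it == assigned_.end()) return;
            hotplug_detach(vm, it->second);
            free_gpus_.push_back(it->second);
            assigned_.erase(it);
        }
    };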
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> SCHEDULING METHODS <s> The inexorable demand for computing power has lead to increasing interest in accelerator-based designs. An accelerator is specialized hardware unit that can perform a set of tasks with much higher performance or power efficiency than a general-purpose CPU. They may be embedded in the pipeline as a functional unit, as in SIMD instructions, or attached to the system as a separate device, as in a cryptographic co-processor. Current operating systems provide little support for accelerators: whether integrated into a processor or attached as a device, they are treated as CPU or a device and given no additional consideration. However, future processors may have designs that require more management by the operating system. For example, heterogeneous processors may only provision some cores with accelerators, and IBM's wire-speed processor allows user-mode code to launch computations on a shared accelerator without kernel involvement. In such systems, the OS can improve performance by allocating accelerator resources and scheduling access to the accelerator as it does for memory and CPU time. In this paper, we discuss the challenges presented by adopting accelerators as an execution resource managed by the operating system. We also present the initial design of our system, which provides flexible control over where and when code executes and can apply power and performance policies. It presents a simple software interface that can leverage new hardware interfaces as well as sharing of specialized units in a heterogeneous system. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> SCHEDULING METHODS <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> SCHEDULING METHODS <s> GPUs are being increasingly adopted as compute accelerators in many domains, spanning environments from mobile systems to cloud computing. 
These systems are usually running multiple applications, from one or several users. However GPUs do not provide the support for resource sharing traditionally expected in these scenarios. Thus, such systems are unable to provide key multiprogrammed workload requirements, such as responsiveness, fairness or quality of service. In this paper, we propose a set of hardware extensions that allow GPUs to efficiently support multiprogrammed GPU workloads. We argue for preemptive multitasking and design two preemption mechanisms that can be used to implement GPU scheduling policies. We extend the architecture to allow concurrent execution of GPU kernels from different user processes and implement a scheduling policy that dynamically distributes the GPU cores among concurrently running kernels, according to their priorities. We extend the NVIDIA GK110 (Kepler) like GPU architecture with our proposals and evaluate them on a set of multiprogrammed workloads with up to eight concurrent processes. Our proposals improve execution time of high-priority processes by 15.6x, the average application turnaround time between 1.5x to 2x, and system fairness up to 3.4x <s> BIB003
GPU scheduling methods are required to fairly and effectively distribute GPU resources between tenants in a shared computing environment. However, GPU virtualization software faces several challenges in applying GPU scheduling policies, for the following reasons. First, GPUs normally do not expose how long a GPU request occupies the GPU, which creates a task accounting problem [Dwarakinath] BIB002. Second, system software often regards GPUs as I/O devices rather than full processors and hides the methods of multiplexing the GPU in the device driver. This prevents GPU virtualization software from directly imposing certain scheduling policies on GPUs BIB001. Finally, GPUs were non-preemptive until recently, meaning that a long-running GPU kernel could not be preempted by software until it finished. This causes unfairness between multiple kernels and severely degrades the responsiveness of latency-critical kernels BIB003. A new GPU architecture that supports kernel preemption has recently emerged in the market [NVIDIA 2016a], but existing GPUs are expected to continue to suffer from this issue. In this section, we introduce representative GPU scheduling policies and mechanisms proposed in the literature to address these challenges. Table III shows a comparison of representative GPU scheduling methods in the literature. We classify the methods in terms of the scheduling discipline, support for load balancing, and software platform. A scheduling discipline is the algorithm used to distribute GPU resources among processes or virtual GPUs (vGPUs). We classify the GPU scheduling methods based on a commonly used classification [Silberschatz et al. 1998] as follows:
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Classification of GPU Scheduling Methods <s> This paper proposes a simple rate-based scheduling algorithm for packet-switched networks. Using a set of counters to keep track of the credits accumulated by each traffic flow, the bandwidth share allocated to each flow, and the size of the head-of-line (HOL) packets of the different flows, the algorithm decides which flow to serve next. Our proposed algorithm requires on average a smaller complexity than the most interesting alternative ones while guaranteeing comparable fairness, delay, and delay jitter bounds. To further reduce the complexity, a simplified version (CBFQ-F) of the general algorithm is also proposed for networks with fixed packet lengths, such as ATM, by relaxing the fairness bound by a negligibly small amount. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Classification of GPU Scheduling Methods <s> Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. In order to minimize the cost of resources, it is also important to satisfy a minimum service level to customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that Saas providers are able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and handling heterogeneity of Virtual Machines. We take into account the customers' Quality of Service parameters such as response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Classification of GPU Scheduling Methods <s> Flash-based solid-state drives (SSDs) have the potential to eliminate the I/O bottlenecks in data-intensive applications. However, the large performance discrepancy between Flash reads and writes introduces challenges for fair resource usage. Further, existing fair queueing and quanta-based I/O schedulers poorly manage the I/O anticipation for Flash I/O fairness and efficiency. Some also suppress the I/O parallelism which causes substantial performance degradation on Flash. This paper develops FIOS, a new Flash I/O scheduler that attains fairness and high efficiency at the same time. FIOS employs a fair I/O timeslice management with mechanisms for read preference, parallelism, and fairness-oriented I/O anticipation. Evaluation demonstrates that FIOS achieves substantially better fairness and efficiency compared to the Linux CFQ scheduler, the SFQ(D) fair queueing scheduler, and the Argon quanta-based scheduler on several Flash-based storage devices (including a CompactFlash card in a low-power wimpy node). 
In particular, FIOS reduces the worst-case slowdown by a factor of 2.3 or more when the read-only SPECweb workload runs together with the write-intensive TPC-C. <s> BIB003
• FCFS: First-come, first-served (FCFS) serves processes or vGPUs in the order that they arrive.
• Round-robin: Round-robin is similar to FCFS but assigns a fixed time unit per process or vGPU, referred to as a time quantum, and then cycles through processes or vGPUs.
• Priority-based: Priority-based scheduling assigns a priority rank to every process or vGPU, and the scheduler executes processes or vGPUs in order of their priority.
• Fair queuing: Fair queuing is common in network and disk scheduling to attain fairness when sharing a limited resource BIB003. Fair queuing assigns start tags to processes or vGPUs and schedules them in increasing order of start tags. A start tag denotes the accumulated usage time of the GPU.
• Credit-based: Credit-based scheduling is a computationally efficient substitute for fair queuing BIB001. The scheduler periodically distributes credits to every process or vGPU, and each process or vGPU consumes credits as its requests are served on the GPU. The scheduler selects a process or vGPU with a positive credit value (a schematic example follows this classification).
• Affinity-based: This scheduling algorithm produces affinity scores for a process or vGPU to predict the performance impact when it is scheduled on a certain resource.
• Service Level Agreement (SLA)-based: An SLA is a contract between a cloud service provider and a tenant regarding Quality of Service (QoS) and price. SLA-based scheduling tries to meet the service requirement when distributing GPU resources BIB002.
Load balancing indicates whether the scheduling method supports the distribution of workloads across multiple processing units. The software platform denotes whether the scheduling method is developed in a single OS or hypervisor environment. We include GPU scheduling research performed in a single OS environment because the same techniques are also applicable to virtualized environments without significant modifications to the system software. The GPU scheduling methods in Table III will be discussed in depth in the following section.
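To make the credit-based discipline above concrete, the following sketch shows one plausible scheduler core: credits are replenished each period in proportion to per-vGPU weights, a runnable vGPU with remaining credits is selected, and it is charged afterwards for the GPU time it consumed. All constants and names are invented; in particular, obtaining the measured GPU time is exactly the accounting problem discussed above and is abstracted away here.

    // credit_scheduler_sketch.cpp -- a minimal credit-based vGPU scheduler core.
    #include <cstdint>
    #include <vector>

    struct VGpu {
        int id;
        int weight;          // relative share assigned by the administrator
        int64_t credits_us;  // remaining budget in microseconds
        bool has_work;       // whether commands are pending in its queue
    };

    constexpr int64_t PERIOD_BUDGET_US = 30000;   // GPU time handed out per period

    // Replenish credits in proportion to each vGPU's weight (capped to limit hoarding).
    void replenish(std::vector<VGpu>& vgpus) {
        int total_weight = 0;
        for (const VGpu& v : vgpus) total_weight += v.weight;
        if (total_weight == 0) return;
        for (VGpu& v : vgpus) {
            v.credits_us += PERIOD_BUDGET_US * v.weight / total_weight;
            if (v.credits_us > PERIOD_BUDGET_US) v.credits_us = PERIOD_BUDGET_US;
        }
    }

    // Pick a runnable vGPU holding positive credits (largest remaining budget first).
    VGpu* pick_next(std::vector<VGpu>& vgpus) {
        VGpu* best = nullptr;
        for (VGpu& v : vgpus)
            if (v.has_work && v.credits_us > 0 && (!best || v.credits_us > best->credits_us))
                best = &v;
        return best;   // nullptr means the GPU idles this round
    }

    // Charge the selected vGPU once its submitted commands complete.
    void account(VGpu& v, int64_t measured_gpu_time_us) {
        v.credits_us -= measured_gpu_time_us;
    }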
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Stony Brook University Libraries. ::: SBU Graduate School in Computer Science. ::: Lawrence Martin (Dean of Graduate School), Professor Tzi-cker Chiueh, Thesis Advisor ::: Computer Science Department, Professor Jennifer L.Wong ::: Computer Science Department. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> The graphics processing unit (GPU) is becoming a very powerful platform to accelerate graphics and data-paralle l compute-intensive applications. It significantly outperforms traditional multi-core processors in performance and ener gy effi ciency. Its application domains also range widely from embedded systems to high-performance computing systems. However, operating systems support is not adequate, lackin g models, designs, and implementation efforts of GPU resource management for multi-tasking environments. This paper identifies a GPU resource management model to provide a basis for operating systems research using GPU technology. In particular, we present design concepts for G PU resource management. A list of operating systems challenge s is also provided to highlight future directions of this rese arch domain, including specific ideas of GPU scheduling for realtime systems. Our preliminary evaluation demonstrates tha t the performance of open-source software is competitive wit h that of proprietary software, and hence operating systems r esearch can start investigating GPU resource management. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> The Graphics Processing Unit (GPU) is now commonly used for graphics and data-parallel computing. 
As more and more applications tend to accelerate on the GPU in multi-tasking environments where multiple tasks access the GPU concurrently, operating systems must provide prioritization and isolation capabilities in GPU resource management, particularly in real-time setups. ::: ::: We present TimeGraph, a real-time GPU scheduler at the device-driver level for protecting important GPU workloads from performance interference. TimeGraph adopts a new event-driven model that synchronizes the GPU with the CPU to monitor GPU commands issued from the user space and control GPU resource usage in a responsive manner. TimeGraph supports two priority-based scheduling policies in order to address the tradeoff between response times and throughput introduced by the asynchronous and non-preemptive nature of GPU processing. Resource reservation mechanisms are also employed to account and enforce GPU resource usage, which prevent misbehaving tasks from exhausting GPU resources. Prediction of GPU command execution costs is further provided to enhance isolation. ::: ::: Our experiments using OpenGL graphics benchmarks demonstrate that TimeGraph maintains the frame-rates of primary GPU tasks at the desired level even in the face of extreme GPU workloads, whereas these tasks become nearly unresponsive without TimeGraph support. Our findings also include that the performance overhead imposed on TimeGraph can be limited to 4-10%, and its event-driven scheduler improves throughput by about 30 times over the existing tick-driven scheduler. <s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique to enhance the computation of parallel programs. Applying this technique to real-time applications, however, requires additional support for timeliness of execution. In particular, the non-preemptive nature of GPGPU, associated with copying data to/from the device memory and launching code onto the device, needs to be managed in a timely manner. In this paper, we present a responsive GPGPU execution model (RGEM), which is a user-space runtime solution to protect the response times of high-priority GPGPU tasks from competing workload. RGEM splits a memory-copy transaction into multiple chunks so that preemption points appear at chunk boundaries. It also ensures that only the highest-priority GPGPU task launches code onto the device at any given time, to avoid performance interference caused by concurrent launches. A prototype implementation of an RGEM-based CUDA runtime engine is provided to evaluate the real-world impact of RGEM. Our experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas their response times increase in an unbounded fashion without RGEM support, as the data sizes of competing workload increase. <s> BIB005 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Recent windowing systems allow graphics applications to directly access the graphics processing unit (GPU) for fast rendering. However, application tasks that render frames on the GPU contend heavily with the windowing server that also accesses the GPU to blit the rendered frames to the screen. This resource-sharing nature of direct rendering introduces core challenges of priority inversion and temporal isolation in multi-tasking environments. 
In this paper, we identify and address resource-sharing problems raised in GPU-accelerated windowing systems. Specifically, we propose two protocols that enable application tasks to efficiently share the GPU resource in the X Window System. The Priority Inheritance with X server (PIX) protocol eliminates priority inversion caused in accessing the GPU, and the Reserve Inheritance with X server (RIX) protocol addresses the same problem for resource-reservation systems. Our design and implementation of these protocols highlight the fact that neither the X server nor user applications need modifications to use our solutions. Our evaluation demonstrates that multiple GPU-accelerated graphics applications running concurrently in the X Window System can be correctly prioritized and isolated by the PIX and the RIX protocols. <s> BIB006 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> GPGPU (General-purpose computing on graphics processing units) has several difficulties when used in cloud environment, such as narrow bandwidth, higher cost, and lower security, compared with computation using only CPUs. Most high performance computing applications require huge communication between nodes, and do not fit a cloud environment, since network topology and its bandwidth are not fixed and they affect the performance of the application program. However, there are some applications for which little communication is needed, such as molecular dynamics (MD) simulation with the replica exchange method (REM). For such applications, we propose DS-CUDA (Distributed-shared compute unified device architecture), a middleware to use many GPUs in a cloud environment with lower cost and higher security. It virtualizes GPUs in a cloud such that they appear to be locally installed GPUs in a client machine. Its redundant mechanism ensures reliable calculation with consumer GPUs, which reduce the cost greatly. It also enhances the security level since no data except command and data for GPUs are stored in the cloud side. REM-MD simulation with 64 GPUs showed 58 and 36 times more speed than a locally-installed GPU via InfiniBand and the Internet, respectively. <s> BIB007 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> GPGPUs (General Purpose Graphic Processing Units) provide massive computational power. However, applying GPGPU technology to real-time computing is challenging due to the non-preemptive nature of GPGPUs. Especially, a job running in a GPGPU or a data copy between a GPGPU and CPU is non-preemptive. As a result, a high priority job arriving in the middle of a low priority job execution or memory copy suffers from priority inversion. To address the problem, we present a new lightweight approach to supporting preemptive memory copies and job executions in GPGPUs. Moreover, in our approach, a GPGPU job and memory copy between a GPGPU and the hosting CPU are run concurrently to enhance the responsiveness. To show the feasibility of our approach, we have implemented a prototype system for preemptive job executions and data copies in a GPGPU. The experimental results show that our approach can bound the response times in a reliable manner. 
In addition, the response time of our approach is significantly shorter than those of the unmodified GPGPU runtime system that supports no preemption and an advanced GPGPU model designed to support prioritization and performance isolation via preemptive data copies. <s> BIB008 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB009 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Graphics processing units (GPUs) are being widely used as co-processors in many application domains to accelerate general-purpose workloads that are computationally intensive, known as GPGPU computing. Real-time multi-tasking support is a critical requirement for many emerging GPGPU computing domains. However, due to the asynchronous and non-preemptive nature of GPU processing, in multi-tasking environments, tasks with higher priority may be blocked by lower priority tasks for a lengthy duration. This severely harms the system’s timing predictability and is a serious impediment limiting the applicability of GPGPU in many real-time and embedded systems. In this paper, we present an efficient GPGPU preemptive execution system (GPES), which combines user-level and driverlevel runtime engines to reduce the pending time of high-priority GPGPU tasks that may be blocked by long-freezing low-priority competing workloads. GPES automatically slices a long-running kernel execution into multiple subkernel launches and splits data transaction into multiple chunks at user-level, then inserts preemption points between subkernel launches and memorycopy operations at driver-level. We implement a prototype of GPES, and use real-world benchmarks and case studies for evaluation. Experimental results demonstrate that GPES is able to reduce the pending time of high-priority tasks in a multitasking environment by up to 90% over the existing GPU driver solutions, while introducing small overheads. 
<s> BIB010 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Accelerators, such as Graphic Processing Units (GPUs), are popular components of modern parallel systems. Their energy-efficient performance make them attractive components for modern data center nodes. However, they lack control for fair resource sharing amongst multiple users. This paper presents a runtime and Just In Time compiler that enable resource sharing control and software managed scheduling on accelerators. It is portable and transparent, requiring no modification or recompilation of existing systems or user applications. We provide an extensive evaluation of our scheme with over 40,000 different workloads on 2 platforms and we deliver fairness improvements ranging from 6.8x to 13.66x. In addition, we also deliver system throughput speedups ranging from 1.13x to 1.31x. <s> BIB011
7.2.1. Single OS Environment. GERM BIB002 is a GPU scheduling policy that utilizes Deficit Round Robin fair queuing, which is a network scheduler for switching packets with multiple flows. GERM maintains per-process queues for GPU commands and allows each queue to send commands to the GPU during a predefined time quantum. A queue's deficit or surplus time compared to the time quantum will be compensated or reimbursed in the next round. This scheme is suitable for non-preemptive GPUs where a GPU request cannot be preempted and the size of each request can vary significantly. Regarding the accounting of each request, GERM cannot measure the request size exactly because GPUs generally do not interrupt the CPU after a request is processed. Therefore, it adopts heuristics to estimate how long a group of commands will occupy the GPU on average. GERM injects a special GPU command that increments a scratch register containing the number of processed requests in the GPU. By reading this register periodically, GERM infers how much time is taken by a GPU command. TimeGraph BIB003 BIB004 focuses on GPU scheduling for soft real-time multi-tasking environments. It provides two scheduling policies: Predictable-Response-Time (PRT) and High-Throughput (HT). The PRT policy schedules GPU applications based on their priorities, so important tasks can expect predictable response times. When a group of GPU commands is issued by a process, the group is buffered in the wait queue, which resides in kernel space. TimeGraph configures the GPU to generate an interrupt to the CPU after each group's execution is completed. This is enabled by using pscnv [PathScale 2012], an open-source NVIDIA GPU driver. The PRT scheduler is triggered by each interrupt and fetches the highest-priority group from the wait queue. As the scheduler is invoked every time a group of GPU commands finishes its execution, it incurs non-negligible overhead. The HT scheduler addresses this issue by allowing the current task occupying the GPU to execute its following groups without buffering into the wait queue, when there are no other higher-priority groups waiting. RGEM BIB005 develops a responsive GPGPU execution model for GPGPU tasks in real-time multi-tasking environments, similarly to TimeGraph BIB004. RGEM introduces two scheduling methods: Memory-Copy Transaction scheduling and Kernel Launch scheduling. The former policy splits a large memory copy operation into several small pieces and inserts preemption points between the separate pieces. This prevents a long-running memory copy operation from occupying the GPU boundlessly, which would block the execution of high-priority tasks. The latter policy follows the scheduling algorithm of the PRT scheduler in TimeGraph, except that Kernel Launch scheduling is implemented in user space. PIX BIB006 applies TimeGraph BIB004 to GPU-accelerated X Window systems. When employing the PRT scheduler in TimeGraph, PIX solves a form of the priority inversion problem where the X server task (X) with low priority can be preempted by a medium-priority task (A) on the GPU while rendering the frames of a high-priority task (B) (i.e., P_B > P_A > P_X). The high-priority task is then blocked for a long time while the X server task deals with the frames of the medium-priority task. PIX suggests a priority inheritance protocol where the X server inherits the priority of a certain task while rendering that task's frames.
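To make the priority-based policies above concrete, the following minimal Python simulation sketches a PRT-style priority wait queue together with a PIX-style priority-inheritance adjustment. It is an illustrative sketch only: the class and function names, priority values, and cost fields are assumptions and are not taken from the TimeGraph or PIX implementations.

```python
import heapq
import itertools

# Minimal simulation of a PRT-style priority wait queue with a PIX-style
# priority-inheritance adjustment. Names and structure are illustrative only.

_seq = itertools.count()  # tie-breaker so equal priorities keep submission order

class CommandGroup:
    def __init__(self, task, priority, cost):
        self.task = task          # issuing task name
        self.priority = priority  # larger value = more important
        self.cost = cost          # estimated GPU time (ms)

class PriorityWaitQueue:
    def __init__(self):
        self._heap = []  # entries: (-effective_priority, submission order, group)

    def submit(self, group, inherited_priority=None):
        # PIX-style inheritance: when the X server renders frames on behalf of
        # a client task, it temporarily adopts that client's priority.
        prio = inherited_priority if inherited_priority is not None else group.priority
        heapq.heappush(self._heap, (-prio, next(_seq), group))

    def dispatch_next(self):
        # Called from the GPU-completion interrupt in a PRT-style scheduler:
        # pick the waiting group with the highest effective priority.
        if not self._heap:
            return None
        _, _, group = heapq.heappop(self._heap)
        return group

if __name__ == "__main__":
    q = PriorityWaitQueue()
    q.submit(CommandGroup("task_A", priority=2, cost=5))            # medium priority
    q.submit(CommandGroup("X_server", priority=1, cost=3),
             inherited_priority=9)                                   # renders for task B
    q.submit(CommandGroup("task_B", priority=9, cost=4))             # high priority
    while (g := q.dispatch_next()) is not None:
        print(f"run {g.task} for ~{g.cost} ms")
```

Without the inherited priority, the X server's command group would be dispatched last, so the high-priority task's frames would wait behind the medium-priority task, which is exactly the inversion described above.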
Such a priority-inheritance protocol eliminates the priority inversion problem raised by the existence of the additional X server task. Gdev introduces a bandwidth-aware non-preemptive device (BAND) scheduling algorithm. The authors found that the Credit scheduler BIB001 fails to achieve good fairness in GPU scheduling because the Credit scheduler assumes that it will run preemptive CPU workloads, whereas GPUs do not support hardware-based preemption. To address this issue, Gdev performs two heuristic modifications to the Credit scheduler. First, the BAND scheduler does not degrade the priority of a GPU task after the credit value of the task becomes zero. Instead, the BAND scheduler lowers the priority when the task's actual utilization exceeds the assigned one. This modification compensates for credit errors caused by non-preemptive executions. Second, the BAND scheduler waits for the completion of a task's GPU kernels and assigns a credit value to the task based on its GPU usage. This modification contributes to fairer resource allocations. Disengaged scheduling BIB009 provides a framework for scheduling GPUs and introduces three algorithms to achieve both high fairness and high utilization. The framework endeavors to employ the original NVIDIA driver and libraries; it uses neither the API remoting approach nor a custom GPU driver to mediate GPU calls. The framework makes the GPU MMIO region of each task read-only so every GPU access can generate a page fault. The OS then intercepts and buffers GPU calls in kernel space. Disengaged scheduling offers three scheduling policies. First, the Timeslice with Overuse Control scheduling algorithm implements a standard token-based time slice policy. A token is passed to a certain task and the task can use the GPU during its time slice. The scheduler accounts for overuse by waiting for all submitted requests of the token holder to be completed at the end of each time slice. Since the GPU requests of both the token holder and other tasks generate page faults, this policy causes significant overhead due to frequent trapping to the OS. In addition, it is not work-conserving because the GPU can be underutilized if applications are not GPU-intensive. Second, Disengaged Timeslice reduces this overhead by allowing the token holder to issue GPU commands without buffering in kernel space. However, this scheduling is still not work-conserving. Finally, Disengaged Fair Queueing executes several tasks concurrently without trapping in the common case. Only during the accounting period does the scheduler enable the trapping mechanism and run each task sequentially. In this period, the scheduler samples the request size of each task and feeds this information to fair queuing to approximate each task's cumulative GPU usage. The scheduler then selects several tasks that have low start tags to run without trapping until the next accounting period. The scheduler is work-conserving because several tasks can exploit the GPU simultaneously; from the Kepler microarchitecture onward, NVIDIA allows multiple GPU kernels from different tasks to run concurrently [NVIDIA 2012]. 7.2.2. Virtualization Environment. GViM uses both simple Round-Robin scheduling and the Credit scheduling of the Xen hypervisor for scheduling tasks on GPUs. As GViM operates on top of the driver level, GViM controls the rate of GPU request submissions for scheduling before the requests reach the driver. GViM implements Round Robin (RR)- and XenoCredit (XC)-based scheduling.
RR selects a vGPU sequentially for every fixed time slice and monitors the vGPU's call buffer during the period. XC uses the concept of credit, which represents the allocated GPU time of each vGPU. XC processes the vGPU's call buffer for a variable time in proportion to the credit amount, which enables weighted fair sharing between guest VMs. Pegasus addresses one of the challenges in GPU scheduling: a GPU virtualization framework cannot impose a scheduling policy on GPUs because the method of GPU multiplexing is hidden in the device driver. Pegasus introduces the concept of an accelerator VCPU (aVCPU) to make GPUs basic schedulable entities; the components of an aVCPU are discussed in Section 4.2. Pegasus focuses on satisfying different application requirements by providing diverse methods for scheduling GPUs. Pegasus includes first-come, first-served (FCFS), proportional fair-share (AccCredit), strict co-scheduling (CoSched), augmented credit-based (AugC), and SLA feedback-based (SLAF) schedulers. AccCredit adapts the Credit scheduling concept in Xen for GPU scheduling. CoSched applies co-scheduling for barrier-rich parallel applications where a VCPU of a VM and its corresponding aVCPU frequently synchronize with each other. CoSched forces both entities (i.e., the VCPU and its corresponding aVCPU) to be executed at the same time to address synchronization bottlenecks. However, this strict co-scheduling policy can hamper fairness between multiple VMs. AugC conditionally co-schedules both entities to achieve better fairness, only when the target VCPU has enough credits and can lend its credits to its corresponding aVCPU. SLAF applies feedback-based proportional fair-share scheduling. The scheduler periodically monitors Service-Level Objective (SLO) violations through a feedback controller and compensates each domain by giving extra time when a violation is detected. A related effort tackled the problem that a single GPU application sometimes cannot have enough parallelism to fully utilize a modern GPU. To increase overall GPU utilization, the authors try to consolidate multiple GPU kernels from different VMs in space and time. Space sharing co-schedules kernels that do not use all streaming multiprocessors (SMs) in the GPU. Time sharing allows more than one kernel to share the same SM if the cumulative resource requirements do not exceed the capability of the SM. Because the NVIDIA Fermi-based GPU used in this research only allows a set of kernels submitted from a single process to be executed concurrently, the authors let GPU kernels from different VMs be handled by a single thread. The scheduler then computes an affinity score between every two kernels to predict the performance improvement when they are space and time shared. In addition, the scheduler calculates potential affinity scores when they are space and time shared with different numbers of thread blocks and threads. The scheduler then selects n kernels to run based on the set of affinity scores. GPUvm employs the BAND scheduler of Gdev and solves a flaw of Credit scheduling. The original BAND scheduler distributes credits to each VM based on the assumption that the total utilization of all vGPUs can reach 100%. However, when the GPU scheduler is active, the GPU can temporarily become idle. This situation causes each vGPU to accumulate unused credit, which may lead to inopportune scheduling decisions.
To address this issue, GPUvm first transforms the CPU time that the GPU scheduler occupies into a credit value and then subtracts that value from the total credit value of the current vGPU. gVirt implements a coarse-grained QoS policy. gVirt allows GPU commands from a VM to be submitted into the guest ring buffer during the VM's time slice. After the time slice, gVirt waits for the ring buffer to be emptied by the GPU, because the GPU is non-preemptive. To minimize this wait period, gVirt develops a coarse-grained flow control method, which ensures that the total length of submitted commands fits within a time slice. gVirt also implements a gang scheduling policy where dependent graphics engines are scheduled together. The graphics engines in gVirt use semaphores to synchronize accesses to shared data. To eliminate synchronization bottlenecks, gVirt schedules the related engines at the same time. VGRIS tries to address GPU scheduling issues for gaming applications deployed in cloud computing. VGRIS introduces three scheduling policies to meet different performance requirements. The SLA-aware scheduling policy provides just the minimum GPU resources to each VM to satisfy its SLA requirement. The authors observe that a fair scheduling policy provides resources evenly under contention, but non-GPU-intensive applications may obtain more resources than necessary while GPU-intensive ones may not satisfy their requirements. SLA-aware scheduling slows the execution speed of fast-running applications (i.e., non-GPU-intensive applications) so other, slower applications can get more chances to occupy the GPU. For this purpose, it inserts a sleep call at the end of the frame computation code of fast-running applications before the frame is displayed. However, SLA-aware scheduling may lead to low GPU utilization when only a small number of VMs is available. The Proportional-share scheduling policy addresses this issue by distributing GPU resources fairly using the priority-based scheduling policy of TimeGraph BIB004. Finally, the Hybrid scheduling policy combines SLA-aware scheduling and Proportional-share scheduling. Hybrid scheduling first applies SLA-aware scheduling and switches to Proportional-share scheduling if a resource surplus is available. VGASA advances VGRIS by providing adaptive scheduling algorithms, which employ a dynamic feedback control loop using a proportional-integral (PI) controller BIB007. Similarly to VGRIS, VGASA provides three scheduling policies. SLA-Aware (SA) receives frames-per-second (FPS) information from the feedback controller and adjusts the length of the sleep time in the frame computation code to meet the predefined SLA requirement (i.e., a rate of 30 FPS). Fair SLA-Aware (FSA) dispossesses fast-running applications of their GPU resources and redistributes the resources to slow-running ones. Enhanced SLA-Aware (ESA) allows all VMs to have the same FPS rate under the maximum GPU utilization. ESA improves SA by dynamically calculating the SLA requirement at runtime. ESA can address a tradeoff between deploying more applications and providing smoother experiences. gScale optimizes the GPU scheduler of gVirt. gScale develops private shadow GTTs to address the scalability issue explained in Section 5.2. However, applying private GTTs requires page table copying upon every context switch. To mitigate this overhead, gScale does not perform context switching for idle vGPUs.
Furthermore, it implements slot sharing, which divides the high graphics memory into several slots and dedicates a single slot to each vGPU. gScale's scheduler distributes busy vGPUs across the slots so that each busy vGPU can monopolize its own slot. This arrangement can decrease the amount of page table entry copying. The recent NVIDIA Pascal architecture [NVIDIA 2016a] implements hardware-based preemption to address the problem of long-running GPU kernels monopolizing the GPU. This situation can cause unfairness between multiple kernels and significantly degrade system responsiveness. Existing GPU scheduling methods address this issue by either killing a long-running kernel BIB009 or providing a kernel split tool BIB008 BIB010 BIB011. The Pascal architecture allows GPU kernels to be interrupted at instruction-level granularity by saving and restoring each GPU context to and from the GPU's DRAM.
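The kernel- and transfer-slicing approach used by RGEM and GPES to create preemption points can be illustrated with a short sketch. The following Python simulation is only a schematic of the idea; the chunk size, function names, and the priority check are assumptions rather than code from those systems.

```python
# Sketch of inserting preemption points between chunks of a long operation,
# in the spirit of RGEM's memory-copy splitting and GPES's kernel slicing.
# Plain Python simulation with illustrative names and sizes.

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk (assumed value)

def chunked_copy(data, copy_chunk, higher_priority_waiting):
    """Copy `data` in bounded chunks, yielding control between chunks.

    copy_chunk(chunk)         -- performs one bounded copy operation
    higher_priority_waiting() -- True if a more important task is queued
    """
    offset, total = 0, len(data)
    while offset < total:
        end = min(offset + CHUNK_SIZE, total)
        copy_chunk(data[offset:end])
        offset = end
        if offset < total and higher_priority_waiting():
            yield offset   # preemption point: urgent work can run here
    yield total            # transfer complete

if __name__ == "__main__":
    payload = bytes(10 * 1024 * 1024)  # 10 MiB dummy transfer
    for copied in chunked_copy(payload,
                               copy_chunk=lambda c: None,           # stand-in for the real copy
                               higher_priority_waiting=lambda: True):
        status = "complete" if copied == len(payload) else "preemption point"
        print(f"{status} after {copied} bytes copied")
```

In the real systems this splitting happens at the runtime or driver level for DMA transfers and kernel launches, whereas the hardware preemption introduced with Pascal removes the need for software slicing at the cost of saving and restoring context to GPU DRAM.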
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> CHALLENGES AND FUTURE DIRECTIONS <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> CHALLENGES AND FUTURE DIRECTIONS <s> Accelerated architectures such as GPUs (Graphics Processing Units) and MICs (Many Integrated Cores) have been proven to increase the performance of many algorithms compared to their CPU counterparts and are widely available in local, campus-wide and national infrastructures, however, their utilization is not following the same pace as their deployment. Reasons for the underutilization lay partly on the software side with proprietary and complex interfaces for development and usage. A common API providing an extra layer to abstract the differences and specific characteristics of those architectures would deliver a far more portable interface for application developers. This cloud challenge proposal presents such an API that addresses these issues using a container-based approach. The resulting environment provides Docker-based containers for deploying accelerator libraries, such as CUDA Toolkit, OpenCL and OpenACC, onto a wide variety of different platforms and operating systems. By leveraging the container approach, we can overlay accelerator libraries onto the host without needing to be concerned about the intricacies of underlying operating system of the host. Docker therefore provides the advantage of being easily applicable on diverse architectures, virtualizing the necessary environment and including libraries as well as applications in a standardized way. The novelty of our approach is the extra layer for utilization and device discovery in this layer improving the usability and uniform development of accelerated methods with direct access to resources. <s> BIB002
Through the analysis of existing GPU virtualization techniques, we conclude that the technical challenges of how to virtualize GPUs have been addressed to a significant extent. However, a number of challenges remain open in terms of the performance and capabilities of GPU virtualization environments. We discuss them in this section, along with some future research directions to address the challenges. Lightweight virtualization: Linux-based containers are an emerging cloud technology that offers process-level lightweight virtualization. Containers do not require additional wrapper libraries or front/backend driver models to virtualize GPUs, because multiple containers are multiplexed by a single Linux kernel BIB002. This feature allows containers to achieve performance that is close to that of native environments. Unfortunately, current research on GPU virtualization using containers is at an initial stage. Published work so far mainly includes performance comparisons between containers and other virtualization solutions. To utilize GPU-equipped containers in cloud computing, fair and effective GPU scheduling is required. Most GPU schedulers require API extensions or driver changes in containers to mediate GPU calls, which would impose non-negligible overhead on containers. One promising option is to adapt Disengaged scheduling BIB001 in the host OS, which needs neither additional wrapper libraries nor custom drivers for GPU scheduling, as explained in Section 7.2.1.
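As a thought experiment for the container case, a disengaged-style host component could let containers submit GPU work untrapped most of the time and only periodically sample per-container usage to decide which containers may submit in the next interval. The Python sketch below simulates that accounting loop; the container names, sampling period, usage source, and selection rule are all assumptions for illustration and do not correspond to an existing scheduler or API.

```python
import random
import time

# Toy simulation of a "disengaged"-style accounting loop for containers
# sharing one GPU: containers run untrapped most of the time, and the host
# briefly engages to sample usage and rebalance access. All names and
# numbers are assumptions.

SAMPLE_PERIOD_S = 0.1                                   # length of one untrapped interval
containers = {"web": 0.0, "training": 0.0, "render": 0.0}  # cumulative GPU usage (s)

def sample_gpu_usage(name):
    # Stand-in for reading per-container GPU time from the host/driver.
    return random.uniform(0.0, SAMPLE_PERIOD_S)

def pick_allowed(usage, k=2):
    # Fair-queueing flavour: favour containers with the least cumulative usage.
    return sorted(usage, key=usage.get)[:k]

for _ in range(5):
    time.sleep(SAMPLE_PERIOD_S)                  # disengaged interval (no trapping)
    for name in containers:                      # brief engaged accounting phase
        containers[name] += sample_gpu_usage(name)
    allowed = pick_allowed(containers)
    print("next interval:", allowed,
          "usage:", {k: round(v, 3) for k, v in containers.items()})
```

The point of the sketch is only the structure: long untrapped intervals, a brief engaged accounting phase, and a fair-queueing-style selection that favours containers with the least cumulative usage.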
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> As virtual machines become pervasive users will be able to create, modify and distribute new "machines" with unprecedented ease. This flexibility provides tremendous benefits for users. Unfortunately, it can also undermine many assumptions that today's relatively static security architectures rely on about the number of hosts in a system, their mobility, connectivity, patch cycle, etc. ::: ::: We examine a variety of security problems virtual computing environments give rise to. We then discuss potential directions for changing security architectures to adapt to these demands. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Haswell, Intel's fourth-generation core processor architecture, delivers a range of client parts, a converged core for the client and server, and technologies used across many products. It uses an optimized version of Intel 22-nm process technology. Haswell provides enhancements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Recent years have witnessed phenomenal growth in the computational capabilities and applications of GPUs. However, this trend has also led to a dramatic increase in their power consumption. This article surveys research works on analyzing and improving energy efficiency of GPUs. It also provides a classification of these techniques on the basis of their main research idea. Further, it attempts to synthesize research works that compare the energy efficiency of GPUs with other computing systems (e.g., FPGAs and CPUs). The aim of this survey is to provide researchers with knowledge of the state of the art in GPU power management and motivate them to architect highly energy-efficient GPUs of tomorrow. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Unified Memory is an emerging technology which is supported by CUDA 6.X. Before CUDA 6.X, the existing CUDA programming model relies on programmers to explicitly manage data between CPU and GPU and hence increases programming complexity. CUDA 6.X provides a new technology which is called as Unified Memory to provide a new programming model that defines CPU and GPU memory space as a single coherent memory (imaging as a same common address space). The system manages data access between CPU and GPU without explicit memory copy functions. This paper is to evaluate the Unified Memory technology through different applications on different GPUs to show the users how to use the Unified Memory technology of CUDA 6.X efficiently. The applications include Diffusion3D Benchmark, Parboil Benchmark Suite, and Matrix Multiplication from the CUDA SDK Samples. We changed those applications to corresponding Unified Memory versions and compare those with the original ones. We selected the NVIDIA Keller K40 and the Jetson TK1, which can represent the latest GPUs with Keller architecture and the first mobile platform of NVIDIA series with Keller GPU. This paper shows that Unified Memory versions cause 10% performance loss on average. Furthermore, we used the NVIDIA Visual Profiler to dig the reason of the performance loss by the Unified Memory technology. 
<s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Although GPUs consume large amounts of power, their use for high-throughput applications facilitate state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. This work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. As direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Last, possible directions for future research are discussed. <s> BIB005
A critical function of the hypervisor is to provide secure isolation between VMs BIB001. To fulfill this task, para and full virtualization frameworks, including LoGV, GPUvm, gVirt, and gScale, prevent a VM from mapping the GPU address spaces of other VMs. Despite this protection mechanism, GPU virtualization frameworks remain vulnerable to denial-of-service (DoS) attacks where a malicious VM continuously submits a massive number of GPU commands to the backend and thus jeopardizes the whole system. To address this issue, gVirt resets hung GPUs and kills suspicious VMs after examining each VM's execution state. Unfortunately, this can cause service suspension for normal VMs. To avoid a GPU reset, a fine-grained access control mechanism is required that can slow down the execution of a malicious VM before the VM threatens the system. Methods that adopt API remoting, including vCUDA and VOCL, do not implement isolation mechanisms and their security features need to be reinforced. Fused CPU-GPU chips: Conventional systems with discrete GPUs have two major disadvantages: (1) data transfer overhead over the PCIe interface, which offers a low maximum bandwidth capacity (i.e., 16GB/s), and (2) programming effort to manage the separate data address spaces of the CPU and the GPU. To address these issues, fused CPU-GPU chips furnish shared memory space between the two processors. Examples include Intel's integrated CPU-GPU BIB002, AMD's HSA architecture, and NVIDIA's unified memory coupled with NVLink. These new architectures can boost the performance of big data applications that require a significant communication volume between the two processors. gVirt (Section 5.2) implemented full virtualization for Intel's GPUs, while another framework (Section 5.1) developed a para virtualization solution for AMD's fused chips. However, these frameworks only focus on utilizing GPUs and need to adopt sophisticated scheduling algorithms that can utilize both processors by partitioning and load-balancing workloads differently for fused CPU-GPU architectures. Other work explored NVIDIA's unified memory to simplify memory management in GPU virtualization. However, the sole use of unified memory incurs non-negligible performance degradation in data-intensive applications BIB004 because NVIDIA maintains its discrete GPU design and automatically migrates data between the host and the GPU. NVLink enables a high-bandwidth path between the GPU and the CPU (achieving between 80 and 200GB/s of bandwidth). A combination of NVIDIA's unified memory and NVLink is required to achieve high performance for data-intensive applications in GPU virtualization. Power efficiency: Energy efficiency is currently a high research priority for GPU platforms BIB003 BIB005. Compared to a significant volume of research studying GPU power in non-virtualized environments, there is little work related to power and energy consumption studies in virtualized environments with GPUs. One example is pVOCL, which improves the energy efficiency of a remote GPU cluster by controlling peak power consumption between GPUs. Besides controlling power consumption of GPUs remotely, power efficiency is also required at the host side. Runtime systems that monitor different GPU usage patterns among VMs and dynamically adjust the GPU's power state according to the workload are an open area for further research. Space sharing: Recent GPUs allow multiple processes or VMs to launch GPU kernels on a single GPU simultaneously [NVIDIA 2012].
This space-multiplexing approach can improve GPU utilization by fully exploiting SMs with multiple kernels. However, most GPU scheduling methods are based on time-multiplexing where GPU kernels from different VMs run in sequence on a GPU, which can lead to underutilization. A combination of the two approaches is required to achieve both high GPU utilization and fairness in GPU scheduling.
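One way to combine the two approaches is a simple admission check: co-schedule kernels for space sharing while their aggregate resource demands fit on the GPU, and defer the rest to later time slices. The Python sketch below illustrates this idea under assumed resource figures; the class, numbers, and packing rule are illustrative and not taken from any surveyed scheduler.

```python
from dataclasses import dataclass

# Illustrative sketch of combining space sharing (run kernels together while
# their demands fit on the GPU) with time sharing (queue the overflow for the
# next slice). Resource figures are assumptions, not vendor specifications.

@dataclass
class Kernel:
    vm: str
    sms: int        # streaming multiprocessors requested
    mem_mb: int     # device memory requested

GPU_SMS = 56
GPU_MEM_MB = 16 * 1024

def schedule(pending):
    """Greedily pack kernels for concurrent execution; defer the overflow."""
    free_sms, free_mem = GPU_SMS, GPU_MEM_MB
    co_run, next_slice = [], []
    for k in sorted(pending, key=lambda k: k.sms):   # small kernels first
        if k.sms <= free_sms and k.mem_mb <= free_mem:
            co_run.append(k)
            free_sms -= k.sms
            free_mem -= k.mem_mb
        else:
            next_slice.append(k)                     # time-shared later
    return co_run, next_slice

if __name__ == "__main__":
    pending = [Kernel("vm1", 20, 2048), Kernel("vm2", 48, 4096), Kernel("vm3", 8, 512)]
    now, later = schedule(pending)
    print("space-shared now:", [k.vm for k in now])
    print("time-shared next slice:", [k.vm for k in later])
```

Greedy packing alone can starve large kernels, so a practical scheduler would still need the fairness accounting (credits or fair queueing) discussed in Section 7.2 on top of such an admission check.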
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Introduction <s> Pervasive systems must offer an open, extensible, and evolving portfolio of services which integrate sensor data from a diverse range of sources. The core challenge is to provide appropriate and consistent adaptive behaviours for these services in the face of huge volumes of sensor data exhibiting varying degrees of precision, accuracy and dynamism. Situation identification is an enabling technology that resolves noisy sensor data and abstracts it into higher-level concepts that are interesting to applications. We provide a comprehensive analysis of the nature and characteristics of situations, discuss the complexities of situation identification, and review the techniques that are most popularly used in modelling and inferring situations from sensor data. We compare and contrast these techniques, and conclude by identifying some of the open research opportunities in the area. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Introduction <s> It is essential for environments that aim at helping people in their daily life that they have some sort of Ambient Intelligence. Learning the preferences and habits of users then becomes an important step in allowing a system to provide such personalized services. Thus far, the exploration of these issues by the scientific community has not been extensive, but interest in the area is growing. Ambient Intelligence environments have special characteristics that have to be taken into account during the learning process. We identify these characteristics and use them to highlight the strengths and weaknesses of developments so far, providing direction to encourage further development in this specific area of Ambient Intelligence. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Introduction <s> Commercial home automation systems are becoming increasingly common, affording the opportunity to study technology-augmented homes in real world contexts. In order to understand how these technologies are being integrated into homes and their effects on inhabitants, we conducted a qualitative study involving smart home professionals who provide such technology, people currently in the process of planning or building smart homes, and people currently living in smart homes. We identified motivations for bringing smart technology into homes, and the phases involved in making a home smart. We also explored the varied roles of the smart home inhabitants that emerged during these phases, and several of the challenges and benefits that arise while living in a smart home. Based on these findings we propose open areas and new directions for smart home research. <s> BIB003
The progress of information and communication technologies has many faces; while the computing speed, reliability, and level of miniaturization of electronic devices increase year after year, their costs decrease. This allows widespread adoption of embedded systems (e.g., appliances, sensors, actuators) and of powerful computing devices (e.g., laptops, smartphones), thus turning pervasive (or ubiquitous) computing into reality. Pervasive computing embodies a vision of computers seamlessly integrating into everyday life, responding to information provided by sensors in the environment, with little or no direct instruction from users BIB001. At the same time, connecting all these computing devices together, as networked artefacts, using local and global network infrastructures has become easy. The rise of applications that exploit these technologies represents a major characteristic of the Internet-of-Things (IoT). Smart spaces represent an emerging class of IoT-based applications. Smart homes and offices are representative examples where pervasive computing could take advantage of ambient intelligence (AmI) more easily than in other scenarios, where artificial intelligence (AI) problems soon become intractable. A study about the current level of adoption of commercial smart home systems is provided in BIB003. This study reveals that people's understanding of the term "smart" is more general than what we presented here as AmI; in particular, it also includes non-technological aspects such as the spatial layout of the house. Additionally, an automated behavior is considered smart, especially by people without a technical background, only if it performs a task more quickly than the users could do it themselves. The research also reveals that interest in smart home systems is subject to a virtuous circle such that people experiencing benefits from their services feel the need to upgrade them. Figure 1 depicts the closed loops that characterize a running smart space BIB002. The main closed loop, depicted using solid arrows and shapes, shows how the knowledge of environment dynamics and of users' behaviors and preferences is employed to interpret sensor output in order to perform appropriate actions on the environment. Sensor data is first analyzed to extract the current context, which is an internal abstraction of the state of the environment from the point of view of the AmI system. The extracted context is then employed to make decisions on the actions to perform on the controlled space. Actions related to these decisions modify the environment (both physical and digital) by means of actuators of different forms.
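The sense-interpret-decide-act loop just described can be summarized in a short skeleton. The Python sketch below only illustrates the structure of the closed loop in Figure 1; the function names, sensor readings, and the single example rule are assumptions and are not part of any system discussed in this survey.

```python
# Skeleton of the AmI closed loop from Figure 1: sensors -> context ->
# decision -> actuation. All functions and the example rule are illustrative.

def read_sensors():
    # Stand-in for real sensor drivers (motion, door contacts, light meters, ...).
    return {"living_room_motion": True, "lux": 40, "hour": 21}

def extract_context(readings):
    # Abstract raw readings into the higher-level context the AmI system reasons about.
    return {
        "occupied": readings["living_room_motion"],
        "dark": readings["lux"] < 100,
        "evening": 18 <= readings["hour"] <= 23,
    }

def decide(context, preferences):
    # Combine context with learned user habits/preferences to choose actions.
    actions = []
    if context["occupied"] and context["dark"] and preferences.get("auto_light", True):
        actions.append(("living_room_light", "on"))
    return actions

def actuate(actions):
    for device, command in actions:
        print(f"actuator: set {device} -> {command}")   # stand-in for real actuators

if __name__ == "__main__":
    prefs = {"auto_light": True}          # would normally be learned from observed habits
    ctx = extract_context(read_sensors())
    actuate(decide(ctx, prefs))
```

The habit modeling and mining techniques surveyed in the remainder of this work are what would populate the preferences used by decide() and refine the context abstraction over time.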
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> We characterize situations as constraints on sensor readings expressed in rules. We also introduce an extension of Prolog which we call LogicCAP for programming context-aware applications, where situations are first-class entities. The operator "in-situation" in the language captures a common form of reasoning in context-aware applications, which is to ask if an entity is in a given situation. We show the usefulness of our approach via programming idioms, including defining relations among situations and integration with the Web. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Technological advancements have and will revolutionise the support offered to persons in their home environment. As the population continues to grow and in addition the percentage of elderly within the population increases we now face the challenge of improving individual autonomy and quality of life. Smart home technology offering intelligent appliances and remote alarm-based monitoring are moving close towards addressing these issues. ::: ::: To date the research efforts on smart home technology have focused on communications and intelligent user interfaces. The trends in these areas must now, however, focus on the analysis on the data which is generated from the devices within the house as a means of producing 'profiles' of the users and providing intelligent interaction to support their daily activities. A key element in the implementation of these systems is the capability to handle time-related concepts. Here we report about one experience using Active Databases in connection with temporal reasoning in the form of complex event detection to accommodate prevention of hazardous situations. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) data only from a single body location needed, and it is not required to be from the same point for every user; (ii) should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board. <s> BIB003 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Pervasive computing is by its nature open and extensible, and must integrate the information from a diverse range of sources. This leads to a problem of information exchange, so sub-systems must agree on shared representations. Ontologies potentially provide a well-founded mechanism for the representation and exchange of such structured information. 
A number of ontologies have been developed specifically for use in pervasive computing, none of which appears to cover adequately the space of concerns applicable to application designers. We compare and contrast the most popular ontologies, evaluating them against the system challenges generally recognized within the pervasive computing community. We identify a number of deficiencies that must be addressed in order to apply the ontological techniques successfully to next-generation pervasive systems. <s> BIB004 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. We achieve a timeslice accuracy of 95.6% and a class accuracy of 79.4%. <s> BIB005 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in resident's daily activities and to generate automation polices that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. <s> BIB006 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> In the last years, techniques for activity recognition have attracted increasing attention. Among many applications, a special interest is in the pervasive e-Health domain where automatic activity recognition is used in rehabilitation systems, chronic disease management, monitoring of the elderly, as well as in personal well being applications. Research in this field has mainly adopted techniques based on supervised learning algorithms to recognize activities based on contextual conditions (e.g., location, surrounding environment, used objects) and data retrieved from body-worn sensors. Since these systems rely on a sufficiently large amount of training data which is hard to collect, scalability with respect to the number of considered activities and contextual data is a major issue. In this paper, we propose the use of ontologies and ontological reasoning combined with statistical inferencing to address this problem. Our technique relies on the use of semantic relationships that express the feasibility of performing a given activity in a given context. 
The proposed technique neither increases the obtrusiveness of the statistical activity recognition system, nor introduces significant computational overhead to real-time activity recognition. The results of extensive experiments with data collected from sensors worn by a group of volunteers performing activities both indoor and outdoor show the superiority of the combined technique with respect to a solely statistical approach. To the best of our knowledge, this is the first work that systematically investigates the integration of statistical and ontological reasoning for activity recognition. <s> BIB007 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> This paper addresses the problem of learning situation models for providing context-aware services. Context for modeling human behavior in a smart environment is represented by a situation model describing environment, users, and their activities. A framework for acquiring and evolving different layers of a situation model in a smart environment is proposed. Different learning methods are presented as part of this framework: role detection per entity, unsupervised extraction of situations from multimodal data, supervised learning of situation representations, and evolution of a predefined situation model with feedback. The situation model serves as frame and support for the different methods, permitting to stay in an intuitive declarative framework. The proposed methods have been integrated into a whole system for smart home environment. The implementation is detailed, and two evaluations are conducted in the smart home environment. The obtained results validate the proposed approach. <s> BIB008 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring. <s> BIB009 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Advances in technology have provided the ability to equip the home environment with a layer of technology to provide a truly 'Smart Home'. These homes offer improved living conditions and levels of independence for the population who require support with both physical and cognitive functions. At the core of the Smart Home is a collection of sensing technology which is used to monitor the behaviour of the inhabitant and their interactions with the environment. A variety of different sensors measuring light, sound, contact and motion provide sufficient multi-dimensional information about the inhabitant to support the inference of activity determination. 
A problem which impinges upon the success of any information analysis is the fact that sensors may not always provide reliable information due to either faults, operational tolerance levels or corrupted data. In this paper we address the fusion process of contextual information derived from uncertain sensor data. Based on a series of information handling techniques, most notably the Dempster-Shafer theory of evidence and the Equally Weighted Sum operator, evidential contextual information is represented, analysed and merged to achieve a consensus in automatically inferring activities of daily living for inhabitants in Smart Homes. Within the paper we introduce the framework within which uncertainty can be managed and demonstrate the effects that the number of sensors in conjunction with the reliability level of each sensor can have on the overall decision making process. <s> BIB010 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. A primary challenge that needs to be tackled to meet this need is the ability to recognize and track functional activities that people perform in their own homes and everyday settings. In this paper, we look at approaches to perform real-time recognition of Activities of Daily Living. We enhance other related research efforts to develop approaches that are effective when activities are interrupted and interleaved. To evaluate the accuracy of our recognition algorithms we assess them using real data collected from participants performing activities in our on-campus smart apartment testbed. <s> BIB011 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Intelligent Environments depend on their capability to understand and anticipate user’s habits and needs. Therefore, learning user’s common behaviours becomes an important step towards allowing an environment to provide such personalized services. Due to the complexity of the entire learning system, this paper will focus on the automatic discovering of models of user’s behaviours. Discovering the models means to discover the order of such actions, representing user’s behaviours as sequences of actions. <s> BIB012 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> By 2050, about one third of the French population will be over 65. Our laboratory's current research focuses on the monitoring of elderly people at home, to detect a loss of autonomy as early as possible. Our aim is to quantify criteria such as the international activities of daily living (ADL) or the French Autonomie Gerontologie Groupes Iso-Ressources (AGGIR) scales, by automatically classifying the different ADL performed by the subject during the day. A Health Smart Home is used for this. Our Health Smart Home includes, in a real flat, infrared presence sensors (location), door contacts (to control the use of some facilities), temperature and hygrometry sensor in the bathroom, and microphones (sound classification and speech recognition). A wearable kinematic sensor also informs postural transitions (using pattern recognition) and walk periods (frequency analysis). 
This data collected from the various sensors are then used to classify each temporal frame into one of the ADL that was previously acquired (seven activities: hygiene, toilet use, eating, resting, sleeping, communication, and dressing/undressing). This is done using support vector machines. We performed a 1-h experimentation with 13 young and healthy subjects to determine the models of the different activities, and then we tested the classification algorithm (cross validation) with real data. <s> BIB013 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> The machine learning and pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track activities that people normally perform as part of their daily routines. Although approaches do exist for recognizing activities, the approaches are applied to activities that have been preselected and for which labeled training data are available. In contrast, we introduce an automated approach to activity tracking that identifies frequent activities that naturally occur in an individual's routine. With this capability, we can then track the occurrence of regular activities to monitor functional health and to detect changes in an individual's patterns and lifestyle. In this paper, we describe our activity mining and tracking approach, and validate our algorithms on data collected in physical smart environments. <s> BIB014 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> This paper considers scalable and unobtrusive activity recognition using on-body sensing for context awareness in wearable computing. Common methods for activity recognition rely on supervised learning requiring substantial amounts of labeled training data. Obtaining accurate and detailed annotations of activities is challenging, preventing the applicability of these approaches in real-world settings. This paper proposes new annotation strategies that substantially reduce the required amount of annotation. We explore two learning schemes for activity recognition that effectively leverage such sparsely labeled data together with more easily obtainable unlabeled data. Experimental results on two public data sets indicate that both approaches obtain results close to fully supervised techniques. The proposed methods are robust to the presence of erroneous labels occurring in real-world annotation data. <s> BIB015 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Smart home activity recognition systems can learn generalized models for common activities that span multiple environment settings and resident types. <s> BIB016 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. 
Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds. <s> BIB017 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Many intelligent systems that focus on the needs of a human require information about the activities that are being performed by the human. At the core of this capability is activity recognition. Activity recognition techniques have become robust but rarely scale to handle more than a few activities. They also rarely learn from more than one smart home data set because of inherent differences between labeling techniques. In this paper we investigate a data-driven approach to creating an activity taxonomy from sensor data found in disparate smart home datasets. We investigate how the resulting taxonomy can help analyze the relationship between classes of activities. We also analyze how the taxonomy can be used to scale activity recognition to a large number of activity classes and training datasets. We describe our approach and evaluate it on 34 smart home datasets. The results of the evaluation indicate that the hierarchical modeling can reduce training time while maintaining accuracy of the learned model. <s> BIB018 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Automated monitoring and the recognition of activities of daily living (ADLs) is a key challenge in ambient-assisted living (AAL) for the assistance of the elderly. Within this context, a formal approach may provide a means to fill the gap between the low-level observations acquired by sensing devices and the high-level concepts that are required for the recognition of human activities. We describe a system named ARA (Automated Recognizer of ADLs) that exploits propositional temporal logic and model checking to support automated real-time recognition of ADLs within a smart environment. The logic is shown to be expressive enough for the specification of realistic patterns of ADLs in terms of basic actions detected by a sensorized environment. The online model checking engine is shown to be capable of processing a stream of detected actions in real time. The effectiveness and viability of the approach are evaluated within the context of a smart kitchen, where different types of ADLs are repeatedly performed. 
<s> BIB019 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> A major challenge of ubiquitous computing resides in the acquisition and modelling of rich and heterogeneous context data, among which, ongoing human activities at different degrees of granularity. In a previous work, we advocated the use of probabilistic description logics (DLs) in a multilevel activity recognition framework. In this paper, we present an in-depth study of activity modeling and reasoning within that framework, as well as an experimental evaluation with a large real-world dataset. Our solution allows us to cope with the uncertain nature of ontological descriptions of activities, while exploiting the expressive power and inference tools of the OWL 2 language. Targeting a large dataset of real human activities, we developed a probabilistic ontology modeling nearly 150 activities and actions of daily living. Experiments with a prototype implementation of our framework confirm the viability of our solution. <s> BIB020 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Business Process Management (BPM) is the art and science of how work should be performed in an organization in order to ensure consistent outputs and to take advantage of improvement opportunities, e.g. reducing costs, execution times or error rates. Importantly, BPM is not about improving the way individual activities are performed, but rather about managing entire chains of events, activities and decisions that ultimately produce added value for an organization and its customers. This textbook encompasses the entire BPM lifecycle, from process identification to process monitoring, covering along the way process modelling, analysis, redesign and automation. Concepts, methods and tools from business management, computer science and industrial engineering are blended into one comprehensive and inter-disciplinary approach. The presentation is illustrated using the BPMN industry standard defined by the Object Management Group and widely endorsed by practitioners and vendors worldwide. In addition to explaining the relevant conceptual background, the book provides dozens of examples, more than 100 hands-on exercises many with solutions as well as numerous suggestions for further reading. The textbook is the result of many years of combined teaching experience of the authors, both at the undergraduate and graduate levels as well as in the context of professional training. Students and professionals from both business management and computer science will benefit from the step-by-step style of the textbook and its focus on fundamental concepts and proven methods. Lecturers will appreciate the class-tested format and the additional teaching material available on the accompanying website fundamentals-of-bpm.org. <s> BIB021 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Activity recognition has received increasing attention from the machine learning community. Of particular interest is the ability to recognize activities in real time from streaming data, but this presents a number of challenges not faced by traditional offline approaches. Among these challenges is handling the large amount of data that does not belong to a predefined class. In this paper, we describe a method by which activity discovery can be used to identify behavioral patterns in observational data. 
Discovering patterns in the data that does not belong to a predefined class aids in understanding this data and segmenting it into learnable classes. We demonstrate that activity discovery not only sheds light on behavioral patterns, but it can also boost the performance of recognition algorithms. We introduce this partnership between activity discovery and online activity recognition in the context of the CASAS smart home project and validate our approach using CASAS data sets. <s> BIB022 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> The increasing aging population in the coming decades will result in many complications for society and in particular for the healthcare system due to the shortage of healthcare professionals and healthcare facilities. To remedy this problem, researchers have pursued developing remote monitoring systems and assisted living technologies by utilizing recent advances in sensor and networking technology, as well as in the data mining and machine learning fields. In this article, we report on our fully automated approach for discovering and monitoring patterns of daily activities. Discovering and tracking patterns of daily activities can provide unprecedented opportunities for health monitoring and assisted living applications, especially for older adults and individuals with mental disabilities. Previous approaches usually rely on preselected activities or labeled data to track and monitor daily activities. In this article, we present a fully automated approach by discovering natural activity patterns and their variations in real-life data. We will show how our activity discovery component can be integrated with an activity recognition component to track and monitor various daily activity patterns. We also provide an activity visualization component to allow caregivers to visually observe and examine the activity patterns using a user-friendly interface. We validate our algorithms using real-life data obtained from two apartments during a three-month period. <s> BIB023 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. In this field, most activity recognition systems rely on supervised learning methods to extract activity models from labeled datasets. An inherent problem of that approach consists in the acquisition of comprehensive activity datasets, which is expensive and may violate individuals' privacy. The problem is particularly challenging when focusing on complex ADLs, which are characterized by large intra- and inter-personal variability of execution. In this paper, we propose an unsupervised method to recognize complex ADLs exploiting the semantics of activities, context data, and sensing devices. Through ontological reasoning, we derive semantic correlations among activities and sensor events. By matching observed sensor events with semantic correlations, a statistical reasoner formulates initial hypotheses about the occurred activities. Those hypotheses are refined through probabilistic reasoning, exploiting semantic constraints derived from the ontology. Extensive experiments with real-world datasets show that the accuracy of our unsupervised method is comparable to the one of state of the art supervised approaches. <s> BIB024
According to Bayes' theorem, P(H|X) = P(X|H)P(H)/P(X), where H denotes the hypothesis (e.g., a certain activity is happening) and X represents the set of evidences (i.e., the current values of context objects). As calculating P(X|H) can be very expensive, different assumptions can be made to simplify the computation. For example, naïve Bayes (NB) is a simple classification model, which assumes that the n single evidences composing X are independent given the situational hypothesis (i.e., the occurrence of one does not affect the probability of the others); this assumption can be formalized as P(X|H) = ∏_{k=1}^{n} P(x_k|H). The inference process under the naïve Bayes assumption chooses the situation with the maximum a posteriori (MAP) probability. Hidden Markov Models (HMMs) represent one of the most widely adopted formalisms to model the transitions between different states of the environment or of humans. Here, hidden states represent situations and/or activities to be recognized, whereas observable states represent sensor measurements. HMMs are a statistical model in which the system being modeled is assumed to be a Markov chain, i.e., a sequence of events. An HMM is composed of a finite set of hidden states (e.g., s_{t−1}, s_t, and s_{t+1}) and observations (e.g., o_{t−1}, o_t, and o_{t+1}) that are generated from the states. An HMM is built on three assumptions: (i) each state depends only on its immediate predecessor; (ii) each observation variable depends only on the current state; and (iii) observations are independent from each other. In an HMM, there are three types of probability distributions: (i) prior probabilities over the initial state p(s_0); (ii) state transition probabilities p(s_t|s_{t−1}); and (iii) observation emission probabilities p(o_t|s_t). A drawback of using a standard HMM is its lack of hierarchical modeling for representing human activities. To deal with this issue, several HMM variants have been proposed, such as hierarchical and abstract HMMs. In a hierarchical HMM, each of the hidden states can be considered as an autonomous probabilistic model on its own; that is, each hidden state is itself a hierarchical HMM. HMMs generally assume that all observations are independent, which could miss long-term trends and complex relationships. Conditional Random Fields (CRFs), on the other hand, eliminate the independence assumptions by modeling the conditional probability of a particular sequence of hypotheses, Y, given a sequence of observations, X; succinctly, CRFs model P(Y|X). Modeling the conditional probability of the label sequence rather than the joint probability of both labels and observations P(X, Y), as done by HMMs, allows CRFs to incorporate complex features of the observation sequence. Figure 2. Examples of HMM and CRF models. Ellipses represent states (i.e., activities). Rectangles represent sensors. Arrows between states are state transition probabilities (i.e., the probability of moving from one state to another), whereas those from states to sensors are emission probabilities (i.e., the probability that, in a specific state, a sensor has a specific value). (a) HMM model example. Picture inspired by CASAS-HMM BIB011 and CASAS-HAM BIB006 . (b) CRF model example. Picture inspired by KROS-CRF BIB005 . Another statistical tool often employed is Markov Chains (MCs), which are based on the assumption that the probability of an event is conditional only on the previous event.
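To make the naïve Bayes MAP inference described above concrete, the following Python sketch computes the most probable activity given a set of binary sensor evidences; the sensor names, prior probabilities and likelihoods are purely illustrative and are not taken from any of the surveyed systems.

```python
# Minimal naive Bayes MAP inference over activities, assuming binary
# sensor evidences and hand-picked (illustrative) probabilities.
priors = {"cooking": 0.2, "sleeping": 0.5, "hygiene": 0.3}          # P(H)
likelihoods = {                                                      # P(x_k = 1 | H)
    "cooking":  {"stove_on": 0.9, "bed_pressure": 0.05, "tap_open": 0.4},
    "sleeping": {"stove_on": 0.01, "bed_pressure": 0.95, "tap_open": 0.02},
    "hygiene":  {"stove_on": 0.02, "bed_pressure": 0.05, "tap_open": 0.8},
}

def map_activity(evidence):
    """Return the activity with maximum a posteriori probability.

    evidence: dict sensor -> 0/1. P(X) is the same for every hypothesis,
    so only P(X|H)P(H) needs to be compared.
    """
    best, best_score = None, -1.0
    for h, prior in priors.items():
        score = prior
        for sensor, value in evidence.items():
            p = likelihoods[h][sensor]
            score *= p if value == 1 else (1.0 - p)   # independence assumption
        if score > best_score:
            best, best_score = h, score
    return best

print(map_activity({"stove_on": 1, "bed_pressure": 0, "tap_open": 1}))  # -> "cooking"
```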
Even if they are very effective for some applications, such as capacity planning, in the smart spaces context they are quite limited because they deal with deterministic transitions, and modeling an intelligent environment with this formalism results in a very complicated model. Support Vector Machines (SVMs) allow classifying both linear and non-linear data. An SVM uses a non-linear mapping to transform the original training data into a higher dimension. Within this new dimension, it searches for the optimal linear separating hyperplane that separates the training data of one class from another. With an appropriate non-linear mapping to a sufficiently high dimension, data from two classes can always be separated. SVMs are good at handling large feature spaces since they employ overfitting protection, which does not necessarily depend on the number of features. Binary classifiers are built to distinguish activities. Due to their characteristics, SVMs are better suited to generating other kinds of models with a machine learning approach than to directly modeling the smart environment. For instance, in BIB018 the authors use them, combined with naïve Bayes classifiers, to learn the activity model built on the hierarchical taxonomy formalism shown in Figure 3. Artificial Neural Networks (ANNs) are a sub-symbolic technique, originally inspired by networks of biological neurons. They can automatically learn complex mappings and extract a non-linear combination of features. A neural network is composed of many artificial neurons that are linked together according to a specific network architecture. A neural classifier consists of an input layer, a hidden layer, and an output layer. Mappings between input and output features are represented by the composition of activation functions f at the hidden layer, which can be learned through a training process performed using gradient descent optimization methods or resilient backpropagation algorithms. Some techniques stem from data mining methods for market basket analysis (e.g., the Apriori algorithm), which apply a windowing mechanism in order to transform the event/sensor log into what is called a database of transactions. Let I = {i_1, . . . , i_{n_E}} be a set of binary variables corresponding to sensor event types. A transaction is an assignment that binds a value to each of the variables in I, where the values 0 and 1 respectively denote whether a certain event happened or not during the considered window. A database of transactions T is a (usually ordered) sequence of transactions, each having a possibly empty set of properties (e.g., a timestamp). An item is an assignment of the kind i_k = {0, 1}. An itemset is an assignment covering a proper subset of the variables in I. An itemset C has support Supp_T(C) in the database of transactions T if a fraction Supp_T(C) of the transactions in the database contain C. The techniques following this strategy turn the input log into a database of transactions, each of them corresponding to a window. Given two different databases of transactions T_1 and T_2, the growth rate of an itemset C from T_1 to T_2 is defined as GR_{T_1→T_2}(C) = Supp_{T_2}(C)/Supp_{T_1}(C). Emerging patterns (EPs) are those itemsets showing a growth rate greater than a certain threshold ρ. The rationale behind this definition is that an itemset that has high support in its target class (database) and low support in the contrasting class can be seen as a strong signal for discovering the class of a test instance containing it.
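As an illustration of the transaction-based mining just described, the sketch below computes the support of an itemset in two databases of transactions and its growth rate, i.e., the quantity that is thresholded to obtain emerging patterns; the sensor-event transactions are invented for the example.

```python
# Support and growth rate of an itemset over two transaction databases,
# as used for emerging-pattern mining (illustrative data, not from a real log).
def support(itemset, transactions):
    """Fraction of transactions that contain all items of the itemset."""
    hits = sum(1 for t in transactions if itemset.issubset(t))
    return hits / len(transactions)

def growth_rate(itemset, db1, db2):
    """Growth rate of the itemset from db1 to db2 (inf if absent in db1)."""
    s1, s2 = support(itemset, db1), support(itemset, db2)
    return float("inf") if s1 == 0 else s2 / s1

# Each transaction: the set of sensor-event types observed in one window.
cooking_windows = [{"stove_on", "fridge_open"}, {"stove_on", "tap_open"},
                   {"stove_on", "fridge_open", "tap_open"}]
sleeping_windows = [{"bed_pressure"}, {"bed_pressure", "light_off"},
                    {"stove_on", "bed_pressure"}]

pattern = {"stove_on"}
gr = growth_rate(pattern, sleeping_windows, cooking_windows)
print(gr)            # 3.0: support 1/3 in sleeping windows vs 3/3 in cooking windows
if gr > 2.0:         # threshold rho
    print(pattern, "is an emerging pattern for the 'cooking' class")
```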
Market basket analysis is a special case of affinity analysis that discovers co-occurrence relationships among purchased items within one or more transactions. Initial approaches to the development of context-aware systems able to recognize situations were based on predicate logic. Loke BIB001 introduced a PROLOG extension called LogicCAP; here the "in-situation" operator captures a common form of reasoning in context-aware applications, which is to ask whether an entity E is in a given situation S (denoted as S*>E). In particular, a situation is defined as a set of constraints imposed on the outputs or readings that can be returned by sensors, i.e., if S is the current situation, we expect the sensors to return values satisfying the constraints associated with S. LogicCAP rules use backward chaining like PROLOG, but also utilize forward chaining in determining situations, i.e., a mix of backward and forward chaining is used in evaluating LogicCAP programs. The work introduces different reasoning techniques over situations, including selecting the best action to perform in a certain situation, understanding which situation a certain entity is in (or most likely in), and defining relationships between situations. There are many approaches borrowed from other information technology areas and adapted to smart environments. For instance, in BIB019 the authors use temporal logic and model checking to perform activity modeling and recognition. The proposed system is called ARA. A graphical representation of an example model adopted by this approach is shown in Figure 4. It shows how activities are composed of time-correlated states between consecutive actions. Ontologies (denoted as ONTO) represent the latest evolution of logic-based approaches and have increasingly gained attention as a generic, formal and explicit way to "capture and specify the domain knowledge with its intrinsic semantics through consensual terminology and formal axioms and constraints" BIB004 . They provide a formal way to represent sensor data, context, and situations through well-structured terminologies, which make them understandable, shareable, and reusable by both humans and machines. A considerable amount of knowledge engineering effort is required to construct the knowledge base, while the inference is well supported by mature algorithms and rule engines. Some examples of using ontologies to identify situations are given by BIB007 (later evolved in BIB020 BIB024 ). Instead of using ontologies to infer activities, they use ontologies to validate the results inferred by statistical techniques. The way an AmI system makes decisions on the actions to take can be compared to decision-making in AI agents. As an example, reflex agents with state, as introduced in , take as input the current state of the world and a set of Condition-Action rules to choose the action to be performed. Similarly, Augusto BIB002 introduces the concept of Active DataBase (ADB) composed of Event-Condition-Action (ECA) rules. An ECA rule basically has the form "ON event IF condition THEN action", where conditions can take time into account. The first attempts to apply techniques taken from the business process management (BPM) BIB021 area were the employment of workflow specifications to anticipate user actions. A workflow is composed of a set of tasks related by qualitative and/or quantitative time relationships.
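The "ON event IF condition THEN action" structure of ECA rules can be illustrated with a minimal sketch; the encoding below is only one possible representation, and the event names, conditions and actions are hypothetical rather than taken from BIB002.

```python
# One possible (illustrative) encoding of Event-Condition-Action rules.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str                          # sensor event that triggers the rule
    condition: Callable[[dict], bool]   # predicate over the current context
    action: Callable[[], None]          # actuation to perform

rules = [
    ECARule(event="motion_bedroom",
            condition=lambda ctx: ctx["hour"] >= 22,        # IF it is late evening
            action=lambda: print("dim bedroom lights")),    # THEN actuate
]

def on_event(event, context):
    """ON event IF condition THEN action."""
    for rule in rules:
        if rule.event == event and rule.condition(context):
            rule.action()

on_event("motion_bedroom", {"hour": 23})   # -> "dim bedroom lights"
```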
Authors in present a survey of techniques for temporal calculus (i.e., Allen's Temporal Logic and Point Algebra) and spatial calculus aiming at decision-making. The SPUBS system BIB012 automatically retrieves these workflows from sensor data. Table 3 shows, for each surveyed paper, information about RQ-B1.1 (Formalism), RQ-B1.2 (Readability Level) and RQ-B1.3 (Granularity). The entries reported in Table 3 include: BIB022 BIB014 BIB023 (HMM, M, Activity); CASAS-HMM BIB011 (Activity); CASAS-HMMNBCRF BIB016 (Activity); KROS-CRF BIB005 (Activity); REIG-SITUATION BIB008 (Situation); LES-PHI BIB003 (Activity); BUE-WISPS BIB009 (Activity); BIB015 (Activity); FLEURY-MCSVM BIB013 (Activity); CHEN-ONT BIB017 (ONTO, H, Activity); RIB-PROB BIB020 BIB024 (Action/Activity); NUG-EVFUS BIB010 (Action).
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Model Construction <s> Pervasive systems must offer an open, extensible, and evolving portfolio of services which integrate sensor data from a diverse range of sources. The core challenge is to provide appropriate and consistent adaptive behaviours for these services in the face of huge volumes of sensor data exhibiting varying degrees of precision, accuracy and dynamism. Situation identification is an enabling technology that resolves noisy sensor data and abstracts it into higher-level concepts that are interesting to applications. We provide a comprehensive analysis of the nature and characteristics of situations, discuss the complexities of situation identification, and review the techniques that are most popularly used in modelling and inferring situations from sensor data. We compare and contrast these techniques, and conclude by identifying some of the open research opportunities in the area. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Model Construction <s> Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in resident's daily activities and to generate automation polices that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Model Construction <s> Many real-world applications that focus on addressing needs of a human, require information about the activities being performed by the human in real-time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either a scripted or pre-segmented sequence of sensor events related to activities. In this paper we propose and evaluate a sliding window based approach to perform activity recognition in an on line or streaming fashion; recognizing activities as and when new sensor events are recorded. To account for the fact that different activities can be best characterized by different window lengths of sensor events, we incorporate the time decay and mutual information based weighting of sensor events within a window. Additional contextual information in the form of the previous activity and the activity of the previous window is also appended to the feature describing a sensor window. 
The experiments conducted to evaluate these techniques on real-world smart home datasets suggest that combining mutual information based weighting of sensor events and adding past contextual information to the feature leads to the best performance for streaming activity recognition. <s> BIB003
Modeling formalisms in the literature can be roughly divided into specification-based and learning-based BIB001 . Research in the field of AmI started when few kinds of sensors were available and the relationships between sensor data and the underlying phenomena were easy to establish. Specification-based approaches represent hand-crafted expert knowledge as logic rules and apply reasoning engines to infer conclusions and make decisions from sensor data. These techniques have evolved in recent years in order to take uncertainty into account. The growing availability of different kinds of sensors made hand-crafted models impractical to produce. To solve this problem, learning-based methods employ techniques from machine learning and data mining. Specification-based models are usually more human-readable (even though basic experience with formal logic languages is required), but creating them is very expensive in terms of human resources. Most learning-based models are instead represented using mathematical and statistical formalisms (e.g., HMMs), which makes them difficult for experts to revise and for final users to understand. These motivations are at the basis of the research on automatically inferred, human-readable formalisms. Learning-based techniques can be divided into supervised and unsupervised techniques. The former expect the input to be previously labeled according to the required output function; hence, they require a considerable effort for organizing input data in terms of training examples, even though active learning can be employed to ease this task. Unsupervised techniques (or weakly supervised ones, i.e., those where only a part of the dataset is labeled) can be used to face this challenge, but a limited number of works is available in the literature. Unsupervised techniques for AmI knowledge modeling can be useful for two further reasons. Firstly, as stated in the introduction, sometimes knowledge should not be considered a static resource; instead, it should be updated at runtime without direct intervention of the users BIB002 ; hence, updating techniques should rely on labeling of sensor data as little as possible. Moreover, unsupervised techniques may also prove useful in supporting passive users, such as guests, who do not participate in the configuration of the system but should benefit from its services as well. Performing learning or mining from sequences of sensor measurements poses the issue of how to group events into aggregates of interest (i.e., actions, activities, situations). Even with supervised learning techniques, if labeling is provided at learning time, the same does not hold at runtime, where a stream of events is fed into the AmI system. Even though most proposed approaches in the AmI literature (especially supervised learning ones) ignore this aspect, windowing mechanisms are needed. As described in BIB003 , the different windowing methods can be classified into three main classes, namely, explicit, time-based and event-based. • Explicit segmentation. In this case, the stream is divided into chunks, usually following some kind of classifier previously trained over a training data set. Unfortunately, as the training data set simply cannot cover all the possible combinations of sensor events, this kind of approach usually results in single activities divided into multiple chunks and multiple activities merged together. • Time-based windowing. This approach divides the entire sequence into equal-size time intervals.
This is a good approach when dealing with data obtained from sources (e.g., sensors such as accelerometers and gyroscopes) that operate continuously in time. As can be easily argued, the choice of the window size is fundamental, especially in the case of sporadic sensors, as a small window size may not contain enough information to be useful, whereas a large window size could merge multiple activities when bursts of sensor events occur; a minimal sketch of this windowing strategy is given after this list. • Event-based windowing. Here the stream is divided into windows containing a fixed number of consecutive sensor events; the events in a window can additionally be weighted, e.g., by time decay or mutual information BIB003 .
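As referenced in the time-based windowing item above, the following sketch shows a minimal time-based segmentation of a sensor event stream; timestamps, sensor names and the window size are invented, and real systems typically combine such windows with feature extraction and weighting.

```python
# Time-based windowing of a sensor event stream (illustrative events).
def time_windows(events, delta):
    """Split (timestamp, sensor) events into consecutive windows of length delta seconds."""
    if not events:
        return []
    windows, current, window_end = [], [], events[0][0] + delta
    for ts, sensor in events:
        while ts >= window_end:            # also emits empty windows, if any
            windows.append(current)
            current, window_end = [], window_end + delta
        current.append((ts, sensor))
    windows.append(current)
    return windows

stream = [(0.0, "door"), (1.2, "motion_kitchen"), (31.0, "stove_on"), (62.5, "tap_open")]
for i, w in enumerate(time_windows(stream, delta=30.0)):
    print(f"window {i}: {[s for _, s in w]}")
# window 0: ['door', 'motion_kitchen'], window 1: ['stove_on'], window 2: ['tap_open']
```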
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Abstract Many existing rule learning systems are computationally expensive on large noisy datasets. In this paper we evaluate the recently-proposed rule learning algorithm IREP on a large and diverse collection of benchmark problems. We show that while IREP is extremely efficient, it frequently gives error rates higher than those of C4.5 and C4.5rules. We then propose a number of modifications resulting in an algorithm RIPPERk that is very competitive with C4.5rules with respect to error rates, but much more efficient on large samples. RIPPERk obtains error rates lower than or equivalent to C4.5rules on 22 of 37 benchmark problems, scales nearly linearly with the number of training examples, and can efficiently process noisy datasets containing hundreds of thousands of examples. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) data only from a single body location needed, and it is not required to be from the same point for every user; (ii) should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. We achieve a timeslice accuracy of 95.6% and a class accuracy of 79.4%. <s> BIB003 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper presents a systematic design approach for constructing neural classifiers that are capable of classifying human activities using a triaxial accelerometer. The philosophy of our design approach is to apply a divide-and-conquer strategy that separates dynamic activities from static activities preliminarily and recognizes these two different types of activities separately. Since multilayer neural networks can generate complex discriminating surfaces for recognition problems, we adopt neural networks as the classifiers for activity recognition. An effective feature subset selection approach has been developed to determine significant feature subsets and compact classifier structures with satisfactory accuracy. 
Experimental results have successfully validated the effectiveness of the proposed recognition scheme. <s> BIB004 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Advances in technology have provided the ability to equip the home environment with a layer of technology to provide a truly 'Smart Home'. These homes offer improved living conditions and levels of independence for the population who require support with both physical and cognitive functions. At the core of the Smart Home is a collection of sensing technology which is used to monitor the behaviour of the inhabitant and their interactions with the environment. A variety of different sensors measuring light, sound, contact and motion provide sufficient multi-dimensional information about the inhabitant to support the inference of activity determination. A problem which impinges upon the success of any information analysis is the fact that sensors may not always provide reliable information due to either faults, operational tolerance levels or corrupted data. In this paper we address the fusion process of contextual information derived from uncertain sensor data. Based on a series of information handling techniques, most notably the Dempster-Shafer theory of evidence and the Equally Weighted Sum operator, evidential contextual information is represented, analysed and merged to achieve a consensus in automatically inferring activities of daily living for inhabitants in Smart Homes. Within the paper we introduce the framework within which uncertainty can be managed and demonstrate the effects that the number of sensors in conjunction with the reliability level of each sensor can have on the overall decision making process. <s> BIB005 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper addresses the problem of learning situation models for providing context-aware services. Context for modeling human behavior in a smart environment is represented by a situation model describing environment, users, and their activities. A framework for acquiring and evolving different layers of a situation model in a smart environment is proposed. Different learning methods are presented as part of this framework: role detection per entity, unsupervised extraction of situations from multimodal data, supervised learning of situation representations, and evolution of a predefined situation model with feedback. The situation model serves as frame and support for the different methods, permitting to stay in an intuitive declarative framework. The proposed methods have been integrated into a whole system for smart home environment. The implementation is detailed, and two evaluations are conducted in the smart home environment. The obtained results validate the proposed approach. <s> BIB006 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. 
This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring. <s> BIB007 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in resident's daily activities and to generate automation polices that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. <s> BIB008 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. A primary challenge that needs to be tackled to meet this need is the ability to recognize and track functional activities that people perform in their own homes and everyday settings. In this paper, we look at approaches to perform real-time recognition of Activities of Daily Living. We enhance other related research efforts to develop approaches that are effective when activities are interrupted and interleaved. To evaluate the accuracy of our recognition algorithms we assess them using real data collected from participants performing activities in our on-campus smart apartment testbed. <s> BIB009 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> By 2050, about one third of the French population will be over 65. Our laboratory's current research focuses on the monitoring of elderly people at home, to detect a loss of autonomy as early as possible. Our aim is to quantify criteria such as the international activities of daily living (ADL) or the French Autonomie Gerontologie Groupes Iso-Ressources (AGGIR) scales, by automatically classifying the different ADL performed by the subject during the day. A Health Smart Home is used for this. Our Health Smart Home includes, in a real flat, infrared presence sensors (location), door contacts (to control the use of some facilities), temperature and hygrometry sensor in the bathroom, and microphones (sound classification and speech recognition). A wearable kinematic sensor also informs postural transitions (using pattern recognition) and walk periods (frequency analysis). 
This data collected from the various sensors are then used to classify each temporal frame into one of the ADL that was previously acquired (seven activities: hygiene, toilet use, eating, resting, sleeping, communication, and dressing/undressing). This is done using support vector machines. We performed a 1-h experimentation with 13 young and healthy subjects to determine the models of the different activities, and then we tested the classification algorithm (cross validation) with real data. <s> BIB010 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Monitoring daily activities of a person has many potential benefits in pervasive computing. These include providing proactive support for the elderly and monitoring anomalous behaviors. A typical approach in existing research on activity detection is to construct sequence-based models of low-level activity features based on the order of object usage. However, these models have poor accuracy, require many parameters to estimate, and demand excessive computational effort. Many other supervised learning approaches have been proposed but they all suffer from poor scalability due to the manual labeling involved in the training process. In this paper, we simplify the activity modeling process by relying on the relevance weights of objects as the basis of activity discrimination rather than on sequence information. For each activity, we mine the web to extract the most relevant objects according to their normalized usage frequency. We develop a KeyExtract algorithm for activity recognition and two algorithms, MaxGap and MaxGain, for activity segmentation with linear time complexities. Simulation results indicate that our proposed algorithms achieve high accuracy in the presence of different noise levels indicating their good potential in real-world deployment. <s> BIB011 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Recognizing human activities from sensor readings has recently attracted much research interest in pervasive computing due to its potential in many applications, such as assistive living and healthcare. This task is particularly challenging because human activities are often performed in not only a simple (i.e., sequential), but also a complex (i.e., interleaved or concurrent) manner in real life. Little work has been done in addressing complex issues in such a situation. The existing models of interleaved and concurrent activities are typically learning-based. Such models lack of flexibility in real life because activities can be interleaved and performed concurrently in many different ways. In this paper, we propose a novel pattern mining approach to recognize sequential, interleaved, and concurrent activities in a unified framework. We exploit Emerging Pattern-a discriminative pattern that describes significant changes between classes of data-to identify sensor features for classifying activities. Different from existing learning-based approaches which require different training data sets for building activity models, our activity models are built upon the sequential activity trace only and can be applied to recognize both simple and complex activities. We conduct our empirical studies by collecting real-world traces, evaluating the performance of our algorithm, and comparing our algorithm with static and temporal models. 
Our results demonstrate that, with a time slice of 15 seconds, we achieve an accuracy of 90.96 percent for sequential activity, 88.1 percent for interleaved activity, and 82.53 percent for concurrent activity. <s> BIB012 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> The machine learning and pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track activities that people normally perform as part of their daily routines. Although approaches do exist for recognizing activities, the approaches are applied to activities that have been preselected and for which labeled training data are available. In contrast, we introduce an automated approach to activity tracking that identifies frequent activities that naturally occur in an individual's routine. With this capability, we can then track the occurrence of regular activities to monitor functional health and to detect changes in an individual's patterns and lifestyle. In this paper, we describe our activity mining and tracking approach, and validate our algorithms on data collected in physical smart environments. <s> BIB013 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper considers scalable and unobtrusive activity recognition using on-body sensing for context awareness in wearable computing. Common methods for activity recognition rely on supervised learning requiring substantial amounts of labeled training data. Obtaining accurate and detailed annotations of activities is challenging, preventing the applicability of these approaches in real-world settings. This paper proposes new annotation strategies that substantially reduce the required amount of annotation. We explore two learning schemes for activity recognition that effectively leverage such sparsely labeled data together with more easily obtainable unlabeled data. Experimental results on two public data sets indicate that both approaches obtain results close to fully supervised techniques. The proposed methods are robust to the presence of erroneous labels occurring in real-world annotation data. <s> BIB014 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Intelligent Environments are expected to act proactively, anticipating the user's needs and preferences. To do that, the environment must somehow obtain knowledge of those need and preferences, but unlike current computing systems, in Intelligent Environments, the user ideally should be released from the burden of providing information or programming any device as much as possible. Therefore, automated learning of a user's most common behaviors becomes an important step towards allowing an environment to provide highly personalized services. In this article, we present a system that takes information collected by sensors as a starting point and then discovers frequent relationships between actions carried out by the user. The algorithm developed to discover such patterns is supported by a language to represent those patterns and a system of interaction that provides the user the option to fine tune their preferences in a natural way, just by speaking to the system. 
<s> BIB015 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds. <s> BIB016 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Smart home activity recognition systems can learn generalized models for common activities that span multiple environment settings and resident types. <s> BIB017 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Real-time activity recognition in body sensor networks is an important and challenging task. In this paper, we propose a real-time, hierarchical model to recognize both simple gestures and complex activities using a wireless body sensor network. In this model, we first use a fast and lightweight algorithm to detect gestures at the sensor node level, and then propose a pattern based real-time algorithm to recognize complex, high-level activities at the portable device level. We evaluate our algorithms over a real-world dataset. The results show that the proposed system not only achieves good performance (an average utility of 0.81, an average accuracy of 82.87%, and an average real-time delay of 5.7 seconds), but also significantly reduces the network's communication cost by 60.2%. <s> BIB018 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> A major challenge of ubiquitous computing resides in the acquisition and modelling of rich and heterogeneous context data, among which, ongoing human activities at different degrees of granularity. In a previous work, we advocated the use of probabilistic description logics (DLs) in a multilevel activity recognition framework. In this paper, we present an in-depth study of activity modeling and reasoning within that framework, as well as an experimental evaluation with a large real-world dataset. Our solution allows us to cope with the uncertain nature of ontological descriptions of activities, while exploiting the expressive power and inference tools of the OWL 2 language. 
Targeting a large dataset of real human activities, we developed a probabilistic ontology modeling nearly 150 activities and actions of daily living. Experiments with a prototype implementation of our framework confirm the viability of our solution. <s> BIB019 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Activity recognition has received increasing attention from the machine learning community. Of particular interest is the ability to recognize activities in real time from streaming data, but this presents a number of challenges not faced by traditional offline approaches. Among these challenges is handling the large amount of data that does not belong to a predefined class. In this paper, we describe a method by which activity discovery can be used to identify behavioral patterns in observational data. Discovering patterns in the data that does not belong to a predefined class aids in understanding this data and segmenting it into learnable classes. We demonstrate that activity discovery not only sheds light on behavioral patterns, but it can also boost the performance of recognition algorithms. We introduce this partnership between activity discovery and online activity recognition in the context of the CASAS smart home project and validate our approach using CASAS data sets. <s> BIB020 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> The increasing aging population in the coming decades will result in many complications for society and in particular for the healthcare system due to the shortage of healthcare professionals and healthcare facilities. To remedy this problem, researchers have pursued developing remote monitoring systems and assisted living technologies by utilizing recent advances in sensor and networking technology, as well as in the data mining and machine learning fields. In this article, we report on our fully automated approach for discovering and monitoring patterns of daily activities. Discovering and tracking patterns of daily activities can provide unprecedented opportunities for health monitoring and assisted living applications, especially for older adults and individuals with mental disabilities. Previous approaches usually rely on preselected activities or labeled data to track and monitor daily activities. In this article, we present a fully automated approach by discovering natural activity patterns and their variations in real-life data. We will show how our activity discovery component can be integrated with an activity recognition component to track and monitor various daily activity patterns. We also provide an activity visualization component to allow caregivers to visually observe and examine the activity patterns using a user-friendly interface. We validate our algorithms using real-life data obtained from two apartments during a three-month period. <s> BIB021 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Many real-world applications that focus on addressing needs of a human, require information about the activities being performed by the human in real-time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either a scripted or pre-segmented sequence of sensor events related to activities. 
In this paper we propose and evaluate a sliding window based approach to perform activity recognition in an on line or streaming fashion; recognizing activities as and when new sensor events are recorded. To account for the fact that different activities can be best characterized by different window lengths of sensor events, we incorporate the time decay and mutual information based weighting of sensor events within a window. Additional contextual information in the form of the previous activity and the activity of the previous window is also appended to the feature describing a sensor window. The experiments conducted to evaluate these techniques on real-world smart home datasets suggests that combining mutual information based weighting of sensor events and adding past contextual information to the feature leads to best performance for streaming activity recognition. <s> BIB022 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. In this field, most activity recognition systems rely on supervised learning methods to extract activity models from labeled datasets. An inherent problem of that approach consists in the acquisition of comprehensive activity datasets, which is expensive and may violate individuals' privacy. The problem is particularly challenging when focusing on complex ADLs, which are characterized by large intra- and inter-personal variability of execution. In this paper, we propose an unsupervised method to recognize complex ADLs exploiting the semantics of activities, context data, and sensing devices. Through ontological reasoning, we derive semantic correlations among activities and sensor events. By matching observed sensor events with semantic correlations, a statistical reasoner formulates initial hypotheses about the occurred activities. Those hypotheses are refined through probabilistic reasoning, exploiting semantic constraints derived from the ontology. Extensive experiments with real-world datasets show that the accuracy of our unsupervised method is comparable to the one of state of the art supervised approaches. <s> BIB023
Figure 7. Ontologies used in CHEN-ONT BIB016 to model the aspects of smart spaces. (a) Ontology example used to model the Smart Environment domain. Picture inspired by CHEN-ONT BIB016 . (b) Ontology example used to model the correlations among activities. Picture taken from CHEN-ONT BIB016 . (c) Ontology example used to model the sensor properties. Picture taken from CHEN-ONT BIB016 . In RIB-PROB BIB019 BIB023 , the multilevel model is obtained by combining ontologies and/or grouping elements of previous levels. The Atomic Gestures model is obtained by considering log elements only. The Manipulative Gestures are computed considering the ontology and its axioms. The Simple Activities are obtained by grouping Manipulative Gestures. Finally, for Complex Activities, ontologies are involved. Figure 5c represents a portion of the resulting ontology model. The dashed lines represent the super/sub relations between classes. The individual classes have relations that describe dependencies. Moreover, Description Logic is employed to support ontological reasoning, which also allows checking the consistency of the knowledge base. It also infers additional information from registered facts. In NUG-EVFUS BIB005 , the interrelationships between sensors, context and activities are represented as a hierarchical network of ontologies (see Figure 8). A particular activity can be performed in, or associated with, a certain room of the house; this information is modeled with an ontology of the network. Figure 8. Hierarchical ontology structure adopted in NUG-EVFUS BIB005 to model activities in a smart space. In CASAS-HMM BIB009 , each activity is performed in a protected environment, and the resulting log is recorded and labeled. Then, an HMM model is built upon this dataset in a supervised way. The resulting model is shown in Figure 2a. Observations (squares) model the sensor triggering, while states (circles) model the activities that can generate the observations according to certain probabilities. The goal is to infer the activities by processing the observations. This recognition technique supports single-user data, but the problem of modeling multiple users is introduced. The same team, in CASAS-SVM BIB022 , employs SVMs. In this second work, the authors propose an interesting analysis of the different windowing strategies to be employed to gather measurements into observation vectors. Finally, in CASAS-HMMNBCRF BIB017 , experiments are performed with the same methodology, adding CRF and NB modeling techniques to the analysis. In WANG-EP BIB012 , Emerging Patterns are mined from a log of sequential activities and the resulting set composes the model. In KROS-CRF BIB003 , the model is trained from a labeled dataset. The log is divided into 60-s-long segments and each segment is labeled. The dataset is composed of multi-day logs: one day is used for testing the approach, the remaining ones for training the models. The resulting model is an undirected graph as in Figure 2b. In REIG-SITUATION BIB006 , an SVM model, built on statistical values extracted from the measurements of a given user, is used for classifying the roles. Then, this information, combined with other statistically extracted features, is involved in the training of the HMM that models the situations. In YANG-NN BIB004 , the input vector contains the features to consider, and the output vector the classes (activities). The back-propagation learning algorithm is used for training the ANNs.
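For the supervised HMM constructions mentioned above (e.g., CASAS-HMM), the estimation of transition and emission probabilities from a labeled log essentially reduces to frequency counting; the sketch below shows this step on toy data and omits smoothing and the decoding (Viterbi) phase that real systems also need.

```python
# Estimating HMM parameters from a labeled log by frequency counting
# (toy data; real systems add smoothing and richer observation models).
from collections import Counter, defaultdict

# Each entry: (activity_label, sensor_event) for one time slice.
log = [("sleeping", "bed_pressure"), ("sleeping", "bed_pressure"),
       ("hygiene", "tap_open"), ("hygiene", "tap_open"),
       ("cooking", "stove_on"), ("cooking", "fridge_open")]

transitions, emissions, state_counts = defaultdict(Counter), defaultdict(Counter), Counter()
for (s_prev, _), (s_curr, _) in zip(log, log[1:]):
    transitions[s_prev][s_curr] += 1
for s, obs in log:
    emissions[s][obs] += 1
    state_counts[s] += 1

def transition_prob(s_from, s_to):
    total = sum(transitions[s_from].values())
    return transitions[s_from][s_to] / total if total else 0.0

def emission_prob(state, obs):
    return emissions[state][obs] / state_counts[state]

print(transition_prob("sleeping", "hygiene"))   # 0.5
print(emission_prob("cooking", "stove_on"))     # 0.5
```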
In YANG-NN BIB004 , the input vector contains the features to consider and the output vector the classes (activities); the back-propagation learning algorithm is used for training the ANNs. Three neural networks are built on labeled logs: a pre-classifier and two classifiers, with static and dynamic activities modeled by separate ANNs. Each neural classifier consists of an input layer, a hidden layer and an output layer. In LES-PHI BIB002 , given the maximum number of features the activity recognition system can use, the system automatically chooses the most discriminative subset of features and uses it to learn an ensemble of discriminative static classifiers for the activities to be recognized. The class probabilities estimated by the static classifiers are then used as inputs to HMMs. In BUE-WISPS BIB007 , the users are asked to perform activities, and the resulting log is used for training an HMM. In FLEURY-MCSVM BIB010 , the classes of the classifier model the activities. Binary classifiers are built to distinguish between activities through pairwise combination selection; the number of SVMs for n activities is n − 1. The features used are statistics computed from the measurements. The algorithm proposed in CASAS-DISCOREC BIB020 BIB013 BIB021 aims to improve the performance of activity recognition algorithms by reducing the portion of the dataset that was not labeled during data acquisition. In particular, for the unlabeled section of the log, the authors employ a pattern mining technique to discover, in an unsupervised manner, human activity patterns. A pattern is defined here as a set of events whose order is not specified and in which events can be repeated multiple times. Patterns are mined by iteratively compressing the sensor log. The data mining method used for activity discovery is completely unsupervised, requires neither manual segmentation of the dataset nor a choice of windows, and can also discover interwoven activities. Starting from singleton patterns, at each step the technique compresses the log by exploiting the discovered patterns and then reprocesses the compressed log to recognize new patterns and compress it further. When the log can hardly be compressed any further, each remaining pattern represents an activity class. The discovered labels are employed to train HMM, BN and SVM models following the same approach as in the supervised works of the same group. In CASAS-HAM BIB008 , the sensor log is considered completely unlabeled. Temporal patterns (patterns enriched with temporal information) are discovered similarly to CASAS-DISCOREC BIB020 BIB013 BIB021 and are used to structure a tree of Markov Chains. Different activations at different timestamps generate new paths in the tree and, depending on temporal constraints, a sub-tree is generated whose leaves contain Markov Chains modeling activities. A technique to update the model is also proposed. Here the goal of the model is the actuation of target devices rather than recognition. The authors of STIK-MISVM BIB014 introduce a weakly supervised approach in which two strategies are proposed for assigning labels to unlabeled data. The first strategy is based on the miSVM algorithm, an SVM with two levels: the first level assigns labels to unlabeled data, the second performs recognition on the activity logs. The second strategy is called graph-based label propagation: the nodes of the graph are feature vectors, connected by weighted edges whose weights represent the similarity between nodes. Once the entire training set is labeled, an SVM is trained for activity recognition.
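A rough sketch of this second, graph-based strategy is given below: partially labeled feature vectors are placed on a similarity graph, the missing labels are propagated, and the fully labeled set is then used to train an SVM recognizer. The sketch uses scikit-learn's LabelSpreading as a stand-in for the propagation step described in STIK-MISVM; all feature vectors and labels are synthetic.

```python
# Rough sketch (not the original STIK-MISVM code) of graph-based label
# propagation followed by SVM training for activity recognition.
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)),      # feature windows of activity 0
               rng.normal(3, 1, (50, 5))])     # feature windows of activity 1
y = np.array([0] * 50 + [1] * 50)

y_partial = y.copy()
y_partial[rng.choice(100, size=80, replace=False)] = -1   # -1 marks "unlabeled"

# Step 1: propagate labels over a kNN similarity graph.
propagator = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
y_filled = propagator.transduction_

# Step 2: once every window has a label, train a standard SVM recognizer.
recognizer = SVC(kernel="rbf", gamma="scale").fit(X, y_filled)
print("accuracy on the synthetic data:", recognizer.score(X, y))
```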
In AUG-APUBS BIB015 , the system generates ECA rules by considering the type of the sensors involved in the measurements and the time relations between their activations. APUBS distinguishes between three categories of sensors:
• Type O sensors, installed in objects and thus providing direct information about the actions of the users.
• Type C sensors, providing information about the environment (e.g., temperature, day of the week).
• Type M sensors, providing information about the position of the user inside the house (e.g., in the bedroom).
Events in the event part of an ECA rule always come from sets O and M. Conditions are usually expressed in terms of the values provided by Type C sensors. Finally, the action part contains only Type O sensors. The set of Type O sensors is called mainSeT. The first step of the APUBS method consists of discovering, for each sensor in mainSeT, the set associatedSeT of O and M sensors that can potentially be related to it as triggering events. The method employed is APriori for association rules ; the only difference is that the candidate association rules X ⇒ Y are limited to those where the cardinality of both X and Y is one and Y only contains events from mainSeT. This step requires a window size to be specified in order to create transactions. As a second step, the technique discovers the temporal relationships between the events in associatedSeT and those in mainSeT; non-significant relations are pruned. As a third step, the conditions for the ECA rules are mined with a JRip classifier BIB001 . In WANG-HIER BIB018 , starting from the raw log, the authors use a K-Medoids clustering method to discover template gestures; this method finds the k representative instances that best represent the clusters. Based on these templates, gestures are identified by applying a template matching algorithm, Dynamic Time Warping, a classic dynamic-programming algorithm for matching two time series with temporal dynamics (a sketch is given after Table 4 below). In PALMES-OBJREL BIB011 , the KeyExtract algorithm mines keys from the web that best identify activities. For each activity, the set of most important keys is mined. In the recognition phase, an unsupervised segmentation based on heuristics is performed. Table 4 shows a quick recap of this section and answers questions RQ-B2.1 (Technique Class), RQ-B2.2 (Multi-user Support) and RQ-B2.3 (Additional Labeling Requirement).
Work | Technique Class | Multi-user Support | Additional Labeling Requirement
AUG-APUBS BIB015 | Unsupervised | S | N
CASAS-HAM BIB008 | Unsupervised | S | N
WANG-HIER BIB018 | Unsupervised | S | N
PALMES-OBJREL BIB011 | Unsupervised | S | N
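The following is an illustrative sketch of the Dynamic Time Warping step used for template matching in WANG-HIER, referenced above; it is not the authors' implementation. A candidate gesture is compared against each template discovered by K-Medoids and assigned to the closest one; the toy sequences are one-dimensional, whereas real gesture data would typically be multivariate.

```python
# Illustrative Dynamic Time Warping (DTW) template matching, in the spirit of
# the WANG-HIER gesture recognition step. Templates and candidate are toy data.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

templates = {"wave": np.sin(np.linspace(0, 4 * np.pi, 40)),
             "lift": np.linspace(0, 1, 30)}
candidate = np.sin(np.linspace(0, 4 * np.pi, 55))     # a time-stretched "wave"

best = min(templates, key=lambda name: dtw_distance(candidate, templates[name]))
print("recognized gesture:", best)                    # -> "wave"
```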
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Pervasive systems must offer an open, extensible, and evolving portfolio of services which integrate sensor data from a diverse range of sources. The core challenge is to provide appropriate and consistent adaptive behaviours for these services in the face of huge volumes of sensor data exhibiting varying degrees of precision, accuracy and dynamism. Situation identification is an enabling technology that resolves noisy sensor data and abstracts it into higher-level concepts that are interesting to applications. We provide a comprehensive analysis of the nature and characteristics of situations, discuss the complexities of situation identification, and review the techniques that are most popularly used in modelling and inferring situations from sensor data. We compare and contrast these techniques, and conclude by identifying some of the open research opportunities in the area. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Development of context-aware applications is inherently complex. These applications adapt to changing context information: physical context, computational context, and user context/tasks. Context information is gathered from a variety of sources that differ in the quality of information they produce and that are often failure prone. The pervasive computing community increasingly understands that developing context-aware applications should be supported by adequate context information modelling and reasoning techniques. These techniques reduce the complexity of context-aware applications and improve their maintainability and evolvability. In this paper we discuss the requirements that context modelling and reasoning techniques should meet, including the modelling of a variety of context information types and their relationships, of situations as abstractions of context information facts, of histories of context information, and of uncertainty of context information. This discussion is followed by a description and comparison of current context modelling and reasoning techniques. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> It is essential for environments that aim at helping people in their daily life that they have some sort of Ambient Intelligence. Learning the preferences and habits of users then becomes an important step in allowing a system to provide such personalized services. Thus far, the exploration of these issues by the scientific community has not been extensive, but interest in the area is growing. Ambient Intelligence environments have special characteristics that have to be taken into account during the learning process. We identify these characteristics and use them to highlight the strengths and weaknesses of developments so far, providing direction to encourage further development in this specific area of Ambient Intelligence. <s> BIB003 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Research on sensor-based activity recognition has, recently, made significant progress and is attracting growing attention in a number of disciplines and application domains. However, there is a lack of high-level overview on this topic that can inform related communities of the research state of the art. 
In this paper, we present a comprehensive survey to examine the development and current status of various aspects of sensor-based activity recognition. We first discuss the general rationale and distinctions of vision-based and sensor-based activity recognition. Then, we review the major approaches and methods associated with sensor-based activity monitoring, modeling, and recognition from which strengths and weaknesses of those approaches are highlighted. We make a primary distinction in this paper between data-driven and knowledge-driven approaches, and use this distinction to structure our survey. We also discuss some promising directions for future research. <s> BIB004 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people's capabilities by the means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize of the state of the art artificial intelligence methodologies used for developing AmI system in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users' goals and intensions) and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic disease. Finally, we will point to some of the successful case studies in the area and we will look at the current and future challenges to draw upon the possible future research paths. <s> BIB005 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> The technology of Smart Homes (SH), as an instance of ambient assisted living technologies, is designed to assist the homes’ residents accomplishing their daily-living activities and thus having a better quality of life while preserving their privacy. A SH system is usually equipped with a collection of inter-related software and hardware components to monitor the living space by capturing the behaviour of the resident and understanding his activities. By doing so the system can inform about risky situations and take actions on behalf of the resident to his satisfaction. The present survey will address technologies and analysis methods and bring examples of the state of the art research studies in order to provide background for the research community. In particular, the survey will expose infrastructure technologies such as sensors and communication platforms along with artificial intelligence techniques used for modeling and recognizing activities. A brief overview of approaches used to develop Human–Computer interfaces for SH systems is given. 
The survey also highlights the challenges and research trends in this area. <s> BIB006
The literature contains several surveys attempting to classify works in the field of smart spaces and ambient intelligence. Papers are presented here in chronological order. None of the reported surveys clearly states how the reviewed papers were selected. The authors of BIB003 follow an approach similar to this work, i.e., they separately analyze the different phases of the life-cycle of the models. Unlike our work, for the model construction phase they focus on classes of learning algorithms instead of analyzing each specific work; additionally, specification-based methods are not taken into account. The survey BIB002 focuses on logical formalisms for representing ambient intelligence contexts and reasoning about them. The analyzed approaches are solely specification-based and, unlike our work, the survey concentrates on the reasoning aspect. The work [50] is an extensive analysis of methods employed in ambient intelligence, but it analyzes the different methods separately without clearly defining a taxonomy. The authors of BIB001 introduce a clear taxonomy of approaches in the field of context recognition (and, more generally, situation identification), and the survey embraces the vast majority of the approaches proposed in the area. Similarly, paper BIB004 is a complete work covering not only activity recognition but also fine-grained action recognition. Unlike our work, neither survey focuses on the life-cycle of models. The authors of BIB005 review the possible applications of ambient intelligence in the specific case of health and elderly care. The work is orthogonal to the present paper and to the other reported works, as it is less focused on the pros and cons of each approach and more on applications and future perspectives. A manifesto of the applications and principles behind smart spaces and ambient intelligence is presented in . As in BIB005 , the authors of BIB006 start from the health-care application scenario in order to describe possible applications; however, this work goes into more detail on the employed techniques, with particular focus on classical machine learning methods.
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BACKGROUND ::: In the era of evidence based medicine, with systematic reviews as its cornerstone, adequate quality assessment tools should be available. There is currently a lack of a systematically developed and evaluated tool for the assessment of diagnostic accuracy studies. The aim of this project was to combine empirical evidence and expert opinion in a formal consensus method to develop a tool to be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy. ::: ::: ::: METHODS ::: We conducted a Delphi procedure to develop the quality assessment tool by refining an initial list of items. Members of the Delphi panel were experts in the area of diagnostic research. The results of three previously conducted reviews of the diagnostic literature were used to generate a list of potential items for inclusion in the tool and to provide an evidence base upon which to develop the tool. ::: ::: ::: RESULTS ::: A total of nine experts in the field of diagnostics took part in the Delphi procedure. The Delphi procedure consisted of four rounds, after which agreement was reached on the items to be included in the tool which we have called QUADAS. The initial list of 28 items was reduced to fourteen items in the final tool. Items included covered patient spectrum, reference standard, disease progression bias, verification bias, review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and indeterminate results. The QUADAS tool is presented together with guidelines for scoring each of the items included in the tool. ::: ::: ::: CONCLUSIONS ::: This project has produced an evidence based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies. Further work to determine the usability and validity of the tool continues. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> An adjustable chiropractic diagnostic apparatus 910) for measuring the postural deficiencies of a patient (100) wherein, the apparatus (10) comprises a posterior (30) and a lateral (31) framework member each having a single vertical alignment cord (60)(60') and a plurality of horizontal alignment cords (61)(61')(62)(62') which are connected together by body marker members (65) that are translatable to specific locations on the patient's body once the vertical (60) and lateral (61)(62) alignment cords have been aligned with other portions of the users body to produce a recordable record of the patient's postural deficiencies. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Diagnostic tests are often much less rigorously evaluated than new drugs. It is time to ensure that the harms and benefits of new tests are fully understood ::: ::: No international consensus exists on the methods for assessing diagnostic tests. Previous recommendations stress that studies of diagnostic tests should match the type of diagnostic question.1 2 Once the specificity and sensitivity of a test have been established, the final question is whether tested patients fare better than similar untested patients. This usually requires a randomised trial. Few tests are currently evaluated in this way. 
In this paper, we propose an architecture for research into diagnostic tests that parallels the established phases in drug research. We have divided studies of diagnostic tests into four phases (box). We use research on brain natriuretic peptide for diagnosing heart failure as an illustrative example.2 However, the architecture is applicable to a wide range of tests including laboratory techniques, diagnostic imaging, pathology, evaluation of disability, electrodiagnostic tests, and endoscopy. In drug research, phase I studies deal with pharmacokinetics, pharmacodynamics, and safe doses.3 Phase I diagnostic studies are done to determine the range of results obtained with a newly developed test in healthy people. For example, after development of a test to measure brain natriuretic peptide in human plasma, phase I studies were done to establish the normal range of values in healthy participants.4 5 Diagnostic phase I studies must be large enough to examine the potential influence of characteristics such as sex, age, time of day, physical activity, and exposure to drugs. The studies are relatively quick, cheap, and easy to conduct, but they may occasionally raise ethical problems—for example, finding abnormal results in an apparently healthy person.6 … <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The analysis of subgroups is often used as a way to glean additional information from data sets. The strengths and weaknesses of this approach and new Journal policies concerning the reporting of subgroup analyses are discussed in this article. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BackgroundOur objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus.MethodsA 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity.ResultsThe factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity.ConclusionA measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
<s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Purpose: To determine the quality of reporting of diagnostic accuracy studies before and after the Standards for Reporting of Diagnostic Accuracy (STARD) statement publication and to determine whether there is a difference in the quality of reporting by comparing STARD (endorsing) and non-STARD (nonendorsing) journals. Materials and Methods: Diagnostic accuracy studies were identified by hand searching six STARD and six non-STARD journals for 2001, 2002, 2004, and 2005. Diagnostic accuracy studies (n = 240) were assessed by using a checklist of 13 of 25 STARD items. The change in the mean total score on the modified STARD checklist was evaluated with analysis of covariance. The change in proportion of times that each individual STARD item was reported before and after STARD statement publication was evaluated (χ2 tests for linear trend). Results: With mean total score as dependent factor, analysis of covariance showed that the interaction between the two independent factors (STARD or non-STARD journal and... <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BACKGROUND ::: Missed or delayed diagnoses are a common but understudied area in patient safety research. To better understand the types, causes, and prevention of such errors, we surveyed clinicians to solicit perceived cases of missed and delayed diagnoses. ::: ::: ::: METHODS ::: A 6-item written survey was administered at 20 grand rounds presentations across the United States and by mail at 2 collaborating institutions. Respondents were asked to report 3 cases of diagnostic errors and to describe their perceived causes, seriousness, and frequency. ::: ::: ::: RESULTS ::: A total of 669 cases were reported by 310 clinicians from 22 institutions. After cases without diagnostic errors or lacking sufficient details were excluded, 583 remained. Of these, 162 errors (28%) were rated as major, 241 (41%) as moderate, and 180 (31%) as minor or insignificant. The most common missed or delayed diagnoses were pulmonary embolism (26 cases [4.5% of total]), drug reactions or overdose (26 cases [4.5%]), lung cancer (23 cases [3.9%]), colorectal cancer (19 cases [3.3%]), acute coronary syndrome (18 cases [3.1%]), breast cancer (18 cases [3.1%]), and stroke (15 cases [2.6%]). Errors occurred most frequently in the testing phase (failure to order, report, and follow-up laboratory results) (44%), followed by clinician assessment errors (failure to consider and overweighing competing diagnosis) (32%), history taking (10%), physical examination (10%), and referral or consultation errors and delays (3%). ::: ::: ::: CONCLUSIONS ::: Physicians readily recalled multiple cases of diagnostic errors and were willing to share their experiences. Using a new taxonomy tool and aggregating cases by diagnosis and error type revealed patterns of diagnostic failures that suggested areas for improvement. Systematic solicitation and analysis of such errors can identify potential preventive strategies. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> In this paper we set out what we consider to be a set of best practices for statisticians in the reporting of pharmaceutical industry-sponsored clinical trials. 
We make eight recommendations covering: author responsibilities and recognition; publication timing; conflicts of interest; freedom to act; full author access to data; trial registration and independent review. These recommendations are made in the context of the prominent role played by statisticians in the design, conduct, analysis and reporting of pharmaceutical sponsored trials and the perception of the reporting of these trials in the wider community. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The treatment policy of chronic myeloid leukemia (CML), particularly with tyrosine kinase inhibitors, has been influenced by several recent studies that were well designed and rapidly performed, but their interpretation is of some concern because different end points and methodologies were used. To understand and compare the results of the previous and future studies and to translate their conclusion into clinical practice, there is a need for common definitions and methods for analyses of CML studies. A panel of experts was appointed by the European LeukemiaNet with the aim of developing a set of definitions and recommendations to be used in design, analyses, and reporting of phase 3 clinical trials in this disease. This paper summarizes the consensus of the panel on events and major end points of interest in CML. It also focuses on specific issues concerning the intention-to-treat principle and longitudinal data analyses in the context of long-term follow-up. The panel proposes that future clinical trials follow these recommendations. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> In 2003, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was published in 13 biomedical journals.1 ,2 Diagnostic accuracy studies provide estimates of a test's ability to discriminate between patients with and without a predefined condition, by comparing the test results against a clinical reference standard. The STARD initiative was developed in response to accumulating evidence of poor methodological quality and poor reporting among test accuracy studies in the prior years.3 ,4 The STARD checklist contains 25 items which invite authors and reviewers to verify that critical information about the study is included in the study report. 
In addition, a flow chart that specifies the number of included and excluded patients and characterises the flow of participants through the study is strongly recommended. Since its launch, the STARD checklist has been adopted by over 200 biomedical journals (http://www.stard-statement.org/). ::: ::: Over the past 20 years, reporting guidelines have been developed and evaluated in many different fields of research. Although a modest increase in reporting quality is sometimes noticed in the years following the introduction of such guidelines,5 ,6 improvements in adherence tend to be slow.7 This makes it difficult to make statements about the impact of such guidelines. For STARD, there has been some controversy around its effect.8 While one study noticed a small increase in reporting quality of diagnostic accuracy studies shortly after the introduction of STARD,9 another study could not confirm this.10 ::: ::: Systematic reviews can provide more precise and more generalisable estimates of effect. A recently published systematic review evaluated adherence to several reporting guidelines in different fields of research, but STARD was not among the evaluated guidelines.11 To fill this gap, we systematically reviewed all the studies that aimed to investigate diagnostic accuracy studies’ adherence to … <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> AIMS ::: Diagnostic accuracy studies determine the clinical value of non-invasive cardiac imaging tests. The 'STAndards for the Reporting of Diagnostic accuracy studies' (STARD) were published in 2003 to improve the quality of study reporting. We aimed to assess the reporting quality of cardiac computed tomography (CCT), single positron emission computed tomography (SPECT), and cardiac magnetic resonance (CMR) diagnostic accuracy studies; to evaluate the impact of STARD; and to investigate the relationships between reporting quality, journal impact factor, and study citation index. ::: ::: ::: METHODS AND RESULTS ::: We randomly generated six groups of 50 diagnostic accuracy studies: 'CMR 1995-2002', 'CMR 2004-11', 'CCT 1995-2002', 'CCT 2004-11', 'SPECT 1995-2002', and 'SPECT 2004-11'. The 300 studies were double-read by two blinded reviewers and reporting quality determined by % adherence to the 25 STARD criteria. Reporting quality increased from 65.3% before STARD to 74.1% after (P = 0.003) in CMR studies and from 61.6 to 79.0% (P < 0.001) in CCT studies. SPECT studies showed no significant change: 71.9% before and 71.5% after STARD (P = 0.92). Journals advising authors to refer to STARD had significantly higher impact factors than those that did not (P = 0.03), and journals with above-median impact factors published studies of significantly higher reporting quality (P < 0.001). Since STARD, citation index has not significantly increased (P = 0.14), but, after adjustment for impact factor, reporting quality continues to increase by ∼1.5% each year. ::: ::: ::: CONCLUSION ::: Reporting standards for diagnostic accuracy studies of non-invasive cardiac imaging are at most satisfactory and have improved since the introduction of STARD. Adherence to STARD should be mandatory for authors of diagnostic accuracy studies. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Diagnostic errors have emerged as a serious patient safety problem but they are hard to detect and complex to define. 
At the research summit of the 2013 Diagnostic Error in Medicine 6th International Conference, we convened a multidisciplinary expert panel to discuss challenges in defining and measuring diagnostic errors in real-world settings. In this paper, we synthesize these discussions and outline key research challenges in operationalizing the definition and measurement of diagnostic error. Some of these challenges include 1) difficulties in determining error when the disease or diagnosis is evolving over time and in different care settings, 2) accounting for a balance between underdiagnosis and overaggressive diagnostic pursuits, and 3) determining disease diagnosis likelihood and severity in hindsight. We also build on these discussions to describe how some of these challenges can be addressed while conducting research on measuring diagnostic error. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The first major study of the quality of statistical orting in the biomedical literature was published in 6 (Schor and Karten, 1966). Since then, dozens of ilar studies have been published, every one of which found that large proportions of articles contain errors he application, analysis, interpretation, or reporting of istics or in the design or conduct of research (see, for mple, Altman, 1991; Avram et al., 1985; Bakker and herts, 2011; Gardner et al., 1983; Glantz, 1980; frey, 1985; Gore et al., 1977; Kurichi and Sonnad, 6; Lionel and Herxheimer, 1970; Murray, 1991; Nagele, 3; Neville et al., 2006; Pocock et al., 1987; Scales et al., 5; White, 1979; Yancy, 1990). Further, large propors of these errors are serious enough to call the authors’ clusions into question (Glantz, 1980; Murray, 1991; cy, 1990). The problem is made worse by the fact that st of these studies are of the world’s leading peeriewed general medical and specialty journals. Although errors have been found in more complex statistical procedures (Burton and Altman, 2004; Mackinnon, 2010; Schwarzer et al., 2000), paradoxically, many errors are in basic, not advanced, statistical methods (George, 1985). Perhaps advanced methods are suggested by consulting statisticians, who then competently perform the analyses, but it is also true that authors are far more likely to use only elementary statistical methods, if they use any at all (Emerson and Colditz, 1985; George, 1985; Golden et al., 1996; Lee et al., 2004). Still, articles with even major errors continue to pass editorial and peer review and to be published in leading journals. The truth is that the problem of poor statistical reporting is long-standing, widespread, potentially serious, concerns mostly basic statistics, and yet is largely unsuspected by most readers of the biomedical literature (Lang and Secic, 2006). More than 30 years ago, O’Fallon and colleagues recommended that ‘‘Standards governing the content and format of statistical aspects should be developed to guide authors in the preparation of manuscripts’’ (O’Fallon et al., 1978). Despite the fact that this call has since been echoed by several others (Altman and Bland, 1991; Altman et al., 1983; Hayden, 1983; Murray, 1991; Pocock et al., 1987; Shott, 1985), most journals have still not included in their Instructions for Authors more than a paragraph or two about reporting statistical methods (Bailar and Mosteller, 1988). 
However, given that many statistical errors concern basic statistics, a comprehensive — and comprehensible — set of reporting guidelines might improve how statistical analyses are documented. In light of the above, we present here a set of statistical reporting guidelines suitable for medical journals to include in their Instructions for Authors. These guidelines tell authors, journal editors, and reviewers how to report basic statistical methods and results. Although these This paper was originally published in: Smart P, Maisonneuve H, erman A (eds). Science Editors’ Handbook, European Association of nce Editors, 2013. Reproduced with kind permission as part of a series lassic Methods papers. An introductory Commentary is available as ://dx.doi.org/10.1016/j.ijnurstu.2014.09.007. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Abstract To perform a systematic review assessing accuracy and completeness of diagnostic studies of procalcitonin (PCT) for early-onset neonatal sepsis (EONS) using the Standards for Reporting of Diagnostic Accuracy (STARD) initiative. EONS, diagnosed during the first 3 days of life, remains a common and serious problem. Increased PCT is a potentially useful diagnostic marker of EONS, but reports in the literature are contradictory. There are several possible explanations for the divergent results including the quality of studies reporting the clinical usefulness of PCT in ruling in or ruling out EONS. 
We systematically reviewed PubMed, Scopus, and the Cochrane Library databases up to October 1, 2014. Studies were eligible for inclusion in our review if they provided measures of PCT accuracy for diagnosing EONS. A data extraction form based on the STARD checklist and adapted for neonates with EONS was used to appraise the quality of the reporting of included studies. We found 18 articles (1998–2014) fulfilling our eligibility criteria which were included in the final analysis. Overall, the results of our analysis showed that the quality of studies reporting diagnostic accuracy of PCT for EONS was suboptimal leaving ample room for improvement. Information on key elements of design, analysis, and interpretation of test accuracy were frequently missing. Authors should be aware of the STARD criteria before starting a study in this field. We welcome stricter adherence to this guideline. Well-reported studies with appropriate designs will provide more reliable information to guide decisions on the use and interpretations of PCT test results in the management of neonates with EONS. <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> OBJECTIVE ::: To determine the rate with which diagnostic test accuracy studies that are published in a general radiology journal adhere to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015, and to explore the relationship between adherence rate and citation rate while avoiding confounding by journal factors. ::: ::: ::: MATERIALS AND METHODS ::: All eligible diagnostic test accuracy studies that were published in the Korean Journal of Radiology in 2011-2015 were identified. Five reviewers assessed each article for yes/no compliance with 27 of the 30 STARD 2015 checklist items (items 28, 29, and 30 were excluded). The total STARD score (number of fulfilled STARD items) was calculated. The score of the 15 STARD items that related directly to the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 was also calculated. The number of times each article was cited (as indicated by the Web of Science) after publication until March 2016 and the article exposure time (time in months between publication and March 2016) were extracted. ::: ::: ::: RESULTS ::: Sixty-three articles were analyzed. The mean (range) total and QUADAS-2-related STARD scores were 20.0 (14.5-25) and 11.4 (7-15), respectively. The mean citation number was 4 (0-21). Citation number did not associate significantly with either STARD score after accounting for exposure time (total score: correlation coefficient = 0.154, p = 0.232; QUADAS-2-related score: correlation coefficient = 0.143, p = 0.266). ::: ::: ::: CONCLUSION ::: The degree of adherence to STARD 2015 was moderate for this journal, indicating that there is room for improvement. When adjusted for exposure time, the degree of adherence did not affect the citation rate. <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Importance ::: While guidance on statistical principles for clinical trials exists, there is an absence of guidance covering the required content of statistical analysis plans (SAPs) to support transparency and reproducibility. 
::: ::: ::: Objective ::: To develop recommendations for a minimum set of items that should be addressed in SAPs for clinical trials, developed with input from statisticians, previous guideline authors, journal editors, regulators, and funders. ::: ::: ::: Design ::: Funders and regulators (n = 39) of randomized trials were contacted and the literature was searched to identify existing guidance; a survey of current practice was conducted across the network of UK Clinical Research Collaboration-registered trial units (n = 46, 1 unit had 2 responders) and a Delphi survey (n = 73 invited participants) was conducted to establish consensus on SAPs. The Delphi survey was sent to statisticians in trial units who completed the survey of current practice (n = 46), CONSORT (Consolidated Standards of Reporting Trials) and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guideline authors (n = 16), pharmaceutical industry statisticians (n = 3), journal editors (n = 9), and regulators (n = 2) (3 participants were included in 2 groups each), culminating in a consensus meeting attended by experts (N = 12) with representatives from each group. The guidance subsequently underwent critical review by statisticians from the surveyed trial units and members of the expert panel of the consensus meeting (N = 51), followed by piloting of the guidance document in the SAPs of 5 trials. ::: ::: ::: Findings ::: No existing guidance was identified. The registered trials unit survey (46 responses) highlighted diversity in current practice and confirmed support for developing guidance. The Delphi survey (54 of 73, 74% participants completing both rounds) reached consensus on 42% (n = 46) of 110 items. The expert panel (N = 12) agreed that 63 items should be included in the guidance, with an additional 17 items identified as important but may be referenced elsewhere. Following critical review and piloting, some overlapping items were combined, leaving 55 items. ::: ::: ::: Conclusions and Relevance ::: Recommendations are provided for a minimum set of items that should be addressed and included in SAPs for clinical trials. Trial registration, protocols, and statistical analysis plans are critically important in ensuring appropriate reporting of clinical trials. <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The quality of reporting practice guidelines is often poor, and there is no widely accepted guidance or standards for such reporting in health care. The international RIGHT (Reporting Items for practice Guidelines in HealThcare) Working Group was established to address this gap. The group followed an existing framework for developing guidelines for health research reporting and the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network approach. A checklist and an explanation and elaboration statement were developed. The RIGHT checklist includes 22 items that are considered essential for good reporting of practice guidelines: basic information (items 1 to 4), background (items 5 to 9), evidence (items 10 to 12), recommendations (items 13 to 15), review and quality assurance (items 16 and 17), funding and declaration and management of interests (items 18 and 19), and other information (items 20 to 22). 
The RIGHT checklist can assist developers in reporting guidelines, support journal editors and peer reviewers when considering guideline reports, and help health care practitioners understand and implement a guideline. <s> BIB019 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BACKGROUND ::: Diagnostic tests are used frequently in the emergency department (ED) to guide clinical decision making and, hence, influence clinical outcomes. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were developed to ensure that diagnostic test studies are performed and reported to best inform clinical decision making in the ED. ::: ::: ::: OBJECTIVE ::: The objective was to determine the extent to which diagnostic studies published in emergency medicine journals adhered to STARD 2003 criteria. ::: ::: ::: METHODS ::: Diagnostic studies published in eight MEDLINE-listed, peer-reviewed, emergency medicine journals over a 5-year period were reviewed for compliance to STARD criteria. ::: ::: ::: RESULTS ::: A total of 12,649 articles were screened and 114 studies were included in our study. Twenty percent of these were randomly selected for assessment using STARD 2003 criteria. Adherence to STARD 2003 reporting standards for each criteria ranged from 8.7% adherence (criteria-reporting adverse events from performing index test or reference standard) to 100% (multiple criteria). ::: ::: ::: CONCLUSION ::: Just over half of STARD criteria are reported in more than 80% studies. As poorly reported studies may negatively impact their clinical usefulness, it is essential that studies of diagnostic test accuracy be performed and reported adequately. Future studies should assess whether studies have improved compliance with the STARD 2015 criteria amendment. <s> BIB020 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Research is of little use if its results are not effectively communicated. Data visualised in tables (and graphs) are key components in any scientific report, but their design leaves much to be desired. This article focuses on table design, following two general principles: clear vision and clear understanding. Clear vision is achieved by maximising the signal to noise ratio. In a table, the signal is the data in the form of numbers, and the noise is the support structure necessary to interpret the numbers. Clear understanding is achieved when the story in the data is told effectively, through organisation of the data and use of text. These principles are illustrated by original and improved tables from recent publications. Two special cases are discussed separately: tables produced by the pharmaceutical industry (in clinical study reports and reports to data safety monitoring boards), and study flow diagrams as proposed by the Consolidated Standards of Reporting Trials and Preferred Reporting Items for Systematic Reviews and Meta-Analyses initiatives. <s> BIB021 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> ABSTRACT The number of published systematic reviews of studies of healthcare interventions has increased rapidly and these are used extensively for clinical and policy decisions. Systematic reviews are subject to a range of biases and increasingly include non‐randomised studies of interventions. It is important that users can distinguish high quality reviews. 
Many instruments have been designed to evaluate different aspects of reviews, but there are few comprehensive critical appraisal instruments. AMSTAR was developed to evaluate systematic reviews of randomised trials. In this paper, we report on the updating of AMSTAR and its adaptation to enable more detailed assessment of systematic reviews that include randomised or non‐randomised studies of healthcare interventions, or both. With moves to base more decisions on real world observational evidence we believe that AMSTAR 2 will assist decision makers in the identification of high quality systematic reviews, including those based on non‐randomised studies of healthcare interventions. <s> BIB022 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> PURPOSE ::: To evaluate adherence of diagnostic accuracy studies in imaging journals to the STAndards for Reporting of Diagnostic accuracy studies (STARD) 2015. The secondary objective was to identify differences in reporting for magnetic resonance imaging (MRI) studies. ::: ::: ::: MATERIALS AND METHODS ::: MEDLINE was searched for diagnostic accuracy studies published in imaging journals in 2016. Studies were evaluated for adherence to STARD 2015 (30 items, including expanded imaging specific subitems). Evaluation for differences in STARD adherence based on modality, impact factor, journal STARD adoption, country, subspecialty area, study design, and journal was performed. ::: ::: ::: RESULTS ::: Adherence (n = 142 studies) was 55% (16.6/30 items, SD = 2.2). Index test description (including imaging-specific subitems) and interpretation were frequently reported (>66% of studies); no important differences in reporting of individual items were identified for studies on MRI. Infrequently reported items (<33% of studies) included some critical to generalizability (study setting and location) and assessment of bias (blinding of assessor of reference standard). New STARD 2015 items: sample size calculation, protocol reporting, and registration were infrequently reported. Higher impact factor (IF) journals reported more items than lower IF journals (17.2 vs. 16 items; P = 0.001). STARD adopter journals reported more items than nonadopters (17.5 vs. 16.4 items; P = 0.01). Adherence varied between journals (P = 0.003). No variability for study design (P = 0.32), subspecialty area (P = 0.75), country (P = 0.28), or imaging modality (P = 0.80) was identified. ::: ::: ::: CONCLUSION ::: Imaging accuracy studies show moderate adherence to STARD 2015, with only minor differences for studies evaluating MRI. This baseline evaluation will guide targeted interventions towards identified deficiencies and help track progress in reporting. ::: ::: ::: LEVEL OF EVIDENCE ::: 1 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:523-544. <s> BIB023
An accurate and timely diagnosis with the smallest probability of misdiagnosis, missed diagnosis, or delayed diagnosis is crucial in the management of any disease BIB007 . The diagnosis is an evolving process, since both the disease (its likelihood and severity) and the diagnostic approaches evolve BIB013 . In clinical practice, it is essential to correctly identify the diagnostic test that is useful to a specific patient with a specific condition [4] . Over- or underdiagnosis translates into unnecessary treatment or no treatment and harms both the patients and the health-care systems BIB013 . The statistical methods used to assess a sign or a symptom in medicine depend on the phase of the study and are directly related to the research question and the design of the experiment (Table 1) BIB003 . A significant effort has been made to develop standards for reporting clinical studies, both for primary (e.g., case-control studies, cohort studies, and clinical trials) and secondary (e.g., systematic reviews and meta-analyses) research. This effort led to the publication of four hundred twelve guidelines available on the EQUATOR Network as of April 20, 2019 [8] . Each guideline is accompanied by a short checklist describing the information that needs to be present in each section and also includes some requirements on the presentation of statistical results (what to report, e.g., mean (SD), where SD is the standard deviation, and how to report it, e.g., the number of decimals). These guidelines are also used to support the critical evaluation of an article in evidence-based clinical practice. However, insufficient attention has been granted to the minimum set of items or methods and to their quality in reporting the results. Different designs of experiments have received more attention, and several statistical guidelines, especially for clinical trials, have been developed to standardize the content of the statistical analysis plan BIB018 , for phase III clinical trials in myeloid leukemia BIB010 , pharmaceutical industry-sponsored clinical trials BIB008 , subgroup analysis BIB004 , or graphics and statistics for cardiology BIB021 . The SAMPL Guidelines provide general principles for reporting statistical methods and results BIB014 . SAMPL recommends providing numbers with the appropriate degree of precision; the sample size; the numerator and denominator for percentages; mean (SD) (where SD = standard deviation) for approximately normally distributed data and otherwise medians and interpercentile ranges; verification of the assumptions of statistical tests; the name of the test and whether it is one- or two-tailed; the significance level (α); P values, whether statistically significant or not; adjustment(s) (if any) for multivariate analysis; the statistical package used in the analysis; missing data; the regression equation with regression coefficients for each explanatory variable, the associated confidence intervals and P values; and the models' goodness of fit (coefficient of determination) BIB014 . With regard to diagnostic tests, standards are available for reporting accuracy (QUADAS BIB001 , QUADAS-2 BIB009 , STARD BIB002 , and STARD 2015 ), diagnostic predictive models (TRIPOD BIB015 ), systematic reviews and meta-analyses (AMSTAR BIB005 and AMSTAR 2 BIB022 ), and recommendations and guidelines (AGREE , AGREE II , and RIGHT BIB019 ). The requirements highlight what and how to report (by examples), with an emphasis on the design of experiments, which is mandatory to assure the validity and reliability of the reported results.
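As a small illustration of the SAMPL recommendation above on summarizing continuous data, the sketch below reports mean (SD) when a sample looks approximately normal and the median with an interpercentile range otherwise. The use of the Shapiro-Wilk test and the 0.05 threshold are choices made for this example only; SAMPL itself does not prescribe a specific normality test.

```python
# Illustrative helper (not part of SAMPL itself) for choosing between
# mean (SD) and median [interpercentile range] when summarizing a sample.
import numpy as np
from scipy import stats

def describe(sample, alpha=0.05):
    sample = np.asarray(sample, dtype=float)
    _, p_normal = stats.shapiro(sample)
    if p_normal >= alpha:                      # no evidence against normality
        return f"mean (SD): {sample.mean():.1f} ({sample.std(ddof=1):.1f})"
    q25, q50, q75 = np.percentile(sample, [25, 50, 75])
    return f"median [25th-75th percentile]: {q50:.1f} [{q25:.1f}-{q75:.1f}]"

print(describe(np.random.default_rng(1).normal(120, 15, 80)))   # ~normal data
print(describe(np.random.default_rng(1).lognormal(3, 1, 80)))   # skewed data
```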
Several studies have been conducted to evaluate whether the available standards for reporting results are followed. The number of articles that adequately report accuracy ranges from low BIB006 BIB011 BIB020 to satisfactory BIB012 , but not excellent, still leaving much room for improvement BIB016 BIB017 BIB023 . Diagnostic tests are frequently reported in the scientific literature, and clinicians must know what a good report looks like in order to apply only the higher-quality information collected from the scientific literature to decisions related to a particular patient. This review aimed to present the most frequent statistical methods used in the evaluation of a diagnostic test by linking the statistical treatment of data with the phase of the evaluation and the clinical questions.
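To make the kind of statistics discussed in the remainder of this review concrete, the minimal example below computes sensitivity, specificity, and predictive values from an invented 2 × 2 table of test results against a reference standard, with Wilson score intervals as one common choice for the 95% confidence intervals.

```python
# Minimal, self-contained example of core diagnostic accuracy statistics
# computed from an invented 2x2 table (index test vs. reference standard).
import math

TP, FP, FN, TN = 90, 20, 10, 180    # hypothetical counts

def wilson_ci(successes, total, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / total
    centre = (p + z * z / (2 * total)) / (1 + z * z / total)
    half = (z / (1 + z * z / total)) * math.sqrt(
        p * (1 - p) / total + z * z / (4 * total * total))
    return centre - half, centre + half

measures = {
    "sensitivity": (TP, TP + FN),
    "specificity": (TN, TN + FP),
    "positive predictive value": (TP, TP + FP),
    "negative predictive value": (TN, TN + FN),
}
for name, (num, den) in measures.items():
    lo, hi = wilson_ci(num, den)
    print(f"{name}: {num / den:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```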
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Background: Newer glucose meters are easier to use, but direct comparisons with older instruments are lacking. We wished to compare analytical performances of four new and four previous generation meters. Methods: On average, 248 glucose measurements were performed with two of each brand of meter on capillary blood samples from diabetic patients attending our outpatient clinic. Two to three different lots of strips were used. All measurements were performed by one experienced technician, using blood from the same sample for the meters and the comparison method (Beckman Analyzer 2). Results were evaluated by analysis of clinical relevance using the percentage of values within a maximum deviation of 5% from the reference value, by the method of residuals, by error grid analysis, and by the CVs for measurements in series. Results: Altogether, 1987 blood glucose values were obtained with meters compared with the reference values. By error grid analysis, the newer devices gave more accurate results without significant differences within the group (zone A, 98–98.5%). Except for the One Touch II (zone A, 98.5%), the other older devices were less exact (zone A, 87–92.5%), which was also true for all other evaluation procedures. Conclusions: New generation blood glucose meters are not only smaller and more aesthetically appealing but are more accurate compared with previous generation devices except the One Touch II. The performance of the newer meters improved but did not meet the goals of the latest American Diabetes Association recommendations in the hands of an experienced operator. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Diagnostic tests are often much less rigorously evaluated than new drugs. It is time to ensure that the harms and benefits of new tests are fully understood ::: ::: No international consensus exists on the methods for assessing diagnostic tests. Previous recommendations stress that studies of diagnostic tests should match the type of diagnostic question.1 2 Once the specificity and sensitivity of a test have been established, the final question is whether tested patients fare better than similar untested patients. This usually requires a randomised trial. Few tests are currently evaluated in this way. In this paper, we propose an architecture for research into diagnostic tests that parallels the established phases in drug research. ::: ::: We have divided studies of diagnostic tests into four phases (box). We use research on brain natriuretic peptide for diagnosing heart failure as an illustrative example.2 However, the architecture is applicable to a wide range of tests including laboratory techniques, diagnostic imaging, pathology, evaluation of disability, electrodiagnostic tests, and endoscopy. ::: ::: In drug research, phase I studies deal with pharmacokinetics, pharmacodynamics, and safe doses.3 Phase I diagnostic studies are done to determine the range of results obtained with a newly developed test in healthy people. 
For example, after development of a test to measure brain natriuretic peptide in human plasma, phase I studies were done to establish the normal range of values in healthy participants.4 5 [Figure: The harms and benefits of diagnostic tests need evaluating, just as drugs do.] Diagnostic phase I studies must be large enough to examine the potential influence of characteristics such as sex, age, time of day, physical activity, and exposure to drugs. The studies are relatively quick, cheap, and easy to conduct, but they may occasionally raise ethical problems—for example, finding abnormal results in an apparently healthy person.6 … <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> BACKGROUND Multiple laboratory tests are used to diagnose and manage patients with diabetes mellitus. The quality of the scientific evidence supporting the use of these tests varies substantially. APPROACH An expert committee compiled evidence-based recommendations for the use of laboratory testing for patients with diabetes. A new system was developed to grade the overall quality of the evidence and the strength of the recommendations. Draft guidelines were posted on the Internet and presented at the 2007 Arnold O. Beckman Conference. The document was modified in response to oral and written comments, and a revised draft was posted in 2010 and again modified in response to written comments. The National Academy of Clinical Biochemistry and the Evidence-Based Laboratory Medicine Committee of the American Association for Clinical Chemistry jointly reviewed the guidelines, which were accepted after revisions by the Professional Practice Committee and subsequently approved by the Executive Committee of the American Diabetes Association. CONTENT In addition to long-standing criteria based on measurement of plasma glucose, diabetes can be diagnosed by demonstrating increased blood hemoglobin A1c (HbA1c) concentrations. Monitoring of glycemic control is performed by self-monitoring of plasma or blood glucose with meters and by laboratory analysis of HbA1c. The potential roles of noninvasive glucose monitoring, genetic testing, and measurement of autoantibodies, urine albumin, insulin, proinsulin, C-peptide, and other analytes are addressed. SUMMARY The guidelines provide specific recommendations that are based on published data or derived from expert consensus. Several analytes have minimal clinical value at present, and their measurement is not recommended. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> The history of the theory of reference values can be written as an unfinished symphony. The first movement, allegro con fuoco, played from 1960 to 1980: a mix of themes devoted to the study of biological variability (intra-, inter-individual, short- and long-term), preanalytical conditions, standardization of analytical methods, quality control, statistical tools for deriving reference limits, all of them complex variations developed on a central melody: the new concept of reference values that would replace the notion of normality whose definition was unclear.
Additional contributions (multivariate reference values, use of reference limits from broad sets of patient data, drug interferences) conclude the movement on the variability of laboratory tests. The second movement, adagio, from 1980 to 2000, slowly develops and implements initial works. International and national recommendations were published by the IFCC-LM (International Federation of Clinical Chemistry and Laboratory Medicine) and scientific societies [French (SFBC), Spanish (SEQC), Scandinavian societies…]. Reference values are now topics of many textbooks and of several congresses, workshops, and round tables that are organized all over the world. Nowadays, reference values are part of current practice in all clinical laboratories, but not without difficulties, particularly for some laboratories to produce their own reference values and the unsuitability of the concept with respect to new technologies such as HPLC, GCMS, and PCR assays. Clinicians through consensus groups and practice guidelines have introduced their own tools, the decision limits, likelihood ratios and Reference Change Value (RCV), creating confusion among laboratorians and clinicians in substituting reference values and decision limits in laboratory reports. The rapid development of personalized medicine will eventually call for the use of individual reference values. The beginning of the second millennium is played allegro ma non-troppo from 2000 to 2012: the theory of reference values is back into fashion. The need to revise the concept is emerging. The manufacturers make a friendly pressure to facilitate the integration of Reference Intervals (RIs) in their technical documentation. Laboratorians are anxiously awaiting the solutions for what to do. The IFCC-LM creates Reference Intervals and Decision Limits Committee (C-RIDL) in 2005. Simultaneously, a joint working group IFCC-CLSI is created on the same topic. In 2008 the initial recommendations of IFCC-LM are revised and new guidelines are published by the Clinical and Laboratory Standards Institute (CLSI C28-A3). Fundamentals of the theory of reference values are not changed, but new avenues are explored: RIs transference, multicenter reference intervals, and a robust method for deriving RIs from small number of subjects. Concomitantly, other statistical methods are published such as bootstraps calculation and partitioning procedures. An alternative to recruiting healthy subjects proposes the use of biobanks conditional to the availability of controlled preanalytical conditions and of bioclinical data. The scope is also widening to include veterinary biology! During the early 2000s, several groups proposed the concept of 'Universal RIs' or 'Global RIs'. Still controversial, their applications await further investigations. The fourth movement, finale: beyond the methodological issues (statistical and analytical essentially), important questions remain unanswered. Do RIs intervene appropriately in medical decision-making? Are RIs really useful to the clinicians? Are evidence-based decision limits more appropriate? It should be appreciated that many laboratory tests represent a continuum that weakens the relevance of RIs. In addition, the boundaries between healthy and pathological states are shady areas influenced by many biological factors. In such a case the use of a single threshold is questionable. Wherever it will apply, individual reference values and reference change values have their place. A variation on an old theme! 
It is strange that in the period of personalized medicine (that is more stratified medicine), the concept of reference values which is based on stratification of homogeneous subgroups of healthy people could not be discussed and developed in conjunction with the stratification of sick patients. That is our message for the celebration of the 50th anniversary of Clinical Chemistry and Laboratory Medicine. Prospects are broad, enthusiasm is not lacking: much remains to be done, good luck for the new generations! <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> “Evidence-based medicine de-emphasises intuition, unsystematic clinical experience and pathophysiological rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.”1 <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> 38 The use of echocardiographic measurements to detect disease and predict outcomes can be confounded by a number of nondisease factors, including the effect of body size, that contribute to the variance of these measurements. The process of normal growth is associated with a nearly 200-fold increase in normal left ventricular enddiastolic volume (EDV) from premature infants up to large adolescents, making it imperative to account for changes in body size in pediatrics. Although this issue is often ignored in adult echocardiography, the sensitivity and specificity of parameters of left ventricular size are significantly improved when adjustment for body size in adults is performed. The article by Mawad et al. in this issue of JASE addresses an important aspect of this process, although it is likely a topic that is unfamiliar to most echocardiographers, even those in pediatric cardiology who rely heavily on Z scores. The concept of Z scores itself is often unfamiliar to adult echocardiographers. What is a Z score? Normative anatomic data are often presented as nomograms, a graphic representation of the mean values and one or more percentile curves, perhapsmost familiar to clinicians as the common height and weight curves used to track growth in children. Instead of percentiles, the distance from the mean value can be expressed as the standard deviation (SD), which has similar information content but is more easily interpreted for values outside the normal range. The number of SDs from the mean is termed the Z score, also known as the normal deviate or a standard score. Ameasurement that is 2 SDs above themean (the 97.7th percentile) has a Z score of 2, whereas a measurement that is 2 SDs below the mean (the 2.3rd percentile) has aZ score of 2. The use of Z scores dramatically simplifies clinical interpretation, because the clinician need not remember the age-specific or body surface area (BSA)–specific normal range for a variety of echocardiographic measurements. Instead, all variables have a mean of 0 and a normal range of 2 to +2, regardless of age or BSA. In addition, interpretation of change over time is simplified, because cardiovascular structures remain at the same Z score over time in normal subjects, despite changes in body size. Longitudinal change in the size of a cardiovascular structure that is more or less than expected for growth is easily recognized as change in Z score over time. 
Statistical analysis of research results is similarly facilitated and indeed enhanced by using Z scores. The analytic power added through the use of Z scores is often underappreciated and can be illustrated through an example. A common study design is to attempt to control for the effects of age and body size by selecting controls matched for age and body size and then performing a paired comparison of change over time in subjects versus controls. However, the sensitivity of this type of study design to detect therapeutic effects is diminished when there is significant variation in body size in the sample. Consider, for example, a hypothetical clinical trial of a therapy for dilated <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> A finding of high BMD on routine DXA scanning is not infrequent and most commonly reflects degenerative disease. However, BMD increases may also arise secondary to a range of underlying disorders affecting the skeleton. Although low BMD increases fracture risk, the converse may not hold for high BMD, since elevated BMD may occur in conditions where fracture risk is increased, unaffected or reduced. Here we outline a classification for the causes of raised BMD, based on identification of focal or generalized BMD changes, and discuss an approach to guide appropriate investigation by clinicians after careful interpretation of DXA scan findings within the context of the clinical history. We will also review the mild skeletal dysplasia associated with the currently unexplained high bone mass phenotype and discuss recent advances in osteoporosis therapies arising from improved understanding of rare inherited high BMD disorders. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> BACKGROUND: Biological covariates such as age and sex can markedly influence biochemical marker reference values, but no comprehensive study has examined such changes across pediatric, adult, and geriatric ages. The Canadian Health Measures Survey (CHMS) collected comprehensive nationwide health information and blood samples from children and adults in the household population and, in collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), examined biological changes in biochemical markers from pediatric to geriatric age, establishing a comprehensive reference interval database for routine disease biomarkers. METHODS: The CHMS collected health information, physical measurements, and biosamples (blood and urine) from approximately 12 000 Canadians aged 3–79 years and measured 24 biochemical markers with the Ortho Vitros 5600 FS analyzer or a manual microplate. By use of CLSI C28-A3 guidelines, we determined age- and sex-specific reference intervals, including corresponding 90% CIs, on the basis of specific exclusion criteria. RESULTS: Biochemical marker reference values exhibited dynamic changes from pediatric to geriatric age. Most biochemical markers required some combination of age and/or sex partitioning. Two or more age partitions were required for all analytes except bicarbonate, which remained constant throughout life. Additional sex partitioning was required for most biomarkers, except bicarbonate, total cholesterol, total protein, urine iodine, and potassium. 
CONCLUSIONS: Understanding the fluctuations in biochemical markers over a wide age range provides important insight into biological processes and facilitates clinical application of biochemical markers to monitor manifestation of various disease states. The CHMS-CALIPER collaboration addresses this important evidence gap and allows the establishment of robust pediatric and adult reference intervals. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Purpose: The evaluation and management of male hypogonadism should be based on symptoms and on serum testosterone levels. Diagnostically this relies on accurate testing and reference values. Our objective was to define the distribution of reference values and assays for free and total testosterone by clinical laboratories in the United States.Materials and Methods: Upper and lower reference values, assay methodology and source of published reference ranges were obtained from laboratories across the country. A standardized survey was reviewed with laboratory staff via telephone. Descriptive statistics were used to tabulate results.Results: We surveyed a total of 120 laboratories in 47 states. Total testosterone was measured in house at 73% of laboratories. At the remaining laboratories studies were sent to larger centralized reference facilities. The mean ± SD lower reference value of total testosterone was 231 ± 46 ng/dl (range 160 to 300) and the mean upper limit was 850 ± 141 ng/dl (range 726 to 1,130).... <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> OBJECTIVE ::: To observe the changes of complete blood count (CBC) parameters during pregnancy and establish appropriate reference intervals for healthy pregnant women. ::: ::: ::: METHODS ::: Healthy pregnant women took the blood tests at all trimesters. All blood samples were processed on Sysmex XE-2100. The following CBC parameters were analyzed: red blood cell count (RBC), hemoglobin (Hb), hematocrit (Hct), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), red blood cell distribution width (RDW), platelet count (PLT), mean platelet volume (MPV), platelet distribution width (PDW), white blood cell count (WBC), and leukocyte differential count. Reference intervals were established using the 2.5th and 97.5th percentile of the distribution. ::: ::: ::: RESULTS ::: Complete blood count parameters showed dynamic changes during trimesters. RBC, Hb, Hct declined at trimester 1, reaching their lowest point at trimester 2, and began to rise again at trimester 3. WBC, neutrophil count (Neut), monocyte count (MONO), RDW, and PDW went up from trimester 1 to trimester 3. On the contrary, MCHC, lymphocyte count (LYMPH), PLT, and MPV gradually descended during pregnancy. There were statistical significances in all CBC parameters between pregnant women and normal women, regardless of the trimesters (P<.001). The median obtained were (normal vs pregnancy) as follows: RBC 4.50 vs 3.94×1012 /L, Hb 137 vs 120 g/L, WBC 5.71 vs 9.06×109 /L, LYMPH% 32.2 vs 18.0, Neut% 58.7 vs 75.0, and PLT 251 vs 202×109 /L. ::: ::: ::: CONCLUSION ::: The changes of CBC parameters during pregnancy are described, and reference intervals for Beijing pregnant women are demonstrated in this study. 
<s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> AbstractReference Intervals (RIs) and clinical decision limits (CDLs) are a vital part of the information supplied by laboratories to support the interpretation of numerical clinical pathology resu... <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Abstract Background The Italian Society of Clinical Biochemistry (SIBioC) and the Italian Section of the European Ligand Assay Society (ELAS) have recently promoted a multicenter study (Italian hs-cTnI Study) with the aim to accurately evaluate analytical performances and reference values of the most popular cTnI methods commercially available in Italy. The aim of this article is to report the results of the Italian hs-cTnI Study concerning the evaluation of the 99th percentile URL and reference change (RCV) values around the 99th URL of the Access cTnI method. Materials and methods Heparinized plasma samples were collected from 1306 healthy adult volunteers by 8 Italian clinical centers. Every center collected from 50 to 150 plasma samples from healthy adult subjects. All volunteers denied the presence of chronic or acute diseases and had normal values of routine laboratory tests (including creatinine, electrolytes, glucose and blood counts). An older cohort of 457 adult subjects (mean age 63.0 years; SD 8.1 years, minimum 47 years, maximum 86 years) underwent also ECG and cardiac imaging analysis in order to exclude the presence of asymptomatic cardiac disease. Results and conclusions The results of the present study confirm that the Access hsTnI method using the DxI platform satisfies the two criteria required by international guidelines for high-sensitivity methods for cTn assay. Furthermore, the results of this study confirm that the calculation of the 99th percentile URL values are greatly affected not only by age and sex of the reference population, but also by the statistical approach used for calculation of cTnI distribution parameters. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> The aim was to elude differences in published paediatric reference intervals (RIs) and the implementations hereof in terms of classification of samples. Predicaments associated with transferring RIs published elsewhere are addressed. A local paediatric (aged 0 days to < 18 years) population of platelet count, haemoglobin level and white blood cell count, based on first draw samples from general practitioners was established. PubMed was used to identify studies with transferable RIs. The classification of local samples by the individual RIs was evaluated. Transference was done in accordance with the Clinical and Laboratory Standards Institute EP28-A3C guideline. Validation of transference was done using a quality demand based on biological variance. Twelve studies with a combined 28 RIs were transferred onto the local population, which was derived from 20,597 children. Studies varied considerably in methodology and results. In terms of classification, up to 63% of the samples would change classification from normal to diseased, depending on which RI was applied. When validating the transferred RIs, one RI was implementable in the local population. 
Conclusion: Published paediatric RIs are heterogeneous, making assessment of transferability problematic and resulting in marked differences in classification of paediatric samples, thereby potentially affecting diagnosis and treatment of children. ::: ::: ::: What is Known: ::: ::: • Reference intervals (RIs) are fundamental for the interpretation of paediatric samples and thus correct diagnosis and treatment of the individual child. ::: ::: • Guidelines for the establishment of adult RIs exist, but there are no specific recommendations for establishing paediatric RIs, which is problematic, and laboratories often implement RIs published elsewhere as a consequence. ::: ::: ::: What is New: ::: ::: • Paediatric RIs published in peer-reviewed scientific journals differ considerably in methodology applied for the establishment of the RI. ::: ::: • The RIs show marked divergence in the classification of local samples from healthy children. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Abstract Study Design Descriptive normative. Introduction Intrinsic hand strength can be impacted by hand arthritis, peripheral nerve injuries, and spinal cord injuries. Grip dynamometry does not isolate intrinsic strength, and manual muscle testing is not sensitive to change in grades 4 and 5. The Rotterdam Intrinsic Hand Myometer is a reliable and valid test of intrinsic hand strength; however, no adult normative data are available. Purpose of the Study To describe age- and gender-stratified intrinsic hand strength norms in subjects aged 21 years and above and to determine if factors known to predict grip dynamometry also predict measures of intrinsic hand strength. Methods Three trials of 5 measures of maximal isometric intrinsic strength were performed bilaterally by 607 “healthy-handed” adult males and females. Average strength values were stratified by age and gender. Data were analyzed to determine the influence of demographic and anthropometric variables on intrinsic strength. Results Intrinsic strength generally followed age and gender trends similar to grip dynamometry. Age, gender, body mass index, and the interaction between gender and body mass index were predictors of intrinsic strength, whereas in most cases, the hand being tested did not predict the intrinsic strength. Discussion With the addition of these findings, age- and gender-stratified hand intrinsic strength norms now span from age 4 through late adulthood. Many factors known to predict grip dynamometry also predict intrinsic myometry. Additional research is needed to evaluate the impact of vocational and avocational demands on intrinsic strength. Conclusions These norms can be referenced to evaluate and plan hand therapy and surgical interventions for intrinsic weakness. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Laboratory results interpretation for diagnostic accuracy and clinical decision-making in this period of evidence-based medicine requires cut-off values or reference ranges that are reflective of the geographical area where the individual resides. Several studies have shown significant differences between and within populations, emphasizing the need for population-specific reference ranges. 
This cross-sectional experimental study sought to establish the haematological reference values in apparently healthy individuals in three regions in Ghana. Study sites included Nkenkaasu, Winneba, and Nadowli in the Ashanti, Central, and Upper West regions of Ghana, respectively. A total of 488 healthy participants were recruited using the Clinical and Laboratory Standards Institute (United States National Consensus Committee on Laboratory Standards, NCCLS) Guidance Document C28A2. Medians for haematological parameters were calculated and reference values determined at the 2.5th and 97.5th percentiles and compared with Caucasian values adopted by our laboratory as reference ranges and values from other African and Western countries. RBC count, haemoglobin, and haematocrit (HCT) were significantly higher in males compared to females. There were significant intraregional and interregional as well as international variations of haematological reference ranges in the populations studied. We conclude that, for each geographical area, there is a need to establish geography-specific reference ranges if accurate diagnosis and concise clinical decisions are to be made. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> To determine Z-score equations and reference ranges for Doppler flow velocity indices of cardiac outflow tracts in normal fetuses. A prospective cross-sectional echocardiographic study was performed in 506 normal singleton fetuses from 18 to 40 weeks. Twelve pulsed-wave Doppler (PWD) measurements were derived from fetal echocardiography. The regression analysis of the mean and the standard deviation (SD) for each parameter were performed against estimated fetal weight (EFW) and gestational age (GA), in order to construct Z-score models. The correlation between these variables and fetal heart rate were also investigated. Strong positive correlations were found between the twelve PWD indices and the independent variables. A linear-quadratic regression model was the best description of the mean and SD of most parameters, with the exception of the velocity time interval (VTI) of ascending aorta against EFW, which was best fitted by a fractional polynomial. Z-score equations and reference values for PWD indices of fetal cardiac outflow tracts were proposed against GA and EFW, which may be useful for quantitative assessment of potential hemodynamic alterations, particularly in cases of intrauterine growth retardation and structural cardiac defects.
A diagnostic test could be used in clinical settings for confirmation/exclusion, triage, monitoring, prognosis, or screening (Table 2) BIB005. Table 2 presents the role of a diagnostic test, its aim, and a real-life example. Different statistical methods are used to support the results of a diagnostic test according to the question, phase, and study design. The statistical analysis depends on the type of test outcome. Table 3 presents the most common types of diagnostic test outcome and provides some examples.

The result of an excellent diagnostic test must be accurate (the measured value is as close as possible to the true value) and precise (repeatability and reproducibility of the measurement). An accurate and precise measurement is the primary characteristic of a valid diagnostic test. The reference range or reference interval and the ranges of normal values determined in healthy persons are also essential to classify a measurement as a positive or negative result and generally refer to continuous measurements. Under the assumption of a normal distribution, the reference values of a diagnostic measurement have a lower reference limit/lower limit of normal (LRL) and an upper reference limit/upper limit of normal (URL). Frequently, the reference interval takes the central 95% of a reference population, but exceptions from this rule are observed (e.g., cTn (cardiac troponins) and glucose levels BIB001 with <5% deviation from reference intervals) BIB003 BIB012. The reference ranges could differ among laboratories BIB009 BIB013, genders and/or ages BIB014, populations [79] (with variations even within the same population BIB008 BIB015), and with physiological conditions (e.g., pregnancy BIB010, time of sample collection, or posture). Within-subject biological variation is smaller than between-subject variation, so reference change values could better reflect the changes in measurements for an individual as compared to reference ranges BIB004. Furthermore, a call for establishing clinical decision limits (CDLs) with the involvement of laboratory professionals has also been emphasized BIB011.

The Z-score (standardized value, standardized score, or Z-value; Z-score = (measurement − μ)/σ) is a dimensionless metric used to evaluate how many standard deviations (σ) a measurement lies from the population mean (μ) BIB006. A Z-score of 3 corresponds to 3 standard deviations from the mean, meaning that more than 99% of a normally distributed population lies within ±3 standard deviations. The Z-score is properly used under the assumption of a normal distribution and when the parameters of the population are known. It has the advantage of allowing comparison of measurements obtained by different methods. Z-scores are used for measurements in the pediatric population [89] or in fetuses BIB016, but not exclusively (e.g., bone density tests BIB007).
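As a worked illustration of the reference-interval and Z-score concepts discussed above, the sketch below computes a nonparametric central 95% reference interval (2.5th to 97.5th percentiles) and a Z-score. It is a minimal example under stated assumptions: the haemoglobin values, sample size, and function names are hypothetical and serve only to show the calculation.

```python
# Minimal sketch, assuming a reference sample from healthy subjects is available
# as a NumPy array; all values below are simulated, illustrative data.
import numpy as np

def reference_interval(reference_values, central=95.0):
    """Nonparametric reference interval covering the central `central`% of the
    reference population (default: 2.5th to 97.5th percentiles)."""
    tail = (100.0 - central) / 2.0
    lrl, url = np.percentile(reference_values, [tail, 100.0 - tail])
    return lrl, url   # lower and upper reference limits (LRL, URL)

def z_score(measurement, mu, sigma):
    """Z-score = (measurement - population mean) / population standard deviation."""
    return (measurement - mu) / sigma

# Hypothetical example: haemoglobin (g/L) in a simulated healthy reference sample
rng = np.random.default_rng(seed=1)
healthy = rng.normal(loc=140, scale=10, size=500)

lrl, url = reference_interval(healthy)
print(f"Reference interval: {lrl:.0f}-{url:.0f} g/L")

# A patient's value expressed as a Z-score relative to the reference population
print(f"Z-score for 118 g/L: {z_score(118, healthy.mean(), healthy.std(ddof=1)):.2f}")
```

The nonparametric percentile approach is used here only because it matches the "central 95%" description in the text; parametric or robust methods (e.g., those described in CLSI guidance) may be preferred in practice.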
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> A 36-item short-form (SF-36) was constructed to survey health status in the Medical Outcomes Study. The SF-36 was designed for use in clinical practice and research, health policy evaluations, and general population surveys. The SF-36 includes one multi-item scale that assesses eight health concepts: 1) limitations in physical activities because of health problems; 2) limitations in social activities because of physical or emotional problems; 3) limitations in usual role activities because of physical health problems; 4) bodily pain; 5) general mental health (psychological distress and well-being); 6) limitations in usual role activities because of emotional problems; 7) vitality (energy and fatigue); and 8) general health perceptions. The survey was constructed for self-administration by persons 14 years of age and older, and for administration by a trained interviewer in person or by telephone. The history of the development of the SF-36, the origin of specific items, and the logic underlying their selection are summarized. The content and features of the SF-36 are compared with the 20-item Medical Outcomes Study short-form. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Abstract This research develops and evaluates a simple method of grading the severity of chronic pain for use in general population surveys and studies of primary care pain patients. Measures of pain intensity, disability, persistence and recency of onset were tested for their ability to grade chronic pain severity in a longitudinal study of primary care back pain (n = 1213), headache (n = 779) and temporomandibular disorder pain (n = 397) patients. A Guttman scale analysis showed that pain intensity and disability measures formed a reliable hierarchical scale. Pain intensity measures appeared to scale the lower range of global severity while disability measures appeared to scale the upper range of global severity. Recency of onset and days in pain in the prior 6 months did not scale with pain intensity or disability. Using simple scoring rules, pain severity was graded into 4 hierarchical classes: Grade I, low disability-low intensity; Grade II, low disability-high intensity; Grade III, high disability-moderately limiting; and Grade IV, high disability-severely limiting. For each pain site, Chronic Pain Grade measured at baseline showed a highly statistically significant and monotonically increasing relationship with unemployment rate, pain-related functional limitations, depression, fair to poor self-rated health, frequent use of opioid analgesics, and frequent pain-related doctor visits both at baseline and at 1-year follow-up. Days in Pain was related to these variables, but not as strongly as Chronic Pain Grade. Recent onset cases (first onset within the prior 3 months) did not show differences in psychological and behavioral dysfunction when compared to persons with less recent onset. Using longitudinal data from a population-based study (n = 803), Chronic Pain Grade at baseline predicted the presence of pain in the prior 2 weeks, Chronic Pain Grade and pain-related functional limitations at 3-year follow-up. Grading chronic pain as a function of pain intensity and pain-related disability may be useful when a brief ordinal measure of global pain severity is required. 
Pain persistence, measured by days in pain in a fixed time period, provides useful additional information. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> AIMS ::: The purpose of this study was to develop and validate an easily used disease-specific quality of life (QOL) measure for patients with chronic lower limb ischemia and to design an evaluative instrument, responsive to within-subject change, that adds to clinical measures of outcome when comparing treatment options in the management of lower limb ischemia. ::: ::: ::: METHODS ::: The first phase consisted of item generation, item reduction, formulating, and pretesting in patients with ischemia. The proportion of patients who selected an item as troublesome and the mean importance they attached to it were combined to give a clinical impact factor. Items with the highest clinical impact factor were used to formulate a new 25-item questionnaire that was then pretested in 20 patients with lower limb ischemia. In the second phase, reliability, validity, and responsiveness of the new questionnaire were assessed in 39 patients with lower limb ischemia who were tested at 0 and 4 weeks. The King's College Hospital's Vascular Quality of Life Questionnaire and the Short-Form 36 were administered at each visit, and treadmill walking distance and ankle/brachial pressure indices were recorded. The new questionnaire's reliability, internal consistency, responsiveness, and validity were determined. ::: ::: ::: RESULTS ::: Areas of QOL impairment were consistent through the ranges of disease severity and age, with no apparent differences between the men and women. Therefore, a single questionnaire is applicable to all patients with chronic lower limb ischemia. In stable patients test-retest scores demonstrated a reliability of r more than 0.90. Each item had internal consistency (item-domain Cronbach alpha =.7-.9). The questionnaire was responsive to change, with correlation between change in the questionnaire's total score and both global and clinical indicators of change (P <.001). The questionnaire showed face and construct validity. ::: ::: ::: CONCLUSIONS ::: This disease-specific questionnaire is reliable, responsive, valid, and ready for use as an outcome measure in clinical trials. It is sensitive to the concerns of patients with lower limb ischemia, offering a simple method to measure the effect of interventions on their QOL. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Objective: To determine normal values of plasma B type natriuretic peptide from infancy to adolescence using a commercially available rapid assay. ::: ::: Setting: Tertiary referral centre. ::: ::: Design: The study was cross sectional. Plasma BNP concentration was measured in 195 healthy infants, children, and adolescents from birth to 17.6 years using the triage BNP assay (a fluorescence immunoassay). ::: ::: Results: During the first week of life, the mean (SD) plasma concentration of BNP in newborn infants decreased significantly from 231.6 (197.5) to 48.4 (49.1) pg/ml (p = 0.001). In all subjects older than two weeks plasma BNP concentration was less than 32.7 pg/ml. There was no significant difference in mean plasma BNP measured in boys and girls younger than 10 years (8.3 (6.9) v 8.5 (7.5) pg/ml). 
In contrast, plasma concentration of BNP in girls aged 10 years or older was significantly higher than in boys of the same age group (12.1 (9.6) v 5.1 (3.5) pg/ml, p < 0.001). Plasma BNP concentrations were higher in pubertal than in prepubertal girls (14.4 (9.7) v 7.1 (6.6) pg/ml, p < 0.001) and were correlated with the Tanner stage (r = 0.41, p = 0.001). ::: ::: Conclusions: Plasma BNP concentrations in newborn infants are relatively high, vary greatly, and decrease rapidly during the first week of life. In children older than 2 weeks, the mean plasma concentration of BNP is lower than in adults. There is a sex related difference in the second decade of life, with higher BNP concentrations in girls. BNP concentrations in girls are related to pubertal stage. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Quality of life may be considerably reduced in patients who are suffering from chronic lower limb venous insufficiency, although existing generic quality of life instruments (NHP, SF-36 or SIP) cannot completely identify their specific complaints. The Chronic Venous Insufficiency Questionnaire (CIVIQ) has been developed by iterative process. First, a pilot group of 20 patients was used to identify a number of important features of quality of life affected by venous insufficiency, other than physical symptoms of discomfort. A second study involving 2,001 subjects was used to reduce the number of items. Subjects were asked to score both the severity of their problems and the importance they attributed to each problem on a 5-point Likert scale. The importance items found in patients with venous insufficiency were subjected to factorial analyses (PCA, PAF). The final version is a 20-item self-administered questionnaire which explores four dimensions: psychological, physical and social functioning and pain. Internal consistency of the questionnaire was validated for each dimension (Cronbach's alpha > 0.820 for three out of four factors). Reproducibility was confirmed in a 60 patient test-retest study. Pearson's correlation coefficients for both the four dimension subscales and for the global score at 2-week intervals were greater than 0.940. Finally, the questionnaire was tested in a randomized clinical trial of 934 patients in order to assess responsiveness and the convergent validity of the instrument, together with the patient's own quality of life. This study demonstrated that convergence was valid: Pearson's correlation coefficients between clinical score differences and quality of life score differences were small (from 0.199–0.564) but were statistically different from 0 (p 0.80). Reliability, face, content, construct validity and responsiveness were also determined for this specific quality of life questionnaire relating to venous insufficiency. Results suggest that this questionnaire may be used with confidence to assess quality of life in clinical trials on chronic venous insufficiency. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Renal Doppler resistance index measurement may represent a clinically useful noninvasive method for early detection of occult hemorrhagic shock. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. 
<s> OBJECTIVES ::: To improve glycemic control and prevent late complications, the patient and diabetes team need to adjust insulin therapy. The aim of this study is to evaluate the efficacy of thrice-daily versus twice-daily insulin regimens on HbA1c for type 1 diabetes mellitus by a randomized controlled trial in Hamedan, west of Iran. ::: ::: ::: METHODS ::: The study included 125 patients under 19 years of age with type 1 diabetes mellitus over a 3-month period. All patients with glycohemoglobin (HbA1c) ≥8% were followed prospectively and randomized into two trial and control groups. The control group received conventional two insulin injections per day: a mixture of short-acting (regular) + intermediated acting (NPH) insulins pre-breakfast (twice daily), and the trial group was treated by an extra dose of regular insulin before lunch (three times daily). Main outcome measure was HbA1c at baseline and at the end of 3 months. The mean blood glucose level and number of hypoglycemia were recorded. All patients underwent monthly intervals follow up for assessing their home blood glucose records and insulin adjustment. ::: ::: ::: RESULTS ::: Overall, 100 patients completed the study protocol. 52% were females, mean ±SD of age of 12.91 ± 3.9 years. There were no significant differences in baseline characteristics including age, gender, pubertal stage, adherence to diet, duration of disease and total daily insulin dose (p>0.05). There was a significant decrease individually in both groups in HbA1c level (p<0.05), but there was no significant difference in HbA1c reduction in patients on twice-daily insulin injections and those on thrice-daily insulin injection groups (1.12 ± 2.12 and 0.98±2.1% respectively, p>0.05). ::: ::: ::: CONCLUSION ::: Compared with twice daily insulin, a therapeutic regimen involving the addition of one dose regular insulin before lunch caused no significant change in the overall glycemic control of patients with type 1 diabetes mellitus. Our results emphasize that further efforts for near normoglycemia should be focused upon education of patients in terms of frequent outpatient visits, more blood glucose monitoring and attention to insulin adjustments. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> A number of case–control studies were conducted to investigate the association of IL6 gene polymorphisms with colorectal cancer (CRC). However, the results were not always consistent. We performed a systematic review and meta-analysis to examine the association between the IL6 gene polymorphisms and CRC. Data were collected from the following electronic databases: PubMed, EMBASE, Web of Science, BIOSIS Previews, HuGENet, and Chinese Biomedical Literature Database, with the last report up to July 2011. A total of 17 studies involving 4 SNPs were included (16 for rs1800795, 2 for rs1800796, 2 for rs1800797, and 1 for rs13306435). Overall, no significant association of these polymorphisms with CRC was found in heterozygote comparisons as well as homozygote comparison, dominant genetic model and recessive model. In subgroup analysis, among studies using population-based controls, fulfilling Hardy–Weinberg equilibrium, or using Taqman genotyping method, we did not find any significant association. 
However, the rs1800795 C allele was significantly associated with reduced risk for CRC among persons who regularly or currently took NSAIDs (four studies, OR = 0.750; 95 % CI, 0.64–0.88; P = 0.474 for heterogeneity test), and with increased risk for CRC among persons who drank (one study, OR = 1.97; 95 % CI, 1.32–2.94). Individuals with the rs1800795 C allele in the IL6 gene have a significantly lower risk of CRC, but in the setting of NSAIDs use. Further studies are merited to assess the association between the IL6 gene polymorphisms and CRC risk among persons who take NSAIDs, drink or smoke, etc. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Abstract Sporeforming bacteria are ubiquitous in the environment and exhibit a wide range of diversity leading to their natural prevalence in foodstuff. The state of the art of sporeformer prevalence in ingredients and food was investigated using a multiparametric PCR-based tool that enables simultaneous detection and identification of various genera and species mostly encountered in food, i.e. Alicyclobacillus , Anoxybacillus flavithermus , Bacillus , B. cereus group, B. licheniformis , B. pumilus , B. sporothermodurans , B. subtilis , Brevibacillus laterosporus , Clostridium , Geobacillus stearothermophilus , Moorella and Paenibacillus species. In addition, 16S rDNA sequencing was used to extend identification to other possibly present contaminants. A total of 90 food products, with or without visible trace of spoilage were analysed, i.e. 30 egg-based products, 30 milk and dairy products and 30 canned food and ingredients. Results indicated that most samples contained one or several of the targeted genera and species. For all three tested food categories, 30 to 40% of products were contaminated with both Bacillus and Clostridium . The percentage of contaminations associated with Clostridium or Bacillus represented 100% in raw materials, 72% in dehydrated ingredients and 80% in processed foods. In the last two product types, additional thermophilic contaminants were identified ( A. flavithermus , Geobacillus spp., Thermoanaerobacterium spp. and Moorella spp.). These results suggest that selection, and therefore the observed (re)-emergence of unexpected sporeforming contaminants in food might be favoured by the use of given food ingredients and food processing technologies. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Background: Interleukins, interferons and oxidative DNA products are important biomarkers assessing the inflammations and tissue damages caused by toxic materials in the body. We tried to evaluate distributions, reference values and age related changes of blood levels of inflammatory cytokines, C-reactive protein (CRP), IgE and urine levels of 8-hydroxy-2 ′ -deoxyguanosine (8-OHdG) among workers in a cohort study evaluating the health influences of toner particles. Methods: A total of 1366 male workers under age 50 years (age 19 – 49 years; 718 exposed and 648 not exposed to toner particles) in a cross sectional study of 1614 (categorized as 809 exposed and 805 not exposed, age 19 – 59 years) workers in a photocopier company has been followed prospectively as the cohort. Blood levels of interleukin (IL)-4, IL-6, IL-8, interferon- γ (IFN- γ ), CRP, IgE and urine 8-OHdG were measured annually for 5 years. 
Results: Reference values of the biomarkers are; CRP: 0.01 – 0.63 × 10 -2 g/L, IgE: 6 – 1480 IU/mL, IL-4: 2.6 – 76.1 pg/mL, IL-6: 0.4 – 4.9 pg/mL and 8-OHdG: 1.5 – 8.2 ng/mgCr. We could not evaluate reference values for IL-8 and IFN- γ because most of the values were below the sensitivity limits (2.0 pg/mL and 0.1 IU/mL, respectively). There were no differences of the biomarker levels between the toner exposed and the control workers. We observed a statistically significant age related decrease of serum IL-4 levels. Conclusions: This is the first report assessing the distributions and reference values of inflammatory biomarker levels in a large scaled cohort. We observed age related changes of some of the biomarkers. We could not detect any differences of the studied biomarker values between the toner exposed and the control workers. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Latent tubercular infection (LTBI) in children, as in adults, lacks a diagnostic gold standard. Until some time ago the only available test for the diagnosis of LTBI was the tuberculin skin test (TST) but it has drawbacks such as poor sensitivity and specificity (Pai et al., 2008). QuantiFERON-TB Gold In-Tube (QFT-IT) has been approved for clinical use by the Food and Drug Administration and its major benefit is a high specificity even in BCG-vaccinated subjects (Pai et al., 2008). The performance of QFT-IT in children has not been extensively explored but preliminary data suggest that it performs better than TST (Pai et al., 2008). Few studies present <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> PURPOSE ::: To summarize the results of a 4-year period in which endorectal magnetic resonance imaging (MRI) was considered for all men referred for salvage radiation therapy (RT) at a single academic center; to describe the incidence and location of locally recurrent disease in a contemporary cohort of men with biochemical failure after radical prostatectomy (RP), and to identify prognostic variables associated with MRI findings in order to define which patients may have the highest yield of the study. ::: ::: ::: METHODS AND MATERIALS ::: Between 2007 and 2011, 88 men without clinically palpable disease underwent eMRI for detectable prostate-specific antigen (PSA) after RP. The median interval between RP and eMRI was 32 months (interquartile range, 14-57 months), and the median PSA level was 0.30 ng/mL (interquartile range, 0.19-0.72 ng/mL). Magnetic resonance imaging scans consisting of T2-weighted, diffusion-weighted, and dynamic contrast-enhanced imaging were evaluated for features consistent with local recurrence. The prostate bed was scored from 0-4, whereby 0 was definitely normal, 1 probably normal, 2 indeterminate, 3 probably abnormal, and 4 definitely abnormal. Local recurrence was defined as having a score of 3-4. ::: ::: ::: RESULTS ::: Local recurrence was identified in 21 men (24%). Abnormalities were best appreciated on T2-weighted axial images (90%) as focal hypointense lesions. Recurrence locations were perianastomotic (67%) or retrovesical (33%). The only risk factor associated with local recurrence was PSA; recurrence was seen in 37% of men with PSA >0.3 ng/mL vs 13% if PSA ≤0.3 ng/mL (P<.01). The median volume of recurrence was 0.26 cm(3) and was directly associated with PSA (r=0.5, P=.02). 
The correlation between MRI-based tumor volume and PSA was even stronger in men with positive margins (r=0.8, P<.01). ::: ::: ::: CONCLUSIONS ::: Endorectal MRI can define areas of local recurrence after RP in a minority of men without clinical evidence of disease, with yield related to PSA. Further study is necessary to determine whether eMRI can improve patient selection and success of salvage RT. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> BackgroundThe importance of apolipoprotein E (APOE) in lipid and lipoprotein metabolism is well established. However, the impact of APOE polymorphisms has never been investigated in an Algerian population. This study assessed, for the fist time, the relationships between three APOE polymorphisms (epsilon, rs439401, rs4420638) and plasma lipid concentrations in a general population sample from Algeria.MethodsThe association analysis was performed in the ISOR study, a representative sample of the population living in Oran (787 subjects aged between 30 and 64). Polymorphisms were considered both individually and as haplotypes.ResultsIn the ISOR sample, APOE ϵ4 allele carriers had higher plasma triglyceride (p=0.0002), total cholesterol (p=0.009) and LDL-cholesterol (p=0.003) levels than ϵ3 allele carriers. No significant associations were detected for the rs4420638 and rs439401 SNPs. Linkage disequilibrium and haplotype analyses confirmed the respectively deleterious and protective impacts of the ϵ4 and ϵ2 alleles on LDL-cholesterol levels and showed that the G allele of the rs4420638 polymorphism may exert a protective effect on LDL-cholesterol levels in subjects bearing the APOE epsilon 4 allele.ConclusionOur results showed that (i) the APOE epsilon polymorphism has the expected impact on the plasma lipid profile and (ii) the rs4420638 G allele may counterbalance the deleterious effect of the ϵ4 allele on LDL-cholesterol levels in an Algerian population. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Background Most commonly used outcome measures in peripheral arterial disease (PAD) provide scarce information about achieved patient benefit. Therefore, patient-reported outcome measures have become increasingly important as complementary outcome measures. The abundance of items in most health-related quality of life instruments makes everyday clinical use difficult. This study aimed to develop a short version of the 25-item Vascular Quality of Life Questionnaire (VascuQoL-25), a PAD-specific health-related quality of life instrument. Methods The study recruited 129 individuals with intermittent claudication and 71 with critical limb ischemia from two university hospitals. Participants were a mean age of 70 ± 9 years, and 57% were men. All patients completed the original VascuQoL when evaluated for treatment, and 127 also completed the questionnaire 6 months after a vascular procedure. The VascuQoL-25 was reduced based on cognitive interviews and psychometric testing. The short instrument, the VascuQoL-6, was tested using item-response theory, exploring structure, precision, item fit, and targeting. A subgroup of 21 individuals with intermittent claudication was also tested correlating the results of VascuQoL-6 to the actual walking capacity, as measured using global positioning system technology. 
Results On the basis of structured psychometric testing, the six most informative items were selected (VascuQoL-6) and tested vs the original VascuQoL-25. The correlation between VascuQoL-25 and VascuQoL-6 was r = 0.88 before intervention, r = 0.96 after intervention, and the difference was r = 0.91 ( P r = 0.72; P Conclusions VascuQoL-6 is a valid and responsive instrument for the assessment of health-related quality of life in PAD. The main advantage is the compact format that offers a possibility for routine use in busy clinical settings. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> The purpose of this study was to determine the usefulness of color doppler sonography and resistivity index (RI) in differentiating liver tumors. The study was carried out in the Department of Radiology and Imaging, Mymensingh Medical College Hospital, and Institute of Nuclear Medicine and Allied Sciences (INMAS), Mymensingh, Bangladesh, during the period of July 2009 to June 2011. Total 50 consecutive cases were studied. Among them 27 were hepatocellular carcinomas, 19 were metastatic tumors, 03 were hemangiomas and 01 was hepatic adenoma. Doppler sonographic findings were then correlated, case by case, with final diagnosis- either pathologically by USG guided Fine-needle aspiration or by other imaging modalities (e.g., CT scan and RBC liver scan for hepatic hemangioma). The RI value of hepatocellular carcinoma was 0.69±0.096 and in metastatic tumors 0.73±0.079. The results showed no significant difference between the RI of hepatocellular carcinomas and metastatic liver tumors but it was significantly higher than benign lesions (p<0.05). RI of hemangiomas was 0.49±0.64 and in one hepatic adenoma was 0.65. When RI was <0.6 for benign liver tumors and ≥0.6 for malignant tumors we calculated a sensitivity of 89.14%, specificity of 66.7%, accuracy of 85.71% positive predictive value of 97.62% and negative predictive value of 28.57% in differentiating benign and malignant tumors. Thirty four of 46(73.9%) malignant lesions had intratumoral flow and 25% of benign lesions also showed intratumoral flow. The difference of intratumoral flow between malignant and benign lesions was significant (p<0.01). Two of 4 benign lesions (50%) had peritumoral vascularity where 6% of the malignant tumors showed peritumoral vascularity. In conclusion, combined studies of the type of intra-and peri-tumoral flow signals in CDFI and the parameter of RI would be more helpful in the differential diagnosis of benign and malignant liver tumors. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> This article discusses the rationale of the Society of Radiologists in Ultrasound 2012 consensus statement and the new discriminatory values for visualization of a gestational sac, a yolk sac, an embryo, and cardiac activity; reviews normal US findings and those that are suspicious for or definitive of pregnancy failure in the early first trimester; and describes the implications of “pregnancy of unknown location” and “pregnancy of unknown viability.” <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. 
<s> ObjectiveTo assess the ability of Glasgow Aneurysm Score in predicting postoperative mortality for ruptured aortic aneurysm which may assist in decision making regarding the open surgical repair of an individual patient.MethodsA total of 121 patients diagnosed of ruptured abdominal aortic aneurysm who underwent open surgery in our hospital between 1999 and 2013 were included. The Glasgow Aneurysm Score for each patient was graded according to the Glasgow Aneurysm Score (Glasgow Aneurysm Score = age in years + 17 for shock + 7 for myocardial disease + 10 for cerebrovascular disease + 14 for renal disease). The groups were divided as Group 1 (containing the patients who died) and Group 2 (the patients who were discharged). The Glasgow Aneurysm Scores amongst the groups were compared.ResultsOut of 121 patients, 108 (89.3%) were males and 13 (10.7%) were females. The in-hospital mortality was 48 patients (39.7%). The Glasgow Aneurysm Score was 84.15 ± 15.94 in Group 1 and 75.14 ± 14.67 in Group 2 which reveal... <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Abstract Anti-smoking legislation has been associated with an improvement in health indicators. Since the cadmium (Cd) body burden in the general population is markedly increased by smoke exposure, we analyzed the impact of the more restrictive legislation that came into force in Spain in 2011 by measuring Cd and cotinine in first morning urine samples from 83 adults in Madrid (Spain) before (2010) and after (2011) introduction of this law. Individual pair-wise comparisons showed a reduction of creatinine corrected Cotinine and Cd levels for non-active smokers, i. e. those which urinary cotinine levels are below 50 μg/L. After the application of the stricter law, cotinine levels in urine only decreased in non-active smokers who self-reported not to be exposed to second-hand smoke. The reduction in second hand smoke exposure was significantly higher in weekends (Friday to Sunday) than in working days (Monday to Thursday). The decrease in U-Cd was highly significant in non-active smokers and, in general, correlated with lower creatinine excretion. Therefore correction by creatinine could bias urinary Cd results, at least for cotinine levels higher than 500 μg/L. The biochemical/toxicological benefits detected herein support the stricter application of anti-smoking legislation and emphasize the need to raise the awareness of the population as regards exposure at home. <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Aim : To study the usefulness of color or power Doppler ultrasound (US) in the pre-surgical evaluation of skin melanoma, and to correlate the Doppler characteristics with the appearance on high frequency ultrasound strain elastography (SE) in the preoperative evaluation of cutaneous melanoma. Materials and method : The study included 42 cutaneous melanoma lesions in 39 adult subjects examined between September 2011 and January 2015. Doppler US features (the presence and aspect of vascularization, and the number of vascular pedicles) and elasticity by strain elastography were evaluated together with the pathological results. Results : The melanoma lesions presented hyper-vascularization, with multiple vascular pedicles and stiff appearance. 
Significant correlations between the thickness of the tumor, measured histopathologically by the Breslow index, and the degree of vascularization (p=0.0167), and number of vascular pedicles (p=0.0065) were identified. Strong correlations between the SE appearance and vascularization on one hand, and SE and the number of vascular pedicles were also identified (p<0.001). Conclusion : Our study demonstrates that Doppler US and SE offer useful information for THE preoperative evaluation of cutaneous melanoma and may contribute to better defining the long term prognosis. <s> BIB019 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> The aim of the study was to assess the use of echocardiographic measurements in newborns of diabetic mothers. Maternal diabetes is associated with an increased risk of morbidity and mortality in pregnancy and in perinatal period. Thirty-five newborns of diabetic mothers (pre- gestational or gestational diabetes; case group) and thirty-five controls (control group), born between January 2009 and December 2012 in Cluj-Napoca (north-west of Romania), were included in this study. A Logiq e ultrasound with an 8 MHz transducer was used to measure echocardiographic parameters. The interventricular septal thickness in case group was higher as compared with control group (at end systole = 6.61 ± 1.64 mm vs. 5.75 ± 0.95 mm, p = 0.0371; at end diastole = 4.61 ± 1.59 mm vs. 3.42 ± 0.70 mm, p = 0.0001). A risk ratio of 2.333 (0.656, 8.298) was obtained for septal hypertrophy. A higher proportion of septal hypertrophy was identified in the newborns of mothers with gestational diabetes compared to the newborns of pregestational diabetes mothers (p = 0.0058). The mean birth weight was significantly higher in newborns of diabetic mothers (3695.57 ± 738.63) as compared with controls (3276.14 ± 496.51; p = 0.0071). Infants born to mothers with diabetes proved to be at a high risk of septal hypertrophy. <s> BIB020 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> OBJECTIVES ::: Biomarker research is an important area of investigation in Gaucher disease, caused by an inherited deficiency of a lysosomal enzyme, glucocerebrosidase. We evaluated the usefulness of neopterin, as a novel biomarker reflecting chronic inflammation and immune system activation in Gaucher disease and analysed its evolution in response to enzyme replacement therapy (ERT). ::: ::: ::: METHODS ::: Circulating plasma neopterin levels in 31 patients with non-neuronopathic Gaucher disease were measured before and after the onset of ERT and were compared with those of 18 healthy controls. Plasma chitotriosidase activity was also monitored, as a reference biomarker, against which we evaluated the evolution of neopterin. ::: ::: ::: RESULTS ::: Neopterin levels were significantly increased in treatment-naïve patients (mean 11.90 ± 5.82 nM) compared with controls (6.63 ± 5.59 nM, Mann-Whitney U test P = 0.001), but returned to normal levels (6.92 ± 4.66 nM) following ERT. Investigating the diagnostic value of neopterin by receiver operating characteristic analysis, we found a cut-off value of 7.613 nM that corresponds to an area under the curve of 0.780 and indicates a good discrimination capacity, with a sensitivity of 0.774 and a specificity of 0.778. 
::: ::: ::: DISCUSSION ::: Our results suggest that measurement of circulating neopterin may be considered as a novel test for the confirmation of diagnosis and monitoring of the efficacy of therapeutic intervention in Gaucher disease. Plasma neopterin levels reflect the global accumulation and activation of Gaucher cells and the extent of chronic immune activation in this disorder. ::: ::: ::: CONCLUSION ::: Neopterin may be an alternative storage cell biomarker in Gaucher disease, especially in chitotriosidase-deficient patients. <s> BIB021 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Statistical editors of the Malaysian Journal of Medical Sciences (MJMS) must go through many submitted manuscripts, focusing on the statistical aspect of the manuscripts. However, the editors notice myriad styles of reporting the statistical results, which are not standardised among the authors. This could be due to the lack of clear written instructions on reporting statistics in the guidelines for authors. The aim of this editorial is to briefly outline reporting methods for several important and common statistical results. It will also address a number of common mistakes made by the authors. The editorial will serve as a guideline for authors aiming to publish in the MJMS as well as in other medical journals. <s> BIB022 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> BACKGROUND ::: Chitotriosidase is an enzyme secreted by activated macrophages. This study aims to investigate the usefulness of circulating chitotriosidase activity as a marker of inflammatory status in patients with critical limb ischemia (CLI). ::: ::: ::: MATERIALS AND METHODS ::: An observational gender-matched case-control study was conducted on patients hospitalized with the primary diagnosis of CLI, as well as a control group. The control group consisted of healthy volunteers. ::: ::: ::: RESULTS ::: Forty-three patients were included in each group. Similar demographic characteristics (median age of 60-62 years and overweight) were observed in both groups. Chitotriosidase activity ranged from 110 nmol/ml/hr to 1530 nmol/ml/hr in the CLI group and from 30 nmol/ml/hr to 440 nmol/ml/hr in the control group; demonstrating significantly elevated values in the CLI group (p<0.001). Median plasma chitotriosidase activity was significantly elevated in smokers compared with non-smokers in both groups (p<0.05). However, this activity had higher values in CLI than in control subjects. Receiver operating characteristic (ROC) analysis was then performed in order to verify the diagnostic accuracy of chitotriosidase as an inflammatory biomarker in CLI. ::: ::: ::: CONCLUSION ::: Circulating chitotriosidase is a test which can potentially be used for the monitoring of CLI patients without other inflammatory conditions. However, the interpretation of elevated values must take into account the inflammatory response induced by tobacco exposure. <s> BIB023 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. 
<s> Purpose To assess the accuracy of staging positron emission tomography (PET)/computed tomography (CT) in detecting distant metastasis in patients with local-regionally advanced cervical and high-risk endometrial cancer in the clinical trial by the American College of Radiology Imaging Network (ACRIN) and the Gynecology Oncology Group (GOG) (ACRIN 6671/GOG 0233) and to compare central and institutional reader performance. Materials and Methods In this prospective multicenter trial, PET/CT and clinical data were reviewed for patients enrolled in ACRIN 6671/GOG 0233. Two central readers, blinded to site read and reference standard, reviewed PET/CT images for distant metastasis. Central review was then compared with institutional point-of-care interpretation. Reference standard was pathologic and imaging follow-up. Test performance for central and site reviews of PET/CT images was calculated and receiver operating characteristic analysis was performed. Generalized estimating equations and nonparametric bootstrap procedure for clustered data were used to assess statistical significance. Results There were 153 patients with cervical cancer and 203 patients with endometrial cancer enrolled at 28 sites. Overall prevalence of distant metastasis was 13.7% (21 of 153) for cervical cancer and 11.8% (24 of 203) for endometrial cancer. Central reader PET/CT interpretation demonstrated sensitivity, specificity, positive predictive value (PPV), and negative predictive value of 54.8%, 97.7%, 79.3%, and 93.1% for cervical cancer metastasis versus 64.6%, 98.6%, 86.1%, and 95.4% for endometrial cancer, respectively. By comparison, local institutional review demonstrated sensitivity, specificity, PPV, and negative predictive value of 47.6%, 93.9%, 55.6%, and 91.9% for cervical cancer metastasis and 66.7%, 93.9%, 59.3%, and 95.5% for endometrial cancer, respectively. For central readers, the specificity and PPV of PET/CT detection of cervical and endometrial cancer metastases were all significantly higher compared with that of local institutional review (P < .05). Central reader area under the receiver operating characteristic curve (AUC) values were 0.78 and 0.89 for cervical and endometrial cancer, respectively; these were not significantly different from local institutional AUC values (0.75 and 0.84, respectively; P > .05 for both). Conclusion FDG PET/CT demonstrates high specificity and PPV for detecting distant metastasis in cervical and endometrial cancer and should be included in the staging evaluation. Blinded central review of imaging provides improved specificity and PPV for the detection of metastases and should be considered for future oncologic imaging clinical trials. © RSNA, 2017. <s> BIB024 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Screening in women has decreased the incidence and mortality of cervical cancer. Precancerous cervical lesions (cervical intraepithelial neoplasias) and cervical carcinomas are strongly associated with sexually-transmitted high-risk human papillomavirus (HPV) infection, which causes more than 99% of cervical cancers. Screening methods include cytology (Papanicolaou test) and HPV testing, alone or in combination. The American Academy of Family Physicians and the U.S. Preventive Services Task Force recommend starting screening in immunocompetent, asymptomatic women at 21 years of age. Women 21 to 29 years of age should be screened every three years with cytology alone. 
Women 30 to 65 years of age should be screened every five years with cytology plus HPV testing or every three years with cytology alone. Screening is not recommended for women younger than 21 years or in women older than 65 years with an adequate history of negative screening results. The U.S. Preventive Services Task Force is in the process of updating its guidelines. In 2015, the American Society for Colposcopy and Cervical Pathology and the Society of Gynecologic Oncology published interim guidance for the use of primary HPV testing. <s> BIB025
A cohort cross-sectional study is frequently used to establish the normal range of values. Whenever data follow the normal distribution (normality being verified with an appropriate statistical test), the results are properly summarized as mean and standard deviation; otherwise, the median accompanied by ranges or percentiles is the adequate summary.
Purposes of diagnostic tests, with definitions and examples:
(i) Example: renal Doppler resistive index — hemorrhagic shock in polytrauma patients BIB006
(ii) Monitoring — a repeated test that allows assessing the efficacy of an intervention; example: glycohemoglobin (A1c Hb), overall glycemic control of patients with diabetes BIB007
(iii) Prognosis — assessment of an outcome or of the disease progression; example: PET/CT scan in the identification of distant metastasis in cervical and endometrial cancer BIB024
(iv) Screening — presence of the disease in apparently asymptomatic persons; example: cytology test, screening of cervical uterine cancer BIB025
Types of diagnostic test results, with examples:
(i) Positive/negative or abnormal/normal: endovaginal ultrasound in the diagnosis of normal intrauterine pregnancy BIB016; QuantiFERON-TB test for the determination of tubercular infection BIB011
(ii) Qualitative ordinal: prostate bed after radiation therapy (definitely normal/probably normal/uncertain/probably abnormal/definitely abnormal) BIB012; scores such as the Apgar score for the assessment of infants after delivery, from 0 (no activity, absent pulse, floppy grimace, blue or pale skin, absent respiration) to 10 (active baby, pulse over 100 bpm, prompt response to stimulation, pink skin, vigorous cry), the Glasgow coma score (eye opening from 1 = no eye opening to 4 = spontaneous; verbal response from 1 = none to 5 = oriented; motor response from 1 = none to 6 = obeys commands), the Alvarado score for the risk of appendicitis (six clinical items and two laboratory measurements, overall score from 0 = no appendicitis to 10 = "very probable" appendicitis), and sonoelastographic scoring systems in the evaluation of lymph nodes; scales such as quality-of-life scales (SF-36 BIB001, EQ-5D, VascuQoL BIB003 BIB014, and CIVIQ BIB005) and pain scales (e.g., 0 = no pain to 10 = the worst pain) BIB002
(iii) Qualitative nominal: apolipoprotein E gene (ApoE) genotypes (E2/E2, E2/E3, E2/E4, E3/E3, E3/E4, and E4/E4) BIB013; single-nucleotide polymorphisms (SNPs) of IL-6 at positions −174 (rs1800795), −572 (rs1800796), −596 (rs1800797), and T15A (rs13306435) BIB008
(iv) Quantitative discrete: number of bacteria in urine or other fluids; number of products contaminated with different bacteria BIB009; Glasgow aneurysm score (= age in years + 17 for shock + 7 for myocardial disease + 10 for cerebrovascular disease + 14 for renal disease) BIB017
(v) Quantitative continuous: biomarkers such as chitotriosidase BIB023, neopterin BIB021, urinary cotinine BIB018, and urinary cadmium levels BIB018; measurements such as the resistivity index BIB015, ultrasound thickness BIB019, and interventricular septal thickness BIB020 BIB022
Continuous data are reported with one or two decimals (sufficient to assure the accuracy of the result), while P values are reported with four decimals whether or not the significance threshold was reached. These norms of good practice are not always observed in the scientific literature, and published studies are frequently more complex (e.g., investigation of changes in biomarker values with age or comparison of healthy subjects with subjects having a specific disease). One example is given by Koch and Singer BIB004, which aimed to determine the range of normal values of plasma B-type natriuretic peptide (BNP) from infancy to adolescence. One hundred ninety-five healthy subjects (infants, children, and adolescents) were evaluated.
Even though the values of BNP varied considerably, the results were improperly reported as mean (standard deviation) for the investigated subgroups, while the subgroups were correctly compared using nonparametric tests BIB004. Taheri et al. compared the serum levels of hepcidin (a low-molecular-weight protein with a role in iron metabolism) and prohepcidin in hemodialysis patients (44 patients) and healthy subjects (44 subjects). Taheri et al. reported the values of hepcidin and prohepcidin as mean and standard deviation, suggesting a normal distribution of the data, but compared them using nonparametric tests, implying the absence of a normal distribution of the experimental data. Furthermore, they correlated these two biomarkers, although no reason exists for this analysis since one is derived from the other. Zhang et al. determined the reference values for plasma pro-gastrin-releasing peptide (ProGRP) levels in healthy Han Chinese adults. They tested the distribution of ProGRP, identified that it was not normally distributed, and correctly reported the medians, ranges, and 2.5th, 5th, 50th, 95th, and 97.5th percentiles for two subgroups by age. Spearman's correlation coefficient was correctly used to test the relation between ProGRP and age, but the symbol used for this correlation coefficient was r (the symbol attributed to Pearson's correlation coefficient) instead of ρ. The differences in ProGRP among groups were accurately tested with the Mann-Whitney test (two groups) and the Kruskal-Wallis test (more than two groups). The authors reported the age-dependent reference interval for this specific population, without significant differences between genders. The influence of toner particles on seven biomarkers (serum C-reactive protein (CRP), IgE, interleukins (IL-4, IL-6, and IL-8), serum interferon-γ (IFN-γ), and urinary 8-hydroxy-2′-deoxyguanosine (8-OHdG)) was investigated by Murase et al. BIB010. They conducted a prospective cohort study (toner-exposed and unexposed workers) with a five-year follow-up and measured the biomarkers annually. The reference values of the studied biomarkers were correctly reported as medians and percentiles.
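The reporting principle illustrated by these examples — mean and standard deviation when the data are compatible with a normal distribution, median and percentiles otherwise — can be expressed as a small decision rule. The Python sketch below is only an illustration of that rule on simulated data: the use of the Shapiro-Wilk test, the 0.05 threshold, the 2.5th/97.5th percentiles, and the function name `describe_biomarker` are choices made for this example, not recommendations taken from the studies discussed above.

```python
import numpy as np
from scipy import stats

def describe_biomarker(values, alpha=0.05):
    """Summarize a biomarker sample according to its distribution.

    If the Shapiro-Wilk test gives no evidence against normality
    (p >= alpha), report mean (SD); otherwise report the median with
    the 2.5th-97.5th percentiles, mirroring the practice discussed above.
    """
    values = np.asarray(values, dtype=float)
    _, p_value = stats.shapiro(values)          # normality test
    if p_value >= alpha:
        return {"summary": "mean (SD)",
                "mean": round(float(values.mean()), 2),
                "sd": round(float(values.std(ddof=1)), 2),
                "shapiro_p": round(float(p_value), 4)}
    return {"summary": "median [2.5th-97.5th percentiles]",
            "median": round(float(np.median(values)), 2),
            "p2.5": round(float(np.percentile(values, 2.5)), 2),
            "p97.5": round(float(np.percentile(values, 97.5)), 2),
            "shapiro_p": round(float(p_value), 4)}

# Example with simulated, right-skewed biomarker values (illustrative only)
rng = np.random.default_rng(7)
print(describe_biomarker(rng.lognormal(mean=1.0, sigma=0.6, size=120)))
```

Rounding the summary statistics to two decimals and the P value to four decimals follows the reporting convention mentioned above; the normality test and percentile choices should be adapted to the measurement scale and the study design.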
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> There are few branches of the Theory of Evolution which appear to the mathematical statistician so much in need of exact treatment as those of Regression, Heredity, and Panmixia. Round the notion of panmixia much obscurity has accumulated, owing to the want of precise definition and quantitative measurement. The problems of regression and heredity have been dealt with by Mr. Francis Galton in his epochmaking work on ‘Natural Inheritance,’ but, although he has shown exact methods of dealing, both experimentally and mathematically, with the problems of inheritance, it does not appear that mathematicians have hitherto developed his treatment, or that biologists and medical men have yet fully appreciated that he has really shown how many of the problems which perplex them may receive at any rate a partial answer. A considerable portion of the present memoir will be devoted to the expansion and fuller development of Mr. Galton’s ideas, particularly their application to the problem of bi-parental inheritance . At the same time I shall endeavour to point out how the results apply to some current biological and medical problems. In the first place, we must definitely free our minds, in the present state of our knowledge of the mechanism of inheritance and reproduction, of any hope of reaching a mathematical relation expressing the degree of correlation between individual parent and individual offspring. The causes in any individual case of inheritance are far too complex to admit of exact treatment; and up to the present the classification of the circumstances under which greater or less degrees of correlation between special groups of parents and offspring may be expected has made but little progress. This is largely owing to a certain prevalence of almost metaphysical speculation as to the causes of heredity, which has usurped the place of that careful collection and elaborate experiment by which alone sufficient data might have been accumulated, with a view to ultimately narrowing and specialising the circumstances under which correlation was measured. We must proceed from inheritance in the mass to inheritance in narrower and narrwoer classes, rather than attempt to build up general rules on the observation of individual instances. Shortly, we must proceed by the method of statistics, rather than by the consideration of typical cases. It may seem discouraging to the medical practitioner, with the problem before him of inheritance in a particular family, to be told that nothing but averages, means, and probabilities with regard to large classes can as yet be scientifically dealt with ; but the very nature of the distribution of variation, whether healthy or morhid, seems to indicate that we are dealing with that sphere of indefinitely numerous small causes, which in so many other instances has shown itself only amenable to the calculus of chance, and not to any analysis of the individual instance. On the other hand, the mathematical theory wall be of assistance to the medical man by answering, inter alia, in its discussion of regression the problem as to the average effect upon the offspring of given degrees of morbid variation in the parents. It may enable the physician, in many cases, to state a belief based on a high degree of probability, if it offers no ground for dogma in individual cases. One of the most noteworthy results of Mr. 
Francis Galton’s researches is his discovery of the mode in which a population actually reproduces itself by regression and fraternal variation. It is with some expansion and fuller mathematical treatment of these ideas that this memoir commences. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> (Subsections not included) Chapter 1: 1.1 Introduction. 1.2 Terminology. 1.3 Early development of radioimmunoassay. 1.4 Basic principles of binding assays. 1.5 Binder dilution curves and standard curves. 1.6 Methods for plotting the standard curve. 1.7 The measurement of K value. Chapter 2: 2.1 The need for purified ligand. 2.2 Availability of pure ligand. 2.3 Dissimilarity between purified ligand and endogenous ligand. 2.4 Standards. 2.5 Storage of materials used in binding assays. Chapter 3: 3.1 Radioactive isotopes. 3.2 Counting of radioactive isotopes. 3.3 Choice of counter. 3.4 Some practical aspects of isotope counting. 3.5 Essential characteristics of a tracer. 3.6 Preparation of tracers. 3.7 Iodinated tracers. 3.8 Variations on the use of radiolabelled tracers. Chapter 4: 4.1 Particle labels. 4.2 Enzyme labels (enzymoimmunoassay, EIA). 4.3 Fluorescent labels (fluoroimmunoassay, FIA). 4.4 Luminescent labels. 4.5 Advantages and disadvantages of non-isotopic labels in immunoassays. 4.6 Conclusions: the place of non-isotopic binding assays. Chapter 5: 5.1 Antibodies and the immune response. 5.2 Preparation of Antisera for use in RIA. 5.3 'Monoclonal' antibodies. 5.4 Cell receptors. 5.5 Circulating binding proteins. 5.6 Assays for the detection of endogenous antibodies, circulating binding proteins and receptors. Chapter 6: 6.1 Efficiency of separation procedures. 6.2 Practicality of separation procedures. 6.3 Methods of separation of bound and free ligand. 6.4 Immunometric techniques. Chapter 7: 7.1 General aspects of extraction procedures. 7.2 Extraction using particulate adsorbents. 7.3 Extraction with immunoadsorbents. 7.4 Extraction with organic solvents. 7.5 Dissociation procedures. 7.6 Measurement of 'free' hormone or drug. 7.7 Conclusions - the elimination of extraction procedures. 7.8 Sample collection and transport for immunoassay. Chapter 8: 8.1 Calculation of results by manual extrapolation. 8.2 Data transformation of the standard curve. 8.3 The logit transformation. 8.4 Identification of outliers. 8.5 Estimation of confidence limits to the result of an unknown. 8.6 Computer calculation of results. 8.7 Calculation of results of labelled antibody assays. 8.8 Presentation of results. Chapter 9: 9.1 Definition of sensitivity. 9.2 Methods of increasing the sensitivity of a labelled antigen immunoassay. 9.3 Methods of increasing the sensitivity of a labelled antibody assay (immunometric assays). 9.4 Methods of decreasing the sensitivity of an assay. 9.5 The low-dose hook effect. 9.6 Targeting of binding assays - the importance of ranges. 9.7 Optimisation of an assay by theoretical analysis. 9.8 Conclusions. Chapter 10: 10.1 Definition of specificity. 10.2 Specific non-specificity. 10.3 Non-specific non-specificity. Chapter 11: 11.1 Definitions. 11.2 Factors affecting precision. 11.3 Quality control to monitor the precision of a binding assay. 11.4 Practical use of a quality-control scheme. 11.5 External quality control schemes. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. 
Coefficient of variation (CV) <s> The coefficient of variation is often used as a guide of the repeatability of measurements in clinical trials and other medical work. When possible, one makes repeated measurements on a set of individuals to calculate the relative variability of the test with the understanding that a reliable clinical test should give similar results when repeated on the same patient. There are times, however, when repeated measurements on the same patient are not possible. Under these circumstances, to combine results from different clinical trials or test sites, it is necessary to compare the coefficients of variation of several clinical trials. Using the work of Miller, we develop a general statistic for testing the hypothesis that the coefficients of variation are the same for k populations, with unequal sample sizes. This statistic is invariant under the choice of the order of the populations, and is asymptotically χ2. We provide an example using data from Yang and HayGlass. We compare the size and the power of the test to that of Bennett, Doornbos and Dijkstra and a statistic based on Hedges and Olkin. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> An approximate confidence interval is proposed for a robust measure of relative dispersion-the coefficient of quartile variation. The proposed method provides an alternative to interval estimates for other measures of relative dispersion. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> Inference for the coefficient of variation in normal distributions is considered. An explicit estimator of a coefficient of variation that is shared by several populations with normal distributions is proposed. Methods for making confidence intervals and statistical tests, based on McKay's approximation for the coefficient of variation, are provided. Exact expressions for the first two moments of McKay's approximation are given. An approximate F-test for equality of a coefficient of variation that is shared by several normal distributions and a coefficient of variation that is shared by several other normal distributions is introduced. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> SummaryTo determine the laboratory reproducibility of urine N-telopeptide and serum bone-specific alkaline phosphatase measurements, we sent identical specimens to six US commercial labs over an 8-month period. Longitudinal and within-run laboratory reproducibility varied substantially. Efforts to improve the reproducibility of these tests are needed.IntroductionWe assessed the laboratory reproducibility of urine N-telopeptide (NTX) and serum bone-specific alkaline phosphatase (BAP).MethodsSerum and urine were collected from five postmenopausal women, pooled, divided into identical aliquots, and frozen. To evaluate longitudinal reproducibility, identical specimens were sent to six US commercial labs on five dates over an 8-month period. To evaluate within-run reproducibility, on the fifth date, each lab was sent five identical specimens. Labs were unaware of the investigation.ResultsLongitudinal coefficients of variation (CVs) ranged from 5.4% to 37.6% for NTX and from 3.1% to 23.6% for BAP. 
Within-run CVs ranged from 1.5% to 17.2% for NTX. Compared to the Osteomark NTX assay, the Vitros ECi NTX assay had significantly higher longitudinal reproducibility (mean CV 7.2% vs. 30.3%, p < 0.0005) and within-run reproducibility (mean CV 3.5% vs. 12.7%, p < 0.0005).ConclusionsReproducibility of urine NTX and serum BAP varies substantially across US labs. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> Abstract The need to measure and benchmark university governance practices at institutional level has been growing, yet there is a lack of a comprehensive, weighted indicator system to facilitate the process. The paper discusses the development of university governance indicators and their weighting system using a three-round Delphi method. Discussions, a questionnaire, and interviews were used in Round 1 to 3, respectively, to collect experts’ opinions to construct the indicator list and indicator weights, and to shed light on the divergence of expert judgements on some aspects. Non-parametric statistical techniques were applied to analyse the survey data. Ninety-one indicators grouped in five dimensions of university governance, namely Management and Direction, Participation, Accountability, Autonomy and Transparency, were proposed and rated in terms of their importance. The preliminary results show relatively high levels of importance for all of the proposed indicators, thus none was excluded. The weighting of the indicators and factors vary remarkably. Among the five dimensions, Participation is found to be the least important; experts’ consensus is found to be low in Participation and Transparency. The study results also provide important implications to researchers and practitioners in university governance. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> The problem of testing the equality of coefficients of variation of independent normal populations is considered. For comparing two coefficients, we consider the signed-likelihood ratio test (SLRT) and propose a modified version of the SLRT, and a generalized test. Monte Carlo studies on the type I error rates of the tests indicate that the modified SLRT and the generalized test work satisfactorily even for very small samples, and they are comparable in terms of power. Generalized confidence intervals for the ratio of (or difference between) two coefficients of variation are also developed. A modified LRT for testing the equality of several coefficients of variation is also proposed and compared with an asymptotic test and a simulation-based small sample test. The proposed modified LRTs seem to be very satisfactory even for samples of size three. The methods are illustrated using two examples. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> ABSTRACTThis article examines confidence intervals for the single coefficient of variation and the difference of coefficients of variation in the two-parameter exponential distributions, using the method of variance of estimates recovery (MOVER), the generalized confidence interval (GCI), and the asymptotic confidence interval (ACI). 
In simulation, the results indicate that coverage probabilities of the GCI maintain the nominal level in general. The MOVER performs well in terms of coverage probability when data only consist of positive values, but it has wider expected length. The coverage probabilities of the ACI satisfy the target for large sample sizes. We also illustrate our confidence intervals using a real-world example in the area of medical science. <s> BIB009
The coefficient of variation (CV), also known as the relative standard deviation (RSD), is a standardized measure of dispersion used to express the precision of an assay (an intra-assay CV below 10%, computed on the same sample assayed in duplicate, is considered acceptable; an interassay CV below 15%, computed across assay runs, is deemed acceptable) BIB002. The coefficient of variation was introduced by Karl Pearson in 1896 BIB001 and can also be used to test the reliability of a method (the smaller the CV, the higher the reliability), to compare methods (the smallest CV belongs to the better method), or to compare variables expressed in different units BIB009. The CV is defined as the ratio of the standard deviation to the mean, expressed as a percentage [116], and is correctly calculated on quantitative data measured on the ratio scale. The coefficient of quartile variation/dispersion (CQV/CQD) was introduced as a preferred measure of dispersion when data do not follow the normal distribution and is defined based on the third and first quartiles as (Q3 − Q1)/(Q3 + Q1) × 100 BIB004. In survey analysis, the CQV is used as a measure of convergence of experts' opinions BIB007. The confidence interval associated with the CV is expected to be reported in order to provide readers with sufficient information for a correct interpretation of the results, and several online implementations are available (Table 5). Inference on CVs can be made using specific statistical tests according to the distribution of the data. For normal distributions, tests are available to compare two BIB005 or more than two CVs (the Feltz and Miller test BIB003 or the Krishnamoorthy and Lee test BIB008, the latter also implemented in R). Reporting CVs with the associated 95% confidence intervals allows a proper interpretation of the point estimate (CV). Schafer et al. BIB006 investigated the laboratory reproducibility of urine N-telopeptide (NTX) and serum bone-specific alkaline phosphatase (BAP) measurements in six labs over eight months and correctly reported the CVs with the associated 95% confidence intervals. Furthermore, they also compared the CVs between two assays and between labs and highlighted the need for improvements in the analytical precision of both the NTX and BAP biomarkers BIB006. They concluded by stressing the importance of making laboratory performance reports available to clinicians and institutions, along with the need for proficiency testing and standardized guidelines to improve marker reproducibility BIB006.
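As a concrete illustration of how these dispersion measures could be computed and accompanied by an interval estimate, the Python sketch below calculates the CV and the CQV for a small sample and attaches percentile bootstrap 95% confidence intervals. The bootstrap is used here only as a simple, distribution-free stand-in; exact or approximate analytical intervals (e.g., based on McKay's approximation) or the online tools referenced in Table 5 would normally be preferred. The sample values and the number of bootstrap resamples are assumptions of this example.

```python
import numpy as np

def cv(x):
    """Coefficient of variation: SD / mean, expressed as a percentage."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def cqv(x):
    """Coefficient of quartile variation: (Q3 - Q1) / (Q3 + Q1), as a percentage."""
    q1, q3 = np.percentile(x, [25, 75])
    return 100.0 * (q3 - q1) / (q3 + q1)

def bootstrap_ci(x, statistic, n_boot=5000, seed=0):
    """Percentile bootstrap 95% confidence interval for a sample statistic."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    boots = [statistic(rng.choice(x, size=x.size, replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return round(float(lo), 2), round(float(hi), 2)

# Hypothetical duplicate measurements of one analyte (ratio-scale data)
sample = np.array([4.1, 4.4, 3.9, 4.6, 4.2, 4.8, 4.0, 4.3, 4.5, 4.1])
print(f"CV  = {cv(sample):.2f}%, 95% CI {bootstrap_ci(sample, cv)}")
print(f"CQV = {cqv(sample):.2f}%, 95% CI {bootstrap_ci(sample, cqv)}")
```

Reporting the interval alongside the point estimate, as argued above, is what allows a reader to judge whether an apparently "acceptable" CV is estimated with any useful precision.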
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> We have derived the mathematical relationship between the coefficient of variation associated with repeated measurements from quantitative assays and the expected fraction of pairs of those measurements that differ by at least some given factor, i.e., the expected frequency of disparate results that are due to assay variability rather than true differences. Knowledge of this frequency helps determine what magnitudes of differences can be expected by chance alone when the particular coefficient of variation is in effect. This frequency is an operational index of variability in the sense that it indicates the probability of observing a particular disparity between two measurements under the assumption that they measure the same quantity. Thus the frequency or probability becomes the basis for assessing if an assay is sufficiently precise. This assessment also provides a standard for determining if two assay results for the same subject, separated by an intervention such as vaccination or infection, differ by more than expected from the variation of the assay, thus indicating an intervention effect. Data from an international collaborative study are used to illustrate the application of this proposed interpretation of the coefficient of variation, and they also provide support for the assumptions used in the mathematical derivation. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> BACKGROUND ::: It is vital for clinicians to understand and interpret correctly medical statistics as used in clinical studies. In this review, we address current issues and focus on delivering a simple, yet comprehensive, explanation of common research methodology involving receiver operating characteristic (ROC) curves. ROC curves are used most commonly in medicine as a means of evaluating diagnostic tests. ::: ::: ::: METHODS ::: Sample data from a plasma test for the diagnosis of colorectal cancer were used to generate a prediction model. These are actual, unpublished data that have been used to describe the calculation of sensitivity, specificity, positive predictive and negative predictive values, and accuracy. The ROC curves were generated to determine the accuracy of this plasma test. These curves are generated by plotting the sensitivity (true-positive rate) on the y axis and 1 - specificity (false-positive rate) on the x axis. ::: ::: ::: RESULTS ::: Curves that approach closest to the coordinate (x = 0, y = 1) are more highly predictive, whereas ROC curves that lie close to the line of equality indicate that the result is no better than that obtained by chance. The optimum sensitivity and specificity can be determined from the graph as the point where the minimum distance line crosses the ROC curve. This point corresponds to the Youden index (J), a function of sensitivity and specificity used commonly to rate diagnostic tests. The area under the curve is used to quantify the overall ability of a test to discriminate between 2 outcomes. ::: ::: ::: CONCLUSION ::: By following these simple guidelines, interpretation of ROC curves will be less difficult and they can then be interpreted more reliably when writing, reviewing, or analyzing scientific papers. 
<s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> ObjectiveHundreds of scientific publications are produced annually that involve the measurement of cortisol in saliva. Intra- and inter-laboratory variation in salivary cortisol results has the potential to contribute to cross-study inconsistencies in findings, and the perception that salivary cortisol results are unreliable. This study rigorously estimates sources of measurement variability in the assay of salivary cortisol within and between established international academic-based laboratories that specialize in saliva analyses. One hundred young adults (Mean age: 23.10 years; 62 females) donated 2 mL of whole saliva by passive drool. Each sample was split into multiple- 100 µL aliquots and immediately frozen. One aliquot of each of the 100 participants’ saliva was transported to academic laboratories (N = 9) in the United States, Canada, UK, and Germany and assayed for cortisol by the same commercially available immunoassay.Results1.76% of the variance in salivary cortisol levels was attributable to differences between duplicate assays of the same sample within laboratories, 7.93% of the variance was associated with differences between laboratories, and 90.31% to differences between samples. In established-qualified laboratories, measurement error of salivary cortisol is minimal, and inter-laboratory differences in measurement are unlikely to have a major influence on the determined values. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> IntroductionThe choice of criteria in determining optimal cut-off value is a matter of concern in quantitative diagnostic tests. Several indexes such as Youden’s index, Euclidean index, product of ... <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> AIMS ::: The purpose of this study was to determine the impact of strain elastography (SE) on the Breast Imaging Reporting Data System (BI-RADS) classification depending on invasive lobular carcinoma (ILC) lesion size. ::: ::: ::: MATERIALS AND METHODS ::: We performed a retrospective analysis on a sample of 152 female subjects examined between January 2010 - January 2017. SE was performed on all patients and ILC was subsequently diagnosed by surgical or ultrasound-guided biopsy. BI-RADS 1, 2, 6 and Tsukuba BGR cases were omitted. BI-RADS scores were recorded before and after the use of SE. The differences between scores were compared to the ILC tumor size using nonparametric tests and logistic binary regression. We controlled for age, focality, clinical assessment, heredo-collateral antecedents, B-mode and Doppler ultrasound examination. An ROC curve was used to identify the optimal cut-off point for size in relationship to BI-RADS classificationdifference using Youden's index. ::: ::: ::: RESULTS ::: The histological subtypes of ILC lesions (n=180) included in the sample were luminal A (70%, n=126), luminal B (27.78%, n=50), triple negative (1.67%, n=3) and HER2+ (0.56%, n=1). The BI-RADS classification was higher when SE was performed (Z=- 6.629, p<0.000). The ROC curve identified a cut-off point of 13 mm for size in relationship to BI-RADS classification difference (J=0.670, p<0.000). 
Small ILC tumors were 17.92% more likely to influence BI-RADS classification (p<0.000). ::: ::: ::: CONCLUSIONS ::: SE offers enhanced BI-RADS classification in small ILC tumors (<13 mm). Sonoelastography brings added value to B-mode breast ultrasound as an adjacent to mammography in breast cancer screening. <s> BIB005
However, good practice in reporting CVs is not always observed. The inter- and intra-assay CVs within laboratories reported by Calvi et al. BIB003 for measurements of cortisol in saliva are given only as point estimates, with neither confidence intervals nor statistical tests provided. Reed et al. BIB001 reported the variability of measurements of human serum antibodies against Bordetella pertussis antigens by the ELISA method (thirty-three laboratories with fifteen repeated measurements in each lab) using just the CVs (without associated 95% confidence intervals) in relation to the expected fraction of pairs of measurements that differ by at least a given factor (k). Several metrics can be used to identify the optimal cutoff value of a quantitative diagnostic test — some to be maximized and others minimized — including Youden's index (J = Se + Sp − 1, maximum), the weighted number needed to misdiagnose (maximum, taking into account the pretest probability and the cost of a misdiagnosis), and the Euclidean index BIB004. The metrics used to identify the best cutoff value are a matter of methodology and are not expected to be reported as a result (reporting a J index of 0.670 for discrimination of small invasive lobular carcinoma BIB005 is not informative, because the same J could be obtained for different values of Se and Sp: 0.97/0.77, 0.7/0.97, 0.83/0.84, etc.). Youden's index has been reported as the best metric for choosing the cutoff value BIB004 but is not able to differentiate between differences in sensitivity and specificity BIB002. Furthermore, Youden's index can be used as an indicator of quality when reported with the associated 95% confidence interval, poor quality being indicated by the presence of 0.5 in the confidence interval BIB002.
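To make the cutoff discussion concrete, the sketch below derives a threshold from an ROC analysis by maximizing Youden's index (J = Se + Sp − 1) and, for comparison, by minimizing the Euclidean distance to the ideal point (0, 1). The simulated marker values, the class sizes, and the use of scikit-learn's `roc_curve` are assumptions of this example rather than elements of the studies cited above.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: marker values for diseased (1) and healthy (0) subjects
rng = np.random.default_rng(3)
y_true = np.r_[np.ones(80), np.zeros(120)]
marker = np.r_[rng.normal(6.0, 1.5, 80), rng.normal(4.0, 1.2, 120)]

fpr, tpr, thresholds = roc_curve(y_true, marker)
sensitivity, specificity = tpr, 1.0 - fpr

youden_j = sensitivity + specificity - 1.0                          # maximize
euclid = np.sqrt((1 - sensitivity) ** 2 + (1 - specificity) ** 2)   # minimize

best_j = int(np.argmax(youden_j))
best_d = int(np.argmin(euclid))

print(f"Youden cutoff    = {thresholds[best_j]:.2f} "
      f"(Se = {sensitivity[best_j]:.2f}, Sp = {specificity[best_j]:.2f}, "
      f"J = {youden_j[best_j]:.2f})")
print(f"Euclidean cutoff = {thresholds[best_d]:.2f} "
      f"(Se = {sensitivity[best_d]:.2f}, Sp = {specificity[best_d]:.2f})")
```

Consistent with the argument above, the quantities worth reporting are the cutoff itself and the corresponding Se and Sp (ideally with confidence intervals); a J value in isolation hides the fact that very different Se/Sp pairs can yield the same index.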
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> A previously described coefficient of agreement for nominal scales, kappa, treats all disagreements equally. A generalization to weighted kappa (Kw) is presented. The Kw provides for the incorpation of ratio-scaled degrees of disagreement (or agreement) to each of the cells of the k * k table of joint nominal scale assignments such that disagreements of varying gravity (or agreements of varying degree) are weighted accordingly. Although providing for partial credit, Kw is fully chance corrected. Its sampling characteristics and procedures for hypothesis testing and setting confidence limits are given. Under certain conditions, Kw equals product-moment r. The use of unequal weights for symmetrical cells makes Kw suitable as a measure of validity. (PsycINFO Database Record (c) 2006 APA, all rights reserved) <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> In clinical measurement comparison of a new measurement technique with an established one is often needed to see whether they agree sufficiently for the new to replace the old. Such investigations are often analysed inappropriately, notably by using correlation coefficients. The use of correlation is misleading. An alternative approach, based on graphical techniques and simple calculations, is described, together with the relation between this analysis and the assessment of repeatability. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Although intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement, psychometrics, and behavioral genetics, procedures available for forming inferences about ICCs are not widely known. Following a review of the distinction between various forms of the ICC, this article presents procedures available for calculating confidence intervals and conducting tests on ICCs developed using data from one-way and two-way random and mixed-effect analysis of variance models. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. 
Finally, we describe a nonparametric approach to comparing methods. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Objective:Brain microbleeds on gradient-recalled echo (GRE) T2*-weighted MRI may be a useful biomarker for bleeding-prone small vessel diseases, with potential relevance for diagnosis, prognosis (especially for antithrombotic-related bleeding risk), and understanding mechanisms of symptoms, including cognitive impairment. To address these questions, it is necessary to reliably measure their presence and distribution in the brain. We designed and systematically validated the Microbleed Anatomical Rating Scale (MARS). We measured intrarater and interrater agreement for presence, number, and anatomical distribution of microbleeds using MARS across different MRI sequences and levels of observer experience. Methods:We studied a population of 301 unselected consecutive patients admitted to our stroke unit using 2 GRE T2*-weighted MRI sequences (echo time [TE] 40 and 26 ms). Two independent raters with different MRI rating expertise identified, counted, and anatomically categorized microbleeds. Results:At TE = 40 ms, agreement for microbleed presence in any brain location was good to very good (intrarater &kgr; = 0.85 [95% confidence interval (CI) 0.77–0.93]; interrater &kgr; = 0.68 [95% CI 0.58–0.78]). Good to very good agreement was reached for the presence of microbleeds in each anatomical region and in individual cerebral lobes. Intrarater and interrater reliability for the number of microbleeds was excellent (intraclass correlation coefficient [ICC] = 0.98 [95% CI 0.97–0.99] and ICC = 0.93 [0.91–0.94]). Very good interrater reliability was obtained at TE = 26 ms (&kgr; = 0.87 [95% CI 0.61–1]) for definite microbleeds in any location. Conclusion:The Microbleed Anatomical Rating Scale has good intrarater and interrater reliability for the presence of definite microbleeds in all brain locations when applied to different MRI sequences and levels of observer experience. GLOSSARYBOMBS = Brain Observer Microbleed Scale; CAA = cerebral amyloid angiopathy; CI = confidence interval; DPWM = deep and periventricular white matter; FA = flip angle; FLAIR = fluid-attenuated inversion recovery; FOV = field of view; GRE = gradient-recalled echo; ICC = intraclass correlation coefficient; MARS = Microbleed Anatomical Rating Scale; NEX = number of excitations; NHNN = National Hospital for Neurology and Neurosurgery; TE = echo time; TR = repetition time. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. 
He introduced the Cohen’s kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from −1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen’s suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> BACKGROUND/AIMS ::: At present, automated analysis of high-resolution manometry (HRM) provides details of upper esophageal sphincter (UES) relaxation parameters. The aim of this study was to assess the accuracy of automatic analysis of UES relaxation parameters. ::: ::: ::: MATERIALS AND METHODS ::: One hundred and fifty three subjects (78 males, mean age 68.6 years, range 26-97) underwent HRM. UES relaxation parameters were interpreted twice, once visually (V) by two experts and once automatically (AS) using the ManoView ESO analysis software. Agreement between the two analysis methods was assessed using Bland-Altman plots and Lin's concordance correlation coefficient (CCC). ::: ::: ::: RESULTS ::: The agreement between V and AS analyses of basal UES pressure (CCC 0.996; 95% confidence interval (CI) 0.994-0.997) and residual UES pressure (CCC 0.918; 95% CI 0.895-0.936) was good to excellent. Agreement for time to UES relaxation nadir (CCC 0.208; 95% CI 0.068-0.339) and UES relaxation duration (CCC 0.286; 95% CI 0.148-0.413) between V and AS analyses was poor. There was moderate agreement for recovery time of UES relaxation (CCC 0.522; 95% CI 0.397-0.627) and peak pharyngeal pressure (CCC 0.695; 95% CI 0.605-0.767) between V and AS analysis. ::: ::: ::: CONCLUSION ::: AS analysis was unreliable, especially regarding the time variables of UES relaxation. Due to the difference in the clinical interpretation of pharyngoesophageal dysfunction between V and AS analysis, the use of visual analysis is justified. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods. ::: In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on the quantification of the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement. ::: The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval, within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as unit differences plot and as percentage differences plot. 
::: The B&A plot method only defines the intervals of agreements, it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals. ::: The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> OBJECTIVE ::: Intraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the content of reliability analysis. ::: ::: ::: DISCUSSION FOR RESEARCHERS ::: There are 10 forms of ICCs. Because each form involves distinct assumptions in their calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information, "model," "type," and "definition" selections. ::: ::: ::: DISCUSSION FOR READERS ::: When coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confident interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively. ::: ::: ::: CONCLUSION ::: This article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> OBJECTIVE ::: To synthesize the literature and perform a meta-analysis for both the interrater and intrarater reliability of the FMS™. ::: ::: ::: METHODS ::: Academic Search Complete, CINAHL, Medline and SportsDiscus databases were systematically searched from inception to March 2015. Studies were included if the primary purpose was to determine the interrater or intrarater reliability of the FMS™, assessed and scored all 7-items using the standard scoring criteria, provided a composite score and employed intraclass correlation coefficients (ICCs). Studies were excluded if reliability was not the primary aim, participants were injured at data collection, or a modified FMS™ or scoring system was utilized. ::: ::: ::: RESULTS ::: Seven papers were included; 6 assessing interrater and 6 assessing intrarater reliability. There was moderate evidence in good interrater reliability with a summary ICC of 0.843 (95% CI = 0.640, 0.936; Q7 = 84.915, p < 0.0001). There was moderate evidence in good intrarater reliability with a summary ICC of 0.869 (95% CI = 0.785, 0.921; Q12 = 60.763, p < 0.0001). ::: ::: ::: CONCLUSION ::: There was moderate evidence for both forms of reliability. The sensitivity assessments revealed this interpretation is stable and not influenced by any one study. Overall, the FMS™ is a reliable tool for clinical practice. 
<s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Degenerated discs have shorter T2-relaxation time and lower MR signal. The location of the signal-intensity-weighted-centroid reflects the water distribution within a region-of-interest (ROI). This study compared the reliability of the location of the signal-intensity-weighted-centroid to mean signal intensity and area measurements. L4-L5 and L5-S1 discs were measured on 43 mid-sagittal T2-weighted 3T MRI images in adults with back pain. One rater analysed images twice and another once, blinded to measurements. Discs were semi-automatically segmented into a whole disc, nucleus, anterior and posterior annulus. The coordinates of the signal-intensity-weighted-centroid for all regions demonstrated excellent intraclass-correlation-coefficients for intra- (0.99-1.00) and inter-rater reliability (0.97-1.00). The standard error of measurement for the Y-coordinates of the signal-intensity-weighted-centroid for all ROIs were 0 at both levels and 0 to 2.7 mm for X-coordinates. The mean signal intensity and area for the whole disc and nucleus presented excellent intra-rater reliability with intraclass-correlation-coefficients from 0.93 to 1.00, and 0.92 to 1.00 for inter-rater reliability. The mean signal intensity and area had lower reliability for annulus ROIs, with intra-rater intraclass-correlation-coefficient from 0.5 to 0.76 and inter-rater from 0.33 to 0.58. The location of the signal-intensity-weighted-centroid is a reliable biomarker for investigating the effects of disc interventions. <s> BIB011
Percentage agreement (p o ), the number of agreements divided by the number of cases, is the simplest agreement coefficient that can be calculated, but it may be misleading. Several agreement coefficients that adjust the proportional agreement for the agreement expected by chance have been introduced BIB004 . Cohen's kappa coefficient has three assumptions: (i) the units are independent, (ii) the categories on the nominal scale are independent and mutually exclusive, and (iii) the readers/raters are independent. Cohen's kappa coefficient takes a value between −1 (perfect disagreement) and 1 (complete agreement), and empirical rules are used to interpret it BIB006 . Weighted kappa is used to discriminate between different readings on ordinal diagnostic test results (a different grade of disagreement exists between good and excellent compared to poor and excellent). The weights reflecting the importance of disagreement (linear, proportional to the number of categories apart, or quadratic, proportional to the square of the number of categories apart) must be established by the researcher BIB001 . Intra- and interclass correlation coefficients (ICCs) are used as measures of the reliability of measurements and have their utility in the evaluation of a diagnostic test. Interrater reliability (two or more raters measure the same group of individuals), test-retest reliability (the variation in measurements made by the same instrument on the same subject under the same conditions), and intrarater reliability (the variation of data measured by one rater across two or more trials) are commonly used BIB009 . McGraw and Wong BIB003 defined in 1996 the ten forms of ICC based on the model (1-way random effects, 2-way random effects, or 2-way fixed effects), the number of raters/measurements (single rater/measurement or the mean of k raters/measurements), and the hypothesis (consistency or absolute agreement). McGraw and Wong also discuss how to select the correct ICC and recommend reporting the ICC values along with their 95% CI BIB003 . Lin's concordance correlation coefficient (ρ c ) measures the concordance between two measurements, one of them taken as the gold standard. The range of values of Lin's concordance correlation coefficient is the same as for Cohen's kappa coefficient. The interpretation of ρ c takes into account the scale of measurement, with more strictness for continuous measurements (Table 6). For intra- and interobserver agreement, Martins and Nastri introduced the metric called limits of agreement (LoA) and proposed a cutoff < 5% for very good reliability/agreement. Reporting the ICC and/or CCC along with the associated 95% confidence intervals is good practice for agreement analysis. The results are reported in both primary studies (such as the reliability analysis of the Microbleed Anatomical Rating Scale in the evaluation of microbleeds BIB005 , the automatic analysis of relaxation parameters of the upper esophageal sphincter BIB007 , and the use of the signal-intensity-weighted centroid in magnetic resonance images of patients with disc degeneration BIB011 ) and secondary research studies (systematic review and/or meta-analysis: evaluation of the functional movement screen BIB010 , evaluation of the Manchester triage scale in an emergency department , reliability of specific physical examination tests for the diagnosis of shoulder pathologies , etc.).
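As a minimal illustration of the agreement coefficients above, the Python sketch below computes the percentage agreement, Cohen's kappa (unweighted and quadratic-weighted), and Lin's concordance correlation coefficient; the rating vectors are hypothetical, and NumPy and scikit-learn are assumed to be available.

```python
# Minimal sketch of the agreement coefficients discussed above (illustrative data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of the same 12 cases by two independent raters (ordinal scale 0-3).
rater_a = np.array([0, 1, 2, 3, 2, 1, 0, 3, 2, 1, 1, 2])
rater_b = np.array([0, 1, 2, 2, 2, 1, 1, 3, 2, 0, 1, 2])

p_o = np.mean(rater_a == rater_b)                                   # percentage agreement
kappa = cohen_kappa_score(rater_a, rater_b)                         # Cohen's kappa (chance-corrected)
w_kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")  # weighted kappa for ordinal data

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two continuous measurements."""
    mx, my = np.mean(x), np.mean(y)
    sx, sy = np.var(x), np.var(y)            # population variances
    sxy = np.mean((x - mx) * (y - my))       # population covariance
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

print(f"p_o = {p_o:.2f}, kappa = {kappa:.2f}, weighted kappa = {w_kappa:.2f}")
print(f"Lin's CCC = {lins_ccc(rater_a.astype(float), rater_b.astype(float)):.2f}")
```

In a real agreement study, each estimate would be reported with its 95% confidence interval, as recommended above.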
Altman and Bland criticized the use of correlation (a measure of association, from which it is not correct to infer that two methods can be used interchangeably), linear regression analysis (the method has several assumptions that need to be checked before application, and the assessment of residuals is mandatory for a proper interpretation), and the difference between means as approaches for comparing methods intended to measure the same quantity BIB002 BIB008 . They proposed a graphical method called the B&A plot to analyze the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement BIB004 . Whenever a gold standard method exists, the difference between the two methods is plotted against the reference values. Although the B&A plot provides the limits of agreement, no information regarding the acceptability of these boundaries is supplied, and the acceptable limits must be defined a priori based on clinical significance BIB008 .
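A minimal sketch of the B&A computation is given below, assuming NumPy and matplotlib; the paired measurements are hypothetical, and the clinically acceptable limits still have to be defined a priori as discussed above.

```python
# Minimal sketch of a Bland-Altman (B&A) analysis (illustrative data, no clinical limits implied).
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired measurements of the same quantity obtained by two methods.
method_1 = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.8, 9.5, 10.4])
method_2 = np.array([10.0, 11.9, 9.6, 12.5, 10.5, 12.0, 9.9, 10.1])

diff = method_1 - method_2
bias = np.mean(diff)                                    # mean difference between methods
sd = np.std(diff, ddof=1)                               # SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.2f}, 95% limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")

# The plot: differences against the mean of the two methods
# (or against the reference values when a gold standard exists).
plt.scatter((method_1 + method_2) / 2, diff)
for y in (bias, loa_low, loa_high):
    plt.axhline(y, linestyle="--")
plt.xlabel("Mean of the two methods")
plt.ylabel("Difference (method 1 - method 2)")
plt.show()
```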
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> If the performance of a diagnostic imaging system is to be evaluated objectively and meaningfully, one must compare radiologists' image-based diagnoses with actual states of disease and health in a way that distinguishes between the inherent diagnostic capacity of the radiologists' interpretations of the images, and any tendencies to "under-read" or "over-read". ROC methodology provides the only known basis for distinguishing between these two aspects of diagnostic performance. After identifying the fundamental issues that motivate ROC analysis, this article develops ROC concepts in an intuitive way. The requirements of a valid ROC study and practical techniques for ROC data collection and data analysis are sketched briefly. A survey of the radiologic literature indicates the broad variety of evaluation studies in which ROC analysis has been employed. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> The clinical performance of a laboratory test can be described in terms of diagnostic accuracy, or the ability to correctly classify subjects into clinically relevant subgroups. Diagnostic accuracy refers to the quality of the information provided by the classification device and should be distinguished from the usefulness, or actual practical value, of the information. Receiver-operating characteristic (ROC) plots provide a pure index of accuracy by demonstrating the limits of a test's ability to discriminate between alternative states of health over the complete spectrum of operating conditions. Furthermore, ROC plots occupy a central or unifying position in the process of assessing and using diagnostic tools. Once the plot is generated, a user can readily go on to many other activities such as performing quantitative ROC analysis and comparisons of tests, using likelihood ratio to revise the probability of disease in individual subjects, selecting decision thresholds, using logistic-regression analysis, using discriminant-function analysis, or incorporating the tool into a clinical strategy by using decision analysis. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> Receiver operating characteristic (ROC) curves are frequently used in biomedical informatics research to evaluate classification and prediction models for decision support, diagnosis, and prognosis. ROC analysis investigates the accuracy of a model's ability to separate positive from negative cases (such as predicting the presence or absence of disease), and the results are independent of the prevalence of positive cases in the study population. It is especially useful in evaluating predictive models or other tests that produce output values over a continuous range, since it captures the trade-off between sensitivity and specificity over that range. There are many ways to conduct an ROC analysis. The best approach depends on the experiment; an inappropriate approach can easily lead to incorrect conclusions. In this article, we review the basic concepts of ROC analysis, illustrate their use with sample calculations, make recommendations drawn from the literature, and list readily available software. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. 
<s> Laboratory tests provide the most definitive information for diagnosing and managing many diseases, and most patients look to laboratory tests as the most important information from a medical visit. Most patients who have rheumatoid arthritis (RA) have a positive test for rheumatoid factor and anticyclic citrullinated peptide (anti-CCP) antibodies, as well as an elevated erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP). More than 30% 40% of patients with RA, however, have negative tests for rheumatoid factor or anti-CCP antibodies or a normal ESR or CRP. More than 30% of patients with RA, however, have negative tests for rheumatoid factor or anti-CCP antibodies, and 40% have a normal ESR or CRP. These observations indicate that, although they can be helpful to monitor certain patients, laboratory measures cannot serve as a gold standard for diagnosis and management in all individual patients with RA or any rheumatic disease. Physicians and patients would benefit from an improved understanding of the limitations of laboratory tests in diagnosis and management of patients with RA. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> Abstract Background Several studies and systematic reviews have reported results that indicate that sensitivity and specificity may vary with prevalence. Study design and setting We identify and explore mechanisms that may be responsible for sensitivity and specificity varying with prevalence and illustrate them with examples from the literature. Results Clinical and artefactual variability may be responsible for changes in prevalence and accompanying changes in sensitivity and specificity. Clinical variability refers to differences in the clinical situation that may cause sensitivity and specificity to vary with prevalence. For example, a patient population with a higher disease prevalence may include more severely diseased patients, therefore, the test performs better in this population. Artefactual variability refers to effects on prevalence and accuracy associated with study design, for example, the verification of index test results by a reference standard. Changes in prevalence influence the extent of overestimation due to imperfect reference standard classification. Conclusions Sensitivity and specificity may vary in different clinical populations, and prevalence is a marker for such differences. Clinicians are advised to base their decisions on studies that most closely match their own clinical situation, using prevalence to guide the detection of differences in study population or study design. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> In 1996, shortly after the founding of The Cochrane Collaboration, leading figures in test evaluation research established a Methods Group to focus on the relatively new and rapidly evolving methods for the systematic review of studies of diagnostic tests. Seven years later, the Collaboration decided it was time to develop a publication format and methodology for Diagnostic Test Accuracy (DTA) reviews, as well as the software needed to implement these reviews in The Cochrane Library. A meeting hosted by the German Cochrane Centre in 2004 brought together key methodologists in the area, many of whom became closely involved in the subsequent development of the methodological framework for DTA reviews. 
DTA reviews first appeared in The Cochrane Library in 2008 and are now an integral part of the work of the Collaboration. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> In 1975, Fagan published a nomogram to help practitioners determine, without the use of a calculator or computer, the probability of a patient truly having a condition of interest given a particular test result. Nomograms are very useful for bedside interpretations of test results, as no test is perfect. However, the practicality of Fagan9s nomogram is limited by its use of the likelihood ratio (LR), a parameter not commonly reported in the evaluation studies of diagnostic tests. The LR reflects the direction and strength of evidence provided by a test result and can be computed from the conventional diagnostic sensitivity (DSe) and specificity (DSp) of the test. This initial computation is absent in Fagan9s nomogram, making it impractical for routine use. We have seamlessly integrated the initial step to compute the LR and the resulting two-step nomogram allows the user to quickly interpret the outcome of a test. With the addition of the DSe and DSp, the nomogram, for the purposes of interpreting a dichotomous test result, is now complete. This tool is more accessible and flexible than the original, which will facilitate its use in routine evidence-based practice. The nomogram can be downloaded at: www.adelaide.edu.au/vetsci/research/pub_pop/2step-nomogram/. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> Evaluation of diagnostic performance is a necessary component of new developments in many fields including medical diagnostics and decision making. The methodology for statistical analysis of diagnostic performance continues to develop, offering new analytical tools for conventional inferences and solutions for novel and increasingly more practically relevant questions. ::: ::: In this paper we focus on the partial area under the Receiver Operating Characteristic (ROC) curve, or pAUC. This summary index is considered to be more practically relevant than the area under the entire ROC curve (AUC), but because of several perceived limitations, it is not used as often. In order to improve interpretation, results for pAUC analysis are frequently reported using a rescaled index such as the standardized partial AUC proposed by McClish (1989). ::: ::: We derive two important properties of the relationship between the “standardized” pAUC and the defined range of interest, which could facilitate a wider and more appropriate use of this important summary index. First, we mathematically prove that the “standardized” pAUC increases with increasing range of interest for practically common ROC curves. Second, using comprehensive numerical investigations we demonstrate that, contrary to common belief, the uncertainty about the estimated standardized pAUC can either decrease or increase with an increasing range of interest. ::: ::: Our results indicate that the partial AUC could frequently offer advantages in terms of statistical uncertainty of the estimation. In addition, selection of a wider range of interest will likely lead to an increased estimate even for standardized pAUC. <s> BIB008
The accuracy of a diagnostic test reflects the extent to which the test gives the right answer, and the evaluation is done relative to the best available test (also known as the gold standard or reference test, a hypothetical ideal test with sensitivity (Se) = 100% and specificity (Sp) = 100%) able to reveal the right answer. Microscopic examination is considered the gold standard in the diagnostic process but cannot be applied to every disease (e.g., stable coronary artery disease , rheumatologic diseases BIB004 , psychiatric disorders , and rare diseases with a not yet fully developed histological assessment [155] ). The factors that could affect the accuracy of a diagnostic test can be summarized as follows BIB005 BIB006 : sampling bias, an imperfect gold standard test, artefactual variability (e.g., changes in prevalence due to inappropriate design) or clinical variability (e.g., patient spectrum and "gold-standard" threshold), subgroup differences, or reader expectations. Several metrics calculated from the 2 × 2 contingency table are frequently used to assess the accuracy of a diagnostic test. A gold standard or reference test is used to classify each subject either in the group with the disease or in the group without the disease of interest. Whatever the type of data produced by the diagnostic test, a 2 × 2 contingency table can be created and used to compute the accuracy metrics. The generic structure of a 2 × 2 contingency table is presented in Table 7, and if the diagnostic test has high accuracy, a significant association with the reference test is observed (significant Chi-square test or equivalent (for details, see )). Several standard indicators and three additional metrics useful in the assessment of the accuracy of a diagnostic test are briefly presented in Tables 8 and 9. The effect of a positive or negative diagnosis on the probability that a patient has, or does not have, a particular disease can be investigated using Fagan's diagram . Fagan's nomogram is frequently referred to in the context of evidence-based medicine, reflecting the decision-making for a particular patient BIB007 . The Bayes' theorem nomogram was published in 2011, incorporating in the prediction of the posttest probability the following metrics: pretest probability, pretest odds (for and against), PLR or NLR, posttest odds (for and against), and posttest probability . The latest form of Fagan's nomogram, called the two-step Fagan's nomogram, considers the pretest probability, Se (Se of the test for PLR), LRs, and Sp (Sp of the test for NLR) in predicting the posttest probability BIB007 . The total on the rows represents the number of subjects with positive and, respectively, negative test results; the total on the columns represents the number of subjects with (disease present) and, respectively, without (disease absent) the disease of interest; and the classification as test positive/test negative is done using the cutoff value for ordinal and continuous data.
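The sketch below illustrates, on hypothetical counts, how the 2 × 2 table metrics and a Fagan-style posttest probability can be obtained; it is not a substitute for the nomogram itself, and in a real analysis each estimate should be accompanied by its confidence interval.

```python
# Minimal sketch of the 2x2-table accuracy metrics and a Fagan-style posttest probability
# (hypothetical counts: test+/disease+, test+/disease-, test-/disease+, test-/disease-).
tp, fp, fn, tn = 45, 10, 5, 140

se = tp / (tp + fn)               # sensitivity
sp = tn / (tn + fp)               # specificity
ppv = tp / (tp + fp)              # positive predictive value
npv = tn / (tn + fn)              # negative predictive value
plr = se / (1 - sp)               # positive likelihood ratio
nlr = (1 - se) / sp               # negative likelihood ratio

def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' rule as used in Fagan's nomogram: pretest odds x LR = posttest odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

print(f"Se={se:.2f}, Sp={sp:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}, PLR={plr:.1f}, NLR={nlr:.2f}")
print("Posttest probability after a positive test (pretest 20%):",
      round(posttest_probability(0.20, plr), 2))
```

The same odds-based calculation underlies the two-step nomogram: the pretest probability is converted to odds, multiplied by the likelihood ratio of the observed result, and converted back to a probability.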
(ii) PLR (the higher, the better): (a) PLR > 10 ⟶ convincing diagnostic evidence; (b) 5 < PLR < 10 ⟶ strong diagnostic evidence. Negative likelihood ratio (NLR/LR−) * (1 − Se)/Sp: (i) indicates how much the odds of the disease decrease when a test is negative (indicator to rule out); (ii) NLR (the lower, the better): (a) NLR < 0.1 ⟶ convincing diagnostic evidence; (b) 0.1 < NLR < 0.2 ⟶ strong diagnostic evidence. Receiver operating characteristic (ROC) analysis is conducted to investigate the accuracy of a diagnostic test when the outcome is quantitative or ordinal with at least five classes BIB002 BIB001 . ROC analysis evaluates the ability of a diagnostic test to discriminate positive from negative cases. Several metrics related to ROC analysis are reported in the evaluation of a diagnostic test, and the most frequently used metrics are described in Table 10 BIB003 BIB008 . The closer the curve is to the left-upper corner of the graph, the better the test. Different metrics are used to choose the cutoff for the optimum Se and Sp, such as Youden's index (J = Se + Sp − 1, taken at its maximum).
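The following sketch, assuming scikit-learn and purely illustrative data, shows how the ROC curve, the AUC, and a Youden-index-based cutoff could be computed for a quantitative test result.

```python
# Minimal sketch of an ROC analysis with cutoff selection by Youden's index (illustrative data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

disease = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])   # reference standard (1 = disease present)
score = np.array([0.10, 0.30, 0.20, 0.55, 0.60, 0.50,
                  0.45, 0.80, 0.70, 0.90, 0.65, 0.85])      # quantitative test result

auc = roc_auc_score(disease, score)
fpr, tpr, thresholds = roc_curve(disease, score)

youden_j = tpr - fpr                 # J = Se + Sp - 1 at each candidate cutoff
best = np.argmax(youden_j)

print(f"AUC = {auc:.2f}")
print(f"Optimal cutoff (max J) = {thresholds[best]:.2f} "
      f"with Se = {tpr[best]:.2f}, Sp = {1 - fpr[best]:.2f}")
```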
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background Background. There is uncertainty about the diagnostic significance of specific symptoms of major depressive disorder (MDD). There is also interest in using one or two specific symptoms in the development of brief scales. Our aim was to elucidate the best possible specific symptoms that would assist in ruling in or ruling out a major depressive episode in a psychiatric out-patient setting. Method A total of 1523 psychiatric out-patients were evaluated in the Methods to Improve Diagnostic Assessment and Services (MIDAS) project. The accuracy and added value of specific symptoms from a comprehensive item bank were compared against the Structured Clinical Interview for DSM-IV (SCID). Results The prevalence of depression in our sample was 54.4%. In this high prevalence setting the optimum specific symptoms for ruling in MDD were psychomotor retardation, diminished interest/pleasure and indecisiveness. The optimum specific symptoms for ruling out MDD were the absence of depressed mood, the absence of diminished drive and the absence of loss of energy. However, some discriminatory items were relatively uncommon. Correcting for frequency, the most clinically valuable rule-in items were depressed mood, diminished interest/pleasure and diminished drive. The most clinically valuable rule-out items were depressed mood, diminished interest/pleasure and poor concentration. Conclusions The study supports the use of the questions endorsed by the two-item Patient Health Questionnaire (PHQ-2) with the additional consideration of the item diminished drive as a rule-in test and poor concentration as a rule-out test. The accuracy of these questions may be different in primary care studies where prevalence differs and when they are combined into multi-question tests or algorithmic models. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Summary Background Prostate cancer is one of the leading causes of death from malignant disease among men in the developed world. One strategy to decrease the risk of death from this disease is screening with prostate-specific antigen (PSA); however, the extent of benefit and harm with such screening is under continuous debate. Methods In December, 1994, 20 000 men born between 1930 and 1944, randomly sampled from the population register, were randomised by computer in a 1:1 ratio to either a screening group invited for PSA testing every 2 years (n=10 000) or to a control group not invited (n=10 000). Men in the screening group were invited up to the upper age limit (median 69, range 67–71 years) and only men with raised PSA concentrations were offered additional tests such as digital rectal examination and prostate biopsies. The primary endpoint was prostate-cancer specific mortality, analysed according to the intention-to-screen principle. The study is ongoing, with men who have not reached the upper age limit invited for PSA testing. This is the first planned report on cumulative prostate-cancer incidence and mortality calculated up to Dec 31, 2008. This study is registered as an International Standard Randomised Controlled Trial ISRCTN54449243. Findings In each group, 48 men were excluded from the analysis because of death or emigration before the randomisation date, or prevalent prostate cancer. 
In men randomised to screening, 7578 (76%) of 9952 attended at least once. During a median follow-up of 14 years, 1138 men in the screening group and 718 in the control group were diagnosed with prostate cancer, resulting in a cumulative prostate-cancer incidence of 12·7% in the screening group and 8·2% in the control group (hazard ratio 1·64; 95% CI 1·50–1·80; p Interpretation This study shows that prostate cancer mortality was reduced almost by half over 14 years. However, the risk of over-diagnosis is substantial and the number needed to treat is at least as high as in breast-cancer screening programmes. The benefit of prostate-cancer screening compares favourably to other cancer screening programs. Funding The Swedish Cancer Society, the Swedish Research Council, and the National Cancer Institute. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Both obesity and breast cancer incidence increased dramatically during two recent decades in a rapidly changing society in northern Iran. In this study, we examined the ability of body mass index (BMI) and waist circumference (WC) as predictor biomarkers of breast cancer risk in Iranian women. In a case–control study of 100 new cases of histological confirmed breast cancer and 200 age-matched controls, in Babol, we measured weight, height, waist and hip circumference at time of diagnosis with standard methods. The data of demographic, characteristics, reproductive and lifestyle factors were collected by interview. We used both regression and receiver operator characteristics (ROC) analysis to estimate the predictive ability of BMI and WC for breast cancer as estimated by area under the curve (AUC). The results showed a significant difference in the mean of weight, BMI and WC between patients and controls in pre- and postmenopausal women (P < 0.001). While after adjusting for BMI, no longer an association between WC and breast cancer was found. The overall accuracy of observed BMI and WC were 0.79 (95% CI: 0.74–0.84) and 0.68 (95% CI: 0.61–0.74), respectively. The accuracy of BMI and WC were 0.82 (95% CI: 0.76–0.89), 0.75(0.67–0.83) for premenopausal and 0.77(0.68–0.85), 0.60 (0.50–0.71) for postmenopausal women, respectively. BMI and WC are predictor biomarkers of breast cancer risk in both pre- and postmenopausal Iranian women while after adjusting for BMI, no longer an association between WC and breast cancer was observed. These findings imply to perform breast cancer screening program in women with a higher BMI and WC. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background ::: In most of the world, microbiologic diagnosis of tuberculosis (TB) is limited to microscopy. Recent guidelines recommend culture-based diagnosis where feasible. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Objective ::: To evaluate the diagnostic performance of digital breast tomosynthesis (DBT) and digital mammography (DM) for benign and malignant lesions in breasts. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background. 
Diagnostic evaluations of dementia are often performed in primary health care (PHC). Cognitive evaluation requires validated instruments. Objective. To investigate the diagnostic accuracy and clinical utility of Cognistat in a primary care population. Methods. Participants were recruited from 4 PHC centres; 52 had cognitive symptoms and 29 were presumed cognitively healthy. Participants were tested using the Mini-Mental State Examination (MMSE), the Clock Drawing Test (CDT), and Cognistat. Clinical diagnoses, based on independent neuropsychological examination and a medical consensus discussion in secondary care, were used as criteria for diagnostic accuracy analyses. Results. The sensitivity, specificity, positive predictive value, and negative predictive value were 0.85, 0.79, 0.85, and 0.79, respectively, for Cognistat; 0.59, 0.91, 0.90, and 0.61 for MMSE; 0.26, 0.88, 0.75, and 0.46 for CDT; 0.70, 0.79, 0.82, and 0.65 for MMSE and CDT combined. The area under the receiver operating characteristic curve was 0.82 for Cognistat, 0.75 for MMSE, 0.57 for CDT, and 0.74 for MMSE and CDT combined. Conclusions. The diagnostic accuracy and clinical utility of Cognistat was better than the other tests alone or combined. Cognistat is well adapted for cognitive evaluations in PHC and can help the general practitioner to decide which patients should be referred to secondary care. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: Several instruments have been developed to screen Parkinson's disease (PD); yet, there is no consensus on the items, number of questions, and diagnostic accuracy. We aimed to develop a new questionnaire combining the best items with highest validity to screen parkinsonism and to compare its diagnostic value with that of the previous instruments using the same database. ::: ::: ::: METHODS ::: 157 patients with parkinsonism and 110 healthy controls completed a comprehensive screening questionnaire consisting of 25 items on different PD symptoms used in previous studies. To select the optimal items, clinical utility index (CUI) was calculated and those who met at least good negative utility (CUI ≥0.64) were selected. Receiver operating characteristics (ROC) curves analysis was used to compare the area under the curve (AUC) of different screening instruments. ::: ::: ::: RESULTS ::: Six items on 'stiffness & rigidity', 'tremor & shaking', 'troublesome buttoning', 'troublesome arm swing', 'feet stuck to floor' and 'slower daily activity' demonstrated good CUI. The new screening instrument had the largest AUC (0.977) compared to other instruments. ::: ::: ::: CONCLUSIONS ::: We selected a new set of six items to screen parkinsonism, which showed higher diagnostic values compared to the previously developed questionnaires. This screening instrument could be used in population-based PD surveys in poor-resource settings. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Abstract Objective To analyse the evidence concerning the accuracy of the Mini-Mental State Examination (MMSE) as a diagnostic and screening test for the presence of delirium in adults. Method Two authors searched MEDLINE, PsychINFO and EMBASE from inception till March 2014. 
Articles were included that investigated the diagnostic validity of the MMSE to detect delirium against standardised criteria. A diagnostic validity meta-analysis was conducted. Results Thirteen studies were included representing 2017 patients in medical settings of whom 29.4% had delirium. The meta-analysis revealed the MMSE had an overall sensitivity and specificity estimate of 84.1% and 73.0%, but this was 81.1% and 82.8% in a subgroup analysis involving robust high quality studies. Sensitivity was unchanged but specificity was 68.4% (95% CI=50.9–83.5%) in studies using a predefined cutoff of Conclusion The MMSE cannot be recommended as a case-finding confirmatory test of delirium, but may be used as an initial screen to rule out high scorers who are unlikely to have delirium with approximately 93% accuracy. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background: Great concern about occupational exposure to chromium (Cr [VI]) has been reported due to escalated risk of lung cancer in exposed workers. Consequences of occupational exposure to Cr (VI) have been reported as oxidative stress and lung tissue damage. Objective: To investigate the feasibility of biological effect monitoring of chrome electroplaters through analysis of serum malondialdehyde (MDA). Methods: 90 workers directly involved in chrome electroplating—categorized into three equal groups based on their job as near bath workers, degreaser, and washers—and 30 workers without exposure to Cr (VI), served as the control group, were studied. Personal samples were collected and analyzed according to NIOSH method 7600. Serum MDA level was measured by HPLC using a UV detector. Results: Median Cr (VI) exposure level was 0.38 mg/m 3 in near bath workers, 0.20 mg/m 3 in degreasers, and 0.05 mg/m 3 in washers. The median serum MDA level of three exposed groups (2.76 μmol/L) was significantly (p<0.001) higher than that in the control group (2.00 μmol/L). There was a positive correlation between electroplaters' level of exposure to Cr (VI) and their serum MDA level (Spearman's ρ 0.806, p<0.001). Conclusion: Serum MDA level is a good biomarker for the level of occupational exposure to Cr (VI) in electroplaters. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Objective ::: Systemic inflammatory response syndrome (SIRS)-based severe sepsis screening algorithms have been utilised in stratification and initiation of early broad spectrum antibiotics for patients presenting to EDs with suspected sepsis. We aimed to investigate the performance of some of these algorithms on a cohort of suspected sepsis patients. ::: ::: Methods ::: We conducted a retrospective analysis on an ED-based prospective sepsis registry at a tertiary Sydney hospital, Australia. Definitions for sepsis were based on the 2012 Surviving Sepsis Campaign guidelines. Numerical values for SIRS criteria and ED investigation results were recorded at the trigger of sepsis pathway on the registry. Performance of specific SIRS-based screening algorithms at sites from USA, Canada, UK, Australia and Ireland health institutions were investigated. ::: ::: Results ::: Severe sepsis screening algorithms' performance was measured on 747 patients presenting with suspected sepsis (401 with severe sepsis, prevalence 53.7%). 
Sensitivity and specificity of algorithms to flag severe sepsis ranged from 20.2% (95% CI 16.4–24.5%) to 82.3% (95% CI 78.2–85.9%) and 57.8% (95% CI 52.4–63.1%) to 94.8% (95% CI 91.9–96.9%), respectively. Variations in SIRS values between uncomplicated and severe sepsis cohorts were only minor, except a higher mean lactate (>1.6 mmol/L, P < 0.01). ::: ::: Conclusions ::: We found the Ireland and JFK Medical Center sepsis algorithms performed modestly in stratifying suspected sepsis patients into high-risk groups. Algorithms with lactate levels thresholds of >2 mmol/L rather than >4 mmol/L performed better. ED sepsis registry-based characterisation of patients may help further refine sepsis definitions of the future. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: There is a lack of studies testing accuracy of fast screening methods for alcohol use disorder in mental health settings. We aimed at estimating clinical utility of a standard single-item test for case finding and screening of DSM-5 alcohol use disorder among individuals suffering from anxiety and mood disorders. ::: ::: ::: METHODS ::: We recruited adults consecutively referred, in a 12-month period, to an outpatient clinic for anxiety and depressive disorders. We assessed the National Institute on Alcohol Abuse and Alcoholism (NIAAA) single-item test, using the Mini- International Neuropsychiatric Interview (MINI), plus an additional item of Composite International Diagnostic Interview (CIDI) for craving, as reference standard to diagnose a current DSM-5 alcohol use disorder. We estimated sensitivity and specificity of the single-item test, as well as positive and negative Clinical Utility Indexes (CUIs). ::: ::: ::: RESULTS ::: 242 subjects with anxiety and mood disorders were included. The NIAAA single-item test showed high sensitivity (91.9%) and specificity (91.2%) for DSM-5 alcohol use disorder. The positive CUI was 0.601, whereas the negative one was 0.898, with excellent values also accounting for main individual characteristics (age, gender, diagnosis, psychological distress levels, smoking status). ::: ::: ::: DISCUSSION ::: Testing for relevant indexes, we found an excellent clinical utility of the NIAAA single-item test for screening true negative cases. Our findings support a routine use of reliable methods for rapid screening in similar mental health settings. <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: Chitotriosidase is an enzyme secreted by activated macrophages. This study aims to investigate the usefulness of circulating chitotriosidase activity as a marker of inflammatory status in patients with critical limb ischemia (CLI). ::: ::: ::: MATERIALS AND METHODS ::: An observational gender-matched case-control study was conducted on patients hospitalized with the primary diagnosis of CLI, as well as a control group. The control group consisted of healthy volunteers. ::: ::: ::: RESULTS ::: Forty-three patients were included in each group. Similar demographic characteristics (median age of 60-62 years and overweight) were observed in both groups. 
Chitotriosidase activity ranged from 110 nmol/ml/hr to 1530 nmol/ml/hr in the CLI group and from 30 nmol/ml/hr to 440 nmol/ml/hr in the control group; demonstrating significantly elevated values in the CLI group (p<0.001). Median plasma chitotriosidase activity was significantly elevated in smokers compared with non-smokers in both groups (p<0.05). However, this activity had higher values in CLI than in control subjects. Receiver operating characteristic (ROC) analysis was then performed in order to verify the diagnostic accuracy of chitotriosidase as an inflammatory biomarker in CLI. ::: ::: ::: CONCLUSION ::: Circulating chitotriosidase is a test which can potentially be used for the monitoring of CLI patients without other inflammatory conditions. However, the interpretation of elevated values must take into account the inflammatory response induced by tobacco exposure. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> “What can be asserted without evidence can also be dismissed without evidence.” —Christopher Hitchens [1949–2011], British journalist and writer (1). <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND AND OBJECTIVES ::: Previous limited experiences have reported the 19-gauge flexible needle to be highly effective in performing endoscopic ultrasound-guided fine needle biopsy (EUS-FNB) for transduodenal lesions. We designed a large multicenter prospective study with the aim at evaluating the performance of this newly developed needle. ::: ::: ::: PATIENTS AND METHODS ::: Consecutive patients with solid lesions who needed to undergo EUS sampling from the duodenum were enrolled in 6 tertiary care referral centers. Puncture of the lesion was performed with the 19-gauge flexible needle (Expect™ and Slimline Expect™ 19 Flex). The feasibility, procurement yield, and diagnostic accuracy were evaluated. ::: ::: ::: RESULTS ::: Totally, 246 patients (144 males, mean age 65.1 ± 12.7 years) with solid lesions (203 cases) or enlarged lymph nodes (43 cases) were enrolled, with a mean size of 32.6 ± 12.2 mm. The procedure was technically feasible in 228 patients, with an overall procurement yield of 76.8%. Two centers had suboptimal procurement yields (66.7% and 64.2%). Major complications occurred in six cases: two of bleeding, two of mild acute pancreatitis, one perforation requiring surgery, and one duodenal hematoma. Considering malignant versus nonmalignant disease, the sensitivity, specificity, positive/negative likelihood ratios, and diagnostic accuracy were 70.7% (95% confidence interval [CI]: 64.3-76.6), 100% (95% CI: 79.6-100), 35.3 (95% CI: 2.3-549.8)/0.3 (95% CI: 0.2-0.4), and 73.6% (95% CI: 67.6-79). On multivariate analysis, the only determinant of successful EUS-FNB was the center in which the procedure was performed. ::: ::: ::: CONCLUSIONS ::: Our results suggest that the use of the 19-gauge flexible needle cannot be widely advocated and its implementation should receive local validation after careful evaluation of both the technical success rates and diagnostic yield. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. 
<s> Purpose ::: To evaluate the diagnostic value of integrated positron emission tomography/magnetic resonance imaging (PET/MRI) compared with conventional multiparametric MRI and PET/computed tomography (CT) for the detailed and accurate segmental detection/localization of prostate cancer. ::: ::: Materials and Methods ::: Thirty-one patients who underwent integrated PET/MRI using 18F-choline and 18F-FDG with an integrated PET/MRI scanner followed by radical prostatectomy were included. The prostate was divided into six segments (sextants) according to anatomical landmarks. Three radiologists noted the presence and location of cancer in each sextant on four different image interpretation modalities in consensus (1, multiparametric MRI; 2, integrated 18F-FDG PET/MRI; 3, integrated 18F-choline PET/MRI; and 4, combined interpretation of 1 and 18F-FDG PET/CT). Sensitivity, specificity, accuracy, positive and negative predictive values, likelihood ratios, and diagnostic performance based on the DOR (diagnostic odds ratio) and NNM (number needed to misdiagnose) were evaluated for each interpretation modality, using the pathologic result as the reference standard. Detection rates of seminal vesicle invasion and extracapsular invasion were also evaluated. ::: ::: Results ::: Integrated 18F-choline PET/MRI showed significantly higher sensitivity than did multiparametric MRI alone in high Gleason score patients (77.0% and 66.2%, P = 0.011), low Gleason score patients (66.7% and 47.4%, P = 0.007), and total patients (72.5% and 58.0%, P = 0.008) groups. Integrated 18F-choline PET/MRI and 18F-FDG PET/MRI showed similar sensitivity and specificity to combined interpretation of multiparametric MRI and 18F-FDG PET/CT (for sensitivity, 58.0%, 63.4%, 72.5%, and 68.7%, respectively, and for specificity, 87.3%, 80.0%, 81.8%, 72.7%, respectively, in total patient group). However, integrated 18F-choline PET/MRI showed the best diagnostic performance (as DOR, 11.875 in total patients, 27.941 in high Gleason score patients, 5.714 in low Gleason score groups) among the imaging modalities, regardless of Gleason score. Integrated 18F-choline PET/MRI showed higher sensitivity and diagnostic performance than did integrated 18F-FDG PET/MRI (as DOR, 6.917 in total patients, 15.143 in high Gleason score patients, 3.175 in low Gleason score groups) in all three patient groups. ::: ::: Conclusion ::: Integrated PET/MRI carried out using a dedicated integrated PET/MRI scanner provides better sensitivity, accuracy, and diagnostic value for detection/localization of prostate cancer compared to multiparametric MRI. Generally, integrated 18F-choline PET/MRI shows better sensitivity, accuracy, and diagnostic performance than does integrated 18F-FDG PET/MRI as well as combined interpretation of multiparametric MRI with 18F-FDG PET/CT. J. Magn. Reson. Imaging 2016. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Biliary atresia is a progressive infantile cholangiopathy of complex pathogenesis. Although early diagnosis and surgery are the best predictors of treatment response, current diagnostic approaches are imprecise and time-consuming. We used large-scale, quantitative serum proteomics at the time of diagnosis of biliary atresia and other cholestatic syndromes (serving as disease controls) to identify biomarkers of disease. 
In a discovery cohort of 70 subjects, the lead biomarker was matrix metalloproteinase-7 (MMP-7), which retained high distinguishing features for biliary atresia in two validation cohorts. Notably, the diagnostic performance reached 95% when MMP-7 was combined with γ-glutamyltranspeptidase (GGT), a marker of cholestasis. Using human tissue and an experimental model of biliary atresia, we found that MMP-7 is primarily expressed by cholangiocytes, released upon epithelial injury, and promotes the experimental disease phenotype. Thus, we propose that serum MMP-7 (alone or in combination with GGT) is a diagnostic biomarker for biliary atresia and may serve as a therapeutic target. <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Pleural or abdominal effusions are frequent findings in ICU and Internal Medicine patients. Diagnostic gold standard to distinguish between transudate and exudate is represented by “Light’s Criteria,” but, unfortunately, the chemical–physical examination for their calculation is not a rapid test. Pursuing an acid–base assessment of the fluid by a blood-gas analyzer, an increase of lactate beyond the normal serum range is reported in the exudative effusions. The advantages of this test are that it is a very fast bed-side test, executable directly by the physician. The aim of this study is to evaluate whether the increase in lactate in pleural and abdominal effusions might be used as a criterion for the differential diagnosis of the nature of the fluid. Sixty-nine patients with pleural or abdominal effusions and clinical indication for thoracentesis or paracentesis were enrolled. Acid–base assessment with lactate, total protein, and LDH dosage on the serum, and acid–base assessment with lactate, total protein, and LDH dosage, cytology, and bacterial culture on the fluid were performed to each patient. Fluid–blood lactate difference (ΔLacFB) and fluid–blood lactate ratio (LacFB ratio) were calculated. A statistical analysis was carried out for fluid lactate (LacF), ΔLacFB, and LacFB ratio, performing ROC curves to find the cut-off values with best sensitivity (Sn) and specificity (Sp) predicting an exudate diagnosis: LacF: cut-off value: 2.4 mmol/L; AU-ROC 0.854 95% CI 0.756–0.952; Sn 0.77; Sp 0.84. ΔLacFB: cut-off value: 0.95 mmol/L; Au-ROC 0.876 95% CI 0.785–0.966; Sn 0.80; Sp 0.92. LacFB ratio: cut-off value: 2 mmol/L; Au-ROC 0.730 95% CI 0.609–0.851; Sn 0.74; Sp 0.65. Lactate dosage by blood-gas analyzer on pleural and abdominal effusions seems to be a promising tool to predict a diagnosis of exudate. <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background Stroke-associated pneumonia is a leading cause of in-hospital death and post-stroke outcome. Screening patients at high risk is one of the main challenges in acute stroke units. Several screening tests have been developed, but their feasibility and validity still remain unclear. Objective The aim of our study was to evaluate the validity of four risk scores (Pneumonia score, A2DS2, ISAN score, and AIS-APS) in a population of ischemic stroke patients admitted in a French stroke unit. Methods Consecutive ischemic stroke patients admitted to a stroke unit were retrospectively analyzed. Data that allowed to retrospectively calculate the different pneumonia risk scores were recorded. 
Sensitivity and specificity of each score were assessed for in-hospital stroke-associated pneumonia and mortality. The qualitative and quantitative accuracy and utility of each diagnostic screening test were assessed by measuring the Youden Index and the Clinical Utility Index. Results Complete data were available for only 1960 patients. Pneumonia was observed in 8.6% of patients. Sensitivity and specificity were, respectively, .583 and .907 for Pneumonia score, .744 and .796 for A2DS2, and .696 and .812 for ISAN score. Data were insufficient to test AIS-APS. Stroke-associated pneumonia risk scores had an excellent negative Clinical Utility Index (.77-.87) to screen for in-hospital risk of pneumonia after acute ischemic stroke. Conclusion All scores might be useful and applied to screen stroke-associated pneumonia in stroke patients treated in French comprehensive stroke units. <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> CONTEXT ::: India is currently becoming capital for diabetes mellitus. This significantly increasing incidence of diabetes putting an additional burden on health care in India. Unfortunately, half of diabetic individuals are unknown about their diabetic status. Hence, there is an emergent need of effective screening instrument to identify "diabetes risk" individuals. ::: ::: ::: AIMS ::: The aim is to evaluate and compare the diagnostic accuracy and clinical utility of Indian Diabetes Risk Score (IDRS) and Finnish Diabetes Risk Score (FINDRISC). ::: ::: ::: SETTINGS AND DESIGN ::: This is retrospective, record-based study of diabetes detection camp organized by a teaching hospital. Out of 780 people attended this camp voluntarily only 763 fulfilled inclusion criteria of the study. ::: ::: ::: SUBJECTS AND METHODS ::: In this camp, pro forma included the World Health Organization STEP guidelines for surveillance of noncommunicable diseases. Included primary sociodemographic characters, physical measurements, and clinical examination. After that followed the random blood glucose estimation of each individual. ::: ::: ::: STATISTICAL ANALYSIS USED ::: Diagnostic accuracy of IDRS and FINDRISC compared by using receiver operative characteristic curve (ROC). Sensitivity, specificity, likelihood ratio, positive predictive and negative predictive values were compared. Clinical utility index (CUI) of each score also compared. SPSS version 22, Stata 13, R3.2.9 used. ::: ::: ::: RESULTS ::: Out of 763 individuals, 38 were new diabetics. By IDRS 347 and by FINDRISC 96 people were included in high-risk category for diabetes. Odds ratio for high-risk people in FINDRISC for getting affected by diabetes was 10.70. Similarly, it was 4.79 for IDRS. Area under curves of ROCs of both scores were indifferent (P = 0.98). Sensitivity and specificity of IDRS was 78.95% and 56.14%; whereas for FINDRISC it was 55.26% and 89.66%, respectively. CUI was excellent (0.86) for FINDRISC while IDRS it was "satisfactory" (0.54). Bland-Altman plot and Cohen's Kappa suggested fair agreement between these score in measuring diabetes risk. ::: ::: ::: CONCLUSIONS ::: Diagnostic accuracy and clinical utility of FINDRISC is fairly good than IDRS. <s> BIB019 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. 
<s> Abstract Purpose Up to 60% of people with epilepsy (PwE) have psychiatric comorbidity including anxiety. Anxiety remains under recognized in PwE. This study investigates if screening tools validated for depression could be used to detect anxiety disorders in PWE. Additionally it analyses the effect of anxiety on QoL. Method 261 participants with a confirmed diagnosis of epilepsy were included. Neurological Disorders Depression Inventory for Epilepsy (NDDI-E) and Emotional Thermometers (ET), both validated to screen for depression were used. Hospital Anxiety and Depression Scale-Anxiety (HADS-A) with a cut off for moderate and severe anxiety was used as the reference standard. QoL was measured with EQ5-D. Sensitivity, specificity, positive and negative predictive value and ROC analysis as well as multivariate regression analysis were performed. Results Patients with depression (n=46) were excluded as multivariate regression analysis showed that depression was the only significant determinant of having anxiety in the group. Against HADS-A, NDDI-E and ET-7 showed highest level of accuracy in recognizing anxiety with ET7 being the most effective tool. QoL was significantly reduced in PwE and anxiety. Conclusion Our study showed that reliable screening for moderate to severe anxiety in PwE without co-morbid depression is feasible with screening tools for depression. The cut off values for anxiety are different from those for depression in ET7 but very similar in NDDI-E. ET7 can be applied to screen simultaneously for depression and "pure" anxiety. Anxiety reduces significantly QoL. We recommend screening as an initial first step to rule out patients who are unlikely to have anxiety. <s> BIB020 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background ::: A clinical and research challenge is to identify which depressed youth are at risk of "early transition to bipolar disorders (ET-BD)." This 2-part study (1) examines the clinical utility of previously reported BD at-risk (BAR) criteria in differentiating ET-BD cases from unipolar depression (UP) controls; and (2) estimates the Number Needed to Screen (NNS) for research and general psychiatry settings. ::: ::: ::: Methods ::: Fifty cases with reliably ascertained, ET-BD I and II cases were matched for gender and birth year with 50 UP controls who did not develop BD over 2 years. We estimated the clinical utility for finding true cases and screening out non-cases for selected risk factors and their NNS. Using a convenience sample (N = 80), we estimated the NNS when adjustments were made to account for data missing from clinical case notes. ::: ::: ::: Results ::: Sub-threshold mania, cyclothymia, family history of BD, atypical depression symptoms and probable antidepressant-emergent elation, occurred significantly more frequently in ET-BD youth. Each of these "BAR-Depression" criteria demonstrated clinical utility for screening out non-cases. Only cyclothymia demonstrated good utility for case finding in research settings; sub-threshold mania showed moderate utility. In the convenience sample, the NNS for each criterion ranged from ~4 to 7. ::: ::: ::: Conclusions ::: Cyclothymia showed the optimum profile for case finding, screening and NNS in research settings. However, its presence or absence was only reported in 50% of case notes. 
Future studies of ET-BD instruments should distinguish which criteria have clinical utility for case finding vs screening. <s> BIB021 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: Experts in the autoimmune paraneoplastic field recommend autoantibody testing as "panels" to improve the poor sensitivity of individual autoantibodies in detecting paraneoplastic neurological syndromes (PNS). The sensitivity of those panels was not reported to date in a fashion devoid of incorporation bias. We aimed to assess the collective sensitivity and specificity of one of the commonly used panels in detecting PNS. ::: ::: ::: METHODS ::: A single-centered retrospective cohort of all patients tested for paraneoplastic evaluation panel (PAVAL; test ID: 83380) over one year for the suspicion of PNS. Case adjudication was based on newly proposed diagnostic criteria in line with previously published literature, but modified to exclude serological status to avoid incorporation bias. Measures of diagnostic accuracy were subsequently calculated. Cases that failed to show association with malignancy within the follow-up time studied, reflecting a possibly pure autoimmune process was considered paraneoplastic-like syndromes. ::: ::: ::: RESULTS ::: Out of 321 patients tested, 51 patients tested positive. Thirty-two patients met diagnostic criteria for paraneoplastic/paraneoplastic-like syndromes. The calculated collective sensitivity was 34% (95% CI: 17-53), specificity was 86% (95% CI: 81-90), Youden's index 0.2 and a positive clinical utility index 0.07 suggesting poor utility for case-detection. ::: ::: ::: CONCLUSION ::: This is the first reported diagnostic accuracy measures of paraneoplastic panels without incorporation bias. Despite recommended panel testing to improve detection of PNS, sensitivity remains low with poor utility for case-detection. The high-calculated specificity suggests a possible role in confirming the condition in difficult cases suspicious for PNS, when enough supportive evidence is lacking on ancillary testing. <s> BIB022 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background ::: The current prevalence of the condition is not yet known. No screening tool for the condition exists. By developing a questionnaire that may be used by community health workers, the study is intended to be the first step in identifying the prevalence of X-linked dystonia parkinsonism (XDP). ::: ::: Aim ::: To develop and validate a simple, easy to use, community-based, screening questionnaire for the diagnosis of XDP ::: ::: Methods ::: Community health workers administered an 11-item yes/no questionnaire, in the native Panay island language on 54 genetically-confirmed XDP patients and 54 healthy controls all from the island of Panay. The questionnaire is made up of elements from existing questionnaires on Parkinson's disease and dystonia, and known clinical features of XDP. The subjects were partitioned into training (n= 88) and test (n= 20) data sets. To select which items were predictive of XDP the Clinical Utility Index (CUI) of each item was determined. Afterwards, multivariable binary logistic regression was done to build a predictive model that was subsequently run on the test data set. 
::: ::: Results ::: Four items on ‘sustained twisting’, ‘jaw opening and closing’, ‘slowness in movement’ and ‘shuffling steps’ were found to be the most predictive of XDP. All had at least a ‘good’ CUI. The questions demonstrated 100% sensitivity and 90% specificity (95% CI: 65.6-100%) in identifying XDP suspects. ::: ::: Conclusion ::: The resulting 4-item questionnaire was found to be predictive of XDP. The screening instrument can be used to screen for XDP in a large-scale population-based prevalence study. ::: ::: This article is protected by copyright. All rights reserved. <s> BIB023 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> OBJECTIVE ::: This study examined whether previously reported results, indicating that prostate-specific antigen (PSA) screening can reduce prostate cancer (PC) mortality regardless of sociodemographic inequality, could be corroborated in an 18 year follow-up. ::: ::: ::: MATERIALS AND METHODS ::: In 1994, 20,000 men aged 50-64 years were randomized from the Göteborg population register to PSA screening or control (1:1) (study ID: ISRCTN54449243). Men in the screening group (n = 9950) were invited for biennial PSA testing up to the median age of 69 years. Prostate biopsy was recommended for men with PSA ≥2.5 ng/ml. Last follow-up was on 31 December 2012. ::: ::: ::: RESULTS ::: In the screening group, 77% (7647/9950) attended at least once. After 18 years, 1396 men in the screening group and 962 controls had been diagnosed with PC [hazard ratio 1.51, 95% confidence interval (CI) 1.39-1.64]. Cumulative PC mortality was 0.98% (95% CI 0.78-1.22%) in the screening group versus 1.50% (95% CI 1.26-1.79%) in controls, an absolute reduction of 0.52% (95% CI 0.17-0.87%). The rate ratio (RR) for PC death was 0.65 (95% CI 0.49-0.87). To prevent one death from PC, the number needed to invite was 231 and the number needed to diagnose was 10. Systematic PSA screening demonstrated greater benefit in PC mortality for men who started screening at age 55-59 years (RR 0.47, 95% CI 0.29-0.78) and men with low education (RR 0.49, 95% CI 0.31-0.78). ::: ::: ::: CONCLUSIONS ::: These data corroborate previous findings that systematic PSA screening reduces PC mortality and suggest that systematic screening may reduce sociodemographic inequality in PC mortality. <s> BIB024 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: The diagnosis of pediatric septic arthritis (SA) can be challenging due to wide variability in the presentation of musculoskeletal infection. Synovial fluid Gram stain is routinely obtained and often used as an initial indicator of the presence or absence of pediatric SA. The purpose of this study was to examine the clinical utility of the Gram stain results from a joint aspiration in the diagnosis and management of pediatric SA. ::: ::: ::: METHODS ::: All patients with suspected SA who underwent arthrocentesis and subsequent surgical irrigation and debridement at an urban tertiary care children's hospital between January 2007 and October 2016 were identified. Results of the synovial fluid Gram stain, as well as synovial cell count/differential and serum markers, were evaluated. ::: ::: ::: RESULTS ::: A total of 302 patients that underwent incision and drainage for suspected SA were identified. 
In total, 102 patients (34%) had positive synovial fluid cultures and 47 patients (16%) had a microorganism detected on Gram stain. Gram stain sensitivity and specificity for the detection of SA were 0.40 and 0.97, respectively. This yielded a number needed to misdiagnose of 4.5 (ie, every fifth patient was misdiagnosed by Gram stain). For gram-negative organisms, the sensitivity dropped further to 0.13, with only 2/16 gram-negative organisms identified on Gram stain. Stepwise regression showed that age, serum white blood cell, and absolute neutrophil count were significant independent predictors for having a true positive Gram stain result. Elevated synovial white blood cell count was a significant predictor of having an accurate (culture matching the Gram stain) result. ::: ::: ::: CONCLUSIONS ::: The Gram stain result is a poor screening tool for the detection of SA and is particularly ineffective for the detection of gram-negative organisms. The clinical relevance of the Gram stain and cost-effectiveness of this test performed on every joint aspiration sent for culture requires additional evaluation. Patients with gram-negative SA may be at high risk for inadequate coverage with empiric antibiotics due to poor detection of gram-negative organisms on initial Gram stain. ::: ::: ::: LEVEL OF EVIDENCE ::: Level III-case-control study. <s> BIB025 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Purpose ::: Accurate pain assessment is critical to detect pain and facilitate effective pain management in dementia patients. The electronic Pain Assessment Tool (ePAT) is a point-of-care solution that uses automated facial analysis in conjunction with other clinical indicators to evaluate the presence and intensity of pain in patients with dementia. This study aimed to examine clini-metric properties (clinical utility and predictive validity) of the ePAT in this population group. ::: ::: ::: Methods ::: Data were extracted from a prospective validation (observational) study of the ePAT in dementia patients who were ≥65 years of age, living in a facility for ≥3 months, and had Psychogeriatric Assessment Scales - cognitive scores ≥10. The study was conducted in two residential aged-care facilities in Perth, Western Australia, where residents were sampled using purposive convenience strategy. Predictive validity was measured using accuracy statistics (sensitivity, specificity, positive predictive value, and negative predictive value). Positive and negative clinical utility index (CUI) scores were calculated using Mitchell's formula. Calculations were based on comparison with the Abbey Pain Scale, which was used as a criterion reference. ::: ::: ::: Results ::: A total of 400 paired pain assessments for 34 residents (mean age 85.5±6.3 years, range 68.0-93.2 years) with moderate-severe dementia (Psychogeriatric Assessment Scales - cognitive score 11-21) were included in the analysis. Of those, 303 episodes were classified as pain by the ePAT based on a cutoff score of 7. Unadjusted prevalence findings were sensitivity 96.1% (95% CI 93.9%-98.3%), specificity 91.4% (95% CI 85.7%-97.1%), accuracy 95.0% (95% CI 92.9%-97.1%), positive predictive value 97.4% (95% CI 95.6%-99.2%), negative predictive value 87.6% (95% CI 81.1%-94.2%), CUI+ 0.936 (95% CI 0.911-0.960), CUI- 0.801 (95% CI 0.748-0.854). 
::: ::: ::: Conclusion ::: The clinimetric properties demonstrated were excellent, thus supporting the clinical usefulness of the ePAT when identifying pain in patients with moderate-severe dementia. <s> BIB026 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Abstract Introduction Age and years of education influence the risk of dementia and may impact the prognostic accuracy of mild cognitive impairment subtypes. Methods Memory clinic patients without dementia (N = 358, age 64.0 ± 7.9) were stratified into four groups based on years of age (≤64 and ≥65) and education (≤12 and ≥13), examined with a neuropsychological test battery at baseline and followed up after 2 years. Results The prognostic accuracy of amnestic multi-domain mild cognitive impairment for dementia was highest in younger patients with more years of education and lowest in older patients with fewer years of education. Conversely, conversion rates to dementia were lowest in younger patients with more years of education and highest in older patients with fewer years of education. Discussion Mild cognitive impairment subtypes and demographic information should be combined to increase the accuracy of prognoses for dementia. <s> BIB027
The body mass index (BMI) was identified as a predictor marker of breast cancer risk in an Iranian population BIB003 , with an AUC of 0.79 (95% CI: 0.74 to 0.84). A simulation dataset was used to illustrate how the performances of a diagnostic test could be evaluated, taking the BMI as a marker for breast cancer; the ROC curve with the associated AUC is presented in Figure 1 . The ROC curve graphically represents the pairs of Se and (1 − Sp) for different cutoff values. The AUC of 0.825 proved significantly different from 0.5 (p < 0.001), and the point estimator indicates a good accuracy, but an interpretation based on the lower bound of the 95% confidence interval leads to a more conservative conclusion. A cutoff with a low value is chosen whenever the aim is to minimize the number of false negatives, assuring a Se of 1 (19.5 kg/m², TP = 100, Table 10). If a test able to correctly classify the true negatives is desired, the value of the cutoff must be high (38.5 kg/m², TN = 200, Table 11), assuring a Sp of 1. The analysis of the performance metrics for our simulation dataset showed that the maximum CUI+ and CUI− values are obtained for the cutoff value identified by the J index, supporting the usefulness of the BMI for screening, not for case finding.

The accuracy analysis is reported frequently in the scientific literature, both in primary and in secondary studies. Different actors, such as the authors, reviewers, and editors, could contribute to the quality of the statistics reported. The evaluation of plasma chitotriosidase as a biomarker in critical limb ischemia reported the AUC with associated 95% confidence intervals and cutoff values BIB012 , but no information on patient-centered metrics or utility indications is provided. Similar parameters as those reported by Ciocan et al. BIB012 have also been reported in the evaluation of sonoelastographic scores in the differentiation of benign from malignant cervical lymph nodes. Lei et al. conducted a secondary study to evaluate the accuracy of digital breast tomosynthesis versus digital mammography in discriminating between malignant and benign breast lesions and correctly reported Se, Sp, PLR, NLR, and DOR for both the individual studies included in the analysis and the pooled values BIB005 . However, insufficient details are provided with regard to the ROC analysis (e.g., no confidence intervals for the AUCs are reported) or any utility index BIB005 . Furthermore, Lei et al. reported the Q* index, which reflects the point on the SROC (summary receiver-operating characteristic) curve at which Se equals Sp and which could be useful in specific clinical situations BIB005 .
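The calculations behind these metrics are easy to reproduce. The following short Python sketch is only an illustration, not the analysis of the cited study: the BMI distributions, group sizes, and cutoff grid are invented for the example. It computes Se, Sp, PPV, NPV, the Youden index J = Se + Sp − 1, and the clinical utility indexes CUI+ = Se × PPV and CUI− = Sp × NPV at the cutoff that maximizes J.

import numpy as np

# Hypothetical BMI values; the distributions are invented for illustration only.
rng = np.random.default_rng(1)
bmi_cases = rng.normal(31, 4, 100)        # subjects with the disease
bmi_controls = rng.normal(26, 4, 200)     # subjects without the disease

def metrics_at_cutoff(cases, controls, cutoff):
    tp = np.sum(cases >= cutoff)          # true positives
    fn = np.sum(cases < cutoff)           # false negatives
    tn = np.sum(controls < cutoff)        # true negatives
    fp = np.sum(controls >= cutoff)       # false positives
    se, sp = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    return {"Se": se, "Sp": sp, "PPV": ppv, "NPV": npv,
            "J": se + sp - 1, "CUI+": se * ppv, "CUI-": sp * npv}

# Scan a grid of cutoffs and keep the one maximizing the Youden index.
cutoffs = np.arange(20.0, 36.0, 0.5)
best = max(cutoffs, key=lambda c: metrics_at_cutoff(bmi_cases, bmi_controls, c)["J"])
print(best, metrics_at_cutoff(bmi_cases, bmi_controls, best))

Note that the PPV and NPV (and therefore CUI+ and CUI−) computed in this way are meaningful only when the ratio of cases to controls reflects the prevalence of the disease in the target population.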
The number needed to diagnose (NND) and the number needed to misdiagnose (NNM) are currently used in the identification of the cutoff value for continuous diagnostic test results, in methodological articles, and in teaching materials BIB013 . The NND and NNM are less frequently reported in the evaluation of the accuracy of a diagnostic test. Several examples identified in the available scientific literature are as follows: color duplex ultrasound in the diagnosis of carotid stenosis, culture-based diagnosis of tuberculosis BIB004 , prostate-specific antigen BIB024 , endoscopic ultrasound-guided fine needle biopsy with a 19-gauge flexible needle BIB014 , the number needed to screen for prostate cancer BIB002 , integrated positron emission tomography/magnetic resonance imaging (PET/MRI) for segmental detection/localization of prostate cancer BIB015 , serum malondialdehyde in the evaluation of exposure to chromium BIB009 , the performances of matrix metalloproteinase-7 (MMP-7) in the diagnosis of epithelial injury and of biliary atresia BIB016 , lactate as a diagnostic marker of pleural and abdominal exudate BIB017 , the Gram stain from a joint aspiration in the diagnosis of pediatric septic arthritis BIB025 , and the performances of a sepsis algorithm in an emergency department BIB010 . Unfortunately, the NND and NNM point estimators are not always reported with the associated 95% confidence intervals BIB004 BIB002 BIB015 BIB017 BIB025 .

The reporting of the clinical utility index (CUI) is more frequently seen in the evaluation of questionnaires. The grades, not the values, of the CUIs were reported by Michell et al. BIB001 in the assessment of a semistructured diagnostic interview as a diagnostic tool for major depressive disorder. Johansson et al. BIB006 reported both the CUI+ value and its interpretation in a cognitive evaluation using Cognistat. The CUI+/CUI− reported by Michell et al. for the patient health questionnaire for depression in primary care (PHQ-9 and PHQ-2) are given as values with associated 95% confidence intervals as well as their interpretation. The CUI+ and CUI− values and associated confidence intervals were also reported by Fereshtehnejad et al. BIB007 in the evaluation of a screening questionnaire for Parkinsonism, but just for the significant items. Fereshtehnejad et al. BIB007 also used the values of CUI+ and CUI− to select the optimal screening items whenever the value of the point estimator was higher than 0.63. Bartoli et al. BIB011 represented the values of the CUI graphically as column bars (not necessarily correct, since the CUI is a single value and a column could suggest that it covers a range of values) in the evaluation of a questionnaire for alcohol use disorder on different subgroups. The accurate reporting of CUIs as values with associated confidence intervals can also be seen in some articles BIB026 BIB008 , but it is not a common practice BIB027 BIB018 BIB019 BIB020 BIB021 BIB022 BIB023 .

Besides the commercial statistical programs able to assist researchers in conducting an accuracy analysis for a diagnostic test, several free online (Table 12) or offline applications exist (CATmaker [208] and CIcalculator). Smartphone applications have also been developed to assist in daily clinical practice. The free DocNomo application for iPhone/iPad allows calculation of the posttest probability using the two-step Fagan nomogram. Other available applications are Bayes' posttest probability calculator, EBM Tools app, and EBM Stats Calc. Allen et al. and Power et al. implemented two online tools for the visual examination of the effect of Se, Sp, and prevalence on the TP, FP, FN, and TN values and on the derived predictive values.
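For completeness, the sketch below shows how the NND, the NNM, and the CUIs can be derived from a 2×2 table and how 95% confidence intervals can be attached to them with a simple percentile bootstrap; the counts are invented for illustration and do not come from the studies cited above, and the bootstrap is only one of several possible interval-estimation approaches.

import numpy as np

tp, fp, fn, tn = 80, 20, 40, 160          # hypothetical 2x2 table, invented counts

def indexes(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    se, sp = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    nnd = 1.0 / (se + sp - 1.0)           # number needed to diagnose = 1 / Youden index
    nnm = n / (fp + fn)                   # number needed to misdiagnose
    return nnd, nnm, se * ppv, sp * npv   # NND, NNM, CUI+, CUI-

# Percentile bootstrap: resample the whole table from its observed proportions.
rng = np.random.default_rng(7)
n_total = tp + fp + fn + tn
probs = np.array([tp, fp, fn, tn]) / n_total
boot = [indexes(*rng.multinomial(n_total, probs)) for _ in range(2000)]
low, high = np.percentile(boot, [2.5, 97.5], axis=0)
print("point estimates (NND, NNM, CUI+, CUI-):", indexes(tp, fp, fn, tn))
print("95% CI lower bounds:", low)
print("95% CI upper bounds:", high)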
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> This paper designs an object-oriented, continuous-time, full simulation model for addressing a wide range of clinical, procedural, administrative, and financial decisions in health care at a high level of biological, clinical, and administrative detail. The full model has two main parts, which with some simplification can be designated "physiology models" and "models of care processes." The models of care processes, although highly detailed, are mathematically straightforward. However, the mathematics that describes human biology, diseases, and the effects of interventions are more difficult. This paper describes the mathematical formulation and methods for deriving equations, for a variety of different sources of data. Although Archimedes was originally designed for health care applications, the formulation, and equations are general and can be applied to many natural systems. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> This is a review of the Health Utilities Index (HUI®) multi-attribute health-status classification systems, and single- and multi-attribute utility scoring systems. HUI refers to both HUI Mark 2 (HUI2) and HUI Mark 3 (HUI3) instruments. The classification systems provide compact but comprehensive frameworks within which to describe health status. The multi-attribute utility functions provide all the information required to calculate single-summary scores of health-related quality of life (HRQL) for each health state defined by the classification systems. The use of HUI in clinical studies for a wide variety of conditions in a large number of countries is illustrated. HUI provides comprehensive, reliable, responsive and valid measures of health status and HRQL for subjects in clinical studies. Utility scores of overall HRQL for patients are also used in cost-utility and cost-effectiveness analyses. Population norm data are available from numerous large general population surveys. The widespread use of HUI facilitates the interpretation of results and permits comparisons of disease and treatment outcomes, and comparisons of long-term sequelae at the local, national and international levels. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Correlation coefficients and their associated squared values are examined for the validation of estimates of the activity of biological compounds when a molecular descriptors family is used in the framework of structure-activity relationship (SAR) methods [1]. Starting with the assumption that the measured activity of a biologically active compound is a semiquantitative outcome, we examined Pearson, Spearman, and Kendall’s correlation coefficients. Toxicity descriptors of sixty-seven biologic active compounds were analyzed by applying the molecular descriptors family using SAR modeling. The correlation between the measured toxicity and that estimated by the best performing model was investigated by applying the Pearson, Spearman and Kendall's τa , τb , τc squared correlation coefficient. The results obtained were express as squared correlation coefficients, 95% confidence intervals (CI) of correlation coefficient, Student's t or Z test value, and theirs associated pvalue. 
They were as follows: Pearson: rPrs 2 = 0.90577, [0.9223, 0.9701], tPrs = 24.99 (p < 0.0001); Spearman: ρSpm 2 = 0.86064, [0.8846, 0.9550], tSpm = 20.03 (p < 0.0001); Kendall's τa: τKen,a 2 = 0.61294, [0.6683, 0.8611], ZKen,τa = 9.37 (p < 0.0001); Kendall's τb: τKen,b 2 = 0.61769, [0.6726, 0.8631], ZKen,τb = 9.37 (p < 0.0001); Kendall's τc: τKen,c 2 = 0.59478, [0.6517, 0.8533], ZKen,τc = 9.23 (p < 0.0001) We remark, that the toxicity of biologically active compounds is a semi-quantitative variable and that its determination may depend on various external factors, e.g. the type of equipment used, the researcher's skills and performance, the type and class of chemicals used. Under those circumstances, a rank correlation coefficient would provide a more reliable estimate of the association than the parametric Pearson coefficient. Our study shows that all five computational methods used to evaluate the squared correlation coefficients resulted in a statistically significant p-value (always less than 0.0001). As expected, lower values of squared correlation coefficients were obtained with Kendall’s methods, and the 95% CI associated with the correlation coefficients overlapped. Looking at the correlation coefficients and their 95% CI calculated with the Pearson and Spearman formulas and how they overlap with the Kendall's τa , τb , τc squared correlation coefficients we suggest that there are no significant differences between them. More research on other classes of biologic active compounds may reveal whether it is appropriate to analyze the activity of molecular descriptors family based on SAR methods using the Pearson correlation coefficient or whether a rank correlation coefficient must be applied <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Importance Increased use of computed tomography (CT) in pediatrics raises concerns about cancer risk from exposure to ionizing radiation. Objectives To quantify trends in the use of CT in pediatrics and the associated radiation exposure and cancer risk. Design Retrospective observational study. Setting Seven US health care systems. Participants The use of CT was evaluated for children younger than 15 years of age from 1996 to 2010, including 4 857 736 child-years of observation. Radiation doses were calculated for 744 CT scans performed between 2001 and 2011. Main Outcomes and Measures Rates of CT use, organ and effective doses, and projected lifetime attributable risks of cancer. Results The use of CT doubled for children younger than 5 years of age and tripled for children 5 to 14 years of age between 1996 and 2005, remained stable between 2006 and 2007, and then began to decline. Effective doses varied from 0.03 to 69.2 mSv per scan. An effective dose of 20 mSv or higher was delivered by 14% to 25% of abdomen/pelvis scans, 6% to 14% of spine scans, and 3% to 8% of chest scans. Projected lifetime attributable risks of solid cancer were higher for younger patients and girls than for older patients and boys, and they were also higher for patients who underwent CT scans of the abdomen/pelvis or spine than for patients who underwent other types of CT scans. For girls, a radiation-induced solid cancer is projected to result from every 300 to 390 abdomen/pelvis scans, 330 to 480 chest scans, and 270 to 800 spine scans, depending on age. 
The risk of leukemia was highest from head scans for children younger than 5 years of age at a rate of 1.9 cases per 10 000 CT scans. Nationally, 4 million pediatric CT scans of the head, abdomen/pelvis, chest, or spine performed each year are projected to cause 4870 future cancers. Reducing the highest 25% of doses to the median might prevent 43% of these cancers. Conclusions and Relevance The increased use of CT in pediatrics, combined with the wide variability in radiation doses, has resulted in many children receiving a high-dose examination. Dose-reduction strategies targeted to the highest quartile of doses could dramatically reduce the number of radiation-induced cancers. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> BACKGROUND & AIMS ::: Colorectal cancer (CRC) screening guidelines recommend screening schedules for each single type of test except for concurrent sigmoidoscopy and fecal occult blood test (FOBT). We investigated the cost-effectiveness of a hybrid screening strategy that was based on a fecal immunological test (FIT) and colonoscopy. ::: ::: ::: METHODS ::: We conducted a cost-effectiveness analysis by using the Archimedes Model to evaluate the effects of different CRC screening strategies on health outcomes and costs related to CRC in a population that represents members of Kaiser Permanente Northern California. The Archimedes Model is a large-scale simulation of human physiology, diseases, interventions, and health care systems. The CRC submodel in the Archimedes Model was derived from public databases, published epidemiologic studies, and clinical trials. ::: ::: ::: RESULTS ::: A hybrid screening strategy led to substantial reductions in CRC incidence and mortality, gains in quality-adjusted life years (QALYs), and reductions in costs, comparable with those of the best single-test strategies. Screening by annual FIT of patients 50-65 years old and then a single colonoscopy when they were 66 years old (FIT/COLOx1) reduced CRC incidence by 72% and gained 110 QALYs for every 1000 people during a period of 30 years, compared with no screening. Compared with annual FIT, FIT/COLOx1 gained 1400 QALYs/100,000 persons at an incremental cost of $9700/QALY gained and required 55% fewer FITs. Compared with FIT/COLOx1, colonoscopy at 10-year intervals gained 500 QALYs/100,000 at an incremental cost of $35,100/QALY gained but required 37% more colonoscopies. Over the ranges of parameters examined, the cost-effectiveness of hybrid screening strategies was slightly more sensitive to the adherence rate with colonoscopy than the adherence rate with yearly FIT. Uncertainties associated with estimates of FIT performance within a program setting and sensitivities for flat and right-sided lesions are expected to have significant impacts on the cost-effectiveness results. ::: ::: ::: CONCLUSIONS ::: In our simulation model, a strategy of annual or biennial FIT, beginning when patients are 50 years old, with a single colonoscopy when they are 66 years old, delivers clinical and economic outcomes similar to those of CRC screening by single-modality strategies, with a favorable impact on resources demand. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> BACKGROUND AND AIMS ::: Suspected latent tuberculosis infection (LTBI) is a common reason for referral to TB clinics. 
Interferon-gamma release assays (IGRAs) are more specific than tuberculin skin tests (TSTs) for diagnosing LTBI. The aim of this study is to determine if IGRA changes practice in the management of cases referred to a TB clinic for possible LTBI. ::: ::: ::: DESIGN AND METHODS ::: A prospective study was performed over 29 months. All adult patients who had TST, CXR & IGRA were included. The original decision regarding TB chemoprophylaxis was made by TB team consensus, based on clinical history and TST. Cases were then analysed with the addition of IGRA to determine if this had altered management. An independent physician subsequently reviewed the cases. ::: ::: ::: RESULTS ::: Of 204 patients studied, 68 were immunocompromised. 120 patients had positive TSTs. Of these, 36 (30%) had a positive QFT and 84 (70%) had a negative QFT. Practice changed in 78 (65%) cases with positive TST, all avoiding TB chemoprophylaxis due to QFT. Of the immunocompromised patients, 17 (25%) underwent change of practice. No cases of active TB have developed. ::: ::: ::: CONCLUSION ::: This study demonstrates a significant change of clinical practice due to IGRA use. Our findings support the NICE 2011 recommendations. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> BACKGROUND: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of unnecessary hospital admissions and anxiety. OBJECTIVE: To assess the clinical effectiveness and cost-effectiveness of hs-cTn assays for the early (within 4 hours of presentation) rule-out of AMI in adults with acute chest pain. METHODS: Sixteen databases, including MEDLINE and EMBASE, research registers and conference proceedings, were searched to October 2013. Study quality was assessed using QUADAS-2. The bivariate model was used to estimate summary sensitivity and specificity for meta-analyses involving four or more studies, otherwise random-effects logistic regression was used. The health-economic analysis considered the long-term costs and quality-adjusted life-years (QALYs) associated with different troponin (Tn) testing methods. The de novo model consisted of a decision tree and Markov model. A lifetime time horizon (60 years) was used. RESULTS: Eighteen studies were included in the clinical effectiveness review. The optimum strategy, based on the Roche assay, used a limit of blank (LoB) threshold in a presentation sample to rule out AMI [negative likelihood ratio (LR-) 0.10, 95% confidence interval (CI) 0.05 to 0.18]. Patients testing positive could then have a further test at 2 hours; a result above the 99th centile on either sample and a delta (Δ) of ≥ 20% has some potential for ruling in an AMI [positive likelihood ratio (LR+) 8.42, 95% CI 6.11 to 11.60], whereas a result below the 99th centile on both samples and a Δ of < 20% can be used to rule out an AMI (LR- 0.04, 95% CI 0.02 to 0.10). The optimum strategy, based on the Abbott assay, used a limit of detection (LoD) threshold in a presentation sample to rule out AMI (LR- 0.01, 95% CI 0.00 to 0.08). 
Patients testing positive could then have a further test at 3 hours; a result above the 99th centile on this sample has some potential for ruling in an AMI (LR+ 10.16, 95% CI 8.38 to 12.31), whereas a result below the 99th centile can be used to rule out an AMI (LR- 0.02, 95% CI 0.01 to 0.05). In the base-case analysis, standard Tn testing was both most effective and most costly. Strategies considered cost-effective depending upon incremental cost-effectiveness ratio thresholds were Abbott 99th centile (thresholds of < £6597), Beckman 99th centile (thresholds between £6597 and £30,042), Abbott optimal strategy (LoD threshold at presentation, followed by 99th centile threshold at 3 hours) (thresholds between £30,042 and £103,194) and the standard Tn test (thresholds over £103,194). The Roche 99th centile and the Roche optimal strategy [LoB threshold at presentation followed by 99th centile threshold and/or Δ20% (compared with presentation test) at 1-3 hours] were extendedly dominated in this analysis. CONCLUSIONS: There is some evidence to suggest that hs-CTn testing may provide an effective and cost-effective approach to early rule-out of AMI. Further research is needed to clarify optimal diagnostic thresholds and testing strategies. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005939. FUNDING: The National Institute for Health Research Health Technology Assessment programme. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Many decisions in medicine involve trade-offs, such as between diagnosing patients with disease versus unnecessary additional testing for those who are healthy. Net benefit is an increasingly reported decision analytic measure that puts benefits and harms on the same scale. 
This is achieved by specifying an exchange rate, a clinical judgment of the relative value of benefits (such as detecting a cancer) and harms (such as unnecessary biopsy) associated with models, markers, and tests. The exchange rate can be derived by asking simple questions, such as the maximum number of patients a doctor would recommend for biopsy to find one cancer. As the answers to these sorts of questions are subjective, it is possible to plot net benefit for a range of reasonable exchange rates in a “decision curve.” For clinical prediction models, the exchange rate is related to the probability threshold to determine whether a patient is classified as being positive or negative for a disease. Net benefit is useful for determining whether basing clinical decisions on a model, marker, or test would do more good than harm. This is in contrast to traditional measures such as sensitivity, specificity, or area under the curve, which are statistical abstractions not directly informative about clinical value. Recent years have seen an increase in practical applications of net benefit analysis to research data. This is a welcome development, since decision analytic techniques are of particular value when the purpose of a model, marker, or test is to help doctors make better clinical decisions. <s> BIB009
The studies conducted in phases III and IV of the investigation of a diagnostic test could be covered under the generic name of cost-benefit analysis. Different aspects of the benefit could be investigated, such as societal impact (the impact on the society), cost-effectiveness (affordability), clinical efficacy or effectiveness (effects on the outcome), cost-consequence analysis, cost-utility analysis, sensitivity analysis (probability of disease and/or recurrence, cost of tests, impact on QALY (quality-adjusted life-year), and impact of treatment), and analytical performances (precision, linearity, and cost-effectiveness ratio). Thus, the benefits of a diagnostic test could be investigated from different perspectives (e.g., societal, health-care system, and health-care provider) and considering different items (e.g., productivity, patient and family time, medication, and physician time). Furthermore, an accurate comparison of two diagnostic tests must consider both the accuracy and the benefit/harm in the assessment of clinical utility BIB008 BIB009 .

Generally, the cost-benefit analysis employs multivariate and multifactorial analyses using different designs of the experiment, including survival analysis, and the statistical approach is selected according to the aim of the study. The analysis of relationships is done using correlation methods (Pearson's correlation (r) when the two variables are quantitative and normally distributed and a linear relation is assumed between them; Spearman's (ρ) or Kendall's (τ) correlation coefficient otherwise, with Kendall's tau recommended instead of Spearman's rho when the data have ties BIB003 ) or regression analysis when the nature of the relationship is of interest and an outcome variable can be defined.

The study of radiation exposure associated with the use of CT in pediatrics reported the effective dose per scan as a range (0.03 to 69.2 mSv), while the dose exceeding 20 mSv was reported as percentages BIB004 . The mean organ dose was also reported, as well as the lifetime attributable risk of solid cancer or leukemia and the number of CT scans leading to one case of cancer per 10,000 scans BIB004 . The reported numbers and risks were not accompanied by 95% confidence intervals BIB004 , except for the estimated total number of future radiation-induced cancers related to pediatric CT use (reported with what the authors called an uncertainty limit). Dinh et al. BIB005 evaluated the effectiveness of a combined screening test (fecal immunological test and colonoscopy) for colorectal cancer using the Archimedes model (a large-scale simulation of human physiology, diseases, interventions, and health-care systems BIB001 ). The reported results, besides frequently used descriptive metrics, are the health utility score BIB002 , the cost per person, the quality-adjusted life-years (QALYs) gained per person, and the cost per QALY gained, all given as numerical point estimators not accompanied by 95% confidence intervals. Westwood et al. BIB007 conducted a secondary study to evaluate the performances of high-sensitivity cardiac troponin (hs-cTn) assays in ruling out acute myocardial infarction (AMI). Clinical effectiveness, using metrics such as Se, Sp, NLR, and PLR (for both any threshold and the 99th percentile threshold), was reported with associated 95% confidence intervals. As cost-effectiveness metrics, the long-term costs, the cost per life-year (LY) gained, the quality-adjusted life-years (QALYs), and the cost per QALY were reported with associated 95% confidence intervals for different Tn testing methods.
Furthermore, the incremental cost-effectiveness ratio (ICER) was used to compare the mean costs of two Tn testing methods, along with a multivariate analysis (reported as estimates, the standard error of the estimates, and the distribution of the data). Tiernan et al. BIB006 reported the changes in clinical practice for the diagnosis of latent tuberculosis infection (LTBI) with an interferon-gamma release assay, namely, QuantiFERON-TB Gold In-Tube (QFT, Cellestis, Australia). Unfortunately, the reported outcome was limited to the number of changes in practice due to QFT, expressed as absolute frequencies and percentages BIB006 .
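As a worked illustration of the incremental cost-effectiveness ratio mentioned above, the short Python sketch below compares two hypothetical testing strategies; the costs, QALYs, and willingness-to-pay threshold are invented for the example and are not taken from the studies cited in this section.

# ICER = (cost_new - cost_reference) / (QALY_new - QALY_reference)
strategies = {
    "reference test": {"cost": 12500.0, "qaly": 9.10},   # invented values
    "new test":       {"cost": 13150.0, "qaly": 9.17},   # invented values
}

ref, new = strategies["reference test"], strategies["new test"]
delta_cost = new["cost"] - ref["cost"]
delta_qaly = new["qaly"] - ref["qaly"]
icer = delta_cost / delta_qaly
print(f"incremental cost: {delta_cost:.0f}, incremental QALYs: {delta_qaly:.2f}")
print(f"ICER: {icer:.0f} per QALY gained")

# The new test is usually judged cost-effective when the ICER falls below the
# decision maker's willingness-to-pay threshold.
willingness_to_pay = 30000.0
print("cost-effective at this threshold:", icer <= willingness_to_pay)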