Columns:
aid: string (9-15 chars)
mid: string (7-10 chars)
abstract: string (78-2.56k chars)
related_work: string (92-1.77k chars)
ref_abstract: dict
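Each subsequent row pairs one paper (aid, mid, abstract) with one related_work paragraph and a ref_abstract dict describing the cited papers. Below is a minimal sketch of how such rows might be iterated; it assumes, purely for illustration, that the data has been exported as JSON Lines under a hypothetical file name (rows.jsonl), not any particular distribution format.

```python
import json

# Minimal sketch (hypothetical file name, JSON Lines assumed): iterate rows
# carrying the columns listed above: aid, mid, abstract, related_work, ref_abstract.
def iter_rows(path="rows.jsonl"):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    for row in iter_rows():
        # Each related_work paragraph cites the entries described in ref_abstract.
        n_cites = len(row["ref_abstract"].get("cite_N", []))
        print(row["aid"], row["mid"], f"{n_cites} cited abstracts")
```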
cs0411010
2952058472
We propose a new simple logic that can be used to specify local security properties, i.e. security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties, and integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well-studied protocol TMN.
The approach presented in this paper belongs to the spectrum of intensional specifications and is related to @cite_18 @cite_6. In @cite_6, a requirement specification language is proposed. This language is useful for specifying sets of requirements for classes of protocols; the requirements can be mapped onto a particular protocol instance, which can later be verified using their tool, the NRL Protocol Analyzer. This approach has subsequently been used to specify the GDOI secure multicast protocol @cite_10.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_6" ], "mid": [ "2078142047", "1997596059", "1926128235", "1577867985" ], "abstract": [ "In this paper we present a formal language for specifying and reasoning about cryptographic protocol requirements. We give sets of requirements for key distribution protocols and for key agreement protocols in that language. We look at a key agreement protocol due to Aziz and Diffie that might meet those requirements and show how to specify it in the language of the NRL Protocol Analyzer. We also show how to map our formal requirements to the language of the NRL Protocol Analyzer and use the Analyzer to show that the protocol meets those requirements. In other words, we use the Analyzer to assess the validity of the formulae that make up the requirements in models of the protocol. Our analysis reveals an implicit assumption about implementations of the protocol and reveals subtleties in the kinds of requirements one might specify for similar protocols.", "Although there is a substantial amount of work on formal requirements for two and three-party key distribution protocols, very little has been done on requirements for group protocols. However, since the latter have security requirements that can differ in important but subtle ways, we believe that a rigorous expression of these requirements can be useful in determining whether a given protocol can satisfy an application's needs. In this paper we make a first step in providing a formal understanding of security requirements for group key distribution by using the NPATRL language, a temporal requirement specification language for use with the NRL Protocol Analyzer. We specify the requirements for GDOI, a protocol being proposed as an IETF standard, which we are formally specifying and verifying in cooperation with the MSec working group.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication.", "Many languages and algebras have been proposed in recent years for the specification of authorization policies. For some proposals, such as XACML, the main motivation is to address real-world requirements, typically by providing a complex policy language with somewhat informal evaluation methods; others try to provide a greater degree of formality --- particularly with respect to policy evaluation --- but support far fewer features. 
In short, there are very few proposals that combine a rich set of language features with a well-defined semantics, and even fewer that do this for authorization policies for attribute-based access control in open environments. In this paper, we decompose the problem of policy specification into two distinct sub-languages: the policy target language (PTL) for target specification, which determines when a policy should be evaluated; and the policy composition language (PCL) for building more complex policies from existing ones. We define syntax and semantics for two such languages and demonstrate that they can be both simple and expressive. PTaCL, the language obtained by combining the features of these two sub-languages, supports the specification of a wide range of policies. However, the power of PTaCL means that it is possible to define policies that could produce unexpected results. We provide an analysis of how PTL should be restricted and how policies written in PCL should be evaluated to minimize the likelihood of undesirable results." ] }
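For reference, the ref_abstract field above holds three parallel arrays: cite_N (the markers used in the related_work text), mid, and abstract. A minimal pairing sketch follows; the shortened dict literal is a hypothetical stand-in for a parsed row. Note that in some rows, including the one above, mid and abstract contain more entries than cite_N, so a zip-based pairing covers only the aligned prefix.

```python
# Minimal sketch: pair citation markers with cited abstracts from a parsed
# ref_abstract dict. The shortened literal below is a hypothetical stand-in.
ref_abstract = {
    "cite_N": ["@cite_18", "@cite_10", "@cite_6"],
    "mid": ["2078142047", "1997596059", "1926128235", "1577867985"],
    "abstract": ["requirements language ...", "GDOI requirements ...",
                 "MSR and process algebra ...", "PTaCL policies ..."],
}

# zip() stops at the shortest list, so trailing unmatched abstracts are ignored.
for marker, mid, text in zip(ref_abstract["cite_N"], ref_abstract["mid"],
                             ref_abstract["abstract"]):
    print(f"{marker} ({mid}): {text[:40]}")
```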
cs0411010
2952058472
We propose a new simple logic that can be used to specify local security properties, i.e. security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties, and integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well-studied protocol TMN.
In @cite_20, Cremers, Mauw and de Vink present another logic for specifying local security properties. Similarly to our approach, the authors of @cite_20 define the message authenticity property by referring to the variables occurring in the protocol role. In addition, @cite_20 defines a new kind of authentication, called synchronization, which is then compared with Lowe's intensional specification. The logic presented in this paper cannot handle the specification of synchronization authentication. In fact, we cannot handle the weaker notion of injective authentication, since we cannot match corresponding events in a trace. However, we believe we can extend our logic to support these properties. Briefly, this could be achieved by decorating the different runs with label identifiers and adding a primitive to reason about events that happened before others in a trace.
{ "cite_N": [ "@cite_20" ], "mid": [ "146967524", "2160964355", "1926128235", "2626217303" ], "abstract": [ "In this paper we define a general trace model for security protocols which allows to reason about various formal definitions of authentication. In the model, we define a strong form of authentication which we call synchronization. We present both an injective and a noninjective version. We relate synchronization to a formulation of agreement in our trace model and contribute to the discussion on intensional vs. extensional specifications.", "Authentication is one of the foremost goals of many security protocols. It is most often formalised as a form of agreement, which expresses that the communicating partners agree on the values of a number of variables. In this paper we formalise and study an intensional form of authentication which we call synchronisation. Synchronisation expresses that the messages are transmitted exactly as prescribed by the protocol description. Synchronisation is a strictly stronger property than agreement for the standard intruder model, because it can be used to detect preplay attacks. In order to prevent replay attacks on simple protocols, we also define injective synchronisation. Given a synchronising protocol, we show that a sufficient syntactic criterion exists that guarantees that the protocol is injective as well.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication.", "We present a novel approach to proving the absence of timing channels. The idea is to partition the programâ s execution traces in such a way that each partition component is checked for timing attack resilience by a time complexity analysis and that per-component resilience implies the resilience of the whole program. We construct a partition by splitting the program traces at secret-independent branches. This ensures that any pair of traces with the same public input has a component containing both traces. Crucially, the per-component checks can be normal safety properties expressed in terms of a single execution. Our approach is thus in contrast to prior approaches, such as self-composition, that aim to reason about multiple (kâ � 2) executions at once. We formalize the above as an approach called quotient partitioning, generalized to any k-safety property, and prove it to be sound. 
A key feature of our approach is a demand-driven partitioning strategy that uses a regex-like notion called trails to identify sets of execution traces, particularly those influenced by tainted (or secret) data. We have applied our technique in a prototype implementation tool called Blazer, based on WALA, PPL, and the brics automaton library. We have proved timing-channel freedom of (or synthesized an attack specification for) 24 programs written in Java bytecode, including 6 classic examples from the literature and 6 examples extracted from the DARPA STAC challenge problems." ] }
cs0411046
1622839875
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular we show that under certain ideal conditions, the network structure converges to Erdos-Renyi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of d-regular random graphs. We also make a connection between highly-loaded BONs and the well-known ball-bin randomized load balancing framework.
The authors have previously considered topologically-based load balancing with a simpler model than BON that is amenable to analytical study @cite_17. In that work, each node's resources were proportional to its in-degree, and load was distributed by performing a short random walk and migrating load to the last node of the walk; this method produces Erdös-Rényi (ER) random graphs and exhibits good load-balancing performance. As we demonstrate in the current work, performing more complex functions on the random walk can significantly improve performance.
{ "cite_N": [ "@cite_17" ], "mid": [ "2121441899", "1986076296", "2396984061", "2141350198" ], "abstract": [ "By spreading the workload across a sensor network, load balancing reduces hot spots in the sensor network and increases the energy lifetime of the sensor network. In this paper, we design a node-centric algorithm that constructs a load-balanced tree in sensor networks of asymmetric architecture. We utilize a Chebyshev Sum metric to evaluate via simulation the balance of the routing trees produced by our algorithm. We find that our algorithm achieves routing trees that are more effectively balanced than the routing based on breadth-first search (BFS) and shortest-path obtained by Dijkstra's algorithm.", "We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain-decomposition. The load balancing is designed to maintain good partition aspect ratio and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method of calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of the construction.", "Load balancing is important for the efficient execution of numerical simulations on parallel computers. In particular when the simulation domain changes over time, the mapping of computational tasks to processors needs to be modified accordingly. Most state-of-the-art libraries addressing this problem are based on graph repartitioning with a parallel variant of the Kernighan-Lin (KL) heuristic. The KL approach has a number of drawbacks, including the optimized metric and solutions with undesirable properties. Here we further explore the promising diffusion-based multilevel graph partitioning algorithm DibaP. We describe the evolution of the algorithm and report on its MPI implementation PDibaP for parallelism with distributed memory. PDibaP is targeted at small to medium scale parallelism with dozens of processors. The presented experiments use graph sequences that imitate adaptive numerical simulations. They demonstrate the applicability and quality of PDibaP for load balancing by repartitioning on this scale. Compared to the faster ParMETIS, PDibaP’s solutions often have partitions with fewer external edges and a smaller communication volume in an underlying numerical simulation.", "The task of balancing dynamically generated work load occurs in a wide range of parallel and distributed applications. Diffusion based schemes, which belong to the class of nearest neighbor load balancing algorithms, are a popular way to address this problem. Originally created to equalize the amount of arbitrarily divisible load among the nodes of a static and homogeneous network, they have been generalized to heterogeneous topologies. Additionally, some simple diffusion algorithms have been adapted to work in dynamic networks as well. 
However, if the load is not divisible arbitrarily but consists of indivisible unit size tokens, diffusion schemes are not able to balance the load properly. In this paper we consider the problem of balancing indivisible unit size tokens on heterogeneous systems. By modifying a randomized strategy invented for homogeneous systems, we can achieve an asymptotically minimal expected overload in l1, l2 and l1 norm while only slightly increasing the run-time by a logarithmic factor. Our experiments show that this additional factor is usually not required in applications." ] }
cs0411046
1622839875
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular we show that under certain ideal conditions, the network structure converges to Erdos-Renyi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of d-regular random graphs. We also make a connection between highly-loaded BONs and the well-known ball-bin randomized load balancing framework.
The majority of distributed computing research has focused on central server methods, DHT architectures, agent-based systems, randomized algorithms and local diffusive techniques @cite_22 @cite_21 @cite_13 @cite_10 @cite_3 @cite_18 @cite_12. Some of the most successful systems to date @cite_14 @cite_5 have used a centralized approach. This can be explained by the relatively small scale of the networked systems or by special properties of the workload experienced by these systems. However, since a central server must have @math bandwidth capacity and CPU power, systems that depend on central architectures are unscalable @cite_9 @cite_23. Reliability is also a concern, since a central server is a single point of failure. BON addresses both of these issues by using @math maximum communications scaling and having no single point of failure. Furthermore, since the networks created by the BON algorithm are random graphs, they will be highly robust to random failures.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_3", "@cite_23", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2484626270", "1558940048", "2151682391", "2031684765" ], "abstract": [ "The field of structured P2P systems has seen fast growth upon the introduction of Distributed Hash Tables (DHTs) in the early 2000s. The first proposals, including Chord, Pastry, Tapestry, were gradually improved to cope with scalability, locality and security issues. By utilizing the processing and bandwidth resources of end users, the P2P approach enables high performance of data distribution which is hard to achieve with traditional client-server architectures. The P2P computing community is also being actively utilized for software updates to the Internet, P2PSIP VoIP, video-on-demand, and distributed backups. The recent introduction of the identifier-locator split proposal for future Internet architectures poses another important application for DHTs, namely mapping between host permanent identity and changing IP address. The growing complexity and scale of modern P2P systems requires the introduction of hierarchy and intelligence in routing of requests. Structured Peer-to-Peer Systems covers fundamental issues in organization, optimization, and tradeoffs of present large-scale structured P2P systems, as well as, provides principles, analytical models, and simulation methods applicable in designing future systems. Part I presents the state-of-the-art of structured P2P systems, popular DHT topologies and protocols, and the design challenges for efficient P2P network topology organization, routing, scalability, and security. Part II shows that local strategies with limited knowledge per peer provide the highest scalability level subject to reasonable performance and security constraints. Although the strategies are local, their efficiency is due to elements of hierarchical organization, which appear in many DHT designs that traditionally are considered as flat ones. Part III describes methods to gradually enhance the local view limit when a peer is capable to operate with larger knowledge, still partial, about the entire system. These methods were formed in the evolution of hierarchical organization from flat DHT networks to hierarchical DHT architectures, look-ahead routing, and topology-aware ranking. Part IV highlights some known P2P-based experimental systems and commercial applications in the modern Internet. The discussion clarifies the importance of P2P technology for building present and future Internet systems.", "In this paper, we address the problem of designing a scalable, accurate query processor for peer-to-peer filesharing and similar distributed keyword search systems. Using a globally-distributed monitoring infrastructure, we perform an extensive study of the Gnutella filesharing network, characterizing its topology, data and query workloads. We observe that Gnutella's query processing approach performs well for popular content, but quite poorly for rare items with few replicas. We then consider an alternate approach based on Distributed Hash Tables (DHTs). We describe our implementation of PIERSearch, a DHT-based system, and propose a hybrid system where Gnutella is used to locate popular items, and PIERSearch for handling rare items. We develop an analytical model of the two approaches, and use it in concert with our Gnutella traces to study the trade-off between query recall and system overhead of the hybrid system. 
We evaluate a variety of localized schemes for identifying items that are rare and worth handling via the DHT. Lastly, we show in a live deployment on fifty nodes on two continents that it nicely complements Gnutella in its ability to handle rare items.", "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not to maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several ten thousands of nodes.This paper addresses the problem of churn in mobile hash tables. Similar to Internet based peer-to-peer systems a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery technique used in the internet domain can be deployed in the mobile hash table. Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks.", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves." ] }
cs0411046
1622839875
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular we show that under certain ideal conditions, the network structure converges to Erdos-Renyi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of d-regular random graphs. We also make a connection between highly-loaded BONs and the well-known ball-bin randomized load balancing framework.
BON is designed to be deployed on extremely large ensembles of nodes. This is a major similarity with BOINC @cite_14, the latest infrastructure for creating public-resource computing projects. The Einstein@home project, which processes gravitational data, and Predictor@home, which studies protein-related disease, are based on BOINC. Such projects are single-purpose and are designed to handle massive, embarrassingly parallel problems with tens or hundreds of thousands of nodes. BON should scale to networks of this size and beyond while providing a dynamic, multi-user environment instead of the special-purpose environment provided by BOINC.
{ "cite_N": [ "@cite_14" ], "mid": [ "2142863519", "2732951378", "2141662114", "137863291" ], "abstract": [ "BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems.", "This paper develops a novel tree-based algorithm, called Bonsai, for efficient prediction on IoT devices - such as those based on the Arduino Uno board having an 8 bit ATmega328P microcontroller operating at 16 MHz with no native floating point support, 2 KB RAM and 32 KB read-only flash. Bonsai maintains prediction accuracy while minimizing model size and prediction costs by: (a) developing a tree model which learns a single, shallow, sparse tree with powerful nodes; (b) sparsely projecting all data into a low-dimensional space in which the tree is learnt; and (c) jointly learning all tree and projection parameters. Experimental results on multiple benchmark datasets demonstrate that Bonsai can make predictions in milliseconds even on slow microcontrollers, can fit in KB of memory, has lower battery consumption than all other algorithms while achieving prediction accuracies that can be as much as 30 higher than state-of-the-art methods for resource-efficient machine learning. Bonsai is also shown to generalize to other resource constrained settings beyond IoT by generating significantly better search results as compared to Bing's L3 ranker when the model size is restricted to 300 bytes. Bonsai's code can be downloaded from (BonsaiCode).", "Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth- first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported.", "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. 
We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable." ] }
cs0410066
2949608853
Data intensive applications on clusters often require requests quickly be sent to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the responsible node for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup; one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster. We assume that the structure will fit inside the aggregation of the CPU caches of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree.
The concept of the memory wall was popularized by Wulf @cite_5. Many researchers have been working on improving cache efficiency to overcome the memory wall problem. The pioneering work of @cite_11 studied the blocking technique both theoretically and experimentally and described the factors that affect cache performance. However, there is no easy way to apply the blocking technique to the tree traversal problem or to the index structure lookup problem to improve cache efficiency.
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2128746953", "2295173494", "2117703621", "182451592" ], "abstract": [ "In this report we propose a parallel cache oblivious spatial and temporal blocking algorithm for the lattice Boltzmann method in three spatial dimensions. The algorithm has originally been proposed by (1999) and divides the space-time domain of stencil-based methods in an optimal way, independently of any external parameters, e.g., cache size. In view of the increasing gap between processor speed and memory performance this approach offers a promising path to increase cache utilisation. We find that even a straightforward cache oblivious implementation can reduce memory traffic at least by a factor of two if compared to a highly optimised standard kernel and improves scalability for shared memory parallelisation. Due to the recursive structure of the algorithm we use an unconventional parallelisation scheme based on task queuing.", "A program can benefit from improved cache block utilization when contemporaneously accessed data elements are placed in the same memory block. This can reduce the program's memory block working set and thereby, reduce the capacity miss rate. We formally define the problem of data packing for arbitrary number of blocks in the cache and packing factor (the number of data objects fitting in a cache block) and study how well the optimal solution can be approximated for two dual problems. On the one hand, we show that the cache hit maximization problem is approximable within a constant factor, for every fixed number of blocks in the cache. On the other hand, we show that unless P=NP, the cache miss minimization problem cannot be efficiently approximated.", "The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality.This paper studies a technique for using a generational garbage collector to reorganize data structures to produce a cache-conscious data layout, in which objects with high temporal affinity are placed next to each other, so that they are likely to reside in the same cache block. The paper explains how to collect, with low overhead, real-time profiling information about data access patterns in object-oriented languages, and describes a new copying algorithm that utilizes this information to produce a cache-conscious object layout.Preliminary results show that this technique reduces cache miss rates by 21--42 , and improves program performance by 14--37 over Cheney's algorithm. We also compare our layouts against those produced by the Wilson-Lam-Moher algorithm, which attempts to improve program locality at the page level. Our cache-conscious object layouts reduces cache miss rates by 20--41 and improves program performance by 18--31 over their algorithm, indicating that improving locality at the page level is not necessarily beneficial at the cache level.", "Computer systems have enjoyed an exponential growth in processor speed for the past 20 years, while main memory speed has improved only moderately. Today a cache miss to main memory takes hundreds of processor cycles. 
Recent studies have demonstrated that on commercial databases, about 50 or more of execution time in memory is often wasted due to cache misses. In light of this problem, a number of recent studies focused on reducing the number of cache misses of database algorithms. In this thesis, we investigate a different approach: reducing the impact of cache misses through a technique called cache prefetching. Since prefetching for sequential array accesses has been well studied, we are interested in studying non-contiguous access patterns found in two classes of database algorithms: the B+-Tree index algorithm and the hash join algorithm. We re-examine their designs with cache prefetching in mind, and combine prefetching and data locality optimizations to achieve good cache performance. For B+-Trees, we first propose and evaluate a novel main memory index structure, Prefetching B+Trees, which uses prefetching to accelerate two major access patterns of B+-Tree indices: searches and range scans. We then apply our findings in the development of a novel index structure, Fractal Prefetching B+-Trees, that optimizes index operations both for CPU cache performance and for disk performance in commercial database systems by intelligently embedding cache-optimized trees into disk pages. For hash joins, we first exploit cache prefetching separately for the I O partition phase and the join phase of the algorithm. We propose and evaluate two techniques, Group Prefetching and Software-Pipelined Prefetching, that exploit inter-tuple parallelism to overlap cache misses across the processing of multiple tuples. Then we present a novel algorithm, Inspector Joins, that exploits the free information obtained from one pass of the hash join algorithm to improve the performance of a later pass. This new algorithm addresses the memory bandwidth sharing problem in shared-bus multiprocessor systems. We compare our techniques against state-of-the-art cache-friendly algorithms for B+-Trees and hash joins through both simulation studies and real machine experiments. Our experimental results demonstrate dramatic performance benefits of our cache prefetching enabled techniques." ] }
cs0410066
2949608853
Data intensive applications on clusters often require requests quickly be sent to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the responsible node for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup; one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster. We assume that the structure will fit inside the aggregation of the CPU caches of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree.
In the area of theoretical and experimental algorithms, @cite_0 proposed an analytical model to predict cache performance. Their model assumes that all nodes in a tree are accessed uniformly, which is not accurate for the tree lookup problem: because the number of nodes grows exponentially from the root to the leaves, a node's access rate decreases exponentially with its depth in the tree. Hankins and Patel @cite_7 proposed a model in which node access rates in a B+ tree fall off exponentially with the level at which a node is positioned. However, they considered only compulsory cache misses, not capacity misses, and they assume that the tree fits in the cache. So, for tree structures that cannot fit in the cache, the model in @cite_7 is not applicable.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "1549860141", "2949272603", "1982134210", "1965929361" ], "abstract": [ "Many researchers have been working on the performance analysis of caching in Information-Centric Networks (ICNs) under various replacement policies like Least Recently Used (LRU), FIFO or Random (RND). However, no exact results are provided, and many approximate models do not scale even for the simple network of two caches connected in tandem. In this paper, we introduce a Time-To-Live based policy (TTL), that assigns a timer to each content stored in the cache and redraws the timer each time the content is requested (at each hit miss). We show that our TTL policy is more general than LRU, FIFO or RND, since it is able to mimic their behavior under an appropriate choice of its parameters. Moreover, the analysis of networks of TTL-based caches appears simpler not only under the Independent Reference Model (IRM, on which many existing results rely) but also with the Renewal Model for requests. In particular, we determine exact formulas for the performance metrics of interest for a linear network and a tree network with one root cache and N leaf caches. For more general networks, we propose an approximate solution with the relative errors smaller than 10−3 and 10−2 for exponentially distributed and constant TTLs respectively.", "We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1-1 e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1 of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50 reduction in average delay over solutions based on LRU content caching.", "The overall performance of content distribution networks as well as recently proposed information-centric networks rely on both memory and bandwidth capacities. The hit ratio is the key performance indicator which captures the bandwidth memory tradeoff for a given global performance. 
This paper focuses on the estimation of the hit ratio in a network of caches that employ the Random replacement policy (RND). Assuming that requests are independent and identically distributed, general expressions of miss probabilities for a single RND cache are provided as well as exact results for specific popularity distributions (such results also hold for the FIFO replacement policy). Moreover, for any Zipf popularity distribution with exponent @a>1, we obtain asymptotic equivalents for the miss probability in the case of large cache size. We extend the analysis to networks of RND caches, when the topology is either a line or a homogeneous tree. In that case, approximations for miss probabilities across the network are derived by neglecting time correlations between miss events at any node; the obtained results are compared to the same network using the Least-Recently-Used discipline, already addressed in the literature. We further analyze the case of a mixed tandem cache network where the two nodes employ either Random or Least-Recently-Used policies. In all scenarios, asymptotic formulas and approximations are extensively compared to simulation results and shown to be very accurate. Finally, our results enable us to propose recommendations for cache replacement disciplines in a network dedicated to content distribution.", "analyze them. This paper describes a model for studying the cache performance of algorithms in a direct-mapped cache. Using this model, we analyze the cache performance of several commonly occurring memory access patterns: (i) sequential and random memory traversals, (ii) systems of random accesses, and (iii) combinations of each. For each of these, we give exact expressions for the number of cache misses per memory access in our model. We illustrate the application of these analyses by determining the cache performance of two algorithms: the traversal of a binary search tree and the counting of items in a large array. Trace driven cache simulations validate that our analyses accurately predict cache performance. The key application of cache performance analysis is towards the cache conscious design of data structures and algorithms. In our previous work we studied the cache conscious design of priority queues [13] and sorting algorithms [14], and were able to make significant performance improvements over traditional implementations by considering cache effects." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The enhancon system was the first setup in which a supergravity dual of pure @math SYM theory with no hypermultiplets was studied @cite_9. It was constructed by wrapping BPS D-branes on a K3 manifold and studying the resulting geometry. From the supergravity point of view, the system exhibited a novel singularity resolution mechanism. Naively, there appeared to be a naked timelike singularity in the space transverse to the branes, dubbed the repulson, because a massive particle would feel a repulsive potential that becomes infinite in magnitude at a finite radius from the naive position of the branes. Probing the background with a wrapped D-brane, however, showed that the @math source D-branes do not, in fact, sit at the origin. Rather, they expand to form a shell of branes, inside of which the geometry does not, after all, become singular.
{ "cite_N": [ "@cite_9" ], "mid": [ "2040482607", "2079936263", "1992572456", "2048444779" ], "abstract": [ "We study brane configurations that give rise to large-N gauge theories with eight supersymmetries and no hypermultiplets. These configurations include a variety of wrapped, fractional, and stretched branes or strings. The corresponding spacetime geometries which we study have a distinct kind of singularity known as a repulson. We find that this singularity is removed by a distinctive mechanism, leaving a smooth geometry with a core having an enhanced gauge symmetry. The spacetime geometry can be related to large-N Seiberg-Witten theory.", "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. 
Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
A natural generalisation was to study geometries for which the system gains energy above the BPS bound. An unusual two-branch structure was found @cite_9 @cite_0. One class of possible solutions had the appearance of a black hole (or black brane) and was dubbed the horizon branch, while the other appeared to have an enhancon-like shell surrounding an inner event horizon and was dubbed the shell branch. Only the shell branch correctly matches onto the BPS solution in the limit of zero energy above extremality, but, for sufficiently high extra energy, both solutions were seen to be consistent with the asymptotic charges. The presence of the horizon branch far from extremality was expected, since there, where the energy is highly dominant over the charge, the system should look like an uncharged black hole. Additionally, for the shell branch, fixing the asymptotic charges did not specify exactly how the extra energy distributed itself between the inner horizon and the shell.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2471626564", "2028523941", "1868264628", "1979169890" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhan c c ons. There are two branches of solutions: a shell branch'' connected to the extremal solution, and a horizon branch'' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhan c c on.", "The stability of the inner Reissner-Nordstroem geometry is studied with test massless integer-spin fields. In contrast to previous mathematical treatments we present physical arguments for the processes involved and show that ray tracing and simple first-order scattering suffice to elucidate most of the results. Monochromatic waves which are of small amplitude and ingoing near the outer horizon develop infinite energy densities near the inner Cauchy horizon (as measured by a freely falling observer). Previous work has shown that certain derivatives of the field in a general (nonmonochromatic) disturbance must fall off exponentially near the inner (Cauchy) horizon (r = r sub - ) if energy densities are to remain finite. Thus the solution is unstable to physically reasonable perturbations which arise outside the black hole because such perturbations, if localized near past null infinity (I sup - ), cannot be localized near r sub + , the outer horizon. The mass-energy of an infalling disturbance would generate multipole moments on the black hole. Price, Sibgatullin, and Alekseev have shown that such moments are radiated away as ''tails'' which travel outward and are rescattered inward yielding a wave field with a time dependence t sup -p , p > 0. This decay in time is sufficiently slow that themore » tails yield infinite energy densities on the Cauchy horizon. (The amplification of the low-frequency tails upon interacting with the time-dependent potential between the horizons is an important feature guaranteeing the infinite energy density.) The interior structure of the analytically extended solution is thus disrupted by finite external disturbances. have further shown that even perturbations which are localized as they cross the outer horizon produce singularities at the inner horizon. It is shown that this singularity arises when the incoming radiation is first scattered just inside the outer horizon« less", "We consider solutions to the linear wave equation @math on a non-extremal maximally extended Schwarzschild-de Sitter spacetime arising from arbitrary smooth initial data prescribed on an arbitrary Cauchy hypersurface. (In particular, no symmetry is assumed on initial data, and the support of the solutions may contain the sphere of bifurcation of the black white hole horizons and the cosmological horizons.) We prove that in the region bounded by a set of black white hole horizons and cosmological horizons, solutions @math converge pointwise to a constant faster than any given polynomial rate, where the decay is measured with respect to natural future-directed advanced and retarded time coordinates. 
We also give such uniform decay bounds for the energy associated to the Killing field as well as for the energy measured by local observers crossing the event horizon. The results in particular include decay rates along the horizons themselves. Finally, we discuss the relation of these results to previous heuristic analysis of Price and", "We investigate a recently proposed model for a full quantum description of two-dimensional black hole evaporation, in which a reflecting boundary condition is imposed in the strong-coupling region. It is shown that in this model each initial state is mapped to a well-defined asymptotic out state, provided one performs a certain projection in the gravitational zero mode sector. We find that for an incoming localized energy pulse, the corresponding outgoing state contains approximately thermal radiation, in accordance with semiclassical predictions. In addition, our model allows for certain acausal strong-coupling effects near the singularity that give rise to corrections to the Hawking spectrum and restore the coherence of the out state. To an asymptotic observer these corrections appear to originate from behind the receding apparent horizon and start to influence the outgoing state long before the black hole has emitted most of its mass. Finally, by putting the system in a finite box, we are able to derive some algebraic properties of the scattering matrix and prove that the final state contains all initial information." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Dimitriadis and Ross did a preliminary search @cite_6 for a classical instability that would provide evidence that the two branches are connected. Such an instability, which would be fundamentally different in nature from the Gregory-Laflamme instability, could be interpreted as signalling a phase transition in the dual gauge theory. No such instability was found. Also presented was an entropic argument that, at high mass, the horizon branch should dominate over the shell branch in a canonical ensemble. In later work @cite_7 , a numerical study of perturbations of the non-BPS shell branch was completed, but still no instability was found. An analytic proof of the non-existence of such instabilities could not be found either, owing to the non-linearity of the coupled equations. Furthermore, @cite_7 investigated whether the shell branch might violate a standard gravitational energy condition. Indeed, they found that the shell branch violates the weak energy condition (WEC). This matter will be important for us in a later section, and so we review it here.
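For reference, the condition at issue is the standard weak energy condition; we quote the textbook definition as a reminder rather than taking it from the cited works. It requires that every timelike observer measure a non-negative energy density,
\[
T_{\mu\nu}\, u^{\mu} u^{\nu} \;\ge\; 0 \qquad \text{for all timelike } u^{\mu},
\]
which, for a static diagonal stress tensor with energy density \(\rho\) and principal pressures \(p_i\), reduces to \(\rho \ge 0\) together with \(\rho + p_i \ge 0\) for each \(i\).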
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2471626564", "2031810620", "1893146140", "2028523941" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhan c c ons. There are two branches of solutions: a shell branch'' connected to the extremal solution, and a horizon branch'' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhan c c on.", "Abstract At the heart of this article will be the study of a branching Brownian motion (BBM) with killing , where individual particles move as Brownian motions with drift − ρ , perform dyadic branching at rate β and are killed on hitting the origin. Firstly, by considering properties of the right-most particle and the extinction probability, we will provide a probabilistic proof of the classical result that the ‘one-sided’ FKPP travelling-wave equation of speed − ρ with solutions f : [ 0 , ∞ ) → [ 0 , 1 ] satisfying f ( 0 ) = 1 and f ( ∞ ) = 0 has a unique solution with a particular asymptotic when ρ 2 β , and no solutions otherwise. Our analysis is in the spirit of the standard BBM studies of [S.C. Harris, Travelling-waves for the FKPP equation via probabilistic arguments, Proc. Roy. Soc. Edinburgh Sect. A 129 (3) (1999) 503–517] and [A.E. Kyprianou, Travelling wave solutions to the K-P-P equation: alternatives to Simon Harris' probabilistic analysis, Ann. Inst. H. Poincare Probab. Statist. 40 (1) (2004) 53–72] and includes an intuitive application of a change of measure inducing a spine decomposition that, as a by product, gives the new result that the asymptotic speed of the right-most particle in the killed BBM is 2 β − ρ on the survival set. Secondly, we introduce and discuss the convergence of an additive martingale for the killed BBM, W λ , that appears of fundamental importance as well as facilitating some new results on the almost-sure exponential growth rate of the number of particles of speed λ ∈ ( 0 , 2 β − ρ ) . Finally, we prove a new result for the asymptotic behaviour of the probability of finding the right-most particle with speed λ > 2 β − ρ . This result combined with Chauvin and Rouault's [B. Chauvin, A. Rouault, KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees, Probab. Theory Related Fields 80 (2) (1988) 299–314] arguments for standard BBM readily yields an analogous Yaglom-type conditional limit theorem for the killed BBM and reveals W λ as the limiting Radon–Nikodým derivative when conditioning the right-most particle to travel at speed λ into the distant future.", "This is the second in a series of papers in which we derive a @math -expansion for the two-dimensional non-local Ginzburg-Landau energy with Coulomb repulsion known as the Ohta-Kawasaki model in connection with diblock copolymer systems. In this model, two phases appear, which interact via a nonlocal Coulomb type energy. 
Here we focus on the sharp interface version of this energy in the regime where one of the phases has very small volume fraction, thus creating small \"droplets\" of the minority phase in a \"sea\" of the majority phase. In our previous paper, we computed the @math -limit of the leading order energy, which yields the averaged behavior for almost minimizers, namely that the density of droplets should be uniform. Here we go to the next order and derive a next order @math -limit energy, which is exactly the Coulombian renormalized energy obtained by Sandier and Serfaty as a limiting interaction energy for vortices in the magnetic Ginzburg-Landau model. The derivation is based on the abstract scheme of Sandier-Serfaty that serves to obtain lower bounds for 2-scale energies and express them through some probabilities on patterns via the multiparameter ergodic theorem. Without thus appealing to the Euler-Lagrange equation, we establish for all configurations which have \"almost minimal energy\" the asymptotic roundness and radius of the droplets, and the fact that they asymptotically shrink to points whose arrangement minimizes the renormalized energy in some averaged sense. Via a kind of @math -equivalence, the obtained results also yield an expansion of the minimal energy for the original Ohta-Kawasaki energy. This leads to expecting to see triangular lattices of droplets as energy minimizers.", "The stability of the inner Reissner-Nordstroem geometry is studied with test massless integer-spin fields. In contrast to previous mathematical treatments we present physical arguments for the processes involved and show that ray tracing and simple first-order scattering suffice to elucidate most of the results. Monochromatic waves which are of small amplitude and ingoing near the outer horizon develop infinite energy densities near the inner Cauchy horizon (as measured by a freely falling observer). Previous work has shown that certain derivatives of the field in a general (nonmonochromatic) disturbance must fall off exponentially near the inner (Cauchy) horizon (r = r sub - ) if energy densities are to remain finite. Thus the solution is unstable to physically reasonable perturbations which arise outside the black hole because such perturbations, if localized near past null infinity (I sup - ), cannot be localized near r sub + , the outer horizon. The mass-energy of an infalling disturbance would generate multipole moments on the black hole. Price, Sibgatullin, and Alekseev have shown that such moments are radiated away as ''tails'' which travel outward and are rescattered inward yielding a wave field with a time dependence t sup -p , p > 0. This decay in time is sufficiently slow that themore » tails yield infinite energy densities on the Cauchy horizon. (The amplification of the low-frequency tails upon interacting with the time-dependent potential between the horizons is an important feature guaranteeing the infinite energy density.) The interior structure of the analytically extended solution is thus disrupted by finite external disturbances. have further shown that even perturbations which are localized as they cross the outer horizon produce singularities at the inner horizon. It is shown that this singularity arises when the incoming radiation is first scattered just inside the outer horizon« less" ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Surprisingly, when the system is near extremality and the asymptotic volume of the K3 is large, the first two terms combine into a dominant, negative contribution. Thus the shell branch violates the WEC. It was argued @cite_7 that the shell branch should therefore be regarded as unphysical. Accordingly, the horizon branch should be considered the dominant, valid supergravity solution for non-BPS enhancons, for the range of parameters admitting it. For the region of parameter space in which no horizon branch exists, other solutions, more general than those yet considered, might be valid @cite_7 .
{ "cite_N": [ "@cite_7" ], "mid": [ "2471626564", "2079936263", "326163211", "2332339252" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhan c c ons. There are two branches of solutions: a shell branch'' connected to the extremal solution, and a horizon branch'' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhan c c on.", "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions.", "We develop a definitive physical-space scattering theory for the scalar wave equation on Kerr exterior backgrounds in the general subextremal case |a|<M. In particular, we prove results corresponding to \"existence and uniqueness of scattering states\" and \"asymptotic completeness\" and we show moreover that the resulting \"scattering matrix\" mapping radiation fields on the past horizon and past null infinity to radiation fields on the future horizon and future null infinity is a bounded operator. The latter allows us to give a time-domain theory of superradiant reflection. The boundedness of the scattering matrix shows in particular that the maximal amplification of solutions associated to ingoing finite-energy wave packets on past null infinity is bounded. On the frequency side, this corresponds to the novel statement that the suitably normalised reflection and transmission coefficients are uniformly bounded independently of the frequency parameters. We further complement this with a demonstration that superradiant reflection indeed amplifies the energy radiated to future null infinity of suitable wave-packets as above. The results make essential use of a refinement of our recent proof [M. Dafermos, I. Rodnianski and Y. Shlapentokh-Rothman, Decay for solutions of the wave equation on Kerr exterior spacetimes III: the full subextremal case |a|<M, arXiv:1402.6034] of boundedness and decay for solutions of the Cauchy problem so as to apply in the class of solutions where only a degenerate energy is assumed finite. 
We show in contrast that the analogous scattering maps cannot be defined for the class of finite non-degenerate energy solutions. This is due to the fact that the celebrated horizon red-shift effect acts as a blue-shift instability when solving the wave equation backwards.", "The aim of this paper is to show that spatial coupling can be viewed not only as a means to build better graphical models, but also as a tool to better understand uncoupled models. The starting point is the observation that some asymptotic properties of graphical models are easier to prove in the case of spatial coupling. In such cases, one can then use the so-called interpolation method to transfer known results for the spatially coupled case to the uncoupled one. Our main use of this framework is for Low-density parity check (LDPC) codes, where we use interpolation to show that the average entropy of the codeword conditioned on the observation is asymptotically the same for spatially coupled as for uncoupled ensembles. We give three applications of this result for a large class of LDPC ensembles. The first one is a proof of the so-called Maxwell construction stating that the MAP threshold is equal to the area threshold of the BP GEXIT curve. The second is a proof of the equality between the BP and MAP GEXIT curves above the MAP threshold. The third application is the intimately related fact that the replica symmetric formula for the conditional entropy in the infinite block length limit is exact." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In subsequent work on non-BPS enhancons, involving two of the current authors, we used simple supergravity techniques to find the most general solutions with the correct symmetries and asymptotic charges of the hot enhancon system @cite_3 . We showed that the only non-BPS solution with a well-behaved event horizon is the horizon branch.
{ "cite_N": [ "@cite_3" ], "mid": [ "2471626564", "1965302333", "2031810620", "2048444779" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhan c c ons. There are two branches of solutions: a shell branch'' connected to the extremal solution, and a horizon branch'' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhan c c on.", "We extend the investigation of nonextremal enhan c c ons, finding the most general solutions with the correct symmetry and charges. There are two families of solutions. One of these contains a solution with a regular horizon found previously; this previous example is shown to be the unique solution with a regular horizon. The other family generalizes a previous nonextreme extension of the enhan c c on, producing solutions with shells that satisfy the weak energy condition. We argue that identifying a unique solution with a shell requires input beyond supergravity.", "Abstract At the heart of this article will be the study of a branching Brownian motion (BBM) with killing , where individual particles move as Brownian motions with drift − ρ , perform dyadic branching at rate β and are killed on hitting the origin. Firstly, by considering properties of the right-most particle and the extinction probability, we will provide a probabilistic proof of the classical result that the ‘one-sided’ FKPP travelling-wave equation of speed − ρ with solutions f : [ 0 , ∞ ) → [ 0 , 1 ] satisfying f ( 0 ) = 1 and f ( ∞ ) = 0 has a unique solution with a particular asymptotic when ρ 2 β , and no solutions otherwise. Our analysis is in the spirit of the standard BBM studies of [S.C. Harris, Travelling-waves for the FKPP equation via probabilistic arguments, Proc. Roy. Soc. Edinburgh Sect. A 129 (3) (1999) 503–517] and [A.E. Kyprianou, Travelling wave solutions to the K-P-P equation: alternatives to Simon Harris' probabilistic analysis, Ann. Inst. H. Poincare Probab. Statist. 40 (1) (2004) 53–72] and includes an intuitive application of a change of measure inducing a spine decomposition that, as a by product, gives the new result that the asymptotic speed of the right-most particle in the killed BBM is 2 β − ρ on the survival set. Secondly, we introduce and discuss the convergence of an additive martingale for the killed BBM, W λ , that appears of fundamental importance as well as facilitating some new results on the almost-sure exponential growth rate of the number of particles of speed λ ∈ ( 0 , 2 β − ρ ) . Finally, we prove a new result for the asymptotic behaviour of the probability of finding the right-most particle with speed λ > 2 β − ρ . This result combined with Chauvin and Rouault's [B. Chauvin, A. Rouault, KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees, Probab. 
Theory Related Fields 80 (2) (1988) 299–314] arguments for standard BBM readily yields an analogous Yaglom-type conditional limit theorem for the killed BBM and reveals W λ as the limiting Radon–Nikodým derivative when conditioning the right-most particle to travel at speed λ into the distant future.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Here, the story is particularly simple. We find that, at some radius greater than @math , the volume of the K3 always shrinks to zero, indicating that somewhere outside this radius, the K3 has reached its stringy volume. Note that the old ( @math ) shell solution @cite_0 falls into this category.
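The "stringy volume" invoked here is the self-dual value familiar from the enhancon literature; we quote it only as a reminder (a standard fact, not a derivation from the particular solutions above). The wrapped-brane description ceases to be reliable once the running K3 volume falls to
\[
V_{*} \;=\; \big(2\pi\sqrt{\alpha'}\big)^{4} \;=\; (2\pi)^{4}\,\alpha'^{\,2},
\]
and the enhancon shell forms at the radius where the running volume first reaches this value.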
{ "cite_N": [ "@cite_0" ], "mid": [ "2064943293", "2038932251", "2056986231", "2169164634" ], "abstract": [ "Abstract The problem of finding the maximum diameter of n equal mutually disjoint circles inside a unit square is addressed in this paper. Exact solutions exist for only n = 1, …, 9,10,16,25,36 while for other n only conjectural solutions have been reported. In this work a max-min optimization approach is introduced which matches the best reported solutions in the literature for all n ⩽ 30, yields a better configuration for n = 15, and provides new results for n = 28 and 29.", "It is known that random k-CNF formulas have a so-called satisfiability threshold at a density (namely, clause-variable ratio) of roughly 2kln2: at densities slightly below this threshold almost all k-CNF formulas are satisfiable, whereas slightly above this threshold almost no k-CNF formula is satisfiable. In the current work we consider satisfiable random formulas and inspect another parameter—the diameter of the solution space (that is, the maximal Hamming distance between a pair of satisfying assignments). It was previously shown that for all densities up to a density slightly below the satisfiability threshold the diameter is almost surely at least roughly n 2 (and n at much lower densities). At densities very much higher than the satisfiability threshold, the diameter is almost surely zero (a very dense satisfiable formula is expected to have only one satisfying assignment). In this paper we show that for all densities above a density that is slightly above the satisfiability threshold (more precisel...", "Let P be a Poisson process of intensity one in a square S n of area n. We construct a random geometric graph G n , k by joining each point of P to its k ≡ k(n) nearest neighbours. Recently, Xue and Kumar proved that if k ≤ 0.074 log n then the probability that G n , k is connected tends to 0 as n → ∞ while, if k ≥ 5.1774 log n, then the probability that G n , k is connected tends to 1 as n → ∞. They conjectured that the threshold for connectivity is k = (1 + o(1)) log n. In this paper we improve these lower and upper bounds to 0.3043 log n and 0.5139 log n, respectively, disproving this conjecture. We also establish lower and upper bounds of 0.7209 log n and 0.9967 log n for the directed version of this problem. A related question concerns coverage. With G n , k as above, we surround each vertex by the smallest (closed) disc containing its k nearest neighbours. We prove that if k ≤ 0.7209 log n then the probability that these discs cover S n tends to 0 as n → ∞ while, if k > 0.9967 log n, then the probability that the discs cover S n tends to 1 as n → ∞.", "The kissing number k(3) is the maximal number of equal size nonoverlapping spheres in three dimensions that can touch another sphere of the same size. This number was the subject of a famous discussion between Isaac Newton and David Gregory in 1694. The first proof that k(3) = 12 was given by Schutte and van der Waerden only in 1953. In this paper we present a new solution of the Newton--Gregory problem that uses our extension of the Delsarte method. This proof relies on basic calculus and simple spherical geometry." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
It is straightforward to find an expression for the radius of the @math -shell solutions. We could also rewrite this in terms of the parameters @math , @math , @math , in order to cast the solution exactly in the language of the previous studies @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2218835040", "2407537893", "2068164539", "2064943293" ], "abstract": [ "We speed up previous (1 + e)-factor approximation algorithms for a number of geometric optimization problems in fixed dimensions: diameter, width, minimum-radius enclosing cylinder, minimum-width enclosing annulus, minimum-width enclosing cylindrical shell, etc. Linear time bounds were known before; we further improve the dependence of the \"constants\" in terms of e.We next consider the data-stream model and present new (1 + e)-factor approximation algorithms that need only constant space for all of the above problems in any fixed dimension. Previously, such a result was known only for diameter.Both sets of results are obtained using the core-set framework recently proposed by Agarwal, Har-Peled, and Varadarajan. Published by Elsevier B.V.", "Given a polygon @math , for two points @math and @math contained in the polygon, their is the length of the shortest @math -path within @math . A of radius @math centered at a point @math is the set of points in @math whose geodesic distance to @math is at most @math . We present a polynomial time @math -approximation algorithm for finding a densest geodesic unit disk packing in @math . Allowing arbitrary radii but constraining the number of disks to be @math , we present a @math -approximation algorithm for finding a packing in @math with @math geodesic disks whose minimum radius is maximized. We then turn our focus on of @math and present a @math -approximation algorithm for covering @math with @math geodesic disks whose maximal radius is minimized. Furthermore, we show that all these problems are @math -hard in polygons with holes. Lastly, we present a polynomial time exact algorithm which covers a polygon with two geodesic disks of minimum maximal radius.", "Given a set of @math n points, each is painted by one of the @math k given colors, we want to choose @math k points with distinct colors to form a color spanning set. For each color spanning set, we can construct the convex hull and the smallest axis-aligned enclosing rectangle, etc. Assuming that each point is chosen independently and identically from the subset of points of the same color, we propose an @math O ( n 2 ) time algorithm to compute the expected area of convex hulls of the color spanning sets and an @math O ( n 2 ) time algorithm to compute the expected perimeter of convex hulls of the color spanning sets. For the expected perimeter (resp. area) of the smallest perimeter (resp. area) axis-aligned enclosing rectangles of the color spanning sets, we present an @math O ( n log n ) (resp. @math O ( n 2 ) ) time algorithm. We also propose a simple approximation algorithm to compute the expected diameter of the color spanning sets. For the expected distance of the closest pair, we show that it is @math # P-complete to compute and there exists no polynomial time @math 2 n 1 - ? approximation algorithm to compute the probability that the closest pair distance of all color spanning sets equals to a given value @math d unless @math P = N P , even in one dimension and when each color paints two points.", "Abstract The problem of finding the maximum diameter of n equal mutually disjoint circles inside a unit square is addressed in this paper. Exact solutions exist for only n = 1, …, 9,10,16,25,36 while for other n only conjectural solutions have been reported. 
In this work a max-min optimization approach is introduced which matches the best reported solutions in the literature for all n ⩽ 30, yields a better configuration for n = 15, and provides new results for n = 28 and 29." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In a related context, the geometry of fractional D @math -branes was studied @cite_5 . Fractional branes can be described as regular D @math -branes wrapped on a vanishing two-cycle inside the @math orbifold limit of K3. The dual gauge theory is again @math SYM with no hypermultiplets. Attempting to take the decoupling limit once again fails to yield a clean strong/weak coupling duality. This happens in a way directly analogous to the original enhancon case.
{ "cite_N": [ "@cite_5" ], "mid": [ "2152342374", "1992572456", "2019049541", "2902333126" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "We study B-branes in two-dimensional N=(2,2) anomalous models, and their behaviour as we vary bulk parameters in the quantum Kahler moduli space. We focus on the case of (2,2) theories defined by abelian gauged linear sigma models (GLSM). We use the hemisphere partition function as a guide to find how B-branes split in the IR into components supported on Higgs, mixed and Coulomb branches: this generalizes the band restriction rule of Herbst-Hori-Page to anomalous models. As a central example, we work out in detail the case of GLSMs for Hirzebruch-Jung resolutions of cyclic surface singularities. In these non-compact models we explain how to compute and regularize the hemisphere partition function for a brane with compact support, and check that its Higgs branch component explicitly matches with the geometric central charge of an object in the derived category." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The authors of @cite_5 found supergravity solutions for fractional branes in six dimensions using two different methods. First, they used boundary state technology to produce a consistent truncation of Type II supergravity coupled to fractional brane sources; second, they related their consistent truncation to the heterotic theory via a chain of dualities. The BPS solutions they found exhibit repulson-like behaviour, and an analogous enhancon phenomenon occurs.
{ "cite_N": [ "@cite_5" ], "mid": [ "1992572456", "2048444779", "2019049541", "2152342374" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The natural extension of this work was, again, to consider these systems when energy is added to take them above the BPS bound. In @cite_2 , a consistent six-dimensional truncation ansatz for fractional Dp-branes in orbifold backgrounds was provided, for general @math . Solutions corresponding to the geometry of non-BPS fractional branes were found, in analogy with the non-BPS enhancon work @cite_0 . After imposing positivity of the ADM mass, half of the solutions could be eliminated. One of the remaining solutions was discarded because it did not have a BPS limit.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2152342374", "2018505968", "2019049541", "2082888708" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "We study the wall-crossing phenomena of D4-D2-D0 bound states with two units of D4-branechargeontheresolvedconifold. We identify the walls of marginal stability and evaluate the discrete changes of the BPS indices by using the Kontsevich-Soibelman wall-crossing formula. In particular, we find that the field theories on D4-branes in two large radius limits are properly connected by the wall-crossings involving the flop transition of the conifold. We also find that in one of the large radius limits there are stable bound states of two D4-D2-D0 fragments.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "We discuss the wall-crossing of the BPS bound states of a non-compact holomorphic D4-brane with D2 and D0-branes on the conifold. We use the Kontsevich-Soibelman wall-crossing formula and analyze the BPS degeneracy in various chambers. In particular we obtain a relation between BPS degeneracies in two limiting attractor chambers related by a flop transition. Our result is consistent with known results and predicts BPS degeneracies in all chambers." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The construction of fractional brane geometries that exhibit the enhancon mechanism is expected to be dual (through T-duality of type IIA on K3) to the original enhancon geometries @cite_9 @cite_5 @cite_2 . However, in view of the work reviewed in the previous subsection, the conclusion that horizons never form in the non-BPS fractional brane geometries is puzzling.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_2" ], "mid": [ "2152342374", "2019049541", "1992572456", "2022574854" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "The recent discovery of an explicit conformal field theory description of Type II p-branes makes it possible to investigate the existence of bound states of such objects. In particular, it is possible with reasonable precision to verify the prediction that the Type IIB superstring in ten dimensions has a family of soliton and bound state strings permuted by SL(2,Z). The space-time coordinates enter tantalizingly in the formalism as non-commuting matrices." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
We will show that this apparent discord is actually an artifact. The hot fractional brane system exhibits exactly the behaviour dual to that of the hot enhancon. In particular, we will show that the solutions of @cite_2 are related by duality to the hot enhancon solutions of @cite_0 . By continuously varying the K3 moduli away from the orbifold point, we can reach a regime in which the shell branch solutions once again violate the WEC. In the following sections we pin down the precise map between the two setups, and resurrect the horizon branch on the fractional brane side. We will also exhibit the fractional brane equivalent of the @math -shell solutions.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2152342374", "2902333126", "2019049541", "1992572456" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "We study B-branes in two-dimensional N=(2,2) anomalous models, and their behaviour as we vary bulk parameters in the quantum Kahler moduli space. We focus on the case of (2,2) theories defined by abelian gauged linear sigma models (GLSM). We use the hemisphere partition function as a guide to find how B-branes split in the IR into components supported on Higgs, mixed and Coulomb branches: this generalizes the band restriction rule of Herbst-Hori-Page to anomalous models. As a central example, we work out in detail the case of GLSMs for Hirzebruch-Jung resolutions of cyclic surface singularities. In these non-compact models we explain how to compute and regularize the hemisphere partition function for a brane with compact support, and check that its Higgs branch component explicitly matches with the geometric central charge of an object in the derived category.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In order to embed the non-extremal D4 brane solutions of @cite_3 in the six-dimensional supergravity, we display a simple two-charge truncation which describes the solutions studied in @cite_3 . These solutions can then be lifted straight across into the larger supergravity theory. In deriving the truncation, it is convenient to switch to heterotic variables using the well-known duality between type IIA on K3 and heterotic strings on @math . This is also convenient for comparing with the fractional brane solutions of @cite_2 , since that paper presents its solutions in the heterotic frame. We stress, however, that we are performing T-dualities between different IIA solutions, and in principle we could have worked in IIA variables throughout.
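As a reminder, the six-dimensional string-string duality map in the NS-NS sector takes the schematic form (quoted as the standard relation, not in the precise conventions of the works above)
\[
\phi_{\rm het} \;=\; -\,\phi_{\rm IIA}, \qquad
G^{\rm het}_{\mu\nu} \;=\; e^{-\phi_{\rm IIA}}\, G^{\rm IIA}_{\mu\nu}, \qquad
H_{\rm het} \;=\; e^{-\phi_{\rm IIA}}\, \star_{6} H_{\rm IIA},
\]
with the heterotic gauge fields mapped to Ramond-Ramond gauge fields on the IIA side.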
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1992572456", "2152342374", "1987603965", "2086840642" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "Abstract The strong coupling dynamics of string theories in dimension d ⩾ 4 are studied. It is argued, among other things, that eleven-dimensional supergravity arises as a low energy limit of the ten-dimensional Type IIA superstring, and that a recently conjectured duality between the heterotic string and Type IIA superstrings controls the strong coupling dynamics of the heterotic string in five, six, and seven dimensions and implies S -duality for both heterotic and Type II strings.", "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane." ] }
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, which assumes access to the gradient, while only being able to evaluate the function at a single point.
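The single-point gradient approximation described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the step size eta, the perturbation radius delta, the unit-ball feasible set, and the quadratic usage example are all assumptions made for the sketch, and the shrinking of the feasible set used in the formal analysis is omitted.

```python
import numpy as np

def bandit_gradient_descent(cost_functions, x0, eta=0.01, delta=0.1, seed=0):
    """Online gradient descent driven only by single function evaluations.

    Each round we play the perturbed point y = x + delta * u (u uniform on the
    unit sphere), observe the single value f(y), and use
        g_hat = (d / delta) * f(y) * u
    as a one-point estimate of the gradient of a smoothed version of f.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    total_cost = 0.0
    for f in cost_functions:
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        y = x + delta * u                # the point actually played
        value = f(y)                     # the only feedback we receive
        total_cost += value
        g_hat = (d / delta) * value * u  # one-point gradient estimate
        x = x - eta * g_hat
        norm = np.linalg.norm(x)
        if norm > 1.0:                   # project back onto the unit ball (assumed feasible set)
            x /= norm
    return x, total_cost

# Usage: a slowly drifting sequence of quadratic cost functions.
targets = [np.array([0.3, -0.2]) * (1 + 0.01 * t) for t in range(500)]
costs = [lambda z, c=c: float(np.sum((z - c) ** 2)) for c in targets]
x_final, cost = bandit_gradient_descent(costs, x0=np.zeros(2))
print(x_final, cost)
```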
For direct offline optimization, i.e., optimization based only on an oracle that evaluates the function, one can in theory use the ellipsoid algorithm @cite_6 or more recent random-walk based approaches @cite_2 . In black-box optimization, practitioners often use Simulated Annealing @cite_12 or finite-difference and simultaneous perturbation stochastic approximation methods (see, for example, @cite_16 ). In the case that the functions may change dramatically over time, a single-point approximation to the gradient may be necessary. Granichin and Spall propose a different single-point estimate of the gradient @cite_5 @cite_11 .
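For contrast with the single-point estimates mentioned above, the following is a minimal sketch of a two-evaluation simultaneous-perturbation (SPSA-style) gradient estimate; the perturbation size c and the Rademacher perturbations are the usual textbook choices, not anything prescribed by this paper.

```python
import numpy as np

def spsa_gradient_estimate(f, x, c=0.05, rng=None):
    """Estimate the gradient of f at x from two function evaluations.

    Each coordinate of the perturbation Delta is +1 or -1 with equal
    probability; coordinate i of the estimate is
        (f(x + c*Delta) - f(x - c*Delta)) / (2 * c * Delta_i).
    Note that this needs two evaluations per step, which is exactly what a
    single-point scheme avoids when the function changes every round.
    """
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    diff = f(x + c * delta) - f(x - c * delta)
    return diff / (2.0 * c * delta)

# Usage on a fixed quadratic (the true gradient at this point is (2, -4)).
f = lambda z: float(np.sum(z ** 2))
print(spsa_gradient_estimate(f, np.array([1.0, -2.0])))
```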
{ "cite_N": [ "@cite_6", "@cite_2", "@cite_5", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2154682027", "2277643282", "2192240806", "2500690480" ], "abstract": [ "We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return a O(1 T)-approximate solution after T iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size. his rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.", "We consider parallel global optimization of derivative-free expensive-to-evaluate functions, and propose an efficient method based on stochastic approximation for implementing a conceptual Bayesian optimization algorithm proposed by (2007). To accomplish this, we use infinitessimal perturbation analysis (IPA) to construct a stochastic gradient estimator and show that this estimator is unbiased. We also show that the stochastic gradient ascent algorithm using the constructed gradient estimator converges to a stationary point of the q-EI surface, and therefore, as the number of multiple starts of the gradient ascent algorithm and the number of steps for each start grow large, the one-step Bayes optimal set of points is recovered. We show in numerical experiments that our method for maximizing the q-EI is faster than methods based on closed-form evaluation using high-dimensional integration, when considering many parallel function evaluations, and is comparable in speed when considering few. We also show that the resulting one-step Bayes optimal algorithm for parallel global optimization finds high quality solutions with fewer evaluations that a heuristic based on approximately maximizing the q-EI. A high quality open source implementation of this algorithm is available in the open source Metrics Optimization Engine (MOE).", "This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. 
In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random walks based methods.", "We consider stochastic strongly convex optimization with a complex inequality constraint. This complex inequality constraint may lead to computationally expensive projections in algorithmic iterations of the stochastic gradient descent (SGD) methods. To reduce the computation costs pertaining to the projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The proposed Epro-SGD method consists of a sequence of epochs; it applies SGD to an augmented objective function at each iteration within the epoch, and then performs a projection at the end of each epoch. Given a strongly convex optimization and for a total number of @math iterations, Epro-SGD requires only @math projections, and meanwhile attains an optimal convergence rate of @math , both in expectation and with a high probability. To exploit the structure of the optimization problem, we propose a proximal variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual averaging method. We apply the proposed methods on real-world applications; the empirical results demonstrate the effectiveness of our methods." ] }
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, which assumes access to the gradient, while only being able to evaluate the function at a single point.
In addition to the appeal of an online model of convex optimization, Zinkevich's gradient descent analysis can be applied to several other online problems for which gradient descent and other special-purpose algorithms have been carefully analyzed, such as Universal Portfolios @cite_0 @cite_18 @cite_19 , online linear regression @cite_13 , and online shortest paths @cite_3 (the latter can be convexified into an online shortest-flow problem).
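To make the connection concrete, here is a small sketch of Zinkevich-style projected online gradient descent applied to online linear regression with squared loss, one of the problems cited above; the 1/sqrt(t) step size and the unit-ball feasible set are illustrative choices, not taken from any of the cited papers.

```python
import numpy as np

def online_gradient_descent(rounds, dim, radius=1.0):
    """Full-information online gradient descent for squared-loss regression.

    Each round we predict with the current weights, the pair (x_t, y_t) is
    then revealed, and we take a gradient step on that round's convex loss,
    followed by a projection onto the ball of the given radius.
    """
    w = np.zeros(dim)
    losses = []
    for t, (x, y) in enumerate(rounds, start=1):
        pred = float(w @ x)
        losses.append((pred - y) ** 2)      # loss suffered this round
        grad = 2.0 * (pred - y) * x         # gradient of this round's loss at w
        w = w - grad / np.sqrt(t)           # diminishing step size ~ 1/sqrt(t)
        norm = np.linalg.norm(w)
        if norm > radius:                   # projection keeps w feasible
            w *= radius / norm
    return w, losses

# Usage: noisy data from a fixed linear model.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3])
data = [(x, float(x @ w_true + 0.01 * rng.normal()))
        for x in rng.normal(size=(200, 2))]
w_hat, losses = online_gradient_descent(data, dim=2)
print(w_hat)
```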
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_0", "@cite_19", "@cite_13" ], "mid": [ "2952840318", "2129160848", "2004001705", "2153749284" ], "abstract": [ "We consider a the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a signle point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if the each function is revealed after the choice is made, then one can achieve vanishingly small regret relative the best single decision chosen in hindsight. We extend this to the bandit setting where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, with access to the gradient (only being able to evaluate the function at a single point).", "In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover's Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret @math , for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log?(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1---19, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover's algorithm and gradient descent.", "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). 
That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.", "We propose a new method for unconstrained optimization of a smooth and strongly convex function, which attains the optimal rate of convergence of Nesterov’s accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. We provide some numerical evidence that the new method can be superior to Nesterov’s accelerated gradient descent." ] }
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, which assumes access to the gradient, while only being able to evaluate the function at a single point.
A similar line of research has developed for the problem of online linear optimization @cite_1 @cite_10 @cite_9 . Here, one wants to solve the related but incomparable problem of optimizing a sequence of linear functions over a possibly non-convex feasible set, modeling problems such as online shortest paths and online binary search trees (which are difficult to convexify). Kalai and Vempala @cite_1 show that, for such linear optimization problems in general, if the offline optimization problem can be solved efficiently, then an efficient online algorithm can also achieve regret bounded by @math in the full-information model. Awerbuch and Kleinberg @cite_10 generalize this to the bandit setting against an oblivious adversary (like ours). Blum and McMahan @cite_9 give a simpler algorithm that applies to adaptive adversaries, which may choose their functions @math depending on the previous points.
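One standard way to realize the Kalai-Vempala reduction is a follow-the-perturbed-leader scheme; the sketch below assumes an offline oracle that returns the feasible 0/1 vector minimizing a linear cost, and the uniform perturbation scale 1/epsilon is only the usual illustrative choice.

```python
import numpy as np

def follow_the_perturbed_leader(offline_oracle, cost_vectors, epsilon, seed=0):
    """Full-information online linear optimization via perturbed leaders.

    Each round the offline oracle is called once, on the cumulative cost
    vector plus a fresh random perturbation drawn uniformly from
    [0, 1/epsilon]^d; the returned point is played, and the true cost vector
    for the round is then revealed and added to the running total.
    """
    rng = np.random.default_rng(seed)
    d = len(cost_vectors[0])
    cumulative = np.zeros(d)
    total_cost = 0.0
    for c in cost_vectors:
        noise = rng.uniform(0.0, 1.0 / epsilon, size=d)
        x = offline_oracle(cumulative + noise)   # one oracle call per round
        total_cost += float(np.dot(c, x))
        cumulative += c
    return total_cost

# Usage: "pick one of d experts" as a trivial linear problem; the oracle
# returns the indicator vector of the cheapest coordinate.
oracle = lambda c: np.eye(len(c))[int(np.argmin(c))]
costs = [np.random.rand(5) for _ in range(100)]
print(follow_the_perturbed_leader(oracle, costs, epsilon=0.1))
```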
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_1" ], "mid": [ "2473549844", "2120745256", "2004001705", "2152898676" ], "abstract": [ "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .", "In the online linear optimization problem, a learner must choose, in each round, a decision from a set D ⊂ ℝn in order to minimize an (unknown and changing) linear cost function. We present sharp rates of convergence (with respect to additive regret) for both the full information setting (where the cost function is revealed at the end of each round) and the bandit setting (where only the scalar cost incurred is revealed). In particular, this paper is concerned with the price of bandit information, by which we mean the ratio of the best achievable regret in the bandit setting to that in the full-information setting. For the full information case, the upper bound on the regret is O*( √nT), where n is the ambient dimension and T is the time horizon. For the bandit case, we present an algorithm which achieves O*(n3 2 √T) regret — all previous (nontrivial) bounds here were O(poly(n)T2 3) or worse. It is striking that the convergence rate for the bandit setting is only a factor of n worse than in the full information case — in stark contrast to the K-arm bandit setting, where the gap in the dependence on K is exponential (√TK vs. √T log K). We also present lower bounds showing that this gap is at least √n, which we conjecture to be the correct order. The bandit algorithm we present can be implemented efficiently in special cases of particular interest, such as path planning and Markov Decision Problems.", "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. 
We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.", "We address online linear optimization problems when the possible actions of the decision maker are represented by binary vectors. The regret of the decision maker is the difference between her realized loss and the minimal loss she would have achieved by picking, in hindsight, the best possible action. Our goal is to understand the magnitude of the best possible minimax regret. We study the problem under three different assumptions for the feedback the decision maker receives: full information, and the partial information models of the so-called “semi-bandit” and “bandit” problems. In the full information case we show that the standard exponentially weighted average forecaster is a provably suboptimal strategy. For the semi-bandit model, by combining the Mirror Descent algorithm and the INF Implicitely Normalized Forecaster strategy, we are able to prove the first optimal bounds. Finally, in the bandit case we discuss existing results in light of a new lower bound, and suggest a conjecture on the optimal regret in that case." ] }
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, which assumes access to the gradient, while only being able to evaluate the function at a single point.
A few comparisons with the online linear optimization problem are interesting to make. First of all, for the bandit versions of the linear problems, there was a distinction between exploration phases and exploitation phases. During exploration phases, one action from a barycentric spanner basis @cite_10 of @math actions was chosen, for the sole purpose of estimating the linear objective function. In contrast, our algorithm does a little bit of exploration each time. Secondly, Blum and McMahan @cite_9 were able to compete against an adaptive adversary, using a careful martingale analysis. It is not clear whether that can be done in our setting.
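The structural difference discussed here, explicit exploration rounds on a spanning basis versus a little exploration folded into every round, can be caricatured as follows; the basis, the exploration probability gamma, and the exploit rule are all placeholders invented for this illustration.

```python
import random

def explore_exploit_round(basis, exploit_action, gamma, play):
    """With probability gamma play a random element of the spanning basis,
    purely to estimate the cost vector; otherwise play the exploit action.
    The single-point scheme discussed above instead perturbs the exploit
    action itself, so every round carries a small amount of exploration."""
    if random.random() < gamma:
        action = random.choice(basis)
        return "explore", action, play(action)
    return "exploit", exploit_action, play(exploit_action)

# Usage with a made-up linear cost over two basis actions.
basis = [(1, 0), (0, 1)]
cost = lambda a: 0.3 * a[0] + 0.7 * a[1]
print(explore_exploit_round(basis, (1, 0), gamma=0.1, play=cost))
```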
{ "cite_N": [ "@cite_9", "@cite_10" ], "mid": [ "2004001705", "1618543586", "2473549844", "2951667255" ], "abstract": [ "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.", "In sequential decision problems in an unknown environment, the decision maker often faces a dilemma over whether to explore to discover more about the environment, or to exploit current knowledge. We address the exploration-exploitation dilemma in a general setting encompassing both standard and contextualised bandit problems. The contextual bandit problem has recently resurfaced in attempts to maximise click-through rates in web based applications, a task with significant commercial interest. In this article we consider an approach of Thompson (1933) which makes use of samples from the posterior distributions for the instantaneous value of each action. We extend the approach by introducing a new algorithm, Optimistic Bayesian Sampling (OBS), in which the probability of playing an action increases with the uncertainty in the estimate of the action value. This results in better directed exploratory behaviour. We prove that, under unrestrictive assumptions, both approaches result in optimal behaviour with respect to the average reward criterion of Yang and Zhu (2002). We implement OBS and measure its performance in simulated Bernoulli bandit and linear regression domains, and also when tested with the task of personalised news article recommendation on a Yahoo! Front Page Today Module data set. We find that OBS performs competitively when compared to recently proposed benchmark algorithms and outperforms Thompson's method throughout.", "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). 
The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .", "We address the online linear optimization problem when the actions of the forecaster are represented by binary vectors. Our goal is to understand the magnitude of the minimax regret for the worst possible set of actions. We study the problem under three different assumptions for the feedback: full information, and the partial information models of the so-called \"semi-bandit\", and \"bandit\" problems. We consider both @math -, and @math -type of restrictions for the losses assigned by the adversary. We formulate a general strategy using Bregman projections on top of a potential-based gradient descent, which generalizes the ones studied in the series of papers (2007), (2008), (2008), Cesa-Bianchi and Lugosi (2009), Helmbold and Warmuth (2009), (2010), (2010), (2010) and Audibert and Bubeck (2010). We provide simple proofs that recover most of the previous results. We propose new upper bounds for the semi-bandit game. Moreover we derive lower bounds for all three feedback assumptions. With the only exception of the bandit game, the upper and lower bounds are tight, up to a constant factor. Finally, we answer a question asked by (2010) by showing that the exponentially weighted average forecaster is suboptimal against @math adversaries." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
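The following toy sketch shows the abstraction step that this kind of indexed predicate abstraction performs on a single concrete state; the process-array example, the specific predicates, and the explicit enumeration over indices are all hypothetical, since the paper works symbolically with a decision procedure rather than by enumeration.

```python
def abstract_state(state, predicates, indices):
    """Abstract a concrete state with first-order (array-valued) variables.

    The abstraction is the set of truth-value tuples obtained by evaluating
    the indexed predicates at every index.  A set A of such tuples stands for
    the universally quantified formula
        forall i . (phi_1(i), ..., phi_k(i)) is one of the tuples in A.
    """
    return frozenset(tuple(bool(p(state, i)) for p in predicates)
                     for i in indices)

# Hypothetical example: N identical processes with a program-counter array,
# and two predicates indexed by a process id i.
N = 4
predicates = [
    lambda s, i: s["pc"][i] == "critical",
    lambda s, i: s["pc"][i] == "waiting",
]
state = {"pc": ["idle", "waiting", "critical", "idle"]}
print(abstract_state(state, predicates, range(N)))
# An inductive invariant corresponds to accumulating these abstractions over
# all abstractly reachable states.
```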
Regular model checking @cite_17 @cite_28 uses regular languages to represent parameterized systems and computes the closure of the regular transition relations to construct the reachable state space. In general, the method is not guaranteed to be complete and requires various acceleration techniques (sometimes guided by the user) to ensure termination. Moreover, approaches based on regular languages are not well suited to representing data in the system. Several examples that we consider in this work cannot be modeled in this framework; the out-of-order processor, which involves data operations, and Peterson's mutual exclusion algorithm are two such examples. Even though the Bakery algorithm can be verified in this framework, it requires considerable user ingenuity to encode the protocol in a regular language.
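As a point of reference for the string-based encoding that regular model checking relies on, here is a toy token-ring example where states are words over {T, N} and the transition relation is length-preserving; a real regular model checker would represent the (parameterized) state sets as automata and accelerate the fixpoint, which this finite-instance sketch does not attempt.

```python
def token_ring_successors(state):
    """Length-preserving transition relation on strings over {'T', 'N'}:
    a token at position i may move to position (i + 1) mod n."""
    n = len(state)
    succs = set()
    for i, c in enumerate(state):
        if c == "T":
            j = (i + 1) % n
            nxt = list(state)
            nxt[i], nxt[j] = "N", "T"
            succs.add("".join(nxt))
    return succs

def reachable(initial):
    """Explicit fixpoint for one fixed ring size; the parameterized problem
    (all ring sizes at once) is what regular model checking targets."""
    seen, frontier = {initial}, [initial]
    while frontier:
        s = frontier.pop()
        for t in token_ring_successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

print(sorted(reachable("TNNN")))   # the token visits every position: 4 states
```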
{ "cite_N": [ "@cite_28", "@cite_17" ], "mid": [ "1861590051", "2055083505", "2167672803", "2096765155" ], "abstract": [ "We present regular model checking, a framework for algorithmic verification of infinite-state systems with, e.g., queues, stacks, integers, or a parameterized linear topology. States are represented by strings over a finite alphabet and the transition relation by a regular length-preserving relation on strings. Major problems in the verification of parameterized and infinite-state systems are to compute the set of states that are reachable from some set of initial states, and to compute the transitive closure of the transition relation. We present two complementary techniques for these problems. One is a direct automata-theoretic construction, and the other is based on widening. Both techniques are incomplete in general, but we give sufficient conditions under which they work. We also present a method for verifying ω-regular properties of parameterized systems, by computation of the transitive closure of a transition relation.", "Regular model checking is a form of symbolic model checking for parameterized and infinite-state systems whose states can be represented as words of arbitrary length over a finite alphabet, in which regular sets of words are used to represent sets of states. We present LTL(MSO), a combination of the logics monadic second-order logic (MSO) and LTL as a natural logic for expressing the temporal properties to be verified in regular model checking. In other words, LTL(MSO) is a natural specification language for both the system and the property under consideration. LTL(MSO) is a two-dimensional modal logic, where MSO is used for specifying properties of system states and transitions, and LTL is used for specifying temporal properties. In addition, the first-order quantification in MSO can be used to express properties parameterized on a position or process. We give a technique for model checking LTL(MSO), which is adapted from the automata-theoretic approach: a formula is translated to a buchi regular transition system with a regular set of accepting states, and regular model checking techniques are used to search for models. We have implemented the technique, and show its application to a number of parameterized algorithms from the literature.", "We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they showed to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This lead us to consider computation tree logic (CTL) which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. 
Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm.", "Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9 over state-of-the-art systems on two established information extraction tasks." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Several researchers have investigated restrictions on the system description that make the parameterized verification problem decidable. Notable among them is the early work by German and Sistla @cite_0 on verifying single-indexed properties for synchronously communicating systems. For restricted systems, "finite cut-off" based approaches @cite_16 @cite_12 @cite_27 reduce the problem to verifying networks of some fixed finite size. These bounds have been established for verifying restricted classes of ring networks and cache coherence protocols. Emerson and Kahlon @cite_27 have verified the version of German's cache coherence protocol with single-entry channels by manually reducing it to a snoopy protocol, for which a finite cut-off exists. However, the reduction is performed manually, exploits details of the protocol's operation, and thus requires user ingenuity. It cannot easily be extended to verify other unbounded systems, including the Bakery algorithm or out-of-order processors.
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_16", "@cite_12" ], "mid": [ "2963688871", "2504057811", "2088665600", "2130123502" ], "abstract": [ "We study verification problems for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. In particular, we focus in this paper on the model proposed by Suzuki and Yamashita of anonymous robots evolving in a discrete space with a finite number of locations (here, a ring). A large number of algorithms have been proposed working for rings whose size is not a priori fixed and can be hence considered as a parameter. Handmade correctness proofs of these algorithms have been shown to be error-prone, and recent attention had been given to the application of formal methods to automatically prove those. Our work is the first to study the verification problem of such algorithms in the parameterized case. We show that safety and reachability problems are undecidable for robots evolving asynchronously. On the positive side, we show that safety properties are decidable in the synchronous case, as well as in the asynchronous case for a particular class of algorithms. Several properties on the protocol can be decided as well. Decision procedures rely on an encoding in Presburger arithmetics formulae that can be verified by an SMT-solver. Feasibility of our approach is demonstrated by the encoding of several case studies.", "We propose a new method for the verification of parameterized cache coherence protocols. Cache coherence protocols are used to maintain data consistency in multiprocessor systems equipped with local fast caches. In our approach we use arithmetic constraints to model possibly infinite sets of global states of a multiprocessor system with many identical caches. In preliminary experiments using symbolic model checkers for infinite-state systems based on real arithmetics (HyTech [HHW97] and DMC [DP99]) we have automatically verified safety properties for parameterized versions of widely implemented write-invalidate and write-update cache coherence policies like the Mesi, Berkeley, Illinois, Firefly and Dragon protocols [Han93]. With this application, we show that symbolic model checking tools originally designed for hybrid and concurrent systems can be applied successfully to a new class of infinite-state systems of practical interest.", "In the unconditionally reliable message transmission (URMT) problem, two non-faulty players, the sender S and the receiver R are part of a synchronous network modeled as a directed graph. S has a message that he wishes to send to R; the challenge is to design a protocol such that after exchanging messages as per the protocol, the receiver R should correctly obtain S's message with arbitrarily small error probability Δ, in spite of the influence of a Byzantine adversary that may actively corrupt up to t nodes in the network (we denote such a URMT protocol as (t, (1 - Δ))-reliable). While it is known that (2t + 1) vertex disjoint directed paths from S to R are necessary and sufficient for (t, 1)-reliable URMT (that is with zero error probability), we prove that a strictly weaker condition, which we define and denote as (2t, t)-special-connectivity, together with just (t+1) vertex disjoint directed paths from S to R, is necessary and sufficient for (t, (1' - Δ))-reliable URMT with arbitrarily small (but non-zero) error probability, Δ. Thus, we demonstrate the power of randomization in the context of reliable message transmission. 
In fact, for any positive integer k > 0, we show that there always exists a digraph Gk such that (k, 1)-reliable URMT is impossible over Gk whereas there exists a (2k, (1 - Δ))-reliable URMT protocol, Δ > 0 in Gk. In a digraph G on which (t, (1 - Δ))-reliable URMT is possible, an edge is called critical if the deletion of that edge renders (t, (1 - Δ))-reliable URMT impossible. We give an example of a digraph G on n vertices such that G has Ω(n2) critical edges. This is quite baffling since no such graph exists for the case of perfect reliable message transmission (or equivalently (t, 1)-reliable URMT) or when the underlying graph is undirected. Such is the anomalous behavior of URMT protocols (when \"randomness meet directedness\") that it makes it extremely hard to design efficient protocols over arbitrary digraphs. However, if URMT is possible between every pair of vertices in the network, then we present efficient protocols for the same.", "We present decidability results for the verification of cryptographic protocols in the presence of equational theories corresponding to xor and Abelian groups. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties such as xor, we extend the conventional Dolev-Yao model by permitting the intruder to exploit these properties. We show that the ground reachability problem in NP for the extended intruder theories in the cases of xor and Abelian groups. This result follows from a normal proof theorem. Then, we show how to lift this result in the xor case: we consider a symbolic constraint system expressing the reachability (e.g., secrecy) problem for a finite number of sessions. We prove that such a constraint system is decidable, relying in particular on an extension of combination algorithms for unification procedures. As a corollary, this enables automatic symbolic verification of cryptographic protocols employing xor for a fixed number of sessions." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Flanagan and Qadeer @cite_2 use indexed predicates to synthesize loop invariants for sequential software programs that involve unbounded arrays. They also provide heuristics to extract some of the predicates automatically from the program text. The heuristics are specific to loops in sequential software and are not suited to the more general unbounded systems that we handle in this paper. In this work, we explore formal properties of this formulation and apply it to verifying distributed systems. In recent work @cite_22 , we provide a syntactic heuristic based on the weakest precondition transformer @cite_25 for discovering most of the predicates for many of the systems that we consider in this paper.
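A minimal sketch of the weakest-precondition computation underlying such a syntactic heuristic is shown below, using sympy for substitution; the particular assignment, conditional, and postcondition are invented for illustration and are not taken from the cited work.

```python
import sympy as sp

x, y = sp.symbols("x y")

def wp_assign(var, rhs, post):
    """wp(var := rhs, post) is post with rhs substituted for var."""
    return post.subs(var, rhs)

def wp_ite(cond, wp_then, wp_else):
    """wp(if cond then S1 else S2, post) combines the branch preconditions."""
    return sp.And(sp.Implies(cond, wp_then), sp.Implies(sp.Not(cond), wp_else))

# Harvesting a candidate predicate: the wp of the postcondition x >= y
# across the assignment x := x + 1 suggests the predicate x + 1 >= y.
post = sp.Ge(x, y)
print(wp_assign(x, x + 1, post))               # x + 1 >= y
print(wp_ite(sp.Gt(x, 0), post, sp.Ge(y, x)))  # guarded combination of preconditions
```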
{ "cite_N": [ "@cite_25", "@cite_22", "@cite_2" ], "mid": [ "1503677488", "39834257", "171295454", "1970168990" ], "abstract": [ "We present a new method for automatic generation of loop invariants for programs containing arrays. Unlike all previously known methods, our method allows one to generate first-order invariants containing alternations of quantifiers. The method is based on the automatic analysis of the so-called update predicates of loops. An update predicate for an array A expresses updates made to A . We observe that many properties of update predicates can be extracted automatically from the loop description and loop properties obtained by other methods such as a simple analysis of counters occurring in the loop, recurrence solving and quantifier elimination over loop variables. We run the theorem prover Vampire on some examples and show that non-trivial loop invariants can be generated.", "Geometric heuristics for the quantifier elimination approach presented by Kapur (2004) are investigated to automatically derive loop invariants expressing weakly relational numerical properties (such as l≤x≤h or l≤±x ±y≤h) for imperative programs. Such properties have been successfully used to analyze commercial software consisting of hundreds of thousands of lines of code (using for example, the Astree tool based on abstract interpretation framework proposed by Cousot and his group). The main attraction of the proposed approach is its much lower complexity in contrast to the abstract interpretation approach (O(n2) in contrast to O(n4), where n is the number of variables) with the ability to still generate invariants of comparable strength. This approach has been generalized to consider disjunctive invariants of the similar form, expressed using maximum function (such as max (x+a,y+b,z+c,d)≤max (x+e,y+f,z+g,h)), thus enabling automatic generation of a subclass of disjunctive invariants for imperative programs as well.", "We present a technique for using infeasible program paths to automatically infer Range Predicates that describe properties of unbounded array segments. First, we build proofs showing the infeasibility of the paths, using axioms that precisely encode the high-level (but informal) rules with which programmers reason about arrays. Next, we mine the proofs for Craig Interpolants which correspond to predicates that refute the particular counterexample path. By embedding the predicate inference technique within a Counterexample-Guided Abstraction-Refinement (CEGAR) loop, we obtain a method for verifying data-sensitive safety properties whose precision is tailored in a program- and property-sensitive manner. Though the axioms used are simple, we show that the method suffices to prove a variety of array-manipulating programs that were previously beyond automatic model checkers.", "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. 
Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools." ] }
math0407092
2952242105
In this paper we study a random graph with @math nodes, where node @math has degree @math and @math are i.i.d. with @math . We assume that @math for some @math and some constant @math . This graph model is a variant of the so-called configuration model, and includes heavy tail degrees with finite variance. The minimal number of edges between two arbitrary connected nodes, also known as the graph distance or the hopcount, is investigated when @math . We prove that the graph distance grows like @math , when the base of the logarithm equals @math . This confirms the heuristic argument of Newman, Strogatz and Watts NSW00 . In addition, the random fluctuations around this asymptotic mean @math are characterized and shown to be uniformly bounded. In particular, we show convergence in distribution of the centered graph distance along exponentially growing subsequences.
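A quick way to see the log_nu N behaviour numerically is the small simulation sketch below; it is not from the paper, the inverse-transform sampler only mimics the assumed Pareto-type tail, and multi-edges and self-loops produced by the configuration model are simply collapsed.

```python
import math
import random
import networkx as nx

def heavy_tailed_degree(tau, rng):
    """Integer degree with P(D > x) roughly x^(1 - tau); tau > 3 gives
    finite variance, matching the regime studied above."""
    return max(1, int(math.ceil(rng.random() ** (-1.0 / (tau - 1.0)))))

def average_distance(n=2000, tau=3.5, seed=1):
    rng = random.Random(seed)
    degrees = [heavy_tailed_degree(tau, rng) for _ in range(n)]
    if sum(degrees) % 2:                     # total degree must be even
        degrees[0] += 1
    G = nx.configuration_model(degrees, seed=seed)
    G = nx.Graph(G)                          # collapse parallel edges
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    observed = nx.average_shortest_path_length(giant)
    mean_d = sum(degrees) / n
    mean_d2 = sum(d * d for d in degrees) / n
    nu = (mean_d2 - mean_d) / mean_d         # nu = E[D(D-1)] / E[D]
    return observed, math.log(n) / math.log(nu)

print(average_distance())   # observed average distance vs. log_nu(n)
```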
A second related model can be found in @cite_15 and @cite_42 , where an edge between nodes @math and @math is present with probability equal to @math for some 'expected degree vector' @math . Chung and Lu @cite_15 show that when @math is proportional to @math the average distance between pairs of nodes is @math when @math , and @math when @math . The difference between this model and ours is that the nodes are not exchangeable in @cite_15 , but the observed phenomena are similar. This result can be heuristically understood as follows. Firstly, the actual degree vector in @cite_15 should be close to the expected degree vector. Secondly, for the expected degree vector, we can compute that the number of nodes with degree less than or equal to @math equals @math . Thus, one expects that the number of nodes with degree at most @math decreases as @math , similarly to our model. In @cite_42 , Chung and Lu study the sizes of the connected components in the above model. The advantage of this model is that the edges are present independently, which makes the resulting graph closer to a traditional random graph.
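For comparison, the expected-degree model of Chung and Lu can be sampled directly because edges are independent; the quadratic loop below is the naive sampler, and the weight sequence is an arbitrary illustrative choice rather than anything from the cited papers.

```python
import random

def chung_lu_graph(weights, seed=0):
    """Each pair i, j is joined independently with probability
    min(1, w_i * w_j / sum(w)), so node i has expected degree close to w_i."""
    rng = random.Random(seed)
    n = len(weights)
    total = float(sum(weights))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / total):
                edges.append((i, j))
    return edges

# Usage: weights with a power-law-ish profile, heaviest node first.
tau = 3.5
n = 500
weights = [10.0 * (i + 1) ** (-1.0 / (tau - 1.0)) for i in range(n)]
edges = chung_lu_graph(weights)
degrees = [0] * n
for i, j in edges:
    degrees[i] += 1
    degrees[j] += 1
print(weights[0], degrees[0])   # expected vs realized degree of the heaviest node
```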
{ "cite_N": [ "@cite_15", "@cite_42" ], "mid": [ "2950469527", "2964292545", "2104812688", "2950981509" ], "abstract": [ "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erd o s-R 'enyi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math .", "We consider the problem of clustering a graph @math into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables @math , where @math is the incidence matrix of a graph @math , @math is the vector of unknown vertex variables (with a uniform prior), and @math is a noise vector with Bernoulli @math i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery (up to global flip) of @math is possible if and only the graph @math is connected, with a sharp threshold at the edge probability @math for Erdős-Renyi random graphs. The first goal of this paper is to determine how the edge probability @math needs to scale to allow exact recovery in the presence of noise. Defining the degree rate of the graph by @math , it is shown that exact recovery is possible if and only if @math . In other words, @math is the information theoretic threshold for exact recovery at low-SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. 
For a deterministic graph @math , defining the degree rate as @math , where @math is the minimum degree of the graph, it is shown that the proposed method achieves the rate @math , where @math is the spectral gap of the graph @math .", "Motivation: Analogous to biological sequence comparison, comparing cellular networks is an important problem that could provide insight into biological understanding and therapeutics. For technical reasons, comparing large networks is computationally infeasible, and thus heuristics, such as the degree distribution, clustering coefficient, diameter, and relative graphlet frequency distribution have been sought. It is easy to demonstrate that two networks are different by simply showing a short list of properties in which they differ. It is much harder to show that two networks are similar, as it requires demonstrating their similarity in all of their exponentially many properties. Clearly, it is computationally prohibitive to analyze all network properties, but the larger the number of constraints we impose in determining network similarity, the more likely it is that the networks will truly be similar. Results: We introduce a new systematic measure of a network's local structure that imposes a large number of similarity constraints on networks being compared. In particular, we generalize the degree distribution, which measures the number of nodes 'touching' k edges, into distributions measuring the number of nodes 'touching' k graphlets, where graphlets are small connected non-isomorphic subgraphs of a large network. Our new measure of network local structure consists of 73 graphlet degree distributions of graphlets with 2--5 nodes, but it is easily extendible to a greater number of constraints (i.e. graphlets), if necessary, and the extensions are limited only by the available CPU. Furthermore, we show a way to combine the 73 graphlet degree distributions into a network 'agreement' measure which is a number between 0 and 1, where 1 means that networks have identical distributions and 0 means that they are far apart. Based on this new network agreement measure, we show that almost all of the 14 eukaryotic PPI networks, including human, resulting from various high-throughput experimental techniques, as well as from curated databases, are better modeled by geometric random graphs than by Erdos--Reny, random scale-free, or Barabasi--Albert scale-free networks. Availability: Software executables are available upon request. Contact: [email protected]", "Let @math be a tree space (or tree network) represented by a weighted tree with @math vertices, and @math be a set of @math stochastic points in @math , each of which has a fixed location with an independent existence probability. We investigate two fundamental problems under such a stochastic setting, the closest-pair problem and the nearest-neighbor search. For the former, we study the computation of the @math -threshold probability and the expectation of the closest-pair distance of a realization of @math . We propose the first algorithm to compute the @math -threshold probability in @math time for any given threshold @math , which immediately results in an @math -time algorithm for computing the expected closest-pair distance. Based on this, we further show that one can compute a @math -approximation for the expected closest-pair distance in @math time, by arguing that the expected closest-pair distance can be approximated via @math threshold probability queries. 
For the latter, we study the @math most-likely nearest-neighbor search ( @math -LNN) via a notion called @math most-likely Voronoi Diagram ( @math -LVD). We show that the size of the @math -LVD @math of @math on @math is bounded by @math if the existence probabilities of the points in @math are constant-far from 0. Furthermore, we establish an @math average-case upper bound for the size of @math , by regarding the existence probabilities as i.i.d. random variables drawn from some fixed distribution. Our results imply the existence of an LVD data structure which answers @math -LNN queries in @math time using average-case @math space, and worst-case @math space if the existence probabilities are constant-far from 0. Finally, we also give an @math -time algorithm to construct the LVD data structure." ] }
math0407092
2952242105
In this paper we study a random graph with @math nodes, where node @math has degree @math and @math are i.i.d. with @math . We assume that @math for some @math and some constant @math . This graph model is a variant of the so-called configuration model, and includes heavy tail degrees with finite variance. The minimal number of edges between two arbitrary connected nodes, also known as the graph distance or the hopcount, is investigated when @math . We prove that the graph distance grows like @math , when the base of the logarithm equals @math . This confirms the heuristic argument of Newman, Strogatz and Watts NSW00 . In addition, the random fluctuations around this asymptotic mean @math are characterized and shown to be uniformly bounded. In particular, we show convergence in distribution of the centered graph distance along exponentially growing subsequences.
The reason why we study the random graphs at a given time instant is that we are interested in the topology of the random graph. In @cite_36 , inspired by the observed power-law degree sequence in @cite_12 , the configuration model with i.i.d. degrees is proposed as a model for the AS graph in the Internet, and it is argued on a qualitative basis that this simple model serves as a better model for the Internet topology than currently used topology generators. Our results can be seen as a step towards a quantitative understanding of whether the hopcount in the Internet is well described by the average graph distance in the configuration model.
{ "cite_N": [ "@cite_36", "@cite_12" ], "mid": [ "2107648668", "1978479505", "2170362389", "2169015768" ], "abstract": [ "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "Random graphs with a given degree sequence are a useful model capturing several features absent in the classical Erd˝os-Renyi model, such as dependent edges and non-binomial degrees. In this paper, we use a characterization due to Erd˝os and Gallai to develop a sequential algorithm for generating a random labeled graph with a given degree sequence. The algorithm is easy to implement and allows surprisingly ecient sequential importance sampling. Applications are given, in- cluding simulating a biological network and estimating the number of graphs with a given degree sequence. 1. Introduction. Random graphs with given vertex degrees have recently attracted great interest as a model for many real-world complex networks, including the World Wide Web, peer-to-peer networks, social networks, and biological networks. Newman (58) contains an excellent survey of these networks, with extensive references. A common approach to simulating these systems is to study (empirically or theoretically) the degrees of the vertices in instances of the network, and then to generate a random graph with the appropriate degrees. Graphs with prescribed degrees also appear in random matrix theory and string theory, which can call for large simulations based on random k-regular graphs. Throughout, we are concerned with generating simple graphs, i.e., no loops or multiple edges are allowed (the problem becomes considerably easier if loops and multiple edges are allowed). The main result of this paper is a new sequential importance sampling algorithm for generating random graphs with a given degree sequence. The idea is to build up the graph sequentially, at each stage choosing an edge from a list of candidates with probability proportional to the degrees. Most previously studied algorithms for this problem sometimes either get stuck or produce loops or multiple edges in the output, which is handled by starting over and trying again. Often for such algorithms, the probability of a restart being needed on a trial rapidly approaches 1 as the degree parameters grow, resulting in an enormous number of trials being needed on average to obtain a simple graph. A major advantage of our algorithm is that it never gets stuck. 
This is achieved using the Erd˝os- Gallai characterization, which is explained in Section 2, and a carefully chosen order of edge selection.", "We consider the issue of protection in very large networks displaying randomness in topology. We employ random graph models to describe such networks, and obtain probabilistic bounds on several parameters related to reliability. In particular, we take the case of random regular networks for simplicity and consider the length of primary and backup paths in terms of the number of hops. First, for a randomly picked pair of nodes, we derive a lower bound on the average distance between the pair and discuss the tightness of the bound. In addition, noting that primary and protection paths form cycles, we obtain a lower bound on the average length of the shortest cycle around the pair. Finally, we show that the protected connections of a given maximum finite length are rare. We then generalize our network model so that different degrees are allowed according to some arbitrary distribution, and show that the second moment of degree over the first moment is an important shorthand for behavior of a network. Notably, we show that most of the results in regular networks carry over with minor modifications, which significantly broadens the scope of networks to which our approach applies. We present as an example the case of networks with a power-law degree distribution.", "Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the worldwide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph." ] }
cs0406019
2950243200
We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding packet drops indiscriminate of flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity and is scalable to very high port speeds.
In recent years, these potential scalability concerns have been addressed by implementing a very small number of independent service guarantees. Under the Differentiated Services framework @cite_12 , flows are aggregated into @math classes, and service guarantees are offered per class. The downside is that the realized QoS per flow has a lower level of assurance (a higher probability of violating the desired service level) than the QoS per aggregate @cite_9 , @cite_1 . Moreover, recently proposed VPN and VLAN services @cite_11 , @cite_17 require per-VPN or per-VLAN QoS guarantees. All of the above are arguments in favor of implementing a number of independent service guarantees per port much larger than six.
{ "cite_N": [ "@cite_11", "@cite_9", "@cite_1", "@cite_12", "@cite_17" ], "mid": [ "2046531425", "1985441584", "2136136555", "1979708789" ], "abstract": [ "The paper addresses the issue of providing Quality of Service (QoS) guarantees in the Internet. After a brief discussion of Internet traffic characteristics, we consider the possibility of performing multiplexing with predictable performance for stream and elastic traffic using open–loop and closed–loop control, respectively. QoS depends essentially on providing sufficient capacity to handle expected demand. We argue that flow awareness is additionally necessary to ensure that traffic is directed over routes with available capacity and to avoid congestion collapse in case of overload. Proposed flow–aware controls allow simple volume–based charging and the development of an economic model similar to that of the telephone network.", "Providing differentiated Quality of Service (QoS) over unreliable wireless channels is an important challenge for supporting several future applications. We analyze a model that has been proposed to describe the QoS requirements by four criteria: traffic pattern, channel reliability, delay bound, and throughput bound. We study this mathematical model and extend it to handle variable bit rate applications. We then obtain a sharp characterization of schedulability vis-a-vis latencies and timely throughput. Our results extend the results so that they are general enough to be applied on a wide range of wireless applications, including MPEG Variable-Bit-Rate (VBR) video streaming, VoIP with differentiated quality, and wireless sensor networks (WSN). Two major issues concerning QoS over wireless are admission control and scheduling. Based on the model incorporating the QoS criteria, we analytically derive a necessary and sufficient condition for a set of variable bit-rate clients to be feasible. Admission control is reduced to evaluating the necessary and sufficient condition. We further analyze two scheduling policies that have been proposed, and show that they are both optimal in the sense that they can fulfill every set of clients that is feasible by some scheduling algorithms. The policies are easily implemented on the IEEE 802.11 standard. Simulation results under various settings support the theoretical study.", "We propose a novel approach to QoS for real-time traffic over wireless mesh networks, in which application layer characteristics are exploited or shaped in the design of medium access control. Specifically, we consider the problem of efficiently supporting a mix of Voice over IP (VoIP) and delay-insensitive traffic, assuming a narrowband physical layer with CSMA CA capabilities. The VoIP call carrying capacity of wireless mesh networks based on classical CSMA CA (e.g., the IEEE 802.11 standard) is low compared to the raw available bandwidth, due to lack of bandwidth and delay guarantees. Time Division Multiplexing (TDM) could potentially provide such guarantees, but it requires fine-grained network-wide synchronization and scheduling, which are difficult to implement. In this paper, we introduce Sticky CSMA CA, a new medium access mechanism that provides TDM-like performance to real-time flows without requiring explicit synchronization. We exploit the natural periodicity of VoIP flows to obtain implicit synchronization and multiplexing gains. Nodes monitor the medium using the standard CSMA CA mechanism, except that they remember the recent history of activity in the medium. 
A newly arriving VoIP flow uses this information to grab the medium at the first available opportunity, and then sticks to a periodic schedule, providing delay and bandwidth guarantees. Delay-insensitive traffic fills the gaps left by the real-time flows using novel contention mechanisms to ensure efficient use of the leftover bandwidth. Large gains over IEEE 802.11 networks are demonstrated in terms of increased voice call carrying capacity (more than 100 in some cases). We briefly discuss extensions of these ideas to a broader class of real-time applications, in which artificially imposing periodicity (or some other form of regularity) at the application layer can lead to significant enhancements of QoS due to improved medium access.", "Abstract This paper studies the quality of service (QoS) provision problem in noncooperative networks where applications or users are selfish and routers implement generalized processor sharing based packet scheduling. We formulate a model of QoS provision in noncooperative networks where users are given the freedom to choose both the service classes and traffic volume allocated, and heterogenous QoS preferences are captured by a user's utility function. We present a comprehensive analysis of the noncooperative multi-class QoS provision game, giving a complete characterization of Nash equilibria and their existence criteria, and show under what conditions they are Pareto- and system-optimal. We show that, in general, Nash equilibria need not exist, and when they do exist, they need not be Pareto- nor system-optimal. For certain “resource-plentiful” systems, however, we show that the world indeed can be nice with Nash equilibria, Pareto optima, and system optima collapsing into a single class. We study the problem of facilitating effective QoS in systems with multi-dimensional QoS vectors containing both mean- and burstiness-related QoS measures. We extend the game-theoretic analysis to multi-dimensional QoS vector games and show under what conditions the aforementioned results carry over." ] }
cs0406019
2950243200
We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding packet drops indiscriminate of flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity and is scalable to very high port speeds.
More recent proposals @cite_16 decrease the time interval between two runs of the matching algorithm, but at the cost of increased burstiness and the need for additional scheduling algorithms to mitigate unbounded delays. Moreover, the service presented in @cite_16 is of the Premium 1-to-1 type, but cannot provide Assured N-to-1 service.
{ "cite_N": [ "@cite_16" ], "mid": [ "1984382275", "2606268922", "2052551455", "2058056324" ], "abstract": [ "This paper proposes a new class of online policies for scheduling in input-buffered crossbar switches. Given an initial configuration of packets at the input buffers, these policies drain all packets in the system in the minimal amount of time provided that there are no further arrivals. These policies are also throughput optimal for a large class of arrival processes which satisfy strong-law of large numbers. We show that it is possible for policies in our class to be throughput optimal even if they are not constrained to be maximal in every time slot. Most algorithms for switch scheduling take an edge based approach; in contrast, we focus on scheduling (a large enough set of) the most congested ports. This alternate approach allows for lower-complexity algorithms, and also requires a non-standard technique to prove throughput-optimality. One algorithm in our class, Maximum Vertex-weighted Matching (MVM) has worst-case complexity similar to Max-size Matching, and in simulations shows slightly better delay performance than Max-(edge)weighted-Matching (MWM).", "This paper introduces a scheduling problem with coupled-tasks in presence of a compatibility graph on a single processor. We investigate a specific configuration, in which the coupled-tasks possess an idle time equal to 2. The complexity of these problems will be studied according to the presence or absence of triangles in the compatibility graph. As an extended matching, we propose a polynomial-time algorithm which consists in minimizing the number of non-covered vertices, by covering vertices with edges or paths of length two in the compatibility graph. This type of covering will be denoted by 2-cover technique. According on the compatibility graph type, the 2-cover technique provides a polynomial-time rho-approximation algorithm with rho=13 12 (resp. rho=10 9) in absence (resp. presence) of triangles.", "Let A and B be two sets of n objects in d , and let Match be a (one-to-one) matching between A and B . Let min(Match ), max(Match ), and Σ(Match) denote the length of the shortest edge, the length of the longest edge, and the sum of the lengths of the edges of Match , respectively. Bottleneck matching— a matching that minimizes max(Match )— is suggested as a convenient way for measuring the resemblance between A and B . Several algorithms for computing, as well as approximating, this resemblance are proposed. The running time of all the algorithms involving planar objects is roughly O(n 1.5 ) . For instance, if the objects are points in the plane, the running time of the exact algorithm is O(n 1.5 log n ) . A semidynamic data structure for answering containment problems for a set of congruent disks in the plane is developed. This data structure may be of independent interest.", "This paper considers the rescheduling of operations with release dates and multiple resources when disruptions prevent the use of a preplanned schedule. The overall strategy is to follow the preschedule until a disruption occurs. After a disruption, part of the schedule is reconstructed to match up with the preschedule at some future time. Conditions are given for the optimality of this approach. A practical implementation is compared with the alternatives of preplanned static scheduling and myopic dynamic scheduling. A set of practical test problems demonstrates the advantages of the matchup approach. 
We also explore the solution of the matchup scheduling problem and show the advantages of an integer programming approach for allocating resources to jobs." ] }
cond-mat0406404
1540064387
Mapping the Internet generally consists of sampling the network from a limited set of sources by using "traceroute"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest-path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks over a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at the steps toward more efficient mapping strategies.
Work by @cite_21 has shown that power-law-like distributions can be obtained for subgraphs of Erdős–Rényi random graphs when the subgraph is the result of a traceroute exploration with relatively few sources and destinations. They discuss the origin of these biases and the effect of the distance between source and target in the mapping process.
{ "cite_N": [ "@cite_21" ], "mid": [ "2107648668", "2137465421", "2120511087", "2000042664" ], "abstract": [ "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "A power law degree distribution is established for a graph evolution model based on the graph class of k-trees. This k-tree-based graph process can be viewed as an idealized model that captures some characteristics of the preferential attachment and copying mechanisms that existing evolving graph processes fail to model due to technical obstacles. The result also serves as a further cautionary note reinforcing the point of view that a power law degree distribution should not be regarded as the only important characteristic of a complex network, as has been previously argued [D. Achlioptas, A. Clauset, D. Kempe, C. Moore, On the bias of traceroute sampling, or power-law degree distribution in regular graphs, in: Proceedings of the 37th ACM Symposium on Theory of Computing, STOC'05, 2005, pp. 694-703; L. Li, D. Alderson, J. Doyle, W. Willinger, Towards a theory of scale-free graphs: Definition, properties, and implications, Internet Mathematics 2 (4) (2005) 431-523; M. Mitzenmacher, The future of power law research, Internet Mathematics, 2 (4) (2005) 525-534].", "Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper by found empirically that the resuting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. 
As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of on a rigorous footing, and extends them to nearly arbitrary degree distributions.", "Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out." ] }
cond-mat0406404
1540064387
Mapping the Internet generally consists of sampling the network from a limited set of sources by using "traceroute"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest-path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks over a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at the steps toward more efficient mapping strategies.
In Ref. @cite_11 , Petermann and De Los Rios have studied a traceroute-like procedure on various examples of scale-free graphs, showing that, in the case of a single source, power-law distributions with underestimated exponents are obtained. Analytical estimates of the measured exponents as a function of the true ones were also derived. Finally, in a recent preprint that appeared during the completion of our work, Guillaume and Latapy @cite_28 report on shortest-path explorations of synthetic graphs, comparing properties of the resulting sampled graph with those of the original network. The exploration is characterized using level plots of the proportion of discovered nodes and edges in the graph as a function of the number of sources and targets, which also give hints for the optimal placement of sources and targets. All these pieces of work make clear the relevance of determining to what extent the topological properties observed in sampled graphs are representative of those of the real networks.
{ "cite_N": [ "@cite_28", "@cite_11" ], "mid": [ "1540064387", "2107648668", "1693225151", "2120511087" ], "abstract": [ "Mapping the Internet generally consists in sampling the network from a limited set of sources by using \"traceroute\"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability is depending on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a throughout numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint the steps toward more efficient mapping strategies.", "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "Despite great effort spent measuring topological features of large networks like the Internet, it was recently argued that sampling based on taking paths through the network (e.g., traceroutes) introduces a fundamental bias in the observed degree distribution. We examine this bias analytically and experimentally. For classic random graphs with mean degree c, we show analytically that traceroute sampling gives an observed degree distribution P(k) 1 k for k < c, even though the underlying degree distribution is Poisson. 
For graphs whose degree distributions have power-law tails P(k) k^-alpha, the accuracy of traceroute sampling is highly sensitive to the population of low-degree vertices. In particular, when the graph has a large excess (i.e., many more edges than vertices), traceroute sampling can significantly misestimate alpha.", "Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper by found empirically that the resuting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of on a rigorous footing, and extends them to nearly arbitrary degree distributions." ] }
cs0405044
2950057546
Most previous work on the recently developed language-modeling approach to information retrieval focuses on document-specific characteristics, and therefore does not take into account the structure of the surrounding corpus. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in precision and recall, and our new interpolation algorithm posts statistically significant improvements for both metrics over all three corpora tested.
Document clustering has a long history in information retrieval @cite_11 @cite_12 ; in particular, approximating topics via clusters is a recurring theme @cite_18 . Arguably the work most related to ours, by dint of employing both clustering and language modeling in the context of ad hoc retrieval (see, e.g., @cite_16 , @cite_3 , and @cite_5 for applications of clustering in related areas), is that on latent-variable models, e.g., @cite_1 @cite_13 @cite_15 @cite_10 , of which the classic aspect model is one instantiation. Such work takes a strictly probabilistic approach to the problems we have discussed with standard language modeling, as opposed to our algorithmic viewpoint. Also, a focus in the latent-variable work has been on sophisticated cluster induction, whereas we find that a very simple clustering scheme works rather well in practice. Interestingly, Hofmann @cite_13 linearly interpolated his probabilistic model's score, which is based on (soft) clusters, with the usual cosine metric; this is quite close in spirit to what our algorithm does.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_1", "@cite_3", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2103986443", "2042980227", "1965667542", "2060314721" ], "abstract": [ "Document clustering is useful in many information retrieval tasks: document browsing, organization and viewing of retrieval results, generation of Yahoo-like hierarchies of documents, etc. The general goal of clustering is to group data elements such that the intra-group similarities are high and the inter-group similarities are low. We present a clustering algorithm called CBC (Clustering By Committee) that is shown to produce higher quality clusters in document clustering tasks as compared to several well known clustering algorithms. It initially discovers a set of tight clusters (high intra-group similarity), called committees, that are well scattered in the similarity space (low inter-group similarity). The union of the committees is but a subset of all elements. The algorithm proceeds by assigning elements to their most similar committee. Evaluating cluster quality has always been a difficult task. We present a new evaluation methodology that is based on the editing distance between output clusters and manually constructed classes (the answer key). This evaluation measure is more intuitive and easier to interpret than previous evaluation measures.", "Search algorithms incorporating some form of topic model have a long history in information retrieval. For example, cluster-based retrieval has been studied since the 60s and has recently produced good results in the language model framework. An approach to building topic models based on a formal generative model of documents, Latent Dirichlet Allocation (LDA), is heavily cited in the machine learning literature, but its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use LDA to improve ad-hoc retrieval. We propose an LDA-based document model within the language modeling framework, and evaluate it on several TREC collections. Gibbs sampling is employed to conduct approximate inference in LDA and the computational complexity is analyzed. We show that improvements over retrieval using cluster-based models can be obtained with reasonable efficiency.", "A novel probabilistic retrieval model is presented. It forms a basis to interpret the TF-IDF term weights as making relevance decisions. It simulates the local relevance decision-making for every location of a document, and combines all of these “local” relevance decisions as the “document-wide” relevance decision for the document. The significance of interpreting TF-IDF in this way is the potential to: (1) establish a unifying perspective about information retrieval as relevance decision-making; and (2) develop advanced TF-IDF-related term weights for future elaborate retrieval models. Our novel retrieval model is simplified to a basic ranking formula that directly corresponds to the TF-IDF term weights. In general, we show that the term-frequency factor of the ranking formula can be rendered into different term-frequency factors of existing retrieval systems. In the basic ranking formula, the remaining quantity - log p(rvt ∈ d) is interpreted as the probability of randomly picking a nonrelevant usage (denoted by r) of term t. Mathematically, we show that this quantity can be approximated by the inverse document-frequency (IDF). 
Empirically, we show that this quantity is related to IDF, using four reference TREC ad hoc retrieval data collections.", "We present a novel implementation of the recently introduced information bottleneck method for unsupervised document clustering. Given a joint empirical distribution of words and documents, p(x, y), we first cluster the words, Y, so that the obtained word clusters, Ytilde;, maximally preserve the information on the documents. The resulting joint distribution. p(X, Ytilde;), contains most of the original information about the documents, I(X; Ytilde;) a I(X; Y), but it is much less sparse and noisy. Using the same procedure we then cluster the documents, X, so that the information about the word-clusters is preserved. Thus, we first find word-clusters that capture most of the mutual information about to set of documents, and then find document clusters, that preserve the information about the word clusters. We tested this procedure over several document collections based on subsets taken from the standard 20Newsgroups corpus. The results were assessed by calculating the correlation between the document clusters and the correct labels for these documents. Finding from our experiments show that this double clustering procedure, which uses the information bottleneck method, yields significantly superior performance compared to other common document distributional clustering algorithms. Moreover, the double clustering procedure improves all the distributional clustering methods examined here." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
The protocol described here is derived from earlier work @cite_21 in which we covered the background of the LOCKSS system. That protocol used redundancy, rate limitation, effort balancing, bimodal behavior (polls must be won or lost by a landslide) and friend bias (soliciting some percentage of votes from peers on the friends list) to prevent powerful adversaries from modifying the content without detection, or discrediting the intrusion detection system with false alarms. To mitigate its vulnerability to attrition, in this work we reinforce these defenses using admission control, desynchronization, and redundancy, and restructure votes to support a block-based repair mechanism that penalizes free-riding. In this section we list work that describes the nature and types of denial of service attacks, as well as related work that applies defenses similar to ours.
{ "cite_N": [ "@cite_21" ], "mid": [ "2950945875", "2144552569", "2051343255", "2535351647" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected.", "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in \"opinion polls.\" Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected.", "This paper introduces a novel technique for access by a cognitive Secondary User (SU) to a spectrum with an incumbent Primary User (PU), which uses Type-I Hybrid ARQ. The technique allows the SU to perform selective retransmissions of previously corrupted SU data packets. The temporal redundancy introduced by the primary ARQ protocol and by the selective SU retransmission process can be exploited by the SU receiver to perform Interference Cancellation (IC) over the entire interference pattern, thus creating a \"clean\" channel for the decoding of the concurrent message. The chain decoding technique, initiated by a successful decoding operation of a SU or PU message, consists in the iterative application of IC, as previously corrupted messages become decodable. Based on this scheme, we design an optimal policy that maximizes the SU throughput under a constraint on the average long-term PU throughput degradation. We show that the optimal policy can be found by first optimizing the SU access policy using a Markov Decision Process formulation, and then applying a chain decoding protocol defined by five basic rules. Such an approach enables a compact state representation of the protocol, and its efficient numerical optimization. Finally, we show by numerical results the throughput benefit of the proposed technique.", "For achieving optimized spectrum usage, most existing opportunistic spectrum sensing and access protocols model the spectrum sensing and access problem as a partially observed Markov decision process by assuming that the information states and or the primary users' (PUs) traffic statistics are known a priori to the secondary users (SUs). While theoretically sound, the existing solutions may not be effective in practice due to two main concerns. First, the assumptions are not practical, as before the communication starts, PUs' traffic statistics may not be readily available to the SUs. Second and more serious, existing approaches are extremely vulnerable to malicious jamming attacks. 
By leveraging the same statistic information and stochastic dynamic decision-making process that the SUs would follow, a cognitive attacker with sensing capability can sense and jam the channels to be accessed by SUs, while not interfering PUs. To address these concerns, we formulate the antijamming, multichannel access problem as a nonstochastic multi-armed bandit problem. By leveraging probabilistically shared information between the sender and the receiver, our proposed protocol enables them to hop to the same set of channels with high probability while gaining resilience to jamming attacks without affecting PUs' activities. We analytically show the convergence of the learning algorithms and derive the performance bound based on regret . We further discuss the problem of tracking the best adaptive strategy and characterize the performance bound based on a new regret . Extensive simulation results show that the probabilistic spectrum sensing and access protocol can overcome the limitation of existing solutions and is highly resilient to various jamming attacks even with jammed acknowledgment (ACK) information." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Our attrition adversary draws on a wide range of work in detecting @cite_60 , measuring @cite_34 , and combating @cite_8 @cite_38 @cite_27 @cite_49 network-level DDoS attacks capable of stopping traffic to and from our peers. This work observes that current attacks are not simultaneously of high intensity, long duration, and high coverage (many peers) @cite_34 .
{ "cite_N": [ "@cite_38", "@cite_8", "@cite_60", "@cite_27", "@cite_49", "@cite_34" ], "mid": [ "2119227347", "2170810185", "2903785932", "2159160833" ], "abstract": [ "A low-rate distributed denial of service (DDoS) attack has significant ability of concealing its traffic because it is very much like normal traffic. It has the capacity to elude the current anomaly-based detection schemes. An information metric can quantify the differences of network traffic with various probability distributions. In this paper, we innovatively propose using two new information metrics such as the generalized entropy metric and the information distance metric to detect low-rate DDoS attacks by measuring the difference between legitimate traffic and attack traffic. The proposed generalized entropy metric can detect attacks several hops earlier (three hops earlier while the order α = 10 ) than the traditional Shannon metric. The proposed information distance metric outperforms (six hops earlier while the order α = 10) the popular Kullback-Leibler divergence approach as it can clearly enlarge the adjudication distance and then obtain the optimal detection sensitivity. The experimental results show that the proposed information metrics can effectively detect low-rate DDoS attacks and clearly reduce the false positive rate. Furthermore, the proposed IP traceback algorithm can find all attacks as well as attackers from their own local area networks (LANs) and discard attack traffic.", "This paper presents a new distributed approach to detecting DDoS (distributed denial of services) flooding attacks at the traffic-flow level The new defense system is suitable for efficient implementation over the core networks operated by Internet service providers (ISPs). At the early stage of a DDoS attack, some traffic fluctuations are detectable at Internet routers or at the gateways of edge networks. We develop a distributed change-point detection (DCD) architecture using change aggregation trees (CAT). The idea is to detect abrupt traffic changes across multiple network domains at the earliest time. Early detection of DDoS attacks minimizes the floe cling damages to the victim systems serviced by the provider. The system is built over attack-transit routers, which work together cooperatively. Each ISP domain has a CAT server to aggregate the flooding alerts reported by the routers. CAT domain servers collaborate among themselves to make the final decision. To resolve policy conflicts at different ISP domains, a new secure infrastructure protocol (SIP) is developed to establish mutual trust or consensus. We simulated the DCD system up to 16 network domains on the Cyber Defense Technology Experimental Research (DETER) testbed, a 220-node PC cluster for Internet emulation experiments at the University of Southern California (USC) Information Science Institute. Experimental results show that four network domains are sufficient to yield a 98 percent detection accuracy with only 1 percent false-positive alarms. Based on a 2006 Internet report on autonomous system (AS) domain distribution, we prove that this DDoS defense system can scale well to cover 84 AS domains. This security coverage is wide enough to safeguard most ISP core networks from real-life DDoS flooding attacks.", "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. 
This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9 accuracy, our method achieves 55.7 ; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6 accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6 classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10 . Code is available at this https URL.", "Launching a denial of service (DoS) attack is trivial, but detection and response is a painfully slow and often a manual process. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on header content, and novel techniques such as transient ramp-up behavior and spectral analysis. Although headers are easily forged, we show that characteristics of attack ramp-up and attack spectrum are more difficult to spoof. To evaluate our framework we monitored access links of a regional ISP detecting 80 live attacks. Header analysis identified the number of attackers in 67 attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis. We validate our results through monitoring at a second site, controlled experiments, and simulation. We use experiments and simulation to understand the underlying reasons for the characteristics observed. In addition to helping understand attack dynamics, classification mechanisms such as ours are important for the development of realistic models of DoS traffic, can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to estimate the level of DoS activity on the Internet." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Related to first-hand reputation is the use of game-theoretic analysis of peer behavior by @cite_7 to show that a reciprocative strategy in admission control policy can motivate cooperation among selfish peers.
{ "cite_N": [ "@cite_7" ], "mid": [ "1506509256", "1913214437", "2125545976", "2164146906" ], "abstract": [ "We commonly use the experience of others when taking decisions. Reputation mechanisms aggregate in a formal way the feedback collected from peers and compute the reputation of products, services, or providers. The success of reputation mechanisms is however conditioned on obtaining true feedback. Side-payments (i.e. agents get paid for submitting feedback) can make honest reporting rational (i.e. Nash equilibrium). Unfortunately, known schemes also have other Nash equilibria that imply lying. In this paper we analyze the equilibria of two incentive-compatible reputation mechanisms and investigate how undesired equilibrium points can be eliminated by using trusted reports.", "Some misbehavior detection and reputation systems in mobile ad-hoc networks rely on the dissemination of information of observed behavior, which makes them vulnerable to false accusations. This vulnerability could be removed by forbidding the dissemination of information on observed behavior in the first place, but, as we show here, this has more drawbacks than a solution that allows dissemination and copes with false accusations. We propose a method for reducing the impact of false accusations. In our approach, nodes collect first-hand information about the behavior of other nodes by direct observation. In addition, nodes maintain a rating about every other node that they care about, in the form of a continuous variable per node. From time to time nodes exchange their first-hand information with others, but, using the Bayesian approach we designed and present in this paper, only second-hand information that is not incompatible with the current rating is accepted. Ratings are slightly modified by accepted information. The reputation of a given node is the collection of ratings maintained by others about this node. By means of simulation we evaluated the robustness of our approach against several types of adversaries that spread false information, and its efficiency at detecting malicious nodes. The simulation results indicate that our system largely reduces the impact of false accusations, while still benefiting from the accelerated detection of malicious nodes provided by second-hand information. We also found that when information dissemination is not used, the time until malicious nodes are detected can be unacceptable.", "There are two perspectives on the role of reputation in collaborative online projects such as Wikipedia or Yahoo! Answers. One, user reputation should be minimized in order to increase the number of contributions from a wide user base. Two, user reputation should be used as a heuristic to identify and promote high quality contributions. The current study examined how offline and online reputations of contributors affect perceived quality in MathOverflow, an online community with 3470 active users. On MathOverflow, users post high-level mathematics questions and answers. Community members also rate the quality of the questions and answers. This study is unique in being able to measure offline reputation of users. Both offline and online reputations were consistently and independently related to the perceived quality of authors' submissions, and there was only a moderate correlation between established offline and newly developed online reputation.", "Reputation systems rate the contributions to participatory sensing campaigns from each user by associating a reputation score. 
The reputation scores are used to weed out incorrect sensor readings. However, an adversary can deanonmyize the users even when they use pseudonyms by linking the reputation scores associated with multiple contributions. Since the contributed readings are usually annotated with spatiotemporal information, this poses a serious breach of privacy for the users. In this paper, we address this privacy threat by proposing a framework called IncogniSense. Our system utilizes periodic pseudonyms generated using blind signature and relies on reputation transfer between these pseudonyms. The reputation transfer process has an inherent trade-off between anonymity protection and loss in reputation. We investigate by means of extensive simulations several reputation cloaking schemes that address this tradeoff in different ways. Our system is robust against reputation corruption and a prototype implementation demonstrates that the associated overheads are minimal." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Admission control has been used to improve the usability of overloaded services. For example, @cite_1 propose admission control strategies that help protect long-running Web service sessions (i.e., related sequences of requests) from abrupt termination. Preserving the responsiveness of Web services in the face of demand spikes is critical, whereas LOCKSS peers need only manage their resources to make progress at the necessary rate in the long term. They can treat demand spikes as hostile behavior. In a P2P context, @cite_14 use admission control (and rate limiting) to mitigate the effects of a query flood attack against superpeers in unstructured file-sharing peer-to-peer networks such as Gnutella.
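The query-flood mitigation can be sketched as a per-neighbour rate limiter at a superpeer; the following minimal example (illustrative names and budgets, not the cited mechanism's actual parameters) drops queries from any neighbour that exceeds a fixed budget per time window:

    import time
    from collections import defaultdict, deque

    class QueryRateLimiter:
        """Toy per-neighbour limiter a superpeer might apply to incoming queries."""

        def __init__(self, max_queries=20, window_s=10.0):
            self.max_queries = max_queries          # illustrative budget per window
            self.window_s = window_s
            self.history = defaultdict(deque)       # neighbour_id -> accepted-query timestamps

        def accept(self, neighbour_id, now=None):
            now = time.time() if now is None else now
            q = self.history[neighbour_id]
            while q and now - q[0] > self.window_s:
                q.popleft()                         # forget queries outside the window
            if len(q) < self.max_queries:
                q.append(now)
                return True                         # forward or answer the query
            return False                            # drop it: a flood is suspected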
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2160436229", "2148414233", "2282269972", "2062706778" ], "abstract": [ "We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads.", "An admission control algorithm must coordinate between flows to provide guarantees about how the medium is shared. In wired networks, nodes can monitor the medium to see how much bandwidth is being used. However, in ad hoc networks, communication from one node may consume the bandwidth of neighboring nodes. Therefore, the bandwidth consumption of flows and the available resources to a node are not local concepts, but related to the neighboring nodes in carrier-sensing range. Current solutions do not address how to perform admission control in such an environment so that the admitted flows in the network do not exceed network capacity. In this paper, we present a scalable and efficient admission control framework - contention-aware admission control protocol (CACP) - to support QoS in ad hoc networks. We present several options for the design of CACP and compare the performance of these options using both mathematical analysis and simulation results. We also demonstrate the effectiveness of CACP compared to existing approaches through extensive simulations.", "There is a growing adoption of cloud computing services, attracting users with different requirements and budgets to run their applications in cloud infrastructures. In order to match users' needs, cloud providers can offer multiple service classes with different pricing and Service Level Objective (SLO) guarantees. Admission control mechanisms can help providers to meet target SLOs by limiting the demand at peak periods. This paper proposes a prediction-based admission control model for IaaS clouds with multiple service classes, aiming to maximize request admission rates while fulfilling availability SLOs defined for each class. We evaluate our approach with trace-driven simulations fed with data from production systems. Our results show that admission control can reduce SLO violations significantly, specially in underprovisioned scenarios. 
Moreover, our predictive heuristics are less sensitive to different capacity planning and SLO decisions, as they fulfill availability SLOs for more than 91 of requests even in the worst case scenario, for which only 56 of SLOs are fulfilled by a simpler greedy heuristic and as little as 0.2 when admission control is not used.", "Networks that support multiple services through \"link-sharing\" must address the fundamental conflicting requirement between isolation among service classes to satisfy each class' quality of service requirements, and statistical sharing of resources for efficient network utilization. While a number of service disciplines have been devised which provide mechanisms to both isolate flows and fairly share excess capacity, admission control algorithms are needed which exploit the effects of inter-class resource sharing. In this paper, we develop a framework of using statistical service envelopes to study inter-class statistical resource sharing. We show how this service envelope enables a class to over-book resources beyond its deterministically guaranteed capacity by statistically characterizing the excess service available due to fluctuating demands of other service classes. We apply our techniques to several multi-class schedulers, including generalized processor sharing, and design new admission control algorithms for multi-class link-sharing environments. We quantify the utilization gains of our approach with a set of experiments using long traces of compressed video." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Golle and Mironov @cite_2 provide compliance enforcement in the context of distributed computation using a receipt technique similar to ours. Random auditing using challenges and hashing has been proposed @cite_42 @cite_37 as a means of enforcing trading requirements in some distributed storage systems.
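The challenge-hash idea admits a compact sketch, assuming the verifier keeps its own copy of the audited file; the function names and chunk sizes below are illustrative rather than the cited systems' APIs:

    import hashlib
    import os

    def make_challenge(file_len, span=4096):
        """Verifier picks a random span of the file and a fresh nonce."""
        offset = int.from_bytes(os.urandom(4), "big") % max(1, file_len - span)
        return offset, span, os.urandom(16)

    def prove_storage(path, offset, span, nonce):
        """Prover hashes the nonce together with the requested portion of the file."""
        with open(path, "rb") as f:
            f.seek(offset)
            chunk = f.read(span)
        return hashlib.sha256(nonce + chunk).hexdigest()

    def check_proof(own_copy_path, offset, span, nonce, response):
        """Verifier, holding its own copy, recomputes the digest and compares."""
        return response == prove_storage(own_copy_path, offset, span, nonce)

Because the nonce is fresh for every challenge, the prover cannot answer from a precomputed digest and must still hold the data when audited.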
{ "cite_N": [ "@cite_37", "@cite_42", "@cite_2" ], "mid": [ "2105526656", "2585130803", "202035697", "1981780124" ], "abstract": [ "We introduce transactors, a fault-tolerant programming model for composing loosely-coupled distributed components running in an unreliable environment such as the internet into systems that reliably maintain globally consistent distributed state. The transactor model incorporates certain elements of traditional transaction processing, but allows these elements to be composed in different ways without the need for central coordination, thus facilitating the study of distributed fault-tolerance from a semantic point of view. We formalize our approach via the τ-calculus, an extended lambda-calculus based on the actor model, and illustrate its usage through a number of examples. The τ-calculus incorporates constructs which distributed processes can use to create globally-consistent checkpoints. We provide an operational semantics for the τ-calculus, and formalize the following safety and liveness properties: first, we show that globally-consistent checkpoints have equivalent execution traces without any node failures or application-level failures, and second, we show that it is possible to reach globally-consistent checkpoints provided that there is some bounded failure-free interval during which checkpointing can occur.", "Increasing transaction volumes have led to a resurgence of interest in distributed transaction processing. In particular, partitioning data across several servers can improve throughput by allowing servers to process transactions in parallel. But executing transactions across servers limits the scalability and performance of these systems. In this paper, we quantify the effects of distribution on concurrency control protocols in a distributed environment. We evaluate six classic and modern protocols in an in-memory distributed database evaluation framework called Deneva, providing an apples-to-apples comparison between each. Our results expose severe limitations of distributed transaction processing engines. Moreover, in our analysis, we identify several protocol-specific scalability bottlenecks. We conclude that to achieve truly scalable operation, distributed concurrency control solutions must seek a tighter coupling with either novel network hardware (in the local area) or applications (via data modeling and semantically-aware execution), or both.", "The ubiquity of the Internet has led to increased resource sharing between large numbers of users in widely-disparate administrative domains. Unfortunately, traditional identity-based solutions to the authorization problem do not allow for the dynamic establishment of trust, and thus cannot be used to facilitate interactions between previously-unacquainted parties. Furthermore, the management of identity-based systems becomes burdensome as the number of users in the system increases. To address this gap between the needs of open computing systems and existing authorization infrastructures, researchers have begun to investigate novel attribute-based access control (ABAC) systems based on techniques such as trust negotiation and other forms of distributed proving. To date, research in these areas has been largely theoretical and has produced many important foundational results. However, if these techniques are to be safely deployed in practice, the systems-level barriers hindering their adoption must be overcome. 
In this thesis, we show that safely and securely adopting decentralized ABAC approaches to authorization is not simply a matter of implementation and deployment, but requires careful consideration of both formal properties and practical issues. To this end, we investigate a progression of important questions regarding the safety analysis, deployment, implementation, and optimization of these types of systems. We first show that existing ABAC theory does not properly account for the asynchronous nature of open systems, which allows attackers to subvert these systems by forcing decisions to be made using inconsistent system states. To address this, we develop provably-secure and lightweight consistency enforcement mechanisms suitable for use in trust negotiation and distributed proof systems. We next focus on deployment issues, and investigate how user interactions can be audited in the absence of concrete user identities. We develop the technique of virtual fingerprinting, which can be used to accomplish this task without adversely affecting the scalability of audit systems. Lastly, we present TrustBuilder2, which is the first fully-configurable framework for trust negotiation. Within this framework, we examine availability problems associated with the trust negotiation process and develop a novel approach to policy compliance checking that leverages an efficient pattern-matching approach to outperform existing techniques by orders of magnitude.", "Datastores today rely on distribution and replication to achieve improved performance and fault-tolerance. But correctness of many applications depends on strong consistency properties--something that can impose substantial overheads, since it requires coordinating the behavior of multiple nodes. This paper describes a new approach to achieving strong consistency in distributed systems while minimizing communication between nodes. The key insight is to allow the state of the system to be inconsistent during execution, as long as this inconsistency is bounded and does not affect transaction correctness. In contrast to previous work, our approach uses program analysis to extract semantic information about permissible levels of inconsistency and is fully automated. We then employ a novel homeostasis protocol to allow sites to operate independently, without communicating, as long as any inconsistency is governed by appropriate treaties between the nodes. We discuss mechanisms for optimizing treaties based on workload characteristics to minimize communication, as well as a prototype implementation and experiments that demonstrate the benefits of our approach on common transactional benchmarks." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
In DHTs, waves of synchronized routing updates caused by joins or departures create instability during periods of high churn. The desynchronization defense used in Bamboo @cite_28 , which relies on lazy updates, is effective against this.
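A sketch of this maintenance style (not Bamboo's actual code) replaces reaction to individual joins and departures with a jittered periodic refresh; refresh_fn, period_s, and jitter_s are illustrative names:

    import random
    import threading

    def start_lazy_maintenance(refresh_fn, period_s=30.0, jitter_s=10.0):
        """Refresh routing state on a jittered periodic timer instead of reacting to
        every observed join or departure, so neighbours do not synchronise updates."""
        def tick():
            refresh_fn()   # e.g., probe a few neighbours and repair stale entries
            t = threading.Timer(period_s + random.uniform(-jitter_s, jitter_s), tick)
            t.daemon = True
            t.start()
        tick()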
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677", "2150288915", "1587208850", "2005497978" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations.", "During recent years, Distributed Hash Tables (DHTs) have been extensively studied through simulation and analysis. However, due to their limited deployment, it has not been possible to observe the behavior of a widely-deployed DHT in practice. Recently, the popular eMule file-sharing software incorporated a Kademlia-based DHT, called Kad, which currently has around one million simultaneous users. In this paper, we empirically study the performance of the key DHT operation, lookup, over Kad. First, we analytically derive the benefits of different ways to increase the richness of routing tables in Kademlia-based DHTs. Second, we empirically characterize two aspects of the accuracy of routing tables in Kad, namely completeness and freshness, and characterize their impact on Kad’s lookup performance. Finally, we investigate how the efficiency and consistency of lookup in Kad can be improved by performing parallel lookup and maintaining multiple replicas, respectively. Our results pinpoint the best operating point for the degree of lookup parallelism and the degree of replication for Kad.", "Distributed Hash Tables (DHTs) are very efficient distributed systems for routing, but at the same time vulnerable to disruptive nodes. Designers of such systems want them used in open networks, where an adversary can perform a sybil attack by introducing a large number of corrupt nodes in the network, considerably degrading its performance. We introduce a routing strategy that alleviates some of the effects of such an attack by making sure that lookups are performed using a diverse set of nodes. This ensures that at least some of the nodes queried are good, and hence the search makes forward progress. This strategy makes use of latent social information present in the introduction graph of the network.", "Jamming attacks are especially harmful to the reliability of wireless communication, as they can effectively disrupt communication between any node pairs. Existing jamming defenses primarily focus on repairing connectivity between adjacent nodes. In this paper, we address jamming at the network level and focus on restoring the end-to-end data delivery through multipath routing. As long as all paths do not fail concurrently, the end-to-end path availability is maintained. 
Prior work in multipath selection improves routing availability by choosing node-disjoint paths or link-disjoint paths. However, through our experiments on jamming effects using MicaZ nodes, we show that disjointness is insufficient for selecting fault-independent paths. Thus, we address multipath selection based on the knowledge of a path's availability history. Using Availability History Vectors (AHVs) of paths, we present a centralized AHV-based algorithm to select fault-independent paths, and a distributed AHV-based routing protocol built on top of a classic routing algorithm in ad hoc networks. Our extensive simulation results validate that both AHV-based algorithms are effective in overcoming the jamming impact by maximizing the end-to-end availability of the selected paths." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
The previous version of the LOCKSS protocol used rate-limiting, inherent intrusion detection through bimodal system behavior, and churning of friends into the reference list to prevent poll samples from being influenced by nominated peers. These techniques are effective in defending against adversaries attempting to modify content without being detected or trying to trigger intrusion detection alarms to discredit the system @cite_21 . That version of the protocol, however, did not tolerate attrition attacks well: an attrition adversary with about 50 nodes of computational power was able to bring a system of 1000 peers to a crawl. Further leveraging the rate-limitation defense to provide admission control, compliance enforcement, and desynchronization of poll invitations raises the computational power an adversary must expend to equal that of the defenders.
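The rate-limitation defense can be illustrated with a simple token bucket that bounds how much poll work a peer accepts per unit time, regardless of who asks; the rate and burst values below are placeholders, not the protocol's tuned parameters:

    import time

    class InvitationBudget:
        """Toy token bucket bounding how much poll work a peer accepts per unit time."""

        def __init__(self, rate_per_s=0.01, burst=3):
            self.rate = rate_per_s    # illustrative refill rate for the preservation workload
            self.capacity = burst
            self.tokens = burst
            self.last = time.time()

        def try_accept(self, cost=1.0):
            now = time.time()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True           # take part in the poll
            return False              # refuse: over budget, regardless of who asks

Because the budget is spent at the same bounded rate whether requests come from loyal peers or attackers, flooding a peer with invitations cannot make it do more work than it would have done anyway.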
{ "cite_N": [ "@cite_21" ], "mid": [ "2950945875", "2144552569", "2535351647", "2152926062" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected.", "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in \"opinion polls.\" Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected.", "For achieving optimized spectrum usage, most existing opportunistic spectrum sensing and access protocols model the spectrum sensing and access problem as a partially observed Markov decision process by assuming that the information states and or the primary users' (PUs) traffic statistics are known a priori to the secondary users (SUs). While theoretically sound, the existing solutions may not be effective in practice due to two main concerns. First, the assumptions are not practical, as before the communication starts, PUs' traffic statistics may not be readily available to the SUs. Second and more serious, existing approaches are extremely vulnerable to malicious jamming attacks. By leveraging the same statistic information and stochastic dynamic decision-making process that the SUs would follow, a cognitive attacker with sensing capability can sense and jam the channels to be accessed by SUs, while not interfering PUs. To address these concerns, we formulate the antijamming, multichannel access problem as a nonstochastic multi-armed bandit problem. By leveraging probabilistically shared information between the sender and the receiver, our proposed protocol enables them to hop to the same set of channels with high probability while gaining resilience to jamming attacks without affecting PUs' activities. We analytically show the convergence of the learning algorithms and derive the performance bound based on regret . We further discuss the problem of tracking the best adaptive strategy and characterize the performance bound based on a new regret . Extensive simulation results show that the probabilistic spectrum sensing and access protocol can overcome the limitation of existing solutions and is highly resilient to various jamming attacks even with jammed acknowledgment (ACK) information.", "We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. 
A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011)." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Rate limits on peers joining a DHT have been suggested @cite_47 @cite_37 as a defense against attempts to control parts of the hash space, for example to control the placement of certain data objects or to misroute requests. Limiting both joins and stores to empirically determined safe rates will also be needed to thwart the attrition adversary. At least for file sharing, studies @cite_24 have suggested that users' behavior may not be sensitive to latency, so the increased storage latency that rate limits create is probably unimportant. Williamson's virus-throttling work applies a similar rate-limiting idea to slow the spread of malicious code.
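A minimal sketch of such limits (with hypothetical budgets standing in for the empirically determined safe rates) caps each operation type per epoch and defers the excess:

    import time
    from collections import defaultdict

    class SafeRateGate:
        """Caps DHT joins and stores at fixed per-epoch budgets."""

        def __init__(self, budgets=None, epoch_s=3600.0):
            # Hypothetical budgets; in practice they would come from measurements
            # of benign workloads ("empirically determined safe rates").
            self.budgets = budgets or {"join": 50, "store": 500}
            self.epoch_s = epoch_s
            self.counts = defaultdict(int)
            self.epoch_start = time.time()

        def allow(self, op):
            now = time.time()
            if now - self.epoch_start >= self.epoch_s:
                self.counts.clear()
                self.epoch_start = now
            if self.counts[op] < self.budgets.get(op, 0):
                self.counts[op] += 1
                return True
            return False    # defer the join or store to a later epoch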
{ "cite_N": [ "@cite_24", "@cite_47", "@cite_37" ], "mid": [ "2049794981", "2049130980", "2143339817", "2150288915" ], "abstract": [ "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems.In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems.For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord.Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI", "Large-scale distributed systems are hard to deploy, and distributed hash tables (DHTs) are no exception. To lower the barriers facing DHT-based applications, we have created a public DHT service called OpenDHT. Designing a DHT that can be widely shared, both among mutually untrusting clients and among a variety of applications, poses two distinct challenges. 
First, there must be adequate control over storage allocation so that greedy or malicious clients do not use more than their fair share. Second, the interface to the DHT should make it easy to write simple clients, yet be sufficiently general to meet a broad spectrum of application requirements. In this paper we describe our solutions to these design challenges. We also report our early deployment experience with OpenDHT and describe the variety of applications already using the system.", "During recent years, Distributed Hash Tables (DHTs) have been extensively studied through simulation and analysis. However, due to their limited deployment, it has not been possible to observe the behavior of a widely-deployed DHT in practice. Recently, the popular eMule file-sharing software incorporated a Kademlia-based DHT, called Kad, which currently has around one million simultaneous users. In this paper, we empirically study the performance of the key DHT operation, lookup, over Kad. First, we analytically derive the benefits of different ways to increase the richness of routing tables in Kademlia-based DHTs. Second, we empirically characterize two aspects of the accuracy of routing tables in Kad, namely completeness and freshness, and characterize their impact on Kad’s lookup performance. Finally, we investigate how the efficiency and consistency of lookup in Kad can be improved by performing parallel lookup and maintaining multiple replicas, respectively. Our results pinpoint the best operating point for the degree of lookup parallelism and the degree of replication for Kad." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Admission control appears frequently as a defense against overloading, notably in the context of Web services: @cite_1 propose admission control strategies that help protect long-running sessions (i.e., related sequences of requests) from abrupt termination. However, several of the pertinent assumptions that hold true in a Web environment are inapplicable to LOCKSS: request rejection costs much less than an accepted request, and explicit rejection rarely stems the tide of further requests when a denial of service attack is under way. @cite_14 use admission control as well as rate limiting to mitigate the effects of a query flood attack against superpeers in unstructured file-sharing peer-to-peer networks such as Gnutella.
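A minimal sketch of session-based admission control, with an illustrative load threshold, serves every request of an already admitted session but admits new sessions only while the measured load stays below the threshold:

    class SessionAdmission:
        """Toy session-based admission control for long-running sessions."""

        def __init__(self, max_load=0.8):
            self.max_load = max_load     # illustrative utilisation threshold
            self.active = set()

        def handle(self, session_id, current_load):
            if session_id in self.active:
                return "serve"           # never abort a session in progress
            if current_load < self.max_load:
                self.active.add(session_id)
                return "serve"
            return "reject"              # shed only new sessions under overload

        def finish(self, session_id):
            self.active.discard(session_id)

The design choice is to pay the overload penalty entirely on new arrivals, so that sessions already under way complete rather than being terminated midstream.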
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2160436229", "2038154299", "1586012628", "2139386764" ], "abstract": [ "We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads.", "This paper presents a novel approach for stream-based admission control and job scheduling for video transcoding called SBACS (Stream-Based Admission Control and Scheduling). SBACS uses queue waiting time of transcoding servers to make admission control decisions for incoming video streams. It implements stream-based admission control with per stream admission. To ensure efficient utilization of the transcoding servers, video streams are segmented at the Group of Pictures level. In addition to the traditional rejection policy, SBACS also provides a stream deferment policy, which exploits cloud elasticity to allow temporary deferment of the incoming video streams. In other words, the admission controller can decide to admit, defer, or reject an incoming stream and hence reduce rejection rate. In order to prevent transcoding jitters in the admitted streams, we introduce a job scheduling mechanism, which drops a small proportion of video frames from a video segment to ensure continued delivery of video contents to the user. The approach is demonstrated in a discrete-event simulation with a series of experiments involving different load patterns and stream arrival rates.", "Calls that make large, persistent demands for network resources will be denied consistent service, unless the network employs adequate control mechanisms. Calls of this type include video conferences. Although overprovisioning network capacity would increase the likelihood of accepting these calls, it is a very expensive option to apply uniformly in a large network, especially as the calls require high bandwidth and low blocking probabilities. Such large calls typically require coordination of geographically distributed facilities and people at the end systems. So it is natural to book the network requirements ahead of their actual use. We present a new, effective admission control algorithm for booking ahead network services. 
The admission control is based on a novel application of effective bandwidth theory to the time domain. Systematic and comprehensive simulation experiments provide an understanding of how booking ahead affects call blocking and network utilization, considering call duration, number of links, bandwidth, routing, and the mix of book ahead versus immediate arrival traffic. Allowing some calls to book ahead radically reduces their chance of service denial, while allowing flexible and efficient sharing of network resources with normal calls that do not book ahead.", "This paper builds upon the scalable admission control schemes for CDMA networks developed in F. (2003, December 2004). These schemes are based on an exact representation of the geometry of both the downlink and the uplink channels and ensure that the associated power allocation problems have solutions under constraints on the maximal power of each station user. These schemes are decentralized in that they can be implemented in such a way that each base station only has to consider the load brought by its own users to decide on admission. By load we mean here some function of the configuration of the users and of their bit rates that is described in the paper. When implemented in each base station, such schemes ensure the global feasibility of the power allocation even in a very large (infinite number of cells) network. The estimation of the capacity of large CDMA networks controlled by such schemes was made in these references. In certain cases, for example for a Poisson pattern of mobiles in an hexagonal network of base stations, this approach gives explicit formulas for the infeasibility probability, defined as the fraction of cells where the population of users cannot be entirely admitted by the base station. In the present paper we show that the notion of infeasibility probability is closely related to the notion of blocking probability, defined as the fraction of users that are rejected by the admission control policy in the long run, a notion of central practical importance within this setting. The relation between these two notions is not bound to our particular admission control schemes, but is of more general nature, and in a simplified scenario it can be identified with the well-known Erlang loss formula. We prove this relation using a general spatial birth-and-death process, where customer locations are represented by a spatial point process that evolves over time as users arrive or depart. This allows our model to include the exact representation of the geometry of inter-cell and intra-cell interferences, which play an essential role in the load indicators used in these cellular network admission control schemes." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Some researchers have proposed storing useless content in exchange for having one's own content stored, as a way to enforce symmetric storage relationships. Compliance enforcement is achieved by asking the peer storing the file of interest to hash some portion of the file as proof that it is still storing it @cite_42 @cite_37 .
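One possible arrangement of this idea (not necessarily the cited systems' exact protocol; names are illustrative) keeps a ledger of the filler blocks a partner agreed to hold in return for having its data stored, and spot-checks them with the same hash-a-random-slice challenge sketched earlier:

    import hashlib
    import os
    import random

    class ClaimLedger:
        """Toy bookkeeping for symmetric storage via filler 'claims'."""

        def __init__(self):
            self.claims = {}   # partner_id -> filler bytes the partner promised to keep

        def issue_claim(self, partner_id, stored_block_len):
            claim = os.urandom(stored_block_len)     # useless content of equal size
            self.claims[partner_id] = claim
            return claim                             # sent to the partner to store

        def make_spot_check(self, partner_id, span=64):
            claim = self.claims[partner_id]
            start = random.randrange(0, max(1, len(claim) - span))
            expected = hashlib.sha256(claim[start:start + span]).hexdigest()
            return (start, span), expected           # send the range, keep the digest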
{ "cite_N": [ "@cite_37", "@cite_42" ], "mid": [ "2148042433", "2019586918", "1994184009", "1517978270" ], "abstract": [ "Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems.Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return---a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure.", "We propose a privacy protection framework for large-scale content-based information retrieval. It offers two layers of protection. First, robust hash values are used as queries to prevent revealing original content or features. Second, the client can choose to omit certain bits in a hash value to further increase the ambiguity for the server. Due to the reduced information, it is computationally difficult for the server to know the client’s interest. The server has to return the hash values of all possible candidates to the client. The client performs a search within the candidate list to find the best match. Since only hash values are exchanged between the client and the server, the privacy of both parties is protected. We introduce the concept oftunable privacy, where the privacy protection level can be adjusted according to a policy. It is realized through hash-based piecewise inverted indexing. The idea is to divide a feature vector into pieces and index each piece with a subhash value. Each subhash value is associated with an inverted index list. The framework has been extensively tested using a large image database. We have evaluated both retrieval performance and privacy-preserving performance for a particular content identification application. Two different constructions of robust hash algorithms are used. One is based on random projections; the other is based on the discrete wavelet transform. Both algorithms exhibit satisfactory performance in comparison with state-of-the-art retrieval schemes. The results show that the privacy enhancement slightly improves the retrieval performance. We consider the majority voting attack for estimating the query category and identification. Experiment results show that this attack is a threat when there are near-duplicates, but the success rate decreases with the number of omitted bits and the number of distinct items.", "In a Peer-to-Peer (P2P) file-sharing system, a node finds and retrieves its desired file. If multiple nodes cache the same file to provide others, we can achieve a dependable file-sharing system with low latency and high file availability. 
However, a node has to spend costs, e.g., processing load or storage capacity, on caching a file. Consequently, a node may selfishly behave and hesitate to cache a file. In such a case, unpopular files are likely to disappear from the system. In this paper, we aim to reveal whether effective caching in the whole system emerges from autonomous and selfish node behavior. We discuss relationship between selfish node behavior and system dynamics by using evolutionary game theory. Through theoretic analysis, we show that a file-sharing system can be robust to file disappearance depending on a cost and demand model for caching even if nodes behave selfishly. Furthermore, we also conduct several simulation-based analysis in terms of network structures, evolving network, load balancing, and system stability. As a result, we demonstrate that a file-sharing system with good properties, i.e., robustness to file disappearance, low search latency, well load-balancing, and high stability, can be achieved independent of network structures and dynamics.", "In this paper we propose a new Peer-to-Peer architecture for a censorship resistant system with user, server and active-server document anonymity as well as efficient document retrieval. The retrieval service is layered on top of an existing Peer-to-Peer infrastructure, which should facilitate its implementation. The key idea is to separate the role of document storers from the machines visible to the users, which makes each individual part of the system less prone to attacks, and therefore to censorship." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Waves of synchronized routing updates triggered by joins or departures cause instability during periods of high churn @cite_28 . Breaking the synchrony through lazy updates (e.g., in Bamboo @cite_28 ) can absorb the brunt of a churn attack.
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677", "2519194043", "2005497978", "2015774390" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations.", "We study complexity and algorithms for network updates in the setting of Software Defined Networks. Our focus lies on consistent updates for the case of updating forwarding rules in a loop free manner and the migration of flows without congestion. In both cases, we study how the power of two affects the respective problem setting. For loop freedom, we show that scheduling consistent updates for two destinations is NP-hard for a sublinear number of rounds. We also consider the dynamic case, and show that this problem is NP-hard as well via a reduction from Feedback Arc Set. While the power of two increases the complexity for loop freedom, the converse is true when allowing to split flows twice. For the NP-hard problem of consistently migrating unsplittable flows to new routes while respecting waypointing and service chains, we prove that two-splittability allows the problem to be tractable again.", "Jamming attacks are especially harmful to the reliability of wireless communication, as they can effectively disrupt communication between any node pairs. Existing jamming defenses primarily focus on repairing connectivity between adjacent nodes. In this paper, we address jamming at the network level and focus on restoring the end-to-end data delivery through multipath routing. As long as all paths do not fail concurrently, the end-to-end path availability is maintained. Prior work in multipath selection improves routing availability by choosing node-disjoint paths or link-disjoint paths. However, through our experiments on jamming effects using MicaZ nodes, we show that disjointness is insufficient for selecting fault-independent paths. Thus, we address multipath selection based on the knowledge of a path's availability history. Using Availability History Vectors (AHVs) of paths, we present a centralized AHV-based algorithm to select fault-independent paths, and a distributed AHV-based routing protocol built on top of a classic routing algorithm in ad hoc networks. 
Our extensive simulation results validate that both AHV-based algorithms are effective in overcoming the jamming impact by maximizing the end-to-end availability of the selected paths.", "In a Fork-Join (FJ) queueing system an upstream fork station splits incoming jobs into N tasks to be further processed by N parallel servers, each with its own queue; the response time of one job is determined, at a downstream join station, by the maximum of the corresponding tasks' response times. This queueing system is useful to the modelling of multi-service systems subject to synchronization constraints, such as MapReduce clusters or multipath routing. Despite their apparent simplicity, FJ systems are hard to analyze. This paper provides the first computable stochastic bounds on the waiting and response time distributions in FJ systems. We consider four practical scenarios by combining 1a) renewal and 1b) non-renewal arrivals, and 2a) non-blocking and 2b) blocking servers. In the case of non blocking servers we prove that delays scale as O(logN), a law which is known for first moments under renewal input only. In the case of blocking servers, we prove that the same factor of log N dictates the stability region of the system. Simulation results indicate that our bounds are tight, especially at high utilizations, in all four scenarios. A remarkable insight gained from our results is that, at moderate to high utilizations, multipath routing 'makes sense' from a queueing perspective for two paths only, i.e., response times drop the most when N = 2; the technical explanation is that the resequencing (delay) price starts to quickly dominate the tempting gain due to multipath transmissions." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
As churn (the rate at which the peer population changes) increases, both the latency and the failure probability of DHT queries increase @cite_28 . An attrition attack might consist of adversary peers joining and leaving fast enough to destabilize the routing infrastructure.
{ "cite_N": [ "@cite_28" ], "mid": [ "2049130980", "2049794981", "2031684765", "1971843894" ], "abstract": [ "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI", "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems.In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems.For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord.Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. 
We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves.", "Several recent research results describe how to design Distributed Hash Tables (DHTs) that are robust to adversarial attack via Byzantine faults. Unfortunately, all of these results require a significant blowup in communication costs over standard DHTs. For example, to perform a lookup operation, all such robust DHTs of which we are aware require sending O(log^3n) messages while standard DHTs require sending only O(logn), where n is the number of nodes in the network. In this paper, we describe protocols to reduce the communication costs of all such robust DHTs. In particular, we give a protocol to reduce the number of messages sent to perform a lookup operation from O(log^3n) to O(log^2n) in expectation. Moreover, we also give a protocol for sending a large (i.e. containing @W(log^4n) bits) message securely through a robust DHT that requires, in expectation, only a constant blowup in the total number of bits sent compared with performing the same operation in a standard DHT. This is an improvement over the O(log^2n) bit blowup that is required to perform such an operation in all current robust DHTs. Both of our protocols are robust against an adaptive adversary." ] }
cs0405070
2949521043
We propose a model for the World Wide Web graph that couples the topological growth with the traffic's dynamical evolution. The model is based on a simple traffic-driven dynamics and generates weighted directed graphs exhibiting the statistical properties observed in the Web. In particular, the model yields a non-trivial time evolution of vertices and heavy-tail distributions for the topological and traffic properties. The generated graphs exhibit a complex architecture with a hierarchy of cohesiveness levels similar to those observed in the analysis of real data.
A very interesting class of models that captures the main features of WWW growth has been introduced by @cite_11 in order to provide a mechanism that does not assume knowledge of the degrees of the existing vertices. Each newly introduced vertex @math selects at random an already existing vertex @math ; for each out-neighbour @math of @math , @math connects to @math with a certain probability @math ; with probability @math it connects instead to another randomly chosen node. This model describes the growth process of the WWW as a copy mechanism in which newly arriving web-pages tend to reproduce the hyperlinks of similar web-pages, i.e., those of the first vertex to which they connect. Interestingly, this model effectively recovers a preferential attachment mechanism without explicitly introducing it.
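To make the copy mechanism concrete, here is a minimal Python sketch under the stated rules (one randomly chosen prototype vertex per newcomer, a fixed copy probability, and a small seed cycle to start from); the function and parameter names are illustrative and not taken from @cite_11.

```python
import random

def grow_copy_model(n_final, alpha, n_init=3, seed=0):
    """Grow a directed graph by the copy mechanism sketched above.

    Each new vertex i picks a random existing 'prototype' j; for every
    out-neighbour k of j, i links to k with probability alpha, and with
    probability 1 - alpha it links to a uniformly chosen existing vertex
    instead.  Returns a dict: vertex -> list of out-neighbours.
    """
    random.seed(seed)
    # Seed graph: a small directed cycle, so every prototype has out-links.
    out = {v: [(v + 1) % n_init] for v in range(n_init)}
    for i in range(n_init, n_final):
        existing = list(out)
        j = random.choice(existing)          # prototype vertex
        out[i] = []
        for k in out[j]:
            if random.random() < alpha:
                out[i].append(k)             # copy the prototype's link
            else:
                out[i].append(random.choice(existing))  # rewire at random
    return out

if __name__ == "__main__":
    g = grow_copy_model(10_000, alpha=0.8)
    indeg = {}
    for targets in g.values():
        for k in targets:
            indeg[k] = indeg.get(k, 0) + 1
    print("max in-degree:", max(indeg.values()))  # heavy-tailed in-degrees emerge
```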
{ "cite_N": [ "@cite_11" ], "mid": [ "2099674897", "2016986674", "1992403724", "2170002160" ], "abstract": [ "We present an analysis of the statistical properties and growth of the free on-line encyclopedia Wikipedia. By describing topics by vertices and hyperlinks between them as edges, we can represent this encyclopedia as a directed graph. The topological properties of this graph are in close analogy with those of the World Wide Web, despite the very different growth mechanism. In particular, we measure a scale-invariant distribution of the in and out degree and we are able to reproduce these features by means of a simple statistical model. As a major consequence, Wikipedia growth can be described by local rules such as the preferential attachment mechanism, though users, who are responsible of its evolution, can act globally on the network.", "We consider a growing network, whose growth algorithm is based on the preferential attachment typical for scale-free constructions, but where the long-range bonds are disadvantaged. Thus, the probability of getting connected to a site at distance d is proportional to @math where is a tunable parameter of the model. We show that the properties of the networks grown with @math are close to those of the genuine scale-free construction, while for @math the structure of the network is quite different. Thus, in this regime, the node degree distribution is no longer a power law, and it is well represented by a stretched exponential. On the other hand, the small-world property of the growing networks is preserved at all values of .", "We consider the popular and well-studied push model, which is used to spread information in a given network with n vertices. Initially, some vertex owns a rumour and passes it to one of its neighbours, which is chosen randomly. In each of the succeeding rounds, every vertex that knows the rumour informs a random neighbour. It has been shown on various network topologies that this algorithm succeeds in spreading the rumour within O(log n) rounds. However, many studies are quite coarse and involve huge constants that do not allow for a direct comparison between different network topologies. In this paper, we analyse the push model on several important families of graphs, and obtain tight runtime estimates. We first show that, for any almost-regular graph on n vertices with small spectral expansion, rumour spreading completes after log2n + log n+o(log n) rounds with high probability. This is the first result that exhibits a general graph class for which rumour spreading is essentially as fast as on complete graphs. Moreover, for the random graph G(n,p) with p=c log n n, where c > 1, we determine the runtime of rumour spreading to be log2n + γ (c)log n with high probability, where γ(c) = clog(c (c−1)). In particular, this shows that the assumption of almost regularity in our first result is necessary. Finally, for a hypercube on n=2d vertices, the runtime is with high probability at least (1+β) ⋅ (log2n + log n), where β > 0. This reveals that the push model on hypercubes is slower than on complete graphs, and thus shows that the assumption of small spectral expansion in our first result is also necessary. 
In addition, our results combined with the upper bound of O(log n) for the hypercube (see [11]) imply that the push model is faster on hypercubes than on a random graph G(n, clog n n), where c is sufficiently close to 1.", "In this paper we consider a fundamental problem in the area of viral marketing, called T ARGET S ET S ELECTION problem. In a a viral marketing setting, social networks are modeled by graphs with potential customers of a new product as vertices and friend relationships as edges, where each vertex @math is assigned a threshold value @math . The thresholds represent the different latent tendencies of customers (vertices) to buy the new product when their friend (neighbors) do. Consider a repetitive process on social network @math where each vertex @math is associated with two states, active and inactive, which indicate whether @math is persuaded into buying the new product. Suppose we are given a target set @math . Initially, all vertices in @math are inactive. At time step 0, we choose all vertices in @math to become active. Then, at every time step @math , all vertices that were active in time step @math remain active, and we activate any vertex @math if at least @math of its neighbors were active at time step @math . The activation process terminates when no more vertices can get activated. We are interested in the following optimization problem, called T ARGET S ET S ELECTION : Finding a target set @math of smallest possible size that activates all vertices of @math . There is an important and well-studied threshold called strict majority threshold, where for every vertex @math in @math we have @math and @math is the degree of @math in @math . In this paper, we consider the T ARGET S ET S ELECTION problem under strict majority thresholds and focus on three popular regular network structures: cycle permutation graphs, generalized Petersen graphs and torus cordalis." ] }
cs0404002
2138964611
We review existing approaches to mathematical modeling and analysis of multi-agent systems in which complex collective behavior arises out of local interactions between many simple agents. Though the behavior of an individual agent can be considered to be stochastic and unpredictable, the collective behavior of such systems can have a simple probabilistic description. We show that a class of mathematical models that describe the dynamics of collective behavior of multi-agent systems can be written down from the details of the individual agent controller. The models are valid for Markov or memoryless agents, in which each agent's future state depends only on its present state and not on any of the past states. We illustrate the approach by analyzing in detail applications from the robotics domain: collaboration and foraging in groups of robots.
With the exceptions noted below, there has been very little prior work on mathematical analysis of multi-agent systems. The closest in spirit to our paper is the work by Huberman, Hogg and coworkers on computational ecologies @cite_5 @cite_73 . These authors mathematically studied collective behavior in a system of agents, each choosing between two alternative strategies. They derived a rate equation for the average number of agents using each strategy from the underlying probability distributions. Our approach is consistent with theirs --- in fact, we can easily write down the same rate equations from the macroscopic state diagram of the system, without having to derive them from the underlying probability distributions. Computational ecologies can, therefore, be considered an application of the methodology described in this paper. Yet another application of the approach presented here is the author's work on coalition formation in electronic marketplaces @cite_71 .
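As an illustration of the kind of rate equation involved (a generic sketch, not the actual equations of @cite_5 @cite_73 ), consider agents that switch between two strategies at constant per-agent rates; the average number using strategy 1 then obeys a single linear rate equation that can be read directly off the two-state diagram. The rates and initial condition below are placeholders.

```python
import numpy as np

def two_strategy_rate_equation(n_total, r12, r21, n1_0, t_max=20.0, dt=0.01):
    """Integrate dN1/dt = r21*(N - N1) - r12*N1 with forward Euler.

    N1(t) is the average number of agents using strategy 1; r12 (r21) is the
    per-agent rate of switching from strategy 1 to 2 (2 to 1).  The rates and
    the two-strategy setup are illustrative placeholders.
    """
    steps = int(t_max / dt)
    n1 = np.empty(steps + 1)
    n1[0] = n1_0
    for t in range(steps):
        n1[t + 1] = n1[t] + dt * (r21 * (n_total - n1[t]) - r12 * n1[t])
    return n1

if __name__ == "__main__":
    traj = two_strategy_rate_equation(n_total=100, r12=0.2, r21=0.3, n1_0=0)
    # Steady state: N1* = N * r21 / (r12 + r21) = 60 for these rates.
    print("final N1 ~", round(traj[-1], 2))
```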
{ "cite_N": [ "@cite_5", "@cite_73", "@cite_71" ], "mid": [ "2145374086", "2761043131", "2084044206", "2617547828" ], "abstract": [ "In this paper, we formulate and solve a randomized optimal consensus problem for multi-agent systems with stochastically time-varying interconnection topology. The considered multi-agent system with a simple randomized iterating rule achieves an almost sure consensus meanwhile solving the optimization problem min\"z\"@?\"R\"^\"[email protected]?\"i\"=\"1^nf\"i(z), in which the optimal solution set of objective function f\"i can only be observed by agent i itself. At each time step, simply determined by a Bernoulli trial, each agent independently and randomly chooses either taking an average among its neighbor set, or projecting onto the optimal solution set of its own optimization component. Both directed and bidirectional communication graphs are studied. Connectivity conditions are proposed to guarantee an optimal consensus almost surely with proper convexity and intersection assumptions. The convergence analysis is carried out using convex analysis. We compare the randomized algorithm with the deterministic one via a numerical example. The results illustrate that a group of autonomous agents can reach an optimal opinion by each node simply making a randomized trade-off between following its neighbors or sticking to its own opinion at each time step.", "Agent-based modeling of zapping behavior of viewers, television commercial allocation, and advertisement markets by Hiroyuki Kyan and Jun-ichi Inoue.- Agent based modeling of Housing asset bubble: A simple utility function based investigation by Kausik Gangopadhyay and Kousik Guhathakurta.- Urn model-based adaptive Multi-arm clinical trials: A stochastic approximation approach by Sophie Laruelle and Gilles Pages.- Logistic modeling of a Religious Sect features by Marcel Ausloos.- Characterizing financial crisis by means of the three states random field Ising model by Mitsuaki Murota and Jun-ichi Inoue.- Themes and applications of kinetic exchange models: Redux by Asim Ghosh, Anindya S. Chakrabarti, Anjan Kumar Chandra and Anirban Chakraborti.- Kinetic exchange opinion model: solution in the single parameter map limit by Krishanu Roy Chowdhury, Asim Ghosh, Soumyajyoti Biswas and Bikas K. Chakrabarti.- An overview of the new frontiers of Economic Complexity by Matthieu Cristelli, Andrea Tacchella, Luciano Pietronero.- Jan Tinbergen's legacy for economic networks: from the gravity model to quantum statistics by Tiziano Squartini and Diego Garlaschelli.- A macroscopic order of consumer demand due to heterogenous consumer behaviors on Japanese household demand tested by the random matrix theory by Yuji Aruka,Yuichi Kichikawa and Hiroshi Iyetomi.- Uncovering the network structure of the world currency market: Cross-correlations in the fluctuations of daily exchange rates by Sitabhra Sinha and Uday Kovur.- Systemic risk in Japanese credit network by Hideaki Aoyama.- Pricing of goods with Bandwagon properties: The curse of coordination by Mirta B. Gordon, Jean-Pierre Nadal, Denis Phan and Viktoriya Semeshenko.- Evolution of Econophysics by Kishore C. Dash.- Econophysics and sociophysics: Problems and prospects by Asim Ghosh and Anindya S. Chakrabarti.- A discussion on Econophysics by Hideaki Aoyama.", "We study the average consensus problem of multi-agent systems for general network topologies with unidirectional information flow. 
We propose two linear distributed algorithms, deterministic and gossip, respectively for the cases where the inter-agent communication is synchronous and asynchronous. In both cases, the developed algorithms guarantee state averaging on arbitrary strongly connected digraphs; in particular, this graphical condition does not require that the network be balanced or symmetric, thereby extending previous results in the literature. The key novelty of our approach is to augment an additional variable for each agent, called \"surplus\", whose function is to locally record individual state updates. For convergence analysis, we employ graph-theoretic and nonnegative matrix tools, plus the eigenvalue perturbation theory playing a crucial role.", "Cooperative multi-agent systems can be naturally used to model many real world problems, such as network packet routing and the coordination of autonomous vehicles. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state." ] }
cs0404002
2138964611
We review existing approaches to mathematical modeling and analysis of multi-agent systems in which complex collective behavior arises out of local interactions between many simple agents. Though the behavior of an individual agent can be considered to be stochastic and unpredictable, the collective behavior of such systems can have a simple probabilistic description. We show that a class of mathematical models that describe the dynamics of collective behavior of multi-agent systems can be written down from the details of the individual agent controller. The models are valid for Markov or memoryless agents, in which each agent's future state depends only on its present state and not on any of the past states. We illustrate the approach by analyzing in detail applications from the robotics domain: collaboration and foraging in groups of robots.
In the robotics domain, Sugawara and coworkers @cite_62 @cite_26 developed simple state-based analytical models of cooperative foraging in groups of communicating and non-communicating robots and studied them quantitatively. Although these models are similar to ours, they are overly simplified and fail to take crucial interactions among robots into account. In separate papers, we have analyzed collaborative @cite_46 and foraging @cite_29 behavior in groups of robots. The focus of that work is on realistic models and the comparison of the models' predictions to experimental and simulation results. For example, in @cite_46 , we considered the same model of collaborative stick-pulling presented here, but studied it under the same conditions as the experiments. In @cite_29 , we found that we had to include avoiding-while-searching and wall-avoiding states in the model in order to obtain good quantitative agreement between the model and the results of sensor-based simulations. The focus of this paper, on the other hand, is to show that there is a principled way to construct a macroscopic model of the collective dynamics of a MAS and, more importantly, a practical "recipe" for creating such a model from the details of the microscopic controller.
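The "recipe" can be illustrated generically: list the states of the macroscopic state diagram, attach a rate to every arrow, and write one coupled rate equation per state (inflow minus outflow). The sketch below uses made-up states and constant rates; the actual stick-pulling and foraging models additionally contain interaction-dependent (nonlinear) terms, which is precisely what constant rates cannot capture.

```python
import numpy as np

# Macroscopic state diagram: (from_state, to_state) -> transition rate.
# Placeholder states loosely inspired by a foraging controller.
RATES = {
    ("searching", "homing"):    0.05,   # robot finds a puck and heads home
    ("searching", "avoiding"):  0.10,   # robot starts avoiding an obstacle
    ("avoiding",  "searching"): 0.50,   # avoidance manoeuvre finished
    ("homing",    "searching"): 0.20,   # puck delivered, search resumes
}
STATES = sorted({s for pair in RATES for s in pair})

def rate_equations(populations):
    """dN_s/dt for every state s: inflow minus outflow along the diagram's arrows."""
    idx = {s: i for i, s in enumerate(STATES)}
    dn = np.zeros(len(STATES))
    for (src, dst), rate in RATES.items():
        flow = rate * populations[idx[src]]
        dn[idx[src]] -= flow
        dn[idx[dst]] += flow
    return dn

def integrate(n_robots=20, t_max=100.0, dt=0.05):
    """Forward-Euler integration; all robots start in the 'searching' state."""
    n = np.zeros(len(STATES))
    n[STATES.index("searching")] = n_robots
    for _ in range(int(t_max / dt)):
        n = n + dt * rate_equations(n)
    return dict(zip(STATES, n.round(2)))

if __name__ == "__main__":
    print(integrate())   # steady-state occupancy of each macroscopic state
```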
{ "cite_N": [ "@cite_29", "@cite_46", "@cite_62", "@cite_26" ], "mid": [ "1578969637", "2137153348", "2140542425", "2139610651" ], "abstract": [ "In multi-robot applications, such as foraging or collection tasks, interference, which results from competition for space between spatially extended robots, can significantly affect the performance of the group. We present a mathematical model of foraging in a homogeneous multi-robot system, with the goal of understanding quantitatively the effects of interference. We examine two foraging scenarios: a simplified collection task where the robots only collect objects, and a foraging task, where they find objects and deliver them to some pre-specified “home” location. In the first case we find that the overall group performance improves as the system size growss however, interference causes this improvement to be sublinear, and as a result, each robot's individual performance decreases as the group size increases. We also examine the full foraging task where robots collect objects and deliver them home. We find an optimal group size that maximizes group performance. For larger group sizes, the group performance declines. However, again due to the effects of interference, the individual robot's performance is a monotonically decreasing function of the group size. We validate both models by comparing their predictions to results of sensor-based simulations in a multi-robot system and find good agreement between theory and simulations data.", "In this article, we present a macroscopic analytical model of collaboration in a group of reactive robots. The model consists of a series of coupled differential equations that describe the dynamics of group behavior. After presenting the general model, we analyze in detail a case study of collaboration, the stick-pulling experiment, studied experimentally and in simulation by (Autonomous Robots, 11, 149-171). The robots' task is to pull sticks out of their holes, and it can be successfully achieved only through the collaboration of two robots. There is no explicit communication or coordination between the robots. Unlike microscopic simulations (sensor-based or using a probabilistic numerical model), in which computational time scales with the robot group size, the macroscopic model is computationally efficient, because its solutions are independent of robot group size. Analysis reproduces several qualitative conclusions of : namely, the different dynamical regimes for different values of the ratio of robots to sticks, the existence of optimal control parameters that maximize system performance as a function of group size, and the transition from superlinear to sublinear performance as the number of robots is increased.", "We consider the problem of navigating a mobile robot through dense human crowds. We begin by exploring a fundamental impediment to classical motion planning algorithms called the “freezing robot problem”: once the environment surpasses a certain level of dynamic complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place or performs unnecessary maneuvers to avoid collisions. We argue that this problem can be avoided if the robot anticipates human cooperation, and accordingly we develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a “multiple goal” extension that models the goal-driven nature of human decision making. 
We validate this model with an empirical study of robot navigation in dense human crowds 488 runs, specifically testing how cooperation models effect navigation performance. The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 0.8 humans m2, while a state-of-the-art non-cooperative planner exhibits unsafe behavior more than three times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people m2. We also show that our non-cooperative planner or our reactive planner capture the salient characteristics of nearly any dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.", "This paper presents a methodology for finding optimal control parameters as well as optimal system parameters for robot swarm controllers using probabilistic, population dynamic models. With distributed task allocation as a case study, we show how optimal control parameters leading to a desired steady-state task distribution for two fully-distributed algorithms can be found even if the parameters of the system are unknown. First, a reactive algorithm in which robots change states independently from each other and which leads to a linear macroscopic model describing the dynamics of the system is considered. Second, a threshold-based algorithm where robots change states based on the number of other robots in this state and which leads to a non-linear model is investigated. Whereas analytical results can be obtained for the linear system, the optimization of the non-linear controller is performed numerically. Finally, we show using stochastic simulations that whereas the presented methodology and models work best if the swarm size is large, useful results can already be obtained for team-sizes below a hundred robots. The methodology presented can be applied to scenarios involving the control of large numbers of entities with limited computational and communication abilities as well as a tight energy budget, such as swarms of robots from the centimeter to nanometer range or sensor networks." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the component satisfies the condition; the condition, in turn, can be established by testing the component with test cases generated from it on the fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Recently, Bertolino et al. @cite_17 recognized the importance of testing a software component in its deployment environment. They developed a framework that supports functional testing of a software component with respect to the customer's specification and also provides a simple way to package with a component the developer's test suites, which can then be re-executed by the customer. Their approach, however, requires the customer to have a complete specification of the component to be incorporated into a system, which is not always possible. McCamant and Ernst @cite_19 considered the issue of predicting the safety of a dynamic component upgrade, which is part of the problem we consider. Their approach is quite different from ours, though, since they generate abstract operational expectations about the new component by observing the system's run-time behavior with the old component.
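A toy sketch of the upgrade-safety idea (not the tool of @cite_19 ): infer a crude operational abstraction for the old component from the system's workload and for the new component from its test suite, and flag the upgrade when the new component's observed behavior does not preserve the old component's observed guarantee. Here the "abstraction" is simply an output range, and the components and inputs are invented for illustration.

```python
def output_range(component, inputs):
    """Crude operational abstraction: the observed [min, max] of the outputs."""
    outputs = [component(x) for x in inputs]
    return min(outputs), max(outputs)

def upgrade_looks_safe(old_component, new_component, workload, test_suite):
    """True if the new component's observed behaviour stays within the old
    component's observed guarantee (here: 'output stays within a range').

    A deliberately minimal stand-in for comparing richer operational
    abstractions; components and inputs are illustrative only.
    """
    old_lo, old_hi = output_range(old_component, workload)
    new_lo, new_hi = output_range(new_component, test_suite)
    return old_lo <= new_lo and new_hi <= old_hi

if __name__ == "__main__":
    old = lambda x: x % 10          # always returns 0..9
    new = lambda x: x % 16          # may return 10..15: a weaker guarantee
    inputs = list(range(1000))
    print(upgrade_looks_safe(old, new, inputs, inputs))   # False -> scrutinize upgrade
```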
{ "cite_N": [ "@cite_19", "@cite_17" ], "mid": [ "2121376435", "2100161032", "131057022", "2328067583" ], "abstract": [ "We present a new, automatic technique to assess whether replacing a component of a software system by a purportedly compatible component may change the behavior of the system. The technique operates before integrating the new component into the system or running system tests, permitting quicker and cheaper identification of problems. It takes into account the system's use of the component, because a particular component upgrade may be desirable in one context but undesirable in another. No formal specifications are required, permitting detection of problems due either to errors in the component or to errors in the system. Both external and internal behaviors can be compared, enabling detection of problems that are not immediately reflected in the output.The technique generates an operational abstraction for the old component in the context of the system and generates an operational abstraction for the new component in the context of its test suite; an operational abstraction is a set of program properties that generalizes over observed run-time behavior. If automated logical comparison indicates that the new component does not make all the guarantees that the old one did, then the upgrade may affect system behavior and should not be performed without further scrutiny. In case studies, the technique identified several incompatibilities among software components.", "Component-based development is the emerging paradigm in software production, though several challenges still slow down its full taking up. In particular, the \"component trust problem\" refers to how adequate guarantees and documentation about a component' s behaviour can be transferred from the component developer to its potential users. The capability to test a component when deployed within the target application environment can help establish the compliance of a candidate component to the customer's expectations and certainly contributes to \"increase trust\". To this purpose, we propose the CDT framework for Component Deployment Testing. CDT provides the customer with both a technique to early specify a deployment test suite and an environment for running and reusing the specified tests on any component implementation. The framework can also be used to deliver the component developer's test suite and to later re-execute it. The central feature of CDT is the complete decoupling between the specification of the tests and the component implementation.", "Abstract : Testing is an expensive but essential part of any software project. Having the right methods to detect faults is a primary factor for success in the software industry. Component based systems are problematic because they are prone to unexpected interaction faults, yet these may be left undetected by traditional testing techniques. In all but the smallest of systems, it is not possible to test every component interaction. One can use a reduced test suite that guarantees to include a defined subset of interactions instead. A well studied combinatorial object, the covering array, can be used to achieve this goal. Constructing covering arrays for a specific software system is not always simple and the resulting object may not closely mirror the real test environment. Not only are new methods for building covering arrays needed, but new tools to support these are required as well. 
Our aim is to develop methods for building smaller test suites that provide stronger interaction coverage, while retaining the flexibility required in a practical test suite. We combine ideas from combinatorial design theory, computational search, statistical design of experiments and software engineering. We begin with a description of a framework for greedy algorithms that has formed the basis for several published methods and a widely used commercial tool. We compare this with a meta-heuristic search algorithm, simulated annealing. The results suggest that simulated annealing is more effective at finding smaller test suites, and in some cases improves on combinatorial methods as well. We then develop a mathematical model for variable strength interaction testing. This allows us to balance the cost and the time of testing by targeting individual subsets of components. We present construction techniques using meta-heuristic search and provide the first known bounds for objects of this type. We end by presenting some new cut-and-paste techniques that merge recursive combinatorial constructions", "Software testing is all too often simply a bug hunt rather than a wellconsidered exercise in ensuring quality. A more methodical approach than a simple cycle of system-level test-fail-patch-test will be required to deploy safe autonomous vehicles at scale. The ISO 26262 development V process sets up a framework that ties each type of testing to a corresponding design or requirement document, but presents challenges when adapted to deal with the sorts of novel testing problems that face autonomous vehicles. This paper identifies five major challenge areas in testing according to the V model for autonomous vehicles: driver out of the loop, complex requirements, non-deterministic algorithms, inductive learning algorithms, and failoperational systems. General solution approaches that seem promising across these different challenge areas include: phased deployment using successively relaxed operational scenarios, use of a monitor actuator pair architecture to separate the most complex autonomy functions from simpler safety functions, and fault injection as a way to perform more efficient edge case testing. While significant challenges remain in safety-certifying the type of algorithms that provide high-level autonomy themselves, it seems within reach to instead architect the system and its accompanying design process to be able to employ existing software safety approaches." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the component satisfies the condition; the condition, in turn, can be established by testing the component with test cases generated from it on the fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
In the formal verification area, there is a long history of research on the verification of systems with modular structure (called modular verification @cite_9 ). A key idea @cite_31 @cite_11 in modular verification is the assume-guarantee paradigm: a module should guarantee the desired behavior provided that the environment with which it interacts exhibits the assumed behavior. There have been a variety of implementations of this idea (see, e.g., @cite_0 ). However, the assume-guarantee paradigm does not immediately fit our problem setup, since it requires users to have clear assumptions about a module's environment.
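For concreteness, one standard (non-circular) formulation of the assume-guarantee rule reads as follows; the triple notation and the exact side conditions vary across @cite_31 @cite_11 @cite_0 , so this is only a representative sketch.

```latex
% A representative assume-guarantee proof rule (notation illustrative):
%   <A> M <G>  means: module M guarantees G whenever its environment satisfies A.
\[
\frac{\langle \mathit{true} \rangle\, E \,\langle A \rangle
      \qquad
      \langle A \rangle\, M \,\langle G \rangle}
     {\langle \mathit{true} \rangle\, M \parallel E \,\langle G \rangle}
\]
```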
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_31", "@cite_11" ], "mid": [ "2471765219", "1488659932", "2956034981", "169399732" ], "abstract": [ "In modular verification the specification of a module consists of two parts. One hart describes the guaranteed behavior of the module. The other part describes the assumed behavior of the system iii which the module is interacting. This is called the assume-guarantee paradigm. In this paper we consider assume-guarantee specifications in which the guarantee is specified by branching temporal formulas. We distinguish between two approaches. In the first approach, the assumption is specified by branching temporal formulas. In the second approach, the assumption is specified by linear temporal logic. We consider guarantees in ∀CTL and ∀CTL * , the universal fragments of CTL and CTL * , and assumptions in LTL. ∀CTL, and ∀CTL * . We describe a reduction of modular model checking to standard model checking. Using the reduction, we show that modular model checking is PSPACE-complete for ∀CTL and is EXPSPACE-complete for ∀CTL * . We then show that the case of LTL assumption is a special case of the case of ∀CTL * assumption, but that the EXPSPACE-hardness result apply already to assumptions in LTL.", "Assume-guarantee reasoning has long been advertised as an important method for decomposing proof obligations in system verification. Refinement mappings (homomorphisms) have long been advertised as an important method for solving the language-inclusion problem in practice. When confronted with large verification problems, we therefore attempted to make use of both techniques. We soon found that rather than offering instant solutions, the success of assume-guarantee reasoning depends critically on the construction of suitable abstraction modules, and the success of refinement checking depends critically on the construction of suitable witness modules. Moreover, as abstractions need to be witnessed, and witnesses abstracted, the process must be iterated. We present here the main lessons we learned from our experiments, in limn of a systematic and structured discipline for the compositional verification of reactive modules. An infrastructure to support this discipline, and automate parts of the verification, has been implemented in the tool Mocha.", "Formal verification of a control system can be performed by checking if a model of its dynamical behavior conforms to temporal requirements. Unfortunately, adoption of formal verification in an industrial setting is a formidable challenge as design requirements are often vague, nonmodular, evolving, or sometimes simply unknown. We propose a framework to mine requirements from a closed-loop model of an industrial-scale control system, such as one specified in Simulink. The input to our algorithm is a requirement template expressed in parametric signal temporal logic: a logical formula in which concrete signal or time values are replaced with parameters. Given a set of simulation traces of the model, our method infers values for the template parameters to obtain the strongest candidate requirement satisfied by the traces. It then tries to falsify the candidate requirement using a falsification tool. If a counterexample is found, it is added to the existing set of traces and these steps are repeated; otherwise, it terminates with the synthesized requirement. 
Requirement mining has several usage scenarios: mined requirements can be used to formally validate future modifications of the model, they can be used to gain better understanding of legacy models or code, and can also help enhancing the process of bug finding through simulations. We demonstrate the scalability and utility of our technique on three complex case studies in the domain of automotive powertrain systems: a simple automatic transmission controller, an air-fuel controller with a mean-value model of the engine dynamics, and an industrial-size prototype airpath controller for a diesel engine. We include results on a bug found in the prototype controller by our method.", "We propose a typing theory, based on multiparty session types, for modular verification of real-time choreographic interactions. To model real-time implementations, we introduce a simple calculus with delays and a decidable static proof system. The proof system ensures type safety and time-error freedom, namely processes respect the prescribed timing and causalities between interactions. A decidable condition on timed global types guarantees time-progress for validated processes with delays, and gives a sound and complete characterisation of a new class of CTAs with general topologies that enjoys progress and liveness." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the component satisfies the condition; the condition, in turn, can be established by testing the component with test cases generated from it on the fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
In the past decade, there has also been research on combining model-checking and testing techniques for system verification; this work falls into a broader class of techniques called specification-based testing. Most of it, however, only exploits a model checker's ability to generate counter-examples from a system's specification in order to produce test cases against an implementation @cite_32 @cite_18 @cite_35 @cite_25 @cite_30 @cite_39 @cite_4 .
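A minimal sketch of that counter-example-to-test-case idea, independent of any particular model checker: explore a toy transition-system model, and when a state violating the (negated) requirement is reachable, the path to it is exactly an input sequence that can be replayed against the implementation as a test case. The model and requirement below are invented for illustration.

```python
from collections import deque

# Toy behavioural model: state -> {input_symbol: next_state}.
MODEL = {
    "idle":  {"coin": "paid", "push": "idle"},
    "paid":  {"push": "serve", "coin": "paid"},
    "serve": {"take": "idle"},
}

def find_counterexample(model, start, violates):
    """Breadth-first reachability: return the input sequence that drives the
    model into a state violating the requirement, or None if none exists."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, trace = queue.popleft()
        if violates(state):
            return trace
        for symbol, nxt in model.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [symbol]))
    return None

if __name__ == "__main__":
    # "Trap" requirement: pretend the machine must never reach 'serve'.  The
    # counter-example is precisely an input sequence exercising that behaviour,
    # which can then be replayed against the implementation as a test case.
    print(find_counterexample(MODEL, "idle", lambda s: s == "serve"))
    # -> ['coin', 'push']
```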
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_32", "@cite_39", "@cite_25" ], "mid": [ "1498432697", "2118645685", "2171520043", "2052495090" ], "abstract": [ "Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.", "Model verification examines the correctness of a model implementation with respect to a model specification. While being described from model specification, implementation prepares to execute or evaluate a simulation model by a computer program. Viewing model verification as a program test this paper proposes a method for generation of test sequences that completely covers all possible behavior in specification at an I O level. Timed State Reachability Graph (TSRG) is proposed as a means of model specification. Graph theoretical analysis of TSRG has generated a test set of timed I O event sequences, which guarantees 100 test coverage of an implementation under test.", "Model Based Development (MBD) using Mathworks tools like Simulink, Stateflow etc. is being pursued in Honeywell for the development of safety critical avionics software. Formal verification techniques are well-known to identify design errors of safety critical systems reducing development cost and time. As of now, formal verification of Simulink design models is being carried out manually resulting in excessive time consumption during the design phase. We present a tool that automatically translates certain Simulink models into input language of a suitable model checker. Formal verification of safety critical avionics components becomes faster and less error prone with this tool. Support is also provided for reverse translation of traces violating requirements (as given by the model checker) into Simulink notation for playback.", "About a decade after the initial proposal to use model checkers for the generation of test cases we take a look at the results in this field of research. 
Model checkers are formal verification tools, capable of providing counterexamples to violated properties. Normally, these counterexamples are meant to guide an analyst when searching for the root cause of a property violation. They are, however, also very useful as test cases. Many different approaches have been presented, many problems have been solved, yet many issues remain. This survey paper reviews the state of the art in testing with model checkers. Copyright © 2008 John Wiley & Sons, Ltd." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the component satisfies the condition; the condition, in turn, can be established by testing the component with test cases generated from it on the fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Callahan et al. @cite_32 used the model checker SPIN @cite_18 to check a program's execution traces generated during white-box testing and to generate new test cases from the counter-examples found by SPIN; in @cite_35 , SPIN was also used to generate test cases from counter-examples found while model-checking system specifications. Gargantini and Heitmeyer @cite_25 used SMV both to generate test cases from operational SCR specifications and as a test oracle. In @cite_30 @cite_39 , Ammann et al. also exploited the ability of the model checker SMV @cite_12 to produce counter-examples; their approach mutates both specifications and properties so that a large set of test cases can be generated. (A detailed introduction to using model checkers in testing can be found in @cite_4 .)
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_32", "@cite_39", "@cite_25", "@cite_12" ], "mid": [ "1889756448", "2115309705", "1548575501", "2052495090" ], "abstract": [ "We present combined-case k-induction, a novel technique for verifying software programs. This technique draws on the strengths of the classical inductive-invariant method and a recent application of k-induction to program verification. In previous work, correctness of programs was established by separately proving a base case and inductive step. We present a new k-induction rule that takes an unstructured, reducible control flow graph (CFG), a natural loop occurring in the CFG, and a positive integer k, and constructs a single CFG in which the given loop is eliminated via an unwinding proportional to k. Recursively applying the proof rule eventually yields a loop-free CFG, which can be checked using SAT- SMT-based techniques. We state soundness of the rule, and investigate its theoretical properties. We then present two implementations of our technique: K-INDUCTOR, a verifier for C programs built on top of the CBMC model checker, and K-BOOGIE, an extension of the Boogie tool. Our experiments, using a large set of benchmarks, demonstrate that our k-induction technique frequently allows program verification to succeed using significantly weaker loop invariants than are required with the standard inductive invariant approach.", "SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.", "The main limiting factor of the model checker SPIN is currently the amount of available physical memory. This paper explores the possibility of exploiting a distributed-memory execution environment, such as a network of workstations interconnected by a standard LAN, to extend the size of the verification problems that can be successfully handled by SPIN. A distributed version of the algorithm used by SPIN to verify safety properties is presented, and its compatibility with the main memory and complexity reduction mechanisms of SPIN is discussed. Finally, some preliminary experimental results are presented.", "About a decade after the initial proposal to use model checkers for the generation of test cases we take a look at the results in this field of research. Model checkers are formal verification tools, capable of providing counterexamples to violated properties. Normally, these counterexamples are meant to guide an analyst when searching for the root cause of a property violation. They are, however, also very useful as test cases. Many different approaches have been presented, many problems have been solved, yet many issues remain. This survey paper reviews the state of the art in testing with model checkers. Copyright © 2008 John Wiley & Sons, Ltd." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the component satisfies the condition; the condition, in turn, can be established by testing the component with test cases generated from it on the fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Peled et al. @cite_27 @cite_29 @cite_5 studied the problem of checking a black box against a temporal property (called black-box checking). Their focus, however, is on how to efficiently establish an abstract model of the black box through black-box testing, and their approach requires a clearly defined property (an LTL formula) about the black box, which is not always possible in component-based systems. Kupferman and Vardi @cite_28 investigated module checking, the problem of checking an open finite-state system under all possible environments. Module checking differs from the problem (*) mentioned at the beginning of the paper in the sense that the component, understood as an environment in @cite_28 , is a specific one in our setting. Fisler et al. @cite_24 @cite_6 proposed deducing a model-checking condition for extension features from the base feature, an idea adopted to study model-checking of feature-oriented software designs. Their approach relies entirely on model-checking techniques; their algorithms can produce false negatives and do not handle LTL formulas.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_6", "@cite_24", "@cite_27", "@cite_5" ], "mid": [ "1498432697", "2167672803", "2122213509", "1986424898" ], "abstract": [ "Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.", "We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they showed to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This lead us to consider computation tree logic (CTL) which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm.", "Using ideas from automata theory we design a new efficient (deterministic) identity test for the noncommutative polynomial identity testing problem (first introduced and studied in [RS05, BW05]). 
More precisely, given as input a noncommutative circuit C x1, ldrldrldr , xn computing a polynomial in F x1, ldrldrldr , xn of degree d with at most t monomials, where the variables xi are noncommuting, we give a deterministic polynomial identity test that checks if C equiv 0 and runs in time polynomial in d, n, |C|, and t. The same methods works in a black-box setting: given a noncommuting black-box polynomial f isin F x1, ldrldrldr , xn of degree d with t monomials we can, in fact, reconstruct the entire polynomial f in time polynomial in n, d and t. Indeed, we apply this idea to the reconstruction of black-box noncommuting algebraic branching programs (the ABPs considered by Nisan in [N91] and Raz-Shpilka in [RS05]). Assuming that the black-box model allows us to query the ABP for the output at any given gate then we can reconstruct an (equivalent) ABP in deterministic polynomial time. Finally, we turn to commutative identity testing and explore the complexity of the problem when the coefficients of the input polynomial come from an arbitrary finite commutative ring with unity whose elements are uniformly encoded as strings and the ring operations are given by an oracle. We show that several algorithmic results for polynomial identity testing over fields also hold when the coefficients come from such finite rings.", "This paper presents the linear temporal logic of rewriting (LTLR) model checker under localized fairness assumptions for the Maude system. The linear temporal logic of rewriting extends linear temporal logic (LTL) with spatial action patterns that describe patterns of rewriting events. Since LTLR generalizes and extends various state-based and event-based logics, mixed properties involving both state propositions and actions, such as fairness properties, can be naturally expressed in LTLR. However, often the needed fairness assumptions cannot even be expressed as propositional temporal logic formulas because they are parametric, that is, they correspond to universally quantified temporal logic formulas. Such universal quantification is succinctly captured by the notion of localized fairness; for example, fairness is localized to the object name parameter in object fairness conditions. We summarize the foundations, and present the language design and implementation of the Maude Fair LTLR model checker, developed at the C++ level within the Maude system by extending the existing Maude LTL model checker. Our tool provides not only an efficient LTLR model checking algorithm under parameterized fairness assumptions but also suitable specification languages as part of its user interface. The expressiveness and effectiveness of the Maude Fair LTLR model checker are illustrated by five case studies. This is the first tool we are aware of that can model check temporal logic properties under parameterized fairness assumptions. We develop the LTLR model checker under localized fairness assumptions.The linear temporal logic of rewriting (LTLR) extends LTL with action patterns.Localized fairness specifies parameterized fairness over generic system entities.We present the foundations, the language design, and the implementation of our tool.We illustrate the expressiveness and effectiveness of our tool with case studies." ] }
cs0402003
2952176458
The notion of preference is becoming more and more ubiquitous in present-day information systems. Preferences are primarily used to filter and personalize the information reaching the users of such systems. In database systems, preferences are usually captured as preference relations that are used to build preference queries. In our approach, preference queries are relational algebra or SQL queries that contain occurrences of the winnow operator ("find the most preferred tuples in a given relation"). We present here a number of semantic optimization techniques applicable to preference queries. The techniques make use of integrity constraints, and make it possible to remove redundant occurrences of the winnow operator and to apply a more efficient algorithm for the computation of winnow. We also study the propagation of integrity constraints in the result of the winnow. We have identified necessary and sufficient conditions for the applicability of our techniques, and formulated those conditions as constraint satisfiability problems.
The basic reference for semantic query optimization is @cite_11 . The most common techniques are: join elimination and introduction, predicate elimination and introduction, and detecting an empty answer set. @cite_15 discusses the implementation of predicate introduction and join elimination in an industrial query optimizer. Semantic query optimization techniques for relational queries are studied in @cite_8 in the context of denial and referential constraints, and in @cite_14 in the context of constraint tuple-generating dependencies (a generalization of CGDs and classical relational dependencies). Functional dependencies (FDs) are used for reasoning about sort orders in @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "2006519067", "1548134621", "2108930340", "2106082581" ], "abstract": [ "The purpose of semantic query optimization is to use semantic knowledge (e.g., integrity constraints) for transforming a query into a form that may be answered more efficiently than the original version. In several previous papers we described and proved the correctness of a method for semantic query optimization in deductive databases couched in first-order logic. This paper consolidates the major results of these papers emphasizing the techniques and their applicability for optimizing relational queries. Additionally, we show how this method subsumes and generalizes earlier work on semantic query optimization. We also indicate how semantic query optimization techniques can be extended to databases that support recursion and integrity constraints that contain disjunction, negation, and recursion.", "We critically evaluate the current state of research in multiple query opGrnization, synthesize the requirements for a modular opCrnizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure. In rhe context of this archiuzcture. we provide an improved subsumption algorithm. and discuss migration paths from single-query to multiple-query oplimizers. The architecture has three key ingredients. First. each type of work is performed at an appropriate level of abstraction. Segond, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable horn the query processing tasks. 1. Problem Definition and Objectives A multiple query optimizer (h4QO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator gaph that simultaneously computes answers to all the queries. The idea is to save by evaluating common subexpressions only once. The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and SOULS. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1 .l shows a multi-strategy generated exploiting commonalities among queries Ql-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, ejjiciency, and ease of Permission to copy without fee all a part of this mataial is granted provided that the copies are nut made a diitributed for direct commercial advantage, the VIDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Da Blse Endowment. To copy otherwise. urturepublim identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans); and search effectively to choose a good combination of l-strategies. Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. 
‘Finally, ease of implementation is crucial an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional soft-", "The authors address the issue of reasoning with two classes of commonly used semantic integrity constraints in database and knowledge-base systems: implication constraints and referential constraints. They first consider a central problem in this respect, the IRC-refuting problem, which is to decide whether a conjunctive query always produces an empty relation on (finite) database instances satisfying a given set of implication and referential constraints. Since the general problem is undecidable, they only consider acyclic referential constraints. Under this assumption, they prove that the IRC-refuting problem is decidable, and give a novel necessary and sufficient condition for it. Under the same assumption, they also study several other problems encountered in semantic query optimization, such as the semantics-based query containment problem, redundant join problem, and redundant selection-condition problem, and show that they are polynomially equivalent or reducible to the IRC-refuting problem. Moreover, they give results on reducing the complexity for some special cases of the IRC-refuting problem.", "New applications of information systems need to integrate a large number of heterogeneous databases over computer networks. Answering a query in these applications usually involves selecting relevant information sources and generating a query plan to combine the data automatically. As significant progress has been made in source selection and plan generation, the critical issue has been shifting to query optimization. This paper presents a semantic query optimization (SQO) approach to optimizing query plans of heterogeneous multidatabase systems. This approach provides global optimization for query plans as well as local optimization for subqueries that retrieve data from individual database sources. An important feature of our local optimization algorithm is that we prove necessary and sufficient conditions to eliminate an unnecessary join in a conjunctive query of arbitrary join topology. This feature allows our optimizer to utilize more expressive relational rules to provide a wider range of possible optimizations than previous work in SQO. The local optimization algorithm also features a new data structure called AND-OR implication graphs to facilitate the search for optimal queries. These features allow the global optimization to effectively use semantic knowledge to reduce the data transmission cost. We have implemented this approach in the PESTO (Plan Enhancement by SemanTic Optimization) query plan optimizer as a part of the SIMS information mediator. Experimental results demonstrate that PESTO can provide significant savings in query execution cost over query plan execution without optimization." ] }
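The cs0402003 abstract below defines the winnow operator informally as "find the most preferred tuples in a given relation". As a purely illustrative sketch (not the paper's algorithm or notation), the following Python fragment computes winnow by the naive nested-loop method: a tuple is kept exactly when no other tuple is preferred to it. The predicate name `prefers` and the toy car relation are hypothetical.

```python
# Minimal sketch of the winnow operator: keep every tuple that is not dominated
# by another tuple under a strict preference predicate.  Names are illustrative.

def winnow(relation, prefers):
    """Return the tuples t such that no t2 in the relation satisfies prefers(t2, t)."""
    return [t for t in relation
            if not any(prefers(t2, t) for t2 in relation if t2 is not t)]

# Toy preference: cheaper cars are preferred; among equally priced cars, newer ones.
cars = [
    {"id": "a", "price": 10_000, "year": 2001},
    {"id": "b", "price": 12_000, "year": 2003},
    {"id": "c", "price": 10_000, "year": 1999},
]

def prefers(t1, t2):
    return (t1["price"] < t2["price"]
            or (t1["price"] == t2["price"] and t1["year"] > t2["year"]))

print(winnow(cars, prefers))   # only car "a" survives
```

This naive computation is quadratic in the size of the relation; the semantic optimizations described in the abstract target exactly the situations where integrity constraints allow a winnow occurrence to be removed or evaluated by a cheaper algorithm.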
cs0312023
2951755603
This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis, where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser so that it also performs termination inference is demonstrated.
This paper draws on results from two areas: termination (checking) analysis and backwards analysis. It shows how to combine components implementing these so as to obtain an analyser for termination inference. Termination checking for logic programs has been studied extensively (see for example the survey @cite_18 ). Backwards reasoning for imperative programs dates back to the early days of static analysis and has been applied extensively in functional programming. Applications of backwards analysis in the context of logic programming are few. For details concerning other applications of backwards analysis, see @cite_14 . The only other work on termination inference that we are aware of is that of Mesnard and coauthors. The implementation of Mesnard's cTI analyser is described in @cite_15 and its formal justification is given in @cite_23 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_23", "@cite_15" ], "mid": [ "1524883003", "2136242294", "2009286786", "2012816689" ], "abstract": [ "We present the implementation of cTI, a system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, requiringu ser annotations. With termination inference such annotations are no longer necessary. Instead, all provably terminatingclasses to all related predicates are inferred at once. The architecture of cTI is described1 and some optimizations are discussed. Runningti mes for classical examples from the termination literature in LP and for some middle-sized logic programs are given.", "We describe a new program termination analysis designed to handle imperative programs whose termination depends on the mutation of the program's heap. We first describe how an abstract interpretation can be used to construct a finite number of relations which, if each is well-founded, implies termination. We then give an abstract interpretation based on separation logic formulaewhich tracks the depths of pieces of heaps. Finally, we combine these two techniques to produce an automatic termination prover. We show that the analysis is able to prove the termination of loops extracted from Windows device drivers that could not be proved terminating before by other means; we also discuss a previously unknown bug found with the analysis.", "Abstract We survey termination analysis techniques for Logic Programs. We give an extensive introduction to the topic. We recall several motivations for the work, and point out the intuitions behind a number of LP-specific issues that turn up, such as: the study of different classes of programs and LP languages, of different classes of queries and of different selection rules, the difference between existential and universal termination, and the treatment of backward unification and local variables. Then, we turn to more technical aspects: the structure of the termination proofs, the selection of well-founded orderings, norms and level mappings, the inference of interargument relations, and special treatments proposed for dealing with mutual recursion. For each of these, we briefly sketch the main approaches presented in the literature, using a fixed example as a file rouge. We conclude with some comments on loop detection and cycle unification and state some open problems.", "Proof, verification and analysis methods for termination all rely on two induction principles: (1) a variant function or induction on data ensuring progress towards the end and (2) some form of induction on the program structure. The abstract interpretation design principle is first illustrated for the design of new forward and backward proof, verification and analysis methods for safety. The safety collecting semantics defining the strongest safety property of programs is first expressed in a constructive fixpoint form. Safety proof and checking verification methods then immediately follow by fixpoint induction. Static analysis of abstract safety properties such as invariance are constructively designed by fixpoint abstraction (or approximation) to (automatically) infer safety properties. So far, no such clear design principle did exist for termination so that the existing approaches are scattered and largely not comparable with each other. 
For (1), we show that this design principle applies equally well to potential and definite termination. The trace-based termination collecting semantics is given a fixpoint definition. Its abstraction yields a fixpoint definition of the best variant function. By further abstraction of this best variant function, we derive the Floyd Turing termination proof method as well as new static analysis methods to effectively compute approximations of this best variant function. For (2), we introduce a generalization of the syntactic notion of struc- tural induction (as found in Hoare logic) into a semantic structural induction based on the new semantic concept of inductive trace cover covering execution traces by segments, a new basis for formulating program properties. Its abstractions allow for generalized recursive proof, verification and static analysis methods by induction on both program structure, control, and data. Examples of particular instances include Floyd's handling of loop cutpoints as well as nested loops, Burstall's intermittent assertion total correctness proof method, and Podelski-Rybalchenko transition invariants." ] }
cs0312023
2951755603
This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis, where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser so that it also performs termination inference is demonstrated.
Both systems compute the greatest fixed point of a system of recursive equations. In our case, the implementation is based on a simple meta-interpreter written in Prolog; in cTI, the implementation is based on a @math -calculus interpreter. In our case, this system of equations is set up as an instance of backwards analysis, hence providing a clear motivation and justification @cite_23 .
{ "cite_N": [ "@cite_23" ], "mid": [ "2075296484", "1524883003", "2084777452", "2128659236" ], "abstract": [ "A natural term rewriting framework for the Bellantoni Cook schemata of predicative recursion, which yields a canonical definition of the polynomial time computable functions, is introduced. In terms of an exponential function both, an upper bound and a lower bound are proved for the resulting derivation lengths of the functions in question. It is proved that any natural reduction strategy yields an algorithm which runs in exponential time. We give an example in which this estimate is tight. It is proved that the resulting derivation lengths become polynomially bounded in the lengths of the inputs if the rewrite rules are only applied to terms in which the safe arguments – no restrictions are assumed for the normal arguments – consist of values, i.e. numerals, and not of names, i.e. non numeral terms. It is proved that in the latter situation any inside first reduction strategy and any head reduction strategy yield algorithms, for the function under consideration, for which the running time is bounded by an appropriate polynomial in the lengths of the input. A feasible rewrite system for predicative recursion with predicative parameter substitution is defined. It is proved that the derivation lengths of this rewrite system are polynomially bounded in the lengths of the inputs. As a corollary we reobtain Bellantoni’s result stating that predicative recursion is closed under predicative parameter recursion.", "We present the implementation of cTI, a system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, requiringu ser annotations. With termination inference such annotations are no longer necessary. Instead, all provably terminatingclasses to all related predicates are inferred at once. The architecture of cTI is described1 and some optimizations are discussed. Runningti mes for classical examples from the termination literature in LP and for some middle-sized logic programs are given.", "We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection–proximal and extragradient–proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem. *Research of the first author is suppo...", "We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks e.g. affine (wavelet) frames. 
We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively." ] }
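The related-work note above says that both cTI and the authors' analyser compute the greatest fixed point of a system of recursive equations (in the authors' case via a Prolog meta-interpreter, in cTI's case via a mu-calculus interpreter). The fragment below is only a generic illustration of such a computation on a finite powerset lattice, not either of those implementations; the dependency table and the mode names are invented for the example.

```python
# Generic greatest-fixed-point iteration on a finite powerset lattice.
# `top` is the largest candidate set; `step` must be monotone
# (S <= T implies step(S) <= step(T)).  The toy instance below is made up.

def gfp(top, step):
    current = frozenset(top)
    while True:
        nxt = frozenset(step(current)) & current   # never climb above `top`
        if nxt == current:
            return current
        current = nxt

# Hypothetical "termination modes": a mode is kept only if the modes it
# depends on are also kept.  p(out) depends on q(out), which is absent.
deps = {"p(in)": {"q(in)"}, "p(out)": {"q(out)"}, "q(in)": set()}

def step(kept):
    return {mode for mode, needed in deps.items() if needed <= kept}

print(sorted(gfp(set(deps), step)))   # ['p(in)', 'q(in)']
```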
math0312490
2166075559
As a sequel to our proof of the analog of Serre's conjecture for function fields in Part I of this work, we study in this paper the deformation rings of @math -dimensional mod @math representations @math of the arithmetic fundamental group @math where @math is a geometrically irreducible, smooth curve over a finite field @math of characteristic @math ( @math ). We are able to show in many cases that the resulting rings are finite flat over @math . The proof principally uses a lifting result of the authors in Part I of this two-part work, Taylor-Wiles systems and the result of Lafforgue. This implies a conjecture of A.J. de Jong for representations with coefficients in power series rings over finite fields of characteristic @math , that have this mod @math representation as their reduction.
The key qualitative difference between the mentioned works and ours is that we can prove automorphy of residual representations like @math in the theorem, while in the other works this has to be, at the moment, an important assumption that seems extremely difficult to verify in their number field case; further, we are mainly interested in establishing algebraic properties of deformation rings, while in the number field case these are established en route to proving modularity of @math -adic representations (which is known in our context by @cite_22 !). Thus our use of the methods pioneered by Wiles can, to a certain extent, be deemed warped!
{ "cite_N": [ "@cite_22" ], "mid": [ "230449344", "1945101555", "2591592591", "2170546552" ], "abstract": [ "Let ( O ) k be the ring of integers of a finite extension k of the field ( Q ) p of p-adic numbers. The endomorphisms of a formal group law defined over ( O ) k provide nontrivial examples of commuting formal series with coefficients in ( O ) k . This article deals with the inverse problem formulated by Jonathan Lubin within the context of non-Archimedean dynamical systems. We present a large family of series, with coefficients in ( Z ) p , which satisfy Lubin's conjecture. These series are constructed with the help of Lubin–Tate formal group laws over ( Q ) p . We introduce the notion of minimally ramified series which turn out to be modulo p reductions of some series of this family. The commutant monoids of these minimally ramified series are determined by using the Fontaine–Wintenberger theory of the field of norms which allows an interpretation of them as automorphisms of ( Z ) p -extensions of local fields of characteristic zero. A particularly effective example illustrating the paper is given by a family of series generalizing Cebysev polynomials", "Let @math be a smooth scheme over an algebraically closed field @math of characteristic zero and @math a regular function, and write @math Crit @math , as a closed subscheme of @math . The motivic vanishing cycle @math is an element of the @math -equivariant motivic Grothendieck ring @math defined by Denef and Loeser math.AG 0006050 and Looijenga math.AG 0006220, and used in Kontsevich and Soibelman's theory of motivic Donaldson-Thomas invariants, arXiv:0811.2435. We prove three main results: (a) @math depends only on the third-order thickenings @math of @math . (b) If @math is another smooth scheme, @math is regular, @math Crit @math , and @math is an embedding with @math and @math an isomorphism, then @math equals @math \"twisted\" by a motive associated to a principal @math -bundle defined using @math , where now we work in a quotient ring @math of @math . (c) If @math is an \"oriented algebraic d-critical locus\" in the sense of Joyce arXiv:1304.4508, there is a natural motive @math , such that if @math is locally modelled on Crit @math , then @math is locally modelled on @math . Using results from arXiv:1305.6302, these imply the existence of natural motives on moduli schemes of coherent sheaves on a Calabi-Yau 3-fold equipped with \"orientation data\", as required in Kontsevich and Soibelman's motivic Donaldson-Thomas theory arXiv:0811.2435, and on intersections of oriented Lagrangians in an algebraic symplectic manifold. This paper is an analogue for motives of results on perverse sheaves of vanishing cycles proved in arXiv:1211.3259. We extend this paper to Artin stacks in arXiv:1312.0090.", "This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given @math binary matrix @math , the GM-MDS conjecture, proposed by , states that if @math satisfies the so-called MDS condition, then for any field @math of size @math , there exists an @math MDS code whose generator matrix @math , with entries in @math , fits the matrix @math (i.e., @math is the support matrix of @math ). Despite all the attempts by the coding theory community, this conjecture remains still open in general. 
It was shown, independently by and , that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if @math satisfies the MDS condition, then the determinant of a transform matrix @math , such that @math fits @math , is not identically zero, where @math is a Vandermonde matrix with distinct parameters. In this work, we first reformulate the TM-MDS conjecture in terms of the Wronskian determinant, and then present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. Our proof technique's strength is based primarily on reducing inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ( @math ) of @math is upper bounded by @math . For this class of special cases of @math where the only additional constraint is on @math , only cases with @math were previously proven theoretically, and the previously used proof techniques are not applicable to cases with @math .", "Mulmuley [Mul12a] recently gave an explicit version of Noether’s Normalization lemma for ring of invariants of matrices under simultaneous conjugation, under the conjecture that there are deterministic black-box algorithms for polynomial identity testing (PIT). He argued that this gives evidence that constructing such algorithms for PIT is beyond current techniques. In this work, we show this is not the case. That is, we improve Mulmuley’s reduction and correspondingly weaken the conjecture regarding PIT needed to give explicit Noether Normalization. We then observe that the weaker conjecture has recently been nearly settled by the authors ([FS12]), who gave quasipolynomial size hitting sets for the class of read-once oblivious algebraic branching programs (ROABPs). This gives the desired explicit Noether Normalization unconditionally, up to quasipolynomial factors. As a consequence of our proof we give a deterministic parallel polynomial-time algorithm for deciding if two matrix tuples have intersecting orbit closures, under simultaneous conjugation. We also study the strength of conjectures that Mulmuley requires to obtain similar results as ours. We prove that his conjectures are stronger, in the sense that the computational model he needs PIT algorithms for is equivalent to the well-known algebraic branching program (ABP) model, which is provably stronger than the ROABP model. Finally, we consider the depth-3 diagonal circuit model as defined by Saxena [Sax08], as PIT algorithms for this model also have implications in Mulmuley’s work. Previous work (such as [ASS12] and [FS12]) have given quasipolynomial size hitting sets for this model. In this work, we give a much simpler construction of such hitting sets, using techniques of Shpilka and Volkovich [SV09]." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (an attack on an argument's premise) and rebuts (an attack on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
In @cite_20 , an argumentation semantics for extended logic programs, similar to Prakken and Sartor's, is proposed; it is influenced by WFSX, and distinguishes between sceptical and credulous conclusions of an argument. It also provides a proof theory based on dialogue trees, similar to Prakken and Sartor's.
{ "cite_N": [ "@cite_20" ], "mid": [ "2395246457", "2405720308", "2891615286", "2140044962" ], "abstract": [ "It is well-known, in the area of argumentation theory, that there is a direct relationship between extension-based argumentation semantics and logic programming semantics with negation as failure. One of the main implication of this relationship is that one can explore the implementation of argumentation engines by considering logic programming solvers. Recently, it was proved that the argumentation semantics CF2 can be characterized by the stratified minimal model semantics (MM). The stratified minimal model semantics is also a recently introduced logic programming semantics which is based on a recursive construction and minimal models. In this paper, we introduce a solver based on MINISAT algorithm for inferring the logic programming semantics MM∗. As one of the applications of the MM solver, we will argue that this solver is a suitable tool for computing the argumentation semantics CF2.", "Classical semantics for abstract argumentation frameworks are usually defined in terms of extensions or, more recently, labelings. That is, an argument is either regarded as accepted with respect to a labeling or not. In order to reason with a specific semantics one takes either a credulous or skeptical approach, i. e. an argument is ultimately accepted, if it is accepted in one or all labelings, respectively. In this paper, we propose a more general approach for a semantics that allows for a more fine-grained differentiation between those two extreme views on reasoning. In particular, we propose a probabilistic semantics for abstract argumentation that assigns probabilities or degrees of belief to individual arguments. We show that our semantics generalizes the classical notions of semantics and we point out interesting relationships between concepts from argumentation and probabilistic reasoning. We illustrate the usefulness of our semantics on an example from the medical domain.", "We propose a new graded semantics for abstract argumentation frameworks that is based on the constellations approach to probabilistic argumentation. Given an abstract argumentation framework, our approach assigns uniform probability to all arguments and then ranks arguments according to the probability of acceptance wrt. some classical semantics. Albeit relying on a simple idea this approach (1) is based on the solid theoretical foundations of probability theory, and (2) complies with many rationality postulates proposed for graded semantics. We also investigate an application of our approach for inconsistency measurement in argumentation frameworks and show that the measure induced by the probabilistic graded semantics also complies with the basic rationality postulates from that area.", "The ability to view extended logic programs as argumentation systems opens the way for the use of this language in formalizing communication among reasoning computing agents in a distributed framework. In this paper we define an argumentative and cooperative multi-agent framework, introducing credulous and sceptical conclusions. We also present an algorithm for inference and show how the agents can have more credulous or sceptical conclusions." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (an attack on an argument's premise) and rebuts (an attack on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
Defeasible Logic Programming @cite_44 @cite_25 @cite_30 is a formalism very similar to Prakken and Sartor's, based on the first-order logic argumentation framework of @cite_1 . It includes logic programming with two kinds of negation, a distinction between strict and defeasible rules, and various criteria for comparing arguments. Its semantics is given operationally, by proof procedures based on dialectical trees @cite_44 @cite_25 . In @cite_19 , the semantics of Defeasible Logic Programming is related to the well-founded semantics, albeit only for the restricted language corresponding to normal logic programs @cite_41 .
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_1", "@cite_44", "@cite_19", "@cite_25" ], "mid": [ "2159569510", "2156092566", "2170232725", "190056634" ], "abstract": [ "The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism will be used for deciding between contradictory goals. Queries will be supported by arguments that could be defeated by other arguments. A query @math will succeed when there is an argument @math for @math that is warranted, i.e. the argument @math that supports @math is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP allows to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing agent's knowledge and for providing an argumentation based reasoning mechanism to agents.", "In this dissertation I present a formal approach to defeasible reasoning. This mathematical approach is based on the notion of specificity introduced by Poole and the general theory of warrant as presented by Pollock. General background information on the subject of Nonmonotonic Reasoning is presented and some of the shortcomings of existing systems are analyzed. We believe that the approach presented here represents a definite improvement over past systems. The main contribution of this thesis is a formally precise, elegant, clean, well-defined system which exhibits a correct behavior when applied to the benchmark examples in the literature. Model-theoretic semantical issues have been addressed. The investigation on the theoretical issues has aided the study of how this kind of reasoner can be realized on a computer. An interpreter of a restricted language, an extension of Horn clauses with defeasible rules, has been implemented. Finally, the implementation details are discussed.", "This paper relates the Defeasible Logic Programming (DeLP) framework and its semantics SEMDeLP to classical logic programming frameworks. In DeLP, we distinguish between two different sorts of rules: strict and defeasible rules. Negative literals (∼A) in these rules are considered to represent classical negation. In contrast to this, in normal logic programming (NLP), there is only one kind of rules, but the meaning of negative literals (not A) is different: they represent a kind of negation as failure, and thereby introduce defeasibility. Various semantics have been defined for NLP, notably the well-founded semantics (WFS) (van , Proceedings of the Seventh Symposium on Principles of Database Systems, 1988, pp. 221-230; J. ACM 38 (3) (1991) 620) and the stable semantics Stable (Gelfond and Lifschitz, Fifth Conference on Logic Programming, MIT Press, Cambridge, MA, 1988, pp. 1070-1080; Proceedings of the Seventh International Conference on Logical Programming, Jerusalem, MIT Press, Cambridge, MA, 1991, pp. 579-597).In this paper we consider the transformation properties for NLP introduced by Brass and Dix (J. Logic Programming 38(3) (1999) 167) and suitably adjusted for the DeLP framework. We show which transformation properties are satisfied, thereby identifying aspects in which NLP and DeLP differ. 
We contend that the transformation rules presented in this paper can help to gain a better understanding of the relationship of DeLP semantics with respect to more traditional logic programming approaches. As a byproduct, we obtain the result that DeLP is a proper extension of NLP.", "We present here a knowledge representation language, where defeasible and non-defeasible rules can be expressed. The language has two different negations: classical negation, which is represented by the symbol “∼” used for representing contradictory knowledge; and negation as failure, represented by the symbol “not” used for representing incomplete information. Defeasible reasoning is done using a argumentation formalism. Thus, systems for acting in a dynamic domain, that properly handle contradictory and or incomplete information can be developed with this language. An argument is used as a defeasible reason for supporting conclusions. A conclusion q will be considered justified only when the argument that supports it becomes a justification. Building a justification involves the construction of a nondefeated argument A for q. In order to establish that A is a non-defeated argument, the system looks for counterarguments that could be defeaters for A. Since defeaters are arguments, there may exist defeaters for the defeaters, and so on, thus requiring a complete dialectical analysis. The system also detects, avoids, circular argumentation. The language was implemented using an abstract machine defined and developed as an extension of the Warren Abstract Machine (wam)." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (an attack on an argument's premise) and rebuts (an attack on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
A number of authors @cite_23 @cite_18 @cite_15 @cite_27 @cite_37 @cite_45 @cite_14 @cite_20 work on argumentation for negotiating agents. Of these, the approaches of @cite_37 @cite_45 @cite_14 are based on logic programming. The advantage of the logic programming approach for arguing agents is the availability of goal-directed, top-down proof procedures. This is vital when implementing systems which need to react in real time and therefore cannot afford to compute all justified arguments, as would be required if a bottom-up argumentation semantics were used.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_14", "@cite_27", "@cite_45", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "2095587681", "1814404551", "2206132849", "2140044962" ], "abstract": [ "In a multi-agent environment, where self-motivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals. We present argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. We look at argumentation as a mechanism for achieving cooperation and agreements. Using categories identified from human multi-agent negotiation, we demonstrate how the logic can be used to specify argument formulation and evaluation. We also illustrate how the developed logic can be used to describe different types of agents. Furthermore, we present a general Automated Negotiation Agent which we implemented, based on the logical model. Using this system, a user can analyze and explore different methods to negotiate and argue in a noncooperative environment where no centralized mechanism for coordination exists. The development of negotiating agents in the framework of the Automated Negotiation Agent is illustrated with an example where the agents plan, act, and resolve conflicts via negotiation in a Blocks World environment.", "Agents engage in dialogues having as goals to make some arguments acceptable or unacceptable. To do so they may put forward arguments, adding them to the argumentation framework. Argumentation semantics can relate a change in the framework to the resulting extensions but it is not clear, given an argumentation framework and a desired acceptance state for a given set of arguments, which further arguments should be added in order to achieve those justification statuses. Our methodology, called conditional labelling, is based on argument labelling and assigns to each argument three propositional formulae. These formulae describe which arguments should be attacked by the agent in order to get a particular argument in, out, or undecided, respectively. Given a conditional labelling, the agents have a full knowledge about the consequences of the attacks they may raise on the acceptability of each argument without having to recompute the overall labelling of the framework for each possible set of attack they may raise.", "This paper studies an abduction problem in formal argumentation frameworks. Given an argument, an agent verifies whether the argument is justified or not in its argumentation framework. If the argument is not justified, the agent seeks conditions to explain the argument in its argumentation framework. We formulate such abductive reasoning in argumentation semantics and provide its computation in logic programming. Next we apply abduction in argumentation frameworks to reasoning by players in debate games. In debate games, two players have their own argumentation frameworks and each player builds claims to refute the opponent. A player may provide false or inaccurate arguments as a tactic to win the game. 
We show that abduction is used not only for seeking counter-claims but also for building dishonest claims in debate games.", "The ability to view extended logic programs as argumentation systems opens the way for the use of this language in formalizing communication among reasoning computing agents in a distributed framework. In this paper we define an argumentative and cooperative multi-agent framework, introducing credulous and sceptical conclusions. We also present an algorithm for inference and show how the agents can have more credulous or sceptical conclusions." ] }
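The related-work paragraph above contrasts goal-directed, top-down proof procedures with computing all justified arguments bottom-up. Purely as an illustration of the bottom-up alternative (stated over an abstract attack graph, not the extended-logic-programming setting of the cited works), the following fragment computes the sceptically justified arguments as the least fixed point of the usual acceptability operator; the three-argument graph is invented.

```python
# Bottom-up computation of the justified (grounded) arguments of an abstract
# attack graph: iterate F(S) = {a | every attacker of a is attacked by S}
# starting from the empty set.  The example graph is made up.

def justified(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    current = set()
    while True:
        nxt = {a for a in arguments
               if all(any((d, b) in attacks for d in current)
                      for b in attackers[a])}
        if nxt == current:
            return current
        current = nxt

arguments = {"A", "B", "C"}
attacks = {("A", "B"), ("B", "C")}            # A attacks B, B attacks C
print(sorted(justified(arguments, attacks)))  # ['A', 'C']
```

Computing this set requires visiting the whole attack graph, which is exactly what the text above argues a real-time agent cannot always afford, hence the preference for top-down, query-driven procedures.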
cs0310016
1673079227
By recording every state change in the run of a program, it is possible to present to the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by "going backwards in time," vastly simplifying the process of debugging. An implementation of this idea, the "Omniscient Debugger," is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally, performance issues and implementation are discussed along with possible optimizations. This paper makes three contributions of interest: the concept and technique of "going backwards in time," the GUI which presents a global view of the program state and has a formal notion of "navigation through time," and the integration with an event analyzer.
HERCULE @cite_9 is a tool which can record and replay distributed events, in particular, window events and appearance. It does for windows much of what the ODB does for programs, and provides much of the functionality that the ODB lacks.
{ "cite_N": [ "@cite_9" ], "mid": [ "1566707746", "2116420167", "2155512447", "2084989254" ], "abstract": [ "This paper presents HERCULE, an approach to non-invasively tracking end-user application activity in a distributed, component-based system. Such tracking can support the visualisation of user and application activity, system auditing, monitoring of system performance and the provision of feedback. A framework is provided that allows the insertion of proxies, dynamically and transparently, into a component-based system. Proxies are inserted in between the user and the graphical user-interface and between the client application and the rest of the distributed, component-based system. The paper describes: how the code for the proxies is generated by mining component documentation; how they are inserted without affecting pre-existing code; and how information produced by the proxies can be used to model application activity. The viability of this approach is demonstrated by means of a prototype implementation.", "textabstractX100 is a new execution engine for the MonetDB system, that improves execution speed and overcomes its main memory limitation. It introduces the concept of in-cache vectorized processing that strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and the tuple-at-a-time Volcano pipelining model, avoiding their drawbacks: intermediate result materialization and large interpretation overhead, respectively. We explain how the new query engine makes better use of cache memories as well as parallel computation resources of modern super-scalar CPUs. MonetDB X100 can be one to two orders of magnitude faster than commercial DBMSs and close to hand-coded C programs for computationally intensive queries on in-memory datasets. To address larger disk-based datasets with the same efficiency, a new ColumnBM storage layer is developed that boosts bandwidth using ultra lightweight compression and cooperative scans.", "PATRICIA is an algorithm which provides a flexible means of storing, indexing, and retrieving information in a large file, which is economical of index space and of reindexing time. It does not require rearrangement of text or index as new material is added. It requires a minimum restriction of format of text and of keys; it is extremely flexible in the variety of keys it will respond to. It retrieves information in response to keys furnished by the user with a quantity of computation which has a bound which depends linearly on the length of keys and the number of their proper occurrences and is otherwise independent of the size of the library. It has been implemented in several variations as FORTRAN programs for the CDC-3600, utilizing disk file storage of text. It has been applied to several large information-retrieval problems and will be applied to others.", "This article proposes a process to retrieve the URL of a document for which metadata records exist in a digital library catalog but a pointer to the full text of the document is not available. The process uses results from queries submitted to Web search engines for finding the URL of the corresponding full text or any related material. We present a comprehensive study of this process in different situations by investigating different query strategies applied to three general purpose search engines (Google, Yahoo!, MSN) and two specialized ones (Scholar and CiteSeer), considering five user scenarios. 
Specifically, we have conducted experiments with metadata records taken from the Brazilian Digital Library of Computing (BDBComp) and The DBLP Computer Science Bibliography (DBLP). We found that Scholar was the most effective search engine for this task in all considered scenarios and that simple strategies for combining and re-ranking results from Scholar and Google significantly improve the retrieval quality. Moreover, we study the influence of the number of query results on the effectiveness of finding missing information as well as the coverage of the proposed scenarios." ] }
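The cs0310016 abstract above rests on one idea: record every state change so that execution can be navigated backwards. The fragment below is only a toy Python illustration of that recording idea (the ODB itself instruments Java programs and is far more complete): a `sys.settrace` hook snapshots the local variables at every executed line of one function, after which any earlier moment of the run can be inspected in reverse order.

```python
# Toy "omniscient" recording: log (line number, locals) at every executed line
# of one traced function, then walk the history backwards.  Illustrative only;
# this is not the ODB implementation described in the paper.

import sys

history = []   # (line number, snapshot of local variables)

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "buggy_sum":
        history.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def buggy_sum(values):
    total = 0
    for v in values:
        total += v * v      # pretend the squaring is the bug we are hunting
    return total

sys.settrace(tracer)
buggy_sum([1, 2, 3])
sys.settrace(None)

# "Going backwards in time": inspect the recorded states from last to first.
for lineno, snapshot in reversed(history):
    print(lineno, snapshot)
```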
cs0310020
2119104528
A simple mathematical definition of the 4-port model for pure Prolog is given. The model combines the intuition of ports with a compact representation of execution state. Forward and backward derivation steps are possible. The model satisfies a modularity claim, making it suitable for formal reasoning.
In contrast to the few specifications of the Byrd box, there are many more general models of pure (or even full) Prolog execution. Due to space limitations we mention here only some models, directly relevant to , and for a more comprehensive discussion see @cite_1 . Comparable to our work are the stack-based approaches. Stärk gives in @cite_3 , as a side issue, a simple operational semantics of pure logic programming. A state of execution is a stack of frame stacks, where each frame consists of a goal (ancestor) and an environment. In comparison, our state of execution consists of exactly one environment and one ancestor stack. The seminal paper of Jones and Mycroft @cite_10 was the first to present a stack-based model of execution, applicable to pure Prolog with cut added. It uses a sequence of frames. In these stack-based approaches (including our previous attempt @cite_1 ), there is no , it is not possible to abstract the execution of a subgoal.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_3" ], "mid": [ "2625909586", "844671342", "1909063750", "2165220145" ], "abstract": [ "This article describes the final solution of team monkeytyping, who finished in second place in the YouTube-8M video understanding challenge. The dataset used in this challenge is a large-scale benchmark for multi-label video classification. We extend the work in [1] and propose several improvements for frame sequence modeling. We propose a network structure called Chaining that can better capture the interactions between labels. Also, we report our approaches in dealing with multi-scale information and attention pooling. In addition, We find that using the output of model ensemble as a side target in training can boost single model performance. We report our experiments in bagging, boosting, cascade, and stacking, and propose a stacking algorithm called attention weighted stacking. Our final submission is an ensemble that consists of 74 sub models, all of which are listed in the appendix.", "Symbolic execution extends concrete execution by allowing symbolic input data and then exploring all feasible execution paths. It has been defined and used in the context of many different programming languages and paradigms. A symbolic execution engine is at the heart of many program analysis and transformation techniques, like partial evaluation, test case generation or model checking, to name a few. Despite its relevance, traditional symbolic execution also suffers from several drawbacks. For instance, the search space is usually huge (often infinite) even for the simplest programs. Also, symbolic execution generally computes an overapproximation of the concrete execution space, so that false positives may occur. In this paper, we propose the use of a variant of symbolic execution, called concolic execution, for test case generation in Prolog. Our technique aims at full statement coverage. We argue that this technique computes an underapproximation of the concrete execution space (thus avoiding false positives) and scales up better to medium and large Prolog applications.", "In this paper we introduce a model for a wide class of computational systems, whose behaviour can be described by certain rewriting rules. We gathered our inspiration both from the world of term rewriting, in particular from the rewriting logic framework Mes92 , and of concurrency theory: among the others, the structured operational semantics Plo81 , the context systems LX90 and the structured transition systems CM92 approaches. Our model recollects many properties of these sources: first, it provides a compositional way to describe both the states and the sequences of transitions performed by a given system, stressing their distributed nature. Second, a suitable notion of typed proof allows to take into account also those formalisms relying on the notions of synchronization and side-effects to determine the actual behaviour of a system. Finally, an equivalence relation over sequences of transitions is defined, equipping the system under analysis with a concurrent semantics, where each equivalence class denotes a family of computationally equivalent'''' behaviours, intended to correspond to the execution of the same set of (causally unrelated) events. 
As a further abstraction step, our model is conveniently represented using double-categories: its operational semantics is recovered with a free construction, by means of a suitable adjunction.", "Recent work has demonstrated the benefits of adopting a fully probabilistic SLAM approach in sequential motion and structure estimation from an image sequence. Unlike standard Structure from Motion (SFM) methods, this 'monocular SLAM' approach is able to achieve drift-free estimation with high frame-rate real-time operation, particularly benefitting from highly efficient active feature search, map management and mismatch rejection. A consistent thread in this research on real-time monocular SLAM has been to reduce the assumptions required. In this paper we move towards the logical conclusion of this direction by implementing a fully Bayesian Interacting Multiple Models (IMM) framework which can switch automatically between parameter sets in a dimensionless formulation of monocular SLAM. Remarkably, our approach of full sequential probability propagation means that there is no need for penalty terms to achieve the Occam property of favouring simpler models - this arises automatically. We successfully tackle the known stiffness in on-the-fly monocular SLAM start up without known patterns in the scene. The search regions for matches are also reduced in size with respect to single model EKF increasing the rejection of spurious matches. We demonstrate our method with results on a complex real image sequence with varied motion." ] }
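The cs0310020 record above concerns the Byrd box, which views every predicate call through four ports: Call, Exit, Redo and Fail. The sketch below is a loose analogy rather than the paper's formal model: a Python generator stands in for a Prolog predicate, and a wrapper reports the port events as the caller asks for successive solutions and finally exhausts them.

```python
# Loose analogy of the Byrd box: wrap a generator of solutions and report the
# four ports as the caller backtracks into it.  Illustrative only; this is not
# the formal 4-port model defined in the paper.

def four_port(name, solutions):
    print(f"Call: {name}")
    produced_any = False
    for s in solutions:
        if produced_any:
            print(f"Redo: {name}")      # re-entered after a previous Exit
        produced_any = True
        print(f"Exit: {name} = {s}")
        yield s
    if produced_any:
        print(f"Redo: {name}")          # final re-entry finds no more solutions
    print(f"Fail: {name}")

# member(X, [1, 2]) succeeds twice, then fails on further backtracking:
# Call, Exit = 1, Redo, Exit = 2, Redo, Fail.
for _ in four_port("member(X, [1, 2])", iter([1, 2])):
    pass   # the loop plays the caller that keeps asking for more solutions
```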
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation-based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH + 00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
In program slicing @cite_15 @cite_4, statements that cannot influence the value of a variable at a given program point are eliminated by considering the dependencies between statements. Backward reasoning from output values, as performed in our approach, is not possible with slicing alone. Similar ideas were successfully applied in an MBD tool analyzing VHDL programs @cite_29 @cite_8.
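To make the slicing idea concrete, the following minimal Python sketch computes a backward slice over a hand-written dependence graph; the statement names and dependence edges are illustrative assumptions and are not produced by any of the cited tools, which derive them from control- and data-flow analysis.

def backward_slice(dependences, criterion):
    # Collect every statement that may influence the slicing criterion by
    # following data/control dependence edges backwards.
    slice_set, worklist = set(), [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt in slice_set:
            continue
        slice_set.add(stmt)
        worklist.extend(dependences.get(stmt, ()))
    return slice_set

# Toy program:  s1: x = 1   s2: y = 2   s3: z = x + 1   s4: print(z)
deps = {"s4": ["s3"], "s3": ["s1"], "s2": [], "s1": []}
print(sorted(backward_slice(deps, "s4")))   # ['s1', 's3', 's4'] -- s2 is sliced away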
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_4", "@cite_8" ], "mid": [ "2253239179", "2069300761", "2066486177", "2060526835" ], "abstract": [ "We derive a least-squares formulation for MDDMp technique.A novel multi-label feature extraction algorithm is proposed.Our algorithm maximizes both feature variance and feature-label dependence.Experiments show that our algorithm is a competitive candidate. Dimensionality reduction is an important pre-processing procedure for multi-label classification to mitigate the possible effect of dimensionality curse, which is divided into feature extraction and selection. Principal component analysis (PCA) and multi-label dimensionality reduction via dependence maximization (MDDM) represent two mainstream feature extraction techniques for unsupervised and supervised paradigms. They produce many small and a few large positive eigenvalues respectively, which could deteriorate the classification performance due to an improper number of projection directions. It has been proved that PCA proposed primarily via maximizing feature variance is associated with a least-squares formulation. In this paper, we prove that MDDM with orthonormal projection directions also falls into the least-squares framework, which originally maximizes Hilbert-Schmidt independence criterion (HSIC). Then we propose a novel multi-label feature extraction method to integrate two least-squares formulae through a linear combination, which maximizes both feature variance and feature-label dependence simultaneously and thus results in a proper number of positive eigenvalues. Experimental results on eight data sets show that our proposed method can achieve a better performance, compared with other seven state-of-the-art multi-label feature extraction algorithms.", "This paper attempts to provide an adequate basis for formal definitions of the meanings of programs in appropriately defined programming languages, in such a way that a rigorous standard is established for proofs about computer programs, including proofs of correctness, equivalence, and termination. The basis of our approach is the notion of an interpretation of a program: that is, an association of a proposition with each connection in the flow of control through a program, where the proposition is asserted to hold whenever that connection is taken. To prevent an interpretation from being chosen arbitrarily, a condition is imposed on each command of the program. This condition guarantees that whenever a command is reached by way of a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. Then by induction on the number of commands executed, one sees that if a program is entered by a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. By this means, we may prove certain properties of programs, particularly properties of the form: ‘If the initial values of the program variables satisfy the relation R l, the final values on completion will satisfy the relation R 2’.", "Probabilistic programs use familiar notation of programming languages to specify probabilistic models. Suppose we are interested in estimating the distribution of the return expression r of a probabilistic program P. 
We are interested in slicing the probabilistic program P and obtaining a simpler program Sli(P) which retains only those parts of P that are relevant to estimating r, and elides those parts of P that are not relevant to estimating r. We desire that the Sli transformation be both correct and efficient. By correct, we mean that P and Sli(P) have identical estimates on r. By efficient, we mean that estimation over Sli(P) be as fast as possible. We show that the usual notion of program slicing, which traverses control and data dependencies backward from the return expression r, is unsatisfactory for probabilistic programs, since it produces incorrect slices on some programs and sub-optimal ones on others. Our key insight is that in addition to the usual notions of control dependence and data dependence that are used to slice non-probabilistic programs, a new kind of dependence called observe dependence arises naturally due to observe statements in probabilistic programs. We propose a new definition of Sli(P) which is both correct and efficient for probabilistic programs, by including observe dependence in addition to control and data dependences for computing slices. We prove correctness mathematically, and we demonstrate efficiency empirically. We show that by applying the Sli transformation as a pre-pass, we can improve the efficiency of probabilistic inference, not only in our own inference tool R2, but also in other systems for performing inference such as Church and Infer.NET.", "While the covering algorithm has been perfected recently by the iterative approaches, such as DAOmap and IMap, its application has been limited to technology mapping. The main factor preventing the covering problem's migration to other logic transformations, such as elimination and resynthesis region identification found in SIS and FBDD, is the exponential number of alternative cuts that have to be evaluated. Traditional methods of cut generation do not scale beyond a cut size of 6. In this paper, a symbolic method that can enumerate all cuts is proposed without any pruning, up to a cut size of 10. We show that it can outperform traditional methods by an order of magnitude and, as a result, scales to 100K gate benchmarks. As a practical driver, the covering problem applied to elimination is shown where it can not only produce competitive area, but also provide more than 6times average runtime reduction of the total runtime in FBDD, a BDD based logic synthesis tool with a reported order of magnitude faster runtime than SIS and commercial tools with negligible impact on area." ] }
@cite_41 @cite_22 use probability measures to guide diagnosis. The debugging process is divided into two steps. In the first step, program parts that may cause a discrepancy are computed by tracing the incorrect output back to the inputs and collecting the involved statements. In the second step, a belief network is used to identify the statements most likely to cause the fault. Although this approach was successful in debugging a very large program, it requires statistics relating statement types to fault symptoms, which makes it unsuitable for debugging arbitrary programs.
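A rough Python sketch of this two-step scheme is given below; the trace format, the per-statement-type fault priors, and the simple ranking are invented stand-ins for the belief-network computation of the cited work.

def candidate_statements(trace, wrong_output_var):
    # Step 1: walk the dynamic trace backwards from the incorrect output and
    # collect the statements that contributed to it.
    relevant, wanted = [], {wrong_output_var}
    for stmt_id, stmt_type, defs, uses in reversed(trace):
        if wanted & set(defs):
            relevant.append((stmt_id, stmt_type))
            wanted |= set(uses)
    return relevant

def rank_by_prior(candidates, fault_prior):
    # Step 2: order candidates by a prior fault probability per statement type
    # (a stand-in for evaluating the belief network).
    return sorted(candidates, key=lambda c: fault_prior.get(c[1], 0.0), reverse=True)

trace = [("s1", "assign", ["x"], []),
         ("s2", "pointer", ["p"], ["x"]),
         ("s3", "assign", ["out"], ["p"])]
priors = {"pointer": 0.4, "assign": 0.1}
print(rank_by_prior(candidate_statements(trace, "out"), priors))
# [('s2', 'pointer'), ('s3', 'assign'), ('s1', 'assign')]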
{ "cite_N": [ "@cite_41", "@cite_22" ], "mid": [ "2162045655", "2043811931", "2166007208", "2344277659" ], "abstract": [ "One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. This paper presents a new technique that uses visualization to assist with these tasks. The technique uses color to visually map the participation of each program statement in the outcome of the execution of the program with a test suite, consisting of both passed and failed test cases. Based on this visual mapping, a user can inspect the statements in the program, identify statements involved in failures, and locate potentially faulty statements. The paper also describes a prototype tool that implements our technique along with a set of empirical studies that use the tool for evaluation of the technique. The empirical studies show that, for the subject we studied, the technique can be effective in helping a user locate faults in a program.", "A major obstacle to finding program errors in a real system is knowing what correctness rules the system must obey. These rules are often undocumented or specified in an ad hoc manner. This paper demonstrates techniques that automatically extract such checking information from the source code itself, rather than the programmer, thereby avoiding the need for a priori knowledge of system rules.The cornerstone of our approach is inferring programmer \"beliefs\" that we then cross-check for contradictions. Beliefs are facts implied by code: a dereference of a pointer, p, implies a belief that p is non-null, a call to \"unlock(1)\" implies that 1 was locked, etc. For beliefs we know the programmer must hold, such as the pointer dereference above, we immediately flag contradictions as errors. For beliefs that the programmer may hold, we can assume these beliefs hold and use a statistical analysis to rank the resulting errors from most to least likely. For example, a call to \"spin_lock\" followed once by a call to \"spin_unlock\" implies that the programmer may have paired these calls by coincidence. If the pairing happens 999 out of 1000 times, though, then it is probably a valid belief and the sole deviation a probable error. The key feature of this approach is that it requires no a priori knowledge of truth: if two beliefs contradict, we know that one is an error without knowing what the correct belief is.Conceptually, our checkers extract beliefs by tailoring rule \"templates\" to a system --- for example, finding all functions that fit the rule template \"a must be paired with b.\" We have developed six checkers that follow this conceptual framework. They find hundreds of bugs in real systems such as Linux and OpenBSD. From our experience, they give a dramatic reduction in the manual effort needed to check a large system. Compared to our previous work [9], these template checkers find ten to one hundred times more rule instances and derive properties we found impractical to specify manually.", "We present a method for performing fault localization using similar program spectra. Our method assumes the existence of a faulty run and a larger number of correct runs. It then selects according to a distance criterion the correct run that most resembles the faulty run, compares the spectra corresponding to these two runs, and produces a report of \"suspicious\" parts of the program. 
Our method is widely applicable because it does not require any knowledge of the programinput and no more information from the user than a classification of the runs as either \"correct\" or \"faulty\". To experimentally validate the viability of the method, we implemented it in a tool, WHITHER using basic block profiling spectra. We experimented with two different similarity measures and the Siemens suite of 132 programs with injected bugs. To measure the success of the tool, we developed a generic method for establishing the quality of a report. The method is based on the way an \"ideal user\" would navigate the program using the report to save effort during debugging. The best results we obtained were, on average, above 50 , meaning that our ideal user would avoid looking at half of the program.", "We consider the problem of identifying the source of failure in a network after receiving alarms or having observed symptoms. To locate the root cause accurately and timely in a large communication system is challenging because a single fault can often result in a large number of alarms, and multiple faults can occur concurrently. In this paper, we present a new fault localization method using a machine-learning approach. We propose to use logistic regression to study the correlation among network events based on end-to-end measurements. Then based on the regression model, we develop fault hypothesis that best explains the observed symptoms. Unlike previous work, the machine-learning algorithm requires neither the knowledge of dependencies among network events, nor the probabilities of faults, nor the conditional probabilities of fault propagation as input. The “low requirement” feature makes it suitable for large complex networks where accurate dependencies and prior probabilities are difficult to obtain. We then evaluate the performance of the learning algorithm with respect to the accuracy of fault hypothesis and the concentration property. Experimental results and theoretical analysis both show satisfactory performance." ] }
Jackson @cite_35 introduces a framework to detect faults that manifest themselves through changed dependencies between the input and output variables of a program. The approach detects differences between the dependencies computed for a program and the dependencies specified by the user. It is able to detect certain kinds of structural faults, but does not exploit test case information. Whereas Jackson focuses on fault detection, the model-based approach is also capable of locating faults. Further, the information obtained from present and absent dependencies can help the debugger focus on certain regions and types of faults, and thus find possible causes more quickly.
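The kind of check described above can be pictured with the following small Python sketch, where both the computed and the specified dependence relations are hand-written placeholders rather than results of an actual dependence analysis.

computed = {"out": {"a", "b", "c"}}   # dependences derived by the analysis
specified = {"out": {"a", "b"}}       # dependences allowed by the specification

for var, spec_deps in specified.items():
    comp_deps = computed.get(var, set())
    unexpected = comp_deps - spec_deps  # present in the program, absent in the spec
    missing = spec_deps - comp_deps     # required by the spec, absent in the program
    if unexpected or missing:
        print(var, "unexpected:", unexpected, "missing:", missing)
# prints: out unexpected: {'c'} missing: set()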
{ "cite_N": [ "@cite_35" ], "mid": [ "2093831363", "2578609449", "2162045655", "2126905619" ], "abstract": [ "The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures for our CBR system is also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models have better performance than models based on multiple linear regression.", "In the present work, fault detection in industrial automation processes is investigated. A fault detection method for observable process variables is extended for application cases, where the observations of process variables are noisy. The principle of this method consists in building a probability distribution model and evaluating the likelihood of observations under that model. The probability distribution model is based on a hybrid automaton which takes into account several system modes, i.e. phases with continuous system behaviour. Transitions between the modes are attributed to discrete control events such as on off signals. The discrete event system composed of system modes and transitions is modeled as finite state machine. Continuous process behaviour in the particular system modes is modeled with stochastic state space models, which incorporate neural networks. Fault detection is accomplished by evaluation of the underlying probability distribution model with a particle filter. In doing so both the hybrid system model and a linear observation model for noisy observations are taken into account. Experimental results show superior fault detection performance compared to the baseline method for observable process variables. The runtime of the proposed fault detection method has been significantly reduced by parallel implementation on a GPU.", "One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. This paper presents a new technique that uses visualization to assist with these tasks. 
The technique uses color to visually map the participation of each program statement in the outcome of the execution of the program with a test suite, consisting of both passed and failed test cases. Based on this visual mapping, a user can inspect the statements in the program, identify statements involved in failures, and locate potentially faulty statements. The paper also describes a prototype tool that implements our technique along with a set of empirical studies that use the tool for evaluation of the technique. The empirical studies show that, for the subject we studied, the technique can be effective in helping a user locate faults in a program.", "During development and testing, changes made to a system to repair a detected fault can often inject a new fault into the code base. These injected faults may not be in the same files that were just changed, since the effects of a change in the code base can have ramifications in other parts of the system. We propose a methodology for determining the effect of a change and then prioritizing regression test cases by gathering software change records and analyzing them through singular value decomposition. This methodology generates clusters of files that historically tend to change together. Combining these clusters with test case information yields a matrix that can be multiplied by a vector representing a new system modification to create a prioritized list of test cases. We performed a post hoc case study using this technique with three minor releases of a software product at IBM. We found that our methodology suggested additional regression tests in 50 of test runs and that the highest-priority suggested test found an additional fault 60 of the time." ] }
@cite_16 apply similar ideas to knowledge base maintenance, exploiting hierarchical information to speed up the diagnostic process and to reduce the number of diagnoses.
{ "cite_N": [ "@cite_16" ], "mid": [ "199096196", "2405901625", "2133291246", "229019754" ], "abstract": [ "Debugging, validation, and maintenance of configurator knowledge bases are important tasks for the successful deployment of product configuration systems, due to frequent changes (e.g., new component types, new regulations) in the configurable products. Model based diagnosis techniques have shown to be a promising approach to support the test engineer in identifying faulty parts in declarative knowledge bases. Given positive (existing configurations) and negative test cases, explanations for the unexpected behavior of the configuration systems can be calculated using a consistency based approach. For the case of large and complex knowledge bases, we show how the usage of hierarchical abstractions can reduce the computation times for the explanations and in addition gives the possibility to iteratively and interactively refine diagnoses from abstract to more detailed levels. Starting from a logical definition of configuration and diagnosis of knowledge bases, we show how a basic diagnostic algorithm can be extended to support hierarchical abstractions in the configuration domain. Finally, experimental results from a prototypical implementation using an industrial constraint based configurator library are presented.", "Sequential diagnosis methods compute a series of queries for discriminating between diagnoses. Queries are answered by probing such that eventually the set of faults is identified. The computation of queries is based on the generation of a set of most probable diagnoses. However, in diagnosis problem instances where the number of minimal diagnoses and their cardinality is high, even the generation of a set of minimum cardinality diagnoses is unfeasible with the standard conflict-based approach. In this paper we propose to base sequential diagnosis on the computation of some set of minimal diagnoses using the direct diagnosis method, which requires less consistency checks to find a minimal diagnosis than the standard approach. We study the application of this direct method to high cardinality faults in knowledge-bases. In particular, our evaluation shows that the direct method results in almost the same number of queries for cases when the standard approach is applicable. However, for the cases when the standard approach is not applicable, sequential diagnosis based on the direct method is able to locate the faults correctly.", "Several artificial intelligence architectures and systems based on \"deep\" models of a domain have been proposed, in particular for the diagnostic task. These systems have several advantages over traditional knowledge based systems, but they have a main limitation in their computational complexity. One of the ways to face this problem is to rely on a knowledge compilation phase, which produces knowledge that can be used more effectively with respect to the original one. We show how a specific knowledge compilation approach can focus reasoning in abductive diagnosis, and, in particular, can improve the performances of AID, an abductive diagnosis system. The approach aims at focusing the overall diagnostic cycle in two interdependent ways: avoiding the generation of candidate solutions to be discarded a posteriori and integrating the generation of candidate solutions with discrimination among different candidates. 
Knowledge compilation is used off-line to produce operational (i.e., easily evaluated) conditions that embed the abductive reasoning strategy and are used in addition to the original model, with the goal of ruling out parts of the search space or focusing on parts of it. The conditions are useful to solve most cases using less time for computing the same solutions, yet preserving all the power of the model-based system for dealing with multiple faults and explaining the solutions. Experimental results showing the advantages of the approach are presented.", "This paper presents a new approach, called knowledge-base reduction, to the problem of checking knowledge bases for inconsistency and redundancy. The algorithm presented here makes use of concepts and techniques that have recently been advocated by de Kleer [deKleer, 1986] in conjunction with an assumption-based truth maintenance system. Knowledge-base reduction is more comprehensive than previous approaches to this problem in that it can in principle detect all potential contradictions and redundancies that exist in knowledge bases (having expressive power equivalent to propositional logic). While any approach that makes such a guarantee must be computationally intractable in the worst case, experience with KB-Reducer - a system that implements a specialized version of knowledge-base reduction and is described in this paper - has demonstrated that this technique is feasible and effective for fairly complex \"real world\" knowledge bases. Although KB-Reducer is currently intended for use by expert system developers, it is also a first step in the direction of providing safe \"local end-user modifiability\" for distant \"sites\" in a nationwide network of expert systems." ] }
Abstract interpretation for analyzing programs was first introduced by @cite_0, and later extended by @cite_25 @cite_23 to include assertions for abstract debugging. Their approach aims at analyzing every possible execution of a program, which makes it suitable for detecting errors even when no test cases are available. A common problem of these approaches is choosing appropriate abstractions in order to obtain useful results, which hinders their automatic application to many programs. @cite_27 introduces a relaxed form of representation for abstract interpretation, which allows for more complex domains while building the structure of the approximation dynamically. Our framework is strongly inspired by this work, but provides more insight into how to choose approximation operators for debugging, in particular when test information is available. These questions are not addressed in @cite_27.
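As a minimal illustration of the underlying idea (not of any particular cited system), the following Python sketch evaluates operations over the classical sign domain, i.e. the textbook rule-of-signs example; the domain and transfer functions are chosen purely for illustration.

NEG, ZERO, POS, TOP = "-", "0", "+", "T"   # T = sign unknown

def abs_mul(a, b):
    # Abstract multiplication over the sign domain.
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    # Abstract addition; mixing (+) and (-) loses precision but stays sound.
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP

print(abs_mul(NEG, POS))   # '-'  : -1515 * 17 is certainly negative
print(abs_add(NEG, POS))   # 'T'  : -1515 + 17 cannot be decided in this domain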
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_25", "@cite_23" ], "mid": [ "2043100293", "2165069483", "2158735282", "2170736936" ], "abstract": [ "A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).", "Abstract interpretation is a formal method that enables the static determination (i.e. at compile-time) of the dynamic properties (i.e. at run-time) of programs. We present an abstract interpretation-based method, called abstract debugging , which enables the static and formal debugging of programs, prior to their execution, by finding the origin of potential bugs as well as necessary conditions for these bugs not to occur at run-time. We show how invariant assertions and intermittent assertions , such as termination, can be used to formally debug programs. Finally, we show how abstract debugging can be effectively and efficiently applied to higher-order imperative programs with exceptions and jumps to non-local labels, and present the Syntox system that enables the abstract debugging of the Pascal language by the determination of the range of the scalar variables of programs.", "Abstract interpretation provides an elegant formalism for performing program analysis. Unfortunately, designing and implementing a sound, precise, scalable, and extensible abstract interpreter is difficult. In this paper, we describe an approach to creating correct-by-construction abstract interpreters that also attain the fundamental limits on precision that abstract-interpretation theory establishes. Our approach requires the analysis designer to implement only a small number of operations. In particular, we describe a systematic method for implementing an abstract interpreter that solves the following problem:Given program P, and an abstract domain A, find the most-precise inductive A-invariant for P.", "We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. 
This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software.The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization (Sect. 3 and 7), the symbolic manipulation of expressions to improve the precision of abstract transfer functions (Sect. 6.3), the octagon (Sect. 6.2.2), ellipsoid (Sect. 6.2.3), and decision tree (Sect. 6.2.4) abstract domains, all with sound handling of rounding errors in oating point computations, widening strategies (with thresholds: Sect. 7.1.2, delayed: Sect. 7.1.3) and the automatic determination of the parameters (parametrized packing: Sect. 7.2)." ] }
Recently, model checking approaches have been extended to attempt fault localization in counterexample traces. @cite_40 extended a model checking algorithm so that it is able to pinpoint the transitions in a trace that are responsible for a faulty behavior. @cite_39 presents another approach, which explores the neighborhood of counterexamples to determine causes of the faulty behavior. These techniques mostly consider deviations in control flow and do not take data dependencies into account. Also, the derivation of the abstract model from the concrete program is usually non-trivial and difficult to automate.
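A very small Python sketch of the trace-comparison idea follows; the failing and passing traces are invented lists of control-flow transitions, and the heuristic of reporting transitions unique to the failing run is only a simplified stand-in for the cited algorithms.

failing = [("L1", "L2"), ("L2", "L4"), ("L4", "L5")]
passing = [[("L1", "L2"), ("L2", "L3"), ("L3", "L5")]]

# Transitions that never occur in any passing run are reported as suspicious.
seen_in_passing = set().union(*(set(p) for p in passing))
suspicious = [t for t in failing if t not in seen_in_passing]
print(suspicious)   # [('L2', 'L4'), ('L4', 'L5')] -- where the runs diverge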
{ "cite_N": [ "@cite_40", "@cite_39" ], "mid": [ "1523041988", "2158870716", "1511405608", "2218365969" ], "abstract": [ "A traditional counterexample to a linear-time safety property shows the values of all signals at all times prior to the error. However, some signals may not be critical to causing the failure. A succinct explanation may help human understanding as well as speed up algorithms that have to analyze many such traces. In Bounded Model Checking (BMC), a counterexample is constructed from a satisfying assignment to a Boolean formula, typically in CNF. Modern SAT solvers usually assign values to all variables when the input formula is satisfiable. Deriving minimal satisfying assignments from such complete assignments does not lead to concise explanations of counterexamples because of how CNF formulae are derived from the models. Hence, we formulate the extraction of a succinct counterexample as the problem of finding a minimal assignment that, together with the Boolean formula describing the model, implies an objective. We present a two-stage algorithm for this problem, such that the result of each stage contributes to identify the “interesting” events that cause the failure. We demonstrate the effectiveness of our approach with an example and with experimental results.", "There is significant room for improving users' experiences with model checking tools. An error trace produced by a model checker can be lengthy and is indicative of a symptom of an error. As a result, users can spend considerable time examining an error trace in order to understand the cause of the error. Moreover, even state-of-the-art model checkers provide an experience akin to that provided by parsers before syntactic error recovery was invented: they report a single error trace per run. The user has to fix the error and run the model checker again to find more error traces.We present an algorithm that exploits the existence of correct traces in order to localize the error cause in an error trace, report a single error trace per error cause, and generate multiple error traces having independent causes. We have implemented this algorithm in the context of slam , a software model checker that automatically verifies temporal safety properties of C programs, and report on our experience using it to find and localize errors in device drivers. The algorithm typically narrows the location of a cause down to a few lines, even in traces consisting of hundreds of statements.", "One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and nonfailing runs, and information on the necessary changes in scheduling and environmental actions needed to cause successful runs to fail.", "Many software model checkers only detect counterexamples with deep loops after exploring numerous spurious and increasingly longer counterexamples. 
We propose a technique that aims at eliminating this weakness by constructing auxiliary paths that represent the effect of a range of loop iterations. Unlike acceleration, which captures the exact effect of arbitrarily many loop iterations, these auxiliary paths may under-approximate the behaviour of the loops. In return, the approximation is sound with respect to the bit-vector semantics of programs. Our approach supports arbitrary conditions and assignments to arrays in the loop body, but may as a result introduce quantified conditionals. To reduce the resulting performance penalty, we present two quantifier elimination techniques specially geared towards our application. Loop under-approximation can be combined with a broad range of verification techniques. We paired our techniques with lazy abstraction and bounded model checking, and evaluated the resulting tool on a number of buffer overflow benchmarks, demonstrating its ability to efficiently detect deep counterexamples in C programs that manipulate arrays." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting, for example, from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and move the control point of execution back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. Position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
Boothe @cite_18 built a C debugger with reverse execution capability using a step counter, which counts the number of executed steps, together with re-execution of the debuggee from the beginning. The same capability could also be implemented with our timestamp counter and re-execution. The difference stems from the purpose of each project: Boothe built reverse versions of existing debugger commands such as ``backward step'' and ``backward finish''. Since we aim at a more abstract control of program execution than raw debugger commands, a step execution counter is too expensive for our purpose.
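The following Python sketch shows, under strongly simplified assumptions (a deterministic debuggee modelled as a list of closures), how a step counter plus re-execution can realize a ``backward step''; it illustrates the general idea rather than Boothe's implementation.

class Replayer:
    def __init__(self, program):
        self.program = program      # deterministic debuggee: a list of steps
        self.step_count = 0         # steps executed in the current run

    def run_until(self, target_steps):
        # Re-execute from the beginning and stop after target_steps steps.
        self.step_count, state = 0, {}
        for step in self.program:
            if self.step_count >= target_steps:
                break
            step(state)
            self.step_count += 1
        return state

    def backward_step(self):
        # "Backward step" = replay up to one step before the current point.
        return self.run_until(self.step_count - 1)

prog = [lambda s: s.update(x=1), lambda s: s.update(y=2), lambda s: s.update(x=3)]
r = Replayer(prog)
print(r.run_until(3))       # {'x': 3, 'y': 2}
print(r.backward_step())    # {'x': 1, 'y': 2} -- state one step earlier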
{ "cite_N": [ "@cite_18" ], "mid": [ "1969550081", "116894366", "2626217303", "1673079227" ], "abstract": [ "This paper discusses our research into algorithms for creating anefficient bidirectional debugger in which all traditional forward movement commands can be performed with equal ease in the reverse direction. We expect that adding these backwards movement capabilities to a debugger will greatly increase its efficacy as a programming tool. The efficiency of our methods arises from our use of event countersthat are embedded into the program being debugged. These counters areused to precisely identify the desired target event on the fly as thetarget program executes. This is in contrast to traditional debuggers that may trap back to the debugger many times for some movements. For reverse movements we re-execute the program (possibly using two passes) to identify and stop at the desired earlier point. Our counter based techniques are essential for these reverse movements because they allow us to efficiently execute through the millions of events encountered during re-execution. Two other important components of this debugger are its I O logging and checkpointing. We log and later replay the results of system callsto ensure deterministic re-execution, and we use checkpointing to bound theamount of re-execution used for reverse movements. Short movements generally appear instantaneous, and the time for longer movements is usually bounded within a small constant factor of the temporal distance moved back.", "In this paper, we study the problem of automatically finding program executions that reach a particular target line. This problem arises in many debugging scenarios; for example, a developer may want to confirm that a bug reported by a static analysis tool on a particular line is a true positive. We propose two new directed symbolic execution strategies that aim to solve this problem: shortest-distance symbolic execution (SDSE) uses a distance metric in an interprocedural control flow graph to guide symbolic execution toward a particular target; and call-chain-backward symbolic execution (CCBSE) iteratively runs forward symbolic execution, starting in the function containing the target line, and then jumping backward up the call chain until it finds a feasible path from the start of the program. We also propose a hybrid strategy, Mix-CCBSE, which alternates CCBSE with another (forward) search strategy. We compare these three with several existing strategies from the literature on a suite of six GNU Coreutils programs. We find that SDSE performs extremely well in many cases but may fail badly. CCBSE also performs quite well, but imposes additional overhead that sometimes makes it slower than SDSE. Considering all our benchmarks together, Mix-CCBSE performed best on average, combining to good effect the features of its constituent components.", "We present a novel approach to proving the absence of timing channels. The idea is to partition the programâ s execution traces in such a way that each partition component is checked for timing attack resilience by a time complexity analysis and that per-component resilience implies the resilience of the whole program. We construct a partition by splitting the program traces at secret-independent branches. This ensures that any pair of traces with the same public input has a component containing both traces. Crucially, the per-component checks can be normal safety properties expressed in terms of a single execution. 
Our approach is thus in contrast to prior approaches, such as self-composition, that aim to reason about multiple (kâ � 2) executions at once. We formalize the above as an approach called quotient partitioning, generalized to any k-safety property, and prove it to be sound. A key feature of our approach is a demand-driven partitioning strategy that uses a regex-like notion called trails to identify sets of execution traces, particularly those influenced by tainted (or secret) data. We have applied our technique in a prototype implementation tool called Blazer, based on WALA, PPL, and the brics automaton library. We have proved timing-channel freedom of (or synthesized an attack specification for) 24 programs written in Java bytecode, including 6 classic examples from the literature and 6 examples extracted from the DARPA STAC challenge problems.", "By recording every state change in the run of a program, it is possible to present the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by going backwards in time,'' vastly simplifying the process of debugging. An implementation of this idea, the Omniscient Debugger,'' is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally performance issues and implementation are discussed along with possible optimizations. This paper makes three contributions of interest: the concept and technique of going backwards in time,'' the GUI which presents a global view of the program state and has a formal notion of navigation through time,'' and the integration with an event analyzer." ] }
@cite_17, Moher @cite_8 and @cite_15 save the complete memory history of a process to achieve fully random access to program states. Their systems therefore have to deal with a large ``log''. Our system, however, saves only a pair of a line number and a timestamp value to obtain the same capability, by assuming that debuggees are deterministic.
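A minimal Python sketch of this idea, assuming a deterministic debuggee modelled as a list of (line, action) pairs: instead of logging the memory history, only a per-line execution counter (the timestamp) is kept, and the state at a position is recovered by re-executing up to the recorded (line, timestamp) pair. All names are illustrative.

from collections import defaultdict

def run(program, stop_position=None):
    # program: list of (line, action); stop_position: (line, timestamp) or None.
    state, hits = {}, defaultdict(int)
    for line, action in program:
        hits[line] += 1                      # per-line timestamp counter
        if stop_position == (line, hits[line]):
            return state                     # state just before that execution of the line
        action(state)
    return state

prog = [(1, lambda s: s.update(p="x")),
        (2, lambda s: s.update(x=10)),       # first execution of line 2
        (2, lambda s: s.update(x=99))]       # second execution overwrites x
position = (2, 2)                            # "the second time line 2 executes"
print(run(prog))                             # {'p': 'x', 'x': 99}
print(run(prog, position))                   # {'p': 'x', 'x': 10} -- just before the overwrite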
{ "cite_N": [ "@cite_8", "@cite_15", "@cite_17" ], "mid": [ "2117703621", "2020301027", "2079632486", "2043811931" ], "abstract": [ "The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality.This paper studies a technique for using a generational garbage collector to reorganize data structures to produce a cache-conscious data layout, in which objects with high temporal affinity are placed next to each other, so that they are likely to reside in the same cache block. The paper explains how to collect, with low overhead, real-time profiling information about data access patterns in object-oriented languages, and describes a new copying algorithm that utilizes this information to produce a cache-conscious object layout.Preliminary results show that this technique reduces cache miss rates by 21--42 , and improves program performance by 14--37 over Cheney's algorithm. We also compare our layouts against those produced by the Wilson-Lam-Moher algorithm, which attempts to improve program locality at the page level. Our cache-conscious object layouts reduces cache miss rates by 20--41 and improves program performance by 18--31 over their algorithm, indicating that improving locality at the page level is not necessarily beneficial at the cache level.", "A Euclidean approximate sparse recovery system consists of parameters k,N, an m-by-N measurement matrix, Φ, and a decoding algorithm, D. Given a vector, x, the system approximates x by ^x=D(Φ x), which must satisfy ||x - x||2≤ C ||x - xk||2, where xk denotes the optimal k-term approximation to x. (The output ^x may have more than k terms). For each vector x, the system must succeed with probability at least 3 4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm, D. In this paper, we give a system with m=O(k log(N k)) measurements--matching a lower bound, up to a constant factor--and decoding time k log O(1) N, matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(k) factors. The columns of Φ have at most O(log2(k)log(N k)) non-zeros, each of which can be found in constant time. Our full result, an FPRAS, is as follows. If x=xk+ν1, where ν1 and ν2 (below) are arbitrary vectors (regarded as noise), then, setting ^x = D(Φ x + ν2), and for properly normalized ν, we get [||^x - x||22 ≤ (1+e)||ν1||22 + e||ν2||22,] using O((k e)log(N k)) measurements and (k e)logO(1)(N) time for decoding.", "The property of locality in program behavior has been studied and modelled extensively because of its application to memory design, code optimization, multiprogramming etc. We propose a k order Markov chain based scheme to model the sequence of time intervals between successive references to the same address in memory during program execution. 
Each unique address in a program is modelled separately. To validate our model, which we call the Inter-Reference Gap (IRG) model, we show substantial improvements in three different areas where it is applied. (1) We improve upon the miss ratio for the Least Recently Used (LRU) memory replacement algorithm by up to 37 . (2) We achieve up to 22 space-time product improvement over the Working Set (WS) algorithm for dynamic memory management. (3) A new trace compression technique is proposed which compresses up to 2.5 with zero error in WS simulations and up to 3.7 error in the LRU simulations. All these results are obtained experimentally, via trace driven simulations over a wide range of cache traces, page reference traces, object traces and database traces.", "A major obstacle to finding program errors in a real system is knowing what correctness rules the system must obey. These rules are often undocumented or specified in an ad hoc manner. This paper demonstrates techniques that automatically extract such checking information from the source code itself, rather than the programmer, thereby avoiding the need for a priori knowledge of system rules.The cornerstone of our approach is inferring programmer \"beliefs\" that we then cross-check for contradictions. Beliefs are facts implied by code: a dereference of a pointer, p, implies a belief that p is non-null, a call to \"unlock(1)\" implies that 1 was locked, etc. For beliefs we know the programmer must hold, such as the pointer dereference above, we immediately flag contradictions as errors. For beliefs that the programmer may hold, we can assume these beliefs hold and use a statistical analysis to rank the resulting errors from most to least likely. For example, a call to \"spin_lock\" followed once by a call to \"spin_unlock\" implies that the programmer may have paired these calls by coincidence. If the pairing happens 999 out of 1000 times, though, then it is probably a valid belief and the sole deviation a probable error. The key feature of this approach is that it requires no a priori knowledge of truth: if two beliefs contradict, we know that one is an error without knowing what the correct belief is.Conceptually, our checkers extract beliefs by tailoring rule \"templates\" to a system --- for example, finding all functions that fit the rule template \"a must be paired with b.\" We have developed six checkers that follow this conceptual framework. They find hundreds of bugs in real systems such as Linux and OpenBSD. From our experience, they give a dramatic reduction in the manual effort needed to check a large system. Compared to our previous work [9], these template checkers find ten to one hundred times more rule instances and derive properties we found impractical to specify manually." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
Ducassé @cite_26 allows the programmer to control execution not in terms of source statements but in terms of events such as assignments, function calls, and loops. Users write Prolog-like queries to designate breakpoints with complex conditions. This mechanism is complementary to our system and would be well suited as a front end for it, designating the positions to which we would move the control point.
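As a rough analogy only (the cited system uses Prolog-like trace queries, not C), the sketch below shows what an event-oriented breakpoint amounts to: a predicate over event attributes rather than a source line. The event structure, the emit() hook, and the bp_matches() predicate are hypothetical names, not part of any cited system.

#include <stdio.h>
#include <string.h>

typedef enum { EV_ASSIGN, EV_CALL, EV_LOOP } ev_kind;

typedef struct {
    ev_kind     kind;
    const char *func;      /* enclosing function        */
    const char *var;       /* assigned variable, if any */
    long        value;     /* new value, if any         */
} event;

/* "break on any assignment to `count` inside parse() with value > 100" */
static int bp_matches(const event *e)
{
    return e->kind == EV_ASSIGN
        && strcmp(e->func, "parse") == 0
        && strcmp(e->var,  "count") == 0
        && e->value > 100;
}

static void emit(event e)            /* called by instrumented code */
{
    if (bp_matches(&e))
        printf("break: %s.%s = %ld\n", e.func, e.var, e.value);
}

int main(void)
{
    emit((event){ EV_CALL,   "parse", "",      0   });
    emit((event){ EV_ASSIGN, "parse", "count", 7   });
    emit((event){ EV_ASSIGN, "parse", "count", 123 });  /* triggers the breakpoint */
    return 0;
}

A front end of this kind could evaluate such conditions and hand our system the position to which the control point should be moved.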
{ "cite_N": [ "@cite_26" ], "mid": [ "2028053566", "2096979400", "2045474169", "1985444680" ], "abstract": [ "In this paper, we compose six di erent Python and Prolog VMs into 4 pairwise compositions: one using C interpreters; one running on the JVM; one using meta-tracing interpreters; and one using a C interpreter and a meta-tracing interpreter. We show that programs that cross the language barrier frequently execute faster in a meta-tracing composition, and that meta-tracing imposes a significantly lower overhead on composed programs relative to mono-language programs. 1. Overview Programming language composition aims to allow the mixing of programming languages in a fine-grained manner. This vision brings many challenging problems, from the interaction of language semantics to performance. In this paper, we investigate the runtime performance of composed programs in high-level languages. We start from the assumption that execution of such programs is most likely to be through composing language implementations that use interpreters and VMs, rather than traditional compilers. This raises the question: how do di erent styles of composition a ect performance? Clearly, such a question cannot have a single answer but, to the best of our knowledge, this issue has not been explored in the past. This paper's hypothesis is that meta-tracing - a relatively new technique used to produce JIT (Just-In-Time) compilers from interpreters (1) - will lead to faster interpreter composition than traditional approaches. To test this hypothesis, we present a Python and Prolog composition which allows Python programs to embed and call Prolog programs. We then implement the composition in four di erent ways, comparing the absolute times and the relative cross-language costs of each. In addition to the 'traditional' approaches to composing interpreters (in C and upon the JVM), we also investigate the application of meta-tracing to interpreter composition. The experiments we then carry out confirm our initial hypothesis. There is a long tradition of composing Prolog with other languages (with e.g. Icon (2), Lisp (3), and Smalltalk (4)) because one can express certain types of programs far easier in Prolog than in other languages. We have two additional reasons for choosing this pairing. First, these languages represent very di erent points in the language design space: Prolog's inherently", "Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog. Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. 
Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.", "Abstract This paper proposes a new logic programming language called GOLOG whose interpreter automatically maintains an explicit representation of the dynamic world being modeled, on the basis of user supplied axioms about the preconditions and effects of actions and the initial state of the world. This allows programs to reason about the state of the world and consider the effects of various possible courses of action before committing to a particular behavior. The net effect is that programs may be written at a much higher level of abstraction than is usually possible. The language appears well suited for applications in high level control of robots and industrial processes, intelligent software agents, discrete event simulation, etc. It is based on a formal theory of action specified in an extended version of the situation calculus. A prototype implementation in Prolog has been developed.", "Presents Coca, an automated debugger for C, where the breakpoint mechanism is based on events related to language constructs. Events have semantics, whereas the source lines used by most debuggers do not have any. A trace is a sequence of events. It can be seen as an ordered relation in a database. Users can specify precisely which events they want to see by specifying values for event attributes. At each event, visible variables can be queried. The trace query language is Prolog with a handful of primitives. The trace query mechanism searches through the execution traces using both control flow and data, whereas debuggers usually search according to either control flow or data. As opposed to fully \"relational\" debuggers which use plain database querying mechanisms, the Coca trace querying mechanism does not require any storage. The analysis is done on-the-fly, synchronously with the traced execution. Coca is therefore more powerful than \"source-line\" debuggers and more efficient than relational debuggers." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_1 @cite_10 developed an event-based instrumentation tool, CCI, which inserts instrumentation code into C source code. The converted code is platform independent. The execution slowdown, however, is 2.09 times for laplace.c and 5.85 times for life.c @cite_10 . To realize our position system, only control-flow events need to be generated.
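The sketch below shows, schematically, the kind of source-to-source instrumentation this implies when only control-flow events are needed. It is not CCI's actual output; cf_event() is a hypothetical hook, and a real front end would advance a timestamp rather than print.

#include <stdio.h>

static void cf_event(const char *kind, int line)
{
    /* a real front end would advance the timestamp here instead of printing */
    fprintf(stderr, "cf-event %-6s at line %d\n", kind, line);
}

/* instrumented form of:  int fact(int n) { if (n <= 1) return 1;
                                            return n * fact(n - 1); }   */
int fact(int n)
{
    cf_event("enter", 10);              /* function entry                */
    if (n <= 1) {
        cf_event("branch", 11);         /* taken branch                  */
        return 1;
    }
    cf_event("call", 13);               /* recursive call site           */
    return n * fact(n - 1);
}

int main(void)
{
    printf("fact(4) = %d\n", fact(4));
    return 0;
}

Restricting instrumentation to control-flow points like these keeps the event stream, and hence the overhead, far smaller than instrumenting every data access.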
{ "cite_N": [ "@cite_10", "@cite_1" ], "mid": [ "2096537660", "1737373496", "2143183535", "2788459922" ], "abstract": [ "Automatic software instrumentation is usually done at the machine level or is targeted at specific program behavior for use with a particular monitoring application. The paper describes CCI, an automatic software instrumentation tool for ANSI C designed to serve a broad range of program execution monitors. CCI supports high level instrumentation for both application-specific behavior as well as standard libraries and data types. The event generation mechanism is defined by the execution monitor which uses CCI, providing flexibility for different monitors' execution models. Code explosion and the runtime cost of instrumentation are reduced by declarative configuration facilities that allow the monitor to select specific events to be instrumented. Higher level events can be defined by combining lower level events with information obtained from semantic analysis of the instrumented program.", "In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable(s) (potential bar(s)) to be updated in the next iteration. Both serial methods enjoy an O(n k) iteration complexity guarantee, where n is the number of potential bars and k the iteration counter. Our parallel implementations, written in CUDA and running on a graphical processing unit (GPU), are capable of speedups of up to two orders of magnitude when compared to their serial counterparts. Numerical experiments were performed on instances with up to 30 million potential bars.", "Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8–15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of 33.3 characters min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits min−1. Significance. 
By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.", "This paper presents the interesting observation that by performing fewer of the optimizations available in a standard compiler optimization level such as -02, while preserving their original ordering, significant savings can be achieved in both execution time and energy consumption. This observation has been validated on two embedded processors, namely the ARM Cortex-M0 and the ARM Cortex-M3, using two different versions of the LLVM compilation framework; v3.8 and v5.0. Experimental evaluation with 71 embedded benchmarks demonstrated performance gains for at least half of the benchmarks for both processors. An average execution time reduction of 2.4 and 5.3 was achieved across all the benchmarks for the Cortex-M0 and Cortex-M3 processors, respectively, with execution time improvements ranging from 1 up to 90 over the -02. The savings that can be achieved are in the same range as what can be achieved by the state-of-the-art compilation approaches that use iterative compilation or machine learning to select flags or to determine phase orderings that result in more efficient code. In contrast to these time consuming and expensive to apply techniques, our approach only needs to test a limited number of optimization configurations, less than 64, to obtain similar or even better savings. Furthermore, our approach can support multi-criteria optimization as it targets execution time, energy consumption and code size at the same time." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_3 developed EEL, a library for building tools that analyze and modify an executable program. Using EEL, we could have implemented the insertion of the timestamp-maintenance code at the executable-code level. That solution, however, is tied to a specific platform, so we chose the intermediate-code level and modified GCC instead.
{ "cite_N": [ "@cite_3" ], "mid": [ "2040183246", "2156560068", "2126851641", "2058033966" ], "abstract": [ "EEL (Executable Editing Library) is a library for building tools to analyze and modify an executable (compiled) program. The systems and languages communities have built many tools for error detection, fault isolation, architecture translation, performance measurement, simulation, and optimization using this approach of modifying executables. Currently, however, tools of this sort are difficult and time-consuming to write and are usually closely tied to a particular machine and operating system. EEL supports a machine- and system-independent editing model that enables tool builders to modify an executable without being aware of the details of the underlying architecture or operating system or being concerned with the consequences of deleting instructions or adding foreign code.", "Tuning compiler optimizations for rapidly evolving hardware makes porting and extending an optimizing compiler for each new platform extremely challenging. Iterative optimization is a popular approach to adapting programs to a new architecture automatically using feedback-directed compilation. However, the large number of evaluations required for each program has prevented iterative compilation from widespread take-up in production compilers. Machine learning has been proposed to tune optimizations across programs systematically but is currently limited to a few transformations, long training phases and critically lacks publicly released, stable tools. Our approach is to develop a modular, extensible, self-tuning optimization infrastructure to automatically learn the best optimizations across multiple programs and architectures based on the correlation between program features, run-time behavior and optimizations. In this paper we describe Milepost GCC, the first publicly-available open-source machine learning-based compiler. It consists of an Interactive Compilation Interface (ICI) and plugins to extract program features and exchange optimization data with the cTuning.org open public repository. It automatically adapts the internal optimization heuristic at function-level granularity to improve execution time, code size and compilation time of a new program on a given architecture. Part of the MILEPOST technology together with low-level ICI-inspired plugin framework is now included in the mainline GCC. We developed machine learning plugins based on probabilistic and transductive approaches to predict good combinations of optimizations. Our preliminary experimental results show that it is possible to automatically reduce the execution time of individual MiBench programs, some by more than a factor of 2, while also improving compilation time and code size. On average we are able to reduce the execution time of the MiBench benchmark suite by 11 for the ARC reconfigurable processor. We also present a realistic multi-objective optimization scenario for Berkeley DB library using Milepost GCC and improve execution time by approximately 17 , while reducing compilation time and code size by 12 and 7 respectively on Intel Xeon processor.", "It has become common to distribute software in forms that are isomorphic to the original source code. An important example is Java bytecode. 
Since such codes are easy to decompile, they increase the risk of malicious reverse engineering attacks.In this paper we describe the design of a Java code obfuscator, a tool which - through the application of code transformations - converts a Java program into an equivalent one that is more difficult to reverse engineer.We describe a number of transformations which obfuscate control-flow. Transformations are evaluated with respect to potency (To what degree is a human reader confused?), resilience (How well are automatic deobfuscation attacks resisted?), cost (How much time space overhead is added?), and stealth (How well does obfuscated code blend in with the original code?).The resilience of many control-altering transformations rely on the resilience of opaque predicates. These are boolean valued expressions whose values are known to the obfuscator but difficult to determine for an automatic deobfuscator. We show how to construct resilient, cheap, and stealthy opaque predicates based on the intractability of certain static analysis problems such as alias analysis.", "This paper presents a new approach for automatically deriving worst-case resource bounds for C programs. The described technique combines ideas from amortized analysis and abstract interpretation in a unified framework to address four challenges for state-of-the-art techniques: compositionality, user interaction, generation of proof certificates, and scalability. Compositionality is achieved by incorporating the potential method of amortized analysis. It enables the derivation of global whole-program bounds with local derivation rules by naturally tracking size changes of variables in sequenced loops and function calls. The resource consumption of functions is described abstractly and a function call can be analyzed without access to the function body. User interaction is supported with a new mechanism that clearly separates qualitative and quantitative verification. A user can guide the analysis to derive complex non-linear bounds by using auxiliary variables and assertions. The assertions are separately proved using established qualitative techniques such as abstract interpretation or Hoare logic. Proof certificates are automatically generated from the local derivation rules. A soundness proof of the derivation system with respect to a formal cost semantics guarantees the validity of the certificates. Scalability is attained by an efficient reduction of bound inference to a linear optimization problem that can be solved by off-the-shelf LP solvers. The analysis framework is implemented in the publicly-available tool C4B. An experimental evaluation demonstrates the advantages of the new technique with a comparison of C4B with existing tools on challenging micro benchmarks and the analysis of more than 2900 lines of C code from the cBench benchmark suite." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one Cbased system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
The problem of debugging memory corruption in production systems was explicitly identified by Patil and Fischer in @cite_2 , in which they describe using idle processors to absorb their technique's substantial performance impact. Unfortunately, this is not practical in a general-purpose system: idle processors cannot be relied upon to be available for extraneous processing. Indeed, in performance-critical systems any performance impact is often unacceptable.
{ "cite_N": [ "@cite_2" ], "mid": [ "2005139304", "2098809490", "2091921136", "2233814931" ], "abstract": [ "By studying the behavior of several programs that crash due to memory errors, we observed that locating the errors can be challenging because significant propagation of corrupt memory values can occur prior to the point of the crash. In this article, we present an automated approach for locating memory errors in the presence of memory corruption propagation. Our approach leverages the information revealed by a program crash: when a crash occurs, this reveals a subset of the memory corruption that exists in the execution. By suppressing (nullifying) the effect of this known corruption during execution, the crash is avoided and any remaining (hidden) corruption may then be exposed by subsequent crashes. The newly exposed corruption can then be suppressed in turn. By iterating this process until no further crashes occur, the first point of memory corruption—and the likely root cause of the program failure—can be identified. However, this iterative approach may terminate prematurely, since programs may not crash even when memory corruption is present during execution. To address this, we show how crashes can be exposed in an execution by manipulating the relative ordering of particular variables within memory. By revealing crashes through this variable re-ordering, the effectiveness and applicability of the execution suppression approach can be improved. We describe a set of experiments illustrating the effectiveness of our approach in consistently and precisely identifying the first points of memory corruption in executions that fail due to memory errors. We also discuss a baseline software implementation of execution suppression that incurs an average overhead of 7.2x, and describe how to reduce this overhead to 1.8x through hardware support.", "Memory leaks and memory corruption are two major forms of software bugs that severely threaten system availability and security. According to the US-CERT vulnerability notes database, 68 of all reported vulnerabilities in 2003 were caused by memory leaks or memory corruption. Dynamic monitoring tools, such as the state-of-the-art Purify, are commonly used to detect memory leaks and memory corruption. However, most of these tools suffer from high overhead, with up to a 20 times slowdown, making them infeasible to be used for production-runs. This paper proposes a tool called SafeMem to detect memory leaks and memory corruption on-the-fly during production-runs. This tool does not rely on any new hardware support. Instead, it makes a novel use of existing ECC memory technology and exploits intelligent dynamic memory usage behavior analysis to detect memory leaks and corruption. We have evaluated SafeMem with seven real-world applications that contain memory leak or memory corruption bugs. SafeMem detects all tested bugs with low overhead (only 1.6 -14.4 ), 2-3 orders of magnitudes smaller than Purify. Our results also show that ECC-protection is effective in pruning false positives for memory leak detection, and in reducing the amount of memory waste (by a factor of 64-74) used for memory monitoring in memory corruption detection compared to page-protection.", "In this paper, we consider processors which provide an idle instruction to the user for powering down processor units which are not required during portions of program execution. 
We describe algorithms which can be implemented in an energy-aware compiler to make efficient use of such an instruction. These algorithms are based on program static analysis and a combinatorial optimization formulation of the problem. We assume as input an assembly language program of the processor in question. The problem is to insert the idle instruction at different places in the assembly language program such that energy saving is maximized and the execution time of the resulting program is not increased beyond a user-specified value.", "Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out of bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out of bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out of bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks to continue to correctly service user requests without security vulnerabilities." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one Cbased system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
Some memory allocators have addressed debugging problems in production by allowing their behavior to be dynamically changed to provide greater debugging support @cite_13 . This lets optimal allocators be deployed into production while still permitting their debugging features to be enabled later should problems arise. A common way for these allocators to detect buffer overruns is to optionally place red zones around allocated memory. However, this only provides for immediate identification of the errant code if stores to the red zone induce a synchronous fault. Such faults are typically achieved by co-opting the virtual memory system in some way --- either by surrounding a buffer with unmapped regions, or by performing a check on each access. The first has enormous cost in terms of space, and the second in terms of time --- neither can be acceptably enabled at all times. Thus, these approaches are still only useful for reproducible memory corruption problems.
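For concreteness, the sketch below shows the cheaper, pattern-checked variant of red zones; it is not the code of any cited allocator, and the names (rz_malloc, rz_free, REDZONE, PATTERN) are invented for illustration. Corruption is caught only when the zones are verified (here, at free time), which is precisely why this variant cannot point synchronously at the errant store.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REDZONE 16                       /* canary bytes on each side    */
#define PATTERN 0xAB

static void *rz_malloc(size_t n)
{
    unsigned char *p = malloc(n + 2 * REDZONE + sizeof(size_t));
    if (!p) return NULL;
    memcpy(p, &n, sizeof n);             /* remember the user size       */
    memset(p + sizeof n, PATTERN, REDZONE);              /* front zone   */
    memset(p + sizeof n + REDZONE + n, PATTERN, REDZONE);/* rear zone    */
    return p + sizeof n + REDZONE;       /* hand back the user area      */
}

static void rz_free(void *up)
{
    unsigned char *u = up, *p = u - REDZONE - sizeof(size_t);
    size_t n;
    memcpy(&n, p, sizeof n);
    for (size_t i = 0; i < REDZONE; i++)
        if (p[sizeof n + i] != PATTERN || u[n + i] != PATTERN) {
            fprintf(stderr, "red zone smashed around %p\n", up);
            abort();
        }
    free(p);
}

int main(void)
{
    char *buf = rz_malloc(8);
    buf[8] = 'x';                        /* off-by-one overrun           */
    rz_free(buf);                        /* detected only here           */
    return 0;
}

The synchronous-fault variants mentioned above replace the canary check with unmapped guard regions or per-access checks, which is where the prohibitive space or time cost comes from.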
{ "cite_N": [ "@cite_13" ], "mid": [ "2128274900", "2048242615", "2079366392", "2114700811" ], "abstract": [ "Parallel, multithreaded C and C++ programs such as web servers, database managers, news servers, and scientific applications are becoming increasingly prevalent. For these applications, the memory allocator is often a bottleneck that severely limits program performance and scalability on multiprocessor systems. Previous allocators suffer from problems that include poor performance and scalability, and heap organizations that introduce false sharing. Worse, many allocators exhibit a dramatic increase in memory consumption when confronted with a producer-consumer pattern of object allocation and freeing. This increase in memory consumption can range from a factor of P (the number of processors) to unbounded memory consumption.This paper introduces Hoard, a fast, highly scalable allocator that largely avoids false sharing and is memory efficient. Hoard is the first allocator to simultaneously solve the above problems. Hoard combines one global heap and per-processor heaps with a novel discipline that provably bounds memory consumption and has very low synchronization costs in the common case. Our results on eleven programs demonstrate that Hoard yields low average fragmentation and improves overall program performance over the standard Solaris allocator by up to a factor of 60 on 14 processors, and up to a factor of 18 over the next best allocator we tested.", "In commercial-off-the-shelf (COTS) multi-core systems, the execution times of tasks become hard to predict because of contention on shared resources in the memory hierarchy. In particular, a task running in one processor core can delay the execution of another task running in another processor core. This is due to the fact that tasks can access data in the same cache set shared among processor cores or in the same memory bank in the DRAM memory (or both). Such cache and bank interference effects have motivated the need to create isolation mechanisms for resources accessed by more than one task. One popular isolation mechanism is cache coloring that divides the cache into multiple partitions. With cache coloring, each task can be assigned exclusive cache partitions, thereby preventing cache interference from other tasks. Similarly, bank coloring allows assigning exclusive bank partitions to tasks. While cache coloring and some bank coloring mechanisms have been studied separately, interactions between the two schemes have not been studied. Specifically, while memory accesses to two different bank colors do not interfere with each other at the bank level, they may interact at the cache level. Similarly, two different cache colors avoid cache interference but may not prevent bank interference. Therefore it is necessary to coordinate cache and bank coloring approaches. In this paper, we present a coordinated cache and bank coloring scheme that is designed to prevent cache and bank interference simultaneously. We also developed color allocation algorithms for configuring a virtual memory system to support our scheme which has been implemented in the Linux kernel. In our experiments, we observed that the execution time can increase by 60 due to inter-task interference when we use only cache coloring. 
Our coordinated approach can reduce this figure down to 12 (an 80 reduction).", "This paper introduces dynamic object colocation, an optimization to reduce copying costs in generational and other incremental garbage collectors by allocating connected objects together in the same space. Previous work indicates that connected objects belong together because they often have similar lifetimes. Generational collectors, however, allocate all new objects in a nursery space. If these objects are connected to data structures residing in the mature space, the collector must copy them. Our solution is a cooperative optimization that exploits compiler analysis to make runtime allocation decisions. The compiler analysis discovers potential object connectivity for newly allocated objects. It then replaces these allocations with calls to coalloc, which takes an extra parameter called the colocator object. At runtime, coalloc determines the location of the colocator and allocates the new object together with it in either the nursery or mature space. Unlike pretenuring, colocation makes precise per-object allocation decisions and does not require lifetime analysis or allocation site homogeneity. Experimental results for SPEC Java benchmarks using Jikes RVM show colocation can reduce garbage collection time by 50 to 75 , and total performance by up to 1 .", "Spatial memory errors (like buffer overflows) are still a major threat for applications written in C. Most recent work focuses on memory safety - when a memory error is detected at runtime, the application is aborted. Our goal is not only to increase the memory safety of applications but also to increase the application's availability. Therefore, we need to tolerate spatial memory errors at runtime. We have implemented a compiler extension, Boundless, that automatically adds the tolerance feature to C applications at compile time. We show that this can increase the availability of applications. Our measurements also indicate that Boundless has a lower performance overhead than SoftBound, a state-of-the-art approach to detect spatial memory errors. Our performance gains result from a novel way to represent pointers. Nevertheless, Boundless is compatible with existing C code. Additionally, Boundless provides a trade-off to reduce the runtime overhead even further: We introduce vulnerability specific patching for spatial memory errors to tolerate only known vulnerabilities. Vulnerability specific patching has an even lower runtime overhead than full tolerance." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one Cbased system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
If memory corruption cannot be acceptably prevented in production code, then the focus must shift to debugging the corruption postmortem. While the notion of postmortem debugging has existed since the dawn of debugging @cite_5 , there seems to have been very little work on postmortem debugging of memory corruption per se; such work as exists has focused on race condition detection in parallel and distributed programs. The lack of work in this area is surprising given the clear advantages of postmortem debugging for production systems --- advantages that were clearly elucidated by McGregor and Malone in @cite_10 :
{ "cite_N": [ "@cite_5", "@cite_10" ], "mid": [ "2005139304", "2098809490", "2136938453", "2170200862" ], "abstract": [ "By studying the behavior of several programs that crash due to memory errors, we observed that locating the errors can be challenging because significant propagation of corrupt memory values can occur prior to the point of the crash. In this article, we present an automated approach for locating memory errors in the presence of memory corruption propagation. Our approach leverages the information revealed by a program crash: when a crash occurs, this reveals a subset of the memory corruption that exists in the execution. By suppressing (nullifying) the effect of this known corruption during execution, the crash is avoided and any remaining (hidden) corruption may then be exposed by subsequent crashes. The newly exposed corruption can then be suppressed in turn. By iterating this process until no further crashes occur, the first point of memory corruption—and the likely root cause of the program failure—can be identified. However, this iterative approach may terminate prematurely, since programs may not crash even when memory corruption is present during execution. To address this, we show how crashes can be exposed in an execution by manipulating the relative ordering of particular variables within memory. By revealing crashes through this variable re-ordering, the effectiveness and applicability of the execution suppression approach can be improved. We describe a set of experiments illustrating the effectiveness of our approach in consistently and precisely identifying the first points of memory corruption in executions that fail due to memory errors. We also discuss a baseline software implementation of execution suppression that incurs an average overhead of 7.2x, and describe how to reduce this overhead to 1.8x through hardware support.", "Memory leaks and memory corruption are two major forms of software bugs that severely threaten system availability and security. According to the US-CERT vulnerability notes database, 68 of all reported vulnerabilities in 2003 were caused by memory leaks or memory corruption. Dynamic monitoring tools, such as the state-of-the-art Purify, are commonly used to detect memory leaks and memory corruption. However, most of these tools suffer from high overhead, with up to a 20 times slowdown, making them infeasible to be used for production-runs. This paper proposes a tool called SafeMem to detect memory leaks and memory corruption on-the-fly during production-runs. This tool does not rely on any new hardware support. Instead, it makes a novel use of existing ECC memory technology and exploits intelligent dynamic memory usage behavior analysis to detect memory leaks and corruption. We have evaluated SafeMem with seven real-world applications that contain memory leak or memory corruption bugs. SafeMem detects all tested bugs with low overhead (only 1.6 -14.4 ), 2-3 orders of magnitudes smaller than Purify. Our results also show that ECC-protection is effective in pruning false positives for memory leak detection, and in reducing the amount of memory waste (by a factor of 64-74) used for memory monitoring in memory corruption detection compared to page-protection.", "Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. 
We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.", "Detecting data races in shared-memory parallel programs is an important debugging problem. This paper presents a new protocol for run-time detection of data races in ex­ ecutions of shared-memory programs with nested fork-join parallelism and no other inter-thread synch ronization. This protocol has signifi cantly smaller worst-case run-time over­ head than previous techniques. The worst-case space re­ quired by our protocol when monitoring an execution of a program P is O(V N), where V is the number of shared variables in P, and N is the maximum dynamic nesting of parallel constructs in P's execution. The worst-case time required to perform any monitoring operation is O(N). We formally prove that our new protocol always reports a non­ empty subset of the data races in a monitored program ex­ ecution and describe how this property leads to an effective debugging strategy." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one Cbased system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
The only nod to postmortem debugging of memory corruption seems to come from memory allocators such as the slab allocator @cite_13 used by the Solaris kernel. This allocator can optionally log information with each allocation and deallocation; in the event of failure, these logs can be used to determine the subsystem allocating the overrun buffer. While this mechanism has proved to be enormously useful in debugging memory corruption problems in the Solaris kernel, it is still far too space- and time-intensive to be enabled at all times in production environments.
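As a toy illustration of the kind of logging described above (not the Solaris slab allocator's implementation), the sketch below keeps a small ring buffer of allocation and free records; from a crash dump, such a log lets a postmortem tool identify the subsystem that last owned a corrupted buffer. All names are hypothetical, and a real allocator would record return addresses rather than strings.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    void       *addr;
    size_t      size;
    const char *caller;   /* would be a return address in a real system */
    int         is_free;
} audit_rec;

#define LOG_SLOTS 1024
static audit_rec audit_log[LOG_SLOTS];
static unsigned  audit_next;

static void audit(void *a, size_t s, const char *who, int is_free)
{
    audit_rec *r = &audit_log[audit_next++ % LOG_SLOTS];  /* ring buffer */
    r->addr = a; r->size = s; r->caller = who; r->is_free = is_free;
}

static void *a_malloc(size_t n, const char *who)
{
    void *p = malloc(n);
    if (p) audit(p, n, who, 0);
    return p;
}

static void a_free(void *p, const char *who)
{
    audit(p, 0, who, 1);
    free(p);
}

int main(void)
{
    void *p = a_malloc(64, "net_subsystem");
    a_free(p, "net_subsystem");
    /* in a crash dump, audit_log[] tells us the buffer's last owner */
    printf("logged %u events\n", audit_next);
    return 0;
}

Even this modest bookkeeping adds work and cache traffic to every allocation and free, which is the space and time cost that keeps such logging disabled by default in production.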
{ "cite_N": [ "@cite_13" ], "mid": [ "1746694335", "2151971447", "1989945523", "2128274900" ], "abstract": [ "This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating complex objects by retaining their state between uses. These same primitives prove equally effective for managing stateless memory (e.g. data pages and temporary buffers) because they are space-efficient and fast. The allocator's object caches respond dynamically to global memory pressure, and employ an object-coloring scheme that improves the system's overall cache utilization and bus balance. The allocator also has several statistical and debugging features that can detect a wide range of problems throughout the system.", "Buffer caches in operating systems keep active file blocks in memory to reduce disk accesses. Related studies have been focused on how to minimize buffer misses and the caused performance degradation. However, the side effects and performance implications of accessing the data in buffer caches (i.e. buffer cache hits) have not been paid attention. In this paper, we show that accessing buffer caches can cause serious performance degradation on multicores, particularly with shared last level caches (LLCs). There are two reasons for this problem. First, data in files normally have weaker localities than data objects in virtual memory spaces. Second, due to the shared structure of LLCs on multicore processors, an application accessing the data in a buffer cache may flush the to-be-reused data of its co-running applications from the shared LLC and significantly slow down these applications. The paper proposes a buffer cache design called Selected Region Mapping Buffer (SRM-buffer) for multicore systems to effectively address the cache pollution problem caused by OS buffer. SRM-buffer improves existing OS buffer management with an enhanced page allocation policy that carefully selects mapping physical pages upon buffer misses. For a sequence of blocks accessed by an application, SRM-buffer allocates physical pages that are mapped to a selected region consisting of a small portion of sets in LLC. Thus, when these blocks are accessed, cache pollution is effectively limited within the small cache region. We have implemented a prototype of SRM-buffer into Linux kernel, and tested it with extensive workloads. Performance evaluation shows SRM-buffer can improve system performance and decrease the execution times of workloads by up to 36 .", "The memory system performance of many programs can be improved by coallocating contemporaneously accessed heap objects in the same cache block. We present a novel profile-based analysis for producing such a layout. The analysis achieves cacheconscious coallocation of a hot data stream H (i.e., a regular data access pattern that frequently repeats) by isolating and combining allocation sites of object instances that appear in H such that intervening allocations coming from other sites are separated. The coallocation solution produced by the analysis is enforced by an automatic tool, cminstr, that redirects a program's heap allocations to a run-time coallocation library comalloc. We also extend the analysis to coallocation at object field granularity. The resulting field coallocation solution generalizes common data restructuring techniques, such as field reordering, object splitting, and object merging, and allows their combination. 
Furthermore, it provides insight into object restructuring by breaking down the coallocation benefit on a per-technique basis, which provides the opportunity to pick the \"sweet spot\" for each program. Experimental results using a set of memory-performance-limited benchmarks, including a few SPECInt2000 programs, and Microsoft VisualFoxPro, indicate that programs possess significant coallocation opportunities. Automatic object coallocation improves execution time by 13 on average in the presence of hardware prefetching. Hand-implemented field coallocation solutions for two of the benchmarks produced additional improvements (12 and 22 ) but the effort involved suggests implementing an automated version for type-safe languages, such as Java and C#.", "Parallel, multithreaded C and C++ programs such as web servers, database managers, news servers, and scientific applications are becoming increasingly prevalent. For these applications, the memory allocator is often a bottleneck that severely limits program performance and scalability on multiprocessor systems. Previous allocators suffer from problems that include poor performance and scalability, and heap organizations that introduce false sharing. Worse, many allocators exhibit a dramatic increase in memory consumption when confronted with a producer-consumer pattern of object allocation and freeing. This increase in memory consumption can range from a factor of P (the number of processors) to unbounded memory consumption.This paper introduces Hoard, a fast, highly scalable allocator that largely avoids false sharing and is memory efficient. Hoard is the first allocator to simultaneously solve the above problems. Hoard combines one global heap and per-processor heaps with a novel discipline that provably bounds memory consumption and has very low synchronization costs in the common case. Our results on eleven programs demonstrate that Hoard yields low average fragmentation and improves overall program performance over the standard Solaris allocator by up to a factor of 60 on 14 processors, and up to a factor of 18 over the next best allocator we tested." ] }
cs0309055
1495757250
In this paper, we propose a mathematical framework for automated bug localization. This framework can be briefly summarized as follows. A program execution can be represented as a rooted acyclic directed graph. We define an execution snapshot by a cut-set on the graph. A program state can be regarded as a conjunction of labels on edges in a cut-set. Then we argue that a debugging task is a pruning process of the execution graph by using cut-sets. A pruning algorithm, i.e., a debugging task, is also presented.
* Shapiro's algorithmic debugging was invented for Prolog programs @cite_2 . Fig. shows our interpretation of his work. From our viewpoint, it uses a proof tree as an execution graph. (Note that this interpretation differs from a normal proof tree: it is based on the line graph of a normal proof tree, where a line graph is obtained by interchanging the vertices and edges of the original graph.) He used only one edge as a cut-set, since removing any edge divides a tree into two disconnected subtrees. A state is also simple, because a single label, i.e., one unified clause, is enough. In this work, one step of the pruning process is fully automated, and the programmer carries out the other step by answering "yes" or "no" to tell the system whether the label on the edge is correct. GADT @cite_6 and Lichtenstein's system @cite_0 can be interpreted in the same manner, as they are straightforward extensions of Shapiro's work.
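To make the yes/no pruning concrete, here is a tiny C sketch over a generic call tree rather than a Prolog proof tree; it is not Shapiro's algorithm verbatim, and ask() merely simulates the programmer's answer. It exploits the invariant described above: a node whose result is wrong but whose children are all correct is where the bug lies.

#include <stdio.h>

#define MAX_KIDS 4

typedef struct node {
    const char  *label;             /* e.g. "fact(3) = 9"               */
    int          correct;           /* simulated programmer answer      */
    int          nkids;
    struct node *kids[MAX_KIDS];
} node;

static int ask(const node *n)       /* "is this result correct? (y/n)"  */
{
    printf("is \"%s\" correct? -> %s\n", n->label, n->correct ? "yes" : "no");
    return n->correct;
}

/* Descend into any child whose result is wrong; if all children are
   correct but this node is wrong, this node's own clause is buggy.     */
static const node *locate(const node *n)
{
    for (int i = 0; i < n->nkids; i++)
        if (!ask(n->kids[i]))
            return locate(n->kids[i]);
    return n;
}

int main(void)
{
    node leaf_ok  = { "fact(1) = 1", 1, 0, { 0 } };
    node mid_bad  = { "fact(2) = 3", 0, 1, { &leaf_ok } };
    node root_bad = { "fact(3) = 9", 0, 1, { &mid_bad } };

    if (!ask(&root_bad))
        printf("buggy node: %s\n", locate(&root_bad)->label);
    return 0;
}

In our terms, each answer prunes the execution graph on one side of a single-edge cut-set, narrowing the search to the subtree that still contains the fault.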
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_2" ], "mid": [ "1514468887", "2134080718", "2963702702", "2737338020" ], "abstract": [ "The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize.", "This paper presents a method for semi-automatic bug localization, generalized algorithmic debugging, which has been integrated with the category partition method for functional testing. In this way the efficiency of the algorithmic debugging method for bug localization can be improved by using test specifications and test results. The long-range goal of this work is a semi-automatic debugging and testing system which can be used during large-scale program development of nontrivial programs. The method is generally applicable to procedural langua ges and is not dependent on any ad hoc assumptions regarding the subject program. The original form of algorithmic debugging, introduced by Shapiro, was however limited to small Prolog programs without side-effects, but has later been generalized to concurrent logic programming languages. Another drawback of the original method is the large number of interactions with the user during bug localization. To our knowledge, this is the first method which uses category partition testing to improve the bug localization properties of algorithmic debugging. The method can avoid irrelevant questions to the programmer by categorizing input parameters and then match these against test cases in the test database. Additionally, we use program slicing, a data flow analysis technique, to dynamically compute which parts of the program are relevant for the search, thus further improving bug localization. We believe that this is the first generalization of algorithmic debugging for programs with side-effects written in imperative languages such as Pascal. These improvements together makes it more feasible to debug larger programs. However, additional improvements are needed to make it handle pointer-related side-effects and concurrent Pascal programs. 
A prototype generalized algorithmic debugger for a Pascal subset without pointer side-effects and a test case generator for application programs in Pascal, C, dBase, and LOTUS have been implemented.", "We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of Kahn's result that a disjoint union of copies of Kd;d maximizes the number of independent sets of a bipartite d-regular graph, Galvin and Tet ali's result that the independence polynomial is maximized by the same, and Zhao's extension of both results to all d-regular graphs. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of Kd;d. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstrom. In probabilistic language, our main theorems state that for all d-regular graphs and all �, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity � are maximized by Kd;d. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. Using a variant of the method we prove a lower bound on the occupancy fraction of the hard-core model on any d-regular, vertex-transitive, bipartite graph: the occupancy fraction of such a graph is strictly greater than the occupancy fraction of the unique translationinvariant hard-core measure on the infinite d-regular tree", "We present a randomised distributed maximal independent set (MIS) algorithm for arbitrary graphs of size @math that halts in time @math with probability @math , each message containing @math bit: thus its bit complexity per channel is @math (the bit complexity is the number of bits we need to solve a distributed task, it measures the communication complexity). We assume that the graph is anonymous: unique identities are not available to distinguish the processes; we only assume that each vertex distinguishes between its neighbours by locally known channel names. Furthermore we do not assume that the size (or an upper bound on the size) of the graph is known. This algorithm is optimal (modulo a multiplicative constant) for the bit complexity and improves the best previous randomised distributed MIS algorithms (deduced from the randomised PRAM algorithm due to Luby) for general graphs which is @math per channel (it halts in time @math and the size of each message is @math ). This result is based on a powerful and general technique for converting unrealistic exchanges of messages containing real numbers drawn at random on each vertex of a network into exchanges of bits. Then we consider a natural question: what is the impact of a vertex inclusion in the MIS on distant vertices? We prove that this impact vanishes rapidly as the distance grows for bounded-degree vertices. We provide a counter-example that shows this result does not hold in general. We prove also that these results remain valid for Luby's algorithm presented by Lynch and by Wattenhofer. This question remains open for the variant given by Peleg." ] }
cs0308007
2950479998
The past years have seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine -- OPTYap -- and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on the environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, it emphasizes our belief that through applying or-parallelism and tabling to logic programs the range of applications for Logic Programming can be increased.
A first proposal on how to exploit implicit parallelism in tabling systems was Freire's @cite_53 . In this model, each tabled subgoal is computed independently in a single computational thread, a generator thread. Each generator thread is associated with a unique tabled subgoal and is responsible for fully exploiting its search tree in order to obtain the complete set of answers. A generator thread that depends on other tabled subgoals asynchronously consumes answers as the corresponding generator threads make them available. Within this model, parallelism results from having several generator threads running concurrently. Parallelism arising from non-tabled subgoals or from execution alternatives to tabled subgoals is not exploited. Moreover, we expect that scheduling and load balancing would be even harder than in traditional parallel systems.
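The following Python sketch illustrates the producer/consumer structure of this model under stated assumptions: one thread per tabled subgoal, a queue standing in for the subgoal's answer table, and a sentinel in place of real completion detection. It is a schematic illustration only, not Freire's design or OPTYap code.

```python
# Schematic sketch of the "one generator thread per tabled subgoal" idea:
# each generator owns the answer table of its subgoal and other generators
# consume those answers asynchronously. Completion detection is elided;
# the producer closes its table with a sentinel instead.

import threading, queue

EDGES = {("a", "b"), ("a", "c"), ("b", "d")}
DONE = object()                       # sentinel marking a completed table

def edge_generator(table):
    """Generator thread for the tabled subgoal edge(a, X)."""
    for x, y in EDGES:
        if x == "a":
            table.put(("a", y))       # publish an answer
    table.put(DONE)

def path_generator(edge_table, table):
    """Generator for path(a, X): consumes edge/2 answers as they arrive."""
    while True:
        ans = edge_table.get()        # block until the producer publishes
        if ans is DONE:
            break
        table.put(ans)                # path(a,X) :- edge(a,X).
    table.put(DONE)

edge_tab, path_tab = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=edge_generator, args=(edge_tab,)),
           threading.Thread(target=path_generator, args=(edge_tab, path_tab))]
for t in threads:
    t.start()
for t in threads:
    t.join()

answers = []
while True:
    a = path_tab.get()
    if a is DONE:
        break
    answers.append(a)
print("path(a,X) answers:", answers)
```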
{ "cite_N": [ "@cite_53" ], "mid": [ "2787223504", "2126170153", "1994534328", "2540490069" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "The authors show how to design truthful (dominant strategy) mechanisms for several combinatorial problems where each agent's secret data is naturally expressed by a single positive real number. The goal of the mechanisms we consider is to allocate loads placed on the agents, and an agent's secret data is the cost she incurs per unit load. We give an exact characterization for the algorithms that can be used to design truthful mechanisms for such load balancing problems using appropriate side payments. We use our characterization to design polynomial time truthful mechanisms for several problems in combinatorial optimization to which the celebrated VCG mechanism does not apply. For scheduling related parallel machines (Q spl par C sub max ), we give a 3-approximation mechanism based on randomized rounding of the optimal fractional solution. This problem is NP-complete, and the standard approximation algorithms (greedy load-balancing or the PTAS) cannot be used in truthful mechanisms. We show our mechanism to be frugal, in that the total payment needed is only a logarithmic factor more than the actual costs incurred by the machines, unless one machine dominates the total processing power. We also give truthful mechanisms for maximum flow, Q spl par spl Sigma C sub j (scheduling related machines to minimize the sum of completion times), optimizing an affine function over a fixed set, and special cases of uncapacitated facility location. 
In addition, for Q spl par spl Sigma w sub j C sub j (minimizing the weighted sum of completion times), we prove a lower bound of 2 spl radic 3 for the best approximation ratio achievable by truthful mechanism.", "In this paper, we propose a joint subchannel and power allocation algorithm for the downlink of an orthogonal frequency-division multiple access (OFDMA) mixed femtocell macrocell network deployment. Specifically, the total throughput of all femtocell user equipments (FUEs) is maximized while the network capacity of an existing macrocell is always protected. Towards this end, we employ an iterative approach in which OFDM subchannels and transmit powers of base stations (BS) are alternatively assigned and optimized at every step. For a fixed power allocation, we prove that the optimal policy in each cell is to give each subchannel to the user with the highest signal-to-interference-plus-noise ratio (SINR) on that subchannel. For a given subchannel assignment, we adopt the successive convex approximation (SCA) approach and transform the highly nonconvex power allocation problem into a sequence of convex subproblems. In the arithmetic-geometric mean (AGM) approximation, we apply geometric programming to find optimal solutions after condensing a posynomial into a monomial. On the other hand, logarithmic and d ifference-of-two- c oncave-functions (D.C.) approximations lead us to solving a series of convex relaxation programs. With the three proposed SCA-based power optimization solutions, we show that the overall joint subchannel and power allocation algorithm converges to some local maximum of the original design problem. While a central processing unit is required to implement the AGM approximation-based solution, each BS locally computes the optimal subchannel and power allocation for its own servicing cell in the logarithmic and D.C. approximation-based solutions. Numerical examples confirm the merits of the proposed algorithm.", "In this paper, we study resource allocation for a multicarrier-based cognitive radio (CR) network. More specifically, we investigate the problem of secondary users’ energy-efficiency (EE) maximization problem under secondary total power and primary interference constraints. First, assuming cooperation among the secondary base stations (BSs), a centralized approach is considered to solve the EE optimization problem for the CR network where the primary and secondary users are using either orthogonal frequency-division multiplexing (OFDM) or filter bank based multicarrier (FBMC) modulations. We propose an alternating-based approach to solve the joint power-subcarrier allocation problem. More importantly, in the first place, subcarriers are allocated using a heuristic method for a given feasible power allocation. Then, we conservatively approximate the nonconvex power control problem and propose a joint successive convex approximation-Dinkelbach algorithm (SCADA) to efficiently obtain a solution to the nonconvex power control problem. The proposed algorithm is shown to converge to a solution that coincides with the stationary point of the original nonconvex power control subproblem. Moreover, we propose a dual decomposition-based decentralized version of the SCADA. Second, under the assumption of no cooperation among the secondary BSs, we propose a fully distributed power control algorithm from the perspective of game theory. The proposed algorithm is shown to converge to a Nash-equilibrium (NE) point. 
Moreover, we propose a sufficient condition that guarantees the uniqueness of the achieved NE. Extensive simulation analyses are further provided to highlight the advantages and demonstrate the efficiency of our proposed schemes." ] }
cs0308007
2950479998
The past years have seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine -- OPTYap -- and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on the environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, it emphasizes our belief that through applying or-parallelism and tabling to logic programs the range of applications for Logic Programming can be increased.
There have been other proposals for concurrent tabling, but in a distributed memory context. Hu @cite_46 was the first to formulate a method for distributed tabled evaluation, termed Multi-Processor SLG (SLGMP). This method matches subgoals with processors in a similar way to Freire's approach. Each processor gets a single subgoal and is responsible for fully exploiting its search tree and obtaining the complete set of answers. One of the main contributions of SLGMP is its controlled scheme for propagating subgoal dependencies in order to safely perform distributed completion. A prototype implementation of SLGMP was developed, but as far as we know no results have been reported.
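A minimal sketch of the underlying idea follows, assuming a hash-based mapping of subgoals to processors and a purely local record of dependencies; SLGMP's actual subgoal numbering and distributed completion protocol are not modelled here.

```python
# Minimal sketch of the subgoal-to-processor mapping idea behind
# distributed tabling: each tabled subgoal is assigned to exactly one
# processor, which owns its table and records the dependencies it sees.
# The numbering/completion protocol of SLGMP itself is not modelled.

import hashlib
from collections import defaultdict

NUM_PROCS = 4

def owner(subgoal: str) -> int:
    """Deterministically map a subgoal to the processor that owns its table."""
    h = hashlib.sha1(subgoal.encode()).hexdigest()
    return int(h, 16) % NUM_PROCS

# Each processor keeps only a local view of the subgoal dependency graph.
local_deps = defaultdict(set)         # proc id -> {(caller, callee), ...}

def record_call(caller: str, callee: str):
    """The caller's owner notes the dependency; the callee's owner would
    eventually stream answers back (message passing elided)."""
    local_deps[owner(caller)].add((caller, callee))

record_call("path(a,X)", "edge(a,X)")
record_call("path(a,X)", "path(b,X)")

for proc, deps in sorted(local_deps.items()):
    print(f"processor {proc}: local dependencies {sorted(deps)}")
print("edge(a,X) table owned by processor", owner("edge(a,X)"))
```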
{ "cite_N": [ "@cite_46" ], "mid": [ "2495949634", "1518621415", "2040110103", "1608806855" ], "abstract": [ "SLG resolution, a type of tabled resolution and a technique of logic programming (LP), has polynomial data complexity for ground Datalog queries with negation, making it suitable for deductive database (DDB). It evaluates non-stratified negation according to the three-valued Well-Founded Semantics, making it a suitable starting point for non-monotonic reasoning (NMR). Furthermore, SLG has an efficient partial implementation in the SLG-WAM which, in the XSB logic programming system, has proven an order of magnitude faster than current DDR systems for in-memory queries. Building on SLG resolution, we formulate a method for distributed tabled resolution termed Multi-Processor SLG (SLGMP). Since SLG is modeled as a forest of trees, it then becomes natural to think of these trees as executing at various places over a distributed network in SLGMP. Incremental completion, which is necessary for efficient sequential evaluation, can be modeled through the use of a subgoal dependency graph (SDG), or its approximation. However the subgoal dependency graph is a global property of a forest; in a distributed environment each processor should maintain as small a view of the SDG as possible. The formulation of what and when dependency information must be maintained and propagated in order for distributed completion to be performed safely is the central contribution of SLGMP. Specifically, subgoals in SLGMP are properly numbered such that most of the dependencies among subgoals are represented by the subgoal numbers. Dependency information that is not represented by subgoal numbers is maintained explicitly at each processor and propagated by each processor. SLGMP resolution aims at efficiently evaluating normal logic programs in a distributed environment. SLGMP operations are explicitly defined and soundness and completeness is proven for SLGMP with respect to SLG for programs which terminate for SLG evaluation. The resulting framework can serve as a basis for query processing and non-monotonic reasoning within a distributed environment. We also implemented Distributed XSB, a prototype implementation of SLGMP. Distributed XSB, as a distributed tabled evaluation model, is really a distributed problem solving system, where the data to solve the problem is distributed and each participating process cooperates with other participants (perhaps including itself), by sending and receiving data. Distributed XSB proposes a distributed data computing model, where there may be cyclic dependencies among participating processes and the dependencies can be both negative and positive.", "Tabled evaluations ensure termination of logic programs with finite models by keeping track of which subgoals have been called. Given several variant subgoals in an evaluation, only the first one encountered will use program clause resolution; the rest uses answer resolution. This use of answer resolution prevents infinite looping which happens in SLD. Given the asynchronicity of answer generation and answer return, tabling systems face an important scheduling choice not present in traditional top-down evaluation: How does the order of returning answers to consuming subgoals affect program efficiency.", "Abstract This paper presents the application of geometric reasoning to the automatic construction of an assembly partial order from an attributed liaison graph representation of an assembly. 
The construction is based on the principle of assembly by disassembly and on the extraction of preferred subassemblies. On the basis of accessibility and manipulability criteria, the system first decomposes the given assembly into clusters of mutually inseparable parts. An abstract liaison graph is then generated wherein each cluster of mutually inseparable parts is represented as a supernode. A set of subassemblies is then generated by decomposing the abstract liaison graph into subgraphs, verifying the disassemblability of individual subgraphs, and applying the criteria for selecting preferred subassemblies to the disassemblable subgraphs. The recursive application of this process to the selected subassemblies results in a Hierarchical Partial-Order Graph (HPOG). A HPOG not only provides the precedence relations among assembly operations but also presents the temporal and spatial parallelism for implementing distributed and cooperative assembly. The system is organized under the “cooperative problem solving (CPS)” paradigm.", "This paper addresses general issues involved in parallelizing tabled evaluations by introducing a model of shared-memory parallelism which we call table-parallelism, and by comparing it to traditional models of parallelizing SLD. A basic architecture for supporting table-parallelism in the framework of the SLG-WAM[14] is also presented, along with an algorithm for detecting termination of subcomputations." ] }
cs0308015
2109841503
OpenPGP, an IETF Proposed Standard based on the PGP application, has its own Public Key Infrastructure (PKI) architecture which is different from the one based on X.509, another standard from the ITU. This paper describes the OpenPGP PKI: the historical perspective as well as its current use. The current OpenPGP PKI issues include the capability of a PGP keyserver and its performance. PGP keyservers have been developed and operated by volunteers since the 1990s. The keyservers distribute, merge, and expire the OpenPGP public keys. Major keyserver managers from several countries have built the globally distributed network of PGP keyservers. However, the current PGP Public Keyserver (pksd) has some limitations. It does not fully support the OpenPGP format, so it is neither expandable nor flexible, and it has no cluster technology. Finally, we introduce the project on the next-generation OpenPGP public keyserver called OpenPKSD, led by Hironobu Suzuki, one of the authors, and funded by the Japanese Information-technology Promotion Agency (IPA).
The "web of trust" used in PGP is referred to in several research areas, including peer-to-peer authentication @cite_9 , trust computation @cite_7 @cite_14 , and privacy-enhancing technology @cite_6 . However, there are few descriptions of PGP keyservers, perhaps because the keyserver mechanism is so simple: it is not a CA but just a pool of public keys. From the users' viewpoint, a PGP keyserver holds a large number of OpenPGP public keys, which provide interesting material for social analysis of the network community. For example, OpenPGP keyserver developer Jonathan McDowell also developed the "Experimental PGP key path finder" @cite_20 , which searches for and displays the chain of certification between users.
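The idea behind such a key path finder can be sketched as a breadth-first search over the certification graph. The signature data and interface below are illustrative assumptions, not McDowell's implementation.

```python
# A small sketch of the idea behind a "key path finder": treat the pool
# of public keys as a directed graph (an edge A -> B when key A has
# signed key B) and search for a certification chain between two users.

from collections import deque

signatures = {                      # signer -> keys it has certified
    "alice": {"bob"},
    "bob":   {"carol", "dave"},
    "carol": {"eve"},
}

def key_path(src, dst):
    """Breadth-first search for a shortest chain of certifications."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in signatures.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(key_path("alice", "eve"))     # ['alice', 'bob', 'carol', 'eve']
```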
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_6", "@cite_20" ], "mid": [ "1879433355", "1494178662", "2133032376", "2601330228" ], "abstract": [ "PGP is built upon a Distributed Web of Trust in which a user’s trustworthiness is established by others who can vouch through a digital signature for that user’s identity. Preventing its wholesale adoption are a number of inherent weaknesses to include (but not limited to) the following: 1) Trust Relationships are built on a subjective honor system, 2) Only first degree relationships can be fully trusted, 3) Levels of trust are difficult to quantify with actual values, and 4) Issues with the Web of Trust itself (Certification and Endorsement). Although the security that PGP provides is proven to be reliable, it has largely failed to garner large scale adoption. In this paper, we propose several novel contributions to address the aforementioned issues with PGP and associated Web of Trust. To address the subjectivity of the Web of Trust, we provide a new certificate format based on Bitcoin which allows a user to verify a PGP certificate using Bitcoin identity-verification transactions - forming first degree trust relationships that are tied to actual values (i.e., number of Bitcoins transferred during transaction). Secondly, we present the design of a novel Distributed PGP key server that leverages the Bitcoin transaction blockchain to store and retrieve our certificates.", "From the Publisher: Use of the Internet is expanding beyond anyone's expectations. As corporations, government offices, and ordinary citizens begin to rely on the information highway to conduct business, they are realizing how important it is to protect their communications -- both to keep them a secret from prying eyes and to ensure that they are not altered during transmission. Encryption, which until recently was an esoteric field of interest only to spies, the military, and a few academics, provides a mechanism for doing this. PGP, which stands for Pretty Good Privacy, is a free and widely available encryption program that lets you protect files and electronic mail. Written by Phil Zimmermann and released in 1991, PGP works on virtually every platform and has become very popular both in the U.S. and abroad. Because it uses state-of-the-art public key cryptography, PGP can be used to authenticate messages, as well as keep them secret. With PGP, you can digitally \"sign\" a message when you send it. By checking the digital signature at the other end, the recipient can be sure that the message was not changed during transmission and that the message actually came from you. PGP offers a popular alternative to U.S. government initiatives like the Clipper Chip because, unlike Clipper, it does not allow the government or any other outside agency access to your secret keys. PGP: Pretty Good Privacy by Simson Garfinkel is both a readable technical user's guide and a fascinating behind-the-scenes look at cryptography and privacy. Part I, \"PGP Overview,\" introduces PGP and the cryptography that underlies it. Part II, \"Cryptography History and Policy,\" describes the history of PGP -- its personalities, legal battles, and other intrigues; it also provides background on the battles over public key cryptography patents and the U.S. government export restrictions, and other aspects of the ongoing public debates about privacy and free speech. 
Part III, \"Using PGP,\" describes how to use PGP: protecting files and email, creating and using keys, signing messages, certifying and distributing keys, and using key servers. Part IV, \"Appendices,\" describes how to obtain PGP from Internet sites, how to install it on PCs, UNIX systems, and the Macintosh, and other background information. The book also contains a glossary, a bibliography, and a handy reference card that summarizes all of the PGP commands, environment variables, and configuration variables.", "Authentication using a path of trusted intermediaries, each able to authenticate the next in the path, is a well-known technique for authenticating channels in a large distributed system. In this paper, we explore the use of multiple paths to redundantly authenticate a channel and focus on two notions of path independence-disjoint paths and connective paths-that seem to increase assurance in the authentication. We give evidence that there are no efficient algorithms for locating maximum sets of paths with these independence properties and propose several approximation algorithms for these problems. We also describe a service we have deployed, called PathServer, that makes use of our algorithms to find such sets of paths to support authentication in PGP applications.", "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts. SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model." ] }
cs0308015
2109841503
OpenPGP, an IETF Proposed Standard based on the PGP application, has its own Public Key Infrastructure (PKI) architecture which is different from the one based on X.509, another standard from the ITU. This paper describes the OpenPGP PKI: the historical perspective as well as its current use. The current OpenPGP PKI issues include the capability of a PGP keyserver and its performance. PGP keyservers have been developed and operated by volunteers since the 1990s. The keyservers distribute, merge, and expire the OpenPGP public keys. Major keyserver managers from several countries have built the globally distributed network of PGP keyservers. However, the current PGP Public Keyserver (pksd) has some limitations. It does not fully support the OpenPGP format, so it is neither expandable nor flexible, and it has no cluster technology. Finally, we introduce the project on the next-generation OpenPGP public keyserver called OpenPKSD, led by Hironobu Suzuki, one of the authors, and funded by the Japanese Information-technology Promotion Agency (IPA).
The OpenPGP PKI itself can be described as a superset of PKI @cite_27 ; however, combining the OpenPGP PKI with other authentication systems is challenging work in both the theoretical and the operational fields. Formal study of trust relationships in PKI started in the late 1990s @cite_7 @cite_14 , and in December 2002 the GnuPG development version started to support trust calculation with GnuPG's trust signatures.
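As a rough illustration of what "trust calculation" involves, the sketch below performs one round of a simplified web-of-trust validity check. The thresholds, ownertrust values, and key names are assumptions chosen for illustration; this is not GnuPG's algorithm or its trust-signature semantics.

```python
# A deliberately simplified sketch of a web-of-trust validity computation
# (one round of it). Thresholds and data are illustrative assumptions.

COMPLETES_NEEDED = 1
MARGINALS_NEEDED = 3

ownertrust = {"alice": "complete", "bob": "marginal",
              "carol": "marginal", "dave": "marginal"}
signatures_on = {                    # key -> set of keys that signed it
    "server-key": {"bob", "carol", "dave"},
    "other-key": {"bob"},
}
valid = set(ownertrust)              # keys we already consider valid

def key_validity(key):
    """A key is valid if enough valid, trusted keys have signed it."""
    signers = signatures_on.get(key, set()) & valid
    completes = sum(ownertrust.get(s) == "complete" for s in signers)
    marginals = sum(ownertrust.get(s) == "marginal" for s in signers)
    if completes >= COMPLETES_NEEDED or marginals >= MARGINALS_NEEDED:
        return "valid"
    return "unknown"

for k in ("server-key", "other-key"):
    print(k, "->", key_validity(k))
```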
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_7" ], "mid": [ "2601330228", "2883738719", "2511395838", "2797677683" ], "abstract": [ "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts. SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.", "The public key infrastructure (PKI) based authentication protocol provides the basic security services for vehicular ad-hoc networks (VANETs). However, trust and privacy are still open issues due to the unique characteristics of vehicles. It is crucial for VANETs to prevent internal vehicles from broadcasting forged messages while simultaneously protecting the privacy of each vehicle against tracking attacks. In this paper, we propose a blockchain-based anonymous reputation system (BARS) to break the linkability between real identities and public keys to preserve privacy. The certificate and revocation transparency is implemented efficiently using two blockchains. We design a trust model to improve the trustworthiness of messages relying on the reputation of the sender based on both direct historical interactions and indirect opinions about the sender. Experiments are conducted to evaluate BARS in terms of security and performance and the results show that BARS is able to establish distributed trust management, while protecting the privacy of vehicles.", "The current Transport Layer Security (TLS) Public-Key Infrastructure (PKI) is based on a weakest-link security model that depends on over a thousand trust roots. The recent history of malicious and compromised Certification Authorities has fueled the desire for alternatives. Creating a new, secure infrastructure is, however, a surprisingly challenging task due to the large number of parties involved and the many ways that they can interact. A principled approach to its design is therefore mandatory, as humans cannot feasibly consider all the cases that can occur due to the multitude of interleavings of actions by legitimate parties and attackers, such as private key compromises (e.g., domain, Certification Authority, log server, other trusted entities), key revocations, key updates, etc. We present ARPKI, a PKI architecture that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI efficiently supports these operations, and gracefully handles catastrophic events such as domain key loss or compromise. 
Moreover ARPKI is the first PKI architecture that is co-designed with a formal model, and we verify its core security property using the TAMARIN prover. We prove that ARPKI offers extremely strong security guarantees, where compromising even n 1 trusted signing and verifying entities is insufficient to launch a man-in-the-middle attack. Moreover, ARPKI’s use deters misbehavior as all operations are publicly visible. Finally, we present a proof-of-concept implementation that provides all the features required for deployment. Our experiments indicate that ARPKI efficiently handles the certification process with low overhead. It does not incur additional latency to TLS, since no additional round trips are required.", "Traditional PKIs face a well-known vulnerability that caused by compromised Certificate Authorities (CA) issuing bogus certificates. Several solutions like AKI and ARPKI have been proposed to address this vulnerability. However, they require complex interactions and synchronization among related entities, and their security has not been validated with wide deployment. We propose an accountable, flexible and efficient decentralized PKI to achieve the same goal using the blockchain technology of Bitcoin, which has been proven to be secure and reliable. The proposed scheme, called BKI, realizes certificate issuance, update and revocation with transactions on a special blockchain that is managed by multiple trusted maintainers. BKI achieves accountability and is easy to check certificate validity, and it is also more secure than centralized PKIs. Moreover, the certificate status update interval of BKI is in seconds, significantly reducing the vulnerability window. In addition, BKI is more flexible than AKI and ARPKI in that the number of required CAs to issue certificates is tunable for different applications. We analyze BKI’s security and performance, and present details on implementation of BKI. Experiments using Ethereum show that certificate issuance update revocation cost 2.38 ms 2.39 ms 1.59 ms respectively." ] }
cs0308044
2950013766
A new method of hierarchical clustering of graph vertexes is suggested. In the method, the graph partition is determined with an equivalence relation satisfying a recursive definition stating that vertexes are equivalent if the vertexes they point to (or vertexes pointing to them) are equivalent. Iterative application of the partitioning yields a hierarchical clustering of graph vertexes. The method is applied to the citation graph of hep-th. The outcome is a two-level classification scheme for the subject field presented in hep-th, and indexing of the papers from hep-th in this scheme. A number of tests show that the classification obtained is adequate.
In this subsection, we demonstrate that the above equivalence relation @math is a natural development of the recursive algorithms PageRank @cite_5 , HITS @cite_12 , and SimRank @cite_3 , which have lately become quite popular among network miners.
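For reference, the recursive principle these algorithms share ("nodes are similar if the nodes pointing to them are similar") can be shown with a naive SimRank iteration. The toy graph, decay constant, and iteration count below are arbitrary choices, not values taken from the cited papers.

```python
# Naive SimRank iteration as an illustration of the recursive principle
# "two nodes are similar if the nodes pointing to them are similar".

from itertools import product

in_nbrs = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
nodes = list(in_nbrs)
C, ITERS = 0.8, 10

# sim(u,u) = 1; everything else starts at 0 and is refined iteratively.
sim = {(u, v): 1.0 if u == v else 0.0 for u, v in product(nodes, nodes)}
for _ in range(ITERS):
    new = dict(sim)
    for u, v in product(nodes, nodes):
        if u == v or not in_nbrs[u] or not in_nbrs[v]:
            continue
        total = sum(sim[(i, j)] for i in in_nbrs[u] for j in in_nbrs[v])
        new[(u, v)] = C * total / (len(in_nbrs[u]) * len(in_nbrs[v]))
    sim = new

print("sim(b,c) =", round(sim[("b", "c")], 3))   # high: both pointed to by a
print("sim(b,d) =", round(sim[("b", "d")], 3))
```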
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_12" ], "mid": [ "2951132123", "1545879303", "2117831564", "1502916507" ], "abstract": [ "We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.", "Similarity assessment is one of the core tasks in hyperlink analysis. Recently, with the proliferation of applications, e.g., web search and collaborative filtering, SimRank has been a well-studied measure of similarity between two nodes in a graph. It recursively follows the philosophy that \"two nodes are similar if they are referenced (have incoming edges) from similar nodes\", which can be viewed as an aggregation of similarities based on incoming paths. Despite its popularity, SimRank has an undesirable property, i.e., \"zero-similarity\": It only accommodates paths with equal length from a common \"center\" node. Thus, a large portion of other paths are fully ignored. This paper attempts to remedy this issue. (1) We propose and rigorously justify SimRank*, a revised version of SimRank, which resolves such counter-intuitive \"zero-similarity\" issues while inheriting merits of the basic SimRank philosophy. (2) We show that the series form of SimRank* can be reduced to a fairly succinct and elegant closed form, which looks even simpler than SimRank, yet enriches semantics without suffering from increased computational cost. This leads to a fixed-point iterative paradigm of SimRank* in O(Knm) time on a graph of n nodes and m edges for K iterations, which is comparable to SimRank. (3) To further optimize SimRank* computation, we leverage a novel clustering strategy via edge concentration. Due to its NP-hardness, we devise an efficient and effective heuristic to speed up SimRank* computation to O(Knm) time, where m is generally much smaller than m. (4) Using real and synthetic data, we empirically verify the rich semantics of SimRank*, and demonstrate its high computation efficiency.", "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. 
We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.", "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50)." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
DBXplorer @cite_3 was developed by Microsoft Research and, like BANKS and Mragyati, it uses join trees to compute an SQL statement that accesses the data. The algorithm used to compute these trees differs, as does the implementation, which was developed for Microsoft's IIS and SQL Server, the others being implemented in Java. DbSurfer does not require access to the database to discover the trails, only to display the data when the user clicks on a link in the trail.
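A schematic sketch of the shared step, turning a join tree over foreign-key links into a single SQL statement, is given below. The table and column names are invented for illustration, and the generator is not the actual code of any of these systems.

```python
# Schematic sketch of turning a join tree over foreign-key links into a
# single SQL statement. Table/column names are illustrative assumptions.

def join_tree_to_sql(root, joins, keyword_preds):
    """root: the table at the root of the join tree;
    joins: list of (new_table, new_col, known_table, known_col) edges,
           ordered so each edge attaches a new table to one already joined;
    keyword_preds: SQL predicates derived from the keywords."""
    sql = [f"SELECT * FROM {root}"]
    for new_t, new_c, known_t, known_c in joins:
        sql.append(f"JOIN {new_t} ON {new_t}.{new_c} = {known_t}.{known_c}")
    sql.append("WHERE " + " AND ".join(keyword_preds))
    return "\n".join(sql)

print(join_tree_to_sql(
    "paper",
    [("writes", "paper_id", "paper", "id"),
     ("author", "id", "writes", "author_id")],
    ["paper.title LIKE '%tabling%'", "author.name LIKE '%warren%'"]))
```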
{ "cite_N": [ "@cite_3" ], "mid": [ "2121350579", "2096660972", "2161584750", "2768409085" ], "abstract": [ "Internet search engines have popularized the keyword-based search paradigm. While traditional database management systems offer powerful query languages, they do not allow keyword-based search. In this paper, we discuss DBXplorer, a system that enables keyword-based searches in relational databases. DBXplorer has been implemented using a commercial relational database and Web server and allows users to interact via a browser front-end. We outline the challenges and discuss the implementation of our system, including results of extensive experimental evaluation.", "To compensate for the inherent impedance mismatch between the relational data model (tables of tuples) and XML (ordered, unranked trees), tree join algorithms have become the prevalent means to process XML data in relational databases, most notably the TwigStack[6], structural join[1], and staircase join[13] algorithms. However, the addition of these algorithms to existing systems depends on a significant invasion of the underlying database kernel, an option intolerable for most database vendors. Here, we demonstrate that we can achieve comparable XPath performance without touching the heart of the system. We carefully exploit existing database functionality and accelerate XPath navigation by purely relational means: partitioned B-trees bring access costs to secondary storage to a minimum, while aggregation functions avoid an expensive computation and removal of duplicate result nodes to comply with the XPath semantics. Experiments carried out on IBM DB2 confirm that our approach can turn off-the-shelf database systems into efficient XPath processors.", "The Semantic Web community, until now, has used traditional database systems for the storage and querying of RDF data. The SPARQL query language also closely follows SQL syntax. As a natural consequence, most of the SPARQL query processing techniques are based on database query processing and optimization techniques. For SPARQL join query optimization, previous works like RDF-3X and Hexastore have proposed to use 6-way indexes on the RDF data. Although these indexes speed up merge-joins by orders of magnitude, for complex join queries generating large intermediate join results, the scalability of the query processor still remains a challenge. In this paper, we introduce (i) BitMat - a compressed bit-matrix structure for storing huge RDF graphs, and (ii) a novel, light-weight SPARQL join query processing method that employs an initial pruning technique, followed by a variable-binding-matching algorithm on BitMats to produce the final results. Our query processing method does not build intermediate join tables and works directly on the compressed data. We have demonstrated our method against RDF graphs of upto 1.33 billion triples - the largest among results published until now (single-node, non-parallel systems), and have compared our method with the state-of-the-art RDF stores - RDF-3X and MonetDB. Our results show that the competing methods are most effective with highly selective queries. On the other hand, BitMat can deliver 2-3 orders of magnitude better performance on complex, low-selectivity queries over massive data.", "Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. 
Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the \"order-matters\" problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph, so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9 to 13 on the WikiSQL task." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
DISCOVER is the latest offering and shares many similarities with Mragyati, BANKS and DBXplorer, but uses a greedy algorithm to discover the candidate networks @cite_17 . It also takes greater advantage of the database's internal keyword search facilities by using Oracle's Context cartridge for the text indexing.
{ "cite_N": [ "@cite_17" ], "mid": [ "2098388305", "2116420167", "2121350579", "2085828533" ], "abstract": [ "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.", "textabstractX100 is a new execution engine for the MonetDB system, that improves execution speed and overcomes its main memory limitation. It introduces the concept of in-cache vectorized processing that strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and the tuple-at-a-time Volcano pipelining model, avoiding their drawbacks: intermediate result materialization and large interpretation overhead, respectively. We explain how the new query engine makes better use of cache memories as well as parallel computation resources of modern super-scalar CPUs. MonetDB X100 can be one to two orders of magnitude faster than commercial DBMSs and close to hand-coded C programs for computationally intensive queries on in-memory datasets. To address larger disk-based datasets with the same efficiency, a new ColumnBM storage layer is developed that boosts bandwidth using ultra lightweight compression and cooperative scans.", "Internet search engines have popularized the keyword-based search paradigm. While traditional database management systems offer powerful query languages, they do not allow keyword-based search. In this paper, we discuss DBXplorer, a system that enables keyword-based searches in relational databases. DBXplorer has been implemented using a commercial relational database and Web server and allows users to interact via a browser front-end. We outline the challenges and discuss the implementation of our system, including results of extensive experimental evaluation.", "We showcase QUEST (QUEry generator for STructured sources), a search engine for relational databases that combines semantic and machine learning techniques for transforming keyword queries into meaningful SQL queries. The search engine relies on two approaches: the forward, providing mappings of keywords into database terms (names of tables and attributes, and domains of attributes), and the backward, computing the paths joining the data structures identified in the forward step. The results provided by the two approaches are combined within a probabilistic framework based on the Dempster-Shafer Theory. 
We demonstrate QUEST capabilities, and we show how, thanks to the flexibility obtained by the probabilistic combination of different techniques, QUEST is able to compute high quality results even with few training data and or with hidden data sources such as those found in the Deep Web." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
The authors of @cite_40 have also introduced a system for keyword search. Their system finds results for queries of the form @math near @math (e.g., find movie near travolta cage). Two sets of entries are found, and the contents of the first set are returned based upon their proximity to members of the second set. In comparison with DbSurfer, there is no support for navigation of the database (manual or assisted), nor any display of the context of the results.
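The ranking idea can be sketched as follows: score each member of the find-set by its shortest-path distance in the data graph to the nearest member of the near-set. The toy graph and scoring below are illustrative assumptions, not the cited system's implementation.

```python
# Sketch of ranking for a "find X near Y" query: members of the find-set
# are scored by their distance in the data graph to the closest member
# of the near-set.

from collections import deque

graph = {"movie1": ["actor1"], "movie2": ["actor2"],
         "actor1": ["movie1", "movie2"], "actor2": ["movie2"]}

def dist(src, targets):
    """BFS distance from src to the closest node in targets."""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        node, d = frontier.popleft()
        if node in targets:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

find_set, near_set = {"movie1", "movie2"}, {"actor2"}
ranked = sorted(find_set, key=lambda m: dist(m, near_set))
print(ranked)        # movie2 first: it is directly linked to actor2
```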
{ "cite_N": [ "@cite_40" ], "mid": [ "2062180302", "1502916507", "2621688363", "2092639952" ], "abstract": [ "This article deals with the computation of consistent answers to queries on relational databases that violate primary key constraints. A repair of such inconsistent database is obtained by selecting a maximal number of tuples from each relation without ever selecting two distinct tuples that agree on the primary key. We are interested in the following problem: Given a Boolean conjunctive query q, compute a Boolean first-order (FO) query @j such that for every database db, @j evaluates to true on db if and only if q evaluates to true on every repair of db. Such @j is called a consistent FO rewriting of q. We use novel techniques to characterize classes of queries that have a consistent FO rewriting. In this way, we are able to extend previously known classes and discover new ones. Finally, we use an Ehrenfeucht-Fraisse game to show the non-existence of a consistent FO rewriting for @[email protected]?y(R([email protected]?,y)@?R([email protected]?,c)), where c is a constant and the first coordinate of R is the primary key.", "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. 
Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "We consider the problem of single-round private information retrieval (PIR) from @math replicated databases. We consider the case when @math databases are outdated (unsynchronized), or even worse, adversarial (Byzantine), and therefore, can return incorrect answers. In the PIR problem with Byzantine databases (BPIR), a user wishes to retrieve a specific message from a set of @math messages with zero-error, irrespective of the actions performed by the Byzantine databases. We consider the @math -privacy constraint in this paper, where any @math databases can collude, and exchange the queries submitted by the user. We derive the information-theoretic capacity of this problem, which is the maximum number of that can be retrieved privately (under the @math -privacy constraint) for every symbol of the downloaded data. We determine the exact BPIR capacity to be @math , if @math . This capacity expression shows that the effect of Byzantine databases on the retrieval rate is equivalent to removing @math databases from the system, with a penalty factor of @math , which signifies that even though the number of databases needed for PIR is effectively @math , the user still needs to access the entire @math databases. The result shows that for the unsynchronized PIR problem, if the user does not have any knowledge about the fraction of the messages that are mis-synchronized, the single-round capacity is the same as the BPIR capacity. Our achievable scheme extends the optimal achievable scheme for the robust PIR (RPIR) problem to correct the introduced by the Byzantine databases as opposed to in the RPIR problem. Our converse proof uses the idea of the cut-set bound in the network coding problem against adversarial nodes.", "We present a new algorithm for searching video repositories using free-hand sketches. Our queries express both appearance (color, shape) and motion attributes, as well as semantic properties (object labels) enabling hybrid queries to be specified. Unlike existing sketch based video retrieval (SBVR) systems that enable hybrid queries of this form, we do not adopt a model fitting optimization approach to match at query-time. Rather, we create an efficiently searchable index via a novel space-time descriptor that encapsulates all these properties. The real-time performance yielded by our indexing approach enables interactive refinement of search results within a relevance feedback (RF) framework; a unique contribution to SBVR. We evaluate our system over 700 sports footage clips exhibiting a variety of clutter and motion conditions, demonstrating significant accuracy and speed gains over the state of the art." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
The join discovery problem is related to the problem tackled by the universal relation model @cite_18 @cite_11 . The idea underlying the universal relation model is to allow querying the database solely through its attributes without explicitly specifying the join paths. The expressive querying power of such a system is essentially that of a union of conjunctive queries (see @cite_19 ). DbSurfer takes this approach further by allowing the user to specify values (keywords) without stating their associated attributes, and by providing relevance-based filtering.
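To make the join discovery idea concrete, the following sketch is a deliberately simplified illustration, not DbSurfer's actual algorithm: the schema is modelled as a graph whose nodes are tables and whose edges are foreign-key dependencies, and a breadth-first search finds one trail of joins connecting two tables that contain keyword hits. The toy schema, the table names, and the find_join_path helper are invented for the example.

```python
from collections import deque

# Hypothetical foreign-key graph for a DBLP-like schema:
# an edge means the two tables can be joined on a primary/foreign key.
FK_GRAPH = {
    "author":       ["author_paper"],
    "author_paper": ["author", "paper"],
    "paper":        ["author_paper", "proceedings"],
    "proceedings":  ["paper"],
}

def find_join_path(start, goal):
    """Breadth-first search over the foreign-key graph; returns one
    shortest chain of tables that could be joined to connect start and goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FK_GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Example: keywords hit the "author" and "proceedings" tables;
# the discovered trail tells us which joins to perform.
print(find_join_path("author", "proceedings"))
# ['author', 'author_paper', 'paper', 'proceedings']
```

In a real system the discovered trail would then be turned into a join query and the resulting trails ranked by relevance; those steps are omitted here.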
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_11" ], "mid": [ "2396012111", "2098388305", "2790840297", "2161956202" ], "abstract": [ "The join ordering problem is a fundamental challenge that has to be solved by any query optimizer. Since the high-performance RDF systems are often implemented as triple stores (i.e., they represent RDF data as a single table with three attributes, at least conceptually), the query optimization strategies employed by such systems are often adopted from relational query optimization. In this paper we show that the techniques borrowed from traditional SQL query optimization (such as Dynamic Programming algorithm or greedy heuristics) are not immediately capable of handling large SPARQL queries. We introduce a new join ordering algorithm that performs a SPARQL-tailored query simplification. Furthermore, we present a novel RDF statistical synopsis that accurately estimates cardinalities in large SPARQL queries. Our experiments show that this algorithm is highly superior to the state-of-the-art SPARQL optimization approaches, including the RDF-3X’s original Dynamic Programming strategy.", "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.", "Efficient join processing is one of the most fundamental and well-studied tasks in database research. In this work, we examine algorithms for natural join queries over many relations and describe a novel algorithm to process these queries optimally in terms of worst-case data complexity. Our result builds on recent work by Atserias, Grohe, and Marx, who gave bounds on the size of a full conjunctive query in terms of the sizes of the individual relations in the body of the query. These bounds, however, are not constructive: they rely on Shearer's entropy inequality which is information-theoretic. Thus, the previous results leave open the question of whether there exist algorithms whose running time achieve these optimal bounds. An answer to this question may be interesting to database practice, as we show in this paper that any project-join plan is polynomially slower than the optimal bound for some queries. We construct an algorithm whose running time is worst-case optimal for all natural join queries. 
Our result may be of independent interest, as our algorithm also yields a constructive proof of the general fractional cover bound by Atserias, Grohe, and Marx without using Shearer's inequality. In addition, we show that this bound is equivalent to a geometric inequality by Bollobas and Thomason, one of whose special cases is the famous Loomis-Whitney inequality. Hence, our results algorithmically prove these inequalities as well. Finally, we discuss how our algorithm can be used to compute a relaxed notion of joins.", "Evaluating the relational join is one of the central algorithmic and most well-studied problems in database systems. A staggering number of variants have been considered including Block-Nested loop join, Hash-Join, Grace, Sort-merge (see Grafe [17] for a survey, and [4, 7, 24] for discussions of more modern issues). Commercial database engines use finely tuned join heuristics that take into account a wide variety of factors including the selectivity of various predicates, memory, IO, etc. This study of join queries notwithstanding, the textbook description of join processing is suboptimal. This survey describes recent results on join algorithms that have provable worst-case optimality runtime guarantees. We survey recent work and provide a simpler and unified description of these algorithms that we hope is useful for theory-minded readers, algorithm designers, and systems implementors. Much of this progress can be understood by thinking about a simple join evaluation problem that we illustrate with the so-called triangle query, a query that has become increasingly popular in the last decade with the advent of social networks, biological motifs, and graph databases [36, 37]" ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
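The link between linearity and quantum data alluded to above can be illustrated with a generic observation from linear logic, in schematic notation only (this is not the paper's own calculus): because a linear variable must be used exactly once, a term that duplicates its argument is not typable, which mirrors the impossibility of cloning an unknown quantum state.

```latex
% Schematic only: generic linear-lambda-calculus typing, not the paper's calculus.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A linear variable must be consumed exactly once, so the identity is typable
while duplication is not:
\[
  \frac{x : A \vdash x : A}{\vdash \lambda x.\,x : A \multimap A}
  \qquad\qquad
  x : A \nvdash x \otimes x : A \otimes A .
\]
Duplication is only recovered through the exponential $!A$, which one would
expect to be available for classical data but not for arbitrary qubits.
\end{document}
```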
Ideas stemming from linear logic have been used previously by Abramsky in the study of classical reversible computation @cite_44 .
{ "cite_N": [ "@cite_44" ], "mid": [ "1675985504", "1986834688", "1502855507", "2063521547" ], "abstract": [ "We review some of our recent results (with collaborators) on information processing in an ordered linear spaces framework for probabilistic theories. These include demonstrations that many \"inherently quantum\" phenomena are in reality quite general characteristics of non-classical theories, quantum or otherwise. As an example, a set of states in such a theory is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. As another example, information that can be obtained about a system in this framework without causing disturbance to the system state, must be inherently classical. We also review results on teleportation protocols in the framework, and the fact that any non-classical theory without entanglement allows exponentially secure bit commitment in this framework. Finally, we sketch some ways of formulating our framework in terms of categories, and in this light consider the relation of our work to that of Abramsky, Coecke, Selinger, Baez and others on information processing and other aspects of theories formulated categorically.", "With the emergence of thread-level parallelism as the primary means for continued performance improvement, the programmability issue has reemerged as an obstacle to the use of architectural advances. We argue that evolving legacy libraries for dense and banded linear algebra is not a viable solution due to constraints imposed by early design decisions. We propose a philosophy of abstraction and separation of concerns that provides a promising solution in this problem domain. The first abstraction, FLASH, allows algorithms to express computation with matrices consisting of contiguous blocks, facilitating algorithms-by-blocks. Operand descriptions are registered for a particular operation a priori by the library implementor. A runtime system, SuperMatrix, uses this information to identify data dependencies between suboperations, allowing them to be scheduled to threads out-of-order and executed in parallel. But not all classical algorithms in linear algebra lend themselves to conversion to algorithms-by-blocks. We show how our recently proposed LU factorization with incremental pivoting and a closely related algorithm-by-blocks for the QR factorization, both originally designed for out-of-core computation, overcome this difficulty. Anecdotal evidence regarding the development of routines with a core functionality demonstrates how the methodology supports high productivity while experimental results suggest that high performance is abundantly achievable.", "Bottom-up logic programming can be used to declaratively specify many algorithms in a succinct and natural way, and McAllester and Ganzinger have shown that it is possible to define a cost semantics that enables reasoning about the running time of algorithms written as inference rules. Previous work with the programming language Lollimon demonstrates the expressive power of logic programming with linear logic in describing algorithms that have imperative elements or that must repeatedly make mutually exclusive choices. 
In this paper, we identify a bottom-up logic programming language based on linear logic that is amenable to efficient execution and describe a novel cost semantics that can be used for complexity analysis of algorithms expressed in linear logic.", "Abstract We propose an approach to declarative programming which integrates the functional and relational paradigms by taking possibly non-deterministic lazy functions as the fundamental notion. Classical equational logic does not supply a suitable semantics in a natural way. Therefore, we suggest to view programs as theories in a constructor-based conditional rewriting logic. We present proof calculi and a model theory for this logic, and we prove the existence of free term models which provide an adequate intended semantics for programs. We develop a sound and strongly complete lazy narrowing calculus, which is able to support sharing without the technical overhead of graph rewriting and to identify safe cases for eager variable elimination. Moreover, we give some illustrative programming examples, and we discuss the implementability of our approach." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
One of the earlier attempts at formulating a language for quantum computation was Greg Baker's Qgol @cite_23 . Its implementation (which remained incomplete) used so-called uniqueness types (similar but not identical to our linear variables) for quantum objects @cite_36 . The language is not universal for quantum computation.
{ "cite_N": [ "@cite_36", "@cite_23" ], "mid": [ "2115164346", "2157601714", "1999626800", "102856817" ], "abstract": [ "In this paper, we show that all languages in NP have logarithmic-size quantum proofs which can be verified provided that two unentangled copies are given. More formally, we introduce the complexity class QMAlog(2) and show that 3COL E QMAlog(2). To obtain this strong and surprising result we have to relax the usual requirements: the completeness is one but the soundness is 1-1 poly. Since the natural classical equivalent of QMAlog(2) is uninteresting (it would be equal to P), this result, like many others, stresses the fact that quantum information is fundamentally different from classical information. It also contributes to our understanding of entanglement since QMAlog = BQP[7].", "We introduce the language QML, a functional language for quantum computations on finite types. Its design is guided by its categorical semantics: QML programs are interpreted by morphisms in the category FQC of finite quantum computations, which provides a constructive semantics of irreversible quantum computations realisable as quantum gates. QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement -which is essential for quantum parallelism.", "We propose the design of a programming language for quantum computing. Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and has an interesting denotational semantics in terms of complete partial orders of superoperators.", "We present an imperative quantum programming language LanQ which was designed to support combination of quantum and classical programming and basic process operations - process creation and interprocess communication. The language can thus be used for implementing both classical and quantum algorithms and protocols. Its syntax is similar to that of C language what makes it easy to learn for existing programmers. In this paper, we present operational semantics of the language and a proof of type soundness of the noncommunicating part of the language. We provide an example run of a quantum random number generator." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Another imperative language, based on C++, is the Q language developed by Bettelli, Calarco and Serafini @cite_7 . As in the case of QCL, no formal calculus is provided. A simulator is also available.
{ "cite_N": [ "@cite_7" ], "mid": [ "102856817", "2034373223", "1548801608", "1968265180" ], "abstract": [ "We present an imperative quantum programming language LanQ which was designed to support combination of quantum and classical programming and basic process operations - process creation and interprocess communication. The language can thus be used for implementing both classical and quantum algorithms and protocols. Its syntax is similar to that of C language what makes it easy to learn for existing programmers. In this paper, we present operational semantics of the language and a proof of type soundness of the noncommunicating part of the language. We provide an example run of a quantum random number generator.", "We present a new approach to adding state and state-changing commands to a term language. As a formal semantics it can be seen as a generalization of predicate transformer semantics, but beyond that it brings additional opportunities for specifying and verifying programs. It is based on a construct called a phrase, which is a term of the form C r t, where C stands for a command and t stands for a term of any type. If R is boolean, C r R is closely related to the weakest precondition wp(C,R). The new theory draws together functional and imperative programming in a simple way. In particular, imperative procedures and functions are seen to be governed by the same laws as classical functions. We get new techniques for reasoning about programs, including the ability to dispense with logical variables and their attendant complexities. The theory covers both programming and specification languages, and supports unbounded demonic and angelic nondeterminacy in both commands and terms.", "We present a new method for inferring complexity properties for imperative programs with bounded loops. The properties handled are: polynomial (or linear) boundedness of computed values, as a function of the input; and similarly for the running time. It is well known that complexity properties are undecidable for a Turing-complete programming language. Much work in program analysis overcomes this obstacle by relaxing the correctness notion: one does not ask for an algorithm that correctly decides whether the property of interest holds or not, but only for \"yes\" answers to be sound. In contrast, we reshaped the problem by defining a \"core\" programming language that is Turing-incomplete, but strong enough to model real programs of interest. For this language, our method is the first to give a certain answer; in other words, our inference is both sound and complete. The essence of the method is that every command is assigned a \"complexity certificate\", which is a concise specification of dependencies of output values on input. These certificates are produced by inference rules that are compositional and efficiently computable. The approach is inspired by previous work by Niggl and Wunderlich and by Jones and Kristiansen, but use a novel, more expressive kind of certificates.", "We describe here an implemented small programming language, called Alma-O, that augments the expressive power of imperative programming by a limited number of features inspired by the logic programming paradigm. These additions encourage declarative programming and make it a more attractive vehicle for problems that involve search. We illustrate the use of Alma-O by presenting solutions to a number of classical problems, including α-β search, STRIPS planning, knapsack, and Eight Queens. 
These solutions are substantially simpler than their counterparts written in the imperative or in the logic programming style and can be used for different purposes without any modification. We also discuss here the implementation of Alma-O and an operational, executable, semantics of a large subset of the language." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A more theoretical approach is taken by Selinger in his description of the functional language QPL @cite_26 . This language has both a graphical and a textual representation. A formal semantics is provided.
{ "cite_N": [ "@cite_26" ], "mid": [ "1847465957", "2157601714", "1515039666", "2617930519" ], "abstract": [ "In order to define models of simply typed functional programming languages being closer to the operational semantics of these languages, the notions of sequentiality, stability and seriality were introduced. These works originated from the definability problem for PCF, posed in [Sco72], and the full abstraction problem for PCF, raised in [Plo77].", "We introduce the language QML, a functional language for quantum computations on finite types. Its design is guided by its categorical semantics: QML programs are interpreted by morphisms in the category FQC of finite quantum computations, which provides a constructive semantics of irreversible quantum computations realisable as quantum gates. QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement -which is essential for quantum parallelism.", "We show how functional languages can be used to write programs for real-valued functionals in exact real arithmetic. We concentrate on two useful functionals: definite integration, and the functional returning the maximum value of a continuous function over a closed interval. The algorithms are a practical application of a method, due to Berger, for computing quantifiers over streams. Correctness proofs for the algorithms make essential use of domain theory.", "We design an interpretation-based theory of higher-order functions that is well-suited for the complexity analysis of a standard higher-order functional language a la ml. We manage to express the interpretation of a given program in terms of a least fixpoint and we show that when restricted to functions bounded by higher-order polynomials, they characterize exactly classes of tractable functions known as Basic Feasible Functions at any order." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
The imperative language qGCL, developed by Sanders and Zuliani @cite_63 , is based on Dijkstra's guarded command language. It has a formal semantics and proof system.
{ "cite_N": [ "@cite_63" ], "mid": [ "2034373223", "2137628566", "1481810685", "1548801608" ], "abstract": [ "We present a new approach to adding state and state-changing commands to a term language. As a formal semantics it can be seen as a generalization of predicate transformer semantics, but beyond that it brings additional opportunities for specifying and verifying programs. It is based on a construct called a phrase, which is a term of the form C r t, where C stands for a command and t stands for a term of any type. If R is boolean, C r R is closely related to the weakest precondition wp(C,R). The new theory draws together functional and imperative programming in a simple way. In particular, imperative procedures and functions are seen to be governed by the same laws as classical functions. We get new techniques for reasoning about programs, including the ability to dispense with logical variables and their attendant complexities. The theory covers both programming and specification languages, and supports unbounded demonic and angelic nondeterminacy in both commands and terms.", "In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a \"separating conjunction\" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related \"separating implication\". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions.", "The most widely used scheme in GP was proposed by Koza, where programs are represented as Lisp-like trees and evolved by a genetic algorithm. Many other paradigms were devised these last years to automatically evolve programs. For instance, linear genetic programming (LGP) [6] is based on an interesting feature: instead of creating program trees, LGP directly evolves programs represented as linear sequences of imperative computer instructions. LGP is successful enough to have given birth to a derived commercial product named discipulus. The representation (or genotype) of programs in LGP is a bounded-length list of integers. These integers are mapped into imperative instructions of a simple imperative language (a subset of C for instance).", "We present a new method for inferring complexity properties for imperative programs with bounded loops. The properties handled are: polynomial (or linear) boundedness of computed values, as a function of the input; and similarly for the running time. It is well known that complexity properties are undecidable for a Turing-complete programming language. Much work in program analysis overcomes this obstacle by relaxing the correctness notion: one does not ask for an algorithm that correctly decides whether the property of interest holds or not, but only for \"yes\" answers to be sound. 
In contrast, we reshaped the problem by defining a \"core\" programming language that is Turing-incomplete, but strong enough to model real programs of interest. For this language, our method is the first to give a certain answer; in other words, our inference is both sound and complete. The essence of the method is that every command is assigned a \"complexity certificate\", which is a concise specification of dependencies of output values on input. These certificates are produced by inference rules that are compositional and efficiently computable. The approach is inspired by previous work by Niggl and Wunderlich and by Jones and Kristiansen, but use a novel, more expressive kind of certificates." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A previous attempt to construct a lambda calculus for quantum computation is described by Maymin in @cite_27 . However, his calculus appears to be strictly stronger than the quantum Turing machine @cite_19 . It seems to go beyond quantum mechanics in that it does not appear to have a unitary and reversible operational model, instead relying on a more general class of transformations. It is an open question whether the calculus is physically realizable.
{ "cite_N": [ "@cite_19", "@cite_27" ], "mid": [ "1668464107", "1676498955", "2121369607", "2006290558" ], "abstract": [ "We show that the lambda-q calculus can efficiently simulate quantum Turing machines by showing how the lambda-q calculus can efficiently simulate a class of quantum cellular automaton that are equivalent to quantum Turing machines. We conclude by noting that the lambda-q calculus may be strictly stronger than quantum computers because NP-complete problems such as satisfiability are efficiently solvable in the lambda-q calculus but there is a widespread doubt that they are efficiently solvable by quantum computers.", "This paper introduces a formal met alanguage called the lambda-q calculus for the specification of quantum programming languages. This met alanguage is an extension of the lambda calculus, which provides a formal setting for the specification of classical programming languages. As an intermediary step, we introduce a formal met alanguage called the lambda-p calculus for the specification of programming languages that allow true random number generation. We demonstrate how selected randomized algorithms can be programmed directly in the lambda-p calculus. We also demonstrate how satisfiability can be solved in the lambda-q calculus.", "It is becoming increasingly clear that, if a useful device for quantum computation will ever be built, it will be embodied by a classical computing machine with control over a truly quantum subsystem, this apparatus performing a mixture of classical and quantum computation. This paper investigates a possible approach to the problem of programming such machines: a template high level quantum language is presented which complements a generic general purpose classical language with a set of quantum primitives. The underlying scheme involves a run-time environment which calculates the byte-code for the quantum operations and pipes it to a quantum device controller or to a simulator. This language can compactly express existing quantum algorithms and reduce them to sequences of elementary operations; it also easily lends itself to automatic, hardware independent, circuit simplification. A publicly available preliminary implementation of the proposed ideas has been realised using the language.", "Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of @math can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random with probability 1 the class @math cannot be solved on a quantum Turing machine (QTM) in time @math . We also show that relative to a permutation oracle chosen uniformly at random with probability 1 the class @math cannot be solved on a QTM in time @math . The former bound is tight since recent work of Grover [in Proc. @math th Annual ACM Symposium Theory Comput. 
, 1996] shows how to accept the class @math relative to any oracle on a quantum computer in time @math ." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A seminar by Wehr @cite_41 suggests that linear logic may be useful in constructing a calculus for quantum computation within the mathematical framework of Chu spaces. However, the author stops short of developing such a calculus.
{ "cite_N": [ "@cite_41" ], "mid": [ "1534027338", "2053035915", "2801305581", "1675985504" ], "abstract": [ "The paper deals with the relationship of committed-choice logic programming languages and their proof-theoretic semantics based on linear logic. Fragments of linear logic are used in order to express various aspects of guarded clause concurrent programming and behavior of the system. The outlined translation comprises structural properties of concurrent computations, providing a sound and complete model wrt. to the interleaving operational semantics based on transformation systems. In the presence of variables, just asynchronous properties are captured without resorting to special proof-generating strategies, so the model is only correct for deadlock-free programs.", "In this paper we investigate a propositional fuzzy logical system L? which contains the well-known Lukasiewicz, Product and Godel fuzzy logics as sublogics. We define the corresponding algebraic structures, called L?-algebras and prove the following completeness result: a formula f is provable in the L? logic iff it is a tautology for all linear L?-algebras. Moreover, linear L?-algebras are shown to be embeddable in linearly ordered abelian rings with a strong unit and cancellation law.", "We construct quantum circuits which exactly encode the spectra of correlated electron models up to errors from rotation synthesis. By invoking these circuits as oracles within the recently introduced \"qubitization\" framework, one can use quantum phase estimation to sample states in the Hamiltonian eigenbasis with optimal query complexity @math where @math is an absolute sum of Hamiltonian coefficients and @math is target precision. For both the Hubbard model and electronic structure Hamiltonian in a second quantized basis diagonalizing the Coulomb operator, our circuits have T gate complexity @math where @math is number of orbitals in the basis. Compared to prior approaches, our algorithms are asymptotically more efficient in gate complexity and require fewer T gates near the classically intractable regime. Compiling to surface code fault-tolerant gates and assuming per gate error rates of one part in a thousand reveals that one can error correct phase estimation on interesting instances of these problems beyond the current capabilities of classical methods using only about one million superconducting qubits in a matter of hours.", "We review some of our recent results (with collaborators) on information processing in an ordered linear spaces framework for probabilistic theories. These include demonstrations that many \"inherently quantum\" phenomena are in reality quite general characteristics of non-classical theories, quantum or otherwise. As an example, a set of states in such a theory is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. As another example, information that can be obtained about a system in this framework without causing disturbance to the system state, must be inherently classical. We also review results on teleportation protocols in the framework, and the fact that any non-classical theory without entanglement allows exponentially secure bit commitment in this framework. 
Finally, we sketch some ways of formulating our framework in terms of categories, and in this light consider the relation of our work to that of Abramsky, Coecke, Selinger, Baez and others on information processing and other aspects of theories formulated categorically." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Abramsky and Coecke describe a realization of a model of multiplicative linear logic via the quantum processes of entangling and de-entangling by means of typed projectors. They briefly discuss how these processes can be represented as terms of an affine lambda calculus @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "2807350215", "1675985504", "2020559278", "1864664928" ], "abstract": [ "We show that any language in nondeterministic time @math , where the number of iterated exponentials is an arbitrary function @math , can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness @math and soundness @math , where the number of iterated exponentials is @math and @math is a universal constant. The result was previously known for @math and @math ; we obtain it for any time-constructible function @math . The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result (unpublished) on the uncomputability of the entangled value of multiprover games. Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations.", "We review some of our recent results (with collaborators) on information processing in an ordered linear spaces framework for probabilistic theories. These include demonstrations that many \"inherently quantum\" phenomena are in reality quite general characteristics of non-classical theories, quantum or otherwise. As an example, a set of states in such a theory is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. As another example, information that can be obtained about a system in this framework without causing disturbance to the system state, must be inherently classical. We also review results on teleportation protocols in the framework, and the fact that any non-classical theory without entanglement allows exponentially secure bit commitment in this framework. Finally, we sketch some ways of formulating our framework in terms of categories, and in this light consider the relation of our work to that of Abramsky, Coecke, Selinger, Baez and others on information processing and other aspects of theories formulated categorically.", "We consider the image of some classes of bipartite quantum states under a tensor product of random quantum channels. Depending on natural assumptions that we make on the states, the eigenvalues of their outputs have new properties which we describe. Our motivation is provided by the additivity questions in quantum information theory, and we build on the idea that a Bell state sent through a product of conjugated random channels has at least one large eigenvalue. We generalize this setting in two directions. First, we investigate general entangled pure inputs and show that Bell states give the least entropy among those inputs in the asymptotic limit. 
We then study mixed input states, and obtain new multi-scale random matrix models that allow to quantify the difference of the outputs’ eigenvalues between a quantum channel and its complementary version in the case of a non-pure input.", "Quantum information theory is a multidisciplinary field whose objective is to understand what happens when information is stored in the state of a quantum system. Quantum mechanics provides us with a new resource, called quantum entanglement, which can be exploited to achieve novel tasks such as teleportation and superdense coding. Current technologies allow the transmission of entangled photon pairs across distances up to roughly 100 kilometers. For longer distances, noise arising from various sources degrade the transmission of entanglement to the point that it becomes impossible to use the entanglement as a resource for future tasks. One strategy for dealing with this difficulty is to employ quantum repeaters, stations intermediate between the sender and receiver that can participate in the process of entanglement distillation, thereby improving on what the sender and receiver could do on their own. Motivated by the problem of designing quantum repeaters, we study entanglement distillation between two parties, Alice and Bob, starting from a mixed state and with the help of repeater stations. We extend the notion of entanglement of assistance to arbitrary tripartite states and exhibit a protocol, based on a random coding strategy, for extracting pure entanglement. We use these results to find achievable rates for the more general scenario, where many spatially separated repeaters help two recipients distill entanglement. We also study multiparty quantum communication protocols in a more general context. We give a new protocol for the task of multiparty state merging. The previous multiparty state merging protocol required the use of time-sharing, an impossible strategy when a single copy of the input state is available to the parties. Our protocol does not require time-sharing for distributed compression of two senders. In the one-shot regime, we can achieve multiparty state merging with entanglement costs not restricted to corner points of the entanglement cost region. Our analysis of the entanglement cost is performed using (smooth) min- and max-entropies. We illustrate the benefits of our approach by looking at different examples." ] }
cs0306044
2952425141
We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of earlier work, which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction.
In addition, there is a long history of interest in optimality of a distributed algorithm given certain conditions, such as a particular pattern of failures @cite_23 @cite_26 @cite_11 @cite_35 @cite_27 @cite_33 , or a particular pattern of message delivery @cite_1 @cite_42 @cite_3 . In a sense, work on optimality envisions a fundamentally different role for the adversary in which it is trying to produce bad performance for both the candidate and champion algorithms; in contrast, the adversary used in competitive analysis usually cooperates with the champion.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_33", "@cite_42", "@cite_1", "@cite_3", "@cite_27", "@cite_23", "@cite_11" ], "mid": [ "2134659242", "2139297408", "1726079515", "2963230393" ], "abstract": [ "We consider optimal load balancing in a distributed computing environment consisting of homogeneous unreliable processors. Each processor receives its own sequence of tasks from outside users, some of which can be redirected to the other processors. Processing times are independent and identically distributed with an arbitrary distribution. The arrival sequence of outside tasks to each processor may be arbitrary as long as it is independent of the state of the system. Processors may fail, with arbitrary failure and repair processes that are also independent of the state of the system. The only information available to a processor is the history of its decisions for routing work to other processors, and the arrival times of its own arrival sequence. We prove the optimality of the round-robin policy, in which each processor sends all the tasks that can be redirected to each of the other processors in turn. We show that, among all policies that balance workload, round robin stochastically minimizes the nth task completion time for all n, and minimizes response times and queue lengths in a separable increasing convex sense for the entire system. We also show that if there is a single centralized controller, round-robin is the optimal policy, and a single controller using round-robin routing is better than the optimal distributed system in which each processor routes its own arrivals. Again \"optimal\" and \"better\" are in the sense of stochastically minimizing task completion times, and minimizing response time and queue lengths in the separable increasing convex sense.", "In this work, we study the notion of competing campaigns in a social network and address the problem of influence limitation where a \"bad\" campaign starts propagating from a certain node in the network and use the notion of limiting campaigns to counteract the effect of misinformation. The problem can be summarized as identifying a subset of individuals that need to be convinced to adopt the competing (or \"good\") campaign so as to minimize the number of people that adopt the \"bad\" campaign at the end of both propagation processes. We show that this optimization problem is NP-hard and provide approximation guarantees for a greedy solution for various definitions of this problem by proving that they are submodular. We experimentally compare the performance of the greedy method to various heuristics. The experiments reveal that in most cases inexpensive heuristics such as degree centrality compare well with the greedy approach. We also study the influence limitation problem in the presence of missing data where the current states of nodes in the network are only known with a certain probability and show that prediction in this setting is a supermodular problem. We propose a prediction algorithm that is based on generating random spanning trees and evaluate the performance of this approach. 
The experiments reveal that using the prediction algorithm, we are able to tolerate about 90% missing data before the performance of the algorithm starts degrading and even with large amounts of missing data the performance degrades only to 75% of the performance that would be achieved with complete data.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\". Sorting networks possess symmetry that can be used to generate a few representatives. These representatives can be efficiently encoded using regular expressions. We construct SAT formulas whose unsatisfiability is sufficient to show optimality. Resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ≤ 16 inputs, quoting optimality for n ≤ 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ≤ n ≤ 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. Exploiting symmetry, we construct a small set R_n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R_n . For each network in R_n , we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ≤ 10 inputs, our algorithm is orders of magnitude faster than prior ones.", "We introduce randomized Limited View (LV) adversary codes that provide protection against an adversary that uses their partial view of the channel to construct an adversarial error vector that is added to the channel. For a codeword of length N, the adversary selects a subset of size ρ_r N of components to “see”, and then “adds” an adversarial error vector of weight ρ_w N to the codeword. Performance of the code is measured by the probability of the decoder failure in recovering the sent message. An (N, q^{RN}, δ)-limited view adversary code is a code of rate R that ensures that the success chance of the adversary in making the decoder fail is bounded by δ. Our main motivation to study these codes is providing protection for wireless communication at the physical layer of networks. We formalize the definition of adversarial error and decoder failure, construct a code with efficient encoding and decoding that allows the adversary to, depending on the code rate, read up to half of the sent codeword and add error on the same coordinates. The code is non-linear, has an efficient decoding algorithm, and is constructed using a message authentication code (MAC) and a Folded Reed-Solomon (FRS) code. The decoding algorithm uses an innovative approach that combines the list decoding algorithm of the FRS codes and the MAC verification algorithm to eliminate the exponential size of the list output from the decoding algorithm. We discuss our results and future work." ] }
cs0306048
1675778287
Dataset storage, exchange, and access play a critical role in scientific applications. For such purposes netCDF serves as a portable and efficient file format and programming interface, which is popular in numerous scientific application domains. However, the original interface does not provide an efficient mechanism for parallel data storage and access. In this work, we present a new parallel interface for writing and reading netCDF datasets. This interface is derived with minimum changes from the serial netCDF interface but defines semantics for parallel access and is tailored for high performance. The underlying parallel I/O is achieved through MPI-IO, allowing for dramatic performance gains through the use of collective I/O optimizations. We compare the implementation strategies with HDF5 and analyze both. Our tests indicate programming convenience and significant I/O performance improvement with this parallel netCDF interface.
MPI-IO is a parallel I/O interface specified in the MPI-2 standard. It is implemented and used on a wide range of platforms. The most popular implementation, ROMIO @cite_13 , is implemented portably on top of an abstract I/O device layer @cite_1 @cite_22 that enables portability to new underlying I/O systems. One of the most important features in ROMIO is collective I/O operations, which adopt a two-phase I/O strategy @cite_11 @cite_20 @cite_18 @cite_2 and improve parallel I/O performance by significantly reducing the number of I/O requests that would otherwise be issued as many small, noncontiguous accesses. However, MPI-IO reads and writes data in a raw format without providing any functionality to effectively manage the associated metadata. Nor does it guarantee data portability, thereby making it inconvenient for scientists to organize, transfer, and share their application data.
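As a concrete illustration of the collective I/O interface discussed above, the following sketch writes one contiguous block per process with a single collective call. It uses mpi4py's bindings to the standard MPI-IO routines; the file name and block size are arbitrary, and the snippet only shows the shape of a collective write, not the parallel netCDF or ROMIO internals.

```python
# Minimal collective-write sketch using mpi4py's MPI-IO bindings.
# Assumes mpi4py and numpy are installed; file name and sizes are arbitrary.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process contributes a contiguous block of 1024 doubles.
block = np.full(1024, rank, dtype="d")

fh = MPI.File.Open(comm, "blocks.dat",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)

# Collective write: all processes call Write_at_all together, which gives
# the MPI-IO layer (e.g., ROMIO's two-phase optimization) the chance to
# merge the per-process requests into fewer, larger file accesses.
offset = rank * block.nbytes          # byte offset with the default file view
fh.Write_at_all(offset, block)

fh.Close()
```

With independent (non-collective) writes each process would issue its own request; the collective form is what allows the library to aggregate them.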
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_1", "@cite_2", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2174300520", "2104486653", "2567936601", "2111925167" ], "abstract": [ "The I/O access patterns of parallel programs often consist of accesses to a large number of small, noncontiguous pieces of data. If an application's I/O needs are met by making many small, distinct I/O requests, however, the I/O performance degrades drastically. To avoid this problem, MPI-IO allows users to access a noncontiguous data set with a single I/O function call. This feature provides MPI-IO implementations an opportunity to optimize data access. We describe how our MPI-IO implementation, ROMIO, delivers high performance in the presence of noncontiguous requests. We explain in detail the two key optimizations ROMIO performs: data sieving for noncontiguous requests from one process and collective I/O for noncontiguous requests from multiple processes. We describe how one can implement these optimizations portably on multiple machines and file systems, control their memory requirements, and also achieve high performance. We demonstrate the performance and portability with performance results for three applications--an astrophysics-application template (DIST3D), the NAS BTIO benchmark, and an unstructured code (UNSTRUC)--on five different parallel machines: HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, and SGI Origin2000.", "We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance. One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, lseek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, noncontiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, and file preallocation. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance.", "ROMIO is a high-performance, portable implementation of MPI-IO (the I/O chapter in MPI-2). This document describes how to install and use ROMIO version 1.0.0 on the following machines: IBM SP; Intel Paragon; HP Convex Exemplar; SGI Origin 2000, Challenge, and Power Challenge; and networks of workstations (Sun4, Solaris, IBM, DEC, SGI, HP, FreeBSD, and Linux).", "We propose a strategy for implementing parallel I/O interfaces portably and efficiently. We have defined an abstract device interface for parallel I/O, called ADIO.
Any parallel I/O API can be implemented on multiple file systems by implementing the API portably on top of ADIO, and implementing only ADIO on different file systems. This approach simplifies the task of implementing an API and yet exploits the specific high performance features of individual file systems. We have used ADIO to implement the Intel PFS interface and subsets of MPI-IO and IBM PIOFS interfaces on PFS, PIOFS, Unix, and NFS file systems. Our performance studies indicate that the overhead of using ADIO as an implementation strategy is very low." ] }
cs0305010
1668544585
Numerous systems for the dissemination, retrieval, and archiving of documents have been developed in the past. These systems often focus on only one of these aspects and are hard to extend and combine; typically, the transmission protocols, the query and filtering languages, and the interfaces to other systems are all fixed. We instead envisage the seamless establishment of networks among the providers, repositories, and consumers of information that support information retrieval and dissemination while remaining highly interoperable and extensible. We propose a framework with a single event-based mechanism that unifies document storage, retrieval, and dissemination. This framework is completely open with respect to document and metadata formats, transmission protocols, and filtering mechanisms. It specifies a high-level building kit by which arbitrary processors for document streams can be incorporated to support the retrieval, transformation, aggregation, and disaggregation of documents. Using the same kit, interfaces for different transmission protocols can be added easily to enable communication with various information sources and consumers.
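To make the building-kit idea concrete, the following C sketch shows one plausible shape of a processor interface for document streams; all names (doc_event, stream_processor, run_chain) are invented for illustration and are not part of the framework itself:

/* Hypothetical sketch of a building-kit interface for document-stream
 * processors; every identifier here is illustrative only. */
#include <stddef.h>

typedef struct {
    const char *format;     /* e.g. a MIME type for the payload      */
    const char *metadata;   /* opaque, format-specific metadata      */
    const void *payload;    /* the document bytes themselves         */
    size_t      length;
} doc_event;

typedef struct stream_processor {
    /* Return a (possibly transformed) event to pass downstream,
     * or NULL to drop the document (filtering). */
    doc_event *(*process)(struct stream_processor *self, doc_event *ev);
    void       *state;      /* processor-private state               */
} stream_processor;

/* Push one event through a chain of processors: retrieval,
 * transformation, aggregation, and protocol adapters are simply
 * different process() implementations plugged into the same chain. */
doc_event *run_chain(stream_processor **chain, size_t n, doc_event *ev)
{
    for (size_t i = 0; i < n && ev != NULL; i++)
        ev = chain[i]->process(chain[i], ev);
    return ev;              /* NULL if some stage filtered it out */
}

Under this reading, openness comes from the fact that the kit itself interprets neither the payload nor the metadata format; only the processors assign meaning to them.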
@cite_12 is a push-model publish/subscribe system for alerting within a wide-area network. It achieves scalability by distributing filters over servers within the network and saves bandwidth by filtering close to the event sources and by bundling similar subscriptions. Siena is modular and offers sophisticated filtering mechanisms, including dynamic configuration and distribution. However, it lacks openness, document stream transformation, and scheduling.
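As a rough sketch of content-based filtering in the spirit of Siena (this is not Siena's actual data model or API; the attribute representation and operators are simplified assumptions), a subscription can be modelled as a conjunction of attribute constraints that a notification must satisfy:

/* Simplified content-based matching; not Siena's real interface. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct { const char *name; const char *value; } attribute;

typedef enum { OP_EQ, OP_PREFIX } op_t;

typedef struct { const char *name; op_t op; const char *value; } constraint;

/* A notification matches a subscription iff every constraint is
 * satisfied by some attribute of the notification (conjunction). */
bool matches(const attribute *attrs, size_t n_attrs,
             const constraint *sub, size_t n_constraints)
{
    for (size_t c = 0; c < n_constraints; c++) {
        bool satisfied = false;
        for (size_t a = 0; a < n_attrs && !satisfied; a++) {
            if (strcmp(attrs[a].name, sub[c].name) != 0)
                continue;
            if (sub[c].op == OP_EQ)
                satisfied = strcmp(attrs[a].value, sub[c].value) == 0;
            else /* OP_PREFIX */
                satisfied = strncmp(attrs[a].value, sub[c].value,
                                    strlen(sub[c].value)) == 0;
        }
        if (!satisfied)
            return false;   /* one unmet constraint rejects the event */
    }
    return true;
}

A broker near the event source evaluates such predicates and forwards a notification only when some downstream subscription matches it, which is how filtering close to the sources saves bandwidth.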
{ "cite_N": [ "@cite_12" ], "mid": [ "2103856529", "2159393864", "2131975004", "1488414089" ], "abstract": [ "The publish/subscribe (pub/sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extending a pub/sub system in wireless networks has become a promising topic. However, most existing works focus on pub/sub systems in infrastructured wireless networks. To adapt pub/sub systems to mobile ad hoc networks, we propose DRIP, a dynamic Voronoi region-based pub/sub protocol. In our design, the network is dynamically divided into several Voronoi regions after choosing proper nodes as broker nodes. Each broker node is used to collect subscriptions and detected events, as well as efficiently notify subscribers with matched events in its Voronoi region. Other nodes join their nearest broker nodes to submit subscriptions, publish events, and wait for notifications of their requested events. Broker nodes cooperate with each other for sharing subscriptions and useful events. Our proposal includes two major components: a Voronoi regions construction protocol, and a delivery mechanism that implements the pub/sub paradigm. The effectiveness of DRIP is demonstrated through comprehensive simulation studies.", "Location-based services have become widely available on mobile devices. Existing methods employ a pull model or user-initiated model, where a user issues a query to a server which replies with location-aware answers. To provide users with instant replies, a push model or server-initiated model is becoming an inevitable computing model in the next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. This calls for a high-performance location-aware publish/subscribe system to deliver publishers' messages to relevant subscribers. In this paper, we address the research challenges that arise in designing a location-aware publish/subscribe system. We propose an rtree based index structure by integrating textual descriptions into rtree nodes. We devise efficient filtering algorithms and develop effective pruning techniques to improve filtering efficiency. Experimental results show that our method achieves high performance. For example, our method can filter 500 tweets in a second for 10 million registered subscriptions on a commodity computer.", "The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service’s clients can quickly overwhelm a centralized solution.
The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service’s interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used", "With the rapid progress of mobile Internet and the growing popularity of smartphones, location-aware publish/subscribe systems have recently attracted significant attention. Different from traditional content-based publish/subscribe, subscriptions registered by subscribers and messages published by publishers include both spatial information and textual descriptions, and messages should be delivered to relevant subscribers whose subscriptions have high relevancy to the messages. To evaluate the relevancy between spatio-textual messages and subscriptions, we should combine the spatial proximity and textual relevancy. Since subscribers have different preferences - some subscribers prefer messages with high spatial proximity and some subscribers pay more attention to messages with high textual relevancy, it calls for new location-aware publish/subscribe techniques to meet various needs from different subscribers. In this paper, we allow subscribers to parameterize their subscriptions and study the location-aware publish/subscribe problem on parameterized spatio-textual subscriptions. One big challenge is to achieve high performance. To meet this requirement, we propose a filter-verification framework to efficiently deliver messages to relevant subscribers. In the filter step, we devise effective filters to prune large numbers of irrelevant results and obtain some candidates. In the verification step, we verify the candidates to generate the answers. We propose three effective filters by integrating prefix filtering and spatial pruning techniques. Experimental results show our method achieves higher performance and better quality than baseline approaches." ] }
math0304100
2115080784
The Shub-Smale Tau Conjecture is a hypothesis relating the number of integral roots of a polynomial f in one variable to the Straight-Line Program (SLP) complexity of f. A consequence of the truth of this conjecture is that, for the Blum-Shub-Smale model over the complex numbers, P differs from NP. We prove two weak versions of the Tau Conjecture and, in so doing, show that the Tau Conjecture follows from an even more plausible hypothesis. Our results follow from a new p-adic analogue of earlier work relating real algebraic geometry to additive complexity. For instance, we can show that a nonzero univariate polynomial of additive complexity s can have no more than 15 + s^3 (s+1) (7.5)^s s! = O(e^{s log s}) roots in the 2-adic rational numbers Q_2, thus dramatically improving an earlier result of the author. This immediately implies the same bound on the number of ordinary rational roots, whereas the best previous upper bound via earlier techniques from real algebraic geometry was a quantity in Omega((22.6)^{s^2}). This paper presents another step in the author's program of establishing an algorithmic arithmetic version of fewnomial theory.
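As a small illustration of the quantities the conjecture relates (the example is ours, not taken from the paper; tau(f) denotes the SLP complexity of f), repeated squaring yields polynomials of enormous degree with tiny SLP complexity and very few integer roots:

% Illustration only: a short SLP computing f(x) = x^{2^k} - 1.
\[
  z_0 = x, \qquad z_i = z_{i-1} \cdot z_{i-1} \quad (1 \le i \le k), \qquad z_{k+1} = z_k - 1,
\]
\[
  \text{so } \tau\!\bigl(x^{2^k} - 1\bigr) \le k + 1, \qquad
  \text{while its only integer roots are } x = \pm 1 \ (k \ge 1).
\]

The conjecture asserts that the number of integral roots is always bounded by a polynomial in the SLP complexity, which this family satisfies with plenty of room to spare.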
That the @math -conjecture is still open is a testament to how little we know about the complexity measures @math and @math . For example, no method more elegant than brute-force enumeration is known for computing @math for a fixed polynomial. Moreover, even the computability of additive complexity remains an open question, although a more efficient variant (which also allows radicals) can be computed in triply exponential time @cite_12 .
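For concreteness (again our own illustration, writing sigma(f) for the additive complexity of f), sigma charges only for additions and subtractions, so the family used above is even cheaper under this measure:

% Illustration only: additive complexity counts just + and -.
\[
  \sigma\!\bigl(x^{2^k} - 1\bigr) \le 1,
\]

since x^{2^k} is obtained by free multiplications (repeated squaring) and a single subtraction finishes the computation. Consistently with a bound of the form O(e^{s log s}) in the additive complexity s, this polynomial has only the two roots x = +1 and x = -1 in Q_2.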
{ "cite_N": [ "@cite_12" ], "mid": [ "2807350215", "2949121808", "2947949186", "2950708966" ], "abstract": [ "We show that any language in nondeterministic time @math , where the number of iterated exponentials is an arbitrary function @math , can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness @math and soundness @math , where the number of iterated exponentials is @math and @math is a universal constant. The result was previously known for @math and @math ; we obtain it for any time-constructible function @math . The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result (unpublished) on the uncomputability of the entangled value of multiprover games. Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations.", "Let @math be a @math -ary predicate over a finite alphabet. Consider a random CSP @math instance @math over @math variables with @math constraints. When @math the instance @math will be unsatisfiable with high probability, and we want to find a refutation - i.e., a certificate of unsatisfiability. When @math is the @math -ary OR predicate, this is the well studied problem of refuting random @math -SAT formulas, and an efficient algorithm is known only when @math . Understanding the density required for refutation of other predicates is important in cryptography, proof complexity, and learning theory. Previously, it was known that for a @math -ary predicate, having @math constraints suffices for refutation. We give a criterion for predicates that often yields efficient refutation algorithms at much lower densities. Specifically, if @math fails to support a @math -wise uniform distribution, then there is an efficient algorithm that refutes random CSP @math instances @math whp when @math . Indeed, our algorithm will \"somewhat strongly\" refute @math , certifying @math , if @math then we get the strongest possible refutation, certifying @math . This last result is new even in the context of random @math -SAT. Regarding the optimality of our @math requirement, prior work on SDP hierarchies has given some evidence that efficient refutation of random CSP @math may be impossible when @math . Thus there is an indication our algorithm's dependence on @math is optimal for every @math , at least in the context of SDP hierarchies. Along these lines, we show that our refutation algorithm can be carried out by the @math -round SOS SDP hierarchy. Finally, as an application of our result, we falsify assumptions used to show hardness-of-learning results in recent work of Daniely, Linial, and Shalev-Shwartz.", "We study the problem of approximating the commuting-operator value of a two-player non-local game. It is well-known that it is @math -complete to decide whether the classical value of a non-local game is 1 or @math . 
Furthermore, as long as @math is small enough, this result does not depend on the gap @math . In contrast, a recent result of Fitzsimons, Ji, Vidick, and Yuen shows that the complexity of computing the quantum value grows without bound as the gap @math decreases. In this paper, we show that this also holds for the commuting-operator value of a game. Specifically, in the language of multi-prover interactive proofs, we show that the power of @math (proofs with two provers, one round, completeness probability @math , soundness probability @math , and commuting-operator strategies) can increase without bound as the gap @math gets arbitrarily small. Our results also extend naturally in two ways, to perfect zero-knowledge protocols, and to lower bounds on the complexity of computing the approximately-commuting value of a game. Thus we get lower bounds on the complexity class @math - @math of perfect zero-knowledge multi-prover proofs with approximately-commuting operator strategies, as the gap @math gets arbitrarily small. While we do not know any computable time upper bound on the class @math , a result of the first author and Vidick shows that for @math and @math , the class @math , with constant communication from the provers, is contained in @math . We give a lower bound of @math (ignoring constants inside the function) for this class, which is tight up to polynomial factors assuming the exponential time hypothesis.", "One of the fundamental questions of Algorithmic Mechanism Design is whether there exists an inherent clash between truthfulness and computational tractability: in particular, whether polynomial-time truthful mechanisms for combinatorial auctions are provably weaker in terms of approximation ratio than non-truthful ones. This question was very recently answered for universally truthful mechanisms for combinatorial auctions D11 , and even for truthful-in-expectation mechanisms DughmiV11 . However, both of these results are based on information-theoretic arguments for valuations given by a value oracle, and leave open the possibility of polynomial-time truthful mechanisms for succinctly described classes of valuations. This paper is the first to prove computational hardness results for truthful mechanisms for combinatorial auctions with succinctly described valuations. We prove that there is a class of succinctly represented submodular valuations for which no deterministic truthful mechanism provides an @math -approximation for a constant @math , unless @math ( @math denotes the number of items). Furthermore, we prove that even truthful-in-expectation mechanisms cannot approximate combinatorial auctions with certain succinctly described submodular valuations better than within @math , where @math is the number of bidders and @math some absolute constant, unless @math . In addition, we prove computational hardness results for two related problems." ] }