aid: string (lengths 9–15)
mid: string (lengths 7–10)
abstract: string (lengths 78–2.56k)
related_work: string (lengths 92–1.77k)
ref_abstract: dict
aid: 1906.00850
mid: 2947982151
Interest in smart cities is rising rapidly due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Because of the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" to transport data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. Our experimental results indicate that the proposed algorithm can reduce the overall delay relative to the baseline by 13% to 258%.
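The ensemble idea in the abstract above can be sketched in a few lines: several online hiring policies observe the same stream of candidate vehicles in passive mode, and the ensemble delegates each real decision to whichever policy has achieved the lowest average delay over a recent window. This is an illustrative sketch under assumed details (the `ThresholdPolicy` observation rule, the window size, and the synthetic delay stream are not from the paper):

```python
import random
from collections import deque

class ThresholdPolicy:
    """Secretary-style rule: observe the first `observe` candidates,
    then hire the first one better than the best seen so far."""
    def __init__(self, observe):
        self.observe = observe

    def pick(self, delays):
        # delays: estimated delivery delay of each candidate, in arrival order
        best_seen = min(delays[:self.observe], default=float("inf"))
        for d in delays[self.observe:]:
            if d < best_seen:
                return d
        return delays[-1]  # forced to hire the last candidate

class Ensemble:
    """Runs every policy passively, delegates to the recent best performer."""
    def __init__(self, policies, window=20):
        self.policies = policies
        self.history = [deque(maxlen=window) for _ in policies]

    def pick(self, delays):
        # Record the delay each policy would have achieved on this stream.
        results = [p.pick(delays) for p in self.policies]
        for h, r in zip(self.history, results):
            h.append(r)
        # Delegate to the policy with the lowest recent average delay.
        avg = [sum(h) / len(h) for h in self.history]
        return results[avg.index(min(avg))]

random.seed(0)
ens = Ensemble([ThresholdPolicy(k) for k in (2, 5, 10)])
# Synthetic candidate streams: 50 decision rounds, 30 vehicles each.
stream = [[random.uniform(1, 60) for _ in range(30)] for _ in range(50)]
chosen = [ens.pick(s) for s in stream]
```

Each hired delay is necessarily one of the candidates offered in that round, so the ensemble can never do worse than the worst candidate and tracks whichever fixed policy happens to suit the current traffic.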
In @cite_1 , the authors state that existing network infrastructure in smart cities cannot sustain the traffic generated by sensors; overcoming this problem would require investment in telecommunication infrastructure. Instead, the authors propose to exploit buses in a Delay Tolerant Network (DTN) to transfer data in smart cities. In @cite_5 , the authors introduce mobile cloud servers, installed on vehicles, and use them in relief efforts for large-scale disasters to collect and share data. These mobile cloud servers convey data among isolated shelters while traveling and finally return to the disaster relief headquarters. Vehicles exchange data while waiting at the disaster relief headquarters, which is connected to the Internet.
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2782962104", "2607377528", "2809740924", "2050848313" ], "abstract": [ "During large-scale disasters, such as the Great East Japan Earthquake in 2011 or Kumamoto huge Earthquake in 2016, many regions were isolated from critical information exchanges due to problems with communication infrastructures. In those serious disasters, quick and flexible disaster recovery network is required to deliver the disaster related information after disaster. In this paper, mobile cloud computing for vehicle server for information exchange among isolated shelters in such cases is introduced. The vehicle with mobile cloud server traverses the isolated shelters and exchanges information and returns to the disaster headquarter which is connected to Internet. DTN function is introduced to store, carry and exchange message as a message ferry among the shelters even in the challenged network environment where wired and wireless communication means are completely damaged. The prototype system is constructed using Wi-Fi network as mobility network and a note PC mobile cloud server and IBR-DTN and DTN2 software as the DTN function.", "Sensors in future smart cities will continuously monitor the environment in order to prevent critical situations and waste of resources or to offer new services to end users. Likely, the existing networks will not be able to sustain such a traffic without huge investments in the telecommunication infrastructure. One possible solution to overcome these problems is to apply the Delay Tolerant Network (DTN) paradigm. This paper presents the Sink and Delay Aware Bus (S&DA-Bus) routing protocol, a DTN routing protocol designed for smart cities able to exploit mobility of people, vehicles and buses roaming around the city. 
Particular attention is put on the public transportation system: S&DA-Bus takes advantage of the predictable and quasi-periodic mobility that characterizes it.", "Abstract With the rapid increase in the development of the Internet of Things and 5G networks in the smart city context, a large amount of data (i.e., big data) is expected to be generated, resulting in increased latency for the traditional cloud computing paradigm. To reduce the latency, mobile edge computing has been considered for offloading a part of the workload from mobile devices to nearby edge servers that have sufficient computation resources. Although there has been significant research in the field of mobile edge computing, little attention has been given to understanding the placement of edge servers in smart cities to optimize the mobile edge computing network performance. In this paper, we study the edge server placement problem in mobile edge computing environments for smart cities. First, we formulate the problem as a multi-objective constraint optimization problem that places edge servers in some strategic locations with the objective to make balance the workloads of edge servers and minimize the access delay between the mobile user and edge server. Then, we adopt mixed integer programming to find the optimal solution. Experimental results based on Shanghai Telecom’s base station dataset show that our approach outperforms several representative approaches in terms of access delay and workload balancing.", "To cope with the explosive traffic demands and limited capacity provided by the current cellular networks, Delay Tolerant Networking (DTN) is used to migrate traffic from the cellular networks to the free and high capacity device-to-device networks. The current DTN-based mobile data offloading models do not address the heterogeneity of mobile traffic and are based on simple network assumptions. 
In this paper, we establish a mathematical framework to study the problem of multiple mobile data offloading under realistic network assumptions, where 1) mobile data is heterogeneous in terms of size and lifetime, 2) mobile users have different data subscribing interests, and 3) the storage of offloading helpers is limited. We formulate the maximum mobile data offloading as a Submodular Function Maximization problem with multiple linear constraints of limited storage and propose greedy, approximated and optimal algorithms for different offloading scenarios. We show that our algorithms can effectively offload data to DTNs by extensive simulations which employ real traces of both humans and vehicles." ] }
@cite_18 conduct a study on using taxi cabs as oblivious data mules for data collection and delivery in smart cities. Because they use taxi cabs without any selection criteria, they provide no guarantee on data delivery. They use real taxi traces from the city of Rome and divide the city into blocks of size @math meter @math . Relying only on opportunistic connections between vehicles and nodes, the authors report achieving a coverage of 80%. The aforementioned papers mostly utilize multiple relays for transferring data between source and destination locations. Furthermore, these papers do not approach the ferry selection problem from an online perspective. Conversely, in this paper we propose an approach where each vehicle transfers a data bundle from source to destination without having to use relays, and decisions are made in an online fashion---these assumptions are practical as more vehicles are equipped with on-board units (OBUs) and GPS receivers that provide exact or probabilistic information about the vehicle's path. Additionally, this paper considers online hiring algorithms for data ferry selection.
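The block-coverage metric described above (divide the city into fixed-size blocks, then measure the fraction visited by at least one taxi) can be illustrated with a toy sketch; the block size and trace coordinates here are assumptions for illustration, not values from the cited study:

```python
BLOCK = 500.0  # meters per block side; illustrative value, not from the paper

def block_of(x, y):
    """Map a point in city coordinates (meters) to its grid block."""
    return (int(x // BLOCK), int(y // BLOCK))

def coverage(traces, total_blocks):
    """Fraction of city blocks visited by at least one trace point."""
    visited = {block_of(x, y) for trace in traces for (x, y) in trace}
    return len(visited) / total_blocks

# Two toy taxi traces over a 4x5 grid (20 blocks); 3 distinct blocks visited.
traces = [[(120.0, 80.0), (900.0, 450.0)], [(1600.0, 2300.0)]]
print(coverage(traces, 20))  # → 0.15
```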
{ "cite_N": [ "@cite_18" ], "mid": [ "2254736503", "2364071307", "1988580225", "2584174354" ], "abstract": [ "Abstract How to deliver data to, or collect data from the hundreds of thousands of sensors and actuators integrated in “things” spread across virtually every smart city streets (garbage cans, storm drains, advertising panels, etc.)? The answer to the question is neither straightforward nor unique, given the scale of the issue, the lack of a single administrative entity for such tiny devices (arguably run by a multiplicity of distinct and independent service providers), and the cost and power concerns that their direct connectivity to the cellular network might pose. This paper posits that one possible alternative consists in connecting such devices to their data collection gateways using “oblivious data mules”, namely transport fleets such as taxi cabs which (unlike most data mules considered in past work) have no relation whatsoever with the smart city service providers, nor are required to follow any pre-established or optimized path, nor are willing to share their LTE connectivity. We experimentally evaluate data collection and delivery performance using real world traces gathered over a six month period in the city of Rome. Results suggest that even relatively small fleets, such as an average of about 120 vehicles, operating in parallel in a very large and irregular city such as Rome, can achieve an 80% coverage of the downtown area in less than 24 h.", "The ubiquitous deployment of mobile and sensor technologies has led to both the capacity to observe human behavior in physical (offline) settings as well as to record it. This provides researchers with a new lens to study and better understand the individual decision processes that were previously unobserved. In this paper, we study decision making behavior of 11,196 taxi drivers in a large Asian city using a rich data set consisting of 10.6 million fine-grained GPS trip records. 
These records include detailed taxi GPS trajectories, taxi occupancy data (i.e., whether a taxi was occupied with a passenger or was vacant) and taxi drivers’ daily incomes. This capacity to use data where occupancy of the taxi is known is a distinctive feature of our data set and sets this work apart from prior work which has attempted to study driver behavior. The specific decision we focus on pertains to actions drivers take to find new passengers after they have dropped off their current passengers. In particular, we study the role of information derivable from the GPS trace data (e.g., where passengers are dropped off, where passengers are picked up, longitudinal taxicab travel history with fine-grained time stamps) observable by or made available to drivers in enabling them to learn the distribution of demand for their services over space and time. We conduct our study using a heterogeneous Bayesian learning model. We find strong heterogeneity in individual learning behavior and driving decisions, which is significantly associated with individual economic outcomes. Drivers with higher incomes benefit significantly from their ability to learn from not only demand information directly observable in the local market, but also aggregate information on demand flows across markets. Interestingly, our policy simulations indicate information that is noisy at the individual level becomes valuable after being aggregated across various spatial and temporal dimensions. Moreover, the value of information does not increase monotonically with the scale and frequency of information sharing. Finally, our study has important welfare implications in that efficient information sharing leads to an income increase among all drivers, instead of a redistribution of income between different types of drivers. 
Our work allows us not only to explain driver decision making behavior using these detailed behavioral traces, but also to prescribe information sharing strategy for the firm in order to improve the overall market efficiency.", "Informed driving is increasingly becoming a key feature for increasing the sustainability of taxi companies. The sensors that are installed in each vehicle are providing new opportunities for automatically discovering knowledge, which, in return, delivers information for real-time decision making. Intelligent transportation systems for taxi dispatching and for finding time-saving routes are already exploring these sensing data. This paper introduces a novel methodology for predicting the spatial distribution of taxi-passengers for a short-term time horizon using streaming data. First, the information was aggregated into a histogram time series. Then, three time-series forecasting techniques were combined to originate a prediction. Experimental tests were conducted using the online data that are transmitted by 441 vehicles of a fleet running in the city of Porto, Portugal. The results demonstrated that the proposed framework can provide effective insight into the spatiotemporal distribution of taxi-passenger demand for a 30-min horizon.", "In big cities, taxi service is imbalanced. In some areas, passengers wait too long for a taxi, while in others, many taxis roam without passengers. Knowledge of where a taxi will become available can help us solve the taxi demand imbalance problem. In this paper, we employ a holistic approach to predict taxi demand at high spatial resolution. We showcase our techniques using two real-world data sets, yellow cabs and Uber trips in New York City, and perform an evaluation over 9,940 building blocks in Manhattan. Our approach consists of two key steps. First, we use entropy and the temporal correlation of human mobility to measure the demand uncertainty at the building block level. 
Second, to identify which predictive algorithm can approach the theoretical maximum predictability, we implement and compare three predictors: the Markov predictor (a probability-based predictive algorithm), the Lempel-Ziv-Welch predictor (a sequence-based predictive algorithm), and the Neural Network predictor (a predictive algorithm that uses machine learning). The results show that predictability varies by building block and, on average, the theoretical maximum predictability can be as high as 83%. The performance of the predictors also vary: the Neural Network predictor provides better accuracy for blocks with low predictability, and the Markov predictor provides better accuracy for blocks with high predictability. In blocks with high maximum predictability, the Markov predictor is able to predict the taxi demand with an 89% accuracy, 11% better than the Neural Network predictor, while requiring only 0.03 computation time. These findings indicate that the maximum predictability can be a good metric for selecting prediction algorithms." ] }
aid: 1906.00852
mid: 2947767238
Conventional application of convolutional neural networks (CNNs) to image classification and recognition is based on the assumption that all target classes are equal (i.e., no hierarchy) and exclusive of one another (i.e., no overlap). CNN-based image classifiers built on this assumption therefore cannot take into account an innate hierarchy among target classes (e.g., cats and dogs in animal image classification) or additional information that can be easily derived from the data (e.g., numbers larger than five in the recognition of handwritten digits), resulting in scalability issues when the number of target classes is large. Combining two related but slightly different ideas, hierarchical classification and logical learning by auxiliary inputs, we propose a new learning framework called hierarchical auxiliary learning, which not only addresses the scalability issues with a large number of classes but can also further reduce classification/recognition errors with a reasonable number of classes. In hierarchical auxiliary learning, target classes are semantically or non-semantically grouped into superclasses, which turns the original problem of mapping between an image and its target class into a new problem of mapping between a pair of an image and its superclass and the target class. To take advantage of superclasses, we introduce an auxiliary block into a neural network, which generates auxiliary scores used as additional information for final classification/recognition; in this paper, we add the auxiliary block between the last residual block and the fully-connected output layer of a ResNet. Experimental results demonstrate that the proposed hierarchical auxiliary learning can reduce classification errors by up to 0.56, 1.6, and 3.56 percent on the MNIST, SVHN, and CIFAR-10 datasets, respectively.
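The superclass idea above can be made concrete with a minimal numerical sketch. The grouping of digits into "small" (0–4) and "large" (5–9), and the multiplicative way the auxiliary superclass scores are combined with the fine-class scores, are assumptions for illustration, not the paper's exact auxiliary-block architecture:

```python
import numpy as np

# Hypothetical grouping for digits: superclass 0 = "small" (0-4),
# superclass 1 = "large" (5-9).
superclass_of = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def combine(fine_logits, super_logits):
    """Weight each fine class by the probability of its superclass."""
    p_super = softmax(super_logits)
    p_fine = softmax(fine_logits)
    joint = p_fine * p_super[superclass_of]
    return joint / joint.sum()

# Fine logits slightly favor class 5, but the auxiliary block strongly
# favors the "small" superclass, so class 2 wins after combination.
fine = np.array([0.1, 0.2, 1.9, 0.1, 0.0, 2.0, 0.1, 0.0, 0.2, 0.1])
aux = np.array([3.0, -1.0])
p = combine(fine, aux)
print(p.argmax())  # → 2 (raw argmax is 5, suppressed by its unlikely superclass)
```

This shows the mechanism the framework exploits: auxiliary superclass evidence can overturn a narrow fine-class decision.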
It is well known that transferring learned information to a new task as auxiliary information enables efficient learning of the new task @cite_21 , while providing acquired information from a wider network to a thinner network improves the performance of the thinner network @cite_3 .
{ "cite_N": [ "@cite_21", "@cite_3" ], "mid": [ "2294193936", "1775792793", "2963946410", "54398672" ], "abstract": [ "We consider an interesting problem in this paper that uses transfer learning in two directions to compensate missing knowledge from the target domain. Transfer learning tends to be exploited as a powerful tool that mitigates the discrepancy between different databases used for knowledge transfer. It can also be used for knowledge transfer between different modalities within one database. However, in either case, transfer learning will fail if the target data are missing. To overcome this, we consider knowledge transfer between different databases and modalities simultaneously in a single framework, where missing target data from one database are recovered to facilitate recognition task. We referred to this framework as Latent Low-rank Transfer Subspace Learning method (L2TSL). We first propose to use a low-rank constraint as well as dictionary learning in a learned subspace to guide the knowledge transfer between and within different databases. We then introduce a latent factor to uncover the underlying structure of the missing target data. Next, transfer learning in two directions is proposed to integrate auxiliary database for transfer learning with missing target data. Experimental results of multi-modalities knowledge transfer with missing target data demonstrate that our method can successfully inherit knowledge from the auxiliary database to complete the target domain, and therefore enhance the performance when recognizing data from the modality without any training data.", "Transfer learning is usually exploited to leverage previously well-learned source domain for evaluating the unknown target domain; however, it may fail if no target data are available in the training stage. This problem arises when the data are multi-modal. For example, the target domain is in one modality, while the source domain is in another. 
To overcome this, we first borrow an auxiliary database with complete modalities, then consider knowledge transfer across databases and across modalities within databases simultaneously in a unified framework. The contributions are threefold: 1) a latent factor is introduced to uncover the underlying structure of the missing modality from the known data; 2) transfer learning in two directions allows the data alignment between both modalities and databases, giving rise to a very promising recovery; and 3) an efficient solution with theoretical guarantees to the proposed latent low-rank transfer learning algorithm. Comprehensive experiments on multi-modal knowledge transfer with missing target modality verify that our method can successfully inherit knowledge from both auxiliary database and source modality, and therefore significantly improve the recognition performance even when test modality is inaccessible in the training stage.", "Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. 
Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.", "A touted advantage of symbolic representations is the ease of transferring learned information from one intelligent agent to another. This paper investigates an analogous problem: how to use information from one neural network to help a second network learn a related task. Rather than translate such information into symbolic form (in which it may not be readily expressible), we investigate the direct transfer of information encoded as weights. Here, we focus on how transfer can be used to address the important problem of improving neural network learning speed. First we present an exploratory study of the somewhat surprising effects of pre-setting network weights on subsequent learning. Guided by hypotheses from this study, we sped up back-propagation learning for two speech recognition tasks. By transferring weights from smaller networks trained on subtasks, we achieved speedups of up to an order of magnitude compared with training starting with random weights, even taking into account the time to train the smaller networks. We include results on how transfer scales to a large phoneme recognition problem." ] }
Auxiliary information derived from the input data also improves performance. In stage-wise learning, coarse-to-fine images, subsampled from the original images, are fed to the network step by step to enhance the learning process @cite_22 . The ROCK architecture introduces an auxiliary block that performs multiple auxiliary tasks, extracting useful information from the input and feeding it back into the input of the main task @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_22" ], "mid": [ "2891303672", "2787420051", "2962949867", "2962961439" ], "abstract": [ "Multi-Task Learning (MTL) is appealing for deep learning regularization. In this paper, we tackle a specific MTL context denoted as primary MTL, where the ultimate goal is to improve the performance of a given primary task by leveraging several other auxiliary tasks. Our main methodological contribution is to introduce ROCK, a new generic multi-modal fusion block for deep learning tailored to the primary MTL context. ROCK architecture is based on a residual connection, which makes forward prediction explicitly impacted by the intermediate auxiliary representations. The auxiliary predictor's architecture is also specifically designed to our primary MTL context, by incorporating intensive pooling operators for maximizing complementarity of intermediate representations. Extensive experiments on NYUv2 dataset (object detection with scene classification, depth prediction, and surface normal estimation as auxiliary tasks) validate the relevance of the approach and its superiority to flat MTL approaches. Our method outperforms state-of-the-art object detection models on NYUv2 by a large margin, and is also able to handle large-scale heterogeneous inputs (real and synthetic images) and missing annotation modalities.", "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must be used for classification. 
Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.", "It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network architectures. In this paper we show that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for example, because the local inversion is ill-conditioned, we overcome this by providing an explicit inverse. An analysis of i-RevNet’s learned representations suggests an explanation of the good accuracy by a progressive contraction and linear separation with depth. 
To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural images representations.", "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack." ] }
Numerous approaches to utilize hierarchical class information have also been proposed. One approach connects multi-layer perceptrons (MLPs) and lets each MLP sequentially learn a hierarchical class, with each rear layer taking the output of the preceding layer as its input. Another inserts a coarse category component and a fine category component after a shared layer: classes are classified into K coarse categories, and K fine category components are each targeted at one coarse category. In @cite_9, a CNN learns labels generated by maximum margin clustering at the root node, and images in the same cluster are classified at the leaf nodes.
{ "cite_N": [ "@cite_9" ], "mid": [ "2756815061", "1937922215", "2773003563", "2905563945" ], "abstract": [ "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. 
Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100.", "In image classification, visual separability between different object categories is highly uneven, and some categories are more difficult to distinguish than others. Such difficult categories demand more dedicated classifiers. However, existing deep convolutional neural networks (CNN) are trained as flat N-way classifiers, and few efforts have been made to leverage the hierarchical structure of categories. In this paper, we introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hierarchy. An HD-CNN separates easy classes using a coarse category classifier while distinguishing difficult classes using fine category classifiers. During HD-CNN training, component-wise pretraining is followed by global finetuning with a multinomial logistic loss regularized by a coarse category consistency term. In addition, conditional executions of fine category classifiers and layer parameter compression make HD-CNNs scalable for large-scale visual recognition. We achieve state-of-the-art results on both CIFAR100 and large-scale ImageNet 1000-class benchmark datasets. In our experiments, we build up three different HD-CNNs and they lower the top-1 error of the standard CNNs by 2.65%, 3.1% and 1.1%, respectively.", "Recognizing fine-grained categories (e.g., bird species) highly relies on discriminative part localization and part-based fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that part localization (e.g., head of a bird) and fine-grained feature learning (e.g., head shape) are mutually correlated. In this paper, we propose a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other. 
MA-CNN consists of convolution, channel grouping and part classification sub-networks. The channel grouping network takes as input feature channels from convolutional layers, and generates multiple parts by clustering, weighting and pooling from spatially-correlated channels. The part classification network further classifies an image by each individual part, through which more discriminative fine-grained features can be learned. Two losses are proposed to guide the multi-task learning of channel grouping and part classification, which encourages MA-CNN to generate more discriminative parts from feature channels and learn better fine-grained features from parts in a mutual reinforced way. MA-CNN does not need bounding box part annotation and can be trained end-to-end. We incorporate the learned parts from MA-CNN with part-CNN for recognition, and show the best performances on three challenging published fine-grained datasets, e.g., CUB-Birds, FGVC-Aircraft and Stanford-Cars.", "The availability of large-scale annotated data and the uneven separability of different data categories have become two major impediments of deep learning for image classification. In this paper, we present a semi-supervised hierarchical convolutional neural network (SS-HCNN) to address these two challenges. A large-scale unsupervised maximum margin clustering technique is designed, which splits images into a number of hierarchical clusters iteratively to learn cluster-level CNNs at parent nodes and category-level CNNs at leaf nodes. The splitting uses the similarity of CNN features to group visually similar images into the same cluster, which relieves the uneven data separability constraint. With the hierarchical cluster-level CNNs capturing certain high-level image category information, the category-level CNNs can be trained with a small amount of labeled images, and this relieves the data annotation constraint. 
A novel cluster splitting criterion is also designed, which automatically terminates the image clustering in the tree hierarchy. The proposed SS-HCNN has been evaluated on the CIFAR-100 and ImageNet classification datasets. The experiments show that the SS-HCNN trained using a portion of labeled training images can achieve comparable performance with other fully trained CNNs using all labeled images. Additionally, the SS-HCNN trained using all labeled images clearly outperforms other fully trained CNNs." ] }
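The hierarchical auxiliary learning scheme described in the record above can be sketched in a few lines. This is an illustrative toy only, not the authors' code: all sizes and the random weights are made up, and the "auxiliary block" is reduced to a single linear map from backbone features to superclass scores, whose output is concatenated with the features before the final classifier.

```python
import numpy as np

# Toy sketch (assumption, not the paper's implementation): an auxiliary block
# maps backbone features to superclass scores, and the final classifier sees
# the features concatenated with those auxiliary scores.
rng = np.random.default_rng(0)
n_img, n_feat, n_super, n_class = 3, 8, 2, 4

feats = rng.normal(size=(n_img, n_feat))             # backbone features
W_aux = rng.normal(size=(n_feat, n_super))           # auxiliary block weights
aux_scores = feats @ W_aux                           # superclass scores

W_out = rng.normal(size=(n_feat + n_super, n_class)) # final output layer
logits = np.concatenate([feats, aux_scores], axis=1) @ W_out
preds = logits.argmax(axis=1)                        # one class id per image
```

The key design point is only that the output layer conditions on both the image features and the superclass information, mirroring the (image, superclass) → class mapping described in the abstract.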
1906.00852
2947767238
Conventional application of convolutional neural networks (CNNs) for image classification and recognition is based on the assumption that all target classes are equal (i.e., no hierarchy) and exclusive of one another (i.e., no overlap). CNN-based image classifiers built on this assumption, therefore, cannot take into account an innate hierarchy among target classes (e.g., cats and dogs in animal image classification) or additional information that can be easily derived from the data (e.g., numbers larger than five in the recognition of handwritten digits), thereby resulting in scalability issues when the number of target classes is large. Combining two related but slightly different ideas, hierarchical classification and logical learning by auxiliary inputs, we propose a new learning framework called hierarchical auxiliary learning, which not only addresses the scalability issues with a large number of classes but can also further reduce classification/recognition errors with a reasonable number of classes. In hierarchical auxiliary learning, target classes are semantically or non-semantically grouped into superclasses, which turns the original problem of mapping between an image and its target class into a new problem of mapping between a pair of an image and its superclass and the target class. To take advantage of superclasses, we introduce an auxiliary block into a neural network, which generates auxiliary scores used as additional information for final classification/recognition; in this paper, we add the auxiliary block between the last residual block and the fully-connected output layer of the ResNet. Experimental results demonstrate that the proposed hierarchical auxiliary learning can reduce classification errors by up to 0.56, 1.6 and 3.56 percent on the MNIST, SVHN and CIFAR-10 datasets, respectively.
B-CNN learns coarse-to-fine features by computing losses between superclass labels and the outputs of the branches of the architecture @cite_2, where the total B-CNN loss is a weighted sum of the losses over all branches. In @cite_4, an ultrametric tree built from the semantic meaning of all classes is proposed to exploit hierarchical class information; the probability of each node of the ultrametric tree is the sum of the probabilities of all leaves that have a path to that node.
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "2903794034", "2756815061", "2953280703", "2336829997" ], "abstract": [ "In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to attributes, where the final features are less affected by the attributes. Then, expression-related features are extracted from leaf nodes. Samples are probabilistically assigned to tree nodes at different levels such that expression-related features can be learned from all samples weighted by probabilities. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data. Experimental results on five facial expression datasets have demonstrated that the proposed PAT-CNN outperforms the baseline models by explicitly modeling attributes. More impressively, the PAT-CNN using a single model achieves the best performance for faces in the wild on the SFEW dataset, compared with the state-of-the-art methods using an ensemble of hundreds of CNNs.", "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. 
So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100.", "We present a tree-structured network architecture for large scale image classification. The trunk of the network contains convolutional layers optimized over all classes. At a given depth, the trunk splits into separate branches, each dedicated to discriminate a different subset of classes. Each branch acts as an expert classifying a set of categories that are difficult to tell apart, while the trunk provides common knowledge to all experts in the form of shared features. The training of our \"network of experts\" is completely end-to-end: the partition of categories into disjoint subsets is learned simultaneously with the parameters of the network trunk and the experts are trained jointly by minimizing a single learning objective over all classes. 
The proposed structure can be built from any existing convolutional neural network (CNN). We demonstrate its generality by adapting 4 popular CNNs for image categorization into the form of networks of experts. Our experiments on CIFAR100 and ImageNet show that in every case our method yields a substantial improvement in accuracy over the base CNN, and gives the best result achieved so far on CIFAR100. Finally, the improvement in accuracy comes at little additional cost: compared to the base network, the training time is only moderately increased and the number of parameters is comparable or in some cases even lower.", "We present a tree-structured network architecture for large-scale image classification. The trunk of the network contains convolutional layers optimized over all classes. At a given depth, the trunk splits into separate branches, each dedicated to discriminate a different subset of classes. Each branch acts as an expert classifying a set of categories that are difficult to tell apart, while the trunk provides common knowledge to all experts in the form of shared features. The training of our “network of experts” is completely end-to-end: the partition of categories into disjoint subsets is learned simultaneously with the parameters of the network trunk and the experts are trained jointly by minimizing a single learning objective over all classes. The proposed structure can be built from any existing convolutional neural network (CNN). We demonstrate its generality by adapting 4 popular CNNs for image categorization into the form of networks of experts. Our experiments on CIFAR100 and ImageNet show that in every case our method yields a substantial improvement in accuracy over the base CNN, and gives the best result achieved so far on CIFAR100. 
Finally, the improvement in accuracy comes at little additional cost: compared to the base network, the training time is only moderately increased and the number of parameters is comparable or in some cases even lower." ] }
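The B-CNN training signal described in the passage above, a weighted sum of per-branch losses ordered from coarse to fine, can be sketched as follows. The branch weights and all logits here are hypothetical; in the paper's BT-strategy the weights are shifted from the coarse branch toward the fine branch over the course of training.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy via a numerically stable log-softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
# Coarse branch predicts one of 2 superclasses, fine branch one of 10 classes.
coarse_logits = rng.normal(size=(5, 2));  coarse_y = np.array([0, 1, 0, 1, 0])
fine_logits   = rng.normal(size=(5, 10)); fine_y   = np.array([3, 7, 1, 9, 0])

branch_weights = [0.3, 0.7]  # hypothetical; BT-strategy re-weights over time
total_loss = (branch_weights[0] * cross_entropy(coarse_logits, coarse_y)
              + branch_weights[1] * cross_entropy(fine_logits, fine_y))
```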
1906.00852
2947767238
Conventional application of convolutional neural networks (CNNs) for image classification and recognition is based on the assumption that all target classes are equal (i.e., no hierarchy) and exclusive of one another (i.e., no overlap). CNN-based image classifiers built on this assumption, therefore, cannot take into account an innate hierarchy among target classes (e.g., cats and dogs in animal image classification) or additional information that can be easily derived from the data (e.g., numbers larger than five in the recognition of handwritten digits), thereby resulting in scalability issues when the number of target classes is large. Combining two related but slightly different ideas, hierarchical classification and logical learning by auxiliary inputs, we propose a new learning framework called hierarchical auxiliary learning, which not only addresses the scalability issues with a large number of classes but can also further reduce classification/recognition errors with a reasonable number of classes. In hierarchical auxiliary learning, target classes are semantically or non-semantically grouped into superclasses, which turns the original problem of mapping between an image and its target class into a new problem of mapping between a pair of an image and its superclass and the target class. To take advantage of superclasses, we introduce an auxiliary block into a neural network, which generates auxiliary scores used as additional information for final classification/recognition; in this paper, we add the auxiliary block between the last residual block and the fully-connected output layer of the ResNet. Experimental results demonstrate that the proposed hierarchical auxiliary learning can reduce classification errors by up to 0.56, 1.6 and 3.56 percent on the MNIST, SVHN and CIFAR-10 datasets, respectively.
Furthermore, auxiliary inputs are used to support logical reasoning in @cite_20. Auxiliary inputs based on human knowledge are provided so that the network learns logical reasoning: the network first verifies the logical information against the auxiliary inputs and then proceeds to the next stage.
{ "cite_N": [ "@cite_20" ], "mid": [ "2809918697", "2466714650", "2617995259", "2624614404" ], "abstract": [ "This paper describes a neural network design using auxiliary inputs, namely the indicators, that act as the hints to explain the predicted outcome through logical reasoning, mimicking the human behavior of deductive reasoning. Besides the original network input and output, we add an auxiliary input that reflects the specific logic of the data to formulate a reasoning process for cross-validation. We found that one can design either meaningful indicators, or even meaningless ones, when using such auxiliary inputs, upon which one can use as the basis of reasoning to explain the predicted outputs. As a result, one can formulate different reasonings to explain the predicted results by designing different sets of auxiliary inputs without the loss of trustworthiness of the outcome. This is similar to human explanation process where one can explain the same observation from different perspectives with reasons. We demonstrate our network concept by using the MNIST data with different sets of auxiliary inputs, where a series of design guidelines are concluded. Later, we validated our results by using a set of images taken from a robotic grasping platform. We found that our network enhanced the last 1-2 of the prediction accuracy while eliminating questionable predictions with self-conflicting logics. Future application of our network with auxiliary inputs can be applied to robotic detection problems such as autonomous object grasping, where the logical reasoning can be introduced to optimize robotic learning.", "Our goal is to combine the rich multistep inference of symbolic logical reasoning with the generalization capabilities of neural networks. We are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases (KBs). 
(2015) use RNNs to compose the distributed semantics of multi-hop paths in KBs; however for multiple reasons, the approach lacks accuracy and practicality. This paper proposes three significant modeling advances: (1) we learn to jointly reason about relations, entities, and entity-types; (2) we use neural attention modeling to incorporate multiple paths; (3) we learn to share strength in a single RNN that represents logical composition across all relations. On a large-scale Freebase+ClueWeb prediction task, we achieve 25% error reduction, and a 53% error reduction on sparse relations due to shared strength. On chains of reasoning in WordNet we reduce error in mean quantile by 84% versus previous state-of-the-art. The code and data are available at this https URL
We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations." ] }
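The verify-then-predict behavior attributed to @cite_20 above can be illustrated with a toy consistency check. This is our own sketch, not the paper's network: the "greater than five" indicator mirrors the handwritten-digit example from the abstract earlier in this record, and the helper name is hypothetical.

```python
# Toy sketch (assumption): a digit prediction is accepted only if it agrees
# with an auxiliary logical indicator supplied alongside the input, e.g.
# "this digit is greater than five". Self-conflicting cases are rejected.
def logically_consistent(pred_digit: int, indicator_gt5: bool) -> bool:
    """Return True if the prediction agrees with the auxiliary indicator."""
    return (pred_digit > 5) == indicator_gt5

print(logically_consistent(7, True))    # prediction agrees with the hint
print(logically_consistent(3, True))    # self-conflicting: reject/flag
```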
1906.00928
2947601766
We consider the problem of learning a causal graph in the presence of measurement error. This setting is for example common in genomics, where gene expression is corrupted through the measurement process. We develop a provably consistent procedure for estimating the causal structure in a linear Gaussian structural equation model from corrupted observations on its nodes, under a variety of measurement error models. We provide an estimator based on the method-of-moments, which can be used in conjunction with constraint-based causal structure discovery algorithms. We prove asymptotic consistency of the procedure and also discuss finite-sample considerations. We demonstrate our method's performance through simulations and on real data, where we recover the underlying gene regulatory network from zero-inflated single-cell RNA-seq data.
In the presence of latent variables, identifiability is further weakened (only the so-called PAG is identifiable) and various algorithms have been developed for learning a PAG @cite_13 @cite_22 @cite_4 @cite_19. However, these algorithms cannot estimate causal relations among the latent variables, which is our problem of interest. @cite_28 study identifiability of directed Gaussian graphical models in the presence of a single latent variable. @cite_6, @cite_21, @cite_26 and @cite_23 all consider the problem of learning causal edges among latent variables from the observed variables, i.e., models as in Figure a or generalizations thereof, but under assumptions that may not hold for our applications of interest, namely that the measurement error is independent of the latent variables @cite_6, that the observed variables are a linear function of the latent variables @cite_21, that the observed variables are binary @cite_26, or that each latent variable is non-Gaussian with sufficient outgoing edges to guarantee identifiability @cite_23.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_28", "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_13" ], "mid": [ "2963254467", "2763292376", "2146531590", "2626207843" ], "abstract": [ "We study parameter identifiability of directed Gaussian graphical models with one latent variable. In the scenario we consider, the latent variable is a confounder that forms a source node of the graph and is a parent to all other nodes, which correspond to the observed variables. We give a graphical condition that is sufficient for the Jacobian matrix of the parametrization map to be full rank, which entails that the parametrization is generically finite-to-one, a fact that is sometimes also referred to as local identifiability. We also derive a graphical condition that is necessary for such identifiability. Finally, we give a condition under which generic parameter identifiability can be determined from identifiability of a model associated with a subgraph. The power of these criteria is assessed via an exhaustive algebraic computational study on models with 4, 5, and 6 observable variables.", "Suppose we observe samples of a subset of a collection of random variables. No additional information is provided about the number of latent variables, nor of the relationship between the latent and observed variables. Is it possible to discover the number of latent components, and to learn a statistical model over the entire collection of variables? We address this question in the setting in which the latent and observed variables are jointly Gaussian, with the conditional statistics of the observed variables conditioned on the latent variables being specified by a graphical model. As a first step we give natural conditions under which such latent-variable Gaussian graphical models are identifiable given marginal statistics of only the observed variables. 
Essentially these conditions require that the conditional graphical model among the observed variables is sparse, while the effect of the latent variables is \"spread out\" over most of the observed variables. Next we propose a tractable convex program based on regularized maximum-likelihood for model selection in this latent-variable setting; the regularizer uses both the @math norm and the nuclear norm. Our modeling framework can be viewed as a combination of dimensionality reduction (to identify latent variables) and graphical modeling (to capture remaining statistical structure not attributable to the latent variables), and it consistently estimates both the number of latent components and the conditional graphical model structure among the observed variables. These results are applicable in the high-dimensional setting in which the number of latent observed variables grows with the number of samples of the observed variables. The geometric properties of the algebraic varieties of sparse matrices and of low-rank matrices play an important role in our analysis.", "By taking into account the nonlinear effect of the cause, the inner noise effect, and the measurement distortion effect in the observed variables, the post-nonlinear (PNL) causal model has demonstrated its excellent performance in distinguishing the cause from effect. However, its identifiability has not been properly addressed, and how to apply it in the case of more than two variables is also a problem. In this paper, we conduct a systematic investigation on its identifiability in the two-variable case. We show that this model is identifiable in most cases; by enumerating all possible situations in which the model is not identifiable, we provide sufficient conditions for its identifiability. Simulations are given to support the theoretical results. 
Moreover, in the case of more than two variables, we show that the whole causal structure can be found by applying the PNL causal model to each structure in the Markov equivalent class and testing if the disturbance is independent of the direct causes for each variable. In this way the exhaustive search over all possible causal structures is avoided.", "Measurement error in the observed values of the variables can greatly change the output of various causal discovery methods. This problem has received much attention in multiple fields, but it is not clear to what extent the causal model for the measurement-error-free variables can be identified in the presence of measurement error with unknown variance. In this paper, we study precise sufficient identifiability conditions for the measurement-error-free causal model and show what information of the causal model can be recovered from observed data. In particular, we present two different sets of identifiability conditions, based on the second-order statistics and higher-order statistics of the data, respectively. The former was inspired by the relationship between the generating model of the measurement-error-contaminated data and the factor analysis model, and the latter makes use of the identifiability result of the over-complete independent component analysis problem." ] }
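The method-of-moments flavor of the measurement-error correction discussed in this record can be illustrated with a minimal example. This is a simplified sketch under stated assumptions, not the paper's estimator: the noise variance is taken as known, the two-node SEM (z2 = 0.8*z1 + noise) is made up, and the point is only that with X = Z + e and independent N(0, s²) noise, cov(X) = cov(Z) + s²I, so subtracting s² from the diagonal recovers the latent covariance that a constraint-based structure learner would consume.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical linear Gaussian SEM on the latent variables: z1 -> z2.
z1 = rng.normal(size=n)
z2 = 0.8 * z1 + rng.normal(size=n)
Z = np.column_stack([z1, z2])

s = 0.5                                  # assumed known measurement-noise std
X = Z + s * rng.normal(size=Z.shape)     # corrupted observations

cov_X = np.cov(X, rowvar=False)
cov_Z_hat = cov_X - s**2 * np.eye(2)     # moment-based correction
```

With these parameters the population latent covariance is [[1, 0.8], [0.8, 1.64]], and `cov_Z_hat` estimates it consistently as n grows.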
1906.00679
2947597348
The holy grail of networking is to create networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough against security attacks to be put in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily-crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities in the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize readers to the danger of adversarial ML by showing how an easily-crafted adversarial ML example can compromise the operations of a cognitive self-driving network. In this paper, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide some guidelines for designing secure ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
With well-known attacks now available in the literature @cite_8, the bar of effort required for launching new attacks has been lowered, since the same canned attacks can be reused by others. Although Sommer and Paxson @cite_14 were probably right in 2010 to downplay the potential of security attacks on ML, saying that ``exploiting the specifics of a machine learning implementation requires significant effort, time, and expertise on the attacker's side,'' the danger is real now that an attack can be launched on ML-based implementations with minimal effort, time, and expertise.
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2969695741", "2603766943", "2951807304", "1882350379" ], "abstract": [ "Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.", "Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. 
We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.", "Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. 
Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.", "Recent work in security and systems has embraced the use of machine learning (ML) techniques for identifying misbehavior, e.g. email spam and fake (Sybil) users in social networks. However, ML models are typically derived from fixed datasets, and must be periodically retrained. In adversarial environments, attackers can adapt by modifying their behavior or even sabotaging ML models by polluting training data. In this paper, we perform an empirical study of adversarial attacks against machine learning models in the context of detecting malicious crowdsourcing systems, where sites connect paying users with workers willing to carry out malicious campaigns. By using human workers, these systems can easily circumvent deployed security mechanisms, e.g. CAPTCHAs. We collect a dataset of malicious workers actively performing tasks on Weibo, China's Twitter, and use it to develop ML-based detectors. 
We show that traditional ML techniques are accurate (95%-99%) in detection but can be highly vulnerable to adversarial attacks, including simple evasion attacks (workers modify their behavior) and powerful poisoning attacks (where administrators tamper with the training set). We quantify the robustness of ML classifiers by evaluating them in a range of practical adversarial models using ground truth data. Our analysis provides a detailed look at practical adversarial attacks on ML models, and helps defenders make informed decisions in the design and configuration of ML detectors." ] }
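As a concrete illustration of how cheaply such a canned attack can be mounted, the sketch below applies the fast gradient sign method (FGSM), one of the standard attacks in this literature, to a toy logistic-regression classifier. The weights and input are made up for illustration; this is not the setup of any cited paper:

```python
import numpy as np

# Toy binary classifier: logistic regression with fixed (hypothetical) weights.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.1

def predict_proba(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: step in the direction that
    increases the cross-entropy loss for the true label y."""
    p = predict_proba(x)
    grad = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad)

x = np.array([0.2, -0.1, 0.4, 0.3])
x_adv = fgsm(x, y=1, eps=0.5)

print(predict_proba(x) > 0.5)      # True: original input is class 1
print(predict_proba(x_adv) > 0.5)  # False: small perturbation flips the label
```

The whole attack is a one-line gradient step, which is exactly why the "effort, time, and expertise" barrier no longer holds.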
1906.00679
2947597348
The holy grail of networking is to create cognitive networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to the progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough to security attacks to be put in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily-crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities for the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize the readers to the danger of adversarial ML by showing how an easily-crafted adversarial ML example can compromise the operations of the cognitive self-driving network. In this paper, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide some guidelines to design secure ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
All classification schemes depicted in the taxonomy are directly related to the intent and goal of the adversary. Most existing adversarial ML attacks are white-box attacks, which are later converted to black-box attacks by exploiting the transferability property of adversarial examples @cite_7 . The transferability property of adversarial ML means that adversarial perturbations generated for one ML model will often mislead other, unseen ML models. Related research on adversarial pattern recognition has been carried out for more than a decade, and even before that there was a smattering of works focused on performing ML in the presence of malicious errors @cite_8 .
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2950774971", "2903785932", "2604505099", "2612637113" ], "abstract": [ "It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, generally exist for deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.", "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. 
Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9 accuracy, our method achieves 55.7 ; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6 accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6 classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10 . Code is available at this https URL.", "It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of targets for generating adversarial perturbations. Based on this, we propose a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection. We find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transfer ability across networks with the same architecture is more significant than in other cases. 
Besides, we show that summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.", "Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time. They often transfer: the same adversarial example fools more than one model. In this work, we propose novel methods for estimating the previously unknown dimensionality of the space of adversarial inputs. We find that adversarial examples span a contiguous subspace of large ( 25) dimensionality. Adversarial subspaces with higher dimensionality are more likely to intersect. We find that for two different models, a significant fraction of their subspaces is shared, thus enabling transferability. In the first quantitative analysis of the similarity of different models' decision boundaries, we show that these boundaries are actually close in arbitrary directions, whether adversarial or benign. We conclude by formally studying the limits of transferability. We derive (1) sufficient conditions on the data distribution that imply transferability for simple model classes and (2) examples of scenarios in which transfer does not occur. These findings indicate that it may be possible to design defenses against transfer-based attacks, even for models that are vulnerable to direct attacks." ] }
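The transferability property described above can be illustrated in a few lines: a perturbation crafted with white-box access to one linear model also flips the decision of a second model with a nearby decision boundary. Both models below are hypothetical stand-ins (hand-picked weights, not trained on any dataset), so this is a sketch of the phenomenon, not a reproduction of the cited experiments:

```python
import numpy as np

# Two "independently trained" linear classifiers with nearby decision
# boundaries, standing in for a target model and an unseen model.
w_a, b_a = np.array([1.0, 1.0]), -1.0     # model A (white-box access)
w_b, b_b = np.array([0.9, 1.1]), -1.05    # model B (never seen by attacker)

def cls(w, b, x):
    """Predicted class: 1 if x is on the positive side of the boundary."""
    return int(x @ w + b > 0)

x = np.array([0.7, 0.6])                  # both models predict class 1
eps = 0.5
x_adv = x - eps * np.sign(w_a)            # FGSM-style step against model A only

print(cls(w_a, b_a, x), cls(w_b, b_b, x))          # 1 1
print(cls(w_a, b_a, x_adv), cls(w_b, b_b, x_adv))  # 0 0: the attack transfers
```

The perturbation was computed from model A's weights alone, yet it crosses model B's boundary too, because the two boundaries are close, which is the geometric intuition behind transfer-based black-box attacks.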
1906.00860
2947878631
We prove the linear stability of slowly rotating Kerr black holes as solutions of the Einstein vacuum equation: linearized perturbations of a Kerr metric decay at an inverse polynomial rate to a linearized Kerr metric plus a pure gauge term. We work in a natural wave map DeTurck gauge and show that the pure gauge term can be taken to lie in a fixed 7-dimensional space with a simple geometric interpretation. Our proof rests on a robust general framework, based on recent advances in microlocal analysis and non-elliptic Fredholm theory, for the analysis of resolvents of operators on asymptotically flat spaces. With the mode stability of the Schwarzschild metric as well as of certain scalar and 1-form wave operators on the Schwarzschild spacetime as an input, we establish the linear stability of slowly rotating Kerr black holes using perturbative arguments; in particular, our proof does not make any use of special algebraic properties of the Kerr metric. The heart of the paper is a detailed description of the resolvent of the linearization of a suitable hyperbolic gauge-fixed Einstein operator at low energies. As in previous work by the second and third authors on the nonlinear stability of cosmological black holes, constraint damping plays an important role. Here, it eliminates certain pathological generalized zero energy states; it also ensures that solutions of our hyperbolic formulation of the linearized Einstein equation have the stated asymptotics and decay for general initial data and forcing terms, which is a useful feature in nonlinear and numerical applications.
In the algebraically more complicated but analytically less degenerate context of cosmological black holes, we recall that Sá Barreto--Zworski @cite_113 studied the distribution of resonances of SdS black holes; exponential decay of linear scalar waves to constants was proved by Bony--Häfner @cite_36 and Melrose--Sá Barreto--Vasy @cite_112 on SdS and by Dyatlov @cite_70 @cite_31 on KdS spacetimes, and substantially refined by Dyatlov @cite_73 to a full resonance expansion. (See @cite_56 for a physical space approach giving superpolynomial energy decay.) Tensor-valued and nonlinear equations on KdS spacetimes were studied in a series of works by Hintz--Vasy @cite_63 @cite_24 @cite_102 @cite_116 @cite_28 . For a physical space approach to resonances, see Warnick @cite_110 , and for the Maxwell equation on SdS spacetimes, see Keller @cite_66 .
{ "cite_N": [ "@cite_36", "@cite_70", "@cite_28", "@cite_112", "@cite_102", "@cite_113", "@cite_56", "@cite_24", "@cite_116", "@cite_63", "@cite_110", "@cite_31", "@cite_73", "@cite_66" ], "mid": [ "1666285156", "2150477501", "2722540036", "1868264628" ], "abstract": [ "This paper contains the first two parts (I-II) of a three-part series concerning the scalar wave equation = 0 on a fixed Kerr background. We here restrict to two cases: (II1) |a| M, general or (II2) |a| < M, axisymmetric. In either case, we prove a version of 'integrated local energy decay', specifically, that the 4-integral of an energy-type density (degenerating in a neighborhood of the Schwarzschild photon sphere and at infinity), integrated over the domain of dependence of a spacelike hypersurface connecting the future event horizon with spacelike infinity or a sphere on null infinity, is bounded by a natural (non-degenerate) energy flux of through . (The case (II1) has in fact been treated previously in our Clay Lecture notes: Lectures on black holes and linear waves, arXiv:0811.0354.) In our forthcoming Part III, the restriction to axisymmetry for the general |a| < M case is removed. The complete proof is surveyed in our companion paper The black hole stability problem for linear scalar perturbations, which includes the essential details of our forthcoming Part III. Together with previous work (see our: A new physical-space approach to decay for the wave equation with applications to black hole spacetimes, in XVIth International Congress on Mathematical Physics, Pavel Exner ed., Prague 2009 pp. 
421-433, 2009, arxiv:0910.4957), this result leads, under suitable assumptions on initial data of , to polynomial decay bounds for the energy flux of through the foliation of the black hole exterior defined by the time translates of a spacelike hypersurface terminating on null infinity, as well as to pointwise decay estimates, of a definitive form useful for nonlinear applications.", "These lecture notes, based on a course given at the Zurich Clay Summer School (June 23-July 18, 2008), review our current mathematical understanding of the global behaviour of waves on black hole exterior backgrounds. Interest in this problem stems from its relationship to the non-linear stability of the black hole spacetimes themselves as solutions to the Einstein equations, one of the central open problems of general relativity. After an introductory discussion of the Schwarzschild geometry and the black hole concept, the classical theorem of Kay and Wald on the boundedness of scalar waves on the exterior region of Schwarzschild is reviewed. The original proof is presented, followed by a new more robust proof of a stronger boundedness statement. The problem of decay of scalar waves on Schwarzschild is then addressed, and a theorem proving quantitative decay is stated and its proof sketched. This decay statement is carefully contrasted with the type of statements derived heuristically in the physics literature for the asymptotic tails of individual spherical harmonics. Following this, our recent proof of the boundedness of solutions to the wave equation on axisymmetric stationary backgrounds (including slowly-rotating Kerr and Kerr-Newman) is reviewed and a new decay result for slowly-rotating Kerr spacetimes is stated and proved. This last result was announced at the summer school and appears in print here for the first time. A discussion of the analogue of these problems for spacetimes with a positive cosmological constant follows. 
Finally, a general framework is given for capturing the red-shift effect for non-extremal black holes. This unifies and extends some of the analysis of the previous sections. The notes end with a collection of open problems.", "In this work, we consider solutions of the Maxwell equations on the Schwarzschild-de Sitter family of black hole spacetimes. We prove that, in the static region bounded by black hole and cosmological horizons, solutions of the Maxwell equations decay to stationary Coulomb solutions at a super-polynomial rate, with decay measured according to ingoing and outgoing null coordinates. Our method employs a differential transformation of Maxwell tensor components to obtain higher-order quantities satisfying a Fackerell-Ipser equation, in the style of Chandrasekhar and the more recent work of Pasqualotto. The analysis of the Fackerell-Ipser equation is accomplished by means of the vector field method, with decay estimates for the higher-order quantities leading to decay estimates for components of the Maxwell tensor.", "We consider solutions to the linear wave equation @math on a non-extremal maximally extended Schwarzschild-de Sitter spacetime arising from arbitrary smooth initial data prescribed on an arbitrary Cauchy hypersurface. (In particular, no symmetry is assumed on initial data, and the support of the solutions may contain the sphere of bifurcation of the black white hole horizons and the cosmological horizons.) We prove that in the region bounded by a set of black white hole horizons and cosmological horizons, solutions @math converge pointwise to a constant faster than any given polynomial rate, where the decay is measured with respect to natural future-directed advanced and retarded time coordinates. We also give such uniform decay bounds for the energy associated to the Killing field as well as for the energy measured by local observers crossing the event horizon. 
The results in particular include decay rates along the horizons themselves. Finally, we discuss the relation of these results to previous heuristic analysis of Price and" ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning the Mallows and Generalized Mallows models. We approach the learning problem by analyzing a more general model which interpolates between the single parameter Mallows Model and the @math parameter Mallows model. We call our model the Mallows Block Model -- referring to the block models that are popular in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, one single sample from the Mallows Block Model is sufficient to estimate the spread parameters with an error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
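For reference, the single-parameter Mallows model discussed in this abstract is the following exponential family over permutations (standard textbook definition, stated here for convenience; @math tokens above mask these quantities):

```latex
P_{\theta,\pi_0}(\pi)
  \;=\; \frac{e^{-\theta\, d_K(\pi,\pi_0)}}{Z(\theta)},
\qquad
Z(\theta)
  \;=\; \sum_{\sigma \in S_n} e^{-\theta\, d_K(\sigma,\pi_0)}
  \;=\; \prod_{i=1}^{n} \frac{1 - e^{-i\theta}}{1 - e^{-\theta}},
```

where \(\pi_0\) is the central ranking, \(\theta > 0\) the spread parameter, and \(d_K\) the Kendall tau distance. The closed-form product for \(Z(\theta)\) is what makes likelihood-based estimation of \(\theta\) tractable when \(\pi_0\) is known.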
There has been a significant volume of research on algorithmic and learning problems related to our work. In the , a finite set @math of rankings is given, and we want to compute the ranking @math . This problem is known to be NP-hard, but it admits a polynomial-time @math -approximation algorithm and a PTAS. When the rankings are i.i.d. samples from a Mallows distribution, consensus ranking is equivalent to computing the maximum likelihood ranking, which does not depend on the spread parameter. Intuitively, the problem of finding the central ranking should not be hard if the probability mass is concentrated around the central ranking. @cite_8 came up with a branch-and-bound technique that relies on this observation. @cite_9 proposed a dynamic programming approach that computes the consensus ranking efficiently under the Mallows model. @cite_10 showed that the central ranking can be recovered from a logarithmic number of i.i.d. samples from a Mallows distribution (see also Theorem ).
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_8" ], "mid": [ "2113815377", "2952852844", "2487418934", "1521197246" ], "abstract": [ "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics.", "The probability that a user will click a search result depends both on its relevance and its position on the results page. The position based model explains this behavior by ascribing to every item an attraction probability, and to every position an examination probability. To be clicked, a result must be both attractive and examined. The probabilities of an item-position pair being clicked thus form the entries of a rank- @math matrix. We propose the learning problem of a Bernoulli rank- @math bandit where at each step, the learning agent chooses a pair of row and column arms, and receives the product of their Bernoulli-distributed values as a reward. This is a special case of the stochastic rank- @math bandit problem considered in recent work that proposed an elimination based algorithm Rank1Elim, and showed that Rank1Elim's regret scales linearly with the number of rows and columns on \"benign\" instances. These are the instances where the minimum of the average row and column rewards @math is bounded away from zero. 
The issue with Rank1Elim is that it fails to be competitive with straightforward bandit strategies as @math . In this paper we propose Rank1ElimKL which simply replaces the (crude) confidence intervals of Rank1Elim with confidence intervals based on Kullback-Leibler (KL) divergences, and with the help of a novel result concerning the scaling of KL divergences we prove that with this change, our algorithm will be competitive no matter the value of @math . Experiments with synthetic data confirm that on benign instances the performance of Rank1ElimKL is significantly better than that of even Rank1Elim, while experiments with models derived from real data confirm that the improvements are significant across the board, regardless of whether the data is benign or not.", "Abstract A partition ( C 1 , C 2 , … , C q ) of G = ( V , E ) into clusters of strong (respectively, weak) diameter d , such that the supergraph obtained by contracting each C i is l -colorable is called a strong (resp., weak) ( d , l ) -network-decomposition. Network-decompositions were introduced in a seminal paper by Awerbuch, Goldberg, Luby and Plotkin in 1989. showed that strong ( d , l ) -network-decompositions with d = l = exp ⁡ O ( log ⁡ n log ⁡ log ⁡ n ) can be computed in distributed deterministic time O ( d ) . Even more importantly, they demonstrated that network-decompositions can be used for a great variety of applications in the message-passing model of distributed computing. The result of was improved by Panconesi and Srinivasan in 1992: in the latter result d = l = exp ⁡ O ( log ⁡ n ) , and the running time is O ( d ) as well. In another remarkable breakthrough Linial and Saks (in 1992) showed that weak ( O ( log ⁡ n ) , O ( log ⁡ n ) ) -network-decompositions can be computed in distributed randomized time O ( log 2 ⁡ n ) . Much more recently Barenboim (2012) devised a distributed randomized constant-time algorithm for computing strong network decompositions with d = O ( 1 ) . 
However, the parameter l in his result is O(n^(1/2+ϵ)). In this paper we drastically improve the result of Barenboim and devise a distributed randomized constant-time algorithm for computing strong (O(1), O(n^ϵ))-network-decompositions. As a corollary we derive a constant-time randomized O(n^ϵ)-approximation algorithm for the distributed minimum coloring problem, improving the previously best-known O(n^(1/2+ϵ)) approximation guarantee. We also derive other improved distributed algorithms for a variety of problems. Most notably, for the extremely well-studied distributed minimum dominating set problem currently there is no known deterministic polylogarithmic-time algorithm. We devise a deterministic polylogarithmic-time approximation algorithm for this problem, addressing an open problem of Lenzen and Wattenhofer (2010).", "We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given an order- @math tensor @math of the form @math , where @math is a signal-to-noise ratio, @math is a unit vector, and @math is a random noise tensor, the goal is to recover the planted vector @math . For the case that @math has iid standard Gaussian entries, we give an efficient algorithm to recover @math whenever @math , and certify that the recovered vector is close to a maximum likelihood estimator, all with high probability over the random choice of @math . The previous best algorithms with provable guarantees required @math . In the regime @math , natural tensor-unfolding-based spectral relaxations for the underlying optimization problem break down (in the sense that their integrality gap is large). To go beyond this barrier, we use convex relaxations based on the sum-of-squares method. Our recovery algorithm proceeds by rounding a degree- @math sum-of-squares relaxation of the maximum-likelihood-estimation problem for the statistical model. 
To complement our algorithmic results, we show that degree- @math sum-of-squares relaxations break down for @math , which demonstrates that improving our current guarantees (by more than logarithmic factors) would require new techniques or might even be intractable. Finally, we show how to exploit additional problem structure in order to solve our sum-of-squares relaxations, up to some approximation, very efficiently. Our fastest algorithm runs in nearly-linear time using shifted (matrix) power iteration and has similar guarantees as above. The analysis of this algorithm also confirms a variant of a conjecture of Montanari and Richard about singular vectors of tensor unfoldings." ] }
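The consensus (Kemeny) ranking problem described above, finding the ranking that minimizes total Kendall tau distance to a sample, can be sketched by brute force. The enumeration over all permutations is only feasible for tiny item sets (the NP-hardness cited above is exactly why the approximation algorithms and PTAS matter); the sample rankings here are made up:

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Kendall tau distance: number of item pairs ordered differently
    by the two rankings."""
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(r1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if pos2[items[i]] > pos2[items[j]]:
                d += 1
    return d

def kemeny_consensus(samples):
    """Brute-force consensus ranking: the permutation minimizing the
    total Kendall tau distance to the samples. Under a Mallows model
    this is the maximum likelihood central ranking, independent of the
    spread parameter."""
    return min(permutations(samples[0]),
               key=lambda pi: sum(kendall_tau(pi, s) for s in samples))

samples = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c"), ("a", "b", "c")]
print(kemeny_consensus(samples))  # ('a', 'b', 'c')
```

Note that the objective used by `kemeny_consensus` involves only pairwise order comparisons, which matches the observation above that the maximum likelihood ranking does not depend on the spread parameter.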
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning the Mallows and Generalized Mallows models. We approach the learning problem by analyzing a more general model which interpolates between the single parameter Mallows Model and the @math parameter Mallows model. We call our model the Mallows Block Model -- referring to the block models that are popular in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, one single sample from the Mallows Block Model is sufficient to estimate the spread parameters with an error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
@cite_5 considered learning the spread parameter of a Mallows model based on a single sample, assuming that the central ranking is known. He studied the asymptotic behavior of his estimator and proved consistency. We strengthen this result by showing that our parameter estimator, based on a single sample, can achieve the optimal error for the Mallows Block Model (Corollary ).
{ "cite_N": [ "@cite_5" ], "mid": [ "2113815377", "1511376624", "1574648663", "2907176385" ], "abstract": [ "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics.", "Asymptotics of the normalizing constant is computed for a class of one parameter exponential families on permutations which includes Mallows model with Spearmans's Footrule and Spearman's Rank Correlation Statistic. The MLE, and a computable approximation of the MLE are shown to be consistent. The pseudo-likelihood estimator of Besag is shown to be @math -consistent. An iterative algorithm (IPFP) is proved to converge to the limiting normalizing constant. The Mallows model with Kendall's Tau is also analyzed to demonstrate flexibility of the tools of this paper.", "We propose a novel parameterized family of Mixed Membership Mallows Models (M4) to account for variability in pairwise comparisons generated by a heterogeneous population of noisy and inconsistent users. M4 models individual preferences as a user-specific probabilistic mixture of shared latent Mallows components. Our key algorithmic insight for estimation is to establish a statistical connection between M4 and topic models by viewing pairwise comparisons as words, and users as documents. 
This key insight leads us to explore Mallows components with a separable structure and leverage recent advances in separable topic discovery. While separability appears to be overly restrictive, we nevertheless show that it is an inevitable outcome of a relatively small number of latent Mallows components in a world of large number of items. We then develop an algorithm based on robust extreme-point identification of convex polygons to learn the reference rankings, and is provably consistent with polynomial sample complexity guarantees. We demonstrate that our new model is empirically competitive with the current state-of-the-art approaches in predicting real-world preferences.", "As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Using Bernoulli dropout with sampling at prediction time has recently been proposed as an efficient and well performing variational inference method for DNNs. However, sampling from other multiplicative noise based variational distributions has not been investigated in depth. We evaluated Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We tested the calibration of the probabilistic predictions of Bayesian convolutional neural networks (CNNs) on MNIST and CIFAR-10. Sampling at prediction time increased the calibration of the DNNs' probabalistic predictions. Sampling weights, whether Gaussian or Bernoulli, led to more robust representation of uncertainty compared to sampling of units. However, using either Gaussian or Bernoulli dropout led to increased test set classification accuracy. 
Based on these findings we used both Bernoulli dropout and Gaussian dropconnect concurrently, which we show approximates the use of a spike-and-slab variational distribution without increasing the number of learned parameters. We found that spike-and-slab sampling had higher test set performance than Gaussian dropconnect and more robustly represented its uncertainty compared to Bernoulli dropout." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies have considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows model is not well understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows models. We approach the learning problem by analyzing a more general model which interpolates between the single-parameter Mallows model and the @math -parameter Mallows model. We call our model the Mallows Block Model -- referring to the block models that are popular in theoretical statistics. Our sample complexity analysis gives tight bounds for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, a single sample from the Mallows Block Model is sufficient to estimate the spread parameters with an error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
The parameter estimation of the Generalized Mallows Model has been examined from a practical point of view by @cite_7 , but no theoretical guarantees on the sample complexity were provided. Several ranking models are routinely used in analyzing ranking data , such as the Plackett-Luce model , the Babington-Smith model , spectral-analysis-based methods and non-parametric methods . However, to the best of our knowledge, none of these ranking methods has been analyzed from the point of view of distribution learning with a guarantee on some information-theoretic distance. considered the problem of learning the parameters of the Plackett-Luce model and derived high-probability bounds for their estimator that are tight in the sense that no algorithm can achieve a lower estimation error with fewer examples.
{ "cite_N": [ "@cite_7" ], "mid": [ "2113815377", "1526445072", "1574648663", "2115822807" ], "abstract": [ "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics.", "This paper introduces two new methods for label ranking based on a probabilistic model of ranking data, called the Plackett-Luce model. The idea of the first method is to use the PL model to fit locally constant probability models in the context of instance-based learning. As opposed to this, the second method estimates a global model in which the PL parameters are represented as functions of the instance. Comparing our methods with previous approaches to label ranking, we find that they offer a number of advantages. Experimentally, we moreover show that they are highly competitive to start-of-the-art methods in terms of predictive accuracy, especially in the case of training data with incomplete ranking information.", "We propose a novel parameterized family of Mixed Membership Mallows Models (M4) to account for variability in pairwise comparisons generated by a heterogeneous population of noisy and inconsistent users. M4 models individual preferences as a user-specific probabilistic mixture of shared latent Mallows components. 
Our key algorithmic insight for estimation is to establish a statistical connection between M4 and topic models by viewing pairwise comparisons as words, and users as documents. This key insight leads us to explore Mallows components with a separable structure and leverage recent advances in separable topic discovery. While separability appears to be overly restrictive, we nevertheless show that it is an inevitable outcome of a relatively small number of latent Mallows components in a world of large number of items. We then develop an algorithm based on robust extreme-point identification of convex polygons to learn the reference rankings, and is provably consistent with polynomial sample complexity guarantees. We demonstrate that our new model is empirically competitive with the current state-of-the-art approaches in predicting real-world preferences.", "This paper is concerned with rank aggregation, which aims to combine multiple input rankings to get a better ranking. A popular approach to rank aggregation is based on probabilistic models on permutations, e.g., the Luce model and the Mallows model. However, these models have their limitations in either poor expressiveness or high computational complexity. To avoid these limitations, in this paper, we propose a new model, which is defined with a coset-permutation distance, and models the generation of a permutation as a stagewise process. We refer to the new model as coset-permutation distance based stagewise (CPS) model. The CPS model has rich expressiveness and can therefore be used in versatile applications, because many different permutation distances can be used to induce the coset-permutation distance. The complexity of the CPS model is low because of the stagewise decomposition of the permutation probability and the efficient computation of most coset-permutation distances. 
We apply the CPS model to supervised rank aggregation, derive the learning and inference algorithms, and empirically study their effectiveness and efficiency. Experiments on public datasets show that the derived algorithms based on the CPS model can achieve state-of-the-art ranking accuracy, and are much more efficient than previous algorithms." ] }
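The single-sample spread estimation discussed in the record above can be illustrated with a small sketch. Under the Kendall-tau Mallows model the distance to the central ranking decomposes into independent insertion variables, so the spread parameter can be recovered by matching the observed distance to its expectation. This is an illustrative moment-matching sketch assuming a known central ranking, not the paper's exact algorithm; the function names and bisection tolerances are our own choices.

```python
import math

def kendall_tau_distance(perm, center):
    # Number of discordant pairs between perm and the central ranking.
    pos = {item: i for i, item in enumerate(center)}
    mapped = [pos[x] for x in perm]
    n = len(mapped)
    return sum(1 for i in range(n) for j in range(i + 1, n) if mapped[i] > mapped[j])

def expected_distance(theta, n):
    # E[d] under Mallows(theta): the Kendall distance is a sum of independent
    # insertion variables V_j taking values 0..j with P(V_j = v) ~ exp(-theta*v).
    total = 0.0
    for j in range(1, n):
        weights = [math.exp(-theta * v) for v in range(j + 1)]
        z = sum(weights)
        total += sum(v * w for v, w in enumerate(weights)) / z
    return total

def estimate_theta(sample, center, lo=1e-6, hi=20.0, iters=80):
    # Moment matching (which coincides with the MLE here): pick theta so that
    # the expected distance equals the observed one; E[d] is decreasing in theta.
    d_obs = kendall_tau_distance(sample, center)
    n = len(center)
    d_obs = max(min(d_obs, expected_distance(lo, n)), expected_distance(hi, n))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_distance(mid, n) > d_obs:
            lo = mid  # theta too small: distribution too spread out
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A single ranking close to the center yields a large spread estimate, and the fitted theta reproduces the observed distance, which is the single-sample consistency phenomenon the abstract refers to.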
1906.00777
2947226932
The drone base station (DBS) is a promising technique for extending wireless connections to users not covered by terrestrial radio access networks (RANs). To improve user fairness and network performance, in this paper we design 3D trajectories of multiple DBSs in drone-assisted radio access networks (DA-RAN), where DBSs fly over associated areas of interest (AoIs) and relay communications between the base station (BS) and users in the AoIs. We formulate the multi-DBS 3D trajectory planning and scheduling as a mixed integer non-linear programming (MINLP) problem with the objective of minimizing the average DBS-to-user (D2U) pathloss. The 3D trajectory variations in both the horizontal and vertical directions, as well as state-of-the-art DBS-related channel models, are considered in the formulation. To address the non-convexity and NP-hardness of the MINLP problem, we first decouple it into multiple integer linear programming (ILP) and quasi-convex sub-problems in which the AoI association, D2U communication scheduling, horizontal trajectories and flying heights of the DBSs are respectively optimized. Then, we design a multi-DBS 3D trajectory planning and scheduling algorithm that solves the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation and a search-based start-slot scheduling are incorporated into the proposed algorithm to improve trajectory design performance and to ensure the inter-DBS distance constraint, respectively. Extensive simulations are conducted to investigate the impacts of DBS quantity, horizontal speed and initial trajectory on the trajectory planning results. Compared with static DBS deployment, the proposed trajectory planning achieves a 10-15 dB reduction in average D2U pathloss and reduces the D2U pathloss standard deviation by 68%, which indicates improved network performance and user fairness.
Promoted by advancements in flight control and communication technologies, both industry and academia are devoting many efforts to exploiting the full potential of DA-RAN @cite_11 . As the foundation for drone communication and DA-RAN research, Al-Hourani et al. built the D2U pathloss model for DBSs from abundant field-test data in various scenarios @cite_23 . A closed-form expression of the D2U pathloss model suited to different scenarios is proposed, in which the probabilities of both LoS and NLoS D2U links are considered. As an extension of this work, they further formulated the pathloss model for D2B communication in the suburban scenario @cite_8 , where D2B links are dominated by LoS links. Leveraging the pathloss models in @cite_23 and @cite_8 , various studies have emerged on both static DBS deployment and DBS trajectory planning.
{ "cite_N": [ "@cite_8", "@cite_23", "@cite_11" ], "mid": [ "2206930994", "2039409843", "2084503286", "2533776012" ], "abstract": [ "In this paper, the deployment of an unmanned aerial vehicle (UAV) as a flying base station used to provide the fly wireless communications to a given geographical area is analyzed. In particular, the coexistence between the UAV, that is transmitting data in the downlink, and an underlaid device-to-device (D2D) communication network is considered. For this model, a tractable analytical framework for the coverage and rate analysis is derived. Two scenarios are considered: a static UAV and a mobile UAV. In the first scenario, the average coverage probability and the system sum-rate for the users in the area are derived as a function of the UAV altitude and the number of D2D users. In the second scenario, using the disk covering problem, the minimum number of stop points that the UAV needs to visit in order to completely cover the area is computed. Furthermore, considering multiple retransmissions for the UAV and D2D users, the overall outage probability of the D2D users is derived. Simulation and analytical results show that, depending on the density of D2D users, the optimal values for the UAV altitude, which lead to the maximum system sum-rate and coverage probability, exist. Moreover, our results also show that, by enabling the UAV to intelligently move over the target area, the total required transmit power of UAV while covering the entire area, can be minimized. Finally, in order to provide full coverage for the area of interest, the tradeoff between the coverage and delay, in terms of the number of stop points, is discussed.", "We consider a collection of single-antenna ground nodes communicating with a multi-antenna unmanned aerial vehicle (UAV) over a multiple-access ground-to-air communications link. The UAV uses beamforming to mitigate inter-user interference and achieve spatial division multiple access (SDMA). 
First, we consider a simple scenario with two static ground nodes and analytically investigate the effect of the UAV's heading on the system sum rate. We then study a more general setting with multiple mobile ground-based terminals, and develop an algorithm for dynamically adjusting the UAV heading to maximize the approximate ergodic sum rate of the uplink channel, using a prediction filter to track the positions of the mobile ground nodes. For the common scenario where a strong line-of-sight (LOS) channel exists between the ground nodes and UAV, we use an asymptotic analysis to find simplified versions of the algorithm for low and high SNR. We present simulation results that demonstrate the benefits of adapting the UAV heading in order to optimize the uplink communications performance. The simulation results also show that the simplified algorithms provide near-optimal performance.", "A robust and accurate positioning solution is required to increase the safety in GPS-denied environments. Although there is a lot of available research in this area, little has been done for confined environments such as tunnels. Therefore, we organized a measurement campaign in a basement tunnel of Linkoping university, in which we obtained ultra-wideband (UWB) complex impulse responses for line-of-sight (LOS), and three non-LOS (NLOS) scenarios. This paper is focused on time-of-arrival (TOA) ranging since this technique can provide the most accurate range estimates, which are required for range-based positioning. We describe the measurement setup and procedure, select the threshold for TOA estimation, analyze the channel propagation parameters obtained from the power delay profile (PDP), and provide statistical model for ranging. According to our results, the rise-time should be used for NLOS identification, and the maximum excess delay should be used for NLOS error mitigation. 
However, the NLOS condition cannot be perfectly determined, so the distance likelihood has to be represented in a Gaussian mixture form. We also compared these results with measurements from a mine tunnel, and found a similar behavior.", "We introduce a channel-opportunistic architecture that enhances the user experience in terms of throughput, fairness, and energy efficiency. Our proposed architecture leverages D2D communication and it is built on top of the forthcoming D2D features of 5G networks. In particular, we focus on outband D2D where cellular users are allowed to exploit both cellular (i.e., LTE-A) and WLAN (i.e., WiFi Direct) technologies to establish a D2D connection. In this architecture, cellular users form clusters, in which only the user with the best channel condition communicates with the base station on behalf of the entire cluster. Within the cluster, the unlicensed spectrum is utilized to relay traffic. In this article, we provide analytical models for the proposed system and study the impact of several payoff distribution methods commonly adopted in the literature on coalitional game theory. We then introduce an operator-controlled relay protocol based on the D2D features of LTE-A and WiFi Direct, and demonstrate the feasibility and the advantages of D2D-assisted cellular communication with our SDR prototype." ] }
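The probabilistic air-to-ground pathloss model referenced in the related-work paragraph above is commonly written as free-space pathloss plus LoS/NLoS excess losses weighted by a sigmoid LoS probability in the elevation angle. A minimal sketch follows; the environment parameters used here are commonly quoted urban fits and are an assumption, not values taken from this record.

```python
import math

def los_probability(elevation_deg, a=9.61, b=0.16):
    # Sigmoid LoS probability in the elevation angle; a and b are
    # environment-dependent fit parameters (assumed urban values here).
    return 1.0 / (1.0 + a * math.exp(-b * (elevation_deg - a)))

def d2u_pathloss_db(height_m, ground_dist_m, f_hz=2e9,
                    eta_los=1.0, eta_nlos=20.0):
    # Mean pathloss = free-space pathloss + LoS/NLoS excess losses
    # weighted by the LoS probability (eta values assumed, in dB).
    d = math.hypot(height_m, ground_dist_m)
    theta = math.degrees(math.atan2(height_m, ground_dist_m))
    fspl = 20 * math.log10(4 * math.pi * f_hz * d / 3e8)
    p = los_probability(theta)
    return fspl + p * eta_los + (1 - p) * eta_nlos
```

At a fixed link distance, a higher elevation angle raises the LoS probability and therefore lowers the mean pathloss, which is the trade-off the height-optimization works below exploit.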
1906.00777
2947226932
The drone base station (DBS) is a promising technique for extending wireless connections to users not covered by terrestrial radio access networks (RANs). To improve user fairness and network performance, in this paper we design 3D trajectories of multiple DBSs in drone-assisted radio access networks (DA-RAN), where DBSs fly over associated areas of interest (AoIs) and relay communications between the base station (BS) and users in the AoIs. We formulate the multi-DBS 3D trajectory planning and scheduling as a mixed integer non-linear programming (MINLP) problem with the objective of minimizing the average DBS-to-user (D2U) pathloss. The 3D trajectory variations in both the horizontal and vertical directions, as well as state-of-the-art DBS-related channel models, are considered in the formulation. To address the non-convexity and NP-hardness of the MINLP problem, we first decouple it into multiple integer linear programming (ILP) and quasi-convex sub-problems in which the AoI association, D2U communication scheduling, horizontal trajectories and flying heights of the DBSs are respectively optimized. Then, we design a multi-DBS 3D trajectory planning and scheduling algorithm that solves the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation and a search-based start-slot scheduling are incorporated into the proposed algorithm to improve trajectory design performance and to ensure the inter-DBS distance constraint, respectively. Extensive simulations are conducted to investigate the impacts of DBS quantity, horizontal speed and initial trajectory on the trajectory planning results. Compared with static DBS deployment, the proposed trajectory planning achieves a 10-15 dB reduction in average D2U pathloss and reduces the D2U pathloss standard deviation by 68%, which indicates improved network performance and user fairness.
In most static DBS deployment works, terrestrial user QoS or network performance is improved by optimizing the hovering positions of one or more DBSs. For instance, through a clustering-based approach, Mozaffari et al. designed the optimal locations of DBSs that maximize the information collection gain from terrestrial IoT devices @cite_13 . In @cite_28 , Zhang et al. optimized the DBS density in a DBS network to maximize the network throughput while satisfying the efficiency requirements of the cellular network. Zhou et al. studied the downlink coverage features of DBSs using Nakagami-m fading models, and calculated the optimal height and density of multiple DBSs to achieve the maximal coverage probability @cite_16 . Although various works have investigated static DBS deployments in different scenarios with different methods, the D2B link quality constraint is simplified or ignored in most of them. In the works that do consider D2B links, the D2B channel models are either the same as the D2U pathloss model @cite_24 or traditional terrestrial channel models @cite_10 . In this paper, we instead implement the specific D2B channel model derived in @cite_8 to capture the D2B channel features.
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_28", "@cite_24", "@cite_16", "@cite_10" ], "mid": [ "2281709771", "1975618234", "2758501291", "2136340918" ], "abstract": [ "In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit probability, which is the probability that a user at an arbitrary location in the plane will find the content that it requires in one of the BSs that it is covered by. We consider the problem of optimally placing content in all BSs jointly. As this problem is not convex, we provide a heuristic scheme by finding the optimal placement policy for one type of base station conditioned on the placement in all other types. We demonstrate that these individual optimization problems are convex and we provide an analytical solution. As an illustration, we find the optimal placement policy of the small base stations (SBSs) depending on the placement policy of the macro base stations (MBSs). We show how the hit probability evolves as the deployment density of the SBSs varies. We show that the heuristic of placing the most popular content in the MBSs is almost optimal after deploying the SBSs with optimal placement policies. Also, for the SBSs no such heuristic can be used; the optimal placement is significantly better than storing the most popular content. Finally, we show that solving the individual problems to find the optimal placement policies for different types of BSs iteratively, namely repeatedly updating the placement policies, does not improve the performance.", "We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. 
In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. 
Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.", "Drone base stations (DBSs) can enhance network coverage and area capacity by moving supply towards demand when required. This degree of freedom could be especially useful for future applications with extreme demands, such as ultra reliable and low latency communications (uRLLC). However, deployment of DBSs can face several challenges. One issue is finding the 3D placement of such BSs to satisfy dynamic requirements of the system. Second, the availability of reliable wireless backhaul links and the related resource allocation are principal issues that should be considered. Finally, association of the users with BSs becomes an involved problem due to mobility of DBSs. In this paper, we consider a macro-BS (MBS) and several DBSs that rely on the wireless links to the MBS for backhauling. Considering regular and uRLLC users, we propose an algorithm to find efficient 3D locations of DBSs in addition to the user-BS associations and wireless backhaul bandwidth allocations to maximize the sum logarithmic rate of the users. To this end, a decomposition method is employed to first find the user-BS association and bandwidth allocations. Then DBS locations are updated using a heuristic particle swarm optimization algorithm. Simulation results show the effectiveness of the proposed method and provide useful insights on the effects of traffic distributions and antenna beamwidth.", "We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. In addition, the traffic is bursty in general with unknown arrival. 
The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users." ] }
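The k-means-based initial trajectory generation mentioned in the abstract above can be sketched roughly as follows: cluster the AoI centers into one group per DBS, then order each group greedily to obtain an initial horizontal visiting sequence. This is an illustration of the general idea only; the data layout (2D AoI coordinates as tuples) and the nearest-neighbor ordering are our assumptions, not the paper's exact procedure.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on 2D AoI centers: one cluster of AoIs per DBS.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

def greedy_tour(points):
    # Nearest-neighbor visiting order within one cluster: a crude
    # initial horizontal trajectory to be refined by the BCD iterations.
    tour, rest = [points[0]], list(points[1:])
    while rest:
        last = tour[-1]
        nxt = min(rest, key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        tour.append(nxt)
        rest.remove(nxt)
    return tour

def initial_trajectories(aoi_centers, num_dbs):
    # One initial trajectory (ordered AoI list) per DBS.
    _, clusters = kmeans(aoi_centers, num_dbs)
    return [greedy_tour(cl) for cl in clusters if cl]
```

Such an initialization only provides a feasible starting point; the AoI association, scheduling and flying heights are then re-optimized iteratively as described in the abstract.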
1906.01012
2948721154
Action recognition has so far mainly focused on the problem of classifying hand-selected, pre-clipped actions, and it has reached impressive results in this field. But with performance ceilings being reached on current datasets, it appears that the next steps in the field will have to go beyond this fully supervised classification. One way to overcome those problems is to move towards less restricted scenarios. In this context, we present a large-scale real-world dataset designed to evaluate learning techniques for human action recognition beyond hand-crafted datasets. To this end, we put the process of collecting data on its feet again and start with the annotation of a test set of 250 cooking videos. The training data is then gathered by searching for the respective annotated classes within the subtitles of freely available videos. The uniqueness of the dataset lies in the fact that the whole process of collecting the data and training does not involve any human intervention. To address the semantic inconsistencies that arise with this kind of training data, we further propose a semantic hierarchical structure for the mined classes.
Action recognition has been a challenging topic for a long time, and many innovative approaches, mainly for the task of action classification @cite_30 @cite_26 @cite_32 , have emerged in the research community. But we are obviously still far away from the real-world task of learning arbitrary action classes from video data. One limitation here might be the lack of real-world datasets that are based on truly random collections of videos.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_32" ], "mid": [ "1981781955", "2486913577", "2146048167", "2511475724" ], "abstract": [ "Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.", "We consider the problem of detecting and localizing a human action from continuous action video from depth cameras. We believe that this problem is more challenging than the problem of traditional action recognition as we do not have the information about the starting and ending frames of an action class. 
Another challenge which makes the problem difficult, is the latency in detection of actions. In this paper, we introduce a greedy approach to detect the action class, invariant of their temporal scale in the testing sequences using class templates and basic skeleton based feature representation from the depth stream data generated using Microsoft Kinect. We evaluate the proposed method on the standard G3D and UTKinect-Action datasets consisting of five and ten actions, respectively. Our results demonstrate that the proposed approach performs well for action detection and recognition under different temporal scales, and is able to outperform the state of the art methods at low latency.", "Action recognition has often been posed as a classification problem, which assumes that a video sequence only have one action class label and different actions are independent. However, a single human body can perform multiple concurrent actions at the same time, and different actions interact with each other. This paper proposes a concurrent action detection model where the action detection is formulated as a structural prediction problem. In this model, an interval in a video sequence can be described by multiple action labels. An detected action interval is determined both by the unary local detector and the relations with other actions. We use a wavelet feature to represent the action sequence, and design a composite temporal logic descriptor to describe the action relations. The model parameters are trained by structural SVM learning. Given a long video sequence, a sequential decision window search algorithm is designed to detect the actions. Experiments on our new collected concurrent action dataset demonstrate the strength of our method.", "Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty to acquire and learn on large quantities of video data. 
Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos." ] }
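The subtitle-based mining step described in the abstract above, harvesting training clips by searching for annotated class names inside the subtitles of freely available videos, can be sketched as a simple keyword matcher. The data layout (subtitles as (start, end, text) triples) and the function name are hypothetical; the actual pipeline also involves the proposed semantic class hierarchy to resolve inconsistent labels.

```python
import re

def mine_training_clips(subtitles, class_names):
    # Weak labelling: any subtitle line mentioning an annotated class name
    # yields a candidate training clip carrying that label.
    patterns = {c: re.compile(r"\b" + re.escape(c) + r"\b", re.IGNORECASE)
                for c in class_names}
    clips = []
    for start, end, text in subtitles:
        for label, pat in patterns.items():
            if pat.search(text):
                clips.append({"label": label, "start": start, "end": end})
    return clips
```

Because the matching is purely textual, the resulting labels are noisy, which is exactly the semantic-inconsistency problem the class hierarchy is meant to mitigate.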
1906.01012
2948721154
Action recognition has so far mainly focused on the problem of classifying hand-selected, pre-clipped actions, reaching impressive results in this field. But with performance plateauing on current datasets, it appears that the next steps in the field will have to go beyond this fully supervised classification. One way to overcome these problems is to move towards less restricted scenarios. In this context we present a large-scale real-world dataset designed to evaluate learning techniques for human action recognition beyond hand-crafted datasets. To this end we restart the process of collecting data from scratch, beginning with the annotation of a test set of 250 cooking videos. The training data is then gathered by searching for the respective annotated classes within the subtitles of freely available videos. The dataset is unique in that the whole process of collecting the data and training does not involve any human intervention. To address the problem of semantic inconsistencies that arise with this kind of training data, we further propose a semantic hierarchical structure for the mined classes.
Apart from first-generation datasets @cite_2 @cite_23 , where actors were required to perform certain actions in a controlled environment, current datasets such as HMDB @cite_22 , UCF @cite_27 or the recently released Kinetics dataset @cite_24 are mainly acquired from web sources such as YouTube clips or movies, with the aim of representing realistic scenarios for training and testing. Here, videos are usually first retrieved by predefined action queries and later clipped and organized to capture the atomic actions or their repetitions. Other datasets such as Thumos @cite_15 , MPI Cooking @cite_6 , Breakfast @cite_16 or the recently released Epic Kitchen dataset @cite_13 focus on the labeling of one or more action segments in single long videos, trying to temporally detect or segment predefined action classes within the video.
{ "cite_N": [ "@cite_22", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2780470340", "2963524571", "2619082050", "2949827582" ], "abstract": [ "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95 compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pre-train action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics, UCF-101 and HMDB-51, models pre-trained on SLAC outperform baselines trained from scratch, by 2.0 , 20.1 and 35.4 in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models. 
On THUMOS14 and ActivityNet-v1.3, our localization model improves the mAP of baseline model by 8.6 and 2.5 , respectively.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2 on HMDB-51 and 97.9 on UCF-101.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. 
We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. 
While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6 mAP, underscoring the need for developing new approaches for video understanding." ] }
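The I3D abstracts above bootstrap 3D spatio-temporal filters from pretrained 2D image filters by "inflation". A minimal numpy sketch of that idea (the function name and tensor layout are our own illustration, not taken from the paper): repeat the 2D kernel along a new temporal axis and rescale, so a temporally constant input produces the same activations as the original 2D filter.

```python
import numpy as np

def inflate_2d_kernel(w2d: np.ndarray, t: int) -> np.ndarray:
    """Inflate a 2D conv kernel of shape (out, in, kh, kw) into a 3D
    kernel of shape (out, in, t, kh, kw): repeat it t times along the
    new temporal axis and divide by t, preserving activations on
    inputs that are constant over time."""
    w3d = np.repeat(w2d[:, :, None, :, :], t, axis=2)
    return w3d / t

# Toy check: one 3x3 filter inflated over 3 frames.
w2d = np.arange(9, dtype=float).reshape(1, 1, 3, 3)
w3d = inflate_2d_kernel(w2d, t=3)
# Summing the inflated kernel over time recovers the 2D kernel.
assert np.allclose(w3d.sum(axis=2), w2d)
```

The division by `t` is what makes ImageNet-pretrained weights a sensible initialization for the video network rather than inflating the magnitude of the responses.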
cs0003054
2949339967
The idle computers on a local area, campus area, or even wide area network represent a significant computational resource---one that is, however, also unreliable, heterogeneous, and opportunistic. This type of resource has been used effectively for embarrassingly parallel problems but not for more tightly coupled problems. We describe an algorithm that allows branch-and-bound problems to be solved in such environments. In designing this algorithm, we faced two challenges: (1) scalability, to effectively exploit the variably sized pools of resources available, and (2) fault tolerance, to ensure the reliability of services. We achieve scalability through a fully decentralized algorithm, by using a membership protocol for managing dynamically available resources. However, this fully decentralized design makes achieving reliability even more challenging. We guarantee fault tolerance in the sense that the loss of up to all but one resource will not affect the quality of the solution. For propagating information efficiently, we use epidemic communication for both the membership protocol and the fault-tolerance mechanism. We have developed a simulation framework that allows us to evaluate design alternatives. Results obtained in this framework suggest that our techniques can execute scalably and reliably.
The only fully decentralized, fault-tolerant B&B algorithm for distributed-memory architectures is DIB (Distributed Implementation of Backtracking) @cite_0 . DIB was designed for a wide range of tree-based applications, such as recursive backtrack, branch-and-bound, and alpha-beta pruning. It is a distributed, asynchronous algorithm that uses a dynamic load-balancing technique. Its failure-recovery mechanism is based on keeping track of which machine is responsible for each unsolved problem. Each machine memorizes the problems for which it is responsible, as well as the machines to which it sent problems or from which it received problems. The completion of a problem is reported to the machine the problem came from. Hence, each machine can determine whether the work for which it is responsible is still unsolved, and can redo that work in the case of failure.
{ "cite_N": [ "@cite_0" ], "mid": [ "1984263429", "2134659242", "2902905458", "2067020218" ], "abstract": [ "DIB is a general-purpose package that allows a wide range of applications such as recursive backtrack, branch and bound, and alpha-beta search to be implemented on a multicomputer. It is very easy to use. The application program needs to specify only the root of the recursion tree, the computation to be performed at each node, and how to generate children at each node. In addition, the application program may optionally specify how to synthesize values of tree nodes from their children's values and how to disseminate information (such as bounds) either globally or locally in the tree. DIB uses a distributed algorithm, transparent to the application programmer, that divides the problem into subproblems and dynamically allocates them to any number of (potentially nonhomogeneous) machines. This algorithm requires only minimal support from the distributed operating system. DIB can recover from failures of machines even if they are not detected. DIB currently runs on the Crystal multicomputer at the University of Wisconsin-Madison. Many applications have been implemented quite easily, including exhaustive traversal ( N queens, knight's tour, negamax tree evaluation), branch and bound (traveling salesman) and alpha-beta search (the game of NIM). Speedup is excellent for exhaustive traversal and quite good for branch and bound.", "We consider optimal load balancing in a distributed computing environment consisting of homogeneous unreliable processors. Each processor receives its own sequence of tasks from outside users, some of which can be redirected to the other processors. Processing times are independent and identically distributed with an arbitrary distribution. The arrival sequence of outside tasks to each processor may be arbitrary as long as it is independent of the state of the system. 
Processors may fail, with arbitrary failure and repair processes that are also independent of the state of the system. The only information available to a processor is the history of its decisions for routing work to other processors, and the arrival times of its own arrival sequence. We prove the optimality of the round-robin policy, in which each processor sends all the tasks that can be redirected to each of the other processors in turn. We show that, among all policies that balance workload, round robin stochastically minimizes the nth task completion time for all n, and minimizes response times and queue lengths in a separable increasing convex sense for the entire system. We also show that if there is a single centralized controller, round-robin is the optimal policy, and a single controller using round-robin routing is better than the optimal distributed system in which each processor routes its own arrivals. Again \"optimal\" and \"better\" are in the sense of stochastically minimizing task completion times, and minimizing response time and queue lengths in the separable increasing convex sense.", "This paper introduces a new leaderless Byzantine consensus called the Democratic Byzantine Fault Tolerance (DBFT) for blockchains. While most blockchain consensus protocols rely on a correct leader or coordinator to terminate, our algorithm can terminate even when its coordinator is faulty. The key idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. The resulting decentralization is particularly appealing for blockchains for two reasons: (i) each node plays a similar role in the execution of the consensus, hence making the decision inherently “democratic” (ii) decentralization avoids bottlenecks by balancing the load, making the solution scalable. 
DBFT is deterministic, assumes partial synchrony, is resilience optimal, time optimal and does not need signatures. We first present a simple safe binary Byzantine consensus algorithm, modify it to ensure termination, and finally present an optimized reduction from multivalue consensus to binary consensus whose fast path terminates in 4 message delays.", "This paper studies joint beamforming and power control in a coordinated multicell downlink system that serves multiple users per cell to maximize the minimum weighted signal-to-interference-plus-noise ratio. The optimal solution and distributed algorithm with geometrically fast convergence rate are derived by employing the nonlinear Perron-Frobenius theory and the multicell network duality. The iterative algorithm, though operating in a distributed manner, still requires instantaneous power update within the coordinated cluster through the backhaul. The backhaul information exchange and message passing may become prohibitive with increasing number of transmit antennas and increasing number of users. In order to derive asymptotically optimal solution, random matrix theory is leveraged to design a distributed algorithm that only requires statistical information. The advantage of our approach is that there is no instantaneous power update through backhaul. Moreover, by using nonlinear Perron-Frobenius theory and random matrix theory, an effective primal network and an effective dual network are proposed to characterize and interpret the asymptotic solution." ] }
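The DIB recovery scheme described above (each machine remembering which subproblems it delegated to whom, and redoing a failed peer's unfinished work) can be caricatured in a few lines. This is our own simplification for illustration, with made-up names, not the actual DIB protocol:

```python
class DibNode:
    """Toy sketch of DIB-style responsibility bookkeeping: a node
    tracks the unsolved subproblems it is responsible for and the
    peer each one was delegated to, so a peer failure tells it
    exactly which work to redo."""
    def __init__(self, name: str):
        self.name = name
        self.sent = {}        # subproblem -> peer it was delegated to
        self.pending = set()  # subproblems this node is responsible for

    def delegate(self, problem: str, peer: str) -> None:
        self.pending.add(problem)
        self.sent[problem] = peer

    def completed(self, problem: str) -> None:
        # Completion is reported back to the responsible node.
        self.pending.discard(problem)
        self.sent.pop(problem, None)

    def on_peer_failure(self, peer: str) -> set:
        # Reclaim every subproblem the failed peer never finished.
        return {p for p, q in self.sent.items() if q == peer}

node = DibNode("m0")
node.delegate("left-subtree", "m1")
node.delegate("right-subtree", "m2")
node.completed("left-subtree")
assert node.on_peer_failure("m1") == set()          # m1's work was done
assert node.on_peer_failure("m2") == {"right-subtree"}  # redo this
```

The point of the scheme is that no global snapshot is needed: losing up to all but one machine only costs redone work, not correctness.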
cs0003008
1524644832
This paper presents a method of computing a revision of a function-free normal logic program. If an added rule is inconsistent with a program, that is, if it leads to a situation in which no stable model exists for the new program, then deletion and addition of rules are performed to avoid inconsistency. We specify a revision by translating a normal logic program into an abductive logic program with abducibles that represent deletion and addition of rules. To compute such deletion and addition, we propose an adaptation of our top-down abductive proof procedure to compute the abducibles relevant to an added rule. We compute a minimally revised program by choosing a minimal set of abducibles among all the sets of abducibles computed by the top-down proof procedure.
There are many procedures to compute stable models, generalized stable models or abduction. If we used a bottom-up procedure on our translated abductive logic program to compute all the generalized stable models naively, then the sets of abducibles to be compared would be larger, since abducibles of irrelevant temporary rules and of addable rules with inconsistency would be considered. Therefore, it is better to compute only the abducibles related to the inconsistency. To our knowledge, the only top-down procedure that can be used for this purpose is Satoh and Iwayama's, since we need bottom-up consistency checking of the addition and deletion of literals while computing abducibles for revision. This task is similar to integrity-constraint checking in @cite_16 , and Satoh and Iwayama's procedure includes this task.
{ "cite_N": [ "@cite_16" ], "mid": [ "176609766", "1801368039", "2135625884", "1531065626" ], "abstract": [ "Horn clause logic programming can be extended to include abduction with integrity constraints. In the resulting extension of logic programming, negation by failure can be simulated by making negative conditions abducible and by imposing appropriate denials and disjunctions as integrity constraints. This gives an alternative semantics for negation by failure, which generalises the stable model semantics of negation by failure. The abductive extension of logic programming extends negation by failure in three ways: (1) computation can be performed in alternative minimal models, (2) positive as well as negative conditions can be made abducible, and (3) other integrity constraints can also be accommodated. * This paper was written while the first author was at Imperial College. Introduction The term \"abduction\" was introduced by the philosopher Charles Peirce [1931] to refer to a particular kind of hypothetical reasoning. In the simplest case, it has the form: From A and A ← B infer B as a possible \"explanation\" of A. Abduction has been given prominence in Charniak and McDermott's [1985] \"Introduction to Artificial Intelligence\", where it has been applied to expert systems and story comprehension. Independently, several authors have developed deductive techniques to drive the generation of abductive hypotheses. Cox and Pietrzykowski [1986] construct hypotheses from the \"dead ends\" of linear resolution proofs. Finger and Genesereth [1985] generate \"deductive solutions to design problems\" using the \"residue\" left behind in resolution proofs. Poole, Goebel and Aleliunas [1987] also use linear resolution to generate hypotheses. All impose the restriction that hypotheses should be consistent with the \"knowledge base\".
Abduction is a form of non-monotonic reasoning, because hypotheses which are consistent with one state of a knowledge base may become inconsistent when new knowledge is added. Poole [1988] argues that abduction is preferable to non-monotonic logics for default reasoning. In this view, defaults are hypotheses formulated within classical logic rather than conclusions derived within some form of non-monotonic logic. The similarity between abduction and default reasoning was also pointed out in [Kowalski, 1979]. In this paper we show how abduction can be integrated with logic programming, and we concentrate on the use of abduction to generalise negation by failure. Conditional Answers Compared with Abduction In the simplest case, a logic program consists of a set of Horn clauses, which are used backward to reduce goals to subgoals. The initial goal is solved when there are no subgoals left.", "To explain observations from nonmonotonic background theories, one often needs removal of some hypotheses as well as addition of other hypotheses. Moreover, some observations should not be explained, while some are to be explained. In order to formalize these situations, extended abduction was introduced by Inoue and Sakama (1995) to generalize traditional abduction in the sense that it can compute negative explanations by removing hypotheses and anti-explanations to unexplain negative observations. In this paper, we propose a computational mechanism for extended abduction. When a background theory is written as a normal logic program, we introduce its transaction program for computing extended abduction. A transaction program is a set of nondeterministic production rules that declaratively specify addition and deletion of abductive hypotheses. Abductive explanations are then computed by the fixpoint of a transaction program using a bottom-up model generation procedure.
The correctness of the proposed procedure is shown for the class of acyclic covered abductive logic programs. In the context of deductive databases, a transaction program provides a declarative specification of database update.", "1. Summary In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction. Almost all, if not all problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems—prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously. The result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed, are fairly independent of these approximations. The third application, using phrase structure grammars, is least exact of the three. 
First a formal solution is presented. Though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements in it. This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.", "Recent work has suggested abductive logic programming as a suitable formalism to represent active databases and intelligent agents. In particular, abducibles in abductive logic programs can be used to represent actions, and integrity constaints in abductive logic programs can be used to represent active rules of the kind encountered in active databases and reactive rules incorporating reactive behaviour in agents. One would expect that, in this approach, abductive proof procedures could provide the engine underlying active database management systems and the behaviour of agents. We analyse existing abductive proof procedures and argue that they are inadequate in handling these applications. The inadequacy is due to the inappropriate treatment of negative literals in integrity constraints. We propose a new abductive proof procedure and give examples of how this proof procedure can be used to achieve active behaviour in (deductive) databases and reactivity in agents. Finally, we prove some soundness and completeness results for the new proof procedure." ] }
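Several of the abstracts above revolve around stable models and their generalizations. As a point of reference for readers unfamiliar with the semantics, here is a brute-force Gelfond-Lifschitz check for a tiny normal program (our own toy encoding for illustration; none of the cited procedures work this naively):

```python
from itertools import chain, combinations

# A normal rule is (head, positive_body, negative_body).
RULES = [
    ("p", ("q",), ()),   # p :- q.
    ("q", (), ("r",)),   # q :- not r.
    ("r", (), ("q",)),   # r :- not q.
]
ATOMS = {"p", "q", "r"}

def least_model(definite_rules):
    """Least model of a negation-free program by naive fixpoint."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate, rules):
    """Gelfond-Lifschitz check: delete rules whose negative body
    intersects the candidate, drop the remaining negative literals,
    and compare the reduct's least model with the candidate."""
    reduct = [(h, pos, ()) for h, pos, neg in rules
              if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

stable = [set(s) for s in chain.from_iterable(
              combinations(sorted(ATOMS), k)
              for k in range(len(ATOMS) + 1))
          if is_stable(set(s), RULES)]
# The even loop over q and r yields two stable models: {p, q} and {r}.
assert {frozenset(m) for m in stable} == {frozenset({"p", "q"}),
                                          frozenset({"r"})}
```

Revision in the sense of the paper above amounts to searching for rule additions and deletions that restore the existence of such a model; real systems of course avoid this exponential enumeration.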
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
Dealing with preferences on rules seems to necessitate a two-level approach. This in fact is a characteristic of many approaches found in the literature. The majority of these approaches treat preference at the meta-level by defining alternative semantics. @cite_1 proposes a modification of well-founded semantics in which dynamic preferences may be given for rules employing @math . @cite_12 and @cite_5 propose different prioritized versions of answer set semantics. In @cite_12 static preferences are addressed first, by defining the reduct of a logic program @math , which is a subset of @math that is most preferred. For the following example, their approach gives two answer sets (one with @math and one with @math ) which seems to be counter-intuitive; ours in contrast has a single answer set containing @math . Moreover, the dynamic case is addressed by specifying a transformation of a dynamic program to a set of static programs.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_12" ], "mid": [ "2124627636", "2174235632", "1565029141", "2050890691" ], "abstract": [ "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.", "We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. 
An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.", "We extend answer set semantics to deal with inconsistent programs (containing classical negation), by finding a \"best\" answer set. Within the context of inconsistent programs, it is natural to have a partial order on rules, representing a preference for satisfying certain rules, possibly at the cost of violating less important ones. We show that such a rule order induces a natural order on extended answer sets, the minimal elements of which we call preferred answer sets. We characterize the expressiveness of the resulting semantics and show that it can simulate negation as failure as well as disjunction. We illustrate an application of the approach by considering database repairs, where minimal repairs are shown to correspond to preferred answer sets.", "The answer set semantics presented by [27] has been widely used to define so called FLP answer sets for different types of logic programs. 
However, it was recently observed that when being extended from normal to more general classes of logic programs, this approach may produce answer sets with circular justifications that are caused by self-supporting loops. The main reason for this behavior is that the FLP answer set semantics is not fully constructive by a bottom up construction of answer sets. In this paper, we overcome this problem by enhancing the FLP answer set semantics with a level mapping formalism such that every answer set I can be built by fixpoint iteration of a one-step provability operator (more precisely, an extended van Emden-Kowalski operator for the FLP reduct fΠI). This is inspired by the fact that under the standard answer set semantics, each answer set I of a normal logic program Π is obtainable by fixpoint iteration of the standard van Emden-Kowalski one-step provability operator for the Gelfond-Lifschitz reduct ΠI, which induces a level mapping. The enhanced FLP answer sets, which we call well-justified FLP answer sets, are thanks to the level mapping free of circular justifications. As a general framework, the well-justified FLP answer set semantics applies to logic programs with first-order formulas, logic programs with aggregates, description logic programs, hex-programs etc., provided that the rule satisfaction is properly extended to such general logic programs. We study in depth the computational complexity of FLP and well-justified FLP answer sets for general classes of logic programs. Our results show that the level mapping does not increase the worst-case complexity of FLP answer sets. Furthermore, we describe an implementation of the well-justified FLP answer set semantics, and report about an experimental evaluation, which indicates a potential for performance improvements by the level mapping in practice." ] }
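The meta-level approaches the paper contrasts itself with select preferred answer sets after the fact. A deliberately naive sketch of such a filter (our own illustration, closer in spirit to the meta-level approaches than to the paper's compilation scheme): walk the rules in descending priority and keep the answer sets that apply each rule whenever any can.

```python
def prefer(answer_sets, rule_order, applies):
    """Toy meta-level preference filter: considering rules from most
    to least preferred, narrow the candidate answer sets to those in
    which the rule applies, whenever that leaves at least one."""
    best = list(answer_sets)
    for rule in rule_order:  # most preferred first
        hits = [s for s in best if applies(rule, s)]
        if hits:
            best = hits
    return best

# Two answer sets; hypothetical rule "r1" outranks "r2".
a, b = frozenset({"x"}), frozenset({"y"})
applies = lambda r, s: (r == "r1" and "x" in s) or (r == "r2" and "y" in s)
assert prefer([a, b], ["r1", "r2"], applies) == [a]
```

The compilation approach of the paper avoids exactly this two-level filtering: the ordering is translated into the object program so that only preferred answer sets are generated in the first place.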
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
Brewka and Eiter @cite_5 address static preferences on rules in extended logic programs. They begin with a strict partial order on a set of rules, but define preference with respect to total orders that conform to the original partial order. Preferred answer sets are then selected from among the collection of answer sets of the (unprioritised) program. In contrast, we deal only with the original partial order, which is translated into the object theory. Moreover, our approach produces only the preferred extensions, so there is no need for meta-level filtering of extensions.
{ "cite_N": [ "@cite_5" ], "mid": [ "2174235632", "2124627636", "1565029141", "1986318362" ], "abstract": [ "We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.", "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. 
The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.", "We extend answer set semantics to deal with inconsistent programs (containing classical negation), by finding a \"best\" answer set. Within the context of inconsistent programs, it is natural to have a partial order on rules, representing a preference for satisfying certain rules, possibly at the cost of violating less important ones. We show that such a rule order induces a natural order on extended answer sets, the minimal elements of which we call preferred answer sets. We characterize the expressiveness of the resulting semantics and show that it can simulate negation as failure as well as disjunction. We illustrate an application of the approach by considering database repairs, where minimal repairs are shown to correspond to preferred answer sets.", "The addition of preferences to normal logic programs is a convenient way to represent many aspects of default reasoning. If the derivation of an atom A1 is preferred to that of an atom A2, a preference rule can be defined so that A2 is derived only if A1 is not. 
Although such situations can be modelled directly using default negation, it is often easier to define preference rules than it is to add negation to the bodies of rules. As first noted by [Proc. Internat. Conf. on Logic Programming, 1995, pp. 731-746], for certain grammars, it may be easier to disambiguate parses using preferences than by enforcing disambiguation in the grammar rules themselves. In this paper we define a general fixed-point semantics for preference logic programs based on an embedding into the well-founded semantics, and discuss its features and relation to previous preference logic semantics. We then study how preference logic grammars are used in data standardization, the commercially important process of extracting useful information from poorly structured textual data. This process includes correcting misspellings and truncations that occur in data, extraction of relevant information via parsing, and correcting inconsistencies in the extracted information. The declarativity of Prolog offers natural advantages for data standardization, and a commercial standardizer has been implemented using Prolog. However, we show that the use of preference logic grammars allow construction of a much more powerful and declarative commercial standardizer, and discuss in detail how the use of the non-monotonic construct of preferences leads to improved commercial software." ] }
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, and orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
A two-level approach is also found in @cite_7 , where a methodology for directly encoding preferences in logic programs is proposed. The ``second-order flavour'' of this approach stems from the reification of rules and preferences. For example, a rule ( p ← r, s, not q ) is expressed by the formula ( default(n, p, [r, s], [q]) ), where @math is the name of the rule. The Prolog-like list notation @math and @math raises the possibility of an infinite Herbrand universe; this is problematic for systems like smodels and dlv that rely on finite Herbrand universes.
{ "cite_N": [ "@cite_7" ], "mid": [ "2124627636", "2174235632", "1847820984", "2086092403" ], "abstract": [ "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.", "We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. 
Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.", "The paper describes an extension of well-founded semantics for logic programs with two types of negation. In this extension information about preferences between rules can be expressed in the logical language and derived dynamically. This is achieved by using a reserved predicate symbol and a naming technique. Conflicts among rules are resolved whenever possible on the basis of derived preference information. The well-founded conclusions of prioritized logic programs can be computed in polynomial time. A legal reasoning example illustrates the usefulness of the approach.", "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. 
\"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach." ] }
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
In a broader context, finding a stable model is a combinatorial search problem. Other combinatorial search problems include propositional satisfiability, constraint satisfaction, constraint logic programming and integer linear programming, as well as other logic programming problems such as those expressible in @cite_28 . The difference between these formalisms and the stable model semantics is that they do not include default negation. In addition, all but the last are monotonic.
{ "cite_N": [ "@cite_28" ], "mid": [ "49730540", "2152131859", "2085084839", "1516535258" ], "abstract": [ "Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models by Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models.", "Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it bring advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variabledfree) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variabledfree program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating builtdin predicates and functions often needed in applications. 
It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., builtdin integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.", "We introduce the stable model semantics fordisjunctive logic programs and deductive databases, which generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used we obtain thedisjunctive stable semantics or thepartial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties: • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with thestable (respectively,partial stable) semantics. • For normal programs, the partial disjunctive stable semantics also coincides with thewell-founded semantics. • For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with theperfect model semantics. • The partial disjunctive stable semantics can be generalized to the class ofall disjunctive logic programs. 
• Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use ofclassical negation. • After translation of the programP into a suitable autoepistemic theory ( P ) the disjunctive (respectively, partial disjunctive) stable semantics ofP coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of ( P ) .", "Partial stable models for deductive databases, i.e., normal function-free logic programs (also called datalog programs), have two equivalent definitions: one based on 3-valued logics and another based on the notion of unfounded set. The notion of partial stable model has been extended to disjunctive deductive databases using 3-valued logics. In this paper, a characterization of partial stable models for disjunctive datalog programs is given using a suitable extension of the notion of unfounded set. Two interesting sub-classes of partial stable models, M-stable (Maximal-stable) (also called regular models, preferred extension,and maximal stable classes) and L-stable (Least undefined-stable) models, are then extended from normal to disjunctive datalog programs. On the one hand, L-stable models are shown to be the natural relaxation of the notion of total stable model; on the other hand the less strict M-stable models, endowed with a nice modularity property, may be appealing from the programming and computational point of view. M-stable and L-stable models are also compared with the regular models for disjunctive datalog programs recently proposed in the literature." ] }
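Default negation, the distinguishing feature named above, is exactly what the Gelfond-Lifschitz reduct eliminates. For intuition, stable models of a small ground program can be enumerated by brute force; the following is an illustrative sketch (our own names and encoding, not the smodels implementation):

```python
from itertools import chain, combinations

# A program is a list of rules (head, positive_body, negative_body).
def reduct(program, interp):
    """Gelfond-Lifschitz reduct: delete rules whose negative body
    intersects `interp`, then drop the remaining default negation."""
    return [(h, pos) for (h, pos, neg) in program if not set(neg) & interp]

def least_model(definite):
    """Least model of a negation-free program by naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(program, atoms):
    """Brute force: a candidate is stable iff it equals the least model
    of its own reduct."""
    subsets = chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if least_model(reduct(program, set(s))) == set(s)]

# p :- not q.   q :- not p.   -- the classic two-stable-model program
prog = [("p", [], ["q"]), ("q", [], ["p"])]
```

Dropping the reduct step and keeping only the fixpoint computation recovers ordinary (monotonic) definite-program evaluation, which is why the formalisms listed above need no such construction.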
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
From an algorithmic standpoint the progenitor of the @math algorithm is the Davis-Putnam (-Logemann-Loveland) procedure @cite_37 for determining the satisfiability of propositional formulas. This procedure can be seen as a backtracking search procedure that makes assumptions about the truth values of the propositional atoms in a formula and that then derives new truth values from these assumptions in order to prune the search space.
{ "cite_N": [ "@cite_37" ], "mid": [ "1973734335", "1565010992", "16743356", "1515930456" ], "abstract": [ "The Davis-Putnam-Logemann-Loveland algorithm is one of the most popular algorithms for solving the satisfiability problem. Its efficiency depends on its choice of a branching rule. We construct a sequence of instances of the satisfiability problem that fools a variety of sensible'''' branching rules in the following sense: when the instance has n variables, each of the sensible'''' branching rules brings about Omega(2^(n 5)) recursive calls of the Davis-Putnam-Logemann-Loveland algorithm, even though only O(1) such calls are necessary.", "As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in a worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F, then the average running time of CDP is shown to be O(mdn), where d=[-1 log2(1-p)].The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas.", "From a computational perspective, there is a close connection between various probabilistic reasoning tasks and the problem of counting or sampling satisfying assignments of a propositional theory. We consider the question of whether state-of-the-art satisfiability procedures, based on random walk strategies, can be used to sample uniformly or nearuniformly from the space of satisfying assignments. We first show that random walk SAT procedures often do reach the full set of solutions of complex logical theories. 
Moreover, by interleaving random walk steps with Metropolis transitions, we also show how the sampling becomes near-uniform.", "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental comparison." ] }
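The backtracking-with-propagation scheme described above can be condensed into a few lines. This is a deliberately minimal DPLL sketch (ours, not the procedure's original presentation): clauses use DIMACS-style signed integers, and only unit propagation plus naive branching are implemented.

```python
def dpll(clauses):
    """Minimal DPLL over clauses given as collections of DIMACS-style
    literals (positive int x, or -x for its negation). Returns a
    consistent set of true literals, or None if unsatisfiable."""
    clauses = [set(c) for c in clauses]
    if not clauses:
        return set()                    # every clause satisfied
    if any(not c for c in clauses):
        return None                     # empty clause: conflict
    for c in clauses:
        if len(c) == 1:                 # unit propagation: forced literal
            return _assign(clauses, next(iter(c)))
    lit = next(iter(clauses[0]))        # branch on an arbitrary literal
    for choice in (lit, -lit):
        model = _assign(clauses, choice)
        if model is not None:
            return model
    return None

def _assign(clauses, lit):
    """Make `lit` true: drop satisfied clauses, shrink the rest, recurse."""
    reduced = [c - {-lit} for c in clauses if lit not in c]
    sub = dpll(reduced)
    return None if sub is None else sub | {lit}
```

Real solvers add lookahead, heuristics and clause learning on top of this skeleton, which is precisely where the abstract's discussion of lookahead and tie-breaking heuristics fits in.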
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
While the extended rules of this work are novel, there are some analogous constructions in the literature. The choice rule can be seen as a generalization of the disjunctive rule of the possible model semantics @cite_43 . The disjunctive rule of disjunctive logic programs @cite_14 also resembles the choice rule, but in this case the semantics differs. The stable models of a disjunctive program are subset-minimal, while the stable models of a logic program are grounded, i.e., atoms cannot justify their own inclusion. If a program contains choice rules, then a grounded model is not necessarily subset-minimal.
{ "cite_N": [ "@cite_43", "@cite_14" ], "mid": [ "2085084839", "49730540", "2087432892", "1984529395" ], "abstract": [ "We introduce the stable model semantics fordisjunctive logic programs and deductive databases, which generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used we obtain thedisjunctive stable semantics or thepartial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties: • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with thestable (respectively,partial stable) semantics. • For normal programs, the partial disjunctive stable semantics also coincides with thewell-founded semantics. • For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with theperfect model semantics. • The partial disjunctive stable semantics can be generalized to the class ofall disjunctive logic programs. • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use ofclassical negation. • After translation of the programP into a suitable autoepistemic theory ( P ) the disjunctive (respectively, partial disjunctive) stable semantics ofP coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of ( P ) .", "Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. 
We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models by Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models.", "This paper addresses complexity issues for important problems arising with disjunctive logic programming. In particular, the complexity of deciding whether a disjunctive logic program is consistent is investigated for a variety of well-known semantics, as well as the complexity of deciding whether a propositional formula is satisfied by all models according to a given semantics. We concentrate on finite propositional disjunctive programs with as well as without integrity constraints, i.e., clauses with empty heads; the problems are located in appropriate slots of the polynomial hierarchy. In particular, we show that the consistency check is Σ 2 p -complete for the disjunctive stable model semantics (in the total as well as partial version), the iterated closed world assumption, and the perfect model semantics, and we show that the inference problem for these semantics is Π 2 p -complete; analogous results are derived for the answer sets semantics of extended disjunctive logic programs. Besides, we generalize previously derived complexity results for the generalized closed world assumption and other more sophisticated variants of the closed world assumption. 
Furthermore, we use the close ties between the logic programming framework and other nonmonotonic formalisms to provide new complexity results for disjunctive default theories and disjunctive autoepistemic literal theories.", "Logic programs with ordered disjunction (LPODs) combine ideas underlying Qualitative Choice Logic (Brewka, Benferhat, & Le Berre 2002) and answer set programming. Logic programming under answer set semantics is extended with a new connective called ordered disjunction. The new connective allows us to represent alternative, ranked options for problem solutions in the heads of rules: A × B intuitively means: if possible A, but if A is not possible then at least B. The semantics of logic programs with ordered disjunction is based on a preference relation on answer sets. LPODs are useful for applications in design and configuration and can serve as a basis for qualitative decision making." ] }
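The contrast drawn above between subset-minimality and groundedness is easy to make concrete. A small illustrative sketch (ours): a minimality filter over candidate models, applied to the two grounded models that a choice rule such as {a}. admits.

```python
def subset_minimal(models):
    """Keep only the subset-minimal elements of a collection of models,
    as the disjunctive stable semantics demands."""
    return [m for m in models if not any(n < m for n in models)]

# Under the choice rule {a}., both of these sets are grounded models,
# so the collection is not closed under subset-minimality:
grounded = [frozenset(), frozenset({"a"})]
minimal = subset_minimal(grounded)   # only the empty model survives
```

Applying such a filter to the stable models of a choice-rule program would discard legitimate answers, which is exactly why the two semantics diverge.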
math0005204
1540167525
We present some new and recent algorithmic results concerning polynomial system solving over various rings. In particular, we present some of the best recent bounds on: (a) the complexity of calculating the complex dimension of an algebraic set, (b) the height of the zero-dimensional part of an algebraic set over C, and (c) the number of connected components of a semi-algebraic set. We also present some results which significantly lower the complexity of deciding the emptiness of hypersurface intersections over C and Q, given the truth of the Generalized Riemann Hypothesis. Furthermore, we state some recent progress on the decidability of the prefixes and , quantified over the positive integers. As an application, we conclude with a result connecting Hilbert's Tenth Problem in three variables and height bounds for integral points on algebraic curves. This paper is based on three lectures presented at the conference corresponding to this proceedings volume. The titles of the lectures were Some Speed-Ups in Computational Algebraic Geometry,'' Diophantine Problems Nearly in the Polynomial Hierarchy,'' and Curves, Surfaces, and the Frontier to Undecidability.''
As for more general relations between @math and its analogue over @math , it is easy to see that the decidability of @math implies the decidability of its analogue over @math . Unfortunately, the converse is currently unknown. Via Lagrange's Theorem (that any positive integer can be written as a sum of four squares) one can easily show that the undecidability of @math implies the undecidability of the analogue of @math over @math . More recently, Zhi-Wei Sun has shown that the @math can be replaced by @math @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1924023668", "1561261246", "2551334724", "2807350215" ], "abstract": [ "A function @math is @math -resilient if all its Fourier coefficients of degree at most @math are zero, i.e., @math is uncorrelated with all low-degree parities. We study the notion of @math @math of Boolean functions, where we say that @math is @math -approximately @math -resilient if @math is @math -close to a @math -valued @math -resilient function in @math distance. We show that approximate resilience essentially characterizes the complexity of agnostic learning of a concept class @math over the uniform distribution. Roughly speaking, if all functions in a class @math are far from being @math -resilient then @math can be learned agnostically in time @math and conversely, if @math contains a function close to being @math -resilient then agnostic learning of @math in the statistical query (SQ) framework of Kearns has complexity of at least @math . This characterization is based on the duality between @math approximation by degree- @math polynomials and approximate @math -resilience that we establish. In particular, it implies that @math approximation by low-degree polynomials, known to be sufficient for agnostic learning over product distributions, is in fact necessary. Focusing on monotone Boolean functions, we exhibit the existence of near-optimal @math -approximately @math -resilient monotone functions for all @math . Prior to our work, it was conceivable even that every monotone function is @math -far from any @math -resilient function. Furthermore, we construct simple, explicit monotone functions based on @math and @math that are close to highly resilient functions. Our constructions are based on a fairly general resilience analysis and amplification. 
These structural results, together with the characterization, imply nearly optimal lower bounds for agnostic learning of monotone juntas.", "We show that termination of a simple class of linear loops over the integers is decidable. Namely we show that termination of deterministic linear loops is decidable over the integers in the homogeneous case, and over the rationals in the general case. This is done by analyzing the powers of a matrix symbolically using its eigenvalues. Our results generalize the work of Tiwari [Tiw04], where similar results were derived for termination over the reals. We also gain some insights into termination of non-homogeneous integer programs, that are very common in practice.", "Let @math be a partially ordered set. If the Boolean lattice @math can be partitioned into copies of @math for some positive integer @math , then @math must satisfy the following two trivial conditions: (1) the size of @math is a power of @math , (2) @math has a unique maximal and minimal element. Resolving a conjecture of Lonc, it was shown by Gruslys, Leader and Tomon that these conditions are sufficient as well. In this paper, we show that if @math only satisfies condition (2), we can still almost partition @math into copies of @math . We prove that if @math has a unique maximal and minimal element, then there exists a constant @math such that all but at most @math elements of @math can be covered by disjoint copies of @math .", "We show that any language in nondeterministic time @math , where the number of iterated exponentials is an arbitrary function @math , can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness @math and soundness @math , where the number of iterated exponentials is @math and @math is a universal constant. The result was previously known for @math and @math ; we obtain it for any time-constructible function @math . 
The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result (unpublished) on the uncomputability of the entangled value of multiprover games. Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations." ] }
cs0005026
1644526253
A one-time pad (OTP) based cipher to ensure both data protection and integrity when mobile code arrives at a remote host is presented. Data protection is required when a mobile agent could retrieve confidential information that would be encrypted in untrusted nodes of the network; in this case, information management could not rely on carrying an encryption key. Data integrity is a prerequisite because mobile code must be protected against malicious hosts that, by counterfeiting or removing collected data, could conceal information from the server that has sent the agent. The algorithm described in this article seems simple enough to be easily implemented. This scheme is based on a non-interactive protocol and allows a remote host to change its own data on-the-fly while, at the same time, protecting information against handling by other hosts.
A strong foundation is a requirement for future work in the topic of mobile agents @cite_12 . Designing semantics and type-safe languages for agents in untrusted networks @cite_14 , and supporting permission languages for specifying distributed processes in dynamically evolving networks, such as the languages derived from the @math -calculus @cite_18 , is important to protect hosts against malicious code. spoonhower:telephony have shown that agents could be used for collaborative applications, reducing network bandwidth requirements. sander:hosts have proposed a way to obtain code privacy using non-interactive evaluation of encrypted functions (EEF). hohl:mess has proposed the possibility of using algorithms to ``mess up'' code.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_12" ], "mid": [ "2071198236", "2006520458", "118562019", "2061692729" ], "abstract": [ "In mobile agent systems, program code together with some process state can autonomously migrate to new hosts. Despite its many practical benefits, mobile agent technology results in significant new security threats from malicious agents and hosts. In this paper, we propose a security architecture to achieve three goals: certification that a server has the authority to execute an agent on behalf of its sender; flexible selection of privileges, so that an agent arriving at a server may be given the privileges necessary to carry out the task for which it has come to the server; and state appraisal, to ensure that an agent has not become malicious as a consequence of alterations to its state. The architecture models the trust relations between the principals of mobile agent systems and includes authentication and authorization mechanisms.", "We describe a foundational language for specifying dynamically evolving networks of distributed processes, Dπ. The language is a distributed extension of the π-calculus which incorporates the notions of remote execution, migration, and site failure. Novel features of Dπ include 1. Communication channels are explicitly located: the use of a channel requires knowledge of both the channel and its location. 2. Names are endowed with permissions: the holder of a name may only use that name in the manner allowed by these permissions. A type system is proposed in which the types control the allocation of permissions; in well-typed processes all names are used in accordance with the permissions allowed by the types. We prove Subject Reduction and Type Safety Theorems for the type system.
In the final section we define a semantic theory based on barbed bisimulations and discuss its characterization in terms of a bisimulation relation over a relativized labelled transition system.", "Mobile agent technology offers a new computing paradigm in which a program, in the form of a software agent, can suspend its execution on a host computer, transfer itself to another agent-enabled host on the network, and resume execution on the new host. The use of mobile code has a long history dating back to the use of remote job entry systems in the 1960's. Today's agent incarnations can be characterized in a number of ways ranging from simple distributed objects to highly organized software with embedded intelligence. As the sophistication of mobile software has increased over time, so too have the associated threats to security. This report provides an overview of the range of threats facing the designers of agent platforms and the developers of agent-based applications. The report also identifies generic security objectives, and a range of measures for countering the identified threats and fulfilling these security objectives.", "Almost all agent development to date has been “homegrown” [4] and done from scratch, independently, by each development team. This has led to the following problems: • Lack of an agreed definition: Agents built by different teams have different capabilities. • Duplication of effort: There has been little reuse of agent architectures, designs, or components. • Inability to satisfy industrial strength requirements: Agents must integrate with existing software and computer infrastructure. They must also address security and scaling concerns. Agents are complex and ambitious software systems that will be entrusted with critical applications. As such, agent-based systems must be engineered with valid software engineering principles and not constructed in an ad hoc fashion. Agent systems must have a strong foundation based on masterful software patterns.
Software patterns arose out of Alexander’s [2] work in architecture and urban planning. Many urban plans and architectures are grandiose and ill-fated. Overly ambitious agent-based systems built in an ad hoc fashion risk the same fate. They may never be built, or, due to their fragile nature, they may be built and either never used or used once and then abandoned. A software pattern is a recurring problem and solution; it may address conceptual, architectural or design problems. A pattern is described in a set format to ease its dissemination. The format states the problem addressed by the pattern and the forces acting on it. There is also a context that must be present for the pattern to be valid, a statement of the solution, and any known uses. The following sections summarize some key patterns of agent-based systems; for brevity, many of the patterns are presented in an abbreviated “patlet” form. When known uses are not listed for an individual pattern, it means that the pattern has arisen from the JAFIMA activity. The patterns presented in this paper represent progress toward a pattern language or living methodology for intelligent and mobile agents." ] }
cs0006023
2949089885
We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
Previous research on DA modeling has generally focused on task-oriented dialogue, with three tasks in particular garnering much of the research effort. The Map Task corpus @cite_49 @cite_12 consists of conversations between two speakers with slightly different maps of an imaginary territory. Their task is to help one speaker reproduce a route drawn only on the other speaker's map, all without being able to see each other's maps. Of the DA modeling algorithms described below, TaylorEtAl:LS98 and Wright:98 were based on Map Task. The VERBMOBIL corpus consists of two-party scheduling dialogues. A number of the DA modeling algorithms described below were developed for VERBMOBIL, including those of MastEtAl:96 , WarnkeEtAl:97 , Reithinger:96 , Reithinger:97 , and Samuel:98 . The ATR Conference corpus is a subset of a larger ATR Dialogue database consisting of simulated dialogues between a secretary and a questioner at international conferences. Researchers using this corpus include Nagata:92 , [1994] NagataMorimoto:93 , NagataMorimoto:94 and KitaEtAl:96 . Table shows the most commonly used versions of the tag sets from those three tasks.
{ "cite_N": [ "@cite_12", "@cite_49" ], "mid": [ "2118142207", "2122514299", "2593751037", "2143023047" ], "abstract": [ "This paper describes a corpus of unscripted, task-oriented dialogues which has been designed, digitally recorded, and transcribed to support the study of spontaneous speech on many levels. The corpus uses the Map Task (Brown, Anderson, Yule, and Shillcock, 1983) in which speakers must collaborate verbally to reproduce on one participant's map a route printed on the other's. In all, the corpus includes four conversations from each of 64 young adults and manipulates the following variables: familiarity of speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. The motivations for the design are set out and basic corpus statistics are presented.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. 
A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction.", "In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering, which can be both time consuming and expensive. 
We provide baselines in two different environments: one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation, and one where models are trained to select the correct next response from a list of candidate responses. These are both evaluated on a recall task that we call Next Utterance Classification (NUC), as well as other generation-specific metrics. Finally, we provide a qualitative error analysis to help determine the most promising directions for future research on the Ubuntu Dialogue Corpus, and for end-to-end dialogue systems in general.", "The HCRC Map Task corpus has been collected and transcribed in Glasgow and Edinburgh, and recently published on CD-ROM. This effort was made possible by funding from the British Economic and Social Research Council.The corpus is composed of 128 two-person conversations in both high-quality digital audio and orthographic transcriptions, amounting to 18 hours and 150,000 words respectively.The experimental design is quite detailed and complex, allowing a number of different phonemic, syntactico-semantic and pragmatic contrasts to be explored in a controlled way.The corpus is a uniquely valuable resource for speech recognition research in particular, as we move from developing systems intended for controlled use by familiar users to systems intended for less constrained circumstances and naive or occasional users. Examples supporting this claim are given, including preliminary evidence of the phonetic consequences of second mention and the impact of different styles of referent negotiation on communicative efficacy." ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
There is a large body of literature dealing with verification of protocols. Verification systems typically address well-defined properties --such as safety, liveness, and responsiveness @cite_37 -- and aim to detect violations of these properties. In general, the two main approaches for protocol verification are theorem proving and reachability analysis @cite_2 . Theorem proving systems define a set of axioms and relations to prove properties, and include model-based and logic-based formalisms @cite_35 @cite_17 . These systems are useful in many applications. However, they tend to abstract out some network dynamics that we will study (e.g., selective packet loss). Moreover, they do not synthesize network topologies and do not address performance issues per se.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_17", "@cite_2" ], "mid": [ "2033263591", "1508967933", "2291637985", "1926128235" ], "abstract": [ "In this article we present a comprehensive survey of various approaches for the verification of cache coherence protocols based on state enumeration, (symbolic) model checking, and symbolic state models. Since these techniques search the state space of the protocol exhaustively, the amount of memory required to manipulate that state information and the verification time grow very fast with the number of processors and the complexity of the protocol mechanisms. To be successful for systems of arbitrary complexity, a verification technique must solve this so-called state space explosion problem. The emphasis of our discussion is on the underlying theory in each method of handling the state space explosion problem, and formulating and checking the safety properties (e.g., data consistency) and the liveness properties (absence of deadlock and livelock). We compare the efficiency and discuss the limitations of each technique in terms of memory and computation time. Also, we discuss issues of generality, applicability, automaticity, and amenity for existing tools in each class of methods. No method is truly superior because each method has its own strengths and weaknesses. Finally, refinements that can further reduce the verification time and/or the memory requirement are also discussed.", "Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes.
Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.", "Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analyzed using techniques such as model checking, and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behavior. In this paper, we extend the applicability of quantitative verification to the common scenario when the probabilities of transition between some or all states of the Markov models analyzed by the technique are unknown, but observations of these transitions are available. To this end, we introduce a theoretical framework, and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. 
Our experiments show that disregarding the above source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions, and low-quality software systems.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication." ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
There are a good number of publications dealing with conformance testing @cite_19 @cite_23 @cite_22 @cite_5 . However, conformance testing verifies that an implementation (as a black box) adheres to a given specification of the protocol by constructing input/output sequences. Conformance testing is useful during the implementation testing phase --which we do not address in this paper-- but addresses neither performance issues nor topology synthesis for design testing. By contrast, our method synthesizes test scenarios for protocol design, according to evaluation criteria.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_22", "@cite_23" ], "mid": [ "2107360681", "2029755436", "2339842460", "2121954581" ], "abstract": [ "This chapter presents principles and techniques for model-based black-box conformance testing of real-time systems using the Uppaal model-checking tool-suite. The basis for testing is given as a network of concurrent timed automata specified by the test engineer. Relativized input/output conformance serves as the notion of implementation correctness, essentially timed trace inclusion taking environment assumptions into account. Test cases can be generated offline and later executed, or they can be generated and executed online. For both approaches this chapter discusses how to specify test objectives, derive test sequences, apply these to the system under test, and assign a verdict.", "A novel procedure presented here generates test sequences for checking the conformity of protocol implementations to their specifications. The test sequences generated by this procedure only detect the presence of many faults, but they do not locate the faults. It can always detect the problem in an implementation with a single fault. A protocol entity is specified as a finite state machine (FSM). It typically has two interfaces: an interface with the user and with the lower-layer protocol. The inputs from both interfaces are merged into a single set I and the outputs from both interfaces are merged into a single set O. The implementation is assumed to be a black box. The key idea in this procedure is to tour all states and state transitions and to check a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an I/O behavior that is not exhibited by any other state.", "Industrial-sized hybrid systems are typically not amenable to formal verification techniques. For this reason, a common approach is to formally verify abstractions of (parts of) the original system.
However, we need to show that this abstraction conforms to the actual system implementation including its physical dynamics. In particular, verified properties of the abstract system need to transfer to the implementation. To this end, we introduce a formal conformance relation, called reachset conformance, which guarantees transference of safety properties, while being a weaker relation than the existing trace inclusion conformance. Based on this formal relation, we present a conformance testing method which allows us to tune the trade-off between accuracy and computational load. Additionally, we present a test selection algorithm that uses a coverage measure to reduce the number of test cases for conformance testing. We experimentally show the benefits of our novel techniques based on an example from autonomous driving.", "The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification heavily relies on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out. >" ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet have complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst- and best-case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
Automatic test generation techniques have been used in several fields. VLSI chip testing @cite_12 uses test vector generation to detect target faults. Test vectors may be generated based on circuit and fault models, using the fault-oriented technique, which utilizes implication techniques. These techniques were adopted in @cite_25 to develop fault-oriented test generation (FOTG) for multicast routing. In @cite_25 , FOTG was used to study the correctness of a multicast routing protocol on a LAN. We extend FOTG to study the performance of end-to-end multipoint mechanisms. We introduce the concept of a virtual LAN to represent the underlying network, integrate timing and delay semantics into our model, and use performance criteria to drive our synthesis algorithm.
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "1962021926", "2495715420", "2107709519", "2121954581" ], "abstract": [ "We present a new algorithm for automatic test generation for multicast routing. Our algorithm processes a finite state machine (FSM) model of the protocol and uses a mix of forward and backward search techniques to generate the tests. The output tests include a set of topologies, protocol events and network failures, that lead to violation of protocol correctness and behavioral requirements. We target protocol robustness in specific, and do not attempt to verify other properties in this paper. We apply our method to a multicast routing protocol; PIM-DM, and investigate its behavior in the presence of selective packet loss on LANs and router crashes. Our study unveils several robustness violations in PIM-DM, for which we suggest fixes with the aid of the presented algorithm.", "In engineering of safety critical systems, regulatory standards often put requirements on both traceable specification-based testing, and structural coverage on program units. Automated test generation techniques can be used to generate inputs to cover the structural aspects of a program. However, there is no conclusive evidence on how automated test generation compares to manual test design, or how testing based on the program implementation relates to specification-based testing. In this paper, we investigate specification -- and implementation-based testing of embedded software written in the IEC 61131-3 language, a programming standard used in many embedded safety critical software systems. Further, we measure the efficiency and effectiveness in terms of fault detection. For this purpose, a controlled experiment was conducted, comparing tests created by a total of twenty-three software engineering master students. The participants worked individually on manually designing and automatically generating tests for two IEC 61131-3 programs. 
Tests created by the participants in the experiment were collected and analyzed in terms of mutation score, decision coverage, number of tests, and testing duration. We found that, when compared to implementation-based testing, specification-based testing yields significantly more effective tests in terms of the number of faults detected. Specifically, specification-based tests more effectively detect comparison and value replacement type of faults, compared to implementation-based tests. On the other hand, implementation-based automated test generation leads to fewer tests (up to 85 improvement) created in shorter time than the ones manually created based on the specification.", "We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (that violate one or more contract) point to potential errors that should be corrected. Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves higher or equal block and predicate coverage than model checking (with and without abstraction) and undirected random generation. 
On 14 large, widely-used libraries (comprising 780KLOC), feedback-directed random test generation finds many previously-unknown errors, not found by either model checking or undirected random generation.", "The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification heavily relies on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out. >" ] }
cs0007002
2949562458
Many problems in robust control and motion planning can be reduced either to finding a sound approximation of the solution space determined by a set of nonlinear inequalities, or to the ``guaranteed tuning problem'' as defined by Jaulin and Walter, which amounts to finding a value for some tuning parameter such that a set of inequalities is verified for all the possible values of some perturbation vector. A classical approach to solve these problems, which satisfies the strong soundness requirement, involves some quantifier elimination procedure such as Collins' Cylindrical Algebraic Decomposition symbolic method. Sound numerical methods using interval arithmetic and local consistency enforcement to prune the search space are presented in this paper as much faster alternatives for both soundly solving systems of nonlinear inequalities and addressing the guaranteed tuning problem whenever the perturbation vector has dimension one. The use of these methods in camera control is investigated, and experiments with the prototype of a declarative modeller to express camera motion using a cinematic language are reported and commented on.
The method presented by @cite_40 is strongly related to the one we present in the following, since they rely on usual interval constraint solving techniques to compute sound boxes for some constraint system. Starting from a seed that is known to belong to the solution space, they enlarge the domain of the variables around it in such a way that the new box computed is still included in the solution space. They do so by using local consistency techniques to find the points at which the truth value of the constraints change. Their algorithm is particularly well suited for the applications they target, the enlargement of tolerances. It is however not designed to solve the guaranteed tuning problem. In addition, it is necessary to obtain a seed for each connected subset of the solution space, and to apply the algorithm on each seed if one is interested in computing several solutions (e.g. to ensure representativeness of the samples).
{ "cite_N": [ "@cite_40" ], "mid": [ "179407972", "2064358676", "2017718716", "2056012370" ], "abstract": [ "We report on a novel technique called spatial coupling and its application in the analysis of random constraint satisfaction problems (CSP). Spatial coupling was invented as an engineering construction in the area of error correcting codes where it has resulted in efficient capacity-achieving codes for a wide range of channels. However, this technique is not limited to problems in communications, and can be applied in the much broader context of graphical models. We describe here a general methodology for applying spatial coupling to random constraint satisfaction problems and obtain lower bounds for their (rough) satisfiability threshold. The main idea is to construct a distribution of geometrically structured random K-SAT instances - namely the spatially coupled ensemble - which has the same (rough) satisfiability threshold, and is at the same time algorithmically easier to solve. Then by running well-known algorithms on the spatially coupled ensemble we obtain a lower bound on the (rough) satisfiability threshold of the original ensemble. The method is versatile because one can choose the CSP, there is a certain amount of freedom in the construction of the spatially coupled ensemble, and also in the choice of the algorithm. In this work we focus on random K-SAT but we have also checked that the method is successful for Coloring, NAE-SAT and XOR-SAT. We choose Unit Clause propagation for the algorithm which is analyzed over the spatially coupled instances. For K = 3, for instance, our lower bound is equal to 3.67 which is better than the current bounds in the literature. Similarly, for graph 3-colorability we get a bound of 2.22 which is also better than the current bounds in the literature.", "In this paper, we propose a new algorithm for pairwise rigid point set registration with unknown point correspondences. 
The main properties of our method are noise robustness, outlier resistance and global optimal alignment. The problem of registering two point clouds is converted to a minimization of a nonlinear cost function. We propose a new cost function based on an inverse distance kernel that significantly reduces the impact of noise and outliers. In order to achieve a global optimal registration without the need of any initial alignment, we develop a new stochastic approach for global minimization. It is an adaptive sampling method which uses a generalized BSP tree and allows for minimizing nonlinear scalar fields over complex shaped search spaces like, e.g., the space of rotations. We introduce a new technique for a hierarchical decomposition of the rotation space in disjoint equally sized parts called spherical boxes. Furthermore, a procedure for uniform point sampling from spherical boxes is presented. Tests on a variety of point sets show that the proposed registration method performs very well on noisy, outlier corrupted and incomplete data. For comparison, we report how two state-of-the-art registration algorithms perform on the same data sets.", "This paper studies sparse spikes deconvolution over the space of measures. We focus on the recovery properties of the support of the measure (i.e., the location of the Dirac masses) using total variation of measures (TV) regularization. This regularization is the natural extension of the @math l1 norm of vectors to the setting of measures. We show that support identification is governed by a specific solution of the dual problem (a so-called dual certificate) having minimum @math L2 norm. Our main result shows that if this certificate is non-degenerate (see the definition below), when the signal-to-noise ratio is large enough TV regularization recovers the exact same number of Diracs. We show that both the locations and the amplitudes of these Diracs converge toward those of the input measure when the noise drops to zero. 
Moreover, the non-degeneracy of this certificate can be checked by computing a so-called vanishing derivative pre-certificate. This proxy can be computed in closed form by solving a linear system. Lastly, we draw connections between the support of the recovered measure on a continuous domain and on a discretized grid. We show that when the signal-to-noise level is large enough, and provided the aforementioned dual certificate is non-degenerate, the solution of the discretized problem is supported on pairs of Diracs which are neighbors of the Diracs of the input measure. This gives a precise description of the convergence of the solution of the discretized problem toward the solution of the continuous grid-free problem, as the grid size tends to zero.", "The problem of finding heavy hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of counter-based algorithms (including the popular and very space-efficient Frequent and SpacesSaving algorithms) provides much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining tail. This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the sparse recovery problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. 
We prove that using space O(k), the algorithms construct an approximation f* to the frequency vector f so that the L1 error ppf−pf*p1 is close to the best possible error minf′ pf′ − fp1, where f′ ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions (where n is the size of the domain from which stream elements are drawn). Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams." ] }
cs0007004
1931024191
Despite the effort of many researchers in the area of multi-agent systems (MAS) on designing and programming agents, a few years ago the research community began to recognize that common features exist among different MAS. Based on these common features, several tools have tackled the problem of agent development for specific application domains or specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents, named Brainstorm J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, being flexible and reusable.
JAFIMA (Java Framework for Intelligent and Mobile Agents) @cite_12 takes a different approach from the other tools: it is primarily targeted at expert developers who want to develop agents from scratch based on the abstract classes provided, so the programming effort is greater than in the other tools. The weakest point of JAFIMA is its rule-based mechanism for defining agents' behavior. This mechanism does not support complex behaviors such as on-line planning or learning. Moreover, the abstractions for representing mental states lack flexibility and services for manipulating symbolic data.
{ "cite_N": [ "@cite_12" ], "mid": [ "2061692729", "2118300983", "2107370049", "1544271477" ], "abstract": [ "Almost all agent development to date has been “homegrown” [4] and done from scratch, independently, byeach development team. This has led to the followingproblems:• Lack of an agreed definition: Agents built bydifferent teams have different capabilities.• Duplication of effort: There has been little reuse ofagent architectures, designs, or components.• Inability to satisfy industrial strengthrequirements: Agents must integrate with existingsoftware and computer infrastructure. They must alsoaddress security and scaling concerns.Agents are complex and ambitious software systems thatwill be entrusted with critical applications. As such,agent based systems must be engineered with validsoftware engineering principles and not constructed in anad hoc fashion.Agent systems must have a strong foundation based onmasterful software patterns. Software patterns arose outof Alexander’s [2] work in architecture and urbanplanning. Many urban plans and architectures aregrandiose and ill-fated. Overly ambitious agent basedsystems built in an ad hoc fashion risk the same fate.They may never be built, or, due to their fragile nature,they may be built and either never used or used once andthen abandoned. A software pattern is a recurringproblem and solution; it may address conceptual,architectural or design problems.A pattern is described in a set format to ease itsdissemination. The format states the problem addressedby the pattern and the forces acting on it. There is also acontext that must be present for the pattern to be valid, astatement of the solution, and any known uses. Thefollowing sections summarize some key patterns of agentbased systems; for brevity, many of the patterns arepresented in an abbreviated “patlet” form. When kn ownuses are not listed for an individual pattern, it means thatthe pattern has arisen from the JAFIMA activity. 
Thepatterns presented in this paper represent progress towarda pattern language or living methodology for intelligentand mobile agents.", "We describe JastAdd, a Java-based system for compiler construction. JastAdd is centered around an object-oriented representation of the abstract syntax tree where reference variables can be used to link together different parts of the tree. JastAdd supports the combination of declarative techniques (using Reference Attributed Grammars) and imperative techniques (using ordinary Java code) in implementing the compiler. The behavior can be modularized into different aspects, e.g. name analysis, type checking, code generation, etc., that are woven together into classes using aspect-oriented programming techniques, providing a safer and more powerful alternative to the Visitor pattern. The JastAdd system is independent of the underlying parsing technology and supports any noncircular dependencies between computations, thereby allowing general multi-pass compilation. The attribute evaluator (optimal recursive evaluation) is implemented very conveniently using Java classes, interfaces, and virtual methods.", "Java 2 has a security architecture that protects systems from unauthorized access by mobile or statically configured code. The problem is in manually determining the set of security access rights required to execute a library or application. The commonly used strategy is to execute the code, note authorization failures, allocate additional access rights, and test again. This process iterates until the code successfully runs for the test cases in hand. Test cases usually do not cover all paths through the code, so failures can occur in deployed systems. Conversely, a broad set of access rights is allocated to the code to prevent authorization failures from occurring. 
However, this often leads to a violation of the \"Principle of Least Privilege\"This paper presents a technique for computing the access rights requirements by using a context sensitive, flow sensitive, interprocedural data flow analysis. By using this analysis, we compute at each program point the set of access rights required by the code. We model features such as multi-threading, implicitly defined security policies, the semantics of the Permission.implies method and generation of a security policy description. We implemented the algorithms and present the results of our analysis on a set of programs. While the analysis techniques described in this paper are in the context of Java code, the basic techniques are applicable to access rights analysis issues in non-Java-based systems.", "The ability to effectively debug agent-oriented applications is vital if agent technologies are to become adopted as a viable alternative for complex systems development. Recent advances in the area have focussed on the provision of support for debugging agent interaction where tools have been provided that allow developers to analyse and debug the messages that are passed between agents. One potential approach for constructing agent-oriented applications is through the use of agent programming languages. Such languages employ mental notions such as beliefs, goals, commitments, and intentions to facilitate the construction of agent programs that specify the high-level behaviour of the agent. This paper describes how debugging has been supported for one such language, namely the Agent Factory Agent Programming Language (AFAPL)." ] }
cs0007004
1931024191
Despite the effort of many researchers in the area of multi-agent systems (MAS) on designing and programming agents, a few years ago the research community began to recognize that common features exist among different MAS. Based on these common features, several tools have tackled the problem of agent development for specific application domains or specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents, named Brainstorm J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, being flexible and reusable.
A framework such as Brainstorm J is not just a collection of components: it also defines a generic design. When programmers use a framework they reuse that design and save time and effort. In addition, because of the bidirectional flow of control, frameworks can contain much more functionality than a traditional library, regardless of whether it is a procedural or a class library @cite_9 .
{ "cite_N": [ "@cite_9" ], "mid": [ "2106259924", "2160985005", "1971458750", "2104583282" ], "abstract": [ "Programmers commonly reuse existing frameworks or libraries to reduce software development efforts. One common problem in reusing the existing frameworks or libraries is that the programmers know what type of object that they need, but do not know how to get that object with a specific method sequence. To help programmers to address this issue, we have developed an approach that takes queries of the form \"Source object type → Destination object type\" as input, and suggests relevant method-invocation sequences that can serve as solutions that yield the destination object from the source object given in the query. Our approach interacts with a code search engine (CSE) to gather relevant code samples and performs static analysis over the gathered samples to extract required sequences. As code samples are collected on demand through CSE, our approach is not limited to queries of any specific set of frameworks or libraries. We have implemented our approach with a tool called PARSEWeb, and conducted four different evaluations to show that our approach is effective in addressing programmer's queries. We also show that PARSEWeb performs better than existing related tools: Prospector and Strathcona", "We present the Storyboard Programming framework, a new synthesis system designed to help programmers write imperative low-level data-structure manipulations. The goal of this system is to bridge the gap between the \"boxes-and-arrows\" diagrams that programmers often use to think about data-structure manipulation algorithms and the low-level imperative code that implements them. The system takes as input a set of partial input-output examples, as well as a description of the high-level structure of the desired solution. From this information, it is able to synthesize low-level imperative implementations in a matter of minutes. 
The framework is based on a new approach for combining constraint-based synthesis and abstract-interpretation-based shape analysis. The approach works by encoding both the synthesis and the abstract interpretation problem as a constraint satisfaction problem whose solution defines the desired low-level implementation. We have used the framework to synthesize several data-structure manipulations involving linked lists and binary search trees, as well as an insertion operation into an And Inverter Graph.", "We propose a framework, called Lightning, for planning paths in high-dimensional spaces that is able to learn from experience, with the aim of reducing computation time. This framework is intended for manipulation tasks that arise in applications ranging from domestic assistance to robot-assisted surgery. Our framework consists of two main modules, which run in parallel: a planning-from-scratch module, and a module that retrieves and repairs paths stored in a path library. After a path is generated for a new query, a library manager decides whether to store the path based on computation time and the generated path's similarity to the retrieved path. To retrieve an appropriate path from the library we use two heuristics that exploit two key aspects of the problem: (i) A correlation between the amount a path violates constraints and the amount of time needed to repair that path, and (ii) the implicit division of constraints into those that vary across environments in which the robot operates and those that do not. We evaluated an implementation of the framework on several tasks for the PR2 mobile manipulator and a minimally-invasive surgery robot in simulation. 
We found that the retrieve-and-repair module produced paths faster than planning-from-scratch in over 90 of test cases for the PR2 and in 58 of test cases for the minimally-invasive surgery robot.", "We present a formal framework where a nonmonotonic formalism (the action description language @math ) is used to provide robots with high-level reasoning, such as planning, in the style of cognitive robotics. In particular, we introduce a novel method that bridges the high-level discrete action planning and the low-level continuous behavior by trajectory planning. We show the applicability of this framework on two LEGO MINDSTORMS NXT robots, in an action domain that involves concurrent execution of actions that cannot be serialized." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Our definition of correlation-intractability is related to a definition by Okamoto @cite_21 . Using our terminology, Okamoto considers function ensembles for which it is infeasible to form input-output relations with respect to a specific evasive relation [Ok92, Def. 19] (rather than all such relations). He uses the assumption that such function ensembles exist, for a specific evasive relation, in [Ok92, Thm. 20].
{ "cite_N": [ "@cite_21" ], "mid": [ "1590334370", "1515930456", "2020315425", "1556276033" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives.", "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. 
Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental comparison.", "We describe a generative model of the relationship between two images. The model is defined as a factored three-way Boltzmann machine, in which hidden variables collaborate to define the joint correlation matrix for image pairs. Modeling the joint distribution over pairs makes it possible to efficiently match images that are the same according to a learned measure of similarity. We apply the model to several face matching tasks, and show that it learns to represent the input images using task-specific basis functions. Matching performance is superior to previous similar generative models, including recent conditional models of transformations. We also show that the model can be used as a plug-in matching score to perform invariant classification.", "In the conclusion of his monumental paper on optimal inapproximability results, Hastad [13] suggested that Fourier analysis of Dictator (Long Code) Tests may not be universally applicable in the study of CSPs. His main open question was to determine if the technique could resolve the approximability of satisfiable 3-bit constraint satisfaction problems. 
In particular, he asked if the \"Not Two\" (NTW) predicate is non-approximable beyond the random assignment threshold of 5 8 on satisfiable instances. Around the same time, Zwick [30] showed that all satisfiable 3-CSPs are 5 8-approximable and conjectured that the 5 8 is optimal. In this work we show that Fourier analysis techniques can produce a Dictator Test based on NTW with completeness 1 and soundness 5 8. Our test's analysis uses the Bonami-Gross-Beckner hypercontractive inequality. We also show a soundness lower bound of 5 8 for all 3-query Dictator Tests with perfect completeness. This lower bound for Property Testing is proved in part via a semidefinite programming algorithm of Zwick [30]. Our work precisely determines the 3-query \"Dictatorship Testing gap\". Although this represents progress on Zwick's conjecture, current PCP \"outer verifier\" technology is insufficient to convert our Dictator Test into an NP-hardness-of-approximation result." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
First steps in the direction of identifying and studying useful special-purpose properties of the random oracle have been taken by Canetti @cite_10 . Specifically, Canetti considered a property called "perfect one-wayness", provided a definition of this property, constructions which possess this property (under some reasonable assumptions), and applications for which such functions suffice. Additional constructions have been suggested by Canetti, Micciancio and Reingold @cite_4 . Another context where specific properties of the random oracle were captured and realized is the signature scheme of Gennaro, Halevi and Rabin @cite_28 .
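The contrast drawn above between a random oracle and its implementation by a "cryptographic hash function" can be made concrete with a short sketch. The names below (`LazyRandomOracle`, `hash_oracle`) are illustrative, not from the paper: a lazily sampled table simulates a true random oracle, while SHA-256 stands in for one possible concrete instantiation.

```python
import hashlib
import secrets

class LazyRandomOracle:
    """A true random oracle, simulated by lazy sampling: each fresh
    query gets an independent uniformly random answer, and repeated
    queries are answered consistently from a table."""
    def __init__(self, out_bytes=32):
        self.out_bytes = out_bytes
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = secrets.token_bytes(self.out_bytes)
        return self.table[x]

def hash_oracle(x: bytes) -> bytes:
    """An 'implementation' of the oracle by a concrete hash function.
    Unlike the lazy-sampled oracle, its entire input/output behaviour
    is fixed by a short public description (the code of SHA-256)."""
    return hashlib.sha256(x).digest()
```

The negative result discussed above exploits exactly this gap: a scheme can behave differently depending on whether its oracle admits a short description.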
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_4" ], "mid": [ "1590334370", "2064423787", "2139033758", "2059271645" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives.", "We show that the existence of one-way functions is necessary and sufficient for the existence of pseudo-random generators in the following sense. Let ƒ be an easily computable function such that when x is chosen randomly: (1) from ƒ( x ) it is hard to recover an x 1 with ƒ( x 1 ) = ƒ( x ) by a small circuit, or; (2) ƒ has small degeneracy and from ƒ( x ) it is hard to recover x by a fast algorithm. From one-way functions of type (1) or (2) we show how to construct pseudo-random generators secure against small circuits or fast algorithms, respectively, and vice-versa. 
Previous results show how to construct pseudo-random generators from one-way functions that have special properties ([Blum, Micali 82], [Yao 82], [Levin 85], [Goldreich, Krawczyk, Luby 88]). We use the results of [Goldreich, Levin 89] in an essential way.", "The random oracle model is a very convenient setting for designing cryptographic protocols. In this idealized model all parties have access to a common, public random function, called a random oracle. Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not have a general mechanism for transforming protocols that are secure in the random oracle model into protocols that are secure in real life. In fact, we do not even know how to meaningfully specify the properties required from such a mechanism. Instead, it is a common practice to simply replace — often without mathematical justification — the random oracle with a ‘cryptographic hash function’ (e.g., MD5 or SHA). Consequently, the resulting protocols have no meaningful proofs of security.", "We describe a simple randomized construction for generating pairs of hash functions h 1 ,h 2 from a universe U to ranges V = [m] = (0,1,...,m-1) and W = [m] so that for every key set S ⊆ U with n = |S| ≤ m (1 + e) the (random) bipartite (multi)graph with node set V ∪ W and edge set (h 1 (x),h 2 (x))| x ∈ S exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h 1 and h 2 at O(nζ), for ζ < 1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. 
The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's \"cuckoo hashing\" (2001), to obtain a simpler and faster alternative to a recent construction of Ostlin and Pagh (2002 03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Following the preliminary version of the current work @cite_25 , Hada and Tanaka observed that the existence of even restricted correlation intractable functions (in the non uniform model) would be enough to prove that 3-round auxiliary-input zero-knowledge AM proof systems only exist for languages in BPP @cite_0 . (Recall that auxiliary-input zero-knowledge is seemingly weaker than black-box zero-knowledge, and so the result of @cite_0 is incomparable to prior work of Goldreich and Krawczyk @cite_3 that showed that constant-round auxiliary-input zero-knowledge AM proof systems only exist for languages in BPP.)
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_3" ], "mid": [ "1590334370", "1987890787", "2962993321", "2023675273" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives.", "The wide applicability of zero-knowledge interactive proofs comes from the possibility of using these proofs as subroutines in cryptographic protocols. A basic question concerning this use is whether the (sequential and or parallel) composition of zero-knowledge protocols is zero-knowledge too. We demonstrate the limitations of the composition of zero-knowledge protocols by proving that the original definition of zero-knowledge is not closed under sequential composition; and that even the strong formulations of zero-knowledge (e.g., black-box simulation) are not closed under parallel execution. 
We present lower bounds on the round complexity of zero-knowledge proofs, with significant implications for the parallelization of zero-knowledge protocols. We prove that three-round interactive proofs and constant-round Arthur--Merlin proofs that are black-box simulation zero-knowledge exist only for languages in BPP. In particular, it follows that the \"parallel versions\" of the first interactive proofs systems presented for quadratic residuosity, graph isomorphism, and any language in NP, are not black-box simulation zero-knowledge, unless the corresponding languages are in BPP. Whether these parallel versions constitute zero-knowledge proofs was an intriguing open questions arising from the early works on zero-knowledge. Other consequences are a proof of optimality for the round complexity of various known zero-knowledge protocols and the necessity of using secret coins in the design of \"parallelizable\" constant-round zero-knowledge proofs.", "Zero knowledge plays a central role in cryptography and complexity. The seminal work of Ben- (STOC 1988) shows that zero knowledge can be achieved unconditionally for any language in NEXP, as long as one is willing to make a suitable *physical assumption*: if the provers are spatially isolated, then they can be assumed to be playing independent strategies. Quantum mechanics, however, tells us that this assumption is unrealistic, because spatially-isolated provers could share a quantum entangled state and realize a non-local correlated strategy. The MIP^* model captures this setting. In this work we study the following question: does spatial isolation still suffice to unconditionally achieve zero knowledge even in the presence of quantum entanglement? We answer this question in the affirmative: we prove that every language in NEXP has a 2-prover *zero knowledge* interactive proof that is sound against entangled provers; that is, NEXP ⊆ ZK-MIP^*. 
Our proof consists of constructing a zero knowledge interactive PCP with a strong algebraic structure, and then lifting it to the MIP^* model. This lifting relies on a new framework that builds on recent advances in low-degree testing against entangled strategies, and clearly separates classical and quantum tools. Our main technical contribution is the development of algebraic techniques for obtaining unconditional zero knowledge; this includes a zero knowledge variant of the celebrated sumcheck protocol, a key building block in many probabilistic proof systems. A core component of our sumcheck protocol is a new algebraic commitment scheme, whose analysis relies on algebraic complexity theory.", "We construct a 1-round delegation scheme (i.e., argument system) for every language computable in time t = t(n), where the running time of the prover is poly(t) and the running time of the verifier is n · polylog(t). In particular, for every language in P we obtain a delegation scheme with almost linear time verification. Our construction relies on the existence of a computational sub-exponentially secure private information retrieval (PIR) scheme. The proof exploits a curious connection between the problem of computation delegation and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light. For any language computable in time t = t(n), we construct a multi-prover interactive proof (MIP) that is sound against no-signaling strategies, where the running time of the provers is poly(t), the number of provers is polylog(t), and the running time of the verifier is n · polylog(t). 
In particular, this shows that the class of languages that have polynomial-time MIPs that are sound against no-signaling strategies, is exactly EXP. Previously, this class was only known to contain PSPACE. To convert our MIP into a 1-round delegation scheme, we use the method suggested by (ICALP, 2000). This method relies on the existence of a sub-exponentially secure PIR scheme, and was proved secure by (STOC, 2013) assuming the underlying MIP is secure against no-signaling provers." ] }
cs0011005
2952710481
This paper presents a practical solution for detecting data races in parallel programs. The solution consists of a combination of execution replay (RecPlay) with automatic on-the-fly data race detection. This combination enables us to perform the data race detection on an unaltered execution (almost no probe effect). Furthermore, the usage of multilevel bitmaps and snooped matrix clocks limits the amount of memory used. As the record phase of RecPlay is highly efficient, there is no need to switch it off, thereby eliminating the possibility of Heisenbugs because tracing can be left on all the time.
Although much theoretical work has been done in the field of data race detection @cite_19 @cite_25 @cite_10 @cite_17 , few implementations for general systems have been proposed. Tools proposed in the past had limited capabilities: they were targeted at programs using one semaphore @cite_11 , programs using only post/wait synchronisation @cite_22 or programs with nested fork-join parallelism @cite_10 @cite_21 . The tool that comes closest to our data race detection mechanism, apart from @cite_26 for a proprietary system, is an on-the-fly data race detection mechanism for the CVM (Concurrent Virtual Machine) system @cite_24 . The tool only instruments the memory references to distributed shared data (about 1% of the memory references) and is unable to perform reference identification: it will return the variable that was involved in a data race, but not the instructions that are responsible for the reference.
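The on-the-fly detection discussed above can be illustrated with a generic happens-before race detector based on vector clocks. This is a textbook sketch under simplifying assumptions, not RecPlay's multilevel-bitmap/matrix-clock implementation; all names are illustrative.

```python
# A minimal happens-before data race detector using vector clocks.
# Two accesses to the same variable race if at least one is a write
# and neither is ordered before the other by the happens-before
# relation induced by synchronization.

def vc_leq(a, b):
    """True if vector clock a happened-before-or-equals b."""
    return all(a.get(t, 0) <= b.get(t, 0) for t in set(a) | set(b))

class Detector:
    def __init__(self):
        self.clock = {}        # thread -> its current vector clock
        self.last_access = {}  # variable -> list of (thread, clock, is_write)
        self.races = []

    def access(self, t, var, is_write):
        c = self.clock.setdefault(t, {t: 0})
        c[t] += 1
        me = dict(c)
        for (t2, vc2, w2) in self.last_access.get(var, []):
            # race: different threads, at least one write, unordered
            if t2 != t and (is_write or w2) and not vc_leq(vc2, me):
                self.races.append((var, t2, t))
        self.last_access.setdefault(var, []).append((t, me, is_write))

    def sync(self, src, dst):
        """dst receives src's clock (e.g. an unlock/lock pair)."""
        s = self.clock.setdefault(src, {src: 0})
        d = self.clock.setdefault(dst, {dst: 0})
        for k, v in s.items():
            d[k] = max(d.get(k, 0), v)
```

For example, two unsynchronized writes to the same variable from different threads are reported, while the same pair separated by a `sync` is not.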
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_21", "@cite_17", "@cite_24", "@cite_19", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2088270410", "2168171401", "2170200862", "2165365531" ], "abstract": [ "For shared-memory parallel programs that use explicit synchronization, data race detection is an important part of debugging. A data race exists when concurrently executing sections of code access common shared variables. In programs intended to be data race free, they are sources of nondeterminism usually considered bugs. Previous methods for detecting data races in executions of parallel programs can determine when races occurred, but can report many data races that are artifacts of others and not direct manifestations of program bugs. Artifacts exist because some races can cause others and can also make false races appear real. Such artifacts can overwhelm the programmer with information irrelevant for debugging. This paper presents results showing how to identify nonartifact data races by validation and ordering. Data race validation attempts to determine which races involve events that either did execute concurrently or could have (called feasible data races). We show how each detected race can either be guaranteed feasible, or when insufficient information is available, sets of races can be identified within which at least one is guaranteed feasible. Data race ordering attempts to identify races that did not occur only as a result of others. Data races can be partitioned so that it is known whether a race in one partition may have affected a race in another. The first partitions are guaranteed to contain at least one feasible data race that is not an artifact of any kind. By combining validation and ordering, the programmer can be directed to those data races that should be investigated first for debugging. 
Research supported in part by National Science Foundation grant CCR-8815928, Office of Naval Research grant N00014-89-J-1222, and a Digital Equipment Corporation External Research Grant. To appear in Proc. of the Third ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Williamsburg, VA, April 1991.", "We describe an integrated approach to support debugging of nondeterministic concurrent programs. Our tool provides reproducible program behavior and incorporates mechanisms to identify synchronization bugs commonly termed data races or access anomalies. Both features are based on partially ordered event logs captured at run time. Our mechanism identifies a race condition that is guaranteed to be unaffected by other races in the considered execution. Data collection and analysis for race detection has no impact on the original computation since it is done in replay mode. The race detection and execution replay mechanisms are integrated in the MOSKITO operating system.", "Detecting data races in shared-memory parallel programs is an important debugging problem. This paper presents a new protocol for run-time detection of data races in executions of shared-memory programs with nested fork-join parallelism and no other inter-thread synchronization. This protocol has significantly smaller worst-case run-time overhead than previous techniques. The worst-case space required by our protocol when monitoring an execution of a program P is O(V N), where V is the number of shared variables in P, and N is the maximum dynamic nesting of parallel constructs in P's execution. The worst-case time required to perform any monitoring operation is O(N). 
We formally prove that our new protocol always reports a non-empty subset of the data races in a monitored program execution and describe how this property leads to an effective debugging strategy.", "The authors present a data-race-free-1, shared-memory model that unifies four earlier models: weak ordering, release consistency (with sequentially consistent special operations), the VAX memory model, and data-race-free-0. Data-race-free-1 unifies the models of weak ordering, release consistency, the VAX, and data-race-free-0 by formalizing the intuition that if programs synchronize explicitly and correctly, then sequential consistency can be guaranteed with high performance in a manner that retains the advantages of each of the four models. Data-race-free-1 expresses the programmer's interface more explicitly and formally than weak ordering and the VAX, and allows an implementation not allowed by weak ordering, release consistency, or data-race-free-0. The implementation proposal for data-race-free-1 differs from earlier implementations by permitting the execution of all synchronization operations of a processor even while previous data operations of the processor are in progress. To ensure sequential consistency, two synchronizing processors exchange information to delay later operations of the second processor that conflict with an incomplete data operation of the first processor." ] }
cs0012007
2950755945
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
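The diagnosis step described above, finding a minimal inconsistent subset of constraints, can be sketched generically. The deletion-based shrinking loop below is a common technique for this task and is only an illustration; Kima's actual constraints are mode/type constraints of Moded Flat GHC, whereas here they are toy variable bindings.

```python
# Deletion-based search for one minimal inconsistent subset of
# constraints.  `consistent` can be any predicate on a constraint set;
# here a constraint is an illustrative (variable, value) binding, and
# a set is inconsistent when one variable is bound to two values.

def consistent(constraints):
    seen = {}
    for var, val in constraints:
        if seen.setdefault(var, val) != val:
            return False
    return True

def minimal_inconsistent_subset(constraints):
    """Shrink an inconsistent set by trying to drop each element:
    an element is kept only if dropping it restores consistency,
    i.e. it is necessary for the inconsistency."""
    assert not consistent(constraints)
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if consistent(trial):
            i += 1            # needed for the inconsistency: keep it
        else:
            core = trial      # still inconsistent without it: drop it
    return core
```

Because each constraint carries its source occurrence (here, the tuple itself), the minimal subset directly points at candidate error locations, which is the basis of Kima's correction search.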
Analysis of malfunctioning systems based on their intended logical specification has been studied in the field of artificial intelligence @cite_9 and known as model-based diagnosis, which has some similarities with our work. However, the purpose of model-based diagnosis is to analyze the differences between intended and observed behaviors, while our system does not require that the intended behavior of a program be given as declarations.
{ "cite_N": [ "@cite_9" ], "mid": [ "2133291246", "2108309071", "2137118045", "2172154252" ], "abstract": [ "Several artificial intelligence architectures and systems based on \"deep\" models of a domain have been proposed, in particular for the diagnostic task. These systems have several advantages over traditional knowledge based systems, but they have a main limitation in their computational complexity. One of the ways to face this problem is to rely on a knowledge compilation phase, which produces knowledge that can be used more effectively with respect to the original one. We show how a specific knowledge compilation approach can focus reasoning in abductive diagnosis, and, in particular, can improve the performances of AID, an abductive diagnosis system. The approach aims at focusing the overall diagnostic cycle in two interdependent ways: avoiding the generation of candidate solutions to be discarded a posteriori and integrating the generation of candidate solutions with discrimination among different candidates. Knowledge compilation is used off-line to produce operational (i.e., easily evaluated) conditions that embed the abductive reasoning strategy and are used in addition to the original model, with the goal of ruling out parts of the search space or focusing on parts of it. The conditions are useful to solve most cases using less time for computing the same solutions, yet preserving all the power of the model-based system for dealing with multiple faults and explaining the solutions. Experimental results showing the advantages of the approach are presented.", "Suppose one is given a description of a system, together with an observation of the system's behaviour which conflicts with the way the system is meant to behave. The diagnostic problem is to determine those components of the system which, when assumed to be functioning abnormally, will explain the discrepancy between the observed and correct system behaviour. 
We propose a general theory for this problem. The theory requires only that the system be described in a suitable logic. Moreover, there are many such suitable logics, e.g. first-order, temporal, dynamic, etc. As a result, the theory accommodates diagnostic reasoning in a wide variety of practical settings, including digital and analogue circuits, medicine, and database updates. The theory leads to an algorithm for computing all diagnoses, and to various results concerning principles of measurement for discriminating among competing diagnoses. Finally, the theory reveals close connections between diagnostic reasoning and nonmonotonic reasoning.", "When a system behaves abnormally, sequential diagnosis takes a sequence of measurements of the system until the faults causing the abnormality are identified, and the goal is to reduce the diagnostic cost, defined here as the number of measurements. To propose measurement points, previous work employs a heuristic based on reducing the entropy over a computed set of diagnoses. This approach generally has good performance in terms of diagnostic cost, but can fail to diagnose large systems when the set of diagnoses is too large. Focusing on a smaller set of probable diagnoses scales the approach but generally leads to increased average diagnostic costs. In this paper, we propose a new diagnostic framework employing four new techniques, which scales to much larger systems with good performance in terms of diagnostic cost. First, we propose a new heuristic for measurement point selection that can be computed efficiently, without requiring the set of diagnoses, once the system is modeled as a Bayesian network and compiled into a logical form known as d-DNNF. Second, we extend hierarchical diagnosis, a technique based on system abstraction from our previous work, to handle probabilities so that it can be applied to sequential diagnosis to allow larger systems to be diagnosed. 
Third, for the largest systems where even hierarchical diagnosis fails, we propose a novel method that converts the system into one that has a smaller abstraction and whose diagnoses form a superset of those of the original system; the new system can then be diagnosed and the result mapped back to the original system. Finally, we propose a novel cost estimation function which can be used to choose an abstraction of the system that is more likely to provide optimal average cost. Experiments with ISCAS-85 benchmark circuits indicate that our approach scales to all circuits in the suite except one that has a flat structure not susceptible to useful abstraction.", "Fault diagnosis approaches can generally be categorized into spectrum-based fault localization (SFL, correlating failures with abstractions of program traces), and model-based diagnosis (MBD, logic reasoning over a behavioral model). Although MBD approaches are inherently more accurate than SFL, their high computational complexity prohibits application to large programs. We present a framework to combine the best of both worlds, coined BARINEL. The program is modeled using abstractions of program traces (as in SFL) while Bayesian reasoning is used to deduce multiple-fault candidates and their probabilities (as in MBD). A particular feature of BARINEL is the usage of a probabilistic component model that accounts for the fact that faulty components may fail intermittently. Experimental results on both synthetic and real software programs show that BARINEL typically outperforms current SFL approaches at a cost complexity that is only marginally higher. In the context of single faults this superiority is established by formal proof." ] }
cs0012007
2950755945
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
Wand proposed an algorithm for diagnosing non-well-typed functional programs @cite_5 . His approach was to extend the unification algorithm for type reconstruction to record which symbol occurrence imposed which constraint. In contrast, our framework is built outside any underlying framework of constraint solving. It does not incur any overhead for well-moded/typed programs or modify the constraint-solving algorithm.
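Wand's idea of recording which symbol occurrence imposed which constraint can be sketched with a toy constraint solver. The representation below (primed names for type variables, plain atoms for concrete types, origins as strings) is invented for illustration and is far simpler than full unification over compound types.

```python
# Blame-tracking unification sketch: every equality constraint carries
# the source location ("origin") that imposed it.  When two concrete
# types clash, the solver reports all origins that contributed to the
# conflicting bindings, not just the last constraint processed.

def solve(constraints):
    """constraints: list of (lhs, rhs, origin).  Returns ('ok', subst)
    or ('clash', sorted list of origins involved in the conflict)."""
    subst = {}  # type variable -> (term, origins justifying the binding)

    def resolve(t):
        origins = set()
        while t in subst:
            t, o = subst[t]
            origins |= o
        return t, origins

    for lhs, rhs, origin in constraints:
        a, oa = resolve(lhs)
        b, ob = resolve(rhs)
        if a == b:
            continue
        blame = oa | ob | {origin}
        if a.startswith("'"):      # 'a is an unbound type variable
            subst[a] = (b, blame)
        elif b.startswith("'"):
            subst[b] = (a, blame)
        else:                      # two distinct concrete types clash
            return ("clash", sorted(blame))
    return ("ok", subst)
```

A clash between `'x = int` at one location and `'x = bool` at another is then reported with both locations, mirroring the kind of occurrence-level diagnosis discussed above.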
{ "cite_N": [ "@cite_5" ], "mid": [ "2571527823", "2337062560", "2953338282", "2157785665" ], "abstract": [ "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We propose an approach for dense semantic 3D reconstruction which uses a data term that is defined as potentials over viewing rays, combined with continuous surface area penalization. Our formulation is a convex relaxation which we augment with a crucial non-convex constraint that ensures exact handling of visibility. To tackle the non-convex minimization problem, we propose a majorize-minimize type strategy which converges to a critical point. 
We demonstrate the benefits of using the non-convex constraint experimentally. For the geometry-only case, we set a new state of the art on two datasets of the commonly used Middlebury multi-view stereo benchmark. Moreover, our general-purpose formulation directly reconstructs thin objects, which are usually treated with specialized algorithms. A qualitative evaluation on the dense semantic 3D reconstruction task shows that we improve significantly over previous methods.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) R 'enyi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. 
The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .", "Sparse representation based classification has led to interesting image recognition results, while the dictionary used for sparse coding plays a key role in it. This paper presents a novel dictionary learning (DL) method to improve the pattern classification performance. Based on the Fisher discrimination criterion, a structured dictionary, whose dictionary atoms have correspondence to the class labels, is learned so that the reconstruction error after sparse coding can be used for pattern classification. Meanwhile, the Fisher discrimination criterion is imposed on the coding coefficients so that they have small within-class scatter but big between-class scatter. A new classification scheme associated with the proposed Fisher discrimination DL (FDDL) method is then presented by using both the discriminative information in the reconstruction error and sparse coding coefficients. The proposed FDDL is extensively evaluated on benchmark image databases in comparison with existing sparse representation and DL based classification methods." ] }
cs0102023
2122926560
Abstract: This note addresses the input and output of intervals in the sense of interval arithmetic and interval constraints. The most obvious, and so far most widely used notation, for intervals has drawbacks that we remedy with a new notation that we propose to call factored notation. It is more compact and allows one to find a good trade-off between interval width and ease of reading. We describe how such a trade-off can be based on the information yield (in the sense of information theory) of the last decimal shown. 1 Introduction Once upon a time, it was a matter of professional ethics among computers never to write a meaningless decimal. Since then computers have become machines and thereby lost any form of ethics, professional or otherwise. The human computers of yore were helped in their ethical behaviour by the fact that it took effort to write spurious decimals. Now the situation is reversed: the lazy way is to use the default precision of the I/O library function. As a result it is common to see fifteen decimals, all but three of which are meaningless. Of course interval arithmetic is not guilty of such negligence. After all, the very raison d'être of the subject is to be explicit about the precision of computed results. Yet, even interval arithmetic is plagued by phoney decimals, albeit in a more subtle way. Just as conventional computation often needs more care in the presentation of computational results, the most obvious interval notation with default precision needs improvement. As a bounded interval has two bounds, say, l and u, the most straightforward notation is something like [l,u]. Written like this, it may not be immediately obvious what is wrong with writing it that way. But when confronted with a real-life consequence
Hansen @cite_3 , @cite_5 , and Kearfott @cite_1 opt for the straightforward @math notation. Hansen mostly presents bounds with few digits, but for instance on page 178 we find @math demonstrating the problems addressed here.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_3" ], "mid": [ "2000931246", "2508647357", "2010193278", "2034680223" ], "abstract": [ "Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k, then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that k ≤ 5.191. This upper bound was improved by to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601.", "Spanners, emulators, and approximate distance oracles can be viewed as lossy compression schemes that represent an unweighted graph metric in small space, say O(n^{1+Δ}) bits. There is an inherent tradeoff between the sparsity parameter Δ and the stretch function f of the compression scheme, but the qualitative nature of this tradeoff has remained a persistent open problem. It has been known for some time that when Δ ≥ 1/3 there are schemes with constant additive stretch (distance d is stretched to at most f(d) = d + O(1)), and recent results of Abboud and Bodwin show that when Δ In this paper we show that the lower bound of Abboud and Bodwin is just the first step in a hierarchy of lower bounds that characterize the asymptotic behavior of the optimal stretch function f for sparsity parameter Δ ∈ (0, 1/3). Specifically, for any integer k ≥ 2, any compression scheme with size [EQUATION] has a sublinear additive stretch function f: f(d) = d + Ω(d^{1−1/k}). This lower bound matches Thorup and Zwick's (2006) construction of sublinear additive emulators. It also shows that Elkin and Peleg's (1 + ϵ, β)-spanners have an essentially optimal tradeoff between Δ, ϵ, and β, and that the sublinear additive spanners of Pettie (2009) and Chechik (2013) are not too far from optimal. To complement these lower bounds we present a new construction of (1 + ϵ, O(k/ϵ)^{k−1})-spanners with size [EQUATION], where hk Our lower bound technique exhibits several interesting degrees of freedom in the framework of Abboud and Bodwin. By carefully exploiting these freedoms, we are able to obtain lower bounds for several related combinatorial objects.
We get lower bounds on the size of (β, ϵ)-hopsets, matching Elkin and Neiman's construction (2016), and lower bounds on shortcutting sets for digraphs that preserve the transitive closure. Our lower bound simplifies Hesse's (2003) refutation of Thorup's conjecture (1992), which stated that adding a linear number of shortcuts suffices to reduce the diameter to polylogarithmic. Finally, we show matching upper and lower bounds for graph compression schemes that work for graph metrics with girth at least 2γ + 1. One consequence is that 's (2010) additive O(γ)-spanners with size [EQUATION] cannot be improved in the exponent.", "Let F_k(n,m) be a random k-SAT formula on n variables formed by selecting uniformly and independently m out of all possible k-clauses. It is well-known that for r ≥ 2^k ln 2, F_k(n,rn) is unsatisfiable with probability 1-o(1). We prove that there exists a sequence t_k = O(k) such that for r ≥ 2^k ln 2 - t_k, F_k(n,rn) is satisfiable with probability 1-o(1). Our technique yields an explicit lower bound for every k which for k > 3 improves upon all previously known bounds. For example, when k=10 our lower bound is 704.94 while the upper bound is 708.94.", "Let G = (V,E) be a weighted undirected graph with |V| = n and |E| = m. An estimate δ̂(u,v) of the distance δ(u,v) in G between u, v ∈ V is said to be of stretch t iff δ(u,v) ≤ δ̂(u,v) ≤ t · δ(u,v). The most efficient algorithms known for computing small stretch distances in G are the approximate distance oracles of [16] and the three algorithms in [9] to compute all-pairs stretch t distances for t = 2, 7/3, and 3. We present faster algorithms for these problems. For any integer k ≥ 1, Thorup and Zwick in [16] gave an O(km n^{1/k}) algorithm to construct a data structure of size O(k n^{1+1/k}) which, given a query (u, v) ∈ V × V, returns in O(k) time, a 2k - 1 stretch estimate of δ(u,v). But for small values of k, the time to construct the oracle is rather high. Here we present an O(n^2 log n) algorithm to construct such a data structure of size O(k n^{1+1/k}) for all integers k ≥ 2. Our query answering time is O(k) for k > 2 and Θ(log n) for k = 2. We use a new generic scheme for all-pairs approximate shortest paths for these results. This scheme also enables us to design faster algorithms for all-pairs t-stretch distances for t = 2 and 7/3, and compute all-pairs almost stretch 2 distances in O(n^2 log n) time." ] }
cs0102023
2122926560
Abstract: This note addresses the input and output of intervals in the sense of interval arithmetic and interval constraints. The most obvious, and so far most widely used notation, for intervals has drawbacks that we remedy with a new notation that we propose to call factored notation. It is more compact and allows one to find a good trade-off between interval width and ease of reading. We describe how such a trade-off can be based on the information yield (in the sense of information theory) of the last decimal shown. 1 Introduction Once upon a time, it was a matter of professional ethics among computers never to write a meaningless decimal. Since then computers have become machines and thereby lost any form of ethics, professional or otherwise. The human computers of yore were helped in their ethical behaviour by the fact that it took effort to write spurious decimals. Now the situation is reversed: the lazy way is to use the default precision of the I/O library function. As a result it is common to see fifteen decimals, all but three of which are meaningless. Of course interval arithmetic is not guilty of such negligence. After all, the very raison d'être of the subject is to be explicit about the precision of computed results. Yet, even interval arithmetic is plagued by phoney decimals, albeit in a more subtle way. Just as conventional computation often needs more care in the presentation of computational results, the most obvious interval notation with default precision needs improvement. As a bounded interval has two bounds, say, l and u, the most straightforward notation is something like [l,u]. Written like this, it may not be immediately obvious what is wrong with writing it that way. But when confronted with a real-life consequence
The standard notation in the Numerica book @cite_2 solves the scanning problem in an interesting way. It uses the idea of the @math notation, but writes instead @math . This variation has the advantage of not introducing new notation. The reason why we still prefer factored notation is clear from the @math example, which, if rewritten as @math , becomes @math . Although it is attractive not to introduce special-purpose notation, there is so much redundancy here that the factored alternative @math seems worth the new notation.
{ "cite_N": [ "@cite_2" ], "mid": [ "2737269238", "2406728828", "1523041988", "2591592591" ], "abstract": [ "We consider document listing on string collections, that is, finding in which strings a given pattern appears. In particular, we focus on repetitive collections: a collection of size @math over alphabet @math is composed of @math copies of a string of size @math , and @math edits are applied on ranges of copies. We introduce the first document listing index with size @math , precisely @math bits, and with useful worst-case time guarantees: Given a pattern of length @math , the index reports the @math strings where it appears in time @math , for any constant @math (and tells in time @math if @math ). Our technique is to augment a range data structure that is commonly used on grammar-based indexes, so that instead of retrieving all the pattern occurrences, it computes useful summaries on them. We show that the idea has independent interest: we introduce the first grammar-based index that, on a text @math with a grammar of size @math , uses @math bits and counts the number of occurrences of a pattern @math in time @math , for any constant @math . We also give the first index using @math bits, where @math is parsed by Lempel-Ziv into @math phrases, counting occurrences in time @math .", "This paper describes our system designed for the NLPCC 2015 shared task on Chinese word segmentation (WS) and POS tagging for Weibo Text. We treat WS and POS tagging as two separate tasks and use a cascaded approach. Our major focus is how to effectively exploit multiple heterogeneous data to boost performance of statistical models. This work considers three sets of heterogeneous data, i.e., Weibo ( @math , 10K sentences), Penn Chinese Treebank 7.0 ( @math , 50K), and People's Daily ( @math , 280K). For WS, we adopt the recently proposed coupled sequence labeling to combine @math , @math , and @math , boosting F1 score from @math baseline model trained on only @math to @math @math .
For POS tagging, we adopt an ensemble approach combining coupled sequence labeling and the guide-feature based method, since the three datasets have three different annotation standards. First, we convert @math into the annotation style of @math based on coupled sequence labeling, denoted by @math . Then, we merge CTB7 and @math to train a POS tagger, denoted by @math , which is further used to produce guide features on @math . Finally, the tagging F1 score is improved from 87.93 to 88.99 (+1.06).", "A traditional counterexample to a linear-time safety property shows the values of all signals at all times prior to the error. However, some signals may not be critical to causing the failure. A succinct explanation may help human understanding as well as speed up algorithms that have to analyze many such traces. In Bounded Model Checking (BMC), a counterexample is constructed from a satisfying assignment to a Boolean formula, typically in CNF. Modern SAT solvers usually assign values to all variables when the input formula is satisfiable. Deriving minimal satisfying assignments from such complete assignments does not lead to concise explanations of counterexamples because of how CNF formulae are derived from the models. Hence, we formulate the extraction of a succinct counterexample as the problem of finding a minimal assignment that, together with the Boolean formula describing the model, implies an objective. We present a two-stage algorithm for this problem, such that the result of each stage contributes to identify the “interesting” events that cause the failure. We demonstrate the effectiveness of our approach with an example and with experimental results.", "This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices.
For any given @math binary matrix @math , the GM-MDS conjecture, proposed by , states that if @math satisfies the so-called MDS condition, then for any field @math of size @math , there exists an @math MDS code whose generator matrix @math , with entries in @math , fits the matrix @math (i.e., @math is the support matrix of @math ). Despite all the attempts by the coding theory community, this conjecture remains still open in general. It was shown, independently by and , that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if @math satisfies the MDS condition, then the determinant of a transform matrix @math , such that @math fits @math , is not identically zero, where @math is a Vandermonde matrix with distinct parameters. In this work, we first reformulate the TM-MDS conjecture in terms of the Wronskian determinant, and then present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. Our proof technique's strength is based primarily on reducing inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ( @math ) of @math is upper bounded by @math . For this class of special cases of @math where the only additional constraint is on @math , only cases with @math were previously proven theoretically, and the previously used proof techniques are not applicable to cases with @math ." ] }
cs0102017
2950044032
Parallel jobs are different from sequential jobs and require a different type of process management. We present here a process management system for parallel programs such as those written using MPI. A primary goal of the system, which we call MPD (for multipurpose daemon), is to be scalable. By this we mean that startup of interactive parallel jobs comprising thousands of processes is quick, that signals can be quickly delivered to processes, and that stdin, stdout, and stderr are managed intuitively. Our primary target is parallel machines made up of clusters of SMPs, but the system is also useful in more tightly integrated environments. We describe how MPD enables much faster startup and better runtime management of parallel jobs. We show how close control of stdio can support the easy implementation of a number of convenient system utilities, even a parallel debugger. We describe a simple but general interface that can be used to separate any process manager from a parallel library, which we use to keep MPD separate from MPICH.
Many systems are intended to manage a collection of computing resources for both single-process and parallel jobs; see the survey by @cite_7 . Typically, these use a daemon that manages individual processes, with emphasis on jobs involving only a single process. Widely used systems include PBS @cite_11 , LSF @cite_0 , DQS @cite_22 , and Loadleveler POE @cite_9 . The Condor system @cite_23 is also widely used and supports parallel programs that use PVM @cite_2 or MPI @cite_8 @cite_12 . More specialized systems, such as MOSIX @cite_13 and GLUnix @cite_1 , provide single-system image support for clusters. Harness @cite_19 @cite_4 shares with MPD the goal of supporting management of parallel jobs. Its primary research goal is to demonstrate the flexibility of the “plug-in” approach to application design, potentially providing a wide range of services. The MPD system focuses more specifically on the design and implementation of services required for process management of parallel jobs, including high-speed startup of large parallel jobs on clusters and scalable standard I/O management. The book @cite_10 provides a good overview of metacomputing systems and issues, and Feitelson @cite_3 surveys support for scheduling parallel processes.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2545968212", "2004261242", "2765129600", "2008170189" ], "abstract": [ "Most high-performance, scientific libraries have adopted hybrid parallelization schemes - such as the popular MPI+OpenMP hybridization - to benefit from the capacities of modern distributed-memory machines. While these approaches have shown to achieve high performance, they require a lot of effort to design and maintain sophisticated synchronization communication strategies. On the other hand, task-based programming paradigms aim at delegating this burden to a runtime system for maximizing productivity. In this article, we assess the potential of task-based fast multipole methods (FMM) on clusters of multicore processors. We propose both a hybrid MPI+task FMM parallelization and a pure task-based parallelization where the MPI communications are implicitly handled by the runtime system. The latter approach yields a very compact code following a sequential task-based programming model. We show that task-based approaches can compete with a hybrid MPI+OpenMP highly optimized code and that furthermore the compact task-based scheme fully matches the performance of the sophisticated, hybrid MPI+task version, ensuring performance while maximizing productivity. We illustrate our discussion with the ScalFMM FMM library and the StarPU runtime system.", "Large scale supercomputing applications typically run on clusters using vendor message passing libraries, limiting the application to the availability of memory and CPU resources on that single machine. 
The ability to run inter-cluster parallel code is attractive since it allows the consolidation of multiple large scale resources for computational simulations not possible on a single machine, and it also allows the conglomeration of small subsets of CPU cores for rapid turnaround, for example, in the case of high-availability computing. MPIg is a grid-enabled implementation of the Message Passing Interface (MPI), extending the MPICH implementation of MPI to use Globus Toolkit services such as resource allocation and authentication. To achieve co-availability of resources, HARC, the Highly-Available Resource Co-allocator, is used. Here we examine two applications using MPIg: LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is used with a replica exchange molecular dynamics approach to enhance binding affinity calculations in HIV drug research, and HemeLB, which is a lattice-Boltzmann solver designed to address fluid flow in geometries such as the human cerebral vascular system. The cross-site scalability of both these applications is tested and compared to single-machine performance. In HemeLB, communication costs are hidden by effectively overlapping non-blocking communication with computation, essentially scaling linearly across multiple sites, and LAMMPS scales almost as well when run between two significantly geographically separated sites as it does at a single site.", "This paper reports our observations from a top-tier supercomputer Titan and its Lustre parallel file stores under production load. In summary, we find that supercomputer file systems are highly variable across the machine at fine time scales. This variability has two major implications. First, stragglers lessen the benefit of coupled I/O parallelism (striping). Peak median output bandwidths are obtained with parallel writes to many independent files, with no striping or write-sharing of files across clients (compute nodes).
I/O parallelism is most effective when the application—or its I/O middleware system—distributes the I/O load so that each client writes separate files on multiple targets, and each target stores files for multiple clients, in a balanced way. Second, our results suggest that the potential benefit of dynamic adaptation is limited. In particular, it is not fruitful to attempt to identify “good spots” in the machine or in the file system: component performance is driven by transient load conditions, and past performance is not a useful predictor of future performance. For example, we do not observe regular diurnal load patterns.", "Parallel computing on volatile distributed resources requires schedulers that consider job and resource characteristics. We study unconventional computing environments containing devices spread throughout a single large organization. The devices are not necessarily typical general purpose machines; instead, they could be processors dedicated to special purpose tasks (for example printing and document processing), but capable of being leveraged for distributed computations. Harvesting their idle cycles can simultaneously help resources cooperate to perform their primary task and enable additional functionality and services. A new burstiness metric characterizes the volatility of the high-priority native tasks. A burstiness-aware scheduling heuristic opportunistically introduces grid jobs (a lower priority workload class) to avoid the higher-priority native applications, and effectively harvests idle cycles. Simulations based on real workload traces indicate that this approach improves makespan by an average of 18.3% over random scheduling, and comes within 7.6% of the theoretical upper bound." ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
Bigrams have been used as features for word sense disambiguation, particularly in the form of collocations where the ambiguous word is one component of the bigram (e.g., @cite_10 , @cite_0 , @cite_9 ). While some of the bigrams we identify are collocations that include the word being disambiguated, there is no requirement that this be the case.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_10" ], "mid": [ "1903115690", "1851555520", "2481930807", "2108325777" ], "abstract": [ "When a trigram backoff language model is created from a large body of text, trigrams and bigrams that occur few times in the training text are often excluded from the model in order to decrease the model size. Generally, the elimination of n-grams with very low counts is believed to not significantly affect model performance. This project investigates the degradation of a trigram backoff model's perplexity and word error rates as bigram and trigram cutoffs are increased. The advantage of reduction in model size is compared to the increase in word error rate and perplexity scores. More importantly, this project also investigates alternative ways of excluding bigrams and trigrams from a backoff language model, using criteria other than the number of times an n-gram occurs in the training text. Specifically, a difference method has been investigated where the difference in the logs of the original and backed off trigram and bigram probabilities is used as a basis for n-gram exclusion from the model. We show that excluding trigrams and bigrams based on a weighted version of this difference method results in better perplexity and word error rate performance than excluding trigrams and bigrams based on counts alone.", "The unavailability of very large corpora with semantically disambiguated words is a major limitation in text processing research. For example, statistical methods for word sense disambiguation of free text are known to achieve high accuracy results when large corpora are available to develop context rules, to train and test them.This paper presents a novel approach to automatically generate arbitrarily large corpora for word senses. 
The method is based on (1) the information provided in WordNet, used to formulate queries consisting of synonyms or definitions of word senses, and (2) the information gathered from the Internet using existing search engines. The method was tested on 120 word senses and a precision of 91% was observed.", "Given an image of a handwritten word, a CNN is employed to estimate its n-gram frequency profile, which is the set of n-grams contained in the word. Frequencies for unigrams, bigrams and trigrams are estimated for the entire word and for parts of it. Canonical Correlation Analysis is then used to match the estimated profile to the true profiles of all words in a large dictionary. The CNN that is used employs several novelties such as the use of multiple fully connected branches. Applied to all commonly used handwriting recognition benchmarks, our method outperforms, by a very large margin, all existing methods.", "In this paper we describe two new objective automatic evaluation methods for machine translation. The first method is based on longest common subsequence between a candidate translation and a set of reference translations. Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring in-sequence n-grams automatically. The second method relaxes strict n-gram matching to skip-bigram matching. Skip-bigram is any pair of words in their sentence order. Skip-bigram cooccurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. The empirical results show that both methods correlate with human judgments very well in both adequacy and fluency."
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
Decision trees have been used in supervised learning approaches to word sense disambiguation, and have fared well in a number of comparative studies (e.g., @cite_2 , @cite_17 ). In the former they were used with the bag of word feature sets and in the latter they were used with a mixed feature set that included the part-of-speech of neighboring words, three collocations, and the morphology of the ambiguous word. We believe that the approach in this paper is the first time that decision trees based strictly on bigram features have been employed.
{ "cite_N": [ "@cite_17", "@cite_2" ], "mid": [ "1489348810", "1756650108", "1974976142", "2098458263" ], "abstract": [ "This paper describes a supervised algorithm for word sensedisambiguation based on hierarchies of decision lists. This algorithmsupports a useful degree of conditional branching while minimizing thetraining data fragmentation typical of decision trees. Classificationsare based on a rich set of collocational, morphological and syntacticcontextual features, extracted automatically from training data andweighted sensitive to the nature of the feature and feature class. Thealgorithm is evaluated comprehensively in the SENSEVAL framework,achieving the top performance of all participating supervised systems onthe 36 test words where training data is available.", "Abstract Objective The aim of this study was to investigate relations among different aspects in supervised word sense disambiguation (WSD; supervised machine learning for disambiguating the sense of a term in a context) and compare supervised WSD in the biomedical domain with that in the general English domain. Methods The study involves three data sets (a biomedical abbreviation data set, a general biomedical term data set, and a general English data set). The authors implemented three machine-learning algorithms, including (1) naive Bayes (NBL) and decision lists (TDLL), (2) their adaptation of decision lists (ODLL), and (3) their mixed supervised learning (MSL). There were six feature representations (various combinations of collocations, bag of words, oriented bag of words, etc.) and five window sizes (2, 4, 6, 8, and 10). Results Supervised WSD is suitable only when there are enough sense-tagged instances with at least a few dozens of instances for each sense. Collocations combined with neighboring words are appropriate selections for the context. 
For terms with unrelated biomedical senses, a large window size such as the whole paragraph should be used, while for general English words a moderate window size between 4 and 10 should be used. The performance of the authors' implementation of decision list classifiers for abbreviations was better than that of traditional decision list classifiers. However, the opposite held for the other two sets. Also, the authors' mixed supervised learning was stable and generally better than others for all sets. Conclusion From this study, it was found that different aspects of supervised WSD depend on each other. The experiment method presented in the study can be used to select the best supervised WSD classifier for each ambiguous term.", "Veronis (2004) has recently proposed an innovative unsupervised algorithm for word sense disambiguation based on small-world graphs called HyperLex. This paper explores two sides of the algorithm. First, we extend Veronis' work by optimizing the free parameters (on a set of words which is different to the target set). Second, given that the empirical comparison among unsupervised systems (and with respect to supervised systems) is seldom made, we used hand-tagged corpora to map the induced senses to a standard lexicon (WordNet) and a publicly available gold standard (Senseval 3 English Lexical Sample). Our results for nouns show that thanks to the optimization of parameters and the mapping method, HyperLex obtains results close to supervised systems using the same kind of bag-of-words features. Given the information loss inherent in any mapping step and the fact that the parameters were tuned for another set of words, these are very interesting results.", "Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. 
However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization." ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
The decision list is a closely related approach that has also been applied to word sense disambiguation (e.g., @cite_6 , @cite_14 , @cite_4 ). Rather than building and traversing a tree to perform disambiguation, a list is employed. In the general case a decision list may suffer from less fragmentation during learning than a decision tree; as a practical matter this means that a decision list is less likely to be over-trained. However, we believe that fragmentation also depends on the feature set used for learning. Ours consists of at most approximately 100 binary features, resulting in a relatively small feature space that is less prone to fragmentation than larger spaces.
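As a rough illustration of disambiguation from binary bigram features, the following minimal sketch trains a Yarowsky-style decision list (a close cousin of the decision trees discussed above). The toy data, the function names, and the log-likelihood scoring with add-0.5 smoothing are all assumptions for illustration, not the implementation evaluated in the paper.

```python
from collections import Counter
from math import log

def bigrams(tokens):
    """Adjacent-word pairs occurring in the context window."""
    return set(zip(tokens, tokens[1:]))

def train_decision_list(instances, smoothing=0.5):
    """instances: list of (tokens, sense).  Returns rules sorted by
    log-likelihood score, each rule being (score, bigram, sense)."""
    counts = {}
    senses = {sense for _, sense in instances}
    for tokens, sense in instances:
        for bg in bigrams(tokens):
            counts.setdefault(bg, Counter())[sense] += 1
    rules = []
    for bg, c in counts.items():
        total = sum(c.values())
        best = c.most_common(1)[0][0]
        p_best = (c[best] + smoothing) / (total + smoothing * len(senses))
        p_rest = (total - c[best] + smoothing) / (total + smoothing * len(senses))
        rules.append((log(p_best / p_rest), bg, best))
    rules.sort(reverse=True)  # most reliable evidence first
    return rules

def classify(rules, tokens, default):
    """First matching rule wins; fall back to a default sense."""
    feats = bigrams(tokens)
    for _, bg, sense in rules:
        if bg in feats:
            return sense
    return default

train = [("the river bank".split(), "shore"),
         ("steep river bank".split(), "shore"),
         ("the money bank".split(), "finance"),
         ("big money bank".split(), "finance")]
rules = train_decision_list(train)
print(classify(rules, "near the river bank".split(), "shore"))    # shore
print(classify(rules, "my money bank account".split(), "shore"))  # finance
```

Each rule tests a single binary bigram feature, so the learned list mirrors the small binary feature space described above.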
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6" ], "mid": [ "1489348810", "1967148170", "2050806103", "1930624869" ], "abstract": [ "This paper describes a supervised algorithm for word sensedisambiguation based on hierarchies of decision lists. This algorithmsupports a useful degree of conditional branching while minimizing thetraining data fragmentation typical of decision trees. Classificationsare based on a rich set of collocational, morphological and syntacticcontextual features, extracted automatically from training data andweighted sensitive to the nature of the feature and feature class. Thealgorithm is evaluated comprehensively in the SENSEVAL framework,achieving the top performance of all participating supervised systems onthe 36 test words where training data is available.", "Decision trees are probably the most popular and commonly used classification model. They are recursively built following a top-down approach (from general concepts to particular examples) by repeated splits of the training dataset. When this dataset contains numerical attributes, binary splits are usually performed by choosing the threshold value which minimizes the impurity measure used as splitting criterion (e.g. C4.5 gain ratio criterion or CART Gini's index). In this paper we propose the use of multi-way splits for continuous attributes in order to reduce the tree complexity without decreasing classification accuracy. This can be done by intertwining a hierarchical clustering algorithm with the usual greedy decision tree learning.", "Decision trees are the commonly applied tools in the task of data stream classification. The most critical point in decision tree construction algorithm is the choice of the splitting attribute. In majority of algorithms existing in literature the splitting criterion is based on statistical bounds derived for split measure functions. In this paper we propose a totally new kind of splitting criterion. 
We derive statistical bounds for arguments of split measure function instead of deriving it for split measure function itself. This approach allows us to properly use the Hoeffding's inequality to obtain the required bounds. Based on this theoretical results we propose the Decision Trees based on the Fractions Approximation algorithm (DTFA). The algorithm exhibits satisfactory results of classification accuracy in numerical experiments. It is also compared with other existing in literature methods, demonstrating noticeably better performance.", "Decision trees are attractive classifiers due to their high execution speed. But trees derived with traditional methods often cannot be grown to arbitrary complexity for possible loss of generalization accuracy on unseen data. The limitation on complexity usually means suboptimal accuracy on training data. Following the principles of stochastic modeling, we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data. The essence of the method is to build multiple trees in randomly selected subspaces of the feature space. Trees in, different subspaces generalize their classification in complementary ways, and their combined classification can be monotonically improved. The validity of the method is demonstrated through experiments on the recognition of handwritten digits." ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
Inversion of functions on sets is done implicitly by every algorithm for solving systems of equations @cite_10 --- in that case the input set just contains a single zero vector. It is mentioned explicitly mostly in the context of computing the solution set of systems of inequalities @cite_32 @cite_37 .
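The set-inversion viewpoint above can be made concrete with a tiny interval-bisection sketch in the spirit of SIVIA-type algorithms. Everything here — the one-dimensional example f(x) = x², the search box, and the function names — is an illustrative assumption, not an implementation from the cited works.

```python
def image(lo, hi):
    """Interval image of [lo, hi] under f(x) = x**2."""
    vals = (lo * lo, hi * hi)
    m = 0.0 if lo <= 0.0 <= hi else min(vals)
    return m, max(vals)

def invert(lo, hi, ylo, yhi, eps=1e-3):
    """Bisection-based inversion of the set [ylo, yhi] under f,
    restricted to the search box [lo, hi].  Returns intervals proved
    to lie inside f^{-1}([ylo, yhi]) and undecided boundary slivers."""
    inside, boundary, stack = [], [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        fl, fh = image(a, b)
        if ylo <= fl and fh <= yhi:
            inside.append((a, b))        # proved inside
        elif fh < ylo or yhi < fl:
            continue                     # proved outside
        elif b - a < eps:
            boundary.append((a, b))      # too small to decide
        else:
            m = 0.5 * (a + b)
            stack.extend([(a, m), (m, b)])
    return inside, boundary

# f^{-1}([1, 4]) within [-3, 3] is [-2, -1] union [1, 2], total length 2
inside, boundary = invert(-3.0, 3.0, 1.0, 4.0)
length = sum(b - a for a, b in inside)
print(round(length, 2))  # close to 2
```

The inner approximation underestimates the true solution set by at most a few multiples of `eps`, which is the usual trade-off in branch-and-prune interval methods.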
{ "cite_N": [ "@cite_37", "@cite_10", "@cite_32" ], "mid": [ "2079397195", "2104375222", "2159964742", "2101779504" ], "abstract": [ "The method of inversion for arbitrary continuous multilayer nets is developed. The inversion is done by computing iteratively an input vector which minimizes the least-mean-square errors to approximate a given output target. This inversion is not unique for given targets and depends on the starting point in input space. The inversion method turns out to be a valuable tool for the examination of multilayer nets (MLNs). Applications of the inversion method to constraint satisfaction, feature detection, and the testing of reliability and performance of MLNs are outlined. It is concluded that recurrent nets and even time-delay nets might be invertible. >", "The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is an ill-posed problem. We present a method for dealing with the inverse problem by using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem, a separable programming (SP) problem, or a linear programming problem according to the architectures of networks to be inverted or the types of network inversions to be computed. An important advantage of the method over the existing iterative inversion algorithm is that various designated network inversions of multilayer perceptrons and radial basis function neural networks can be obtained by solving the corresponding SP problems, which can be solved by a modified simplex method. We present several examples to demonstrate the proposed method and applications of network inversions to examine and improve the generalization performance of trained networks. The results show the effectiveness of the proposed method.", "There are many methods for performing neural network inversion. 
Multi-element evolutionary inversion procedures are capable of finding numerous inversion points simultaneously. Constrained neural network inversion requires that the inversion solution belong to one or more specified constraint sets. In many cases, iterating between the neural network inversion solution and the constraint set can successfully solve constrained inversion problems. This paper surveys existing methodologies for neural network inversion, which is illustrated by its use as a tool in query-based learning, sonar performance analysis, power system security assessment, control, and generation of codebook vectors.", "Dense matrix inversion is a basic procedure in many linear algebra algorithms. A computationally arduous step in most dense matrix inversion methods is the inversion of triangular matrices as produced by factorization methods such as LU decomposition. In this paper, we demonstrate how triangular matrix inversion (TMI) can be accelerated considerably by using commercial Graphics Processing Units (GPU) in a standard PC. Our implementation is based on a divide and conquer type recursive TMI algorithm, efficiently adapted to the GPU architecture. Our implementation obtains a speedup of 34x versus a CPU-based LAPACK reference routine, and runs at up to 54 gigaflops s on a GTX 280 in double precision. Limitations of the algorithm are discussed, and strategies to cope with them are introduced. In addition, we show how inversion of an L- and U-matrix can be performed concurrently on a GTX 295 based dual-GPU system at up to 90 gigaflops s." ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
First-order constraints occur frequently in control, and especially in robust control. Up to now they have either been solved by specialized methods @cite_19 @cite_3 @cite_1 or by applying general solvers like QEPCAD @cite_27 . In the first case one is usually restricted to conditions such as linearity, and in the second case one suffers from the high run-time complexity of computing exact solutions @cite_5 @cite_17 . We know of only one case where general solvers for first-order constraints have been applied to discrete-time systems @cite_26 , but they have frequently been applied to continuous systems @cite_14 @cite_15 @cite_25 . For non-linear discrete-time systems without perturbations or control, interval methods have also proved to be an important tool @cite_20 @cite_13 .
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_1", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2768546550", "2018738327", "1982831910", "2061749308" ], "abstract": [ "First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraint. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step to the primal variable, and the second method is a block update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on the basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods. Their numerical behaviors closely match the established theoretical results.", "When solving the general smooth nonlinear and possibly nonconvex optimization problem involving equality and or inequality constraints, an approximate first-order critical point of accuracy @math can be obtained by a second-order method using cubic regularization in at most @math evaluations of problem functions, the same order bound as in the unconstrained case. This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. 
As a consequence, the presence of (possibly nonconvex) equality inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ( @math ) evaluation-complexity bound for solving general nonconvexly constrained optimization problems.", "In this paper, we consider conic programming problems whose constraints consist of linear equalities, linear inequalities, a nonpolyhedral cone, and a polyhedral cone. A convenient way for solving this class of problems is to apply the directly extended alternating direction method of multipliers (ADMM) to its dual problem, which has been observed to perform well in numerical computations but may diverge in theory. Ideally, one should find a convergent variant which is at least as efficient as the directly extended ADMM in practice. We achieve this goal by designing a convergent semiproximal ADMM (called sPADMM3c for convenience) for convex programming problems having three separable blocks in the objective function with the third part being linear. At each iteration, the proposed sPADMM3c takes one special block coordinate descent (BCD) cycle with the order @math , instead of the usual @math Gauss--Seidel BCD cycle used in the nonconvergent directly extended 3-block ADMM, for updating the variable blocks. Our numerical experiments demonstrate that the convergent method is at least 20 faster than the directly extended ADMM with unit step-length for the vast majority of about 550 large-scale doubly nonnegative semidefinite programming problems with linear equality and or inequality constraints. This confirms that at least for conic convex programming, one can design a convergent and efficient ADMM with a special BCD cycle of updating the variable blocks.", "Many problems in control theory can be formulated as formulae in the first-order theory of real closed fields. 
In this paper we investigate some of the expressive power of this theory. We consider dynamical systems described by polynomial differential equations subjected to constraints on control and system variables and show how to formulate questions in the above framework which can be answered by quantifier elimination. The problems treated in this paper regard stationarity, stability, and following of a polynomially parametrized curve. The software package QEPCAD has been used to solve a number of examples." ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
Apart from the method used in this paper @cite_8 , there have been several successful attempts at solving special cases of first-order constraints, for example using classical interval techniques @cite_36 @cite_21 or constraint satisfaction @cite_0 , very often in the context of robust control @cite_2 @cite_30 @cite_28 @cite_11 .
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_36", "@cite_28", "@cite_21", "@cite_0", "@cite_2", "@cite_11" ], "mid": [ "2768546550", "2018738327", "2126442135", "1982831910" ], "abstract": [ "First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraint. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step to the primal variable, and the second method is a block update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on the basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods. Their numerical behaviors closely match the established theoretical results.", "When solving the general smooth nonlinear and possibly nonconvex optimization problem involving equality and or inequality constraints, an approximate first-order critical point of accuracy @math can be obtained by a second-order method using cubic regularization in at most @math evaluations of problem functions, the same order bound as in the unconstrained case. This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. 
As a consequence, the presence of (possibly nonconvex) equality inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ( @math ) evaluation-complexity bound for solving general nonconvexly constrained optimization problems.", "We present a simple probabilistic algorithm for solving k-SAT and more generally, for solving constraint satisfaction problems (CSP). The algorithm follows a simple local search paradigm (S. , 1992): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not satisfied, by successively choosing a random literal from such a clause and flipping the corresponding bit, try to find a satisfying assignment. If no satisfying assignment is found after O(n) steps, start over again. Our analysis shows that for any satisfiable k-CNF-formula with n variables this process has to be repeated only t times, on the average, to find a satisfying assignment, where t is within a polynomial factor of (2(1-1 k)) sup n . This is the fastest (and also the simplest) algorithm for 3-SAT known up to date. We consider also the more general case of a CSP with n variables, each variable taking at most d values, and constraints of order l, and analyze the complexity of the corresponding (generalized) algorith m. It turns out that any CSP can be solved with complexity at most (d spl middot (1-1 l)+ spl epsiv ) sup n .", "In this paper, we consider conic programming problems whose constraints consist of linear equalities, linear inequalities, a nonpolyhedral cone, and a polyhedral cone. A convenient way for solving this class of problems is to apply the directly extended alternating direction method of multipliers (ADMM) to its dual problem, which has been observed to perform well in numerical computations but may diverge in theory. 
Ideally, one should find a convergent variant which is at least as efficient as the directly extended ADMM in practice. We achieve this goal by designing a convergent semiproximal ADMM (called sPADMM3c for convenience) for convex programming problems having three separable blocks in the objective function with the third part being linear. At each iteration, the proposed sPADMM3c takes one special block coordinate descent (BCD) cycle with the order @math , instead of the usual @math Gauss--Seidel BCD cycle used in the nonconvergent directly extended 3-block ADMM, for updating the variable blocks. Our numerical experiments demonstrate that the convergent method is at least 20 faster than the directly extended ADMM with unit step-length for the vast majority of about 550 large-scale doubly nonnegative semidefinite programming problems with linear equality and or inequality constraints. This confirms that at least for conic convex programming, one can design a convergent and efficient ADMM with a special BCD cycle of updating the variable blocks." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
ii) The ideal gas with the Haldane statistics and the Sutherland-Wu equation. The series @math has an interpretation as the grand partition function of the ideal gas with the Haldane exclusion statistics @cite_16 . The finite @math -system appeared in @cite_16 as the thermal equilibrium condition for the distribution functions of the same system. See also @cite_1 for another interpretation. The one variable case ) also appeared in @cite_26 as the thermal equilibrium condition for the distribution function of the Calogero-Sutherland model. As an application of our second formula in Theorem , we can quickly reproduce the ``cluster expansion formula'' in [Eq. (129)] I , which was originally calculated by the Lagrange inversion formula, as follows: where @math is the solution of ). The Sutherland-Wu equation also plays an important role for the conformal field theory spectra. (See @cite_23 and the references therein.)
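The role of the Lagrange inversion formula mentioned above can be illustrated on a textbook one-variable example. The sketch below solves w = z(1+w)² as a power series by truncated fixed-point iteration and checks the coefficients against the Lagrange inversion formula [zⁿ] w = (1/n) C(2n, n-1). The equation, the truncation order, and the helper names are assumptions for illustration and are unrelated to the precise form of the Sutherland-Wu equation.

```python
from math import comb

N = 6  # truncation order in z

def mul(p, q):
    """Product of two coefficient lists, truncated at order N."""
    r = [0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                r[i + j] += a * b
    return r

def solve_series():
    """Solve w = z*(1+w)^2 by fixed-point iteration; each pass
    fixes one more power-series coefficient."""
    w = [0] * (N + 1)
    for _ in range(N + 1):
        one_plus_w = [1 + w[0]] + w[1:]
        w = [0] + mul(one_plus_w, one_plus_w)[:N]  # multiply by z
    return w

series = solve_series()[1:]
# Lagrange inversion: [z^n] w = (1/n) * C(2n, n-1)
lagrange = [comb(2 * n, n - 1) // n for n in range(1, N + 1)]
print(series)  # [1, 2, 5, 14, 42, 132] -- the Catalan numbers
assert series == lagrange
```

Substituting u = 1 + w turns the toy equation into the Catalan functional equation u = 1 + zu², which is why the Lagrange inversion coefficients here are the Catalan numbers.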
{ "cite_N": [ "@cite_23", "@cite_16", "@cite_1", "@cite_26" ], "mid": [ "2095564275", "2000277525", "2102499404", "2149433831" ], "abstract": [ "We discuss the relationship between the classical Lagrange theorem in mathematics and the quantum statistical mechanics and thermodynamics of an ideal gas of multispecies quasiparticles with mutual fractional exclusion statistics. First, we show that the thermodynamic potential and the density of the system are analytically expressed in terms of the language of generalized cluster expansions, where the cluster coefficients are determined from Wu’s functional relations for describing the distribution functions of mutual fractional exclusion statistics. Second, we generalize the classical Lagrange theorem for inverting the one complex variable functions to that for the multicomplex variable functions. Third, we explicitly obtain all the exact cluster coefficients by applying the generalized Lagrange theorem. @S0163-1829 98!03335-9#", "We derive an exact integral representation for the gr and partition function for an ideal gas with exclusion statistics. Using this we show how the Wu's equation for the exclusion statistics appears in the problem. This can be an alternative proof for the Wu's equation. We also discuss that singularities are related to the existence of a phase transition of the system.", "We study the properties of the conformal blocks of the conformal eld theories with Virasoro or W-extended symmetry. When the conformal blocks contain only second-order degenerate elds, the conformal blocks obey second order dierential equations and they can be interpreted as ground-state wave functions of a trigonometric Calogero-Sutherland Hamiltonian with nontrivial braiding properties. A generalized duality property relates the two types of second order degenerate elds. 
By studying this duality we found that the excited states of the CalogeroSutherland Hamiltonian are characterized by two partitions, or in the case of WAk 1 theories by k partitions. By extending the conformal eld theories under consideration by a u(1) eld, we nd that we can put in correspondence the states in the Hilbert state of the extended CFT with the excited non-polynomial eigenstates of the Calogero-Sutherland Hamiltonian. When the action of the Calogero-Sutherland integrals of motion is translated on the Hilbert space, they become identical to the integrals of motion recently discovered by Alba, Fateev, Litvinov and Tarnopolsky in Liouville theory in the context of the AGT conjecture. Upon bosonisation, these integrals of motion can be expressed as a sum of two, or in generalk, bosonic Calogero-Sutherland Hamiltonian coupled by an interaction term with a triangular structure. For special values of the coupling constant, the conformal blocks can be expressed in terms of Jack polynomials with pairing properties, and they give electron wave functions for special Fractional Quantum Hall states.", "We consider the solution of the stochastic heat equation @TZ D 1 @ 2 X ZZ P W with delta function initial condition Z.T D0;X DiXD0 whose logarithm, with appropriate normalization, is the free energy of the con- tinuum directed polymer, or the Hopf-Cole solution of the Kardar-Parisi-Zhang equation with narrow wedge initial conditions. We obtain explicit formulas for the one-dimensional marginal distributions, the crossover distributions, which interpolate between a standard Gaussian dis- tribution (small time) and the GUE Tracy-Widom distribution (large time). The proof is via a rigorous steepest-descent analysis of the Tracy-Widom formula for the asymmetric simple exclusion process with antishock initial data, which is shown to converge to the continuum equations in an appropriate weakly asymmetric limit. 
The limit also describes the crossover behavior between the symmetric and asymmetric exclusion processes. © 2010 Wiley Periodicals, Inc." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
Below we list the related works on Conjectures and -- mostly chronologically. However, the list is by no means complete. The series @math in ) admits a natural @math -analogue called the fermionic formula. This is another fascinating subject, but we do not cover it here. See @cite_23 @cite_7 @cite_6 and the references therein. It is convenient to refer to the formula ) with the binomial coefficient ) as type I , and to the ones with the binomial coefficient in Remark as type II . (In the context of the -type integrable spin chains, @math and @math represent the numbers of @math -strings and @math -holes of color @math , respectively. Therefore one must demand @math , which implies that the relevant formulae are necessarily of type II.) The manifest expression of the decomposition of @math , such as the one above, is referred to as type III , where @math is the character of the irreducible @math -module @math with highest weight @math . Since there is no essential distinction between these conjectured formulae for @math and @math , we simply refer to both cases as @math below. At this moment, however, the proofs must be given separately for the nonsimply-laced case @cite_34 .
2 @cite_24 . Kerov et al. proposed and proved the type II formula for @math by a combinatorial method, constructing a bijection between the Littlewood–Richardson tableaux and the rigged configurations.
4 @cite_14 . Ogievetsky and Wiegmann proposed the type III formula of @math for some @math for the exceptional algebras, based on the reproduction scheme.
8 @cite_22 . Kleber analyzed the combinatorial structure of the type II formula for the simply-laced algebras. In particular, it was proved that the type III formula of @math and the corresponding type II formula are equivalent for @math and @math .
9 @cite_7 @cite_6 . Hatayama et al. gave a characterization of the type I formula as the solution of the @math -system given by @math -linear combinations of the @math -characters with the property equivalent to the convergence property ). Using it, the equivalence of the type III formula of @math and the type I formula of @math for the classical algebras was shown @cite_7 . In @cite_6 , the type I and type II formulae and the @math -systems for the twisted algebras @math were proposed. The type III formula of @math for @math , @math , @math , @math was also proposed, and its equivalence to the type I formula was shown in a similar way to the untwisted case.
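As a reminder of the shape of the relations in question, the @math -system for a simply-laced algebra can be written as follows. This is a schematic form under standard conventions; normalizations and index ranges vary between the references.

```latex
% Q-system for a simply-laced algebra of rank n (schematic; conventions vary).
% Q^{(a)}_m denotes the (normalized) character of the KR module W^{(a)}_m,
% C = (C_{ab}) is the Cartan matrix, and the product runs over the nodes b
% adjacent to a in the Dynkin diagram.
\[
  \bigl(Q^{(a)}_m\bigr)^2
  \;=\; Q^{(a)}_{m+1}\,Q^{(a)}_{m-1}
  \;+\; \prod_{b \,:\, C_{ab}=-1} Q^{(b)}_m,
  \qquad a = 1,\dots,n,\quad m \ge 1,
\]
```

with the initial condition @math . The characterization above singles out, among all solutions of this recursion, the one with the stated convergence property.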
10 @cite_13 @cite_10 . The second formula in Conjecture was proposed and proved for @math @cite_13 from the formal completeness of the -type Bethe vectors. The same formula was proposed for @math , and the equivalence to the type I formula was proved @cite_10 . The type I formula was formulated in the form ), and the characterization of the type I formula in @cite_7 was simplified to the solution of the @math -system with the convergence property ).
11 @cite_2 . Chari proved the type III formula of @math for @math for any @math for the classical algebras, and for some @math for the exceptional algebras.
{ "cite_N": [ "@cite_2" ], "mid": [ "1554718120", "2023541349", "2069546133", "1835694908" ], "abstract": [ "We give an algebro-combinatorial proof of a general ver­ sion of Pieri's formula following the approach developed by Fomin and Kirillov in the paper \"Quadratic algebras, Dunkl elements, and Schu­ bert calculus.\" We prove several conjectures posed in their paper. As a consequence, a new proof of classical Pieri's formula for cohomol­ ogy of complex flag manifolds, and that of its analogue for quantum cohomology is obtained in this paper.", "An arithmetic formula is multilinear if the polynomial computed by each of its subformulas is multilinear. We prove that any multilinear arithmetic formula for the permanent or the determinant of an n × n matrix is of size super-polynomial in n. Previously, super-polynomial lower bounds were not known (for any explicit function) even for the special case of multilinear formulas of constant depth.", "We consider arithmetic formulas consisting of alternating layers of addition (+) and multiplication (×) gates such that the fanin of all the gates in any fixed layer is the same. Such a formula Φ which additionally has the property that its formal syntactic degree is at most twice the (total) degree of its output polynomial, we refer to as a regular formula. As usual, we allow arbitrary constants from the underlying field F on the incoming edges to a + gate so that a + gate can in fact compute an arbitrary F-linear combination of its inputs. We show that there is an (n2 + 1)-variate polynomial of degree 2n in VNP such that any regular formula computing it must be of size at least nΩ(log n). Along the way, we examine depth four (ΣΠΣΠ) regular formulas wherein all multiplication gates in the layer adjacent to the inputs have fanin a and all multiplication gates in the layer adjacent to the output node have fanin b. We refer to such formulas as ΣΠ[b]ΣΠ[a]-formulas. 
We show that there exists an n2-variate polynomial of degree n in VNP such that any ΣΠ[O(√n)]ΣΠ[√n]-formula computing it must have top fan-in at least 2Ω(√n·log n). In comparison, Tavenas [Tav13] has recently shown that every nO(1)-variate polynomial of degree n in VP admits a ΣΠ[O(√n)]ΣΠ[√n]-formula of top fan-in 2O(√n·log n). This means that any further asymptotic improvement in our lower bound for such formulas (to say 2ω(√n log n)) will imply that VP is different from VNP.", "Let @math be a primitive Hilbert modular form of parallel weight @math and level @math for the totally real field @math , and let @math be a rational prime coprime to @math . If @math is ordinary at @math and @math is a CM extension of @math of relative discriminant @math prime to @math , we give an explicit construction of the @math -adic Rankin-Selberg @math -function @math . When the sign of its functional equation is @math , we show, under the assumption that all primes @math are principal ideals of @math which split in @math , that its central derivative is given by the @math -adic height of a Heegner point on the abelian variety @math associated with @math . This @math -adic Gross--Zagier formula generalises the result obtained by Perrin-Riou when @math and @math satisfies the so-called Heegner condition. We deduce applications to both the @math -adic and the classical Birch and Swinnerton-Dyer conjectures for @math ." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
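For orientation, the Q-system relations referred to above take the following commonly quoted form in the simply-laced case (this display is only the standard special case; the paper treats general, including twisted, types, where the product acquires multiplicities):

```latex
\left(Q^{(a)}_m\right)^2 \;=\; Q^{(a)}_{m+1}\,Q^{(a)}_{m-1} \;+\; \prod_{b \sim a} Q^{(b)}_m ,
\qquad Q^{(a)}_0 = 1 ,
```

where a ranges over the nodes of the Dynkin diagram, m ≥ 1, and b ∼ a denotes adjacency of nodes.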
@cite_18 . Okado constructed bijections between the rigged configurations and the crystals (resp. virtual crystals) corresponding to @math , with @math for @math , for @math and @math (resp. @math ). As a corollary, the type II formula of those @math was proved for @math and @math .
{ "cite_N": [ "@cite_18" ], "mid": [ "2106856555", "2949133410", "2320222014", "2215703121" ], "abstract": [ "We introduce a fermionic formula associated with any quantum affine algebra U_q(X_N^{(r)}). Guided by the interplay between corner transfer matrix and the Bethe ansatz in solvable lattice models, we study several aspects related to representation theory, most crucially, the crystal basis theory. They include one-dimensional sums over both finite and semi-infinite paths, spinon character formulae, Lepowsky–Primc type conjectural formula for vacuum string functions, dilogarithm identities, Q-systems and their solution by characters of various classical subalgebras and so forth. The results expand [HKOTY1] including the twisted cases and more details on inhomogeneous paths consisting of non-perfect crystals. As a most intriguing example, certain inhomogeneous one-dimensional sums conjecturally give rise to branching functions of an integrable G_2^{(1)}-module related to the embedding G_2^{(1)} ↪ B_3^{(1)} ↪ D_4^{(1)}.", "We prove a conjecture of Knutson asserting that the Schubert structure constants of the cohomology ring of a two-step flag variety are equal to the number of puzzles with specified border labels that can be created using a list of eight puzzle pieces. As a consequence, we obtain a puzzle formula for the Gromov-Witten invariants defining the small quantum cohomology ring of a Grassmann variety of type A. The proof of the conjecture proceeds by showing that the puzzle formula defines an associative product on the cohomology ring of the two-step flag variety. It is based on an explicit bijection of gashed puzzles that is analogous to the jeu de taquin algorithm but more complicated.", "We give a log-geometric description of the space of twisted canonical divisors constructed by Farkas--Pandharipande. In particular, we introduce the notion of a principal rubber @math -log-canonical divisor, and we study its moduli space.
It is a proper Deligne--Mumford stack admitting a perfect obstruction theory whose virtual fundamental cycle is of dimension @math . In the so-called strictly meromorphic case with @math , the moduli space is of the expected dimension and the push-forward of its virtual fundamental cycle to the moduli space of stable curves equals the weighted fundamental class of the moduli space of twisted canonical divisors. Conjecturally, it yields a formula of Pixton generalizing the double ramification cycle in the moduli space of stable curves.", "Whereas formal category theory is classically considered within a @math -category, in this paper a double-dimensional approach is taken. More precisely we develop such theory within the setting of hypervirtual double categories, a notion extending that of virtual double category by adding cells with nullary target. [...] After this the notion of weak' Kan extension within a hypervirtual double category is considered, together with three strengthenings. [...] The notion of yoneda embedding is then considered in a hypervirtual double category, and compared to that of a good yoneda structure on a @math -category; the latter in the sense of Street-Walters and Weber. Conditions are given ensuring that a yoneda embedding @math defines @math as the free small cocompletion of @math , in a suitable sense. In the second half we consider formal category theory in the presence of algebraic structures. In detail: to a monad @math on a hypervirtual double category @math several hypervirtual double categories @math of @math -algebras are associated, [...]. This is followed by the study of the creation of, amongst others, left Kan extensions by the forgetful functors @math . The main motivation of this paper is the description of conditions ensuring that yoneda embeddings in @math lift along these forgetful functors, as well as ensuring that such lifted algebraic yoneda embeddings again define free small cocompletions, now in @math . 
As a first example we apply the previous to monoidal structures on categories, hence recovering Day convolution of presheaves and Im-Kelly's result on free monoidal cocompletion, as well as obtaining a \"monoidal Yoneda lemma\"." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input/output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
As mentioned in the introduction, this is one of the few attempts to apply fold/unfold techniques in the field of concurrent languages. In fact, in the literature we find only three papers which are relatively closely related to the present one: Ueda and Furukawa UF88 defined transformation systems for the concurrent logic language GHC @cite_7 , Sahlin Sah95 defined a partial evaluator for AKL, while de Francesco and Santone in DFS96 presented a transformation system for CCS @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_7" ], "mid": [ "1601458080", "1607674807", "2170546552", "1534027338" ], "abstract": [ "Rewriting logic extends to concurrent systems with state changes the body of theory developed within the algebraic semantics approach. It is both a foundational tool and the kernel language of several implementation efforts (Cafe, ELAN, Maude). Tile logic extends (unconditional) rewriting logic since it takes into account state changes with side effects and synchronization. It is especially useful for defining compositional models of computation of reactive systems, coordination languages, mobile calculi, and causal and located concurrent systems. In this paper, the two logics are defined and compared using a recently developed algebraic specification methodology, membership equational logic. Given a theory T, the rewriting logic of T is the free monoidal 2-category, and the tile logic of T is the free monoidal double category, both generated by T. An extended version of monoidal 2-categories, called 2VH-categories, is also defined, able to include in an appropriate sense the structure of monoidal double categories. We show that 2VH-categories correspond to an extended version of rewriting logic, which is able to embed tile logic, and which can be implemented in the basic version of rewriting logic using suitable internal strategies. These strategies can be significantly simpler when the theory is uniform. A uniform theory is provided in the paper for CCS, and it is conjectured that uniform theories exist for most process algebras.", "We study the relationship between Concurrent Separation Logic (CSL) and the assume-guarantee (A-G) method (a.k.a. rely-guarantee method). We show in three steps that CSL can be treated as a specialization of the A-G method for well-synchronized concurrent programs. First, we present an A-G based program logic for a low-level language with built-in locking primitives. 
Then we extend the program logic with explicit separation of \"private data\" and \"shared data\", which provides better memory modularity. Finally, we show that CSL (adapted for the low-level language) can be viewed as a specialization of the extended A-G logic by enforcing the invariant that \"shared resources are well-formed outside of critical regions\". This work can also be viewed as a different approach (from Brookes') to proving the soundness of CSL: our CSL inference rules are proved as lemmas in the A-G based logic, whose soundness is established following the syntactic approach to proving soundness of type systems.", "Mulmuley [Mul12a] recently gave an explicit version of Noether’s Normalization lemma for ring of invariants of matrices under simultaneous conjugation, under the conjecture that there are deterministic black-box algorithms for polynomial identity testing (PIT). He argued that this gives evidence that constructing such algorithms for PIT is beyond current techniques. In this work, we show this is not the case. That is, we improve Mulmuley’s reduction and correspondingly weaken the conjecture regarding PIT needed to give explicit Noether Normalization. We then observe that the weaker conjecture has recently been nearly settled by the authors ([FS12]), who gave quasipolynomial size hitting sets for the class of read-once oblivious algebraic branching programs (ROABPs). This gives the desired explicit Noether Normalization unconditionally, up to quasipolynomial factors. As a consequence of our proof we give a deterministic parallel polynomial-time algorithm for deciding if two matrix tuples have intersecting orbit closures, under simultaneous conjugation. We also study the strength of conjectures that Mulmuley requires to obtain similar results as ours. 
We prove that his conjectures are stronger, in the sense that the computational model he needs PIT algorithms for is equivalent to the well-known algebraic branching program (ABP) model, which is provably stronger than the ROABP model. Finally, we consider the depth-3 diagonal circuit model as defined by Saxena [Sax08], as PIT algorithms for this model also have implications in Mulmuley’s work. Previous work (such as [ASS12] and [FS12]) have given quasipolynomial size hitting sets for this model. In this work, we give a much simpler construction of such hitting sets, using techniques of Shpilka and Volkovich [SV09].", "The paper deals with the relationship of committed-choice logic programming languages and their proof-theoretic semantics based on linear logic. Fragments of linear logic are used in order to express various aspects of guarded clause concurrent programming and behavior of the system. The outlined translation comprises structural properties of concurrent computations, providing a sound and complete model wrt. to the interleaving operational semantics based on transformation systems. In the presence of variables, just asynchronous properties are captured without resorting to special proof-generating strategies, so the model is only correct for deadlock-free programs." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input/output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
The transformation system we are proposing builds on the systems defined in the papers above and can be considered an extension of them. Differently from the previous cases, our system is defined for a generic (concurrent) constraint language. Thus, together with some new transformations such as the distribution, the backward instantiation and the branch elimination, we introduce also specific operations which allow constraint simplification and elimination (though, some constraint simplification is done in @cite_9 as well).
{ "cite_N": [ "@cite_9" ], "mid": [ "2155710590", "2028246200", "1889773849", "2045196404" ], "abstract": [ "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements (hierarchies) in the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.", "This paper presents a general, consistency-based framework for expressing belief change. The framework has good formal properties while being well-suited for implementation. For belief revision, informally, in revising a knowledge base K by a sentence α, we begin with α and include as much of K as consistently possible.
This is done by expressing K and α in disjoint languages, asserting that the languages agree on the truth values of corresponding atoms wherever consistently possible, and then re-expressing the result in the original language of K. There may be more than one way in which the languages of K and α can be so correlated: in choice revision, one such \"extension\" represents the revised state; alternately (skeptical) revision consists of the intersection of all such extensions. Contraction is similarly defined although, interestingly, it is not interdefinable with revision.The framework is general and flexible. For example, one could go on and express other belief change operations such as update and erasure, and the merging of knowledge bases. Further, the framework allows the incorporation of static and dynamic integrity constraints. The approach is well-suited for implementation: belief change can be equivalently expressed in terms of a finite knowledge base; and the scope of a belief change operation can be restricted to just those propositions common to the knowledge base and sentence for change. We give a high-level algorithm implementing the procedure, and an expression of the approach in Default Logic. Lastly, we briefly discuss two implementations of the approach.", "According to a folk theorem, every program can be transformed into a program that produces the same output and only has one loop. We generalize this to a form where the resulting program has one loop and no other branches than the one associated with the loop control. For this branch, branch prediction is easy even for a static branch predictor. If the original program is of length κ, measured in the number of assembly-language instructions, and runs in t(n) time for an input of size n, the transformed program is of length O(κ) and runs in O(κt(n)) time. Normally sorting programs are short, but still κ may be too large for practical purposes. 
Therefore, we provide more efficient hand-tailored heapsort and mergesort programs. Our programs retain most features of the original programs--e.g. they perform the same number of element comparisons--and they induce O(1) branch mispredictions. On computers where branch mispredictions were expensive, some of our programs were, for integer data and small instances, faster than the counterparts in the GNU implementation of the C++ standard library.", "There are a huge number of problems, from various areas, being solved by reducing them to SAT. However, for many applications, translation into SAT is performed by specialized, problem-specific tools. In this paper we describe a new system for uniform solving of a wide class of problems by reducing them to SAT. The system uses a new specification language URSA that combines imperative and declarative programming paradigms. The reduction to SAT is defined precisely by the semantics of the specification language. The domain of the approach is wide (e.g., many NP-complete problems can be simply specified and then solved by the system) and there are problems easily solvable by the proposed system, while they can be hardly solved by using other programming languages or constraint programming systems. So, the system can be seen not only as a tool for solving problems by reducing them to SAT, but also as a general-purpose constraint solving system (for finite domains). In this paper, we also describe an open-source implementation of the described approach. The performed experiments suggest that the system is competitive to state-of-the-art related modelling systems." ] }
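The folk theorem quoted in the third abstract above, that every program can be rewritten as a single loop, can be illustrated with a toy "program counter" transformation. The gcd example and state names below are ours; note that this sketch only demonstrates the one-loop part, while the paper's stronger form also removes the branches that remain inside the loop body here.

```python
# Toy illustration of the "one loop" folk theorem: control flow is
# encoded in an explicit program-counter variable `pc`, so the whole
# program becomes a single while-loop over a small state machine.

def gcd_one_loop(a, b):
    """Euclid's algorithm expressed as a single loop over states."""
    pc, result = "test", None
    while pc != "halt":            # the only loop in the program
        if pc == "test":
            pc = "step" if b != 0 else "done"
        elif pc == "step":
            a, b = b, a % b        # one Euclid step
            pc = "test"
        else:                      # pc == "done"
            result = a
            pc = "halt"
    return result

print(gcd_one_loop(48, 18))  # 6
```

The transformation preserves the running time up to a constant factor proportional to the number of states, matching the O(κt(n)) bound stated in the abstract.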
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
As previously mentioned, differently from our case, in @cite_9 a definition is considered which allows one to remove potentially selectable branches; the consequence is that the resulting transformation system is only partially (thus not totally) correct. We should mention that in @cite_9 two preliminary assumptions on the "scheduling" are made, in such a way that this limitation is actually less constraining than it might appear.
{ "cite_N": [ "@cite_9" ], "mid": [ "2072469765", "2971432649", "1517415100", "1853087932" ], "abstract": [ "Modern dynamically scheduled processors use branch prediction hardware to speculatively fetch and execute most likely executed paths in a program. Complex branch predictors have been proposed which attempt to identify these paths accurately such that the hardware can benefit from out-of-order (OOO) execution. Recent studies have shown that in spite of such complex prediction schemes, there still exist many frequently executed branches which are difficult to predict. Predicated execution has been proposed as an alternative technique to eliminate some of these branches in various forms ranging from a restrictive support to a full-blown support. We call the restrictive form of predicated execution guarded execution. In this paper, we propose a new algorithm which uses profiling and selectively performs if-conversion for architectures with guarded execution support. Branch profiling is used to gather the taken, non-taken and misprediction counts for every branch. This combined with block profiling is used to select paths which suffer from heavy mispredictions and are profitable to if-convert. Effects of three different selection criteria, namely size-based, predictability-based and profile-based, on net cycle improvements, branch mispredictions and mis-speculated instructions are then studied. We also propose new mechanisms to convert unsafe instructions to safe form to enhance the applicability of the technique. Finally, we explain numerous adjustments that were made to the selection criteria to better reflect the OOO processor behavior.", "Abstract In this paper, we present a unifying analysis for redundancy systems with cancel-on-start (c.o.s.) and cancel-on-complete (c.o.c.) with exponentially distributed service requirements. With c.o.s. (c.o.c.) all redundant copies are removed as soon as one of the copies starts (completes) service. As a consequence, c.o.s. does not waste any computing resources, as opposed to c.o.c. We show that the c.o.s. model is equivalent to a queueing system with multi-type jobs and servers, which was analyzed in , (2012), and show that c.o.c. (under the assumption of i.i.d. copies) can be analyzed by a generalization of , (2012) where state-dependent departure rates are permitted. This allows us to show that the stationary distribution for both the c.o.c. and c.o.s. models has a product form. We give a detailed first-time analysis for c.o.s. and derive a closed-form expression for important metrics like the mean number of jobs in the system, and the probability of waiting. We also note that the c.o.s. model is equivalent to the Join-Shortest-Work queue with power of d (JSW(d)). In the latter, an incoming job is dispatched to the server with the smallest workload among d randomly chosen ones. Thus, all our results apply mutatis mutandis to JSW(d). Comparing the performance of c.o.s. with that of c.o.c. with i.i.d. copies gives the unexpected conclusion (since c.o.s. does not waste any resources) that c.o.s. is worse in terms of mean number of jobs. As part of ancillary results, we illustrate that this is primarily due to the assumption of i.i.d. copies in case of c.o.c. (together with exponentially distributed requirements) and that such assumptions might lead to conclusions that are qualitatively different from that observed in practice.", "We study the problem of non-preemptive scheduling to minimize energy consumption for devices that allow dynamic voltage scaling. Specifically, consider a device that can process jobs in a non-preemptive manner. The input consists of (i) the set R of available speeds of the device, (ii) a set J of jobs, and (iii) a precedence constraint Π among J.
Each job j in J, defined by its arrival time aj, deadline dj, and amount of computation cj, is supposed to be processed by the device at a speed in R. Under the assumption that a higher speed means higher energy consumption, the power-saving scheduling problem is to compute a feasible schedule with speed assignment for the jobs in J such that the required energy consumption is minimized. This paper focuses on the setting of weakly dynamic voltage scaling, i.e., speed change is not allowed in the middle of processing a job. To demonstrate that this restriction on many portable power-aware devices introduces hardness to the power-saving scheduling problem, we prove that the problem is NP-hard even if aj = aj′ and dj = dj′ hold for all j, j′ ∈ J and |R| = 2. If |R| < ∞, we also give fully polynomial-time approximation schemes for two cases of the general NP-hard problem: (a) all jobs share a common arrival time, and (b) Π = ∅ and for any j, j′ ∈ J, aj ≤ aj′ implies dj ≤ dj′.", "We investigate the exact solution of the vehicle routing problem with time windows, where multiple trips are allowed for the vehicles. In contrast to previous works in the literature, we specifically consider the case in which it is mandatory to visit all customers and there is no limitation on duration. We develop two branch-and-price frameworks based on two set covering formulations: a traditional one where columns (variables) represent routes, that is, a sequence of consecutive trips, and a second one in which columns are single trips. One important difficulty related to the latter is the way mutual temporal exclusion of trips can be handled. It raises the issue of time discretization when solving the pricing problem. Our dynamic programming algorithm is based on the concept of groups of labels and representative labels.
We provide computational results on modified small-sized instances (25 customers) from Solomon’s benchmarks in order to evaluate and compare the two methods. Results show that some difficult instances are out of reach for the first branch-and-price implementation, while they are consistently solved with the second." ] }
cs0203030
2952475254
We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms. When the paths are known (either given by the adversary or computed as above) our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this paper we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet. Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.
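The abstract above describes computing a route for each packet at injection time via a shortest-path computation in which a link's length reflects its current congestion. A minimal sketch of that idea follows; the exponential-in-load length function is an illustrative assumption (typical of this line of work), not the paper's exact rule, and all names are ours.

```python
import heapq

def shortest_path(graph, length, src, dst):
    """Dijkstra over a directed graph {node: [neighbor, ...]} where the
    length of each link (u, v) is given by the callable length(u, v)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v in graph.get(u, []):
            nd = d + length(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst               # walk the predecessor chain back
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def route_online(graph, requests):
    """Route each (src, dst) request as it arrives: a link's length grows
    exponentially with the load already placed on it, so later packets
    are steered away from congested links."""
    load, paths = {}, []
    for src, dst in requests:
        length = lambda u, v: 2.0 ** load.get((u, v), 0)  # assumed cost rule
        path = shortest_path(graph, length, src, dst)
        for u, v in zip(path, path[1:]):
            load[(u, v)] = load.get((u, v), 0) + 1
        paths.append(path)
    return paths

# Two requests between the same endpoints get spread over parallel routes.
print(route_online({"a": ["b", "c"], "b": ["d"], "c": ["d"]},
                   [("a", "d"), ("a", "d")]))
# [['a', 'b', 'd'], ['a', 'c', 'd']]
```

The point of the exponential length is that a link already carrying many paths looks very expensive, which is what keeps every link below its capacity whenever an admissible set of paths exists.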
The problem of choosing routes for a fixed set of packets was studied by Srinivasan and Teo @cite_5 and Bertsimas and Gamarnik @cite_13 . For example, @cite_5 presents an algorithm that minimizes the congestion and dilation of the routes up to a constant factor. This result complemented the paper of Leighton, Maggs and Rao @cite_1 which showed that packets could be scheduled along a set of paths in time @math congestion @math dilation @math .
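The congestion and dilation referred to above are easy to compute for a fixed set of paths: congestion is the maximum number of paths crossing any single link, dilation is the number of links on the longest path, and any schedule needs at least max(congestion, dilation) steps. A small helper (function names are ours):

```python
from collections import Counter

def congestion_dilation(paths):
    """paths: list of node sequences. Returns (congestion, dilation):
    congestion = max number of paths using any one directed link,
    dilation   = number of links on the longest path."""
    edge_use, dilation = Counter(), 0
    for p in paths:
        edges = list(zip(p, p[1:]))
        dilation = max(dilation, len(edges))
        edge_use.update(edges)
    congestion = max(edge_use.values()) if edge_use else 0
    return congestion, dilation

# Two paths sharing the link (b, c): congestion 2, dilation 2.
print(congestion_dilation([["a", "b", "c"], ["d", "b", "c"]]))  # (2, 2)
```

The Leighton-Maggs-Rao result cited above shows that a schedule of length O(congestion + dilation) always exists, i.e., the trivial lower bound is tight up to a constant factor.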
{ "cite_N": [ "@cite_1", "@cite_5", "@cite_13" ], "mid": [ "2133049312", "2140916841", "2117065758", "2112269231" ], "abstract": [ "We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms.When the paths are known (either given by the adversary or computed as above), our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this article, we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet.Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.", "We present polylogarithmic approximations for the R|prec|Cmax and R|prec|∑jwjCj problems, when the precedence constraints are “treelike” – i.e., when the undirected graph underlying the precedences is a forest. 
We also obtain improved bounds for the weighted completion time and flow time for the case of chains with restricted assignment – this generalizes the job shop problem to these objective functions. We use the same lower bound of “congestion+dilation”, as in other job shop scheduling approaches. The first step in our algorithm for the R|prec|Cmax problem with treelike precedences involves using the algorithm of Lenstra, Shmoys and Tardos to obtain a processor assignment with the congestion + dilation value within a constant factor of the optimal. We then show how to generalize the random delays technique of Leighton, Maggs and Rao to the case of trees. For the weighted completion time, we show a certain type of reduction to the makespan problem, which dovetails well with the lower bound we employ for the makespan problem. For the special case of chains, we show a dependent rounding technique which leads to improved bounds on the weighted completion time and new bicriteria bounds for the flow time.", "This paper considers two inter-related questions: (i) Given a wireless ad-hoc network and a collection of source-destination pairs (s i ,t i ) , what is the maximum throughput capacity of the network, i.e. the rate at which data from the sources to their corresponding destinations can be transferred in the network? (ii) Can network protocols be designed that jointly route the packets and schedule transmissions at rates close to the maximum throughput capacity? Much of the earlier work focused on random instances and proved analytical lower and upper bounds on the maximum throughput capacity. Here, in contrast, we consider arbitrary wireless networks. Further, we study the algorithmic aspects of the above questions: the goal is to design provably good algorithms for arbitrary instances. 
We develop analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity. Motivated by certain popular wireless protocols used in practice, we also explore \"shortest-path like\" path selection strategies which maximize the network throughput. The theoretical results naturally suggest an interesting class of congestion aware link metrics which can be directly plugged into several existing routing protocols such as AODV, DSR, etc. We complement the theoretical analysis with extensive simulations. The results indicate that routes obtained using our congestion aware link metrics consistently yield higher throughput than hop-count based shortest path metrics.", "We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times---the total latency---is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a \"selfishly motivated\" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic.
We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of the routes chosen by unregulated selfish network users may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic." ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted into the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
Coherent integration and proper representation of amalgamated data is extensively studied in the literature (see, e.g., @cite_40 @cite_36 @cite_2 @cite_19 @cite_27 @cite_21 @cite_37 @cite_10 @cite_12 @cite_6 @cite_33 ). Common approaches for dealing with this task are based on techniques of belief revision @cite_21 , methods of resolving contradictions by quantitative considerations (such as "majority vote" @cite_37 ) or qualitative ones (e.g., defining priorities on different sources of information or preferring certain data over other data @cite_24 @cite_34 ), and approaches that are based on rewriting rules for representing the information in a specific form @cite_27 . As in our case, abduction is used for database updating in @cite_23 , and an extended form of abduction is used in @cite_17 @cite_18 to explain modifications in a theory.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_33", "@cite_36", "@cite_21", "@cite_6", "@cite_24", "@cite_19", "@cite_40", "@cite_27", "@cite_23", "@cite_2", "@cite_34", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "176609766", "2048333161", "1549828304", "35390552" ], "abstract": [ "Horn clause logic programming can be extended to include abduction with integrity constraints. In the resulting extension of logic programming, negation by failure can be simulated by making negative conditions abducible and by imposing appropriate denials and disjunctions as integrity constraints. This gives an alternative semantics for negation by failure, which generalises the stable model semantics of negation by failure. The abductive extension of logic programming extends negation by failure in three ways: (1) computation can be performed in alternative minimal models, (2) positive as well as negative conditions can be made abducible, and (3) other integrity constraints can also be accommodated. * This paper was written while the first author was at Imperial College. Introduction The term \"abduction\" was introduced by the philosopher Charles Peirce [1931] to refer to a particular kind of hypothetical reasoning. In the simplest case, it has the form: From A and A ← B infer B as a possible \"explanation\" of A. Abduction has been given prominence in Charniak and McDermott's [1985] \"Introduction to Artificial Intelligence\", where it has been applied to expert systems and story comprehension. Independently, several authors have developed deductive techniques to drive the generation of abductive hypotheses. Cox and Pietrzykowski [1986] construct hypotheses from the \"dead ends\" of linear resolution proofs. Finger and Genesereth [1985] generate \"deductive solutions to design problems\" using the \"residue\" left behind in resolution proofs. Poole, Goebel and Aleliunas [1987] also use linear resolution to generate hypotheses.
All impose the restriction that hypotheses should be consistent with the \"knowledge base\". Abduction is a form of non-monotonic reasoning, because hypotheses which are consistent with one state of a knowledge base may become inconsistent when new knowledge is added. Poole [1988] argues that abduction is preferable to non-monotonic logics for default reasoning. In this view, defaults are hypotheses formulated within classical logic rather than conclusions derived within some form of non-monotonic logic. The similarity between abduction and default reasoning was also pointed out in [Kowalski, 1979]. In this paper we show how abduction can be integrated with logic programming, and we concentrate on the use of abduction to generalise negation by failure. Conditional Answers Compared with Abduction In the simplest case, a logic program consists of a set of Horn Clauses, which are used backward to reduce goals to subgoals. The initial goal is solved when there are no subgoals left;", "Abstract During the process of knowledge acquisition from different experts it is usual that contradictions occur. Therefore strategies are needed for dealing with divergent statements and conflicts. We provide a formal framework to represent, process and combine distributed knowledge. The representation formalism is many-valued logic, which is a widely accepted method for expressing uncertainty, vagueness, contradictions and lack of information. Combining knowledge as proposed here makes use of the bilattice approach, which turns out to be very flexible and suggestive in the context of combining divergent information. We give some guidelines for choosing truth value spaces, assigning truth values and defining global operators to encode integration strategies.", "The process of integrating knowledge coming from different sources has been widely investigated in the literature.
Three distinct conceptual approaches to this problem have been most successful: belief revision, merging and update. In this paper we present a framework that integrates these three approaches. In the proposed framework all three operations can be performed. We provide an example that can only be solved by applying more than one single style of knowledge integration and, therefore, cannot be addressed by any one of the approaches alone. The framework has been implemented, and the examples shown in this paper (as well as other examples from the belief revision literature) have been successfully tested.", "This paper investigates several methods for coping with inconsistency caused by multiple source information by introducing suitable consequence relations capable of inferring non-trivial conclusions from an inconsistent stratified knowledge base. Some of these methods presuppose a revision step, namely a selection of one or several consistent subsets of formulas, and then classical inference is used for inferring from these subsets. Two alternative methods that do not require any revision step are studied: inference based on arguments and a new approach called safely supported inference, where inconsistency is kept local. These two last methods look suitable when the inconsistency is due to the presence of several sources of information. The paper offers a comparative study of the various inference modes under inconsistency." ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted into the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
The use of three-valued logics is also a well-known technique for maintaining incomplete or inconsistent information; such logics are often used for defining fixpoint semantics of incomplete logic programs @cite_32 @cite_3 , and so in principle they can be applied to integrity constraints in an (extended) clause form @cite_11 . Three-valued formalisms such as LFI @cite_0 are also the basis of paraconsistent methods to construct database repairs @cite_8 and are useful in general for pinpointing inconsistencies @cite_7 . As noted above, this is also the role of the three-valued semantics in our case.
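The third truth value discussed above behaves as in Kleene's strong three-valued logic. A minimal sketch of those connectives (a toy encoding for illustration only, not the semantics defined in any of the cited works):

```python
# Toy encoding of Kleene's strong three-valued logic. The third value U
# ("undefined") stands for incomplete or inconsistent information.
T, F, U = 1, 0, None

def k_not(a):
    return None if a is None else 1 - a

def k_and(a, b):
    if a == 0 or b == 0:          # falsity dominates conjunction
        return 0
    if a is None or b is None:    # otherwise any gap makes the result a gap
        return None
    return 1

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))  # defined by De Morgan duality

# A conjunct with a gap is a gap, unless the other conjunct already falsifies it:
assert k_and(T, U) is None
assert k_and(F, U) == F
assert k_or(T, U) == T
```

Note that these connectives are truth-functional, which is exactly the assumption the works above debate when the third value models missing rather than vague information.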
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_32", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "2003531456", "1785931840", "1989393769", "2152131859" ], "abstract": [ "The use of conventional classical logic is misleading for characterizing the behavior of logic programs because a logic program, when queried, will do one of three things: succeed with the query, fail with it, or not respond because it has fallen into infinite backtracking. In [7] Kleene proposed a three-valued logic for use in recursive function theory. The so-called third truth value was really undefined: truth value not determined. This logic is a useful tool in logic-program specification, and in particular, for describing models. (See [11].) Tarski showed that formal languages, like arithmetic, cannot contain their own truth predicate because one could then construct a paradoxical sentence that effectively asserts its own falsehood. Natural languages do allow the use of \"is true\", so by Tarski's argument a semantics for natural language must leave truth-value gaps: some sentences must fail to have a truth value. In [8] Kripke showed how a model having truth-value gaps, using Kleene's three-valued logic, could be specified. The mechanism he used is a familiar one in program semantics: consider the least fixed point of a certain monotone operator. But that operator must be defined on a space involving three-valued logic, and for Kripke's application it will not be continuous. We apply techniques similar to Kripke's to logic programs. We associate with each program a monotone operator on a space of three-valued logic interpretations, or better partial interpretations. This space is not a complete lattice, and the operators are not, in general, continuous. But least and other fixed points do exist. These fixed points are shown to provide suitable three-valued program models. They relate closely to the least and greatest fixed points of the operators used in [1].
Because of the extra machinery involved, our treatment allows for a natural consideration of negation, and indeed, of the other propositional connectives as well. And because of the elaborate structure of fixed points available, we are able to", "The logics of formal inconsistency (LFI’s) are logics that allow one to explicitly formalize the concepts of consistency and inconsistency by means of formulas of their language. Contradictoriness, on the other hand, can always be expressed in any logic, provided its language includes a symbol for negation. Besides being able to represent the distinction between contradiction and inconsistency, LFI’s are non-explosive logics, in the sense that a contradiction does not entail arbitrary statements, but yet are gently explosive, in the sense that, adjoining the additional requirement of consistency, then contradictoriness does cause explosion. Several logics can be seen as LFI’s, among them the great majority of paraconsistent systems developed under the Brazilian and Polish tradition. We present here tableau systems for some important LFI’s: bC, Ci and LFI1.", "In this paper we compare the expressive power of elementary representation formats for vague, incomplete or conflicting information. These include Boolean valuation pairs introduced by Lawry and Gonzalez-Rodriguez, orthopairs of sets of variables, Boolean possibility and necessity measures, three-valued valuations, supervaluations. We make explicit their connections with strong Kleene logic and with Belnap logic of conflicting information. The formal similarities between 3-valued approaches to vagueness and formalisms that handle incomplete information often lead to a confusion between degrees of truth and degrees of uncertainty.
Yet there are important differences that appear at the interpretive level: while truth-functional logics of vagueness are accepted by a part of the scientific community (even if questioned by supervaluationists), the truth-functionality assumption of three-valued calculi for handling incomplete information looks questionable, compared to the non-truth-functional approaches based on Boolean possibility-necessity pairs. This paper aims to clarify the similarities and differences between the two situations. We also study to what extent operations for comparing and merging information items in the form of orthopairs can be expressed by means of operations on valuation pairs, three-valued valuations and underlying possibility distributions. We explore the connections between several representations of imperfect information. In each case we compare the expressive power of these formalisms. In each case we study how to express aggregation operations. We demonstrate the formal similarities among these approaches. We point out the differences in interpretations between these approaches.", "Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable-free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications.
It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built-in integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported." ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted into the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
A closely related topic is the problem of giving consistent query answers in inconsistent databases @cite_26 @cite_15 @cite_27 . The idea is to answer database queries in a consistent way without computing the repairs of the database.
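The repair-based notion of consistency can be illustrated on a toy instance (a minimal sketch that materializes the repairs explicitly, which is exactly what the cited consistent-query-answering techniques manage to avoid):

```python
from itertools import product

# Relation R(key, value) violating the primary-key constraint on `key`:
# two facts disagree on key 1, so the instance has two repairs.
facts = [(1, 'a'), (1, 'b'), (2, 'c')]

# Group conflicting tuples by key; a repair keeps exactly one tuple per key.
groups = {}
for t in facts:
    groups.setdefault(t[0], []).append(t)
repairs = [set(choice) for choice in product(*groups.values())]

def consistent_answer(query):
    """A Boolean query is consistently true iff it holds in every repair."""
    return all(query(r) for r in repairs)

# "Key 2 maps to 'c'" holds in both repairs; "key 1 maps to 'a'" only in one.
assert consistent_answer(lambda r: (2, 'c') in r)
assert not consistent_answer(lambda r: (1, 'a') in r)
```

Since the number of repairs is exponential in the number of conflicting key groups, this brute-force sketch only motivates why rewriting-based methods that skip repair enumeration matter.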
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_26" ], "mid": [ "2062180302", "2077518845", "1551374365", "2019599098" ], "abstract": [ "This article deals with the computation of consistent answers to queries on relational databases that violate primary key constraints. A repair of such an inconsistent database is obtained by selecting a maximal number of tuples from each relation without ever selecting two distinct tuples that agree on the primary key. We are interested in the following problem: Given a Boolean conjunctive query q, compute a Boolean first-order (FO) query φ such that for every database db, φ evaluates to true on db if and only if q evaluates to true on every repair of db. Such a φ is called a consistent FO rewriting of q. We use novel techniques to characterize classes of queries that have a consistent FO rewriting. In this way, we are able to extend previously known classes and discover new ones. Finally, we use an Ehrenfeucht-Fraïssé game to show the non-existence of a consistent FO rewriting for q = ∃x∃y(R(x,y) ∧ R(y,c)), where c is a constant and the first coordinate of R is the primary key.", "In this paper we consider the problem of the logical characterization of the notion of consistent answer in a relational database that may violate given integrity constraints. This notion is captured in terms of the possible repaired versions of the database. A method for computing consistent answers is given and its soundness and completeness (for some classes of constraints and queries) proved. The method is based on an iterative procedure whose termination for several classes of constraints is proved as well.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts.
For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore of the answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our treatment extends naturally to partially incorrect databases. Proceedings of the 22nd VLDB Conference, Mumbai (Bombay), India, 1996", "Research in consistent query answering studies the definition and computation of \"meaningful\" answers to queries posed to inconsistent databases, i.e., databases whose data do not satisfy the integrity constraints (ICs) declared on their schema. Computing consistent answers to conjunctive queries is generally coNP-hard in data complexity, even in the presence of very restricted forms of ICs (single, unary keys).
Recent studies on consistent query answering for database schemas containing only key dependencies have analyzed the possibility of identifying classes of queries whose consistent answers can be obtained by a first-order rewriting of the query, which in turn can be easily formulated in SQL and directly evaluated through any relational DBMS. In this paper we study consistent query answering in the presence of key dependencies and exclusion dependencies. We first prove that even in the presence of only exclusion dependencies the problem is coNP-hard in data complexity, and define a general method for consistent answering of conjunctive queries under key and exclusion dependencies, based on the rewriting of the query in Datalog with negation. Then, we identify a subclass of conjunctive queries that can be first-order rewritten in the presence of key and exclusion dependencies, and define an algorithm for computing the first-order rewriting of a query belonging to such a class of queries. Finally, we compare the relative efficiency of the two methods for processing queries in the subclass above mentioned. Experimental results, conducted on a real and large database of the computer science engineering degrees of the University of Rome \"La Sapienza\", clearly show the computational advantage of the first-order based technique." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Among load-based balancing algorithms, a very common approach is to choose the server with the least reported load from among a set of servers. This approach performs well in a homogeneous system where task allocation is performed by a single centralized entity (dispatcher) that has complete, up-to-date load information @cite_25 @cite_35 . In a system where multiple dispatchers independently allocate tasks, however, this approach has been shown to behave badly, especially if the load information used is stale @cite_28 @cite_46 @cite_13 @cite_47 . Mitzenmacher describes the "herd behavior" that can occur when servers that have reported low load are inundated with requests from dispatchers until new load information is reported @cite_13 .
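The herd effect described above can be sketched with a small simulation (illustrative only; the round structure and parameter values are arbitrary assumptions, not taken from the cited works): when load reports refresh only once per round, every request in a round herds onto the same "least loaded" server.

```python
import random

def simulate(num_servers=4, rounds=30, reqs_per_round=40, policy="least_reported"):
    """Return the peak load imbalance (max load - min load) seen over the run."""
    load = [0.0] * num_servers
    peak_imbalance = 0.0
    for _ in range(rounds):
        reported = list(load)  # stale snapshot: refreshed once per round
        for _ in range(reqs_per_round):
            if policy == "least_reported":
                # Herd: every dispatcher in the round picks the same server,
                # because the reports do not change until the next round.
                s = reported.index(min(reported))
            else:  # "random" baseline ignores load information entirely
                s = random.randrange(num_servers)
            load[s] += 1.0
        peak_imbalance = max(peak_imbalance, max(load) - min(load))
        load = [x * 0.5 for x in load]  # work drains between reports
    return peak_imbalance

random.seed(0)
# Least-reported-load dumps a whole round of requests on one server, so its
# peak imbalance far exceeds that of blind random assignment.
assert simulate(policy="least_reported") > simulate(policy="random")
```

In this toy setup the least-reported policy reaches an imbalance of a full round's worth of requests, which is the oscillation the cited papers observe when staleness is ignored.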
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_28", "@cite_46", "@cite_13", "@cite_25" ], "mid": [ "2109440766", "2120849241", "2154007983", "2963728009" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old.", "Load balancing for distributed servers is a common issue in many applications and has been extensively studied. Several distributed load balancing schemes have been proposed that proactively route individual requests to appropriate servers to best balance the load and shorten request response time. These schemes do not require a centralized load balancer. Instead, each server is responsible for determining, for each request it receives from a client, to which server in the pool the request should be forwarded for processing. 
We propose a new request routing scheme that is more scalable to increasing number of servers and request load than the existing schemes. The method combines random server selection and next-neighbor load sharing techniques that together prevent the staleness of load information from building up when the number of servers increases. Our simulation shows that it outperforms existing schemes under a piggyback-based load update model.", "We consider the problem of load balancing in dynamic distributed systems in cases where new incoming tasks can make use of old information. For example, consider a multiprocessor system where incoming tasks with exponentially distributed service requirements arrive as a Poisson process, the tasks must choose a processor for service, and a task knows when making this choice the processor queue lengths from T seconds ago. What is a good strategy for choosing a processor in order for tasks to minimize their expected time in the system? Such models can also be used to describe settings where there is a transfer delay between the time a task enters a system and the time it reaches a processor for service. Our models are based on considering the behavior of limiting systems where the number of processors goes to infinity. The limiting systems can be shown to accurately describe the behavior of sufficiently large systems and simulations demonstrate that they are reasonably accurate even for systems with a small number of processors. Our studies of specific models demonstrate the importance of using randomness to break symmetry in these systems and yield important rules of thumb for system design. The most significant result is that only small amounts of queue length information can be extremely useful in these settings; for example, having incoming tasks choose the least loaded of two randomly chosen processors is extremely effective over a large range of possible system parameters. 
In contrast, using global information can actually degrade performance unless used carefully; for example, unlike most settings where the load information is current, having tasks go to the apparently least loaded server can significantly hurt performance.", "We consider load balancing in a network of caching servers delivering contents to end users. Randomized load balancing via the so-called power of two choices is a well-known approach in parallel and distributed systems. In this framework, we investigate the tension between storage resources, communication cost, and load balancing performance. To this end, we propose a randomized load balancing scheme which simultaneously considers cache size limitation and proximity in the server redirection process. In contrast to the classical power of two choices setup, since the memory limitation and the proximity constraint cause correlation in the server selection process, we may not benefit from the power of two choices. However, we prove that in certain regimes of problem parameters, our scheme results in the maximum load of order @math (here @math is the network size). This is an exponential improvement compared to the scheme which assigns each request to the nearest available replica. Interestingly, the extra communication cost incurred by our proposed scheme, compared to the nearest replica strategy, is small. Furthermore, our extensive simulations show that the trade-off trend does not depend on the network topology and library popularity profile details." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Dahlin proposes load-interpretation algorithms @cite_1 . These algorithms take into account the age (staleness) of the load information reported by each of a set of distributed homogeneous servers, as well as an estimate of the rate at which new requests arrive at the system as a whole, to determine the server to which a request should be allocated.
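The intuition behind interpreting load by its age can be sketched as follows (a hedged illustration, not the cited paper's exact estimator: the even-spreading correction below is an assumption made for simplicity): a dispatcher inflates each stale report by the load the server has plausibly accrued since the report was produced.

```python
def interpreted_load(reported_load, report_age, arrival_rate, num_servers):
    """Estimate a server's current load from a stale report.

    reported_load: queue length the server last reported
    report_age:    seconds since that report was produced
    arrival_rate:  estimated request arrival rate for the whole system
    num_servers:   homogeneous servers sharing that arrival stream

    Assumes (for illustration) that new arrivals spread evenly, so each
    server accrues arrival_rate / num_servers load per second of staleness.
    """
    return reported_load + report_age * arrival_rate / num_servers

def pick_server(reports, arrival_rate):
    """reports: list of (reported_load, report_age) pairs, one per server."""
    n = len(reports)
    est = [interpreted_load(l, age, arrival_rate, n) for (l, age) in reports]
    return est.index(min(est))

# A lightly loaded but very stale report loses to a fresher, busier one:
reports = [(1, 30.0), (3, 1.0)]  # (reported load, report age in seconds)
assert pick_server(reports, arrival_rate=2.0) == 1
```

Discounting stale reports this way removes the incentive for every dispatcher to herd onto the server whose old report happens to look lightest.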
{ "cite_N": [ "@cite_1" ], "mid": [ "2109440766", "1666061104", "2134659242", "2950554129" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old.", "The model is motivated by the problem of load distribution in large-scale cloud-based data processing systems. We consider a heterogeneous service system, consisting of multiple large server pools. The pools are different in that their servers may have different processing speeds and or different buffer sizes (which may be finite or infinite). We study an asymptotic regime in which the customer arrival rate and pool sizes scale to infinity simultaneously, in proportion to some scaling parameter n. 
Arriving customers are assigned to the servers by a \"router,\" according to a pull-based algorithm, called PULL. Under the algorithm, each server sends a \"pull-message\" to the router, when it becomes idle; the router assigns an arriving customer to a server according to a randomly chosen available pull-message, if there are any, or to a random server, otherwise. Assuming subcritical system load, we prove asymptotic optimality of PULL. Namely, as system scale @math n??, the steady-state probability of an arriving customer experiencing blocking or waiting, vanishes. We also describe some generalizations of the model and PULL algorithm, for which the asymptotic optimality still holds.", "We consider optimal load balancing in a distributed computing environment consisting of homogeneous unreliable processors. Each processor receives its own sequence of tasks from outside users, some of which can be redirected to the other processors. Processing times are independent and identically distributed with an arbitrary distribution. The arrival sequence of outside tasks to each processor may be arbitrary as long as it is independent of the state of the system. Processors may fail, with arbitrary failure and repair processes that are also independent of the state of the system. The only information available to a processor is the history of its decisions for routing work to other processors, and the arrival times of its own arrival sequence. We prove the optimality of the round-robin policy, in which each processor sends all the tasks that can be redirected to each of the other processors in turn. We show that, among all policies that balance workload, round robin stochastically minimizes the nth task completion time for all n, and minimizes response times and queue lengths in a separable increasing convex sense for the entire system. 
We also show that if there is a single centralized controller, round-robin is the optimal policy, and a single controller using round-robin routing is better than the optimal distributed system in which each processor routes its own arrivals. Again \"optimal\" and \"better\" are in the sense of stochastically minimizing task completion times, and minimizing response time and queue lengths in the separable increasing convex sense.", "We consider a basic content distribution scenario consisting of a single origin server connected through a shared bottleneck link to a number of users each equipped with a cache of finite memory. The users issue a sequence of content requests from a set of popular files, and the goal is to operate the caches as well as the server such that these requests are satisfied with the minimum number of bits sent over the shared link. Assuming a basic Markov model for renewing the set of popular files, we characterize approximately the optimal long-term average rate of the shared link. We further prove that the optimal online scheme has approximately the same performance as the optimal offline scheme, in which the cache contents can be updated based on the entire set of popular files before each new request. To support these theoretical results, we propose an online coded caching scheme termed coded least-recently sent (LRS) and simulate it for a demand time series derived from the dataset made available by Netflix for the Netflix Prize. For this time series, we show that the proposed coded LRS algorithm significantly outperforms the popular least-recently used (LRU) caching algorithm." ] }
@cite_16 propose an algorithm that first randomly selects @math servers. The algorithm then weighs the servers by their load information and chooses a server with probability inversely proportional to the load reported by that server. When @math , where @math is the total number of servers, the algorithm is shown to perform better than previous load-based algorithms, and for this reason we focus on it in this paper.
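A minimal sketch of such inverse-load weighted selection follows. The function name and the +1 smoothing (which avoids division by zero for idle servers) are our assumptions, not details from @cite_16.

```python
import random

def inverse_load_choice(servers, loads, k):
    """Sample k servers uniformly at random, then pick one of them
    with probability inversely proportional to its reported load.

    servers: list of server ids
    loads:   dict server id -> reported load
    """
    sample = random.sample(servers, k)
    # +1 smoothing so an idle server (load 0) gets a finite weight.
    weights = [1.0 / (loads[s] + 1) for s in sample]
    return random.choices(sample, weights=weights, k=1)[0]
```

With k equal to the total number of servers, a lightly loaded server dominates the selection; with small k, the scheme trades selection quality for fewer load probes.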
{ "cite_N": [ "@cite_16" ], "mid": [ "2109440766", "1666061104", "2950469527", "2950958145" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old.", "The model is motivated by the problem of load distribution in large-scale cloud-based data processing systems. We consider a heterogeneous service system, consisting of multiple large server pools. The pools are different in that their servers may have different processing speeds and or different buffer sizes (which may be finite or infinite). We study an asymptotic regime in which the customer arrival rate and pool sizes scale to infinity simultaneously, in proportion to some scaling parameter n. 
Arriving customers are assigned to the servers by a \"router,\" according to a pull-based algorithm, called PULL. Under the algorithm, each server sends a \"pull-message\" to the router, when it becomes idle; the router assigns an arriving customer to a server according to a randomly chosen available pull-message, if there are any, or to a random server, otherwise. Assuming subcritical system load, we prove asymptotic optimality of PULL. Namely, as system scale @math n??, the steady-state probability of an arriving customer experiencing blocking or waiting, vanishes. We also describe some generalizations of the model and PULL algorithm, for which the asymptotic optimality still holds.", "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erd o s-R 'enyi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. 
This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math .", "We consider the problem of selecting the best subset of exactly @math columns from an @math matrix @math . We present and analyze a novel two-stage algorithm that runs in @math time and returns as output an @math matrix @math consisting of exactly @math columns of @math . In the first (randomized) stage, the algorithm randomly selects @math columns according to a judiciously-chosen probability distribution that depends on information in the top- @math right singular subspace of @math . In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly @math columns from the set of columns selected in the first stage. Let @math be the @math matrix containing those @math columns, let @math denote the projection matrix onto the span of those columns, and let @math denote the best rank- @math approximation to the matrix @math . Then, we prove that, with probability at least 0.8, @math This Frobenius norm bound is only a factor of @math worse than the best previously existing existential result and is roughly @math better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, @math This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. 
Our bound depends on @math , whereas previous results depend on @math ; if these two quantities are comparable, then our bound is asymptotically worse by a @math factor." ] }
Another approach is to exclude servers that exceed some utilization threshold and to choose from the remaining servers. @cite_6 and @cite_47 classify machines as lightly or heavily utilized and then choose randomly from the lightly utilized servers; this work focuses on local-area distributed systems. The authors of @cite_2 use this approach to enhance round-robin DNS load balancing across a set of widely distributed heterogeneous web servers. Specifically, when a web server surpasses a utilization threshold, it sends an alarm signal to the DNS system indicating that it is out of commission. The server is then excluded from DNS resolution until it sends another signal indicating that it is below the threshold and free to service requests again. In that work, the maximum capacities of the most capable servers are at most three times those of the least capable servers.
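The threshold-exclusion idea can be sketched as follows. The default threshold value and the fallback to the full pool when every server is overloaded are illustrative assumptions, not details from the cited systems.

```python
import random

def pick_lightly_loaded(utilization, threshold=0.7):
    """Exclude servers at or above a utilization threshold and choose
    uniformly at random among the rest.

    utilization: dict server id -> utilization in [0, 1]
    Falls back to the full pool if every server is above threshold,
    so a request is never left unassigned.
    """
    light = [s for s, u in utilization.items() if u < threshold]
    pool = light or list(utilization)
    return random.choice(pool)
```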
{ "cite_N": [ "@cite_47", "@cite_6", "@cite_2" ], "mid": [ "2109440766", "1597560875", "2786117165", "1747723070" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old.", "Energy management for servers is now necessary for technical, financial, and environmental reasons. This paper describes three policies designed to reduce energy consumption in Web servers. The policies employ two power management mechanisms: dynamic voltage scaling (DVS), an existing mechanism, and request batching, a new mechanism introduced in this paper. The first policy uses DVS in isolation, except that we extend recently introduced task-based DVS policies for use in server environments with many concurrent tasks. 
The second policy uses request batching to conserve energy during periods of low workload intensity. The third policy uses both DVS and request batching mechanisms to reduce processor energy usage over a wide range of workload intensities. All the policies trade off system responsiveness to save energy. However, the policies employ the mechanisms in a feedback-driven control framework in order to conserve energy while maintaining a given quality of service level, as defined by a percentile-level response time. We evaluate the policies using Salsa, a web server simulator that has been extensively validated for both energy and response time against measurements from a commodity web server. Three daylong static web workloads from real web server systems are used to quantify the energy savings: the Nagano Olympics98 web server, a financial services company web site, and a disk intensive web workload. Our results show that when required to maintain a 90th-percentile response time of 50ms, the DVS and request batching policies save from 8.7 to 38 and from 3.1 to 27 respectively of the CPU energy used by the base system. The two polices provide these savings for complementary workload intensities. The combined policy is effective for all three workloads across a broad range of intensities, saving from 17 to 42 of the CPU energy.", "Motivated by distributed schedulers that combine the power-of-d-choices with late binding and systems that use replication with cancellation-on-start, we study the performance of the LL(d) policy which assigns a job to a server that currently has the least workload among d randomly selected servers in large-scale homogeneous clusters. We consider general job size distributions and propose a partial integro-differential equation to describe the evolution of the system. 
This equation relies on the earlier proven ansatz for LL(d) which asserts that the workload distribution of any finite set of queues becomes independent of one another as the number of servers tends to infinity. Based on this equation we propose a fixed point iteration for the limiting workload distribution and study its convergence.", "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, cabled adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers." ] }
Another well-studied load-balancing cluster approach is to have heavily loaded servers hand off requests they receive to other, less loaded servers within the cluster, or to have lightly loaded servers attempt to pull tasks from heavily loaded servers (e.g., @cite_9 @cite_10 ). This can be achieved through techniques such as HTTP redirection (e.g., @cite_32 @cite_31 @cite_36 ), packet header rewriting (e.g., @cite_24 ), or remote script execution @cite_43 . HTTP redirection adds an additional client round trip of latency for every rescheduled request. TCP/IP hand-off and packet header rewriting require changes to the OS kernel or network interface drivers. Remote script execution requires trust between the serving entities.
{ "cite_N": [ "@cite_36", "@cite_9", "@cite_32", "@cite_24", "@cite_43", "@cite_31", "@cite_10" ], "mid": [ "2151744612", "2759636699", "2159285576", "2120849241" ], "abstract": [ "Users of highly popular Web sites may experience long delays when accessing information. Upgrading content site infrastructure from a single node to a locally distributed Web cluster composed by multiple server nodes provides limited relief, because the cluster wide-area connectivity may become the bottleneck. A better solution is to distribute Web clusters over the Internet by placing content nodes in strategic locations. A geographically distributed architecture where the Domain Name System (DNS) servers evaluate network proximity and users are served from the closest cluster reduces network impact on response time. On the other hand, serving closest requests only may cause unbalanced servers and may increase system impact on response time. To achieve a scalable Web system, we propose to integrate DNS proximity scheduling with an HTTP request redirection mechanism that any Web server can activate. We demonstrate through simulation experiments that this further dispatching mechanism augments the percentage of requests with guaranteed response time, thereby enhancing the Quality of Service of geographically distributed Web sites. However, HTTP request redirection should be used selectively because the additional round-trip increases network impact on latency time experienced by users. As a further contribution, this paper proposes and compares various mechanisms to limit reassignments with no negative consequences on load balancing.", "Load balancing plays an important role in improving scalability and stability in Content Delivery Networks (CDNs) to meet the increasing demand on bandwidth. This paper proposed a modified algorithm that takes into account the equilibrium between load balancing and redirection proximity. 
We extended a fluid queue model which is adopted in the existing literatures to the overall CDN system. In the system, scheduler selects proper replica server for each redistributed request by exploiting load differences between them. Furthermore, through limiting the migration distance for each request, the total costs mainly associated with delay are also effectively optimized. The simulation result indicates that the proposed algorithm can efficiently reduce redirection cost compared to Control-Law Balancing (CLB) algorithm at the expense of a bit of performance sacrifice of queue balancing. Besides, we found that the proposed mechanism has more benefit on queue balancing than CLB algorithm as well, when selecting an appropriate distance threshold.", "Replication of information among multiple World Wide Web servers is necessary to support high request rates to popular Web sites. A clustered Web server organization is preferable to multiple independent mirrored servers because it maintains a single interface to the users and has the potential to be more scalable, fault-tolerant and better load-balanced. In this paper, we propose a Web cluster architecture in which the Domain Name System (DNS) server, which dispatches the user requests among the servers through the URL name to the IP address mapping mechanism, is integrated with a redirection request mechanism based on HTTP. This should alleviate the side-effect of caching the IP address mapping at intermediate name servers. We compare many alternative mechanisms, including synchronous vs. asynchronous activation and centralized vs. distributed decisions on redirection. Moreover, we analyze the reassignment of entire domains or individual client requests, different types of status information and different server selection policies for redirecting requests. 
Our results show that the combination of centralized and distributed dispatching policies allows the Web server cluster to handle high load skews in the WWW environment.", "Load balancing for distributed servers is a common issue in many applications and has been extensively studied. Several distributed load balancing schemes have been proposed that proactively route individual requests to appropriate servers to best balance the load and shorten request response time. These schemes do not require a centralized load balancer. Instead, each server is responsible for determining, for each request it receives from a client, to which server in the pool the request should be forwarded for processing. We propose a new request routing scheme that is more scalable to increasing number of servers and request load than the existing schemes. The method combines random server selection and next-neighbor load sharing techniques that together prevent the staleness of load information from building up when the number of servers increases. Our simulation shows that it outperforms existing schemes under a piggyback-based load update model." ] }
A lot of work has looked at balancing load across multi-server homogeneous web sites by leveraging the DNS service used to map a web page's URL to the IP address of a web server serving that URL. Round-robin DNS was proposed, in which the DNS system maps requests to web servers in a round-robin fashion @cite_22 @cite_14 . Because DNS mappings have a time-to-live (TTL) field associated with them and tend to be cached at the local name server in each domain, this approach can cause a large number of client requests from a particular domain to be mapped to the same web server during the TTL period. Thus, round-robin DNS achieves good balance only as long as each domain has the same client request rate. Moreover, round-robin load balancing does not work in a heterogeneous peer-to-peer context, because each serving replica receives a uniform rate of requests regardless of whether it can handle that rate. Work that takes domain request rates into account improves upon round-robin DNS and is described in @cite_34 .
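Plain round-robin DNS resolution can be sketched in a few lines. This is an in-memory toy resolver for illustration; a real DNS server achieves the same effect by rotating the address list it returns in successive responses.

```python
import itertools

def round_robin_dns(servers):
    """Return a resolver that cycles through server addresses,
    mimicking round-robin DNS mapping of a name to IPs."""
    cycle = itertools.cycle(servers)
    return lambda: next(cycle)
```

The caching problem described above is visible in this model: if a domain's local name server caches one returned address for the TTL period, every client in that domain hits the same server until the entry expires.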
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_34" ], "mid": [ "1747723070", "2159285576", "2151744612", "2168282297" ], "abstract": [ "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, cabled adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers.", "Replication of information among multiple World Wide Web servers is necessary to support high request rates to popular Web sites. A clustered Web server organization is preferable to multiple independent mirrored servers because it maintains a single interface to the users and has the potential to be more scalable, fault-tolerant and better load-balanced. In this paper, we propose a Web cluster architecture in which the Domain Name System (DNS) server, which dispatches the user requests among the servers through the URL name to the IP address mapping mechanism, is integrated with a redirection request mechanism based on HTTP. This should alleviate the side-effect of caching the IP address mapping at intermediate name servers. We compare many alternative mechanisms, including synchronous vs. 
asynchronous activation and centralized vs. distributed decisions on redirection. Moreover, we analyze the reassignment of entire domains or individual client requests, different types of status information and different server selection policies for redirecting requests. Our results show that the combination of centralized and distributed dispatching policies allows the Web server cluster to handle high load skews in the WWW environment.", "Users of highly popular Web sites may experience long delays when accessing information. Upgrading content site infrastructure from a single node to a locally distributed Web cluster composed by multiple server nodes provides limited relief, because the cluster wide-area connectivity may become the bottleneck. A better solution is to distribute Web clusters over the Internet by placing content nodes in strategic locations. A geographically distributed architecture where the Domain Name System (DNS) servers evaluate network proximity and users are served from the closest cluster reduces network impact on response time. On the other hand, serving closest requests only may cause unbalanced servers and may increase system impact on response time. To achieve a scalable Web system, we propose to integrate DNS proximity scheduling with an HTTP request redirection mechanism that any Web server can activate. We demonstrate through simulation experiments that this further dispatching mechanism augments the percentage of requests with guaranteed response time, thereby enhancing the Quality of Service of geographically distributed Web sites. However, HTTP request redirection should be used selectively because the additional round-trip increases network impact on latency time experienced by users. 
As a further contribution, this paper proposes and compares various mechanisms to limit reassignments with no negative consequences on load balancing.", "This paper presents a way of modeling the hit rates of caches that use a time-to-live (TTL)-based consistency policy. TTL-based consistency, as exemplified by DNS and Web caches, is a policy in which a data item, once retrieved, remains valid for a period known as the \"time-to-live\". Cache systems using large TTL periods are known to have high hit rates and scale well, but the effects of using shorter TTL periods are not well understood. We model hit rate as a function of request arrival times and the choice of TTL, enabling us to better understand cache behavior for shorter TTL periods. Our formula for the hit rate is closed form and relies upon a simplifying assumption about the interarrival times of requests for the data item in question: that these requests can be modeled as a sequence of independent and identically distributed random variables. Analyzing extensive DNS traces, we find that the results of the formula match observed statistics surprisingly well; in particular, the analysis is able to adequately explain the somewhat counterintuitive empirical finding of that the cache hit rate for DNS accesses rapidly increases as a function of TTL, exceeding 80 for a TTL of 15 minutes." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
This work is later extended to balance load across a set of widely distributed heterogeneous web servers @cite_2 . The extension proposes the use of adaptive TTLs, where the TTL for a DNS mapping is set inversely proportional to the domain's local client request rate for the mapping of interest (as reported by the domain's local name server). At the same time, the TTL is set proportional to the chosen web server's maximum capacity. So web servers with high maximum capacity will have DNS mappings with longer TTLs, and domains with low request rates will receive mappings with longer TTLs. Max-Cap, the algorithm proposed in this thesis, also uses the maximum capacities of the serving replica nodes to allocate requests proportionally. The main difference is that in that work the root DNS scheduler acts as a centralized dispatcher that sets all DNS mappings and is assumed to know the request rate in the requesting domain. In the peer-to-peer case the authority node knows neither the request rate throughout the network nor how large the set of requesting nodes is.
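The adaptive-TTL rule described above can be sketched in a few lines. This is a hypothetical scaling only: the cited work does not prescribe this exact formula, and the function name, base TTL, and normalization to the maxima are our own illustration.

```python
def adaptive_ttl(base_ttl, server_capacity, max_capacity, domain_rate, max_rate):
    """TTL for a DNS mapping: proportional to the chosen server's
    maximum capacity, inversely proportional to the requesting
    domain's local request rate (both normalized to their maxima).
    Illustrative formula only, not taken from the cited paper."""
    return base_ttl * (server_capacity / max_capacity) * (max_rate / domain_rate)

# A full-capacity server serving a low-rate domain gets a long TTL;
# a half-capacity server serving the busiest domain gets a short one.
long_ttl = adaptive_ttl(60, 100, 100, 1, 10)   # 60 * 1.0 * 10  = 600.0
short_ttl = adaptive_ttl(60, 50, 100, 10, 10)  # 60 * 0.5 * 1.0 = 30.0
```

The direction of each factor is what matters: higher capacity lengthens the TTL (more requests stick to that mapping), higher local request rate shortens it (the mapping is refreshed, and rebalanced, more often).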
{ "cite_N": [ "@cite_2" ], "mid": [ "1747723070", "2168282297", "2159285576", "2157357672" ], "abstract": [ "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, called adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers.", "This paper presents a way of modeling the hit rates of caches that use a time-to-live (TTL)-based consistency policy. TTL-based consistency, as exemplified by DNS and Web caches, is a policy in which a data item, once retrieved, remains valid for a period known as the \"time-to-live\". Cache systems using large TTL periods are known to have high hit rates and scale well, but the effects of using shorter TTL periods are not well understood. We model hit rate as a function of request arrival times and the choice of TTL, enabling us to better understand cache behavior for shorter TTL periods.
Our formula for the hit rate is closed form and relies upon a simplifying assumption about the interarrival times of requests for the data item in question: that these requests can be modeled as a sequence of independent and identically distributed random variables. Analyzing extensive DNS traces, we find that the results of the formula match observed statistics surprisingly well; in particular, the analysis is able to adequately explain the somewhat counterintuitive empirical finding that the cache hit rate for DNS accesses rapidly increases as a function of TTL, exceeding 80% for a TTL of 15 minutes.", "Replication of information among multiple World Wide Web servers is necessary to support high request rates to popular Web sites. A clustered Web server organization is preferable to multiple independent mirrored servers because it maintains a single interface to the users and has the potential to be more scalable, fault-tolerant and better load-balanced. In this paper, we propose a Web cluster architecture in which the Domain Name System (DNS) server, which dispatches the user requests among the servers through the URL name to the IP address mapping mechanism, is integrated with a redirection request mechanism based on HTTP. This should alleviate the side-effect of caching the IP address mapping at intermediate name servers. We compare many alternative mechanisms, including synchronous vs. asynchronous activation and centralized vs. distributed decisions on redirection. Moreover, we analyze the reassignment of entire domains or individual client requests, different types of status information and different server selection policies for redirecting requests. Our results show that the combination of centralized and distributed dispatching policies allows the Web server cluster to handle high load skews in the WWW environment.", "Geo-replicated services need an effective way to direct client requests to a particular location, based on performance, load, and cost.
This paper presents DONAR, a distributed system that can offload the burden of replica selection, while providing these services with a sufficiently expressive interface for specifying mapping policies. Most existing approaches for replica selection rely on either central coordination (which has reliability, security, and scalability limitations) or distributed heuristics (which lead to suboptimal request distributions, or even instability). In contrast, the distributed mapping nodes in DONAR run a simple, efficient algorithm to coordinate their replica-selection decisions for clients. The protocol solves an optimization problem that jointly considers both client performance and server load, allowing us to show that the distributed algorithm is stable and effective. Experiments with our DONAR prototype--providing replica selection for CoralCDN and the Measurement Lab--demonstrate that our algorithm performs well \"in the wild.\" Our prototype supports DNS- and HTTP-based redirection, IP anycast, and a secure update protocol, and can handle many customer services with diverse policy objectives." ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Lottery scheduling is another technique that, like Max-Cap, uses proportional allocation. This approach has been proposed in the context of resource allocation within an operating system (the Mach microkernel) @cite_23 . Client processes hold tickets that give them access to particular resources in the operating system. Clients are allocated resources by a centralized lottery scheduler proportionally to the number of tickets they own and can donate their tickets to other clients in exchange for tickets at a later point. Max-Cap is similar in that it allocates requests to a replica node proportionally to the maximum capacity of the replica node. The main difference is that in Max-Cap the allocation decision is completely distributed with no opportunity for exchange of resources across replica nodes.
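The lottery draw itself reduces to a proportional random pick. The sketch below shows only the proportional-win property; the currency abstraction and ticket transfers of the cited mechanism are omitted, and all names here are ours.

```python
import random

def lottery_draw(tickets, rng):
    """Pick a winning client with probability proportional to the
    number of tickets it holds, as in lottery scheduling (toy version:
    no currencies, no ticket transfers)."""
    total = sum(tickets.values())
    winner = rng.randrange(total)
    for client, n in tickets.items():
        if winner < n:
            return client
        winner -= n

rng = random.Random(42)
tickets = {"A": 1, "B": 3}  # B holds three times as many tickets as A
wins = {"A": 0, "B": 0}
for _ in range(4000):
    wins[lottery_draw(tickets, rng)] += 1
# Over many draws, B wins roughly three times as often as A.
```

Max-Cap's proportional allocation is the same arithmetic with maximum capacities in place of ticket counts, except that each node runs the computation locally instead of a central scheduler.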
{ "cite_N": [ "@cite_23" ], "mid": [ "2111087562", "2340819442", "2593819652", "2036318658" ], "abstract": [ "This paper presents lottery scheduling, a novel randomized resource allocation mechanism. Lottery scheduling provides efficient, responsive control over the relative execution rates of computations. Such control is beyond the capabilities of conventional schedulers, and is desirable in systems that service requests of varying importance, such as databases, media-based applications, and networks. Lottery scheduling also supports modular resource management by enabling concurrent modules to insulate their resource allocation policies from one another. A currency abstraction is introduced to flexibly name, share, and protect resource rights. We also show that lottery scheduling can be generalized to manage many diverse resources, such as I/O bandwidth, memory, and access to locks. We have implemented a prototype lottery scheduler for the Mach 3.0 microkernel, and found that it provides flexible and responsive control over the relative execution rates of a wide range of applications. The overhead imposed by our unoptimized prototype is comparable to that of the standard Mach timesharing policy.", "We consider Max-min Share (MmS) allocations of items both in the case where items are goods (positive utility) and when they are chores (negative utility). We show that fair allocations of goods and chores have some fundamental connections but differences as well. We prove that like in the case for goods, an MmS allocation does not need to exist for chores and computing an MmS allocation - if it exists - is strongly NP-hard. In view of these non-existence and complexity results, we present a polynomial-time 2-approximation algorithm for MmS fairness for chores. We then introduce a new fairness concept called optimal MmS that represents the best possible allocation in terms of MmS that is guaranteed to exist.
For both goods and chores, we use connections to parallel machine scheduling to give (1) an exponential-time exact algorithm and (2) a polynomial-time approximation scheme for computing an optimal MmS allocation when the number of agents is fixed.", "We consider Max-min Share (MmS) fair allocations of indivisible chores (items with negative utilities). We show that allocation of chores and classical allocation of goods (items with positive utilities) have some fundamental connections but also differences which prevent a straightforward application of algorithms for goods in the chores setting and vice versa. We prove that an MmS allocation does not need to exist for chores and computing an MmS allocation - if it exists - is strongly NP-hard. In view of these non-existence and complexity results, we present a polynomial-time 2-approximation algorithm for MmS fairness for chores. We then introduce a new fairness concept called optimal MmS that represents the best possible allocation in terms of MmS that is guaranteed to exist. We use connections to parallel machine scheduling to give (1) a polynomial-time approximation scheme for computing an optimal MmS allocation when the number of agents is fixed and (2) an effective and efficient heuristic with an ex-post worst-case analysis.", "System bottlenecks, namely those resources which are subjected to high contention, constrain system performance. Hence effective resource management should be done by focusing on the bottleneck resources and allocating them to the most deserving clients. It has been shown that for any combination of entitlements and requests a fair allocation of bottleneck resources can be found, using an off-line algorithm that is given full information in advance regarding the needs of each client. We extend this result to the on-line case with no prior information. To this end we introduce a simple greedy algorithm.
In essence, when a scheduling decision needs to be made, this algorithm selects the client that has the largest minimal gap between its entitlement and its current allocation among all the bottleneck resources. Importantly, this algorithm takes a global view of the system, and assigns each client a single priority based on his usage of all the resources; this single priority is then used to make coordinated scheduling decisions on all the resources. Extensive simulations show that this algorithm achieves fair allocations according to the desired entitlements for a wide range of conditions, without using any prior information regarding resource requirements. It also follows shifting usage patterns, including situations where the bottlenecks change with time." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
Currently, research in classification and clustering methods for XML or semi-structured documents is very active. New document models have been proposed by ( @cite_1 , @cite_7 ) to extend the classical vector model and take into account both the structure and the textual part. It amounts to distinguishing words appearing in different types of XML elements in a generic way, while our approach uses the structure to select (manually) the type of elements relevant to a specific mining objective.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "1575842006", "2045825058", "2138437702", "1970881376" ], "abstract": [ "A semi-structured document has more structured information compared to an ordinary document, and the relation among semi-structured documents can be fully utilized. In order to take advantage of the structure and link information in a semi-structured document for better mining, a structured link vector model (SLVM) is presented in this paper, where a vector represents a document, and vectors' elements are determined by terms, document structure and neighboring documents. Text mining based on SLVM is described in the procedure of K-means for briefness and clarity: calculating document similarity and calculating cluster center. The clustering based on SLVM performs significantly better than that based on a conventional vector space model in the experiments, and its F value increases from 0.65-0.73 to 0.82-0.86.", "In this paper, we present a probabilistic method that can improve the efficiency of document classification when applied to structured documents. The analysis of the structure of a document is the starting point of document classification. Our method is designed to augment other classification schemes and complement pre-filtering information extraction procedures to reduce uncertainties. To this end, a probabilistic distribution on the structure of XML documents is introduced. We show how to parameterise existing learning methods to describe the structure distribution efficiently. The learned distribution is then used to predict the classes of unseen documents. Novelty detection making use of the structure-based distribution function is also discussed. Demonstration on model documents and on Internet XML documents are presented.", "A key problem in document classification and clustering is learning the similarity between documents. 
Traditional approaches include estimating similarity between feature vectors of documents where the vectors are computed using TF-IDF in the bag-of-words model. However, these approaches do not work well when either similar documents do not use the same vocabulary or the feature vectors are not estimated correctly. In this paper, we represent documents and keywords using multiple layers of connected graphs. We pose the problem of simultaneously learning similarity between documents and keyword weights as an edge-weight regularization problem over the different layers of graphs. Unlike most feature weight learning algorithms, we propose an unsupervised algorithm in the proposed framework to simultaneously optimize similarity and the keyword weights. We extrinsically evaluate the performance of the proposed similarity measure on two different tasks, clustering and classification. The proposed similarity measure outperforms the similarity measure proposed by (, 2010), a state-of-the-art classification algorithm (Zhou and Burges, 2007) and three different baselines on a variety of standard, large data sets.", "We propose a new statistical model for the classification of structured documents and consider its use for multimedia document classification. Its main originality is its ability to simultaneously take into account the structural and the content information present in a structured document, and also to cope with different types of content (text, image, etc). We present experiments on the classification of multilingual pornographic HTML pages using text and image data. The system accurately classifies porn sites from 8 European languages. This corpus has been developed by EADS company in the context of a large Web site filtering application." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
XML document clustering has been used mostly for visualizing large collections of documents; for example, @cite_2 cluster AML (Astronomical Markup Language) documents based only on their links. @cite_3 propose a model similar to @cite_1 but add in- and out-links to the model, and they use it for clustering rather than classification. @cite_4 also propose a BitCube model for clustering that represents documents based on their ePaths (paths of text elements) and textual content. Their focus is on evaluating time performance rather than clustering effectiveness.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "3603", "2084370216", "2590145195", "2007675602" ], "abstract": [ "In this paper, we describe a new bitmap indexing technique to cluster XML documents. XML is a new standard for exchanging and representing information on the Internet. Documents can be hierarchically represented by XML-elements. XML documents are represented and indexed using a bitmap indexing technique. We define the similarity and popularity operations available in bitmap indexes and propose a method for partitioning an XML document set. Furthermore, a 2-dimensional bitmap index is extended to a 3-dimensional bitmap index, called BitCube. We define statistical measurements in the BitCube: mean, mode, standard deviation, and correlation coefficient. Based on these measurements, we also define the slice, project, and dice operations on a BitCube. BitCube can be manipulated efficiently and improves the performance of document retrieval.", "Abstract Self-organization or clustering of data objects can be a powerful aid towards knowledge discovery in distributed databases. The web presents opportunities for such clustering of documents and other data objects. This potential will be even more pronounced when XML becomes widely used over the next few years. Based on clustering of XML links, we explore a visualization approach for discovering knowledge on the web.", "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogazici University Printhouse.
http://www.issi2015.org/files/downloads/all-papers/1042.pdf, 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow us to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Based on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "The increasing diffusion of XML languages for the encoding of domain-specific multimedia information raises the need for new information retrieval models that can fully exploit structural information. An XML language specifically designed for music like MX allows queries to be made directly on the thematic material. The main advantage of such a system is that it can handle symbolic, notational, and audio objects at the same time through a multilayered structure. On the model side, common music information retrieval methods do not take into account the inner structure of melodic themes and the metric relationships between notes. In this article we deal with two main topics: a novel architecture based on a new XML language for music and a new model of melodic themes based on graph theory.
This model takes advantage of particular graph invariants that can be linked to melodic themes as metadata in order to characterize all their possible modifications through specific transformations and that can be exploited in filtering algorithms. We provide a similarity function and show through an evaluation stage how it improves existing methods, particularly in the case of same-structured themes." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
Another direction is clustering Web documents returned as answers to a query, an alternative to ranked lists. @cite_11 propose an original algorithm using a suffix tree structure that is linear in the size of the collection and incremental, an important feature for supporting online clustering.
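The phrase-sharing idea behind Suffix Tree Clustering can be illustrated with a toy version of its first phase. Word n-grams stand in for the suffix tree here (the real algorithm uses a suffix tree to find all shared phrases in time linear in the collection size), and everything below is our simplification, not the cited algorithm itself.

```python
from collections import defaultdict

def base_clusters(snippets, n=2):
    """Map every word n-gram ("phrase") to the set of snippets that
    contain it; phrases shared by at least two snippets form the base
    clusters that STC would then score and merge."""
    clusters = defaultdict(set)
    for doc_id, text in enumerate(snippets):
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            clusters[phrase].add(doc_id)
    # keep only phrases shared by at least two snippets
    return {p: docs for p, docs in clusters.items() if len(docs) >= 2}

snips = [
    "suffix tree clustering of web snippets",
    "clustering of web search results",
    "a suffix tree index for strings",
]
bc = base_clusters(snips)
# e.g. the phrase "suffix tree" groups snippets 0 and 2,
# while "of web" groups snippets 0 and 1.
```

Because new snippets only add entries to the phrase map, this structure is naturally incremental, which is the property that makes the approach attractive for clustering search results as they stream in.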
{ "cite_N": [ "@cite_11" ], "mid": [ "2100958137", "2123656745", "2141957180", "2950763224" ], "abstract": [ "Users of Web search engines are often forced to sift through the long ordered list of document \"snippets\" returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC), which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial.", "Most previous work on automatic query clustering generated a flat, un-nested partition of query terms. In this work, we discuss the organization of query terms into a hierarchical structure and construct a query taxonomy in an automatic way. The proposed approach is designed based on a hierarchical agglomerative clustering algorithm to hierarchically group similar queries and generate cluster hierarchies using a novel cluster partition technique. The search processes of real-world search engines are combined to obtain highly ranked Web documents as the feature source for each query term.
Preliminary experiments show that the proposed approach is effective for obtaining thesaurus information for query terms, and is also feasible for constructing a query taxonomy which provides a basis for in-depth analysis of users' search interests and domain-specific vocabulary on a larger scale.", "Given a set @math of @math strings of total length @math , our task is to report the \"most relevant\" strings for a given query pattern @math . This involves somewhat more advanced query functionality than the usual pattern matching, as some notion of \"most relevant\" is involved. In information retrieval literature, this task is best achieved by using inverted indexes. However, inverted indexes work only for some predefined set of patterns. In the pattern matching community, the most popular pattern-matching data structures are suffix trees and suffix arrays. However, a typical suffix tree search involves going through all the occurrences of the pattern over the entire string collection, which might be a lot more than the required relevant documents. The first formal framework to study such kind of retrieval problems was given by [Muthukrishnan, 2002]. He considered two metrics for relevance: frequency and proximity. He took a threshold-based approach on these metrics and gave data structures taking @math words of space. We study this problem in a slightly different framework of reporting the top @math most relevant documents (in sorted order) under similar and more general relevance metrics. Our framework gives linear space data structure with optimal query times for arbitrary score functions. As a corollary, it improves the space utilization for the problems in [Muthukrishnan, 2002] while maintaining optimal query performance. 
We also develop compressed variants of these data structures for several specific relevance metrics.", "Inspired by the PageRank and HITS (hubs and authorities) algorithms for Web search, we propose a structural re-ranking approach to ad hoc information retrieval: we reorder the documents in an initially retrieved set by exploiting asymmetric relationships between them. Specifically, we consider generation links, which indicate that the language model induced from one document assigns high probability to the text of another; in doing so, we take care to prevent bias against long documents. We study a number of re-ranking criteria based on measures of centrality in the graphs formed by generation links, and show that integrating centrality into standard language-model-based retrieval is quite effective at improving precision at top ranks." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
@cite_5 compare different text feature extractions and variants of a linear-time clustering algorithm that uses random seed selection with center adjustment.
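Random seed selection followed by iterative center adjustment is essentially the k-means scheme. A toy version on plain Euclidean vectors might look like the following (fixed iteration count, no vector-average damping, and all naming is ours):

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Random seed selection (sample k points as initial centers),
    then repeated assignment and center adjustment."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            groups[nearest].append(p)
        # center adjustment: move each center to its group's mean
        centers = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, groups = kmeans(pts, 2)
# The two tight groups of points end up in separate clusters.
```

The cited experiments suggest that the center-adjustment step matters more for cluster quality than how the seeds are chosen, which is why a cheap random seeding like the one above can be a reasonable trade-off.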
{ "cite_N": [ "@cite_5" ], "mid": [ "2070412788", "2137691837", "2084594934", "2109380209" ], "abstract": [ "Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of F-Measure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tf-idf and feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter/Gather (buckshot, fractionation, and split/join) and k-means. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time/quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time/quality tradeoff quantitatively.", "We present a novel analysis of a random sampling approach for four clustering problems in metric spaces: k-median, k-means, min-sum k-clustering, and balanced k-median. For all these problems, we consider the following simple sampling scheme: select a small sample set of input points uniformly at random and then run some approximation algorithm on this sample set to compute an approximation of the best possible clustering of this set. 
Our main technical contribution is a significantly strengthened analysis of the approximation guarantee by this scheme for the clustering problems. The main motivation behind our analyses was to design sublinear-time algorithms for clustering problems. Our second contribution is the development of new approximation algorithms for the aforementioned clustering problems. Using our random sampling approach, we obtain for these problems, for the first time, approximation algorithms that have running time independent of the input size, and depending on k and the diameter of the metric space only. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2006. A preliminary extended abstract of this work appeared in Proceedings of the 31st Annual International Colloquium on Automata, Languages and Programming (ICALP), pp. 396-407, 2004.", "Scene text extraction, i.e., segmenting text pixels from background, is an important step before the text can be recognized. It is a challenging problem due to the cluttered background and the variation of lighting. In this paper, we propose a seed-based segmentation method that can automatically judge the text polarity, extract seed points of text and background, and segment texts by semi-supervised learning (SSL). First, we estimate the text polarity and the stroke width using gradient local correlation. Then, all the points in the middle of stroke edge pairs satisfying the width and polarity are taken as foreground seeds, and the points in the middle of the edge pairs with opposite polarity are taken as background seeds. The whole image is then segmented into text and background using an SSL algorithm. Owing to the accurate estimate of text polarity and extraction of seed points, the proposed method yields good segmentation performance. Experimental results on the KAIST dataset demonstrate the superiority of the method.", "Large datasets become common in applications like Internet services, genomic sequence analysis and astronomical telescope.
The demanding requirements of memory and computation power force data mining algorithms to be parallelized in order to efficiently deal with the large datasets. This paper introduces our experience of grouping internet users by mining a huge volume of web access log of up to 100 gigabytes. The application is realized using hierarchical clustering algorithms with Map-Reduce, a parallel processing framework over clusters. However, the immediate implementation of the algorithms suffers from efficiency problems for both inadequate memory and higher execution time. This paper presents an efficient hierarchical clustering method of mining large datasets with Map-Reduce. The method includes two optimization techniques: “Batch Updating” to reduce the computational time and communication costs among cluster nodes, and “Co-occurrence based feature selection” to decrease the dimension of feature vectors and eliminate noise features. The empirical study shows the first technique can significantly reduce the IO and distributed communication overhead, reducing the total execution time to nearly 1/15. Experimentally, the second technique efficiently simplifies the features while obtaining improved accuracy of hierarchical clustering." ] }
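The seed-selection-versus-center-adjustment comparison described in the clustering record above can be illustrated with a minimal k-means-style sketch. This is an illustrative reconstruction under stated assumptions, not the cited system's code; all names are ours.

```python
import random

def kmeans(points, k, iters=10, rng=random):
    """Minimal k-means: random seed selection followed by iterative
    center adjustment. points is a list of equal-length float vectors."""
    centers = [list(p) for p in rng.sample(points, k)]  # random seed selection
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Center adjustment: move each center to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                dim = len(members[0])
                centers[i] = [sum(m[d] for m in members) / len(members)
                              for d in range(dim)]
    return centers
```

On well-separated data the centers converge to the cluster means largely regardless of which points were drawn as seeds, which is consistent with the cited observation that continuous center adjustment contributes more to cluster quality than seed selection does.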
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
For routing on a circle, the best-known constructions have @math and @math . Examples include: Chord @cite_15 with distance-function @math , a variant of Chord with "bidirectional links" @cite_4 and distance-function @math , and the hypercube with distance function @math . In this paper, we improve upon all of these constructions by showing how to route in @math hops in the worst case with @math links per node.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2070219632", "2949856235", "2160405192", "2949588463" ], "abstract": [ "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure.", "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds.", "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e. g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)- greedy routing. 
The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions. The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n / log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n / log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal.", "We study approximate distributed solutions to the weighted all-pairs-shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. @math A deterministic @math -approximation to APSP in @math rounds. This improves over the best previously known algorithm, by both derandomizing it and by reducing the running time by a @math factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and require that these names are used in distance and routing queries. It is known that relabeling is necessary to achieve running times of @math . In the relabeling model, we obtain the following results. @math A randomized @math -approximation to APSP, for any integer @math , running in @math rounds, where @math is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from @math to @math . Also, the new algorithm uses labels of asymptotically optimal size, namely @math bits. @math A randomized @math -approximation to APSP, for any integer @math , running in time @math and producing compact routing tables of size @math . The node labels consist of @math bits.
This improves on the approximation ratio of @math for tables of that size achieved by the best previously known algorithm, which terminates faster, in @math rounds." ] }
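The unidirectional Chord scheme discussed in the record above (out-going links at clockwise offsets 2^k on a ring of n = 2^b nodes) can be sketched in a few lines. This is an illustrative sketch of standard greedy clockwise routing, not code from any cited paper; each hop clears the highest set bit of the remaining clockwise distance, so the hop count equals the number of 1-bits in that distance (at most b).

```python
def clockwise(x, y, n):
    """Clockwise distance from node x to node y on a ring of n nodes."""
    return (y - x) % n

def greedy_route(src, dst, n):
    """Greedy clockwise routing on a Chord-like ring with n = 2^b nodes:
    from the current node, follow the largest finger (offset 2^k) that
    does not overshoot the destination. Returns the hop-by-hop path."""
    path = [src]
    cur = src
    while cur != dst:
        d = clockwise(cur, dst, n)
        step = 1 << (d.bit_length() - 1)  # largest power of two <= d
        cur = (cur + step) % n
        path.append(cur)
    return path
```

For example, routing from node 0 to node 7 on an 8-node ring takes three hops, because 7 is 111 in binary.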
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Greedy routing with distance function @math has been studied for Chord @cite_4 , a popular topology for P2P networks. Chord has @math nodes, with out-degree @math per node. The longest route takes @math hops. In terms of @math and @math , the largest-sized Chord network has @math nodes. Moreover, @math and @math cannot be chosen independently -- they are functionally related. Both @math and @math are @math . Analysis of routing on Chord leaves open the following question:
{ "cite_N": [ "@cite_4" ], "mid": [ "2070219632", "1599788664", "2949588463", "2949856235" ], "abstract": [ "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure.", "We propose a family of novel schemes based on Chord retaining all positive aspects that made Chord a popular topology for routing in P2P networks. The schemes, based on the Fibonacci number system, allow to improve on the maximum average number of hops for lookups and the routing table size per node.", "We study approximate distributed solutions to the weighted all-pairs-shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. @math A deterministic @math -approximation to APSP in @math rounds. This improves over the best previously known algorithm, by both derandomizing it and by reducing the running time by a @math factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and require that these names are used in distance and routing queries. It is known that relabeling is necessary to achieve running times of @math . In the relabeling model, we obtain the following results. @math A randomized @math -approximation to APSP, for any integer @math , running in @math rounds, where @math is the hop diameter of the network. 
This algorithm simplifies the best previously known result and reduces its approximation ratio from @math to @math . Also, the new algorithm uses labels of asymptotically optimal size, namely @math bits. @math A randomized @math -approximation to APSP, for any integer @math , running in time @math and producing compact routing tables of size @math . The node labels consist of @math bits. This improves on the approximation ratio of @math for tables of that size achieved by the best previously known algorithm, which terminates faster, in @math rounds.", "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Xu et al. @cite_16 provide a partial answer to the above question by studying routing with distance function @math over graph topologies. A graph over @math nodes placed in a circle is said to be uniform if the set of clockwise offsets of out-going links is identical for all nodes. Chord is an example of a uniform graph. Xu et al. show that for any uniform graph with @math links per node, routing with distance function @math necessitates @math hops in the worst-case.
{ "cite_N": [ "@cite_16" ], "mid": [ "2950469527", "2949856235", "2135290452", "2009356484" ], "abstract": [ "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erd o s-R 'enyi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. 
In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math .", "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds.", "In the study of deterministic distributed algorithms it is commonly assumed that each node has a unique O(log n)-bit identifier. We prove that for a general class of graph problems, local algorithms (constant-time distributed algorithms) do not need such identifiers: a port numbering and orientation is sufficient. Our result holds for so-called simple PO-checkable graph optimisation problems; this includes many classical packing and covering problems such as vertex covers, edge covers, matchings, independent sets, dominating sets, and edge dominating sets. We focus on the case of bounded-degree graphs and show that if a local algorithm finds a constant-factor approximation of a simple PO-checkable graph problem with the help of unique identifiers, then the same approximation ratio can be achieved on anonymous networks. As a corollary of our result and by prior work, we derive a tight lower bound on the local approximability of the minimum edge dominating set problem. 
Our main technical tool is an algebraic construction of homogeneously ordered graphs: We say that a graph is (α,r)-homogeneous if its nodes are linearly ordered so that an α fraction of nodes have pairwise isomorphic radius-r neighbourhoods. We show that there exists a finite (α,r)-homogeneous 2k-regular graph of girth at least g for any α", "We show that if a connected graph with @math nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(φ^-1 • log n) many rounds with high probability, regardless of the source, by using the PUSH-PULL strategy. The Õ(·) notation hides a polylog φ^-1 factor. This result is almost tight since there exists a graph of n nodes, and conductance φ, with diameter Ω(φ^-1 • log n). If, in addition, the network satisfies some kind of uniformity condition on the degrees, our analysis implies that both PUSH and PULL, by themselves, successfully broadcast the message to every node in the same number of rounds." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Cordasco et al. @cite_19 extend the result of Xu et al. @cite_16 by showing that routing with distance function @math in a uniform graph over @math nodes satisfies the inequality @math , where @math denotes the out-degree of each node, @math is the length of the longest path, and @math denotes the @math Fibonacci number. It is well-known that @math , where @math is the golden ratio and @math denotes the integer closest to real number @math . It follows that @math . Cordasco et al. show that the inequality is strict if @math . For @math , they construct uniform graphs based upon Fibonacci numbers which achieve an optimal tradeoff between @math and @math .
{ "cite_N": [ "@cite_19", "@cite_16" ], "mid": [ "2949856235", "2099470983", "2950469527", "2963781977" ], "abstract": [ "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds.", "It is proven that the connected pathwidth of any graph @math is at most @math , where @math is the pathwidth of @math . The method is constructive, i.e., it yields an efficient algorithm that for a given path decomposition of width @math computes a connected path decomposition of width at most @math . The running time of the algorithm is @math , where @math is the number of “bags” in the input path decomposition. The motivation for studying connected path decompositions comes from the connection between the pathwidth and the search number of a graph. One of the advantages of the above bound for connected pathwidth is an inequality @math , where @math and @math are the connected search number and the search number of @math , respectively. Moreover, the algorithm presented in this work can be used to convert a given search strategy using @math searchers into a (monotone) connected one using @math searc...", "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . 
Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erdős-Rényi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math .", "An @math maximum distance separable (MDS) array code of length @math , dimension @math , and sub-packetization @math is formed of @math matrices over a finite field @math , with every column of the matrix stored on a separate node in the distributed storage system and viewed as a coordinate of the codeword. Repair of a failed node (recovery of one erased column) can be performed by accessing a set of @math surviving (helper) nodes.
The code is said to have the optimal access property if the amount of data accessed at each of the helper nodes meets a lower bound on this quantity. For optimal-access MDS codes with @math , the sub-packetization @math satisfies the bound @math . In our previous work (IEEE Trans. Inf. Theory, vol. 63, no. 4, 2017), for any @math and @math , we presented an explicit construction of optimal-access MDS codes with sub-packetization @math . In this paper, we take up the question of reducing the sub-packetization value @math to make it to approach the lower bound. We construct an explicit family of optimal-access codes with @math , which differs from the optimal value by at most a factor of @math . These codes can be constructed over any finite field @math as long as @math , and afford low-complexity encoding and decoding procedures. We also define a version of the repair problem that bridges the context of regenerating codes and codes with locality constraints (LRC codes), which we call group repair with optimal access . In this variation, we assume that the set of @math nodes is partitioned into @math repair groups of size @math , and require that the amount of accessed data for repair is the smallest possible whenever the @math helper nodes include all the other @math nodes from the same group as the failed node. For this problem, we construct a family of codes with the group optimal access property. These codes can be constructed over any field @math of size @math , and also afford low-complexity encoding and decoding procedures." ] }
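The closed form quoted in the Cordasco et al. discussion above, that the k-th Fibonacci number is the integer closest to φ^k / √5, can be checked directly. The sketch below is illustrative only; function names are ours.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib(k):
    """k-th Fibonacci number by iteration, with fib(1) = fib(2) = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def fib_closed_form(k):
    """The integer closest to PHI**k / sqrt(5), as stated in the text.
    (Exact in double precision only for moderate k.)"""
    return round(PHI ** k / math.sqrt(5))
```

The two definitions agree for all k small enough that the floating-point rounding error stays below one half.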
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
The results in @cite_4 @cite_16 @cite_19 leave open the question whether there exists any graph construction that permits routes of length @math with distance function @math and/or @math . This paper provides an answer to the problem by constructing a non-uniform graph --- the set of clockwise offsets of out-going links is different for different nodes.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_4" ], "mid": [ "1512819151", "2950552904", "1623319572", "2568950526" ], "abstract": [ "The emergence of real life graphs with billions of nodes poses significant challenges for managing and querying these graphs. One of the fundamental queries submitted to graphs is the shortest distance query. Online BFS (breadth-first search) and offline pre-computing pairwise shortest distances are prohibitive in time or space complexity for billion-node graphs. In this paper, we study the feasibility of building distance oracles for billion-node graphs. A distance oracle provides approximate answers to shortest distance queries by using a pre-computed data structure for the graph. Sketch-based distance oracles are good candidates because they assign each vertex a sketch of bounded size, which means they have linear space complexity. However, state-of-the-art sketch-based distance oracles lack efficiency or accuracy when dealing with big graphs. In this paper, we address the scalability and accuracy issues by focusing on optimizing the three key factors that affect the performance of distance oracles: landmark selection, distributed BFS, and answer generation. We conduct extensive experiments on both real networks and synthetic networks to show that we can build distance oracles of affordable cost and efficiently answer shortest distance queries even for billion-node graphs.", "We present new and improved data structures that answer exact node-to-node distance queries in planar graphs. Such data structures are also known as distance oracles. For any directed planar graph on n nodes with non-negative lengths we obtain the following: * Given a desired space allocation @math , we show how to construct in @math time a data structure of size @math that answers distance queries in @math time per query. As a consequence, we obtain an improvement over the fastest algorithm for k-many distances in planar graphs whenever @math . 
* We provide a linear-space exact distance oracle for planar graphs with query time @math for any constant ε > 0. This is the first such data structure with provable sublinear query time. * For edge lengths at least one, we provide an exact distance oracle of space @math such that for any pair of nodes at distance D the query time is @math . Comparable query performance had been observed experimentally but has never been explained theoretically. Our data structures are based on the following new tool: given a non-self-crossing cycle C with @math nodes, we can preprocess G in @math time to produce a data structure of size @math that can answer the following queries in @math time: for a query node u, output the distance from u to all the nodes of C. This data structure builds on and extends a related data structure of Klein (SODA'05), which reports distances to the boundary of a face, rather than a cycle. The best distance oracles for planar graphs until the current work are due to Cabello (SODA'06), Djidjev (WG'96), and Fakcharoenphol and Rao (FOCS'01). For @math and space @math , we essentially improve the query time from @math to @math .", "A (1+ε)-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The most relevant measures for a distance-oracle construction are: space, query time, and preprocessing time. There are strong distance-oracle constructions known for planar graphs (Thorup, JACM'04) and, subsequently, minor-excluded graphs (Abraham and Gavoille, PODC'06). However, these require Ω(ε^-1 n lg n) space for n-node graphs. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs we give distance-oracle constructions that require only O(n) space. The big O hides only a fixed constant, independent of ε and independent of genus or size of an excluded minor.
The preprocessing times for our distance oracle are also faster than those for the previously known constructions. For planar graphs, the preprocessing time is O(n lg^2 n). However, our constructions have slower query times. For planar graphs, the query time is O(ε^-2 lg^2 n). For all our linear-space results, we can in fact ensure, for any δ > 0, that the space required is only 1 + δ times the space required just to represent the graph itself.
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Kleinberg's construction has found applications in the design of overlay routing networks for Distributed Hash Tables. Symphony @cite_13 is an adaptation of Kleinberg's construction in a single dimension. The idea is to place @math nodes in a virtual circle and to equip each node with @math out-going links. In the resulting network, the average path length of routes with distance function @math is @math hops. Note that unlike Kleinberg's network, the space here is virtual and so are the distances and the sense of routing. The same complexity was achieved with a slightly different Kleinberg-style construction by Aspnes et al. @cite_18 . In the same paper, it was also shown that any symmetric, randomized degree- @math network has @math routing complexity.
{ "cite_N": [ "@cite_18", "@cite_13" ], "mid": [ "1992467531", "2107997203", "2949588463", "2084442192" ], "abstract": [ "We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg [5], is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work.", "Consider @math nodes connected by wires to make an n-dimensional binary cube. Suppose that initially the nodes contain one packet each addressed to distinct nodes of the cube. We show that there is a distributed randomized algorithm that can route every packet to its destination without two packets passing down the same wire at any one time, and finishes within time @math with overwhelming probability for all such routing requests. Each packet carries with it @math bits of bookkeeping information. 
No other communication among the nodes takes place. The algorithm offers the only scheme known for realizing arbitrary permutations in a sparse N-node network in @math time and has evident applications in the design of general-purpose parallel computers.", "We study approximate distributed solutions to the weighted all-pairs-shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. @math A deterministic @math -approximation to APSP in @math rounds. This improves over the best previously known algorithm, by both derandomizing it and by reducing the running time by a @math factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and require that these names are used in distance and routing queries. It is known that relabeling is necessary to achieve running times of @math . In the relabeling model, we obtain the following results. @math A randomized @math -approximation to APSP, for any integer @math , running in @math rounds, where @math is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from @math to @math . Also, the new algorithm uses labels of asymptotically optimal size, namely @math bits. @math A randomized @math -approximation to APSP, for any integer @math , running in time @math and producing compact routing tables of size @math . The node labels consist of @math bits. This improves on the approximation ratio of @math for tables of that size achieved by the best previously known algorithm, which terminates faster, in @math rounds.", "We analyze decentralized routing in small-world networks that combine a wide variation in node degrees with a notion of spatial embedding. Specifically, we consider a variation of Kleinberg's augmented-lattice model (STOC 2000), where the number of long-range contacts for each node is drawn from a power-law distribution.
This model is motivated by the experimental observation that many \"real-world\" networks have power-law degrees. In such networks, the exponent α of the power law is typically between 2 and 3. We prove that, in our model, for this range of values, 2" ] }
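The greedy-routing model that the passages above describe can be made concrete with a short simulation. The following Python sketch is illustrative only (the graph builder, the harmonic long-link draw, and all names are our assumptions, loosely modeled on Symphony-style rings, not code from the cited papers): each node keeps its ring successor plus a few random long links, and greedy forwarding always moves to the out-neighbor minimizing the clockwise distance to the destination.

```python
import random

def clockwise(a, b, n):
    """Clockwise distance from node a to node b on a ring of n nodes."""
    return (b - a) % n

def build_ring_with_long_links(n, k, rng):
    """Each node gets its ring successor plus k random long links whose
    lengths are drawn roughly harmonically (d ~ n**uniform(0,1)),
    a Symphony-style choice (illustrative assumption)."""
    links = {u: {(u + 1) % n} for u in range(n)}
    for u in range(n):
        for _ in range(k):
            d = max(1, int(n ** rng.random()))
            links[u].add((u + d) % n)
    return links

def greedy_route(links, src, dst, n):
    """Greedy routing: repeatedly hop to the out-neighbor closest to dst
    under the clockwise distance. Returns the hop count."""
    hops, u = 0, src
    while u != dst:
        u = min(links[u], key=lambda v: clockwise(v, dst, n))
        hops += 1
    return hops
```

Because every node keeps its successor, the clockwise distance strictly decreases at each hop, so the route always terminates in at most d(src, dst) hops; the random long links are what bring the expected hop count down.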
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Papillon outperforms all of the above randomized constructions, using degree @math and achieving @math routing. It should be possible to randomize Papillon following principles similar to the Viceroy @cite_14 randomized construction of the butterfly network, though we do not pursue this direction here.
{ "cite_N": [ "@cite_14" ], "mid": [ "1980177572", "2160405192", "2009356484", "1975579567" ], "abstract": [ "In this paper we study randomized algorithms for circuit switching on multistage networks related to the butterfly. We devise algorithms that route messages by constructing circuits (or paths) for the messages with small congestion, dilation, and setup time. Our algorithms are based on the idea of having each message choose a route from two possibilities, a technique that has previously proven successful in simpler load balancing settings. As an application of our techniques, we propose a novel design for a data server.", "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e. g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN) greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions. The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n / log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n / log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. 
Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal.", "We show that if a connected graph with @math nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(φ^{-1} · log n) many rounds with high probability, regardless of the source, by using the PUSH-PULL strategy. The O(·) notation hides a polylog(φ^{-1}) factor. This result is almost tight since there exists a graph of n nodes, and conductance φ, with diameter Ω(φ^{-1} · log n). If, in addition, the network satisfies some kind of uniformity condition on the degrees, our analysis implies that both PUSH and PULL, by themselves, successfully broadcast the message to every node in the same number of rounds.", "In this paper, we carry on investigating the line of research questioning the power of randomization for the design of distributed algorithms. In their seminal paper, Naor and Stockmeyer [STOC 1993] established that, in the context of network computing, in which all nodes execute the same algorithm in parallel, any construction task that can be solved locally by a randomized Monte-Carlo algorithm can also be solved locally by a deterministic algorithm. This result however holds in a specific context. In particular, it holds only for distributed tasks whose solutions can be locally checked by a deterministic algorithm. In this paper, we extend the result of Naor and Stockmeyer to a wider class of tasks. Specifically, we prove that the same derandomization result holds for every task whose solutions can be locally checked using a 2-sided error randomized Monte-Carlo algorithm. This extension finds applications to, e.g., the design of lower bounds for construction tasks which tolerate that some nodes compute incorrect values. In a nutshell, we show that randomization does not help for solving such resilient tasks." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
With @math out-going links per node, several graphs over @math nodes in a circle support routes with @math greedy hops. Deterministic graphs with this property include: (a) the original Chord @cite_15 topology with distance function @math , (b) Chord with edges treated as bidirectional @cite_4 with distance function @math . This is also the known lower bound on any uniform graph with distance function @math @cite_16 . Randomized graphs with the same tradeoff include randomized-Chord @cite_2 @cite_22 and Symphony @cite_13 -- both with distance function @math . With degree @math , Symphony @cite_13 has routes of length @math on average. The network of @cite_18 also supports routes of length @math on average, with a gap to the known lower bound on their network of @math .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2900102855", "2568950526", "2951371116", "2950469527" ], "abstract": [ "In the distributed all-pairs shortest paths problem (APSP), every node in the weighted undirected distributed network (the CONGEST model) needs to know the distance from every other node using the least number of communication rounds (typically called time complexity). The problem admits a @math -approximation @math -time algorithm and a nearly-tight @math lower bound [Nanongkai, STOC'14; Lenzen and Patt-Shamir PODC'15] ( @math , @math and @math hide polylogarithmic factors; note that the lower bounds also hold even in the unweighted case and in the weighted case with polynomial approximation ratios [LenzenP_podc13, HolzerW12, PelegRT12, Nanongkai-STOC14]). For the exact case, Elkin [STOC'17] presented an @math time bound, which was later improved to @math [Huang, Nanongkai, Saranurak FOCS'17]. It was shown that any super-linear lower bound (in @math ) requires a new technique [Censor-Hillel, Khoury, Paz, DISC'17], but otherwise it remained widely open whether there exists a @math -time algorithm for the exact case, which would match the best possible approximation algorithm. This paper resolves this question positively: we present a randomized (Las Vegas) @math -time algorithm, matching the lower bound up to polylogarithmic factors. Like the previous @math bound, our result works for directed graphs with zero (and even negative) edge weights. In addition to the improved running time, our algorithm works in a more general setting than that required by the previous @math bound; in our setting (i) the communication is only along edge directions (as opposed to bidirectional), and (ii) edge weights are arbitrary (as opposed to integers in 1, 2, ... poly(n) ). 
...", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "We consider the problem of topology recognition in wireless (radio) networks modeled as undirected graphs. Topology recognition is a fundamental task in which every node of the network has to output a map of the underlying graph i.e., an isomorphic copy of it, and situate itself in this map. In wireless networks, nodes communicate in synchronous rounds. In each round a node can either transmit a message to all its neighbors, or stay silent and listen. 
At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. Nodes have labels which are (not necessarily different) binary strings. The length of a labeling scheme is the largest length of a label. We concentrate on wireless networks modeled by trees, and we investigate two problems. What is the shortest labeling scheme that permits topology recognition in all wireless tree networks of diameter @math and maximum degree @math ? What is the fastest topology recognition algorithm working for all wireless tree networks of diameter @math and maximum degree @math , using such a short labeling scheme? We are interested in deterministic topology recognition algorithms. For the first problem, we show that the minimum length of a labeling scheme allowing topology recognition in all trees of maximum degree @math is @math . For such short schemes, used by an algorithm working for the class of trees of diameter @math and maximum degree @math , we show almost matching bounds on the time of topology recognition: an upper bound @math , and a lower bound @math , for any constant @math .", "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . 
For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erd o s-R 'enyi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math ." ] }
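The Θ(log n)-hop tradeoff with Θ(log n) out-going links discussed above is easy to check on the deterministic Chord topology. The sketch below is our own illustrative code, assuming n is a power of two (it is not the cited papers' implementation): it builds Chord-style fingers u → u + 2^i and routes greedily under the clockwise distance function.

```python
def clockwise(a, b, n):
    """Clockwise distance from a to b on a ring of n nodes."""
    return (b - a) % n

def chord_fingers(u, n):
    """Chord-style finger table: u links to u + 2^i (mod n), 0 <= i < log2(n)."""
    return [(u + (1 << i)) % n for i in range(n.bit_length() - 1)]

def greedy_hops(src, dst, n):
    """Greedy routing with distance function d(a, b) = (b - a) mod n.
    Each hop clears the highest set bit of the remaining distance."""
    hops, u = 0, src
    while u != dst:
        u = min(chord_fingers(u, n), key=lambda v: clockwise(v, dst, n))
        hops += 1
    return hops
```

On this graph the greedy route length is exactly the number of set bits in the clockwise distance, i.e., Θ(log n) in the worst case, matching the tradeoff discussed above.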
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
The construction demonstrates that we can indeed design networks in which greedy routing along these metrics has asymptotically optimal routing complexity. Our contribution is a family of networks that extends the Butterfly network family, so as to facilitate efficient greedy routing. With @math links per node, greedy routes are @math in the worst-case, which is asymptotically optimal. For @math , this beats the lower bound of @cite_18 on symmetric, randomized greedy routing networks (and it meets it for @math ). In the specific case of @math , our greedy routing achieves @math average route length.
{ "cite_N": [ "@cite_18" ], "mid": [ "2160405192", "2158969254", "2144253712", "2118347867" ], "abstract": [ "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e. g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)- greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions.The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN- greedy routing algorithm is able to diminish route-lengths to Θ(log n log log n) hops, which is asymptotically optimal.", "Greedy routing is a novel routing paradigm where messages are always forwarded to the neighbor that is closest to the destination. Our main result is a polynomial-time algorithm that embeds combinatorial unit disk graphs (CUDGs - a CUDG is a UDG without any geometric information) into O(log 2 n)- dimensional space, permitting greedy routing with constant stretch. To the best of our knowledge, this is the first greedy embedding with stretch guarantees for this class of networks. 
Our main technical contribution involves extracting, in polynomial time, a constant number of isometric and balanced tree separators from a given CUDG. We do this by extending the celebrated Lipton-Tarjan separator theorem for planar graphs to CUDGs. Our techniques extend to other classes of graphs; for example, for general graphs, we obtain an O(log n)-stretch greedy embedding into O(log^2 n)-dimensional space. The greedy embeddings constructed by our algorithm can also be viewed as a constant-stretch compact routing scheme in which each node is assigned an O(log^3 n)-bit label. To the best of our knowledge, this result yields the best known stretch-space trade-off for compact routing on CUDGs. Extensive simulations on random wireless networks indicate that the average routing overhead is about 10%; only a few routes have a stretch above 1.5.", "We investigate the construction of greedy embeddings in polylogarithmic dimensional Euclidean spaces in order to achieve scalable routing through geographic routing. We propose a practical algorithm which uses random projection to achieve greedy forwarding on a space of dimension O(log(n)) where nodes have coordinates of size O(log(n)), thus achieving greedy forwarding using a route table at each node of polylogarithmic size with respect to the number of nodes. We further improve this algorithm by using a quasi-greedy algorithm which ensures greedy forwarding works along a path-wise construction, allowing us to further reduce the dimension of the embedding. The proposed algorithm, denoted GLoVE-U, is fully distributed and practical to implement. We evaluate the performance using extensive simulations and show that our greedy forwarding algorithm delivers low path stretch and scales properly.", "We propose an embedding and routing scheme for arbitrary network connectivity graphs, based on greedy routing and utilizing virtual node coordinates. 
In dynamic multihop packet-switching communication networks, routing elements can join or leave during network operation or exhibit intermittent failures. We present an algorithm for online greedy graph embedding in the hyperbolic plane that enables incremental embedding of network nodes as they join the network, without disturbing the global embedding. Even a single link or node removal may invalidate the greedy routing success guarantees in network embeddings based on an embedded spanning tree subgraph. As an alternative to frequent reembedding of temporally dynamic network graphs in order to retain the greedy embedding property, we propose a simple but robust generalization of greedy distance routing called Gravity-Pressure (GP) routing. Our routing method always succeeds in finding a route to the destination provided that a path exists, even if a significant fraction of links or nodes is removed subsequent to the embedding. GP routing does not require precomputation or maintenance of special spanning subgraphs and, as demonstrated by our numerical evaluation, is particularly suitable for operation in tandem with our proposed algorithm for online graph embedding." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Recent work @cite_9 explores the surprising advantages of greedy routing with lookahead in randomized graphs over @math nodes in a circle. The idea behind lookahead is to take a neighbor's neighbors into account when making routing decisions. It shows that greedy routing with lookahead achieves @math expected route length in Symphony @cite_13 . For other networks which have @math out-going links per node, e.g., randomized-Chord @cite_2 @cite_22 , randomized-hypercubes @cite_2 , skip-graphs @cite_20 and SkipNet @cite_8 , the average path length is @math hops. Among these networks, Symphony and randomized-Chord use routing with distance function @math . Other networks use a different distance function (none of them uses @math ). For each of these networks, with @math out-going links per node, it was established that plain greedy routing (without lookahead) is sub-optimal and achieves @math expected route lengths. The results suggest that lookahead has a significant impact on routing.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_9", "@cite_2", "@cite_13", "@cite_20" ], "mid": [ "2160405192", "2949588463", "2568950526", "2028069703" ], "abstract": [ "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e. g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN) greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions. The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n / log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n / log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal.", "We study approximate distributed solutions to the weighted all-pairs-shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. @math A deterministic @math -approximation to APSP in @math rounds. This improves over the best previously known algorithm, by both derandomizing it and by reducing the running time by a @math factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and require that these names are used in distance and routing queries. 
It is known that relabeling is necessary to achieve running times of @math . In the relabeling model, we obtain the following results. @math A randomized @math -approximation to APSP, for any integer @math , running in @math rounds, where @math is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from @math to @math . Also, the new algorithm uses labels of asymptotically optimal size, namely @math bits. @math A randomized @math -approximation to APSP, for any integer @math , running in time @math and producing compact routing tables of size @math . The node labels consist of @math bits. This improves on the approximation ratio of @math for tables of that size achieved by the best previously known algorithm, which terminates faster, in @math rounds.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. 
We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "We propose a routing strategy to improve the transportation efficiency on complex networks. Instead of using the routing strategy for shortest path, we give a generalized routing algorithm to find the so-called efficient path, which considers the possible congestion in the nodes along actual paths. Since the nodes with the largest degree are very susceptible to traffic congestion, an effective way to improve traffic and control congestion, as our strategy, can be redistributing traffic load in central nodes to other noncentral nodes. Simulation results indicate that the network capability in processing traffic is improved more than 10 times by optimizing the efficient path, which is in good agreement with the analysis. DOI: 10.1103 PhysRevE.73.046108 PACS numbers: 89.75.Hc Since the seminal work on scale-free networks by Barabási and Albert (BA model) [1] and on the small-world phenomenon by Watts and Strogatz [2], the structure and dynamics of complex networks have recently attracted a tremendous amount of interest and attention from the physics community (see the review papers [3-5] and references therein). The increasing importance of large communication networks such as the Internet [6], upon which our society survives, calls for the need for high efficiency in handling and delivering information. In this light, to find optimal strategies for traffic routing is one of the important issues we have to address. 
There have been many previous studies to understand and control traffic congestion on networks, with a basic assumption that the network has a homogeneous structure [7-11]. However, many real networks display both scale-free and small-world features, and thus it is of great interest to study the effect of network topology on traffic flow and the effect of traffic on network evolution. present a formalism that can cope simultaneously with the searching and traffic dynamics in parallel transportation systems [12]. This formalism can be used to optimize network structure under a local search algorithm, while to obtain the formalism one should know the global information of the whole networks. Holme and Kim provide an in-depth analysis on the vertex/edge overload cascading breakdowns based on evolving networks, and suggest a method to avoid" ] }
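The lookahead (NoN) idea described in the records above — rank each neighbor by the best distance reachable through that neighbor's own out-links — can be sketched as follows. This is our own illustrative rendering on a deterministic Chord-style ring; the tie-breaking rule and all names are assumptions, not the exact algorithms of the cited papers.

```python
def clockwise(a, b, n):
    """Clockwise distance from a to b on a ring of n nodes."""
    return (b - a) % n

def fingers(u, n):
    """Chord-style out-links: u -> u + 2^i (mod n), 0 <= i < log2(n)."""
    return [(u + (1 << i)) % n for i in range(n.bit_length() - 1)]

def plain_greedy(src, dst, n):
    """Plain greedy: hop to the out-neighbor closest to dst (no lookahead)."""
    hops, u = 0, src
    while u != dst:
        u = min(fingers(u, n), key=lambda v: clockwise(v, dst, n))
        hops += 1
    return hops

def non_greedy(src, dst, n):
    """Neighbor-of-Neighbor greedy: rank each neighbor v by the best
    distance reachable from v in at most one further hop, breaking ties
    by v's own distance (an assumed tie-break, needed to avoid cycling)."""
    def score(v):
        best = min(clockwise(w, dst, n) for w in fingers(v, n))
        own = clockwise(v, dst, n)
        return (min(best, own), own)
    hops, u = 0, src
    while u != dst:
        u = min(fingers(u, n), key=score)
        hops += 1
    return hops
```

On this deterministic topology plain greedy is already bit-optimal, so lookahead cannot help; the cited results show its benefit on the randomized constructions, where plain greedy needs Ω(log n) hops while NoN-greedy achieves Θ(log n / log log n).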
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
demonstrates that it is possible to construct a graph in which each node has degree @math and in which 1- has routes of length @math in the worst case, for the metrics @math , @math and @math . Furthermore, for all @math , plain greedy on our network design beats even the results obtained in @cite_9 with @math - lookahead .
{ "cite_N": [ "@cite_9" ], "mid": [ "2949856235", "2079430060", "2169273227", "2161190897" ], "abstract": [ "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds.", "We study undirected networks with edge costs that satisfy the triangle inequality. Let @math denote the number of nodes. We present an @math -approximation algorithm for a generalization of the metric-cost subset @math -node-connectivity problem. Our approximation guarantee is proved via lower bounds that apply to the simple edge-connectivity version of the problem, where the requirements are for edge-disjoint paths rather than for openly node-disjoint paths. A corollary is that, for metric costs and for each @math , there exists a @math -node connected graph whose cost is within a factor of @math of the cost of any simple @math -edge connected graph. Based on our @math -approximation algorithm, we present an @math -approximation algorithm for the metric-cost node-connectivity survivable network design problem, where @math denotes the maximum requirement over all pairs of nodes. Our results contrast with the case of edge costs of 0 or 1, where Kortsarz, Krauthgamer, and Lee. [SIAM J. Comput., 33 (2004), pp. 
704-720] recently proved, assuming NP @math DTIME( @math ), a hardness-of-approximation lower bound of @math for the subset @math -node-connectivity problem, where @math denotes a small positive number.", "We conjecture that any planar 3-connected graph can be embedded in the plane in such a way that for any nodes s and t, there is a path from s to t such that the Euclidean distance to t decreases monotonically along the path. A consequence of this conjecture would be that in any ad hoc network containing such a graph as a spanning subgraph, two-dimensional virtual coordinates for the nodes can be found for which the method of purely greedy geographic routing is guaranteed to work. We discuss this conjecture and its equivalent forms show that its hypothesis is as weak as possible, and show a result delimiting the applicability of our approach: any 3-connected K3,3-free graph has a planar 3-connected spanning subgraph. We also present two alternative versions of greedy routing on virtual coordinates that provably work. Using Steinitz's theorem we show that any 3-connected planar graph can be embedded in three dimensions so that greedy routing works, albeit with a modified notion of distance; we present experimental evidence that this scheme can be implemented effectively in practice. We also present a simple but provably robust version of greedy routing that works for any graph with a 3-connected planar spanning subgraph.", "We present algorithmic and hardness results for network design problems with degree or order constraints. The first problem we consider is the Survivable Network Design problem with degree constraints on vertices. The objective is to find a minimum cost subgraph which satisfies connectivity requirements between vertices and also degree upper bounds @math on the vertices. This includes the well-studied Minimum Bounded Degree Spanning Tree problem as a special case. 
Our main result is a @math -approximation algorithm for the edge-connectivity Survivable Network Design problem with degree constraints, where the cost of the returned solution is at most twice the cost of an optimum solution (satisfying the degree bounds) and the degree of each vertex @math is at most @math . This implies the first constant factor (bicriteria) approximation algorithms for many degree constrained network design problems, including the Minimum Bounded Degree Steiner Forest problem. Our results also extend to directed graphs and provide the first constant factor (bicriteria) approximation algorithms for the Minimum Bounded Degree Arborescence problem and the Minimum Bounded Degree Strongly @math -Edge-Connected Subgraph problem. In contrast, we show that the vertex-connectivity Survivable Network Design problem with degree constraints is hard to approximate, even when the cost of every edge is zero. A striking aspect of our algorithmic result is its simplicity. It is based on the iterative relaxation method, which is an extension of Jain's iterative rounding method. This provides an elegant and unifying algorithmic framework for a broad range of network design problems. We also study the problem of finding a minimum cost @math -edge-connected subgraph with at least @math vertices, which we call the @math -subgraph problem. This generalizes some well-studied classical problems such as the @math -MST and the minimum cost @math -edge-connected subgraph problems. We give a polylogarithmic approximation for the @math -subgraph problem. However, by relating it to the Densest @math -Subgraph problem, we provide evidence that the @math -subgraph problem might be hard to approximate for arbitrary @math ." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Deterministic butterflies have been proposed for DHT routing by Xu et al. @cite_16 , who subsequently developed their ideas into Ulysses @cite_6 . Papillon for distance function @math has structural similarities with Ulysses -- both are butterfly-based networks. The key differences are as follows: (a) Ulysses does not use @math as its distance function, (b) Ulysses does not use greedy routing, and (c) Ulysses uses more links than Papillon for distance function @math -- additional links have been introduced to ameliorate non-uniform edge congestion caused by Ulysses' routing algorithm. In contrast, the congestion-free routing algorithm developed in this paper obviates the need for any additional links in Papillon (see Theorem ).
{ "cite_N": [ "@cite_16", "@cite_6" ], "mid": [ "2049130980", "2031684765", "2096706512", "69178526" ], "abstract": [ "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n / log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link/node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful/ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log_2 n), or 2) a routing table of size d and network diameter of O(n^(1/d)), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. 
However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log_2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4% without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins/leaves.", "The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance.", "Dipsea is a modular architecture for building a Distributed Hash Table (DHT). A DHT is a large hash table that is cooperatively maintained by a large number of machines communicating over the Internet. Decentralization and automatic re-configuration are two key design goals for a DHT. 
The architecture of Dipsea consists of three layers: ID Management, Overlay Routing and Data Management. The Overlay Routing layer consists of three modules: Emulation Engine, Ring Management and Choice of Long-Distance Links. Efficient algorithms for ID Management are designed—these algorithms require few messages, require few re-assignments of existing IDs and ensure that the hash table is divided among the participating machines as evenly as possible. Ring Management ensures that participating machines establish connections among themselves, as a function of their IDs, to form a fault-tolerant ring. The Emulation Engine is responsible for “emulation” of arbitrary families of routing networks. It handles issues arising out of dynamism (arrival and departure of participating machines), scale (variation in the average number of participating machines) and physical network proximity. Choice of Long-Distance Links allows a DHT to choose any family of routing networks (deterministic or randomized) for emulation. Several deterministic and randomized routing networks are designed and analyzed. Among these are Symphony (one of the first randomized DHT routing networks), Papillon (a deterministic routing network that guarantees asymptotically optimal route lengths with greedy routing for a fixed out-degree of nodes), and Mariposa (a randomized routing network that also guarantees optimal route lengths for a given out-degree of nodes)." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Viceroy @cite_14 is a butterfly network which routes in @math hops in expectation with @math links per node. Mariposa (see reference @cite_23 or @cite_21 ) improves upon Viceroy by providing routes of length @math in the worst-case, with @math out-going links per node. Viceroy and Mariposa are different from other randomized networks in terms of their design philosophy. The topology of Papillon borrows elements of the geometric embedding of the butterfly in a circle from Viceroy @cite_14 and from Mariposa @cite_21 , while extending them for greedy routing.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_23" ], "mid": [ "2953216592", "1980177572", "2568950526", "2155564629" ], "abstract": [ "Given @math wireless transceivers located in a plane, a fundamental problem in wireless communications is to construct a strongly connected digraph on them such that the constituent links can be scheduled in fewest possible time slots, assuming the SINR model of interference. In this paper, we provide an algorithm that connects an arbitrary point set in @math slots, improving on the previous best bound of @math due to Moscibroda. This is complemented with a super-constant lower bound on our approach to connectivity. An important feature is that the algorithms allow for bi-directional (half-duplex) communication. One implication of this result is an improved bound of @math on the worst-case capacity of wireless networks, matching the best bound known for the extensively studied average-case. We explore the utility of oblivious power assignments, and show that essentially all such assignments result in a worst case bound of @math slots for connectivity. This rules out a recent claim of a @math bound using oblivious power. On the other hand, using our result we show that @math slots suffice, where @math is the ratio between the largest and the smallest links in a minimum spanning tree of the points. Our results extend to the related problem of minimum latency aggregation scheduling, where we show that aggregation scheduling with @math latency is possible, improving upon the previous best known latency of @math . We also initiate the study of network design problems in the SINR model beyond strong connectivity, obtaining similar bounds for biconnected and @math -edge connected structures.", "In this paper we study randomized algorithms for circuit switching on multistage networks related to the butterfly. 
We devise algorithms that route messages by constructing circuits (or paths) for the messages with small congestion, dilation, and setup time. Our algorithms are based on the idea of having each message choose a route from two possibilities, a technique that has previously proven successful in simpler load balancing settings. As an application of our techniques, we propose a novel design for a data server.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. 
Alg., 2013", "In classical routing strategies for multihop mobile wireless networks packets are routed on a pre-defined route usually obtained by a shortest path routing protocol. In opportunistic routing schemes, for each packet and each hop, the next relay is found by dynamically selecting the node that captures the packet transmission and which is the nearest to the destination. Such a scheme allows each packet to take advantage of the local pattern of transmissions and fadings at any slot and at any hop. The aim of this paper is to quantify and optimize the potential performance gains of such opportunistic routing strategies compared with classical routing schemes. The analysis is conducted under the following lower layer assumptions: the Medium Access (MAC) layer is a spatial version of Aloha which has been shown to scale well for large multihop networks; the capture of a packet by some receiver is determined by the Signal over Interference and Noise Ratio (SINR) experienced by the receiver. The paper contains a detailed simulation study which shows that such time-space opportunistic schemes very significantly outperform classical routing schemes. It also contains a mathematical study where we show how to optimally tune the MAC parameters so as to minimize the average number of time slots required to carry a typical packet from origin to destination on long paths. We show that this optimization is independent of network density." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Ballintijn et al. argue that resource naming should be decoupled from resource identification @cite_7 . Resources are named with human-friendly names, which are based on DNS @cite_9 , while identification is done with object handles, which are globally unique identifiers that need not contain network locations. They use DNS to resolve human-friendly names to object handles and a location service to resolve object handles to network locations. The location service uses a hierarchical architecture for resolving object handles. This two-level approach allows the naming of resources without worrying about replication or migration and the identification of resources without worrying about naming policies.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2096392987", "2083158002", "1505083828", "2047197963" ], "abstract": [ "To fill the gap between what uniform resource names (URNs) provide and what humans need, we propose a new kind of uniform resource identifier (URI) called human-friendly names (HFNs). In this article, we present the design for a scalable HFN-to-URL (uniform resource locator) resolution mechanism that makes use of the Domain Name System (DNS) and the Globe location service to name and locate resources. This new URI proposes to improve both scalability and usability in naming replicated resources on the Web.", "Name services are critical for mapping logical resource names to physical resources in large-scale distributed systems. The Domain Name System (DNS) used on the Internet, however, is slow, vulnerable to denial of service attacks, and does not support fast updates. These problems stem fundamentally from the structure of the legacy DNS. This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through proactive caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement. 
Performance measurements from a real-life deployment of the system in PlanetLab shows that CoDoNS provides fast lookups, automatically reconfigures around faults without manual involvement and thwarts distributed denial of service attacks by promptly redistributing load across nodes.", "This thesis describes a novel statistical named-entity (i.e. “proper name”) recognition system known as “MENE” (Maximum Entropy Named Entity). Named entity (N.E.) recognition is a form of information extraction in which we seek to classify every word in a document as being a person-name, organization, location, date, time, monetary value, percentage, or “none of the above”. The task has particular significance for Internet search engines, machine translation, the automatic indexing of documents, and as a foundation for work on more complex information extraction tasks. Two of the most significant problems facing the constructor of a named entity system are the questions of portability and system performance. A practical N.E. system will need to be ported frequently to new bodies of text and even to new languages. The challenge is to build a system which can be ported with minimal expense (in particular minimal programming by a computational linguist) while maintaining a high degree of accuracy in the new domains or languages. MENE attempts to address these issues through the use of maximum entropy probabilistic modeling. It utilizes a very flexible object-based architecture which allows it to make use of a broad range of knowledge sources in making its tagging decisions. In the DARPA-sponsored MUC-7 named entity evaluation, the system displayed an accuracy rate which was well-above the median, demonstrating that it can achieve the performance goal. In addition, we demonstrate that the system can be used as a post-processing tool to enhance the output of a hand-coded named entity recognizer through experiments in which MENE improved on the performance of N.E. 
systems from three different sites. Furthermore, when all three external recognizers are combined under MENE, we are able to achieve very strong results which, in some cases, appear to be competitive with human performance. Finally, we demonstrate the trans-lingual portability of the system. We ported the system to two Japanese-language named entity tasks, one of which involved a new named entity category, “artifact”. Our results on these tasks were competitive with the best systems built by native Japanese speakers despite the fact that the author speaks no Japanese.", "We consider the problem of resolving duplicates in a database of places, where a place is defined as any entity that has a name and a physical location. When other auxiliary attributes like phone and full address are not available, deduplication based solely on names and approximate location becomes an exceptionally challenging problem that requires both domain knowledge as well an local geographical knowledge. For example, the pairs \"Newpark Mall Gap Outlet\" and \"Newpark Mall Sears Outlet\" have a high string similarity, but determining that they are different requires the domain knowledge that they represent two different store names in the same mall. Similarly, in most parts of the world, a local business called \"Central Park Cafe\" might simply be referred to by \"Central Park\", except in New York, where the keyword \"Cafe\" in the name becomes important to differentiate it from the famous park in the city. In this paper, we present a language model that can encapsulate both domain knowledge as well as local geographical knowledge. We also present unsupervised techniques that can learn such a model from a database of places. Finally, we present deduplication techniques based on such a model, and we demonstrate, using real datasets, that our techniques are much more effective than simple TF-IDF based models in resolving duplicates. 
Our techniques are used in production at Facebook for deduplicating the Places database." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Walfish et al. argue for the use of semantic-free references for identifying web documents instead of URLs @cite_8 . The reason is that changes in naming policies or ownership of DNS domain names often result in previous URLs pointing to unrelated or non-existent documents, even when the original documents still exist. Semantic-free references are hashes of public keys or other data, and are resolved to URLs using a distributed hash table based on Chord @cite_13 . Using semantic-free references would allow web documents to link to each other without worrying about changes in the URLs of the documents.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "144112633", "1784290353", "2084401375", "232172822" ], "abstract": [ "The Web relies on the Domain Name System (DNS) to resolve the hostname portion of URLs into IP addresses. This marriage-of-convenience enabled the Web's meteoric rise, but the resulting entanglement is now hindering both infrastructures--the Web is overly constrained by the limitations of DNS, and DNS is unduly burdened by the demands of the Web. There has been much commentary on this sad state-of-affairs, but dissolving the ill-fated union between DNS and the Web requires a new way to resolve Web references. To this end, this paper describes the design and implementation of Semantic Free Referencing (SFR), a reference resolution infrastructure based on distributed hash tables (DHTs).", "Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-the-art named entity disambiguation frameworks. 
Our results indicate that we outperform the state-of-the-art approach by up to 29% F-measure.", "The integration of the classical Web (of documents) with the emerging Web of Data is a challenging vision. In this paper we focus on an integration approach during searching which aims at enriching the responses of non-semantic search systems (e.g. professional search systems, web search engines) with semantic information, i.e. Linked Open Data (LOD), and exploiting the outcome for providing an overview of the search space and allowing the users (apart from restricting it) to explore the related LOD. We use named entities (e.g. persons, locations, etc.) as the \"glue\" for automatically connecting search hits with LOD. We consider a scenario where this entity-based integration is performed at query time with no human effort, and no a-priori indexing, which is beneficial in terms of configurability and freshness. To realize this scenario one has to tackle various challenges. One spiny issue is that the number of identified entities can be high, the same is true for the semantic information about these entities that can be fetched from the available LOD (i.e. their properties and associations with other entities). To this end, in this paper we propose a Link Analysis-based method which is used for (a) ranking (and thus selecting to show) the more important semantic information related to the search results, (b) deriving and showing top-K semantic graphs. In the sequel, we report the results of a survey regarding the marine domain with promising results, and comparative results that illustrate the effectiveness of the proposed (Page Rank-based) ranking scheme. Finally, we report experimental results regarding efficiency showing that the proposed functionality can be offered even at query time.", "Document management is not often handled appropriately by organisations, if at all. 
Despite that, and despite the lack of structure in documents, organisations must face regulations that require owning a document collection with semantic content. The technique based on taxonomies and folksonomies can easily produce an adequate semantic classification for documents. It requires an adequate setup among the domain experts that apply it. The approach we propose uses Lean Kanban to coordinate the phases of definition, validation and implementation of taxonomies and folksonomies. It helps organisations to create a semantic classification of existing document resources, making them ready to be used in ways that were not possible before. At the same time, it helps to improve the quality of work of the organisation itself, adding speed to document search." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Distributed hash tables, also called peer-to-peer structured overlay networks, are distributed systems that map a uniform distribution of identifiers to nodes in the system @cite_3 @cite_13 @cite_20. Nodes act as peers, with no node having to play a special role, and a distributed hash table can continue operation even as nodes join or leave the system. Lookups and updates to a distributed hash table are scalable, typically taking time logarithmic in the number of nodes in the system. We experimentally evaluated our work using OpenDHT @cite_12, which is a public distributed hash table service based on Bamboo @cite_5.
{ "cite_N": [ "@cite_3", "@cite_5", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "1587208850", "2151682391", "2049794981", "2134320193" ], "abstract": [ "Distributed Hash Tables (DHTs) are very efficient distributed systems for routing, but at the same time vulnerable to disruptive nodes. Designers of such systems want them used in open networks, where an adversary can perform a sybil attack by introducing a large number of corrupt nodes in the network, considerably degrading its performance. We introduce a routing strategy that alleviates some of the effects of such an attack by making sure that lookups are performed using a diverse set of nodes. This ensures that at least some of the nodes queried are good, and hence the search makes forward progress. This strategy makes use of latent social information present in the introduction graph of the network.", "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for a mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several ten thousands of nodes. This paper addresses the problem of churn in mobile hash tables. Similar to Internet-based peer-to-peer systems, a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery techniques used in the Internet domain can be deployed in the mobile hash table.
Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks.", "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems.In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems.For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord.Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. 
This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "Decentralized systems, such as structured overlays, are subject to the Sybil attack, in which an adversary creates many false identities to increase its influence. This paper describes a one-hop distributed hash table which uses the social links between users to strongly resist the Sybil attack. The social network is assumed to be fast mixing, meaning that a random walk in the honest part of the network quickly approaches the uniform distribution. As in the related SybilLimit system [25], with a social network of n honest nodes and m honest edges, the protocol can tolerate up to o(n log n) attack edges (social links from honest nodes to compromised nodes). The routing tables contain O(√m log m) entries per node and are constructed efficiently by a distributed protocol. This is the first sublinear solution to this problem. Preliminary simulation results are presented to demonstrate the approach's effectiveness." ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
There has also been research on implementing distributed hash tables on top of mobile ad hoc networks @cite_4 @cite_6. As with Mobile IP @cite_15 and HIP @cite_1, hosts in mobile ad hoc networks do not change their network address as they move, so there would be no need to update entries in a distributed hash table used for resolving resource identifiers. However, since almost none of the Internet is part of a mobile ad hoc network, this approach is of little help to applications that need to run on current networks.
{ "cite_N": [ "@cite_1", "@cite_15", "@cite_4", "@cite_6" ], "mid": [ "2151682391", "2040228414", "2134011626", "2126105048" ], "abstract": [ "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for a mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several ten thousands of nodes. This paper addresses the problem of churn in mobile hash tables. Similar to Internet-based peer-to-peer systems, a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery techniques used in the Internet domain can be deployed in the mobile hash table. Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks.", "Ad hoc networks have no spatial hierarchy and suffer from frequent link failures which prevent mobile hosts from using traditional routing schemes. Under these conditions, mobile hosts must find routes to destinations without the use of designated routers and also must dynamically adapt the routes to the current link conditions. This article proposes a distributed adaptive routing protocol for finding and maintaining stable routes based on signal strength and location stability in an ad hoc network and presents an architecture for its implementation.
Interoperability with mobile IP (Internet protocol) is discussed.", "Reliable storage of data with concurrent read write accesses (or query update) is an ever recurring issue in distributed settings. In mobile ad hoc networks, the problem becomes even more challenging due to highly dynamic and unpredictable topology changes. It is precisely this unpredictability that makes probabilistic protocols very appealing for such environments. Inspired by the principles of probabilistic quorum systems, we present a Probabilistic quorum system for ad hoc networks (Pan), a collection of protocols for the reliable storage of data in mobile ad hoc networks. Our system behaves in a predictable way due to the gossip-based diffusion mechanism applied for quorum accesses, and the protocol overhead is reduced by adopting an asymmetric quorum construction. We present an analysis of our Pan system, in terms of both reliability and overhead, which can be used to fine tune protocol parameters to obtain the desired tradeoff between efficiency and fault tolerance. We confirm the predictability and tunability of Pan through simulations with ns-2.", "The advances in computer and wireless communication technologies have led to an increasing interest in ad hoc networks which are temporarily constructed by only mobile hosts. In ad hoc networks, since mobile hosts move freely, disconnections occur frequently, and this causes frequent network division. Consequently, data accessibility in ad hoc networks is lower than that in the conventional fixed networks. We propose three replica allocation methods to improve data accessibility by replicating data items on mobile hosts. In these three methods, we take into account the access frequency from mobile hosts to each data item and the status of the network connection. We also show the results of simulation experiments regarding the performance evaluation of our proposed methods." ] }
0706.0430
2950884312
As decentralized computing scenarios get ever more popular, unstructured topologies are natural candidates to consider running mix networks upon. We consider mix network topologies where mixes are placed on the nodes of an unstructured network, such as social networks and scale-free random networks. We explore the efficiency and traffic analysis resistance properties of mix networks based on unstructured topologies, as opposed to theoretically optimal structured topologies, under high latency conditions. We consider a mix of directed and undirected network models, as well as one real-world case study -- the LiveJournal friendship network topology. Our analysis indicates, first, that mix networks based on scale-free and small-world topologies have mix-route lengths roughly comparable to those in expander graphs; second, that compromise of the most central nodes has little effect on anonymization properties; and third, that batch sizes required for warding off intersection attacks need to be an order of magnitude higher in unstructured networks than in expander graph topologies.
Borisov @cite_11 analyzes anonymous communications over an overlay network with a de Bruijn graph topology, and comments on its successful mixing capabilities.
{ "cite_N": [ "@cite_11" ], "mid": [ "2163598416", "2568950526", "2048454711", "2953249628" ], "abstract": [ "As more of our daily activities are carried out online, it becomes important to develop technologies to protect our online privacy. Anonymity is a key privacy technology, since it serves to hide patterns of communication that can often be as revealing as their contents. This motivates our study of the use of large scale peer-to-peer systems for building anonymous systems. We first develop a novel methodology for studying the anonymity of peer-to-peer systems, based on an information-theoretic anonymity metric and simulation. We use simulations to sample a probability distribution modeling attacker knowledge under conservative assumptions and estimate the entropy-based anonymity metric using the sampled distribution. We then validate this approach against an analytic method for computing entropy. The use of sampling introduces some error, but it can be accurately bounded and therefore we can make rigorous statements about the success of an entire class of attacks. We next apply our methodology to perform the first rigorous analysis of Freenet, a peer-to-peer anonymous publishing system, and identify a number of weaknesses in its design. We show that a targeted attack on high-degree nodes can be very effective at reducing anonymity. We also consider a next generation routing algorithm proposed by the Freenet authors to improve performance and show that it has a significant negative impact on anonymity. Finally, even in the best case scenario, the anonymity levels provided by Freenet are highly variable and, in many cases, little or no anonymity is achieved. To provide more uniform anonymity protection, we propose a new design for peer-to-peer anonymous systems based on structured overlays. We use random walks along the overlay to provide anonymity. 
We compare the mixing times of random walks on different graph structures and find that de Bruijn graphs are superior to other structures such as the hypercube or butterfly. Using our simulation methodology, we analyze the anonymity achieved by our design running on top of Koorde, a structured overlay based on de Bruijn graphs. We show that it provides anonymity competitive with Freenet in the average case, while ensuring that worst-case anonymity remains at an acceptable level. We also maintain logarithmic guarantees on routing performance.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. 
We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants.", "This paper concerns the communication primitives of broadcasting (one-to-all communication) and gossiping (all-to-all communication) in radio networks with known topology, i.e., where for each primitive the schedule of transmissions is precomputed based on full knowledge about the size and the topology of the network. The first part of the paper examines the two communication primitives in general graphs. In particular, it proposes a new (efficiently computable) deterministic schedule that uses O(D+Δ log n) time units to complete the gossiping task in any radio network with size n, diameter D and max-degree Δ. Our new schedule improves and simplifies the currently best known gossiping schedule, requiring time O(D+(DΔ log^(i+1) n)^(1/(i+2))), for any network with diameter D=Ω(log^(i+4) n), where i is an arbitrary integer constant i ≥ 0, see [17]. For the broadcast task we deliver two new results: a deterministic efficient algorithm for computing a radio schedule of length D+O(log^3 n), and a randomized algorithm for computing a radio schedule of length D+O(log^2 n). These results improve on the best currently known D+O(log^4 n) time schedule due to Elkin and Kortsarz [12]. The second part of the paper focuses on radio communication in planar graphs, devising a new broadcasting schedule using fewer than 3D time slots. This result improves, for small values of D, on the currently best known D+O(log^3 n) time schedule proposed by Elkin and Kortsarz in [12].
Our new algorithm should also be seen as a separation result between planar graphs and general graphs with small diameter, given the polylogarithmic inapproximability result for general graphs due to Elkin and Kortsarz, see [11].", "We consider the problem of diffusing information in networks that contain malicious nodes. We assume that each normal node in the network has no knowledge of the network topology other than an upper bound on the number of malicious nodes in its neighborhood. We introduce a topological property known as r-robustness of a graph, and show that this property provides improved bounds on tolerating malicious behavior, in comparison to traditional concepts such as connectivity and minimum degree. We use this topological property to analyze the canonical problems of distributed consensus and broadcasting, and provide sufficient conditions for these operations to succeed. Finally, we provide a construction for r-robust graphs and show that the common preferential-attachment model for scale-free networks produces a robust graph." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
The chief alternative to iterative approximation is to produce an exact propositional characterization of the abstract transition relation. For example, the method of @cite_3 uses small-domain techniques to translate a first-order transition formula into a propositional one that is equisatisfiable over the state-holding predicates. However, this translation introduces a large number of auxiliary Boolean variables, making it impractical to use BDD-based methods for image computation. Though SAT-based Boolean quantifier elimination methods can be used, the effect is still essentially to enumerate the states in the image. By contrast, the interpolation-based method produces an approximate transition relation with no auxiliary Boolean variables, allowing efficient use of BDD-based methods.
{ "cite_N": [ "@cite_3" ], "mid": [ "1552505815", "2113033572", "1600009974", "1549166962" ], "abstract": [ "The paper presents an approach for shape analysis based on predicate abstraction. Using a predicate base that involves reachability relations between program variables pointing into the heap, we are able to analyze functional properties of programs with destructive heap updates, such as list reversal and various in-place list sorts. The approach allows verification of both safety and liveness properties. The abstraction we use does not require any abstract representation of the heap nodes (e.g. abstract shapes), only reachability relations between the program variables. The computation of the abstract transition relation is precise and automatic yet does not require the use of a theorem prover. Instead, we use a small model theorem to identify a truncated (small) finite-state version of the program whose abstraction is identical to the abstraction of the unbounded-heap version of the same program. The abstraction of the finite-state version is then computed by BDD techniques. For proving liveness properties, we augment the original system by a well-founded ranking function, which is abstracted together with the system. Well-foundedness is then abstracted into strong fairness (compassion). We show that, for a restricted class of programs that still includes many interesting cases, the small model theorem can be applied to this joint abstraction. Independently of the application to shape-analysis examples, we demonstrate the utility of the ranking abstraction method and its advantages over the direct use of ranking functions in a deductive verification of the same property.", "Recently introduced implicit set manipulation techniques have made it possible to formally verify finite state machines with state graphs too large to be built. 
The authors show that these techniques can also be used with success to compute and manipulate implicitly large sets of prime and of essential prime implicants of incompletely specified Boolean functions. These sets are denoted by meta-products that are represented with binary decision diagrams (BDDs). Two procedures are described. The first is based on the standard BDD operators, and the second, more efficient, takes advantage of the structural properties of BDDs and of meta-products to handle a larger class of functions than the first procedure. >", "Existing program analysis tools that implement abstraction rely on saturating procedures to compute over-approximations of fixpoints. As an alternative, we propose a new algorithm to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph by their abstract transformer. Our technique is able to generate diagnostic information in case of property violations, which we call leaping counterexamples. We have implemented this technique and report experimental results on a set of large ANSI-C programs using abstract domains that focus on properties related to string-buffers.", "A new form of SAT-based symbolic model checking is described. Instead of unrolling the transition relation, it incrementally generates clauses that are inductive relative to (and augment) stepwise approximate reachability information. In this way, the algorithm gradually refines the property, eventually producing either an inductive strengthening of the property or a counterexample trace. Our experimental studies show that induction is a powerful tool for generalizing the unreachability of given error states: it can refine away many states at once, and it is effective at focusing the proof search on aspects of the transition system relevant to the property. Furthermore, the incremental structure of the algorithm lends itself to a parallel implementation." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
The most closely related method is that of Das and Dill @cite_7. It analyzes abstract counterexamples (sequences of predicate states), refining the transition relation approximation so as to rule out infeasible transitions. This method is effective, but it has the disadvantage that it relies on a specific counterexample and does not consider the property being verified; thus it can easily generate refinements that are not relevant to the property. The interpolation-based method does not use abstract counterexamples. Rather, it generates facts relevant to proving the given property in a bounded sense, so it tends to generate more relevant refinements and, as a result, converges more rapidly.
{ "cite_N": [ "@cite_7" ], "mid": [ "1503537039", "2134147303", "1600009974", "171295454" ], "abstract": [ "Abstraction can often lead to spurious counterexamples. Counterexample-guided abstraction refinement is a method of strengthening abstractions based on the analysis of these spurious counterexamples. For invariance properties, a counterexample is a finite trace that violates the invariant; it is spurious if it is possible in the abstraction but not in the original system. When proving termination or other liveness properties of infinite-state systems, a useful notion of spurious counterexamples has remained an open problem. For this reason, no counterexample-guided abstraction refinement algorithm was known for termination. In this paper, we address this problem and present the first known automatic counterexample-guided abstraction refinement algorithm for termination proofs. We exploit recent results on transition invariants and transition predicate abstraction. We identify two reasons for spuriousness: abstractions that are too coarse, and candidate transition invariants that are too strong. Our counterexample-guided abstraction refinement algorithm successively weakens candidate transition invariants and refines the abstraction.", "Recently, we have improved the efficiency of the predicate abstraction scheme presented by Das, Dill and Park (1999). As a result, the number of validity checks needed to prove the necessary verification condition has been reduced. The key idea is to refine an approximate abstract transition relation based on the counter-example generated. The system starts with an approximate abstract transition relation on which the verification condition (in our case, this is a safety property) is model-checked. If the property holds then the proof is done; otherwise the model checker returns an abstract counter-example trace. This trace is used to refine the abstract transition relation if possible and start anew. 
At the end of the process, the system either proves the verification condition or comes up with an abstract counter-example trace which holds in the most accurate abstract transition relation possible (with the user-provided predicates as a basis). If the verification condition fails in the abstract system, then either the concrete system does not satisfy it or the abstraction predicates chosen are not strong enough. This algorithm has been used on a concurrent garbage collection algorithm and a secure contract-signing protocol. This method improved the performance on the first problem significantly, and allowed us to tackle the second problem, which the previous method could not handle.", "Existing program analysis tools that implement abstraction rely on saturating procedures to compute over-approximations of fixpoints. As an alternative, we propose a new algorithm to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph by their abstract transformer. Our technique is able to generate diagnostic information in case of property violations, which we call leaping counterexamples. We have implemented this technique and report experimental results on a set of large ANSI-C programs using abstract domains that focus on properties related to string-buffers.", "We present a technique for using infeasible program paths to automatically infer Range Predicates that describe properties of unbounded array segments. First, we build proofs showing the infeasibility of the paths, using axioms that precisely encode the high-level (but informal) rules with which programmers reason about arrays. Next, we mine the proofs for Craig Interpolants which correspond to predicates that refute the particular counterexample path. 
By embedding the predicate inference technique within a Counterexample-Guided Abstraction-Refinement (CEGAR) loop, we obtain a method for verifying data-sensitive safety properties whose precision is tailored in a program- and property-sensitive manner. Though the axioms used are simple, we show that the method suffices to prove a variety of array-manipulating programs that were previously beyond automatic model checkers." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
In @cite_8, interpolants are used to choose new predicates with which to refine a predicate abstraction. Here, by contrast, we use interpolants to refine an approximation of the abstract transition relation for a given set of predicates.
{ "cite_N": [ "@cite_8" ], "mid": [ "2151463894", "2134147303", "1601517679", "2080841971" ], "abstract": [ "The success of model checking for large programs depends crucially on the ability to efficiently construct parsimonious abstractions. A predicate abstraction is parsimonious if at each control location, it specifies only relationships between current values of variables, and only those which are required for proving correctness. Previous methods for automatically refining predicate abstractions until sufficient precision is obtained do not systematically construct parsimonious abstractions: predicates usually contain symbolic variables, and are added heuristically and often uniformly to many or all control locations at once. We use Craig interpolation to efficiently construct, from a given abstract error trace which cannot be concretized, a parsimonious abstraction that removes the trace. At each location of the trace, we infer the relevant predicates as an interpolant between the two formulas that define the past and the future segment of the trace. Each interpolant is a relationship between current values of program variables, and is relevant only at that particular program location. It can be found by a linear scan of the proof of infeasibility of the trace. We develop our method for programs with arithmetic and pointer expressions, and call-by-value function calls. For function calls, Craig interpolation offers a systematic way of generating relevant predicates that contain only the local variables of the function and the values of the formal parameters when the function was called. We have extended our model checker Blast with predicate discovery by Craig interpolation, and applied it successfully to C programs with more than 130,000 lines of code, which was not possible with approaches that build less parsimonious abstractions.", "Recently, we have improved the efficiency of the predicate abstraction scheme presented by Das, Dill and Park (1999).
As a result, the number of validity checks needed to prove the necessary verification condition has been reduced. The key idea is to refine an approximate abstract transition relation based on the counter-example generated. The system starts with an approximate abstract transition relation on which the verification condition (in our case, this is a safety property) is model-checked. If the property holds then the proof is done; otherwise the model checker returns an abstract counter-example trace. This trace is used to refine the abstract transition relation if possible and start anew. At the end of the process, the system either proves the verification condition or comes up with an abstract counter-example trace which holds in the most accurate abstract transition relation possible (with the user-provided predicates as a basis). If the verification condition fails in the abstract system, then either the concrete system does not satisfy it or the abstraction predicates chosen are not strong enough. This algorithm has been used on a concurrent garbage collection algorithm and a secure contract-signing protocol. This method improved the performance on the first problem significantly, and allowed us to tackle the second problem, which the previous method could not handle.", "Predicate abstraction is a useful form of abstraction for the verification of transition systems with large or infinite state spaces. One of the main bottlenecks of this approach is the extremely large number of decision procedures calls that are required to construct the abstract state space. In this paper we propose the use of a symbolic decision procedure and its application for predicate abstraction. The advantage of the approach is that it reduces the number of calls to the decision procedure exponentially and also provides for reducing the re-computations inherent in the current approaches. 
We provide two implementations of the symbolic decision procedure: one based on BDDs which leverages the current advances in early quantification algorithms, and the other based on SAT-solvers. We also demonstrate our approach with quantified predicates for verifying parameterized systems. We illustrate the effectiveness of this approach on benchmarks from the verification of microprocessors, communication protocols, parameterized systems, and Microsoft Windows device drivers.", "Interpolation based automatic abstraction is a powerful and robust technique for the automated analysis of hardware and software systems. Its use has however been limited to control-dominated applications because of a lack of algorithms for computing interpolants for data structures used in software programs. We present efficient procedures to construct interpolants for the theories of arrays, sets, and multisets using the reduction approach for obtaining decision procedures for complex data structures. The approach taken is that of reducing the theories of such data structures to the theories of equality and linear arithmetic for which efficient interpolating decision procedures exist. This enables interpolation based techniques to be applied to proving properties of programs that manipulate these data structures." ] }
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
There exists a significant body of literature for networks with Poisson-distributed nodes. In @cite_6, the characteristic function of the interference was obtained when there is no fading and the nodes are Poisson distributed; the probability distribution function of the interference is also provided as an infinite series. The authors of @cite_2 analyze the interference when the interference contribution by a transmitter located at @math to a receiver located at the origin is exponentially distributed with parameter @math . Using this model, they derive the density function of the interference when the nodes are arranged as a one-dimensional lattice, and also obtain the Laplace transform of the interference when the nodes are Poisson distributed.
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2143252188", "2290648561", "1982849351", "2963692345" ], "abstract": [ "In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.", "A Manhattan Poisson line process divides the plane into an infinite number of rectangular rooms with walls extending infinitely along the axes. When the path loss is dominated by the penetration through each of the walls, a Poisson field of transmitters creates a heavy tailed interference at a randomly picked room, whose distribution is tractable in the Laplace domain. Interference correlation at different rooms is explicitly available. This model gives the first tractable mathematical abstraction to indoor physical environments where wireless signals are shadowed by (common) walls. Applying the analytical results leads to a formula for success probabilities of a transmission attempt between two given rooms.", "The spatial correlations in transmitter node locations introduced by common multiple access protocols make the analysis of interference, outage, and other related metrics in a wireless network extremely difficult. Most works therefore assume that nodes are distributed either as a Poisson point process (PPP) or a grid, and utilize the independence properties of the PPP (or the regular structure of the grid) to analyze interference, outage and other metrics. But, the independence of node locations makes the PPP a dubious model for nontrivial MACs which intentionally introduce correlations, e.g., spatial separation, while the grid is too idealized to model real networks. In this paper, we introduce a new technique based on the factorial moment expansion of functionals of point processes to analyze functions of interference, in particular outage probability. We provide a Taylor-series type expansion of functions of interference, wherein increasing the number of terms in the series provides a better approximation at the cost of increased complexity of computation. Various examples illustrate how this new approach can be used to find outage probability in both Poisson and non-Poisson wireless networks.", "In this paper, we consider a vehicular network in which the wireless nodes are located on a system of roads. We model the roadways, which are predominantly straight and randomly oriented, by a Poisson line process (PLP) and the locations of nodes on each road as a homogeneous 1D Poisson point process. Assuming that each node transmits independently, the locations of transmitting and receiving nodes are given by two Cox processes driven by the same PLP. For this setup, we derive the coverage probability of a typical receiver, which is an arbitrarily chosen receiving node, assuming independent Nakagami- @math fading over all wireless channels. Assuming that the typical receiver connects to its closest transmitting node in the network, we first derive the distribution of the distance between the typical receiver and the serving node to characterize the desired signal power. We then characterize coverage probability for this setup, which involves two key technical challenges. First, we need to handle several cases as the serving node can possibly be located on any line in the network and the corresponding interference experienced at the typical receiver is different in each case. Second, conditioning on the serving node imposes constraints on the spatial configuration of lines, which requires careful analysis of the conditional distribution of the lines. We address these challenges in order to characterize the interference experienced at the typical receiver. We then derive an exact expression for coverage probability in terms of the derivative of Laplace transform of interference power distribution. We analyze the trends in coverage probability as a function of the network parameters: line density and node density. We also provide some theoretical insights by studying the asymptotic characteristics of coverage probability." ] }