Dataset schema: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
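To make the schema concrete, here is a minimal parsing sketch. It assumes the records are stored as JSON Lines; the file name "related_work.jsonl" and the surrounding loop are hypothetical, and only the field names come from the schema above.

```python
import json

def load_records(path="related_work.jsonl"):
    """Load records with fields: aid, mid, abstract, related_work, ref_abstract."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            # aid: arXiv id, mid: MAG id, abstract / related_work: plain text,
            # ref_abstract: dict with "cite_N", "mid", and "abstract" lists.
            records.append(rec)
    return records

if __name__ == "__main__":
    for rec in load_records()[:3]:
        cites = rec["ref_abstract"].get("cite_N", [])
        print(rec["aid"], rec["mid"], f"{len(cites)} cited key(s)")
```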
1908.04933
2968171141
Re-Pair is a grammar compression scheme with favorable compression rates. The computation of Re-Pair comes with the cost of maintaining large frequency tables, which makes it hard to compute Re-Pair on large-scale data sets. As a solution for this problem we present, given a text of length @math whose characters are drawn from an integer alphabet, an @math time algorithm computing Re-Pair in @math bits of space including the text space, where @math is the number of terminals and non-terminals. The algorithm works in the restore model, supporting the recovery of the original input in the time for the Re-Pair computation with @math additional bits of working space. We give variants of our solution working in parallel or in the external memory model.
Re-Pair Computation Re-Pair is a grammar compression scheme proposed by , who gave an algorithm computing it in expected linear time with @math words of working space, where @math is the number of non-terminals (produced by Re-Pair). This space requirement was improved by , who presented a linear time algorithm taking @math words on top of the rewritable text space for a constant @math with @math . Subsequently, they improved their algorithm in @cite_6 to include the text space within the @math words of working space. However, they assume that the alphabet size @math is constant and @math , where @math is the machine word size. They also provide a solution for @math running in expected linear time. Recently, showed how to convert an arbitrary grammar (representing a text) into the Re-Pair grammar in compressed space, i.e., without decompressing the text. Combined with a grammar compressor that can process the text in compressed space in a streaming fashion, this result leads to the first Re-Pair computation in compressed space.
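The paragraph above describes Re-Pair only at a high level. For orientation, here is a deliberately naive sketch of the core Re-Pair loop (repeatedly replace the most frequent adjacent pair with a fresh nonterminal). It runs in roughly quadratic time and ignores the space-efficient frequency tables that the cited algorithms are about; the nonterminal naming is invented.

```python
from collections import Counter

def repair(text):
    """Naive Re-Pair sketch: returns (final sequence, rules).
    Illustrative only; quadratic time, unlike the linear-time,
    space-efficient algorithms discussed above."""
    seq = list(text)
    rules = {}                     # nonterminal -> (left symbol, right symbol)
    next_sym = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break                  # Re-Pair stops when no pair repeats
        nt = f"N{next_sym}"
        next_sym += 1
        rules[nt] = pair
        # Replace non-overlapping occurrences of the pair, left to right.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

# Example: repair("abracadabra") yields a shortened start sequence plus rules
# such as N0 -> ('a', 'b'), N1 -> (N0, 'r'), ...
```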
{ "cite_N": [ "@cite_6" ], "mid": [ "2610200252", "2620416450", "2951532727", "2038227658" ], "abstract": [ "Re-Pair is an efficient grammar compressor that operates by recursively replacing high-frequency character pairs with new grammar symbols. The most space-efficient linear-time algorithm computing Re-Pair uses @math words on top of the re-writable text (of length @math and stored in @math words), for any constant @math ; in practice however, this solution uses complex sub-procedures preventing it from being practical. In this paper, we present an implementation of the above-mentioned result making use of more practical solutions; our tool further improves the working space to @math words (text included), for some small constant @math . As a second contribution, we focus on compact representations of the output grammar. The lower bound for storing a grammar with @math rules is @math bits, and the most efficient encoding algorithm in the literature uses at most @math bits and runs in @math time. We describe a linear-time heuristic maximizing the compressibility of the output Re-Pair grammar. On real datasets, our grammar encoding uses---on average---only @math more bits than the information-theoretic minimum. In half of the tested cases, our compressor improves the output size of 7-Zip with maximum compression rate turned on.", "Indexing highly repetitive texts --- such as genomic databases, software repositories and versioned text collections --- has become an important problem since the turn of the millennium. A relevant compressibility measure for repetitive texts is @math , the number of runs in their Burrows-Wheeler Transform (BWT). One of the earliest indexes for repetitive collections, the Run-Length FM-index, used @math space and was able to efficiently count the number of occurrences of a pattern of length @math in the text (in loglogarithmic time per pattern symbol, with current techniques). However, it was unable to locate the positions of those occurrences efficiently within a space bounded in terms of @math . Since then, a number of other indexes with space bounded by other measures of repetitiveness --- the number of phrases in the Lempel-Ziv parse, the size of the smallest grammar generating the text, the size of the smallest automaton recognizing the text factors --- have been proposed for efficiently locating, but not directly counting, the occurrences of a pattern. In this paper we close this long-standing problem, showing how to extend the Run-Length FM-index so that it can locate the @math occurrences efficiently within @math space (in loglogarithmic time each), and reaching optimal time @math within @math space, on a RAM machine of @math bits. Within @math space, our index can also count in optimal time @math . Raising the space to @math , we support count and locate in @math and @math time, which is optimal in the packed setting and had not been obtained before in compressed space. We also describe a structure using @math space that replaces the text and extracts any text substring of length @math in almost-optimal time @math . (...continues...)", "We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. 
Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right-hand sides of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.", "We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the SAT college entrance exam. A verbal analogy has the form A:B::C:D, meaning \"A is to B as C is to D\"; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct; the average college-bound senior high school student answers about 57% correctly). We motivate this research by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as \"laser printer\", according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for both verbal analogies and noun-modifier relations." ] }
1908.05004
2967978532
The subject of this report is the re-identification of individuals in the Myki public transport dataset released as part of the Melbourne Datathon 2018. We demonstrate the ease with which we were able to re-identify ourselves, our co-travellers, and complete strangers; our analysis raises concerns about the nature and granularity of the data released, in particular the ability to identify vulnerable or sensitive groups.
A common theme is that a remarkably small number of distinct points of information is necessary to make an individual unique---whenever one person's information is linked together into a detailed record of their events, a few known events are usually enough to identify them. De Montjoye @cite_5 showed that 80% of individuals were unique based on 3 points of time and location, even when neither times nor places were very precisely given.
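As a rough illustration of the uniqueness argument (not the Myki analysis itself), the sketch below estimates how often a synthetic travel trace is pinned down by k known (time, location) points. The population size, grid resolution, and trace lengths are all invented.

```python
import random

def unique_fraction(traces, k=3, trials=200, rng=random.Random(0)):
    """Estimate the fraction of people whose trace is uniquely identified
    by k known (time, location) points. Purely illustrative; synthetic data."""
    hits = 0
    people = list(traces)
    for _ in range(trials):
        person = rng.choice(people)
        points = rng.sample(traces[person], k)
        # Who else in the dataset matches all k observed points?
        matches = [p for p, t in traces.items() if all(pt in t for pt in points)]
        hits += (matches == [person])
    return hits / trials

# Synthetic population: 1000 people, 20 coarse (hour, stop) events each.
rng = random.Random(1)
traces = {p: [(rng.randrange(24), rng.randrange(50)) for _ in range(20)]
          for p in range(1000)}
print(unique_fraction(traces, k=3))
```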
{ "cite_N": [ "@cite_5" ], "mid": [ "2115240023", "2103469743", "2110381504", "2072918779" ], "abstract": [ "We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95 of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1 10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.", "Two friends have become separated in a building or shopping mall and and wish to meet as quickly as possible. There are n possible locations where they might meet. However, the locations are identical and there has been no prior agreement where to meet or how to search. Hence they must use identical strategies and must treat all locations in a symmetrical fashion. Suppose their search proceeds in discrete time. Since they wish to avoid the possibility of never meeting, they will wish to use some randomizing strategy. If each person searches one of the n locations at random at each step, then rendezvous will require n steps on average. It is possible to do better than this: although the optimal strategy is difficult to characterize for general n, there is a strategy with an expected time until rendezvous of less than 0.829 n for large enough n. For n = 2 and 3 the optimal strategy can be established and on average 2 and 8 3 steps are required respectively. There are many tantalizing variations on this problem, which we discuss with some conjectures. DYNAMIC PROGRAMMING; SEARCH PROBLEMS", "1. The class to which each thing belongs. 2. The average properties of each class. 3. The deviations of each thing from the average properties of its parent class. If the things are found to be concentrated in a small area of the region of each class in the measurement space then the deviations will be small, and with reference to the average class properties most of the information about a thing is given by naming the class to which it belongs. In this case the information may be recorded much more briefly than if a classification had not been used. We suggest that the best classification is that which results in the briefest recording of all the attribute information. In this context, we will regard the measurements of each thing as being a message about that thing. Shannon (1948) showed that where messages may be regarded as each nominating the occurrence of a particular event among a universe of possible events, the information needed to record a series of such messages is minimised if the messages are encoded so that the length of each message is proportional to minus the logarithm of the relative frequency of occurrence of the event which it nominates. The information required is greatest when all frequencies are equal. The messages here nominate the positions in measurement space of the 5 1 points representing the attributes of the things. 
If the expected density of points in the measurement space is everywhere uniform, the positions of the points cannot be encoded more briefly than by a simple list of the measured values. However, if the expected density is markedly non-uniform, application", "In recent years, information trustworthiness has become a serious issue when user-generated contents prevail in our information world. In this paper, we investigate the important problem of estimating information trustworthiness from the perspective of correlating and comparing multiple data sources. To a certain extent, the consistency degree is an indicator of information reliability--Information unanimously agreed by all the sources is more likely to be reliable. Based on this principle, we develop an effective computational approach to identify consistent information from multiple data sources. Particularly, we analyze vast amounts of information collected from multiple review platforms (multiple sources) in which people can rate and review the items they have purchased. The major challenge is that different platforms attract diverse sets of users, and thus information cannot be compared directly at the surface. However, latent reasons hidden in user ratings are mostly shared by multiple sources, and thus inconsistency about an item only appears when some source provides ratings deviating from the common latent reasons. Therefore, we propose a novel two-step procedure to calculate information consistency degrees for a set of items which are rated by multiple sets of users on different platforms. We first build a Multi-Source Deep Belief Network (MSDBN) to identify the common reasons hidden in multi-source rating data, and then calculate a consistency score for each item by comparing individual sources with the reconstructed data derived from the latent reasons. We conduct experiments on real user ratings collected from Orbitz, Priceline and TripAdvisor on all the hotels in Las Vegas and New York City. Experimental results demonstrate that the proposed approach successfully finds the hotels that receive inconsistent, and possibly unreliable, ratings." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software-defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations are low relative to each configuration's operating time.
In @cite_2 , we considered the problem of data aggregation and dissemination in IoT networks serving, for example, monitoring, sensing, or machine control applications. A key aspect of the IoT that differentiates it from classical wireless sensor networks (WSNs) is its heterogeneity. We therefore considered cases where nodes may take on different roles (for example, sensors, destinations, or transit nodes) for different applications, and where multiple applications with different demands may be present in the network simultaneously. Moreover, these demands can be more general than only collecting data and forwarding it to a single sink, as is usually the case for WSNs. Rather, data may be processed within the network (we take the specific case of aggregation), and may be disseminated to multiple sinks via multicast transmissions.
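To make the aggregation benefit described above concrete, here is a small hedged sketch that counts per-node packet transmissions on a gathering tree with and without in-network aggregation. The topology and numbers are invented, and this is not the MILP formulation of @cite_2.

```python
# Toy gathering tree: child -> parent (node 0 is the sink). Invented topology;
# in this example every child id is larger than its parent id.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

def transmissions(parent, aggregate):
    """Packets sent per node in one collection round.
    With aggregation each node sends a single combined packet; without it,
    every node forwards one packet per source in its subtree."""
    subtree = {v: 1 for v in list(parent) + [0]}   # sources per subtree
    for v in sorted(parent, reverse=True):         # children processed first
        subtree[parent[v]] += subtree[v]
    return {v: (1 if aggregate else subtree[v]) for v in parent}

print(transmissions(parent, aggregate=True))   # every node sends 1 packet
print(transmissions(parent, aggregate=False))  # e.g. node 1 forwards 3 packets
```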
{ "cite_N": [ "@cite_2" ], "mid": [ "2786403027", "2102832611", "2345093194", "1988815444" ], "abstract": [ "Established approaches to data aggregation in wireless sensor networks (WSNs) do not cover the variety of new use cases developing with the advent of the Internet of Things (IoT). In particular, the current push toward fog computing, in which control, computation, and storage are moved to nodes close to the network edge, induces a need to collect data at multiple sinks, rather than the single sink typically considered in WSN aggregation algorithms. Moreover, for machine-to-machine communication scenarios, actuators subscribing to sensor measurements may also be present, in which case data should be not only aggregated and processed in-network but also disseminated to actuator nodes. In this paper, we present mixed-integer programming formulations and algorithms for the problem of energy-optimal routing and multiple-sink aggregation, as well as joint aggregation and dissemination, of sensor measurement data in IoT edge networks. We consider optimization of the network for both minimal total energy usage, and min-max per-node energy usage. We also provide a formulation and algorithm for throughput-optimal scheduling of transmissions under the physical interference model in the pure aggregation case. We have conducted a numerical study to compare the energy required for the two use cases, as well as the time to solve them, in generated network scenarios with varying topologies and between 10 and 40 nodes. Although aggregation only accounts for less than 15 of total energy usage in all cases tested, it provides substantial energy savings. Our results show more than 13 times greater energy usage for 40-node networks using direct, shortest-path flows from sensors to actuators, compared with our aggregation and dissemination solutions.", "Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain.", "Wireless sensor networks (WSNs) are starting to have a high impact on our societies and, for next generation WSNs to become more integrated with the Internet, researchers recently proposed to embed IPv6 into such very constrained networks. Also, constraint application protocol (CoAP) and Observe have been proposed for RESTful services to be provided. CoAP Observe supports the use of caches proxies and, for this reason, an observation request may resort to multiple client server registration steps in order to get notifications. 
Here, we propose to plan the multiple registration steps, at proxies, of multiple observation requests in order to make proper aggregation scheduling of notifications for transmission. This leads to less energy consumption and to an effective use of bandwidth, avoiding energy depletion of nodes, and increasing the network lifetime. Besides, mathematically formalizing the problem, a heuristic approach is developed and a discussion on how to incorporate algorithm’s decision into the network is done. The proposed framework can be applied to multiple application domains (e.g., monitoring, machine to machine).", "The emerging wireless energy transfer technology enables charging sensor batteries in a wireless sensor network (WSN) and maintaining perpetual operation of the network. Recent breakthrough in this area has opened up a new dimension to the design of sensor network protocols. In the meanwhile, mobile data gathering has been considered as an efficient alternative to data relaying in WSNs. However, time variation of recharging rates in wireless rechargeable sensor networks imposes a great challenge in obtaining an optimal data gathering strategy. In this paper, we propose a framework of joint Wireless Energy Replenishment and anchor-point based Mobile Data Gathering (WerMDG) in WSNs by considering various sources of energy consumption and time-varying nature of energy replenishment. To that end, we first determine the anchor point selection and the sequence to visit the anchor points. We then formulate the WerMDG problem into a network utility maximization problem which is constrained by flow conversation, energy balance, link and battery capacity and the bounded sojourn time of the mobile collector. Furthermore, we present a distributed algorithm composed of cross-layer data control, scheduling and routing subalgorithms for each sensor node, and sojourn time allocation subalgorithm for the mobile collector at different anchor points. Finally, we give extensive numerical results to verify the convergence of the proposed algorithm and the impact of utility weight on network performance." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software-defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations are low relative to each configuration's operating time.
Network lifetime has been studied extensively in the context of WSNs since the early 2000's. A full review of the literature in this area is therefore beyond the scope of this paper; a recent survey can be found in @cite_20 . We will instead focus on the recent work that is most relevant to the current paper.
{ "cite_N": [ "@cite_20" ], "mid": [ "2034964589", "2571303926", "2156919687", "2038464572" ], "abstract": [ "The longevity of wireless sensor network (WSN) deployments is often crucial for real-time monitoring applications. Minimizing energy consumption by utilizing intelligent information processing is one of the main ways to prolong the lifetime of a network deployment. Data streams from the sensors need to be processed within the resource constraints of the sensing platforms to reduce the energy consumption associated with packet transmission. In this paper we carried out both simulation and real-world implementation of light-weight adaptive models to achieve a prolonged WSN lifetime. Specifically, we propose a Naive model that incurs virtually no cost with low memory footprint to realize this goal. Our results show that, despite its minimal complexity, the Naive model is robust when compared with other well-known algorithms used for prediction in WSNs. We show that our approach achieves up to 96 communication reduction, within 0.2 degrees error bound with no significant loss in accuracy and it is comparable in performance to the more complex algorithms like Exponential Smoothing (ETS).", "Emerging technologies, such as the Internet of things, smart applications, smart grids and machine-to-machine networks stimulate the deployment of autonomous, selfconfiguring, large-scale wireless sensor networks (WSNs). Efficient energy utilization is crucially important in order to maintain a fully operational network for the longest period of time possible. Therefore, network lifetime (NL) maximization techniques have attracted a lot of research attention owing to their importance in terms of extending the flawless operation of battery-constrained WSNs. In this paper, we review the recent developments in WSNs, including their applications, design constraints and lifetime estimation models. Commencing with the portrayal of rich variety definitions of NL design objective used for WSNs, the family of NL maximization techniques is introduced and some design guidelines with examples are provided to show the potential improvements of the different design criteria", "In a heterogeneous wireless sensor network (WSN), relay nodes (RNs) are adopted to relay data packets from sensor nodes (SNs) to the base station (BS). The deployment of the RNs can have a significant impact on connectivity and lifetime of a WSN system. This paper studies the effects of random deployment strategies. We first discuss the biased energy consumption rate problem associated with uniform random deployment. This problem leads to insufficient energy utilization and shortened network lifetime. To overcome this problem, we propose two new random deployment strategies, namely, the lifetime-oriented deployment and hybrid deployment. The former solely aims at balancing the energy consumption rates of RNs across the network, thus extending the system lifetime. However, this deployment scheme may not provide sufficient connectivity to SNs when the given number of RNs is relatively small. The latter reconciles the concerns of connectivity and lifetime extension. Both single-hop and multihop communication models are considered in this paper. With a combination of theoretical analysis and simulated evaluation, this study explores the trade-off between connectivity and lifetime extension in the problem of RN deployment. 
It also provides a guideline for efficient deployment of RNs in a large-scale heterogeneous WSN.", "We consider a two-tiered Wireless Sensor Network (WSN) consisting of sensor clusters deployed around strategic locations and base-stations (BSs) whose locations are relatively flexible. Within a sensor cluster, there are many small sensor nodes (SNs) that capture, encode and transmit relevant information from the designated area, and there is at least one application node (AN) that receives raw data from these SNs, creates a comprehensive local-view, and forwards the composite bit-stream toward a BS. In practice, both SN and AN are battery-powered and energy-constrained, and their node lifetimes directly affect the network lifetime of WSNs. In this paper, we focus on the topology control process for ANs and BSs, which constitute the upper tier of a two-tiered WSN. We propose approaches to maximize the topological network lifetime of the WSN, by arranging BS location and inter-AN relaying optimally. Based on an algorithm in Computational Geometry, we derive the optimal BS locations under three topological lifetime definitions according to mission criticality. In addition, by studying the intrinsic properties of WSNs, we establish the upper and lower bounds of their maximal topological lifetime. When inter-AN relaying becomes feasible and favorable, we continue to develop an optimal parallel relay allocation to further prolong the topological lifetime of the WSN. An equivalent serialized relay schedule is also obtained, so that each AN only needs to have one relay destination at any time throughout the mission. The experimental performance evaluation demonstrates the efficacy of topology control as a vital process to maximize the network lifetime of WSNs." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software-defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations are low relative to each configuration's operating time.
Numerous definitions of network lifetime have been adopted in the literature @cite_20 . Under some of these, the network lifetime expires at the instant a certain number (possibly as low as one) or proportion of nodes have depleted their batteries, when the first data collection failure occurs, or when the specific node with the highest consumption rate runs out of energy. In @cite_20 , these definitions are classified into four categories depending on whether they are based on node lifetime, coverage and connectivity, transmission, or a combination of parameters.
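Since the paragraph above lists several competing lifetime definitions, a short sketch may help; it evaluates two of them (first node death versus a fraction of nodes still alive) on an invented per-node energy-drain profile. The battery and drain values are made up.

```python
def lifetime_first_death(batteries, drain_per_round):
    """Rounds until the first node depletes its battery (one common definition)."""
    return min(int(b // d) for b, d in zip(batteries, drain_per_round))

def lifetime_fraction_alive(batteries, drain_per_round, alive_fraction=0.5):
    """Rounds until fewer than `alive_fraction` of the nodes still have energy."""
    deaths = sorted(int(b // d) for b, d in zip(batteries, drain_per_round))
    survivors_needed = int(len(batteries) * alive_fraction)
    # The network stops being functional at the death that drops the number
    # of alive nodes below the threshold.
    return deaths[len(deaths) - survivors_needed]

batteries = [100, 100, 100, 100]      # invented numbers
drain     = [10, 5, 4, 2]             # node 0 drains fastest
print(lifetime_first_death(batteries, drain))           # -> 10
print(lifetime_fraction_alive(batteries, drain, 0.5))   # -> 25
```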
{ "cite_N": [ "@cite_20" ], "mid": [ "2099641228", "2106665154", "2545881182", "1787995280" ], "abstract": [ "We derive a general formula for the lifetime-of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection.", "In wireless sensor networks that consist of a large number of low-power, short-lived, unreliable sensors, one of the main design challenges is to obtain long system lifetime without sacrificing system original performances (sensing coverage and sensing reliability). In this paper, we propose a node-scheduling scheme, which can reduce system overall energy consumption, therefore increasing system lifetime, by identifying redundant nodes in respect of sensing coverage and then assigning them an off-duty operation mode that has lower energy consumption than the normal on-duty one. Our scheme aims to completely preserve original sensing coverage theoretically. Practically, sensing coverage degradation caused by location error, packet loss and node failure is very limited, not more than 1 as shown by our experimental results. In addition, the experimental results illustrate that certain redundancy is still guaranteed after node-scheduling, which we believe can provide enough sensing reliability in many applications. We implement the proposed scheme in NS-2 as an extension of the LEACH protocol and compare its energy consumption with the original LEACH. Simulation results exhibit noticeably longer system lifetime after introducing our scheme than before. Copyright © 2003 John Wiley & Sons, Ltd.", "In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. 
Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed.", "An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software-defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations are low relative to each configuration's operating time.
However, a problem with many of these definitions is that they are not application-centric. In practice, whether or not a network is functional depends on the specific application or applications which it serves. Some applications may require all nodes in the network to have remaining energy, while others may continue to operate correctly with only a few nodes working. The lifetime also depends on the capabilities of the network. For example, if the network can be reconfigured, the lifetime may be extended by switching configurations. This can be facilitated by the use of software-defined networking @cite_22 , as well as by support from cloud services capable of performing even demanding calculations to determine the best network configuration at any given time, without incurring an energy cost in the end devices.
{ "cite_N": [ "@cite_22" ], "mid": [ "1787995280", "2050266688", "2545881182", "2160450532" ], "abstract": [ "An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power.", "Abstract Energy is one of the scarcest resources in wireless sensor network (WSN). One fundamental way of conserving energy is judicious deployment of sensor nodes within the network area so that energy flow remains balanced throughout the network. This avoids the problem of occurrence of ‘energy holes’ and ensures prolonged network lifetime. We have first investigated the problem for enhancing network lifetime using homogeneous sensor nodes. From our observation it is revealed that energy imbalance in WSN occurs due to relaying of data from different parts of the network towards sink. So for improved energy balance instead of using only sensor nodes it is desirable to deploy relay nodes in addition to sensor nodes to manage such imbalance. We have also developed a location-wise pre-determined heterogeneous node deployment strategy based on the principle of energy balancing derived from this analysis, leading to an enhancement of network lifetime. Exhaustive simulation is performed primarily to measure the extent of achieving our design goal of enhancing network lifetime while attaining energy balancing and maintaining coverage. The simulation results also show that our scheme does not compromise with other network performance metrics such as end-to-end delay, packet loss, throughput while achieving the design goal. Finally all the results are compared with two competing schemes and the results confirm our scheme's supremacy in terms of both design performance metrics as well as network performance metrics.", "In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. 
In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed.", "A critical aspect of applications with wireless sensor networks is network lifetime. Power-constrained wireless sensor networks are usable as long as they can communicate sensed data to a processing node. Sensing and communications consume energy, therefore judicious power management and sensor scheduling can effectively extend network lifetime. To cover a set of targets with known locations when ground access in the remote area is prohibited, one solution is to deploy the sensors remotely, from an aircraft. The lack of precise sensor placement is compensated by a large sensor population deployed in the drop zone, that would improve the probability of target coverage. The data collected from the sensors is sent to a central node (e.g. cluster head) for processing. In this paper we propose un efficient method to extend the sensor network life time by organizing the sensors into a maximal number of set covers that are activated successively. Only the sensors from the current active set are responsible for monitoring all targets and for transmitting the collected data, while all other nodes are in a low-energy sleep mode. By allowing sensors to participate in multiple sets, our problem formulation increases the network lifetime compared with related work [M. ], that has the additional requirements of sensor sets being disjoint and operating equal time intervals. In this paper we model the solution as the maximum set covers problem and design two heuristics that efficiently compute the sets, using linear programming and a greedy approach. Simulation results are presented to verify our approaches." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software-defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations are low relative to each configuration's operating time.
This is the approach we adopt in this paper, and we define valid configurations based on the demands of the applications present in the network, along with the roles the various nodes play in serving these demands. Accordingly, we adopt a general definition of the network lifetime as the total time during which the network is operational. Since we consider a class of applications with data streams as their demands, this is most similar to the definition used in @cite_15 , where the network lifetime was defined as the number of sensory information task cycles achieved until the network ceases to be fully operational.
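A common way to formalize lifetime maximization across multiple configurations is the linear program: maximize the total active time sum_c t_c subject to sum_c e_{v,c} * t_c <= B_v for every node v, with t_c >= 0. The sketch below solves a toy instance with scipy; the energy and battery numbers are invented, and this is only meant to illustrate the idea, not the paper's actual formulations.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (invented numbers): rows are nodes, columns are candidate
# configurations; e[v, c] is node v's energy drain per unit time while
# configuration c is active.
e = np.array([[4.0, 1.0, 2.0],
              [1.0, 4.0, 2.0],
              [2.0, 2.0, 1.0]])
battery = np.array([100.0, 100.0, 100.0])

# Maximize sum_c t_c  <=>  minimize -sum_c t_c, subject to e @ t <= battery, t >= 0.
res = linprog(c=-np.ones(e.shape[1]), A_ub=e, b_ub=battery, bounds=(0, None))
print("time per configuration:", res.x)
print("network lifetime:", -res.fun)
```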
{ "cite_N": [ "@cite_15" ], "mid": [ "2099641228", "2571303926", "2013227485", "1787995280" ], "abstract": [ "We derive a general formula for the lifetime-of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection.", "Emerging technologies, such as the Internet of things, smart applications, smart grids and machine-to-machine networks stimulate the deployment of autonomous, selfconfiguring, large-scale wireless sensor networks (WSNs). Efficient energy utilization is crucially important in order to maintain a fully operational network for the longest period of time possible. Therefore, network lifetime (NL) maximization techniques have attracted a lot of research attention owing to their importance in terms of extending the flawless operation of battery-constrained WSNs. In this paper, we review the recent developments in WSNs, including their applications, design constraints and lifetime estimation models. Commencing with the portrayal of rich variety definitions of NL design objective used for WSNs, the family of NL maximization techniques is introduced and some design guidelines with examples are provided to show the potential improvements of the different design criteria", "One of the main operations in wireless sensor networks is the surveillance of a set of events (targets) that occur in the field. In practice, a node monitors an event accurately when it is located closer to it, while the opposite happens when the node is moving away from the target. This detection accuracy can be represented by a probabilistic distribution. Since the network nodes are usually randomly deployed, some of the events are monitored by a few nodes and others by many nodes. In applications where there is a need of a full coverage and of a minimum allowed detection accuracy, a single node may not be able to sufficiently cover an event by itself. In this case, two or more nodes are needed to collaborate and to cover a single target. Moreover, all the nodes must be connected with a base station that collects the monitoring data. In this paper we describe the problem of the minimum sampling quality, where an event must be sufficiently detected by the maximum possible amount of time. Since the probability of detecting a single target using randomly deployed static nodes is quite low, we present a localized algorithm based on mobile nodes. Our algorithm sacrifices a part of the energy of the nodes by moving them to a new location in order to satisfy the desired detection accuracy. It divides the monitoring process in rounds to extend the network lifetime, while it ensures connectivity with the base station. 
Furthermore, since the network lifetime is strongly related to the number of rounds, we propose two redeployment schemes that enhance the performance of our approach by balancing the number of sensors between densely covered areas and areas that are poorly covered. Finally, our evaluation results show an over 10 times improvement on the network lifetime compared to the case where the sensors are static. Our approaches, also, outperform a virtual forces algorithm when connectivity with the base station is required. The redeployment schemes present a good balance between network lifetime and convergence time.", "An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software-defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations are low relative to each configuration's operating time.
Some work has been performed regarding network lifetime for networks with heterogeneous nodes, but only in a limited sense. For example, there is work based on the LEACH clustering protocol @cite_16 @cite_18 , where each node may be either an ordinary sensor node or a cluster head at different times. Examples of variations on LEACH that improve the network lifetime include @cite_7 , @cite_17 and @cite_10 , while @cite_12 presents a clustering routing protocol that considers both network lifetime and coverage. In @cite_4 , the nodes are also heterogeneous; however, they may be of only two types: sensor nodes and relay nodes. This is also the case in @cite_13 , where network lifetime is defined as the time until the first node depletes its battery, and (unicast) routing is then optimised for each traffic flow to reach the sink.
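For context on the LEACH family mentioned above, here is a sketch of the standard randomized cluster-head election rule, which rotates the cluster-head role so the energy drain is spread across nodes. It follows the commonly cited threshold formula T = p / (1 - p * (r mod 1/p)); the node count, p value, and round loop are invented, and the cited LEACH variants differ in their details.

```python
import random

def leach_round(nodes, r, p=0.1, recent_ch=frozenset(), rng=random.Random(0)):
    """One round of LEACH-style cluster-head election. p is the desired
    fraction of cluster heads; nodes that already served in the current
    epoch (the last 1/p rounds) are passed in `recent_ch` and excluded."""
    threshold = p / (1 - p * (r % int(1 / p)))
    return {n for n in nodes if n not in recent_ch and rng.random() < threshold}

nodes = list(range(20))          # invented 20-node network
served = set()                   # heads elected so far in this epoch
for r in range(10):
    heads = leach_round(nodes, r, recent_ch=served, rng=random.Random(r))
    served |= heads              # rotation: a head sits out the rest of the epoch
    print(f"round {r}: cluster heads {sorted(heads)}")
```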
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_4", "@cite_7", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2106665154", "2167360551", "2106335692", "2126379392" ], "abstract": [ "In wireless sensor networks that consist of a large number of low-power, short-lived, unreliable sensors, one of the main design challenges is to obtain long system lifetime without sacrificing system original performances (sensing coverage and sensing reliability). In this paper, we propose a node-scheduling scheme, which can reduce system overall energy consumption, therefore increasing system lifetime, by identifying redundant nodes in respect of sensing coverage and then assigning them an off-duty operation mode that has lower energy consumption than the normal on-duty one. Our scheme aims to completely preserve original sensing coverage theoretically. Practically, sensing coverage degradation caused by location error, packet loss and node failure is very limited, not more than 1 as shown by our experimental results. In addition, the experimental results illustrate that certain redundancy is still guaranteed after node-scheduling, which we believe can provide enough sensing reliability in many applications. We implement the proposed scheme in NS-2 as an extension of the LEACH protocol and compare its energy consumption with the original LEACH. Simulation results exhibit noticeably longer system lifetime after introducing our scheme than before. Copyright © 2003 John Wiley & Sons, Ltd.", "In a sensor network, usually a large number of sensors transport data messages to a limited number of sinks. Due to this multipoint-to-point communications pattern in general homogeneous sensor networks, the closer a sensor to the sink, the quicker it will deplete its battery. This unbalanced energy depletion phenomenon has become the bottleneck problem to elongate the lifetime of sensor networks. In this paper, we consider the effects of joint relay node deployment and transmission power control on network lifetime. Contrary to the intuition the relay nodes considered are even simpler devices than the sensor nodes with limited capabilities. We show that the network lifetime can be extended significantly with the addition of relay nodes to the network. In addition, for the same expected network lifetime goal, the number of relay nodes required can be reduced by employing efficient transmission power control while leaving the network connectivity level unchanged. The solution suggests that it is sufficient to deploy relay nodes only with a specific probabilistic distribution rather than the specifying the exact places. Furthermore, the solution does not require any change on the protocols (such as routing) used in the network.", "Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. 
LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show the LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional outing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations is low relative to each configuration's operating time.
Some work in the literature also considers in-network processing. In @cite_8 , data aggregation trees are constructed and scheduled, and the network can be reconfigured in the sense that different trees can be used in different time periods. This work again uses the traditional WSN model of many homogeneous sensor nodes, all sending measurements to a single sink. The scenario considered in @cite_23 focuses on a machine-to-machine communication application similar to the one we consider, including the presence of edge nodes in the network. However, the problem addressed there is data placement on these edge nodes so as to maximize the network lifetime under latency constraints. Routing is performed by selecting the paths that yield the maximum lifetime, defined as the time until any node runs out of energy; network reconfiguration as we propose in this paper is not considered.
{ "cite_N": [ "@cite_23", "@cite_8" ], "mid": [ "2786403027", "2545881182", "2005747091", "2050266688" ], "abstract": [ "Established approaches to data aggregation in wireless sensor networks (WSNs) do not cover the variety of new use cases developing with the advent of the Internet of Things (IoT). In particular, the current push toward fog computing, in which control, computation, and storage are moved to nodes close to the network edge, induces a need to collect data at multiple sinks, rather than the single sink typically considered in WSN aggregation algorithms. Moreover, for machine-to-machine communication scenarios, actuators subscribing to sensor measurements may also be present, in which case data should be not only aggregated and processed in-network but also disseminated to actuator nodes. In this paper, we present mixed-integer programming formulations and algorithms for the problem of energy-optimal routing and multiple-sink aggregation, as well as joint aggregation and dissemination, of sensor measurement data in IoT edge networks. We consider optimization of the network for both minimal total energy usage, and min-max per-node energy usage. We also provide a formulation and algorithm for throughput-optimal scheduling of transmissions under the physical interference model in the pure aggregation case. We have conducted a numerical study to compare the energy required for the two use cases, as well as the time to solve them, in generated network scenarios with varying topologies and between 10 and 40 nodes. Although aggregation only accounts for less than 15 of total energy usage in all cases tested, it provides substantial energy savings. Our results show more than 13 times greater energy usage for 40-node networks using direct, shortest-path flows from sensors to actuators, compared with our aggregation and dissemination solutions.", "In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed.", "Energy consumption is a crucially important issue in battery-driven wireless sensor networks (WSNs). In most sensor networks, the sensors near the data collector (i.e. 
the sink) become drained more quickly than those elsewhere in the network since they are required to relay all of the data collected in the network to the sink. Therefore more balanced data paths to the sink should be established in order to extend the lifetime of the sensor network. Accordingly, a novel relay deployment scheme for WSNs based on the Voronoi diagram is proposed. The proposed scheme is applicable to both two-dimensional and three-dimensional network topologies and establishes effective routing paths that balance the traffic load within the sensor network and alleviate the burden on the sensors around the sink. Simulation results indicate that the number of relays deployed in the proposed scheme is similar to that deployed in the predetermined location scheme and is significantly less than that deployed in the minimum set cover scheme. Furthermore, the lifetime of the sensor network containing relay nodes deployed using the current scheme is longer than that achieved using either the predetermined location scheme or the minimum set cover scheme.", "Abstract Energy is one of the scarcest resources in wireless sensor network (WSN). One fundamental way of conserving energy is judicious deployment of sensor nodes within the network area so that energy flow remains balanced throughout the network. This avoids the problem of occurrence of ‘energy holes’ and ensures prolonged network lifetime. We have first investigated the problem for enhancing network lifetime using homogeneous sensor nodes. From our observation it is revealed that energy imbalance in WSN occurs due to relaying of data from different parts of the network towards sink. So for improved energy balance instead of using only sensor nodes it is desirable to deploy relay nodes in addition to sensor nodes to manage such imbalance. We have also developed a location-wise pre-determined heterogeneous node deployment strategy based on the principle of energy balancing derived from this analysis, leading to an enhancement of network lifetime. Exhaustive simulation is performed primarily to measure the extent of achieving our design goal of enhancing network lifetime while attaining energy balancing and maintaining coverage. The simulation results also show that our scheme does not compromise with other network performance metrics such as end-to-end delay, packet loss, throughput while achieving the design goal. Finally all the results are compared with two competing schemes and the results confirm our scheme's supremacy in terms of both design performance metrics as well as network performance metrics." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations is low relative to each configuration's operating time.
A few general frameworks for maximizing network lifetime have also been developed. In @cite_25 , the focus is on network deployment, specifically the initial energy allocated to each node. Once again nodes are homogeneous, with all nodes collecting data and transmitting it to their neighbors, and the definition of network lifetime is the time until the first sensor depletes its battery. A more general definition of network lifetime is used in @cite_1 , which applies a framework based on channel states aimed at developing medium access protocols for improved lifetime. However, nodes have fixed roles and only a single application is considered.
{ "cite_N": [ "@cite_1", "@cite_25" ], "mid": [ "2545881182", "1787995280", "2050266688", "2099641228" ], "abstract": [ "In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed.", "An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power.", "Abstract Energy is one of the scarcest resources in wireless sensor network (WSN). One fundamental way of conserving energy is judicious deployment of sensor nodes within the network area so that energy flow remains balanced throughout the network. This avoids the problem of occurrence of ‘energy holes’ and ensures prolonged network lifetime. We have first investigated the problem for enhancing network lifetime using homogeneous sensor nodes. From our observation it is revealed that energy imbalance in WSN occurs due to relaying of data from different parts of the network towards sink. 
So for improved energy balance instead of using only sensor nodes it is desirable to deploy relay nodes in addition to sensor nodes to manage such imbalance. We have also developed a location-wise pre-determined heterogeneous node deployment strategy based on the principle of energy balancing derived from this analysis, leading to an enhancement of network lifetime. Exhaustive simulation is performed primarily to measure the extent of achieving our design goal of enhancing network lifetime while attaining energy balancing and maintaining coverage. The simulation results also show that our scheme does not compromise with other network performance metrics such as end-to-end delay, packet loss, throughput while achieving the design goal. Finally all the results are compared with two competing schemes and the results confirm our scheme's supremacy in terms of both design performance metrics as well as network performance metrics.", "We derive a general formula for the lifetime-of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
To preserve spatial information within tensors during dimensionality reduction, @cite_17 introduces Tucker LPP (TLPP), a variant of LPP based on the Tucker decomposition for analyzing high-dimensional data; however, its storage complexity grows exponentially with the number of modes.
{ "cite_N": [ "@cite_17" ], "mid": [ "1995406764", "2613951549", "2154872931", "2109531142" ], "abstract": [ "For @math -dimensional tensors with possibly large @math , an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.", "Abstract Tensors are valuable tools to represent Electroencephalogram (EEG) data. Tucker decomposition is the most used tensor decomposition in multidimensional discriminant analysis and tensor extension of Linear Discriminant Analysis (LDA), called Higher Order Discriminant Analysis (HODA), is a popular tensor discriminant method used for analyzing Event Related Potentials (ERP). In this paper, we introduce a new tensor-based feature reduction technique, named Higher Order Spectral Regression Discriminant Analysis (HOSRDA), for use in a classification framework for ERP detection. The proposed method (HOSRDA) is a tensor extension of Spectral Regression Discriminant Analysis (SRDA) and casts the eigenproblem of HODA to a regression problem. The formulation of HOSRDA can open a new framework for adding different regularization constraints in higher order feature reduction problem. Additionally, when the dimension and number of samples is very large, the regression problem can be solved via efficient iterative algorithms. We applied HOSRDA on data of a P300 speller from BCI competition III and reached average character detection accuracy of 96.5 for the two subjects. HOSRDA outperforms almost all of other reported methods on this dataset. Additionally, the results of our method are fairly comparable with those of other methods when 5 and 10 repetitions are used in the P300 speller paradigm.", "Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.", "Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. 
Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called locality-preserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
The other existing dimensionality reduction method that embeds data into the TT subspace is the tensor-train neighbourhood preserving embedding (TTNPE) @cite_2 . TTNPE avoids the exponential blow-up of storage complexity as the number of modes increases. However, its robustness to extreme outliers remains a concern. What is still needed is a dimensionality reduction method that operates on the TT subspace for tensors with many modes or dimensions while also reducing sensitivity to extreme outliers. Our method TTPUDR is developed to address all of these aspects.
{ "cite_N": [ "@cite_2" ], "mid": [ "2963465654", "2962881496", "2295543477", "16346066" ], "abstract": [ "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets.", "Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.", "We present TTC, an open-source parallel compiler for multidimensional tensor transpositions. In order to generate high-performance C++ code, TTC explores a number of optimizations, including software prefetching, blocking, loop-reordering, and explicit vectorization. To evaluate the performance of multidimensional transpositions across a range of possible use-cases, we also release a benchmark covering arbitrary transpositions of up to six dimensions. Performance results show that the routines generated by TTC achieve close to peak memory bandwidth on both the Intel Haswell and the AMD Steamroller architectures, and yield significant performance gains over modern compilers. By implementing a set of pruning heuristics, TTC allows users to limit the number of potential solutions; this option is especially useful when dealing with high-dimensional tensors, as the search space might become prohibitively large. Experiments indicate that when only 100 potential solutions are considered, the resulting performance is about 99 of that achieved with exhaustive search.", "Over the past few years, some embedding methods have been proposed for feature extraction and dimensionality reduction in various machine learning and pattern classification tasks. Among the methods proposed are Neighborhood Preserving Embedding (NPE), Locality Preserving Projection (LPP) and Local Discriminant Embedding (LDE) which have been used in such applications as face recognition and image video retrieval. However, although the data in these applications are more naturally represented as higher-order tensors, the embedding methods can only work with vectorized data representations which may not capture well some useful information in the original data. Moreover, high-dimensional vectorized representations also suffer from the curse of dimensionality and the high computational demand. In this paper, we propose some novel tensor embedding methods which, unlike previous methods, take data directly in the form of tensors of arbitrary order as input. 
These methods allow the relationships between dimensions of a tensor representation to be efficiently characterized. Moreover, they also allow the intrinsic local geometric and topological properties of the manifold embedded in a tensor space to be naturally estimated. Furthermore, they do not suffer from the curse of dimensionality and the high computational demand. We demonstrate the effectiveness of the proposed tensor embedding methods on a face recognition application and compare them with some previous methods. Extensive experiments show that our methods are not only more effective but also more efficient." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
We denote the left unfolding operation @cite_2 of @math by the matrix @math , in which the last mode of the tensor indexes the columns of the left unfolding matrix and the remaining modes index the rows. Similarly, the right unfolding is denoted by @math . The vectorization of a tensor is denoted by @math . The F-norm of a tensor can be defined as the @math -norm of its vectorization, i.e., @math , which treats all the elements @math as a single group and preserves the general spatial relations between them. In contrast, the @math -norm of a tensor is computed as @math , which treats each element separately and may therefore lose spatial information.
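To make these conventions concrete, here is a minimal NumPy sketch of the left and right unfoldings and of the two norms contrasted above; the example tensor shape and NumPy's row-major index ordering are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# A small third-order tensor with (assumed) shape I1 x I2 x I3.
X = np.arange(24, dtype=float).reshape(2, 3, 4)
I1, I2, I3 = X.shape

# Left unfolding: the last mode indexes the columns, the remaining modes
# are merged into the row index, giving an (I1*I2) x I3 matrix.
# (NumPy's row-major ordering is used; the paper's index convention may differ.)
L = X.reshape(I1 * I2, I3)

# Right unfolding: the first mode indexes the rows, the remaining modes
# are merged into the column index, giving an I1 x (I2*I3) matrix.
R = X.reshape(I1, I2 * I3)

# Vectorization and the two norms discussed in the text.
vec = X.reshape(-1)
f_norm = np.linalg.norm(vec)      # F-norm: l2-norm of the vectorization
l1_norm = np.abs(vec).sum()       # l1-norm: treats every entry separately

print(L.shape, R.shape, f_norm, l1_norm)
```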
{ "cite_N": [ "@cite_2" ], "mid": [ "2043571470", "2951021721", "1521197246", "2435918055" ], "abstract": [ "Abstract Operations with tensors, or multiway arrays, have become increasingly prevalent in recent years. Traditionally, tensors are represented or decomposed as a sum of rank-1 outer products using either the CANDECOMP PARAFAC (CP) or the Tucker models, or some variation thereof. Such decompositions are motivated by specific applications where the goal is to find an approximate such representation for a given multiway array. The specifics of the approximate representation (such as how many terms to use in the sum, orthogonality constraints, etc.) depend on the application. In this paper, we explore an alternate representation of tensors which shows promise with respect to the tensor approximation problem. Reminiscent of matrix factorizations, we present a new factorization of a tensor as a product of tensors. To derive the new factorization, we define a closed multiplication operation between tensors. A major motivation for considering this new type of tensor multiplication is to devise new types of factorizations for tensors which can then be used in applications. Specifically, this new multiplication allows us to introduce concepts such as tensor transpose, inverse, and identity, which lead to the notion of an orthogonal tensor. The multiplication also gives rise to a linear operator, and the null space of the resulting operator is identified. We extend the concept of outer products of vectors to outer products of matrices. All derivations are presented for third-order tensors. However, they can be easily extended to the order-p ( p > 3 ) case. We conclude with an application in image deblurring.", "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a @math -way tensor of length @math and Tucker rank @math from Gaussian measurements requires @math observations. In contrast, a certain (intractable) nonconvex formulation needs only @math observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with @math observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. @math , nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.", "We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given a order- @math tensor @math of the form @math , where @math is a signal-to-noise ratio, @math is a unit vector, and @math is a random noise tensor, the goal is to recover the planted vector @math . 
For the case that @math has iid standard Gaussian entries, we give an efficient algorithm to recover @math whenever @math , and certify that the recovered vector is close to a maximum likelihood estimator, all with high probability over the random choice of @math . The previous best algorithms with provable guarantees required @math . In the regime @math , natural tensor-unfolding-based spectral relaxations for the underlying optimization problem break down (in the sense that their integrality gap is large). To go beyond this barrier, we use convex relaxations based on the sum-of-squares method. Our recovery algorithm proceeds by rounding a degree- @math sum-of-squares relaxations of the maximum-likelihood-estimation problem for the statistical model. To complement our algorithmic results, we show that degree- @math sum-of-squares relaxations break down for @math , which demonstrates that improving our current guarantees (by more than logarithmic factors) would require new techniques or might even be intractable. Finally, we show how to exploit additional problem structure in order to solve our sum-of-squares relaxations, up to some approximation, very efficiently. Our fastest algorithm runs in nearly-linear time using shifted (matrix) power iteration and has similar guarantees as above. The analysis of this algorithm also confirms a variant of a conjecture of Montanari and Richard about singular vectors of tensor unfoldings.", "This paper studies the Tensor Robust Principal Component (TRPCA) problem which extends the known Robust PCA ( 2011) to the tensor case. Our model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and Martin 2011) and its induced tensor tubal rank and tensor nuclear norm. Consider that we have a 3-way tensor @math such that @math , where @math has low tubal rank and @math is sparse. Is that possible to recover both components? In this work, we prove that under certain suitable assumptions, we can recover both the low-rank and the sparse components exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the @math -norm, i.e., @math , where @math . Interestingly, TRPCA involves RPCA as a special case when @math and thus it is a simple and elegant tensor extension of RPCA. Also numerical experiments verify our theory and the application for the image denoising demonstrates the effectiveness of our method." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
The tensor-train (TT) decomposition is designed for large-scale data analysis @cite_3 . It admits a simpler implementation than the tree-type decomposition algorithms @cite_18 , which are likewise developed to reduce storage complexity and avoid local minima.
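As an illustration of this simplicity, the well-known TT-SVD procedure can be sketched in a few lines of NumPy. This is a generic sketch of the standard algorithm, not the implementation used in this paper; the fixed maximum rank (instead of an error-tolerance-based truncation) and the example tensor size are assumptions made for brevity.

```python
import numpy as np

def tt_svd(X, max_rank):
    """Sketch of the standard TT-SVD: peel off one mode at a time with a
    truncated SVD. Returns TT cores G_k of shape (r_{k-1}, I_k, r_k).
    A fixed maximum rank is used instead of an error-based truncation."""
    dims = X.shape
    cores, r_prev = [], 1
    C = np.asarray(X, dtype=float)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))                      # truncate the TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r, :]                    # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# Usage: approximate a random 4-way tensor and check the relative error.
X = np.random.rand(4, 5, 6, 7)
cores = tt_svd(X, max_rank=3)
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=([-1], [0]))           # contract neighbouring ranks
print(np.linalg.norm(Y.reshape(X.shape) - X) / np.linalg.norm(X))
```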
{ "cite_N": [ "@cite_18", "@cite_3" ], "mid": [ "2962881496", "2963465654", "2789880577", "1429947488" ], "abstract": [ "Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.", "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets.", "Tensor train is a hierarchical tensor network structure that helps alleviate the curse of dimensionality by parameterizing large-scale multidimensional data via a set of network of low-rank tensors. Associated with such a construction is a notion of Tensor Train subspace and in this paper we propose a TT-PCA algorithm for estimating this structured subspace from the given data. By maintaining low rank tensor structure, TT-PCA is more robust to noise comparing with PCA or Tucker-PCA. This is borne out numerically by testing the proposed approach on the Extended YaleFace Dataset B.", "Tensor decompositions are promising tools for big data analytics as they bring multiple modes and aspects of data to a unified framework, which allows us to discover complex internal structures and correlations of data. Unfortunately most existing approaches are not designed to meet the major challenges posed by big data analytics. This paper attempts to improve the scalability of tensor decompositions and provides two contributions: A flexible and fast algorithm for the CP decomposition (FFCP) of tensors based on their Tucker compression; A distributed randomized Tucker decomposition approach for arbitrarily big tensors but with relatively low multilinear rank. These two algorithms can deal with huge tensors, even if they are dense. Extensive simulations provide empirical evidence of the validity and efficiency of the proposed algorithms." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
In most applications, in order to achieve computational efficiency and reduce information redundancy, researchers often restrict the tensor ranks to be smaller than the size of the corresponding tensor mode, i.e., @math for @math @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "2132267493", "2030927653", "2953204310", "2951021721" ], "abstract": [ "There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank- @math approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Bregman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step toward more general results. To this end, we present a detailed analysis of equivalence classes of @math tensors, and we develop methods for extending results upward to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant @math on @math .", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4- D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4- D (color) video data completion and de-noising. 
We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].", "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a @math -way tensor of length @math and Tucker rank @math from Gaussian measurements requires @math observations. In contrast, a certain (intractable) nonconvex formulation needs only @math observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with @math observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. @math , nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
Given a set of vectorial training data @math and an affinity matrix of locality similarity @math , LPP seeks a linear projection @math from @math to @math by solving an optimization problem whose objective is the locality preserving criterion. The widely used affinity @math is based on the neighborhood graph of the data, as follows @cite_5 , where @math is a positive parameter and @math denotes the @math -nearest neighborhood of @math .
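A minimal sketch of how such an affinity matrix and the resulting projection are commonly computed is shown below, assuming the standard heat-kernel weights and the usual generalized-eigenproblem formulation of LPP; the function names, the max-based symmetrization rule, and the parameter values are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np
import scipy.linalg as sla

def knn_heat_affinity(X, k=5, t=1.0):
    """Heat-kernel affinity on a k-nearest-neighbour graph, the usual choice
    in LPP. X is an n x d data matrix with rows as samples. The values of k
    and t and the symmetrisation rule are illustrative."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    np.fill_diagonal(D2, np.inf)                 # a point is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[:k]             # k nearest neighbours of x_i
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)
    return np.maximum(W, W.T)                    # symmetrise the graph

def lpp(X, W, dim=2):
    """Standard LPP solution: generalised eigenproblem X^T L X a = lam X^T D X a,
    keeping the eigenvectors with the smallest eigenvalues. Assumes n > d so
    that X^T D X is non-singular (cf. the shortcoming discussed below)."""
    D = np.diag(W.sum(axis=1))
    Lap = D - W                                  # graph Laplacian
    evals, evecs = sla.eigh(X.T @ Lap @ X, X.T @ D @ X)
    return evecs[:, :dim]                        # columns span the projection

# Usage on toy data.
X = np.random.rand(100, 10)
A = lpp(X, knn_heat_affinity(X, k=7, t=0.5), dim=2)
Z = X @ A                                        # 2-D embedding of the samples
```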
{ "cite_N": [ "@cite_5" ], "mid": [ "2089035607", "2154872931", "2106094637", "2337671281" ], "abstract": [ "Locality preserving projection (LPP) is a manifold learning method widely used in pattern recognition and computer vision. The face recognition application of LPP is known to suffer from a number of problems including the small sample size (SSS) problem, the fact that it might produce statistically identical transform results for neighboring samples, and that its classification performance seems to be heavily influenced by its parameters. In this paper, we propose three novel solution schemes for LPP. Experimental results also show that the proposed LPP solution scheme is able to classify much more accurately than conventional LPP and to obtain a classification performance that is only little influenced by the definition of neighbor samples.", "Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.", "In this paper, we learn explicit representations for dynamic shape manifolds of moving humans for the task of action recognition. We exploit locality preserving projections (LPP) for dimensionality reduction, leading to a low-dimensional embedding of human movements. Given a sequence of moving silhouettes associated to an action video, by LPP, we project them into a low-dimensional space to characterize the spatiotemporal property of the action, as well as to preserve much of the geometric structure. To match the embedded action trajectories, the median Hausdorff distance or normalized spatiotemporal correlation is used for similarity measures. Action classification is then achieved in a nearest-neighbor framework. To evaluate the proposed method, extensive experiments have been carried out on a recent dataset including ten actions performed by nine different subjects. The experimental results show that the proposed method is able to not only recognize human actions effectively, but also considerably tolerate some challenging conditions, e.g., partial occlusion, low-quality videos, changes in viewpoints, scales, and clothes; within-class variations caused by different subjects with different physical build; styles of motion; etc", "Abstract This paper mainly focuses on dimensional reduction of fused dataset of holistic and geometrical face features vectors by solving singularity problem of linear discriminant analysis and maximizing the Fisher ratio in nonlinear subspace region with the preservation of local discriminative features. 
The combinational feature vector space is projected into low dimensional subspace using proposed Kernel Locality Preserving Symmetrical Weighted Fisher Discriminant Analysis (KLSWFDA) method. Matching score level fusion technique has been applied on projected subspace and combinational entire Gabor subspace is framed. Euclidean distance metric (L2) and support vector machine (SVM) classifier has been implemented to recognize and classify the expressions. Performance of proposed approach is evaluated and compared with state of art approaches. Experimental results on JAFFE, YALE and FD expression database demonstrate the effectiveness of the proposed approach." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Moreover, LPP is designed for vectorial data and may therefore lose structural information when applied to multidimensional data. It also assumes the data dimension to be smaller than the number of instances, which is unsuitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. Manifold optimization is used to solve the new model. TTPUDR is assessed on classification problems and significantly outperforms both earlier methods and several state-of-the-art methods.
LPP is a classical dimensionality reduction method and has been applied in many real cases, for example, computer vision @cite_8 . It captures the local information among the data points and is less sensitive to outliers than PCA. However, we observe the following shortcomings of LPP: LPP is designed for vectorial data, so when it is applied to multi-dimensional data, i.e., tensors, spatial information may be lost. The existing tensor locality preserving projection, i.e., Tucker LPP (TLPP) @cite_17 , embeds the tensor space with a high storage complexity of @math . Theoretically, LPP cannot work when the data dimension is greater than the number of samples. Although this can be avoided by a trick in which one first projects the data onto its PCA subspace and then applies LPP in this subspace (http://www.cad.zju.edu.cn/home/dengcai/Data/code/LPP.m), this does not work well for ultra-dimensional data with a fairly large dataset, as the singular value decomposition (SVD) becomes a bottleneck.
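The PCA-preprocessing trick mentioned above can be sketched as follows, reusing the hypothetical lpp helper from the sketch after the affinity-matrix paragraph; the function name and dimension arguments are illustrative, and for ultra-high-dimensional data the SVD line is exactly the bottleneck referred to in the text.

```python
import numpy as np

def pca_then_lpp(X, W, pca_dim, lpp_dim):
    """Sketch of the PCA-preprocessing trick: project onto the leading PCA
    subspace so that the LPP matrices become non-singular, then run LPP there.
    `lpp` is the hypothetical helper sketched earlier; for very large data
    dimension d, the SVD below dominates the cost."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # ~O(max(n, d) * min(n, d)^2)
    P_pca = Vt[:pca_dim].T                             # d x pca_dim PCA basis
    Z = Xc @ P_pca                                     # data in the PCA subspace
    A = lpp(Z, W, dim=lpp_dim)                         # LPP in the reduced space
    return P_pca @ A                                   # overall d x lpp_dim projection
```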
{ "cite_N": [ "@cite_17", "@cite_8" ], "mid": [ "2154872931", "2109531142", "2089035607", "2117553576" ], "abstract": [ "Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.", "Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called locality-preserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick.", "Locality preserving projection (LPP) is a manifold learning method widely used in pattern recognition and computer vision. The face recognition application of LPP is known to suffer from a number of problems including the small sample size (SSS) problem, the fact that it might produce statistically identical transform results for neighboring samples, and that its classification performance seems to be heavily influenced by its parameters. In this paper, we propose three novel solution schemes for LPP. Experimental results also show that the proposed LPP solution scheme is able to classify much more accurately than conventional LPP and to obtain a classification performance that is only little influenced by the definition of neighbor samples.", "We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. 
Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition." ] }
1908.04924
2967111226
Locality preserving projection (LPP) is a classical dimensionality reduction method based on the data graph information. However, LPP is still sensitive to extreme outliers. Moreover, since LPP is designed for vectorial data, it may discard structural information when applied to multidimensional data. Besides, it assumes the data dimension to be smaller than the number of instances, which does not hold for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has proven able to capture spatial relations efficiently and effectively. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. A manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, where it significantly outperforms both previous methods and several state-of-the-art methods.
The TT decomposition, with a smaller storage complexity of @math , has recently been applied in the tensor train neighborhood preserving embedding (TTNPE) @cite_2 @cite_0 . Nevertheless, the actual algorithm in TTNPE is only implemented as a TT approximation to the pseudo PCA. To the best of our knowledge, there is no existing dimensionality reduction method that directly processes tensor data with this lower storage complexity, i.e., that uses the TT decomposition inside the algorithm.
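As a reminder of where the TT storage saving comes from, the following is a minimal sketch of the standard TT-SVD sweep, which factorizes a d-way array into cores whose total size is the sum of r_(k-1) * n_k * r_k entries rather than the product of all mode sizes; the uniform rank cap is an illustrative simplification.

import numpy as np

def tt_svd(tensor, max_rank):
    # Decompose a d-way array into tensor-train cores by successive
    # truncated SVDs; cores[k] has shape (r_{k-1}, n_k, r_k).
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    C = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, S.size)                      # truncate to the rank cap
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = S[:r, None] * Vt[:r]                       # carry the remainder to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

Storing the returned cores instead of the full array is what yields the reduced storage complexity referred to above.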
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2963465654", "2962881496", "2789880577", "2030927653" ], "abstract": [ "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets.", "Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.", "Tensor train is a hierarchical tensor network structure that helps alleviate the curse of dimensionality by parameterizing large-scale multidimensional data via a set of network of low-rank tensors. Associated with such a construction is a notion of Tensor Train subspace and in this paper we propose a TT-PCA algorithm for estimating this structured subspace from the given data. By maintaining low rank tensor structure, TT-PCA is more robust to noise comparing with PCA or Tucker-PCA. This is borne out numerically by testing the proposed approach on the Extended YaleFace Dataset B.", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4- D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4]." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train/validation/test split of the dataset used in this study have been made available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
Fingerprinting has been a broadly studied method for indoor positioning @cite_12 . More particularly, RSSI has been the main type of signal used @cite_12 . It has been only a few years since fingerprinting techniques were transferred to the outdoor world, and in particular to LPWAN settings. In a recent study, @cite_1 have made three fingerprinting datasets of Low Power Wide Area Networks publicly available. One of these datasets contains LoRaWAN RSSI measurements collected in the urban area of the city of Antwerp, in Belgium. The motivation for making the datasets publicly available was to provide the global research community with a benchmark tool to evaluate fingerprinting algorithms for LPWAN standards. In that work, the use of the presented LoRaWAN dataset was exemplified with a k Nearest Neighbours fingerprinting method, achieving a mean localization error of 398 meters. To the best of our knowledge, there is no follow-up study so far that utilizes this dataset.
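For readers unfamiliar with RSSI fingerprinting, the following is a minimal sketch of the kind of kNN estimator used in such studies; encoding unheard gateways with a floor RSSI value and averaging the neighbours' coordinates are illustrative assumptions rather than the exact pipeline of the cited work.

import numpy as np

def knn_fingerprint(train_rssi, train_coords, query_rssi, k=15):
    # train_rssi: (n_train, n_gateways) RSSI fingerprints, with gateways that
    # did not receive a message assumed to be filled with a floor value
    # (e.g. -200 dBm); train_coords: (n_train, 2); query_rssi: (n_query, n_gateways).
    dists = np.linalg.norm(query_rssi[:, None, :] - train_rssi[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]   # k closest fingerprints per query
    return train_coords[nearest].mean(axis=1)    # (n_query, 2) position estimates

The reported mean localization error is then simply the average distance between these estimates and the ground-truth GPS positions.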
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2968588162", "2791401550", "2343041401", "2901188462" ], "abstract": [ "Fingerprinting techniques, which are a common method for indoor localization, have been recently applied with success into outdoor settings. Particularly, the communication signals of Low Power Wide Area Networks (LPWAN) such as Sigfox, have been used for localization. In this rather recent field of study, not many publicly available datasets, which would facilitate the consistent comparison of different positioning systems, exist so far. In the current study, a published dataset of RSSI measurements on a Sigfox network deployed in Antwerp, Belgium is used to analyse the appropriate selection of preprocessing steps and to tune the hyperparameters of a kNN fingerprinting method. Initially, the tuning of hyperparameter k for a variety of distance metrics, and the selection of efficient data transformation schemes, proposed by relevant works, is presented. In addition, accuracy improvements are achieved in this study, by a detailed examination of the appropriate adjustment of the parameters of the data transformation schemes tested, and of the handling of out of range values. With the appropriate tuning of these factors, the achieved mean localization error was 298 meters, and the median error was 109 meters. To facilitate the reproducibility of tests and comparability of results, the code and train validation test split used in this study are available.", "Because of the increasing relevance of the Internet of Things and location-based services, researchers are evaluating wireless positioning techniques, such as fingerprinting, on Low Power Wide Area Network (LPWAN) communication. In order to evaluate fingerprinting in large outdoor environments, extensive, time-consuming measurement campaigns need to be conducted to create useful datasets. This paper presents three LPWAN datasets which are collected in large-scale urban and rural areas. The goal is to provide the research community with a tool to evaluate fingerprinting algorithms in large outdoor environments. During a period of three months, numerous mobile devices periodically obtained location data via a GPS receiver which was transmitted via a Sigfox or LoRaWAN message. Together with network information, this location data is stored in the appropriate LPWAN dataset. The first results of our basic fingerprinting implementation, which is also clarified in this paper, indicate a mean location estimation error of 214.58 m for the rural Sigfox dataset, 688.97 m for the urban Sigfox dataset and 398.40 m for the urban LoRaWAN dataset. In the future, we will enlarge our current datasets and use them to evaluate and optimize our fingerprinting methods. Also, we intend to collect additional datasets for Sigfox, LoRaWAN and NB-IoT.", "The Received Signal Strength (RSS) based fingerprinting approaches for indoor localization pose a need for updating the fingerprint databases due to dynamic nature of the indoor environment. This process is hectic and time-consuming when the size of the indoor area is large. The semi-supervised approaches reduce this workload and achieve good accuracy around 15 percent of the fingerprinting load but the performance is severely degraded if it is reduced below this level. We propose an indoor localization framework that uses unsupervised manifold alignment. It requires only 1 percent of the fingerprinting load, some crowd sourced readings, and plan coordinates of the indoor area. 
The 1 percent fingerprinting load is used only in perturbing the local geometries of the plan coordinates. The proposed framework achieves less than 5 m mean localization error, which is considerably better than semi-supervised approaches at very small amount of fingerprinting load. In addition, the few location estimations together with few fingerprints help to estimate the complete radio map of the indoor environment. The estimation of radio map does not demand extra workload rather it employs the already available information from the proposed indoor localization framework. The testing results for radio map estimation show almost 50 percent performance improvement by using this information as compared to using only fingerprints.", "The Internet of Things (IoT) has caused the modern society to connect everything in our environment to a network. In a myriad of IoT applications, smart devices need to be located. This can easily be done by satellite based receivers. However, there are more energy-efficient localization technologies, especially in Low Power Wide Area Networks (LPWAN). In this research, we discuss the accuracy of an outdoor fingerprinting technique using a large outdoor Sigfox dataset which is openly available. A kNN (k Nearest Neighbors) algorithm is applied to our fingerprinting database. 31 different distance functions and four RSS data representations are evaluated. Our analysis shows that a Sigfox transmitter can be located with a mean estimation error of 340 meters." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train/validation/test split of the dataset used in this study have been made available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
@cite_0 have experimentally evaluated RSS and TDoA ranging positioning methods using a LoRaWAN network, reporting median errors of 1250 and 200 meters for RSS and TDoA, respectively. Other works @cite_2 , @cite_5 have focused on rather specific settings over which they evaluate positioning methods. These works present experiments in car parking settings, testing in confined areas with a placement of base stations adapted to their use-case, and report low errors in the range of a few tens of meters.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_2" ], "mid": [ "2901105864", "2900294995", "2084503286", "1966679468" ], "abstract": [ "This paper experimentally compares the positioning accuracy of TDoA-based and RSS-based localization in a public outdoor LoRa network in the Netherlands. The performance of different Received Signal Strength (RSS)-based approaches (proximity, centroid, map matching,…) is compared with Time-Difference-of-Arrival (TDoA) performance. The number of RSS and TDoA location updates and the positioning accuracy per spreading factor (SF) is assessed, allowing to select the optimal SF choice for the network. A road mapping filter is applied to the raw location estimates for the best algorithms and SFs. RSS-based approaches have median and maximal errors that are limited to 1000 m and 2000 m respectively, using a road mapping filter. Using the same filter, TDoA-based approaches deliver median and maximal errors in the order of 150 m and 350 m respectively. However, the number of location updates per time unit using SF7 is around 10 times higher for RSS algorithms than for the TDoA algorithm.", "Positioning is an essential element in most Internet of Things (IoT) applications. Global Positioning System (GPS) chips have high cost and power consumption, making it unsuitable for long-range (LoRa) and low-power IoT devices. Alternatively, low-power wide-area (LPWA) signals can be used for simultaneous positioning and communication. We summarize previous studies related to LoRa signal-based positioning systems, including those addressing proximity, a path loss model, time difference of arrival (TDoA), and fingerprint positioning methods. We propose a LoRa signal-based positioning method that uses a fingerprint algorithm instead of a received signal strength indicator (RSSI) proximity or TDoA method. The main objective of this study was to evaluate the accuracy and usability of the fingerprint algorithm for large areas in the real world. We estimated the locations using probabilistic means based on three different algorithms that use interpolated fingerprint RSSI maps. The average accuracy of the three proposed algorithms in our experiments was 28.8 m. Our method also reduced the battery consumption significantly compared with that of existing GPS-based positioning methods.", "A robust and accurate positioning solution is required to increase the safety in GPS-denied environments. Although there is a lot of available research in this area, little has been done for confined environments such as tunnels. Therefore, we organized a measurement campaign in a basement tunnel of Linkoping university, in which we obtained ultra-wideband (UWB) complex impulse responses for line-of-sight (LOS), and three non-LOS (NLOS) scenarios. This paper is focused on time-of-arrival (TOA) ranging since this technique can provide the most accurate range estimates, which are required for range-based positioning. We describe the measurement setup and procedure, select the threshold for TOA estimation, analyze the channel propagation parameters obtained from the power delay profile (PDP), and provide statistical model for ranging. According to our results, the rise-time should be used for NLOS identification, and the maximum excess delay should be used for NLOS error mitigation. However, the NLOS condition cannot be perfectly determined, so the distance likelihood has to be represented in a Gaussian mixture form. 
We also compared these results with measurements from a mine tunnel, and found a similar behavior.", "The location of people, mobile terminals and equipment is highly desirable for operational enhancements in the mining industry. In an indoor environment such as a mine, the multipath caused by reflection, diffraction and diffusion on the rough sidewall surfaces, and the non-line of sight (NLOS) due to the blockage of the shortest direct path between transmitter and receiver are the main sources of range measurement errors. Unreliable measurements of location metrics such as received signal strengths (RSS), angles of arrival (AOA) and times of arrival (TOA) or time differences of arrival (TDOA), result in the deterioration of the positioning performance. Hence, alternatives to the traditional parametric geolocation techniques have to be considered. In this paper, we present a novel method for mobile station location using wideband channel measurement results applied to an artificial neural network (ANN). The proposed system, the wide band neural network-locate (WBNN-locate), learns off-line the location 'signatures' from the extracted location-dependent features of the measured channel impulse responses for line of sight (LOS) and non-line of sight (NLOS) situations. It then matches on-line the observation received from a mobile station against the learned set of 'signatures' to accurately locate its position. The location accuracy of the proposed system, applied in an underground mine, has been found to be 2 meters for 90 and 80 of trained and untrained data, respectively. Moreover, the proposed system may also be applicable to any other indoor situation and particularly in confined environments with characteristics similar to those of a mine (e.g. rough sidewalls surface)." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train/validation/test split of the dataset used in this study have been made available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
General purpose fingerprinting methods in LPWAN settings have been presented and discussed in recent works @cite_6 , @cite_7 . @cite_6 have utilized a Sigfox dataset to apply a kNN algorithm and selected the best performing options among a variety of distance metrics and data representations, resulting in a mean positioning error of 340 meters. In addition, in our previous work @cite_7 , we went further in analysing the same Sigfox dataset by tuning relevant parameters of the discussed preprocessing schemes, reducing the mean error to 298 meters. As was done in our previous work @cite_7 , in order to facilitate the comparability of results, we also share the train/validation/test sets used in the current work.
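A minimal sketch of the kind of hyperparameter sweep described above is given below, assuming the RSSI features have already been preprocessed and the coordinates projected to a planar metric system so that Euclidean error approximates meters; the candidate values of k and the listed metrics are placeholders, not the grids used in the cited works.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def tune_knn(X_train, y_train, X_val, y_val,
             ks=(1, 5, 10, 15), metrics=("euclidean", "manhattan")):
    # Return the (error, k, metric) triple with the lowest mean validation error.
    best = (np.inf, None, None)
    for metric in metrics:
        for k in ks:
            model = KNeighborsRegressor(n_neighbors=k, metric=metric)
            model.fit(X_train, y_train)                       # y holds planar (x, y) coordinates
            err = np.linalg.norm(model.predict(X_val) - y_val, axis=1).mean()
            if err < best[0]:
                best = (err, k, metric)
    return best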
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2968588162", "2791401550", "2901188462", "2902629385" ], "abstract": [ "Fingerprinting techniques, which are a common method for indoor localization, have been recently applied with success into outdoor settings. Particularly, the communication signals of Low Power Wide Area Networks (LPWAN) such as Sigfox, have been used for localization. In this rather recent field of study, not many publicly available datasets, which would facilitate the consistent comparison of different positioning systems, exist so far. In the current study, a published dataset of RSSI measurements on a Sigfox network deployed in Antwerp, Belgium is used to analyse the appropriate selection of preprocessing steps and to tune the hyperparameters of a kNN fingerprinting method. Initially, the tuning of hyperparameter k for a variety of distance metrics, and the selection of efficient data transformation schemes, proposed by relevant works, is presented. In addition, accuracy improvements are achieved in this study, by a detailed examination of the appropriate adjustment of the parameters of the data transformation schemes tested, and of the handling of out of range values. With the appropriate tuning of these factors, the achieved mean localization error was 298 meters, and the median error was 109 meters. To facilitate the reproducibility of tests and comparability of results, the code and train validation test split used in this study are available.", "Because of the increasing relevance of the Internet of Things and location-based services, researchers are evaluating wireless positioning techniques, such as fingerprinting, on Low Power Wide Area Network (LPWAN) communication. In order to evaluate fingerprinting in large outdoor environments, extensive, time-consuming measurement campaigns need to be conducted to create useful datasets. This paper presents three LPWAN datasets which are collected in large-scale urban and rural areas. The goal is to provide the research community with a tool to evaluate fingerprinting algorithms in large outdoor environments. During a period of three months, numerous mobile devices periodically obtained location data via a GPS receiver which was transmitted via a Sigfox or LoRaWAN message. Together with network information, this location data is stored in the appropriate LPWAN dataset. The first results of our basic fingerprinting implementation, which is also clarified in this paper, indicate a mean location estimation error of 214.58 m for the rural Sigfox dataset, 688.97 m for the urban Sigfox dataset and 398.40 m for the urban LoRaWAN dataset. In the future, we will enlarge our current datasets and use them to evaluate and optimize our fingerprinting methods. Also, we intend to collect additional datasets for Sigfox, LoRaWAN and NB-IoT.", "The Internet of Things (IoT) has caused the modern society to connect everything in our environment to a network. In a myriad of IoT applications, smart devices need to be located. This can easily be done by satellite based receivers. However, there are more energy-efficient localization technologies, especially in Low Power Wide Area Networks (LPWAN). In this research, we discuss the accuracy of an outdoor fingerprinting technique using a large outdoor Sigfox dataset which is openly available. A kNN (k Nearest Neighbors) algorithm is applied to our fingerprinting database. 31 different distance functions and four RSS data representations are evaluated. 
Our analysis shows that a Sigfox transmitter can be located with a mean estimation error of 340 meters.", "Location-based services play an important role in Internet of Things (IoT) applications. However, a trade-off has to be made between the location estimation error and the battery lifetime of an IoT device. As IoT devices communicate over Low Power Wide Area Networks (LPWAN), signal strength localization methods can use the existing communication link to estimate their location. In this paper, we present a comparison of three proximity methods, one fingerprinting method and three ranging methods using Sigfox communication messages. To evaluate these methods, we use a ground truth Sigfox dataset which we collected in a large urban environment, as well as new evaluation data that was collected in the same urban area. With a mean estimation error of 586 m, our fingerprinting method achieves the best result compared to other signal strength localization methods." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable, based on a series of dedicated latency tests targeting relevant real-time configurations.
Containerizing control applications has been discussed in the recent literature. @cite_6 , for instance, presented the concept of containerizing full control applications as a means to decouple the hardware and software life-cycles of an industrial automation system. Due to the performance overhead of hardware virtualization, the authors state that OS-level virtualization is a suitable technique to cope with the timing demands of automation systems. They propose two approaches to migrate a control application into containers on top of a patched real-time Linux-based operating system: (i) a given system is decomposed into subsystems, where a set of sub-units performs localized computations that are then actuated through a global decision maker; (ii) devices are defined as a set of processes, where each process is an isolated standalone solution with a shared communication stack. Based on this, systems are divided into specialized modules, allowing a granular development and update strategy. The authors demonstrate the feasibility of real-time applications in conjunction with containerization, even though they express concerns about the maturity of the technical solution presented.
{ "cite_N": [ "@cite_6" ], "mid": [ "2487718634", "1623427068", "2534821673", "2792724737" ], "abstract": [ "Virtualization is entering the world of real-time embedded systems. Industrial automation systems, in particular, can benefit from what virtualization has to offer: flexible consolidation of applications on different hardware types and scales, extending the life-time of legacy code or decoupling software and hardware lifecycles. However, such systems require a light-weight virtualization technology in order to be able to maintain real-time behavior while dealing with real-time data. This paper sets out to investigate the applicability of container-based OS-level virtualization technology to industrial automation systems. To this end, we provide insights into the capabilities of containers to achieve flexible consolidation and easy migration of industrial automation applications as well as into the container technology readiness with respect to the fundamental requirement of industrial automation systems, namely performing timely control actions based on real-time data. Moreover, we provide an empirical study of the performance overhead introduced by containers based on micro-benchmarks that capture the characteristics of targeted industrial automation applications.", "High Performance Computing (HPC) applications require systems with environments for maximum use of limited resources to facilitate efficient computations. However, these systems are faced with a large trade-off between efficient resource allocation and minimum execution times for the applications executed on them. Also, deploying applications in newer environments is exacting. To alleviate this challenge, container-based systems are recently being deployed to reduce the trade-off. In this paper, we investigate container-based technology as an efficient virtualization technology for running high performance scientific applications. We select Docker as the container-based technology for our test bed. We execute autodock3, a molecular modeling simulation software mostly used for Protein-ligand docking, in Docker containers and VMs created using OpenStack. We compare the execution times of the docking process in both Docker containers and in VMs.", "Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present a an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. 
We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines.", "Abstract Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable, based on a series of dedicated latency tests targeting relevant real-time configurations.
Goldschmidt and Hauck-Stattelmann in @cite_9 perform benchmark tests on modularized industrial Programmable Logic Controller (PLC) applications, analyzing the impact of container-based virtualization on real-time constraints. As there is no solution for migrating legacy PLC code, moving to application containers could extend a system's lifetime beyond the physical device's limits. Even though the tests showed worst-case latencies of the order of @math on Intel-based hosts, the authors argue that the container engines may be stripped down and optimized for real-time execution. In a follow-up work @cite_28 , a possible multi-purpose architecture was described and tested in a real-world use case. The results show worst-case latencies in the range of @math for a Raspberry Pi single-board computer, making the solution viable for cycle times in the range of @math to @math . The authors state that topics such as memory overhead, containers' restricted access and problems due to technology immaturity are still to be investigated.
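To give a flavour of how such worst-case latencies are typically measured, here is a crude, cyclictest-style sketch of a periodic task scheduled under SCHED_FIFO; the priority, period, busy-wait strategy and iteration count are illustrative assumptions and not the benchmark used in the cited works.

import os, time

def measure_jitter(period_us=1000, iterations=10_000):
    # Run a periodic loop with real-time priority and record how late each
    # activation fires relative to its intended release time.
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))  # needs root / CAP_SYS_NICE
    except PermissionError:
        print("warning: running without real-time priority")
    period_ns = period_us * 1_000
    next_release = time.monotonic_ns() + period_ns
    worst_ns = 0
    for _ in range(iterations):
        while time.monotonic_ns() < next_release:       # busy-wait until the release time
            pass
        worst_ns = max(worst_ns, time.monotonic_ns() - next_release)
        next_release += period_ns
    return worst_ns / 1_000                              # worst-case lateness in microseconds

On a stock kernel the returned lateness can grow large under load, whereas a PREEMPT_RT-patched host typically keeps it much lower; this is the kind of difference the cited benchmarks quantify, both on bare metal and inside containers.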
{ "cite_N": [ "@cite_28", "@cite_9" ], "mid": [ "2534821673", "2792724737", "2156678748", "2885043734" ], "abstract": [ "Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present a an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines.", "Abstract Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines.", "Commercial Off-The-Shelf (COTS) processors are now commonly used in real-time embedded systems. The characteristics of these processors fulfill system requirements in terms of time-to-market, low cost, and high performance-per-watt ratio. However, multithreaded (MT) processors are still not widely used in real-time systems because the timing analysis is too complex. In MT processors, simultaneously-running tasks share and compete for processor resources, so the timing analysis has to estimate the possible impact that the inter-task interferences have on the execution time of the applications. In this paper, we propose a method that quantifies the slowdown that simultaneously-running tasks may experience due to collision in shared processor resources. 
To that end, we designed benchmarks that stress specific processor resources and we used them to (1) estimate the upper limit of a slowdown that simultaneously-running tasks may experience because of collision in different shared processor resources, and (2) quantify the sensitivity of time-critical applications to collision in these resources. We used the presented method to determine if a given MT processor is a good candidate for systems with timing requirements. We also present a case study in which the method is used to analyze three multithreaded architectures exhibiting different configurations of resource sharing. Finally, we show that measuring the slowdown that real applications experience when simultaneously-running with resource-stressing benchmarks is an important step in measurement-based timing analysis. This information is a base for incremental verification of MT COTS architectures.", "Internet-of-Things and cyber-pyhsical systems are gaining ongoing importance in the field of industrial automation. At the same time, the production becomes more and more flexible. Therefore, the main parts, such as Programmable Logic Controllers (PLCs) need to be flexible as well. However, today's PLCs are monolithic and distributed as a single deployable piece of software. In this paper we propose an architecture that uses containers to modularize real-time control applications, messaging for communication and a hardware abstraction layer to improve maintainability, re-usability and flexibility. Using a prototypical implementation of the architecture, we validate the feasibility of this approach through a benchmark" ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable, based on a series of dedicated latency tests targeting relevant real-time configurations.
@cite_0 address architectural details not discussed in @cite_9 and @cite_28 . These additions include the concrete run-time environment and how deterministic communication between containers and field devices may be achieved in a novel container-based architecture. They propose a Linux-based host operating system, including both the single-kernel, preemption-focused PREEMPT-RT patch and the co-kernel-oriented Xenomai. With the latter, the approach exhibits better predictability, although it suffers from security concerns introduced by the system files Xenomai requires to be exposed; for this reason, they suggest limiting its use to safety-critical code execution. They analyze and discuss inter-process messaging in detail, focusing on the specific properties needed in real-time applications. Finally, they implement an orchestration run-time managing intra-container communication and show that task times as low as @math are possible.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_28" ], "mid": [ "2131355481", "2010670047", "1503814339", "2092038045" ], "abstract": [ "Management, allocation and scheduling of heterogeneous resources for complex distributed real-time applications is a challenging problem. Timing constraints of applications may be fulfilled by the proper use of real-time scheduling policies, admission control and enforcement of timing constraints. However, it is not easy to design basic infrastructure services that allow for easy access to the allocation of multiple heterogeneous resources in a distributed environment. In this paper, we present a middleware for providing distributed soft real-time applications with a uniform API for reserving heterogeneous resources with real-time scheduling capabilities in a distributed environment. The architecture relies on standard POSIX OS facilities, such as time management and standard TCP IP networking services, and it is designed around CORBA, in order to facilitate modularity, flexibility and portability of the applications using it. However, real-time scheduling is supported by proper extensions at the kernel-level, plugged within the framework by means of dedicated resource managers. Our current implementation on Linux supports the reservation of the CPU, disk and network bandwidth. However, additional resource managers supporting alternative real-time schedulers for these resources, as well as additional types of resources, may be easily added. We present experimental results gathered on both synthetic applications and a real multimedia video streaming case study, showing the advantages deriving from the use of the proposed middleware. Finally, overhead figures are reported, showing the sustainability of the approach for a wide class of complex, distributed, soft real-time applications.", "Real-time communication (RTC) applications such as VoIP, video conferencing, and online gaming are flourishing. To adapt and deliver good performance, these applications require accurate estimations of short-term network performance metrics, e.g., loss rate, one-way delay, and throughput. However, the wide variation in mobile cellular network performance makes running RTC applications on these networks problematic. To address this issue, various performance adaptation techniques have been proposed, but one common problem of such techniques is that they only adjust application behavior reactively after performance degradation is visible. Thus, proactive adaptation based on accurate short-term, fine-grained network performance prediction can be a preferred alternative that benefits RTC applications. In this study, we show that forecasting the short-term performance in cellular networks is possible in part due to the channel estimation scheme on the device and the radio resource scheduling algorithm at the base station. We develop a system interface called PROTEUS, which passively collects current network performance, such as throughput, loss, and one-way delay, and then uses regression trees to forecast future network performance. PROTEUS successfully predicts the occurrence of packet loss within a 0.5s time window for 98 of the time windows and the occurrence of long one-way delay for 97 of the time windows. We also demonstrate how PROTEUS can be integrated with RTC applications to significantly improve the perceptual quality. 
In particular, we increase the peak signal-to-noise ratio of a video conferencing application by up to 15dB and reduce the perceptual delay in a gaming application by up to 4s.", "The cloud computing infrastructure relies on virtualized servers that provide isolation across guest OS's through sand boxing. This isolation was demonstrated to be imperfect in past work which exploited hardware level information leakages to gain access to sensitive information across co-located virtual machines (VMs). In response virtualization companies and cloud services providers have disabled features such as deduplication to prevent such attacks. In this work, we introduce a fine-grain cross-core cache attack that exploits access time variations on the last level cache. The attack exploits huge pages to work across VM boundaries without requiring deduplication. No configuration changes on the victim OS are needed, making the attack quite viable. Furthermore, only machine co-location is required, while the target and victim OS can still reside on different cores of the machine. Our new attack is a variation of the prime and probe cache attack whose applicability at the time is limited to L1 cache. In contrast, our attack works in the spirit of the flush and reload attack targeting the shared L3 cache instead. Indeed, by adjusting the huge page size our attack can be customized to work virtually at any cache level size. We demonstrate the viability of the attack by targeting an Open SSL1.0.1f implementation of AES. The attack recovers AES keys in the cross-VM setting on Xen 4.1 with deduplication disabled, being only slightly less efficient than the flush and reload attack. Given that huge pages are a standard feature enabled in the memory management unit of OS's and that besides co-location no additional assumptions are needed, the attack we present poses a significant risk to existing cloud servers.", "In many-core architectures different distributed applications are executed in parallel. The applications may need hard guarantees for communication with respect to latency and throughput to cope with their constraints. Networks on Chip (NoC) are the most promising approach to handle these requirements in architectures with a large number of cores. Dynamic reservation of communication resources in virtual channel NoCs is used to enable quality of service for concurrent communication. This paper presents a router design supporting best effort and connection-oriented guaranteed service communication. The communication resources are shared dynamically between the two communication schemes. The key contribution is a concept for virtual channel reservation supporting different bandwidth and latency guarantees for simultaneous guaranteed service communication flows. Different to state-of-the-art, the used scheduling approach allows to give hard guarantees regarding throughput and latency. The concept enables to adjust the bandwidth and latency requirements of connections at run-time to cope with dynamically changing application requirements. Due to its distributed reservation process and resource allocation it offers good scalability for many-core architectures. The implementation of a router and the required extension of a network interface to support the proposed concept are presented. The software perspective is discussed. An algorithm is presented that is used to establish guaranteed service connections according to the applications bandwidth requirements. 
Simulation results are compared to state-of-the-art arbitration schemes and show significant improvements of latency and throughput, e.g. for an MPEG4 application. Synthesis results expose the low area overhead and impact on energy consumption which makes the concepts highly attractive for QoS-constraint many-core architectures." ] }
1908.04574
2968985217
The resolution of domain names into IP addresses can significantly delay connection establishments on the web. Moreover, the common use of recursive DNS resolvers presents a privacy risk, as they can closely monitor the user's browsing activities. In this paper, we present a novel HTTP response header allowing web servers to provide their clients with relevant DNS records. Our results indicate that this resolver-less DNS mechanism allows user agents to save the DNS lookup time for subsequent connection establishments. We find that this proposal saves at least 80 ms per DNS lookup for the one percent of users having the longest round-trip times towards their recursive resolver. Furthermore, our proposal decreases the number of DNS lookups and thus improves the privacy posture of the user towards the used recursive resolver. Comparing the security guarantees of traditional DNS to our proposal, we find that resolver-less DNS achieves at least the same security properties. In fact, it even improves the user's resilience against censorship through tampered DNS resolvers.
The DNS Anonymity Service combines a broadcast mechanism for popular DNS records with an anonymity network for conducting additional DNS lookups @cite_15 . Unlike our proposal, the DNS Anonymity Service causes additional network traffic for downloading the broadcasted DNS records and suffers from additional network latency when the client resolves hostnames via the anonymity network. Overall, the performance gains of this clean-slate approach remain unclear, as they depend on the user's browsing behavior. Furthermore, this approach does not integrate well into the existing DNS and requires additional Internet infrastructure to be deployed.
{ "cite_N": [ "@cite_15" ], "mid": [ "37081517", "2011441851", "1975016298", "2083158002" ], "abstract": [ "We propose a dedicated DNS Anonymity Service which protects users' privacy. The design consists of two building blocks: a broadcast scheme for the distribution of a \"top list\" of DNS hostnames, and low-latency Mixes for requesting the remaining hostnames unobservably. We show that broadcasting the 10,000 most frequently queried hostnames allows zero-latency lookups for over 80 of DNS queries at reasonable cost. We demonstrate that the performance of the previously proposed Range Queries approach severely suffers from high lookup latencies in a real-world scenario.", "Popular anonymous communication systems often require sending packets through a sequence of relays on dilated paths for strong anonymity protection. As a result, increased end-to-end latency renders such systems inadequate for the majority of Internet users who seek an intermediate level of anonymity protection while using latency-sensitive applications, such as Web applications. This paper serves to bridge the gap between communication systems that provide strong anonymity protection but with intolerable latency and non-anonymous communication systems by considering a new design space for the setting. More specifically, we explore how to achieve near-optimal latency while achieving an intermediate level of anonymity with a weaker yet practical adversary model (i.e., protecting an end-host's identity and location from servers) such that users can choose between the level of anonymity and usability. We propose Lightweight Anonymity and Privacy (LAP), an efficient network-based solution featuring lightweight path establishment and stateless communication, by concealing an end-host's topological location to enhance anonymity against remote tracking. To show practicality, we demonstrate that LAP can work on top of the current Internet and proposed future Internet architectures.", "Existing IP anonymity systems tend to sacrifice one of low latency, high bandwidth, or resistance to traffic-analysis. High-latency mix-nets like Mixminion batch messages to resist traffic-analysis at the expense of low latency. Onion routing schemes like Tor deliver low latency and high bandwidth, but are not designed to withstand traffic analysis. Designs based on DC-nets or broadcast channels resist traffic analysis and provide low latency, but are limited to low bandwidth communication. In this paper, we present the design, implementation, and evaluation of Aqua, a high-bandwidth anonymity system that resists traffic analysis. We focus on providing strong anonymity for BitTorrent, and evaluate the performance of Aqua using traces from hundreds of thousands of actual BitTorrent users. We show that Aqua achieves latency low enough for efficient bulk TCP flows, bandwidth sufficient to carry BitTorrent traffic with reasonable efficiency, and resistance to traffic analysis within anonymity sets of hundreds of clients. We conclude that Aqua represents an interesting new point in the space of anonymity network designs.", "Name services are critical for mapping logical resource names to physical resources in large-scale distributed systems. The Domain Name System (DNS) used on the Internet, however, is slow, vulnerable to denial of service attacks, and does not support fast updates. 
These problems stem fundamentally from the structure of the legacy DNS.This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through proactive caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement. Performance measurements from a real-life deployment of the system in PlanetLab shows that CoDoNS provides fast lookups, automatically reconfigures around faults without manual involvement and thwarts distributed denial of service attacks by promptly redistributing load across nodes." ] }
1908.04574
2968985217
The resolution of domain names into IP addresses can significantly delay connection establishments on the web. Moreover, the common use of recursive DNS resolvers presents a privacy risk, as they can closely monitor the user's browsing activities. In this paper, we present a novel HTTP response header allowing web servers to provide their clients with relevant DNS records. Our results indicate that this resolver-less DNS mechanism allows user agents to save the DNS lookup time for subsequent connection establishments. We find that this proposal saves at least 80 ms per DNS lookup for the one percent of users having the longest round-trip times towards their recursive resolver. Furthermore, our proposal decreases the number of DNS lookups and thus improves the privacy posture of the user towards the used recursive resolver. Comparing the security guarantees of traditional DNS to our proposal, we find that resolver-less DNS achieves at least the same security properties. In fact, it even improves the user's resilience against censorship through tampered DNS resolvers.
DNS prefetching is a popular performance optimization in which browsers start resolving the hostnames of hyperlinks before the user clicks on them. However, research on this mechanism indicates severe privacy problems. For example, it was shown that the recursive resolver can even infer the search terms the user entered into a search engine from the prefetched lookups @cite_6 .
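The following is a minimal sketch of what speculative DNS prefetching amounts to from the resolver's point of view, illustrating why hostnames of links the user never follows (for example, search result entries) still become visible to it; the thread pool size and error handling are illustrative assumptions.

import socket
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse

def prefetch_dns(hyperlinks, workers=8):
    # Resolve the hostnames of all hyperlinks on a page ahead of any click,
    # as a browser's DNS prefetching would; every lookup below is observable
    # by the recursive resolver, whether or not the link is ever followed.
    hosts = list({urlparse(link).hostname for link in hyperlinks} - {None})
    def resolve(host):
        try:
            return socket.gethostbyname(host)
        except OSError:
            return None
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(hosts, pool.map(resolve, hosts)))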
{ "cite_N": [ "@cite_6" ], "mid": [ "1512251782", "182451592", "2181959400", "2187405838" ], "abstract": [ "A recent trend in optimizing Internet browsing speed is to optimistically pre-resolve (or prefetch) DNS resolutions. While the practical benefits of doing so are still being debated, this paper attempts to raise awareness that current practices could lead to privacy threats that are ripe for abuse. More specifically, although the adoption of several browser optimizations have already raised security concerns, we examine how prefetching amplifies disclosure attacks to a degree where it is possible to infer the likely search terms issued by clients using a given DNS resolver. The success of these inference attacks relies on the fact that prefetching inserts a significant amount of context into a resolver's cache, allowing an adversary to glean far more detailed insights than when this feature is turned off.", "Computer systems have enjoyed an exponential growth in processor speed for the past 20 years, while main memory speed has improved only moderately. Today a cache miss to main memory takes hundreds of processor cycles. Recent studies have demonstrated that on commercial databases, about 50 or more of execution time in memory is often wasted due to cache misses. In light of this problem, a number of recent studies focused on reducing the number of cache misses of database algorithms. In this thesis, we investigate a different approach: reducing the impact of cache misses through a technique called cache prefetching. Since prefetching for sequential array accesses has been well studied, we are interested in studying non-contiguous access patterns found in two classes of database algorithms: the B+-Tree index algorithm and the hash join algorithm. We re-examine their designs with cache prefetching in mind, and combine prefetching and data locality optimizations to achieve good cache performance. For B+-Trees, we first propose and evaluate a novel main memory index structure, Prefetching B+Trees, which uses prefetching to accelerate two major access patterns of B+-Tree indices: searches and range scans. We then apply our findings in the development of a novel index structure, Fractal Prefetching B+-Trees, that optimizes index operations both for CPU cache performance and for disk performance in commercial database systems by intelligently embedding cache-optimized trees into disk pages. For hash joins, we first exploit cache prefetching separately for the I O partition phase and the join phase of the algorithm. We propose and evaluate two techniques, Group Prefetching and Software-Pipelined Prefetching, that exploit inter-tuple parallelism to overlap cache misses across the processing of multiple tuples. Then we present a novel algorithm, Inspector Joins, that exploits the free information obtained from one pass of the hash join algorithm to improve the performance of a later pass. This new algorithm addresses the memory bandwidth sharing problem in shared-bus multiprocessor systems. We compare our techniques against state-of-the-art cache-friendly algorithms for B+-Trees and hash joins through both simulation studies and real machine experiments. 
Our experimental results demonstrate dramatic performance benefits of our cache prefetching enabled techniques.", "In this paper, we present our design of a high performance prefetcher, which exploits various localities in both local cache-miss streams (misses generated from the same instruction) and the global cache-miss address stream (the misses from different instructions). Besides the stride and context localities that have been exploited in previous work, we identify new data localities and incorporate novel prefetching algorithms into our design. In this work, we also study the (largely overlooked) importance of eliminating redundant prefetches. We use logic to remove local (by the same instruction) redundant prefetches and we use a Bloom filter or miss status handling registers (MSHRs) to remove global (by all instructions) redundant prefetches. We evaluate three different design points of the proposed architecture, trading off performance for complexity and latency efficiency. Our experimental results based on a set of SPEC 2006 benchmarks show that the proposed design significantly improves the performance (over 1.6X for our highest performance design point) at a small hardware cost for various processor, cache and memory bandwidth configurations.", "Hardware data prefetching is widely adopted to hide long memory latency. A hardware data prefetcher predicts the memory address that will be accessed in the near future and fetches the data at the predicted address into the cache memory in advance. To detect memory access patterns such as a constant stride, most existing prefetchers use differences between addresses in a sequence of memory accesses. However, prefetching based on the differences often fail to detect memory access patterns when aggressive optimizations are applied. For example, out-of-order execution changes the memory access order. It causes inaccurate prediction because the sequence of memory addresses used to calculate the difference are changed by the optimization. To overcome the problems of existing prefetchers, we propose Access Map Pattern Matching (AMPM). The AMPM prefetcher has two key components: a memory access map and hardware pattern matching logic. The memory access map is a bitmap-like data structure for holding past memory accesses. The AMPM prefetcher divides the memory address space into memory regions of a fixed size. The memory access map is mapped to the memory region. Each entry in the bitmap-like data structure is mapped to each cache line in the region. Once the bitmap is mapped to the memory region, the entry records whether the corresponding line has already been accessed or not. The AMPM prefetcher detects memory access patterns from the bitmap-like data structure that is mapped to the accessed region. The hardware pattern matching logic is used to detect stride access patterns in the memory access map. The result of pattern matching is affected by neither the memory access order nor the instruction addresses because the bitmap-like data structure holds neither the information that reveals the memory access order of past memory accesses nor the instruction addresses. Therefore, the AMPM prefetcher achieves high performance even when such aggressive optimizations are applied. The AMPM prefetcher is evaluated by performing cycle-accurate simulations using the memory-intensive benchmarks in the SPEC CPU2006 and the NAS Parallel Benchmark. 
In an aggressively optimized environment, the AMPM prefetcher improves prefetch coverage, while the other state-of-the-art prefetcher degrades the prefetch coverage significantly. As a result, the AMPM prefetcher increases IPC by 32.4 compared to state-of-the-art prefetcher." ] }
1908.04727
2967991851
A chain in the unit @math -cube is a set @math such that for every @math and @math in @math we either have @math for all @math , or @math for all @math . We consider subsets, @math , of the unit @math -cube @math that satisfy \[ \mathrm{card}(A \cap C) \le k , \quad \text{for all chains } C \subseteq [0,1]^n , \] where @math is a fixed positive integer. We refer to such a set @math as a @math -antichain. We show that the @math -dimensional Hausdorff measure of a @math -antichain in @math is at most @math and that the bound is asymptotically sharp. Moreover, we conjecture that there exist @math -antichains in @math whose @math -dimensional Hausdorff measure equals @math and we verify the validity of this conjecture when @math .
When @math this conjecture is clearly true, and when @math it is observed in @cite_0 that the validity of the conjecture is an immediate consequence of the following well-known result. Recall that a singular function @math is a strictly decreasing function whose derivative equals zero almost everywhere.
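To illustrate why singular functions appear here, the following is a reconstruction of the standard length estimate for 1-antichains in the unit square; it sketches the well-known argument and is not a quotation from @cite_0 .

```latex
% Reconstruction of the standard estimate for 1-antichains in [0,1]^2
% (a sketch of the well-known argument, not a quotation from the cited work).
% A 1-antichain lies on the graph of a decreasing function f : [0,1] \to [0,1],
% and for every partition 0 = t_0 < t_1 < \dots < t_m = 1,
\sum_{i=1}^{m} \sqrt{(t_i - t_{i-1})^2 + \big(f(t_{i-1}) - f(t_i)\big)^2}
  \;\le\; \sum_{i=1}^{m} \Big[(t_i - t_{i-1}) + \big(f(t_{i-1}) - f(t_i)\big)\Big]
  \;\le\; 1 + 1 \;=\; 2 ,
% so the 1-dimensional Hausdorff measure of a 1-antichain in [0,1]^2 is at most 2.
% The bound is approached only when the graph makes (almost) no diagonal progress
% at any scale, i.e. when f is singular (f' = 0 almost everywhere) and spans the
% full square; this is why strictly decreasing singular functions (for instance
% 1 - L_a with L_a Lebesgue's strictly increasing singular function, a \neq 1/2)
% are the natural extremal examples.
```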
{ "cite_N": [ "@cite_0" ], "mid": [ "2110785107", "2067941796", "2119973367", "2953316345" ], "abstract": [ "We consider the functions @math defined as the @math th partial derivative of Lebesgue's singular function @math with respect to @math at @math . This sequence includes a multiple of the Takagi function as the case @math . We show that @math is continuous but nowhere differentiable for each @math , and determine the Holder order of @math . From this, we derive that the Hausdorff dimension of the graph of @math is one. Using a formula of Lomnicki and Ulam, we obtain an arithmetic expression for @math using the binary expansion of @math , and use this to find the sets of points where @math and @math take on their absolute maximum and minimum values. We show that these sets are topological Cantor sets. In addition, we characterize the sets of local maximum and minimum points of @math and @math .", "We consider the behavior of the GMRES method for solving a linear system @math when @math is singular or nearly so, i.e., ill conditioned. The (near) singularity of @math may or may not affect the performance of GMRES, depending on the nature of the system and the initial approximate solution. For singular @math , we give conditions under which the GMRES iterates converge safely to a least-squares solution or to the pseudoinverse solution. These results also apply to any residual minimizing Krylov subspace method that is mathematically equivalent to GMRES. A practical procedure is outlined for efficiently and reliably detecting singularity or ill conditioning when it becomes a threat to the performance of GMRES.", "We show the existence of a large family of representations supported by the orbit closure of the determinant. However, the validity of our result is based on the validity of the celebrated ‘Latin square conjecture’ due to Alon and Tarsi or, more precisely, on the validity of an equivalent ‘column Latin square conjecture’ due to Huang and Rota.", "In this paper, we deduce the vanishing of Selmer groups for the Rankin-Selberg convolution of a cusp form with a theta series of higher weight from the nonvanishing of the associated @math -value, thus establishing the rank 0 case of the Bloch-Kato conjecture in these cases. Our methods are based on the connection between Heegner cycles and @math -adic @math -functions, building upon recent work of Bertolini, Darmon and Prasanna, and on an extension of Kolyvagin's method of Euler systems to the anticyclotomic setting. In the course of the proof, we also obtain a higher weight analogue of Mazur's conjecture (as proven in weight 2 by Cornut-Vatsal), and as a consequence of our results, we deduce from Nekovar's work a proof of the parity conjecture in this setting." ] }
1908.04090
2967417443
Although many tools have been presented in the research literature of software visualization, there is little evidence of their adoption. To choose a suitable visualization tool, practitioners need to analyze various characteristics of tools such as their supported software concerns and level of maturity. Indeed, some tools can be prototypes for which the lifespan is expected to be short, whereas others can be fairly mature products that are maintained for a longer time. Although such characteristics are often described in papers, we conjecture that practitioners willing to adopt software visualizations require additional support to discover suitable visualization tools. In this paper, we elaborate on our efforts to provide such support. To this end, we systematically analyzed research papers in the literature of software visualization and curated a catalog of 70 available tools that employ various visualization techniques to support the analysis of multiple software concerns. We further encapsulate these characteristics in an ontology. VISON, our software visualization ontology, captures these semantics as concepts and relationships. We report on early results of usage scenarios that demonstrate how the ontology can support (i) developers in finding suitable tools for particular development concerns, and (ii) researchers who propose new software visualization tools in identifying a baseline tool for a controlled experiment.
Some studies examine software visualization tools, in particular, to create guidelines for designing and evaluating software visualizations. For example, Storey et al. @cite_52 examine 12 software visualization tools and propose a framework to evaluate software visualizations based on intent, information, presentation, interaction, and effectiveness. Sensalire et al. @cite_59 @cite_6 classify the features users require in software visualization tools. To this end, they elaborate on lessons learned from evaluating 20 software visualization tools and identify dimensions that can help design an evaluation and then analyze the results. In our investigation, we do not attempt to provide a comprehensive catalog of software visualization tools, but we seek to provide a means to boost software visualization discoverability.
{ "cite_N": [ "@cite_52", "@cite_59", "@cite_6" ], "mid": [ "2014707001", "2566045746", "2147787175", "2159601092" ], "abstract": [ "We provide an evaluation of 15 software visualization tools applicable to corrective maintenance. The tasks supported as well as the techniques used are presented and graded based on the support level. By analyzing user acceptation of current tools, we aim to help developers to select what to consider, avoid or improve in their next releases. Tool users can also recognize what to broadly expect (and what not) from such tools, thereby supporting an informed choice for the tools evaluated here and for similar tools.", "Software visualization can be very useful for answering complex questions that arise in the software development process. Although modern visualization engines offer expressive APIs for building such visualizations, developers often have difficulties to (1) identify a suitable visualization technique to answer their particular development question, and to (2) implement that visualization using the existing APIs. Examples that illustrate the usage of an engine to build concrete visualizations offer a good starting point, but developers may have to traverse long lists of categories and analyze examples one-by-one to find a suitable one. We propose MetaVis, a tool that fills the gap between existing visualization techniques and their practical applications during software development. We classify questions frequently formulated by software developers and for each, based on our expertise, identify suitable visualizations. MetaVis uses tags mined from these questions to offer a tag-iconic cloud-based visualization. Each tag links to suitable visualizations that developers can explore, modify and try out. We present initial results of an implementation of MetaVis in the Pharo programming environment. The tool visualizes 76 developers' questions assigned to 49 visualization examples.", "Many software visualization (SoftVis) tools are continuously being developed by both researchers as well as software development companies. In order to determine if the developed tools are effective in helping their target users, it is desirable that they are exposed to a proper evaluation. Despite this, there is still lack of a general guideline on how these evaluations should be carried out and many of the tool developers perform very limited or no evaluation of their tools. Each person that carries out one evaluation, however, has experiences which, if shared, can guide future evaluators. This paper presents the lessons learned from evaluating over 20 SoftVis tools with over 90 users in five different studies spread on a period of over two years. The lessons covered include the selection of the tools, tasks, as well as evaluation participants. Other discussed points are related to the duration of the evaluation experiment, its location, the procedure followed when carrying out the experiment, as well as motivation of the participants. Finally, an analysis of the lessons learned is shown with the hope that these lessons will be of some assistance to future SoftVis tool evaluators.", "Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. 
During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions." ] }
1908.04090
2967417443
Although many tools have been presented in the research literature of software visualization, there is little evidence of their adoption. To choose a suitable visualization tool, practitioners need to analyze various characteristics of tools such as their supported software concerns and level of maturity. Indeed, some tools can be prototypes for which the lifespan is expected to be short, whereas others can be fairly mature products that are maintained for a longer time. Although such characteristics are often described in papers, we conjecture that practitioners willing to adopt software visualizations require additional support to discover suitable visualization tools. In this paper, we elaborate on our efforts to provide such support. To this end, we systematically analyzed research papers in the literature of software visualization and curated a catalog of 70 available tools that employ various visualization techniques to support the analysis of multiple software concerns. We further encapsulate these characteristics in an ontology. VISON, our software visualization ontology, captures these semantics as concepts and relationships. We report on early results of usage scenarios that demonstrate how the ontology can support (i) developers in finding suitable tools for particular development concerns, and (ii) researchers who propose new software visualization tools in identifying a baseline tool for a controlled experiment.
Some other studies present taxonomies that characterize software visualization tools. Myers @cite_14 classifies software visualization tools based on whether they focus on code, data, or algorithms; and whether they are implemented in a static or dynamic fashion. Price et al. @cite_24 present a taxonomy of software visualization tools based on six dimensions: scope, content, form, method, interaction, and effectiveness. Maletic et al. @cite_62 propose a taxonomy of five dimensions to classify software visualization tools: tasks, audience, target, representation, and medium. Schots et al. @cite_54 extend this taxonomy by adding two dimensions: resource requirements of visualizations, and evidence of their utility. Merino et al. @cite_11 add "needs" as a main characteristic of software visualization tools. In their context, "needs" refers to the set of questions that are supported by software visualization tools. Although we consider these studies crucial for reflecting on the software visualization domain, we think that practitioners may require more comprehensive support to identify a suitable tool. In particular, we believe that the semantics of concepts and their relationships are often missing in taxonomies and other classifications. The use of an ontology enforces the analysis of these relationships, which can play an important role in identifying a suitable visualization tool.
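As an illustration of the kind of explicit relationship semantics we have in mind, the following is a minimal sketch using the rdflib library; the namespace, classes, properties, and tool names are hypothetical and are not VISON's actual vocabulary.

```python
# Minimal sketch of encoding tool/concern relationships as explicit triples.
# The vocabulary (namespace, classes, properties, tool names) is hypothetical
# and does not reproduce VISON's actual terms. Requires the third-party `rdflib`.
from rdflib import Graph, Literal, Namespace, RDF

SV = Namespace("http://example.org/softvis#")   # hypothetical namespace
g = Graph()

# Assert that a (hypothetical) tool uses a technique and supports a concern.
g.add((SV.ToolA, RDF.type, SV.VisualizationTool))
g.add((SV.ToolA, SV.usesTechnique, SV.TreeMap))
g.add((SV.ToolA, SV.supportsConcern, SV.SoftwareEvolution))
g.add((SV.ToolA, SV.maturityLevel, Literal("prototype")))

# A practitioner's question becomes a query instead of a manual scan of papers:
# "Which visualization tools support the analysis of software evolution?"
results = g.query(
    """
    SELECT ?tool WHERE {
        ?tool a sv:VisualizationTool ;
              sv:supportsConcern sv:SoftwareEvolution .
    }
    """,
    initNs={"sv": SV},
)
for row in results:
    print(row.tool)
```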
{ "cite_N": [ "@cite_14", "@cite_62", "@cite_54", "@cite_24", "@cite_11" ], "mid": [ "2163225273", "2070921605", "2566045746", "2149784077" ], "abstract": [ "A number of taxonomies to classify and categorize software visualization systems have been proposed in the past. Most notable are those presented by Price (1993) and Roman (1993). While these taxonomies are an accurate representation of software visualization issues, they are somewhat skewed with respect to current research areas on software visualization. We revisit this important work and propose a number of re-alignments with respect to addressing the software engineering tasks of large-scale development and maintenance. We propose a framework to emphasize the general tasks of understanding and analysis during development and maintenance of large-scale software systems. Five dimensions relating to the what, where, how, who, and why of software visualization make up this framework. The focus of this work is not so much as to classify software visualization system, but to point out the need for matching the method with the task. Finally, a number of software visualization systems are examined under our framework to highlight the particular problems each addresses.", "In the early 1980s researchers began building systems to visualize computer programs and algorithms using newly emerging graphical workstation technology. After more than a decade of advances in interface technology, a large variety of systems has been built and many different aspects of the visualization process have been investigated. As in any new branch of a science, a taxonomy is required so that researchers can use a common language to discuss the merits of existing systems, classify new ones (to see if they really are new) and identify gaps which suggest promising areas for further development. Several authors have suggested taxonomies for these visualization systems, but they have been ad hoc and have relied on only a handful of characteristics to describe a large and diverse area of work. Another major drawback of these taxonomies is their inability to accommodate expansion: there is no clear way to add new categories when the need arises. In this paper we present a detailed taxonomy of systems for the visualization of computer software. This taxonomy was derived from an established black-box model of software and is composed of a hierarchy with six broad categories at the top and over 30 leaf-level nodes at four hierarchical levels. We describe 12 important systems in detail and apply the taxonomy to them in order to illustrate its features. After discussing each system in this context, we analyse its coverage of the categories and present a research agenda for future work in the area.", "Software visualization can be very useful for answering complex questions that arise in the software development process. Although modern visualization engines offer expressive APIs for building such visualizations, developers often have difficulties to (1) identify a suitable visualization technique to answer their particular development question, and to (2) implement that visualization using the existing APIs. Examples that illustrate the usage of an engine to build concrete visualizations offer a good starting point, but developers may have to traverse long lists of categories and analyze examples one-by-one to find a suitable one. 
We propose MetaVis, a tool that fills the gap between existing visualization techniques and their practical applications during software development. We classify questions frequently formulated by software developers and for each, based on our expertise, identify suitable visualizations. MetaVis uses tags mined from these questions to offer a tag-iconic cloud-based visualization. Each tag links to suitable visualizations that developers can explore, modify and try out. We present initial results of an implementation of MetaVis in the Pharo programming environment. The tool visualizes 76 developers' questions assigned to 49 visualization examples.", "Software is usually complex and always intangible. In practice, the development and maintenance processes are time-consuming activities mainly because software complexity is difficult to manage. Graphical visualization of software has the potential to result in a better and faster understanding of its design and functionality, thus saving time and providing valuable information to improve its quality. However, visualizing software is not an easy task because of the huge amount of information comprised in the software. Furthermore, the information content increases significantly once the time dimension to visualize the evolution of the software is taken into account. Human perception of information and cognitive factors must thus be taken into account to improve the understandability of the visualization. In this paper, we survey visualization techniques, both 2D- and 3D-based, representing the static aspects of the software and its evolution. We categorize these techniques according to the issues they focus on, in order to help compare them and identify the most relevant techniques and tools for a given problem." ] }
1908.04008
2967406324
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via the statistics of a batch of images; this batch information can be regarded as batch noise that BN introduces into the features of an instance. We offer the point of view that a self-attention mechanism can help regulate this batch noise by enhancing instance-specific information. Based on this view, we propose combining BN with a self-attention mechanism to adjust the batch noise, and we give an attention-based version of BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates channel information by a simple linear transformation. IEBN outperforms BN with only a light parameter increment across various visual tasks, network structures, and benchmark data sets. Moreover, even under the attack of synthetic noise, IEBN can still stabilize network training with good generalization. The code of IEBN is available at this https URL
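As a rough illustration of the mechanism summarized above, namely BN followed by a per-channel recalibration computed from an instance-specific summary through a simple linear transformation, here is a minimal PyTorch sketch; the pooling choice and exact parameterization are assumptions made for illustration and not necessarily the paper's published formulation.

```python
# Rough sketch (assumed parameterization, not necessarily the paper's exact IEBN):
# standard BN followed by an instance-specific, per-channel recalibration
# computed from the instance's own pooled response via a simple linear map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceEnhancedBN2d(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels)
        # One scale and one bias per channel for the linear transform of the
        # pooled instance statistic (assumed form of the "simple linear
        # transformation" mentioned in the abstract).
        self.weight = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.ones(1, num_channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.bn(x)                                      # batch-statistics normalization
        s = F.adaptive_avg_pool2d(y, 1)                     # instance-specific channel summary
        gate = torch.sigmoid(self.weight * s + self.bias)   # per-channel recalibration
        return y * gate                                     # enhance instance information

# Usage: drop-in replacement for nn.BatchNorm2d(64) inside a ConvNet block.
layer = InstanceEnhancedBN2d(64)
out = layer(torch.randn(8, 64, 32, 32))
```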
The normalization layer is an important component of a deep network. Multiple normalization methods have been proposed for different tasks. Batch Normalization @cite_30 , which normalizes its input by mini-batch statistics, has been a foundation of visual recognition tasks @cite_7 . Instance Normalization @cite_1 performs BN-like normalization on a single instance and is widely used in generative models @cite_15 @cite_4 . There are several variants of BN, such as Conditional Batch Normalization @cite_27 for Visual Question Answering, Group Normalization @cite_25 and Batch Renormalization @cite_23 for small-batch training, Adaptive Batch Normalization @cite_28 for domain adaptation, and Switchable Normalization @cite_10 , which learns to select different normalizers for different normalization layers. Among them, Conditional Batch Norm and Batch Renorm adjust the trainable parameters in the reparameterization step of BN. Both are the most closely related to our work, which modifies the trainable scaling parameter.
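For concreteness, the normalizers cited above differ mainly in the axes of an (N, C, H, W) feature map over which the statistics are computed; the following textbook-style sketch (epsilon and affine parameters omitted) is for illustration and is not code taken from any of the cited implementations.

```python
# Which axes the mean/variance are taken over for an (N, C, H, W) feature map.
# Textbook-style view of the cited normalizers (epsilon omitted for brevity).
import torch

x = torch.randn(8, 16, 32, 32)  # (batch N, channels C, height H, width W)

# Batch Normalization: statistics shared across the batch, per channel.
bn_mean = x.mean(dim=(0, 2, 3), keepdim=True)
bn = (x - bn_mean) / x.var(dim=(0, 2, 3), keepdim=True, unbiased=False).sqrt()

# Instance Normalization: statistics per sample and per channel (no batch mixing).
in_mean = x.mean(dim=(2, 3), keepdim=True)
inorm = (x - in_mean) / x.var(dim=(2, 3), keepdim=True, unbiased=False).sqrt()

# Group Normalization: statistics per sample over channel groups (here 4 groups).
g = x.view(8, 4, 4, 32, 32)     # split C=16 into 4 groups of 4 channels
gn_mean = g.mean(dim=(2, 3, 4), keepdim=True)
gn = ((g - gn_mean) / g.var(dim=(2, 3, 4), keepdim=True, unbiased=False).sqrt()).view_as(x)
```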
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_28", "@cite_1", "@cite_27", "@cite_23", "@cite_15", "@cite_10", "@cite_25" ], "mid": [ "2902302607", "2962958829", "2292729293", "2795783309" ], "abstract": [ "As an indispensable component, Batch Normalization (BN) has successfully improved the training of deep neural networks (DNNs) with mini-batches, by normalizing the distribution of the internal representation for each hidden layer. However, the effectiveness of BN would diminish with the scenario of micro-batch (e.g. less than 4 samples in a mini-batch), since the estimated statistics in a mini-batch are not reliable with insufficient samples. This limits BN's room in training larger models on segmentation, detection, and video-related problems, which require small batches constrained by memory consumption. In this paper, we present a novel normalization method, called Kalman Normalization (KN), for improving and accelerating the training of DNNs, particularly under the context of micro-batches. Specifically, unlike the existing solutions treating each hidden layer as an isolated system, KN treats all the layers in a network as a whole system, and estimates the statistics of a certain layer by considering the distributions of all its preceding layers, mimicking the merits of Kalman Filtering. On ResNet50 trained in ImageNet, KN has 3.4 lower error than its BN counterpart when using a batch size of 4; Even when using typical batch sizes, KN still maintains an advantage over BN while other BN variants suffer a performance degradation. Moreover, KN can be naturally generalized to many existing normalization variants to obtain gains, e.g. equipping Group Normalization with Group Kalman Normalization (GKN). KN can outperform BN and its variants for large scale object detection and segmentation task in COCO 2017.", "While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks- Internal Covariate Shift- the current solution has certain drawbacks. For instance, BN depends on batch statistics for layerwise input normalization during training which makes the estimates of mean and standard deviation of input (distribution) to hidden layers inaccurate due to shifting parameter values (especially during initial training epochs). Another fundamental problem with BN is that it cannot be used with batch-size 1 during training. We address these drawbacks of BN by proposing a non-adaptive normalization technique for removing covariate shift, that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of mean and standard-deviation in every layer thus being computationally faster compared with BN. We exploit the observation that the pre-activation before Rectified Linear Units follow Gaussian distribution in deep networks, and that once the first and second order statistics of any given dataset are normalized, we can forward propagate this normalization without the need for recalculating the approximate statistics for hidden layers.", "While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks-- Internal Covariate Shift-- the current solution has certain drawbacks. 
Specifically, BN depends on batch statistics for layerwise input normalization during training which makes the estimates of mean and standard deviation of input (distribution) to hidden layers inaccurate for validation due to shifting parameter values (especially during initial training epochs). Also, BN cannot be used with batch-size 1 during training. We address these drawbacks by proposing a non-adaptive normalization technique for removing internal covariate shift, that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of mean and standard-deviation in every layer thus being computationally faster compared with BN. We exploit the observation that the pre-activation before Rectified Linear Units follow Gaussian distribution in deep networks, and that once the first and second order statistics of any given dataset are normalized, we can forward propagate this normalization without the need for recalculating the approximate statistics for hidden layers.", "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6 lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries." ] }
1908.04008
2967406324
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via the statistics of a batch of images; this batch information can be regarded as batch noise that BN introduces into the features of an instance. We offer the point of view that a self-attention mechanism can help regulate this batch noise by enhancing instance-specific information. Based on this view, we propose combining BN with a self-attention mechanism to adjust the batch noise, and we give an attention-based version of BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates channel information by a simple linear transformation. IEBN outperforms BN with only a light parameter increment across various visual tasks, network structures, and benchmark data sets. Moreover, even under the attack of synthetic noise, IEBN can still stabilize network training with good generalization. The code of IEBN is available at this https URL
The cooperation of BN and attention dates back to Visual Question Answering (VQA), which takes an image and an image-related question as input and outputs the answer to the question. For this task, Conditional Batch Norm @cite_27 was proposed to influence the feature extraction of an image via features collected from the question. A Recurrent Neural Network (RNN) is used to extract the features from the question, while a Convolutional Neural Network (CNN), a pre-trained ResNet, performs feature selection on the image. The shift and scale parameters of the BN layers in the pre-trained ResNet are conditioned on the features extracted from the question, such that the feature selection of the CNN is question-referenced and the overall network can handle different reasoning tasks. Note that for VQA, the features from the question can be viewed as external attention that guides the training of the overall network, since those features are external with regard to the image. In our work, the proposed IEBN can also be viewed as a kind of Conditional Batch Norm, but the training of the network is guided by internal attention, since we use a self-attention mechanism to extract information from the image itself.
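The conditioning mechanism described above can be sketched as follows: an external feature vector, such as a question embedding, predicts per-channel offsets to the scale and shift of BN. The shapes and module names below are illustrative and do not reproduce the cited paper's exact architecture.

```python
# Minimal sketch of conditional batch normalization: an external conditioning
# vector (e.g., a question embedding in VQA) predicts per-channel deltas that
# are added to BN's scale and shift. Shapes and names are illustrative only.
import torch
import torch.nn as nn

class ConditionalBN2d(nn.Module):
    def __init__(self, num_channels: int, cond_dim: int):
        super().__init__()
        # affine=False: the affine part is supplied by the conditioning branch.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(cond_dim, num_channels)
        self.to_beta = nn.Linear(cond_dim, num_channels)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        y = self.bn(x)
        gamma = 1.0 + self.to_gamma(cond)   # scale conditioned on the external feature
        beta = self.to_beta(cond)           # shift conditioned on the external feature
        return y * gamma[:, :, None, None] + beta[:, :, None, None]

# Usage: image features modulated by a (hypothetical) 512-d question embedding.
cbn = ConditionalBN2d(num_channels=256, cond_dim=512)
out = cbn(torch.randn(4, 256, 14, 14), torch.randn(4, 512))
```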
{ "cite_N": [ "@cite_27" ], "mid": [ "2174492417", "2526782364", "2560920409", "2754586843" ], "abstract": [ "We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Much of the recent progress in Vision-to-Language problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we first propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We further show that the same mechanism can be used to incorporate external knowledge, which is critically important for answering high level visual questions. Specifically, we design a visual question answering model that combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. It particularly allows questions to be asked where the image alone does not contain the information required to select the appropriate answer. 
Our final model achieves the best reported results for both image captioning and visual question answering on several of the major benchmark datasets.", "Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, \"Directional Self-Attention Network (DiSAN)\", is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN CNN structure. DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02 on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm based on superposition coding, which capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
The importance of the uneven-channel bottleneck in coded caching has been acknowledged in a large number of recent works that seek to understand and ameliorate this limitation @cite_7 @cite_20 @cite_25 @cite_31 @cite_28 @cite_12 @cite_18 @cite_3 @cite_21 @cite_15 @cite_30 @cite_23 @cite_14 @cite_9 @cite_19 @cite_17 . For example, reference @cite_7 focuses on the uneven link-capacity SISO BC where each user experiences a distinct channel strength, and proposes algorithms that outperform the naive implementation of the algorithm of @cite_24 , whereby each coded message is transmitted at a rate equal to the rate of the worst user whose message appears in the corresponding XOR operation. Under a similar setting, the work in @cite_31 considers feedback-aided user selection that maximizes the sum-rate while also improving a fairness criterion which ensures that each user receives their requested file in a timely manner. In the related context of the erasure BC, where users have uneven erasure probabilities, references @cite_28 and @cite_12 show how an erasure at some users can be exploited as side information at the remaining users in order to increase system performance. Related work can also be found in @cite_18 @cite_3 @cite_21 .
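To make this bottleneck concrete, the following is a back-of-the-envelope formalization of the naive scheme described above, written in standard coded-caching notation with a normalized cache size and normalized per-user capacities; it is an illustrative paraphrase and not a result quoted from the cited papers.

```latex
% Back-of-the-envelope formalization of the naive worst-user scheme
% (standard coded-caching notation; illustrative, not quoted from the cited papers).
% With K users and normalized cache size \gamma = M/N, the full-capacity
% delivery time of the standard scheme is
T \;=\; \frac{\binom{K}{K\gamma+1}}{\binom{K}{K\gamma}} \;=\; \frac{K(1-\gamma)}{1+K\gamma}.
% With uneven normalized capacities r_k \in (0,1], the naive extension sends each
% XOR X_S at the rate of its weakest intended user, so that
T_{\mathrm{naive}} \;=\; \sum_{S \,:\, |S| = K\gamma+1}
    \frac{1}{\binom{K}{K\gamma}} \cdot \frac{1}{\min_{k \in S} r_k} \;\ge\; T ,
% i.e. a single weak user slows down every XOR in which it participates.
```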
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_7", "@cite_15", "@cite_28", "@cite_9", "@cite_21", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_12", "@cite_31", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2963745869", "2963602389", "1987954156", "2078905979" ], "abstract": [ "We explore the performance of coded caching in a SISO BC setting where some users have higher link capacities than others. Focusing on a binary and fixed topological model where strong links have a fixed normalized capacity 1, and where weak links have reduced normalized capacity T < 1, we identify — as a function of the cache size and T — the optimal throughput performance, within a factor of at most 8. The transmission scheme that achieves this performance, employs a simple form of interference enhancement, and exploits the property that weak links attenuate interference, thus allowing for multicasting rates to remain high even when involving weak users. This approach ameliorates the negative effects of uneven topology in multicasting, now allowing all users to achieve the optimal performance associated to T = 1, even if τ is approximately as low as T ≥ 1 − (1 − w)g where g is the coded-caching gain, and where w is the fraction of users that are weak. This leads to the interesting conclusion that for coded multicasting, the weak users need not bring down the performance of all users, but on the contrary to a certain extent, the strong users can lift the performance of the weak users without any penalties on their own performance. Furthermore for smaller ranges of τ, we also see that achieving the near-optimal performance comes with the advantage that the strong users do not suffer any additional delays compared to the case where T = 1.", "We consider the canonical shared link caching network formed by a source node, hosting a library of @math information messages (files), connected via a noiseless multicast link to @math user nodes, each equipped with a cache of size @math files. Users request files independently at random according to an a-priori known demand distribution q. A coding scheme for this network consists of two phases: cache placement and delivery. The cache placement is a mapping of the library files onto the user caches that can be optimized as a function of the demand statistics, but is agnostic of the actual demand realization. After the user demands are revealed, during the delivery phase the source sends a codeword (function of the library files, cache placement, and demands) to the users, such that each user retrieves its requested file with arbitrarily high probability. The goal is to minimize the average transmission length of the delivery phase, referred to as rate (expressed in channel symbols per file). In the case of deterministic demands, the optimal min-max rate has been characterized within a constant multiplicative factor, independent of the network parameters. The case of random demands was previously addressed by applying the order-optimal min-max scheme separately within groups of files requested with similar probability. However, no complete characterization of order-optimality was previously provided for random demands under the average rate performance criterion. 
In this paper, we consider the random demand setting and, for the special yet relevant case of a Zipf demand distribution, we provide a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving order-optimal rates. We present also numerical results that confirm the superiority of our scheme with respect to previously proposed schemes for the same setting.", "In this paper, we study resource allocation in a downlink OFDMA system assuming imperfect channel state information (CSI) at the transmitter. To achieve the individual QoS of the users in OFDMA system, adaptive resource allocation is very important, and has therefore been an active area of research. However, in most of the the previous work perfect CSI at the transmitter is assumed which is rarely possible due to channel estimation error and feedback delay. In this paper, we study the effect of channel estimation error on resource allocation in a downlink OFDMA system. We assume that each user terminal estimates its channel by using an MMSE estimator and sends its CSI back to the base station through a feedback channel. We approach the problem by using convex optimization framework, provide an explicit closed form expression for the users' transmit power and then develop an optimal margin adaptive resource allocation algorithm. Our proposed algorithm minimizes the total transmit power of the system subject to constraints on users' average data rate. The algorithm has polynomial complexity and solves the problem with zero optimality gaps. Simulation results show that our algorithm highly improves the system performance in the presence of imperfect channel estimation.", "We introduce the concept of resource management for in-network caching environments. We argue that in Information-Centric Networking environments, deterministically caching content messages at predefined places along the content delivery path results in unfair and inefficient content multiplexing between different content flows, as well as in significant caching redundancy. Instead, allocating resources along the path according to content flow characteristics results in better use of network resources and therefore, higher overall performance. The design principles of our proposed in-network caching scheme, which we call ProbCache, target these two outcomes, namely reduction of caching redundancy and fair content flow multiplexing along the delivery path. In particular, ProbCache approximates the caching capability of a path and caches contents probabilistically to: 1) leave caching space for other flows sharing (part of) the same path, and 2) fairly multiplex contents in caches along the path from the server to the client. We elaborate on the content multiplexing fairness of ProbCache and find that it sometimes behaves in favor of content flows connected far away from the source, that is, it gives higher priority to flows travelling longer paths, leaving little space to shorter-path flows. We introduce an enhanced version of the main algorithm that guarantees fair behavior to all participating content flows. We evaluate the proposed schemes in both homogeneous and heterogeneous cache size environments and formulate a framework for resource allocation in in-network caching environments. 
The proposed probabilistic approach to in-network caching exhibits ideal performance both in terms of network resource utilization and in terms of resource allocation fairness among competing content flows. Finally, and in contrast to the expected behavior, we find that the efficient design of ProbCache results in fast convergence to caching of popular content items." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm based on superposition coding, which capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
The uneven-capacity bottleneck was also studied in the presence of multiple transmit antennas @cite_25 @cite_26 . Reference @cite_25 exploited transmit diversity to ameliorate the impact of the worst-user capacity, and showed that employing @math transmit antennas can allow for a transmission sum-rate that scales with @math . Similarly, the work in @cite_26 considered multiple transmit and multiple receive antennas, and designed topology-dependent cache-placement to ameliorate the worst-user effect.
{ "cite_N": [ "@cite_26", "@cite_25" ], "mid": [ "2598021007", "2137531352", "2083962008", "2098126478" ], "abstract": [ "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.", "We consider a dense fading multi-user network with multiple active multi-antenna source-destination pair terminals communicating simultaneously through a large common set of K multi-antenna relay terminals in the full spatial multiplexing mode. We use Shannon-theoretic tools to analyze the tradeoff between energy efficiency and spectral efficiency (known as the power-bandwidth tradeoff) in meaningful asymptotic regimes of signal-to-noise ratio (SNR) and network size. We design linear distributed multi-antenna relay beamforming (LDMRB) schemes that exploit the spatial signature of multi-user interference and characterize their power-bandwidth tradeoff under a system-wide power constraint on source and relay transmissions. The impact of multiple users, multiple relays and multiple antennas on the key performance measures of the high and low SNR regimes is investigated in order to shed new light on the possible reduction in power and bandwidth requirements through the usage of such practical relay cooperation techniques. Our results indicate that point-to-point coded multi-user networks supported by distributed relay beamforming techniques yield enhanced energy efficiency and spectral efficiency, and with appropriate signaling and sufficient antenna degrees of freedom, can achieve asymptotically optimal power-bandwidth tradeoff with the best possible (i.e., as in the cutset bound) energy scaling of K-1 and the best possible spectral efficiency slope at any SNR for large number of relay terminals. Furthermore, our results help to identify the role of interference cancellation capability at the relay terminals on realizing the optimal power- bandwidth tradeoff; and show how relaying schemes that do not attempt to mitigate multi-user interference, despite their optimal capacity scaling performance, could yield a poor power- bandwidth tradeoff.", "In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. 
For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like Mloglogn for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and n increasing, the throughput of our scheme scales as MloglognN, where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not not grow faster than logn. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1 n, irrespective of its path loss. In fact, using M= spl alpha logn transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.", "The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as \"block-diagonalization,\" is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as \"successive optimization,\" is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm based on superposition coding, which capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
In a related line of work, the papers @cite_15 and @cite_30 studied the cache-aided topological interference channel where @math cache-aided transmitters are connected to @math cache-aided receivers, and each transmitter is connected to one receiver via a direct "strong" link and to each of the other receivers via "weak" links. Under the assumption of no channel state information at the transmitters (CSIT), the authors showed how the lack of CSIT can be ameliorated by exploiting the topology of the channel and the multicast nature of the transmissions.
{ "cite_N": [ "@cite_30", "@cite_15" ], "mid": [ "2740920677", "2598021007", "2725094597", "2963615879" ], "abstract": [ "This work explores cache-aided interference management in the absence of channel state information at the transmitters (CSIT), focusing on the setting with K transmitter receiver pairs endowed with caches, where each receiver k is connected to transmitter k via a direct link with normalized capacity 1, and to any other transmitter via a cross link with normalized capacity t ≤ 1. In this setting, we explore how a combination of pre-caching at transmitters and receivers, together with interference enhancement techniques, can a) partially counter the lack of CSIT, and b) render the network self-sufficient, in the sense that the transmitters need not receive additional data after pre-caching. Toward this we present new schemes that blindly harness topology and transmitter-and-receiver caching, to create separate streams, each serving many receivers at a time. Key to the approach here is a combination of rate-splitting, interference enhancement and coded caching.", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.", "The Interfering Broadcast Channel (IBC) applies to the downlink of (cellular and or heterogeneous) multi-cell networks, which are limited by multi-user (MU) interference. The interference alignment (IA) concept has shown that interference does not need to be inevitable. In particular spatial IA in the MIMO IBC allows for low latency transmission. However, IA requires perfect and typically global Channel State Information at the Transmitter(s) (CSIT), whose acquisition does not scale well with network size. Also, the design of transmitters (Txs) and receivers (Rxs) is coupled and hence needs to be centralized (cloud) or duplicated (distributed approach). CSIT, which is crucial in MU systems, is always imperfect in practice. We consider the joint optimal exploitation of mean (channel estimates) and covariance Gaussian partial CSIT. Indeed, in a Massive MIMO (MaMIMO) setting (esp. when combined with mmWave) the channel covariances may exhibit low rank and zero-forcing might be possible by just exploiting the covariance subspaces. But the question is the optimization of beamformers for the expected weighted sum rate (EWSR) at finite SNR. 
We propose explicit beamforming solutions and indicate that existing large system analysis can be extended to handle optimized beamformers with the more general partial CSIT considered here.", "We consider the cache-aided MISO broadcast channel (BC) in which a multi-antenna transmitter serves @math single-antenna receivers, each equipped with a cache memory. The transmitter has access to partial knowledge of the channel state information. For a symmetric setting, in terms of channel strength levels, partial channel knowledge levels and cache sizes, we characterize the generalized degrees of freedom (GDoF) up to a constant multiplicative factor. The achievability scheme exploits the interplay between spatial multiplexing gains and coded-multicasting gain. On the other hand, a cut-set-based argument in conjunction with a GDoF outer bound for a parallel MISO BC under channel uncertainty is used for the converse. We further show that the characterized order-optimal GDoF is also attained in a decentralized setting, where no coordination is required for content placement in the caches." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
Recently, significant effort has been made toward understanding the behavior of coded caching in the finite Signal-to-Noise Ratio (SNR) regime with realistic (and thus often uneven) channel qualities. In this direction, the work in @cite_23 showed that a single-stream coded caching message beamformed by an appropriate transmit vector can outperform some existing multi-stream coded caching methods in the low-SNR regime, while references @cite_14 @cite_9 (see also @cite_19 ) revealed the importance of jointly considering caching with multicast beamformer design. Moreover, the work in @cite_17 studied the connection between rate and subpacketization in the multi-antenna environment, accounting for the unevenness naturally brought about by fading.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_19", "@cite_23", "@cite_17" ], "mid": [ "2921187562", "2767375015", "2074863717", "2598021007" ], "abstract": [ "We study downlink beamforming in a single-cell network with a multi-antenna base station (BS) serving cache-enabled users. For a given common rate of the files in the system, we first formulate the minimum transmit power with beamforming at the BS as a non-convex optimization problem. This corresponds to a multiple multicast problem, to which a stationary solution can be efficiently obtained through successive convex approximation (SCA). It is observed that the complexity of the problem grows exponentially with the number of subfiles delivered to each user in each time slot, which itself grows exponentially with the number of users in the system. Therefore, we introduce a low-complexity alternative through time-sharing that limits the number of subfiles that can be received by a user in each time slot. It is shown through numerical simulations that, the reduced-complexity beamforming scheme has minimal performance gap compared to transmitting all the subfiles jointly, and outperforms the state-of-the-art low-complexity scheme at all SNR and rate values with sufficient spatial degrees of freedom, and in the high SNR high rate regime when the number of spatial degrees of freedom is limited.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low SNR regime, unlike the existing schemes based on zero forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several base-line schemes including, the joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.", "Transmit beamforming and receive combining are simple methods for exploiting the significant diversity that is available in multiple-input multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamforming vector; both are hard to realize. In this article, a quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter. By using the distribution of the optimal beamforming vector in independent and identically distributed Rayleigh fading matrix channels, the codebook design problem is solved and related to the problem of Grassmannian line packing. The proposed design criterion is flexible enough to allow for side constraints on the codebook vectors. 
Bounds on the codebook size are derived to guarantee full diversity order. Results on the density of Grassmannian line packings are derived and used to develop bounds on the codebook size given a capacity or SNR loss. Monte Carlo simulations are presented that compare the probability of error for different quantization strategies.", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
Our work is in the spirit of all the above papers, and it can be seen specifically as an extension of @cite_20 . This reference considered a specific binary topological case, for which it proposed a two-level superposition-based transmission scheme to alleviate the worst-user bottleneck.
{ "cite_N": [ "@cite_20" ], "mid": [ "1965299092", "2952966314", "2400872280", "2949343062" ], "abstract": [ "Selective families, a weaker variant of superimposed codes [KS64, F92, 197, CR96], have been recently used to design Deterministic Distributed Broadcast (DDB) protocols for unknown radio networks (a radio network is said to be unknown when the nodes know nothing about the network but their own label) [CGGPR00, CGOR00]. We first provide a general almost tight lower bound on the size of selective families. Then, by reverting the selective families - DDB protocols connection, we exploit our lower bound to construct a family of “hard” radio networks (i.e. directed graphs). These networks yield an O(n log D) lower bound on the completion time of DDB protocols that is superlinear (in the size n of the network) even for very small maximum eccentricity D of the network, while all the previous lower bounds (e.g. O(D log n) [CGGPR00]) are superlinear only when D is almost linear. On the other hand, the previous upper bounds are all superlinear in n independently of the eccentricity D and the maximum in-degree d of the network. We introduce a broadcast technique that exploits selective families in a new way. Then, by combining selective families of almost optimal size with our new broadcast technique, we obtain an O(Dd log3 n) upper bound that we prove to be almost optimal when d = O(n D). This exponentially improves over the best known upper bound [CGR00) when D, d = O(polylogn). Furthermore, by comparing our deterministic upper bound with the best known randomized one [BGI87] we obtain a new, rather surprising insight into the real gap between deterministic and randomized protocols. It turns out that this gap is exponential (as discovered in [BGI87]), but only when the network has large maximum in-degree (i.e. d = O(na), for some constant a > O). We then look at the multibroadcast problem on unknown radio networks. A similar connection to that between selective families and (single) broadcast also holds between superimposed codes and multibroadcast. We in fact combine a variant of our (single) broadcast technique with superimposed codes of almost optimal size available in literature [EFF85, HS87, I97, CHI99]. This yields a multibroadcast protocol having completion time O(Dd2 log3 n). Finally, in order to determine the limits of our multibroadcast technique, we generalize (and improve) the best known lower bound [CR96] on the size of superimposed codes.", "A generalization of the Gaussian dirty-paper problem to a multiple access setup is considered. There are two additive interference signals, one known to each transmitter but none to the receiver. The rates achievable using Costa's strategies (i.e. by a random binning scheme induced by Costa's auxiliary random variables) vanish in the limit when the interference signals are strong. In contrast, it is shown that lattice strategies (\"lattice precoding\") can achieve positive rates independent of the interferences, and in fact in some cases - which depend on the noise variance and power constraints - they are optimal. In particular, lattice strategies are optimal in the limit of high SNR. It is also shown that the gap between the achievable rate region and the capacity region is at most 0.167 bit. Thus, the dirty MAC is another instance of a network setup, like the Korner-Marton modulo-two sum problem, where linear coding is potentially better than random binning. 
Lattice transmission schemes and conditions for optimality for the asymmetric case, where there is only one interference which is known to one of the users (who serves as a \"helper\" to the other user), and for the \"common interference\" case are also derived. In the former case the gap between the helper achievable rate and its capacity is at most 0.085 bit.", "Lattice codes used under the Compute-and-Forward paradigm suggest an alternative strategy for the standard Gaussian multiple-access channel (MAC): The receiver successively decodes integer linear combinations of the messages until it can invert and recover all messages. In this paper, this strategy is developed and analyzed. For the two-user MAC, it is shown that without time-sharing, the entire capacity region can be attained with a single-user decoder as soon as the signal-to-noise ratios are above @math . A partial analysis is given for more than two users. Lastly the strategy is extended to the so-called dirty MAC where two interfering signals are known non-causally to the two transmitters in a distributed fashion. Our scheme extends the previously known results and gives new achievable rate regions.", "In this paper, collocated and distributed space-time block codes (DSTBCs) which admit multi-group maximum likelihood (ML) decoding are studied. First the collocated case is considered and the problem of constructing space-time block codes (STBCs) which optimally tradeoff rate and ML decoding complexity is posed. Recently, sufficient conditions for multi-group ML decodability have been provided in the literature and codes meeting these sufficient conditions were called Clifford Unitary Weight (CUW) STBCs. An algebraic framework based on extended Clifford algebras is proposed to study CUW STBCs and using this framework, the optimal tradeoff between rate and ML decoding complexity of CUW STBCs is obtained for few specific cases. Code constructions meeting this tradeoff optimally are also provided. The paper then focuses on multi-group ML decodable DSTBCs for application in synchronous wireless relay networks and three constructions of four-group ML decodable DSTBCs are provided. Finally, the OFDM based Alamouti space-time coded scheme proposed by Li-Xia for a 2 relay asynchronous relay network is extended to a more general transmission scheme that can achieve full asynchronous cooperative diversity for arbitrary number of relays. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and also no timing error knowledge at the destination node. Four-group decodable DSTBCs applicable in the proposed OFDM based transmission scheme are also given." ] }
1908.03864
2968252280
We consider an information theoretic approach to address the problem of identifying fake digital images. We propose an innovative method to formulate the issue of localizing manipulated regions in an image as a deep representation learning problem using the Information Bottleneck (IB), which has recently gained popularity as a framework for interpreting deep neural networks. Tampered images pose a serious predicament since digitized media is a ubiquitous part of our lives. These are facilitated by the easy availability of image editing software and aggravated by recent advances in deep generative models such as GANs. We propose InfoPrint, a computationally efficient solution to the IB formulation using approximate variational inference and compare it to a numerical solution that is computationally expensive. Testing on a number of standard datasets, we demonstrate that InfoPrint outperforms the state-of-the-art and the numerical solution. Additionally, it also has the ability to detect alterations made by inpainting GANs.
Information theory is a powerful framework that is being increasingly adopted to improve various aspects of deep machine learning, e.g., representation learning @cite_36 , generalizability and regularization @cite_38 , and the interpretation of how deep neural networks function @cite_27 @cite_34 . Mutual information plays a key role in many of these methods. InfoGAN @cite_13 showed that maximizing the mutual information between the latent code and the generator's output improved the representations learned by a generative adversarial network (GAN) @cite_24 , allowing them to be more disentangled and interpretable. Since mutual information is hard to compute, InfoGAN maximized a variational lower bound @cite_0 . A similar information maximization idea was explored in @cite_36 to improve unsupervised representation learning, using the numerical estimator proposed in @cite_31 .
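As a point of reference, the variational lower bound that InfoGAN maximizes can be sketched as follows. This is a generic statement of the bound rather than a quotation from the cited papers; the auxiliary distribution Q(c|x), which approximates the intractable posterior over the latent code c, is introduced here purely for illustration.

% Variational lower bound on the mutual information between the latent code c
% and the generator output G(z, c); H(c) is the entropy of the code distribution.
\begin{align}
I\bigl(c; G(z, c)\bigr) \;\ge\; \mathbb{E}_{c \sim P(c),\, x \sim G(z, c)}\bigl[\log Q(c \mid x)\bigr] + H(c) \;=\; L_I(G, Q)
\end{align}
% InfoGAN then augments the usual GAN minimax objective with the regularizer
% -\lambda L_I(G, Q), trading adversarial realism against information retained about c.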
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_24", "@cite_0", "@cite_27", "@cite_31", "@cite_34", "@cite_13" ], "mid": [ "2963226019", "2787273235", "2164700406", "2962730405" ], "abstract": [ "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.", "Advances in unsupervised learning enable reconstruction and generation of samples from complex distributions, but this success is marred by the inscrutability of the representations learned. We propose an information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation. The principle of total Cor-relation Ex-planation (CorEx) has motivated successful unsupervised learning applications across a variety of domains, but under some restrictive assumptions. Here we relax those restrictions by introducing a flexible variational lower bound to CorEx. Surprisingly, we find that this lower bound is equivalent to the one in variational autoencoders (VAE) under certain conditions. This information-theoretic view of VAE deepens our understanding of hierarchical VAE and motivates a new algorithm, AnchorVAE, that makes latent codes more interpretable through information maximization and enables generation of richer and more realistic samples.", "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many AI related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. The aim of the thesis is to demonstrate that deep generative models that contain many layers of latent variables and millions of parameters can be learned efficiently, and that the learned high-level feature representations can be successfully applied in a wide spectrum of application domains, including visual object recognition, information retrieval, and classification and regression tasks. In addition, similar methods can be used for nonlinear dimensionality reduction. The first part of the thesis focuses on analysis and applications of probabilistic generative models called Deep Belief Networks. We show that these deep hierarchical models can learn useful feature representations from a large supply of unlabeled sensory inputs. 
The learned high-level representations capture a lot of structure in the input data, which is useful for subsequent problem-specific tasks, such as classification, regression or information retrieval, even though these tasks are unknown when the generative model is being trained. In the second part of the thesis, we introduce a new learning algorithm for a different type of hierarchical probabilistic model, which we call a Deep Boltzmann Machine. Like Deep Belief Networks, Deep Boltzmann Machines have the potential of learning internal representations that become increasingly complex at higher layers, which is a promising way of solving object and speech recognition problems. Unlike Deep Belief Networks and many existing models with deep architectures, the approximate inference procedure, in addition to a fast bottom-up pass, can incorporate top-down feedback. This allows Deep Boltzmann Machines to better propagate uncertainty about ambiguous inputs.", "The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents. Most learning algorithms that involve optimisation of the mutual information rely on the Blahut-Arimoto algorithm — an enumerative algorithm with exponential complexity that is not suitable for modern machine learning applications. This paper provides a new approach for scalable optimisation of the mutual information by merging techniques from variational inference and deep learning. We develop our approach by focusing on the problem of intrinsically-motivated learning, where the mutual information forms the definition of a well-known internal drive known as empowerment. Using a variational lower bound on the mutual information, combined with convolutional networks for handling visual input streams, we develop a stochastic optimisation algorithm that allows for scalable information maximisation and empowerment-based reasoning directly from pixels to actions." ] }
1908.03978
2968137284
An accurate pedestrian counting algorithm is critical to eliminating insecurity in congested public scenes. However, counting pedestrians in crowded scenes often suffers from severe perspective distortion. In this paper, building on the straight-line double-region pedestrian counting method, we propose a dynamic region division algorithm to keep the counted objects complete. Utilizing the object bounding boxes obtained by YoloV3 and the expectation division line of the scene, the boundary between the nearby region and the distant one is generated under the premise of retaining whole heads. Furthermore, appropriate learning models are applied to count pedestrians in each obtained region. In the distant region, a novel inception dilated convolutional neural network is proposed to solve the problem of choosing the dilation rate. In the nearby region, YoloV3 is used to detect pedestrians at multiple scales. Accordingly, the total number of pedestrians in each frame is obtained by fusing the results of the nearby and distant regions. A typical subway pedestrian video dataset is chosen for the experiments in this paper. The results demonstrate that the proposed algorithm is superior to existing machine-learning-based methods in general performance.
Traditional methods used the histogram of oriented gradients (HOG) as the pedestrian-level feature and a support vector machine as the classifier to detect pedestrians in specific scenes @cite_10 , but these hand-crafted features suffered severely from illumination and scale variance. Region-based convolutional neural networks (R-CNNs) @cite_17 used features extracted by a CNN and improved detection performance. These methods can be summarized as two-stage processing, proposal followed by classification, but they are hard to accelerate. YOLO @cite_3 provided a one-stage solution for detection and significantly improved the speed: it recast classification as regression over sub-grids and abandoned the proposal stage. Following YOLO, later methods such as SSD @cite_7 and YOLOV3 @cite_6 added support for multi-scale object detection. Although detection methods have achieved impressive performance and can be used in sparse crowd scenes, they can hardly replace density-based methods in crowded scenes.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_3", "@cite_10", "@cite_17" ], "mid": [ "2963315052", "2953226057", "2410641892", "2291533986" ], "abstract": [ "In this paper, we consider the problem of pedestrian detection in natural scenes. Intuitively, instances of pedestrians with different spatial scales may exhibit dramatically different features. Thus, large variance in instance scales, which results in undesirable large intracategory variance in features, may severely hurt the performance of modern object instance detection methods. We argue that this issue can be substantially alleviated by the divide-and-conquer philosophy. Taking pedestrian detection as an example, we illustrate how we can leverage this philosophy to develop a Scale-Aware Fast R-CNN (SAF R-CNN) framework. The model introduces multiple built-in subnetworks which detect pedestrians with scales from disjoint ranges. Outputs from all of the subnetworks are then adaptively combined to generate the final detection results that are shown to be robust to large variance in instance scales, via a gate function defined over the sizes of object proposals. Extensive evaluations on several challenging pedestrian detection datasets well demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our method achieves state-of-the-art performance on Caltech [P. Dollar, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Trans. Pattern Anal. Mach. Intell. , vol. 34, no. 4, pp. 743–761, Apr. 2012], and obtains competitive results on INRIA [N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. , 2005, pp. 886–893], ETH [A. Ess, B. Leibe, and L. V. Gool, “Depth and appearance for mobile scene analysis,” in Proc. Int. Conf. Comput. Vis ., 2007, pp. 1–8], and KITTI [A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit ., 2012, pp. 3354–3361].", "Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are \"deep in context\". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. 
However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.", "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40 miss rate). Using new and accurate annotations, an MCF achieves 7.98 miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In the modern world, the high density of wireless networks and the strong interference between them make centralized coordination of networks more and more popular, since it allows optimizing network performance and thus increasing overall efficiency. While today's wireless networks are mainly optimized to provide high throughput, the growing OPEX of network operators, including payments for energy consumption, may shift this paradigm in the near future. Because of the very large number of base stations and access points, energy consumption has become an essential issue for wireless networks. To improve energy efficiency, various approaches can be used, including energy harvesting, hardware improvements, network planning, and resource allocation @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2093917343", "2100998721", "2130891087", "2104286156" ], "abstract": [ "Recently, energy efficiency in wireless networks has become an important objective. Aside from the growing proliferation of smartphones and other high-end devices in conventional human-to-human (H2H) communication, the introduction of machine-to-machine (M2M) communication or machine-type communication into cellular networks is another contributing factor. In this paper, we investigate quality-of-service (QoS)-driven energy-efficient design for the uplink of long term evolution (LTE) networks in M2M H2H co-existence scenarios. We formulate the resource allocation problem as a maximization of effective capacity-based bits-per-joule capacity under statistical QoS provisioning. The specific constraints of single carrier frequency division multiple access (uplink air interface in LTE networks) pertaining to power and resource block allocation not only complicate the resource allocation problem, but also render the standard Lagrangian duality techniques inapplicable. We overcome the analytical and computational intractability by first transforming the original problem into a mixed integer programming (MIP) problem and then formulating its dual problem using the canonical duality theory. The proposed energy-efficient design is compared with the spectral efficient design along with round robin (RR) and best channel quality indicator (BCQI) algorithms. Numerical results, which are obtained using the invasive weed optimization (IWO) algorithm, show that the proposed energy-efficient uplink design not only outperforms other algorithms in terms of energy efficiency while satisfying the QoS requirements, but also performs closer to the optimal design.", "The spread of mobile connectivity is generating major social and economic benefits around the world, while along with the rapid growth of new telecommunication technologies like mobile broadband communication and M2M (Machine-to-Machine) networks, larger number of various base stations will be employed into the network, which will greatly increase the power expense and CO2 emission. In order to degrade the system power expense, variety of researches on new energy and novel transmission technology are put in agenda. In this paper, instead of reducing the absolute power expense, the research focuses on guiding more power consumption into green source energy, which implying that the UEs (User Equipment), especially the cell edge UEs, will have preferential access to the BSs (Base Station) with natural energy supply. To realize the tendentious connection, two detailed approaches are proposed, the HO (Hand Over) parameter tuning for target cell selection and power control for coverage optimization. The system evaluation shows that, by proper setting of parameters in HO and power control, both of the two approaches can achieve good balance between energy saving effect and system throughput impact.", "With the exponential increase in mobile internet traffic driven by a new generation of wireless devices, future cellular networks face a great challenge to meet this overwhelming demand of network capacity. At the same time, the demand for higher data rates and the ever-increasing number of wireless users led to rapid increases in power consumption and operating cost of cellular networks. 
One potential solution to address these issues is to overlay small cell networks with macrocell networks as a means to provide higher network capacity and better coverage. However, the dense and random deployment of small cells and their uncoordinated operation raise important questions about the energy efficiency implications of such multi-tier networks. Another technique to improve energy efficiency in cellular networks is to introduce active sleep (on off) modes in macrocell base stations. In this paper, we investigate the design and the associated tradeoffs of energy efficient cellular networks through the deployment of sleeping strategies and small cells. Using a stochastic geometry based model, we derive the success probability and energy efficiency in homogeneous macrocell (single-tier) and heterogeneous K-tier wireless networks under different sleeping policies. In addition, we formulate the power consumption minimization and energy efficiency maximization problems, and determine the optimal operating regimes for macrocell base stations. Numerical results confirm the effectiveness of switching off base stations in homogeneous macrocell networks. Nevertheless, the gains in terms of energy efficiency depend on the type of sleeping strategy used. In addition, the deployment of small cells generally leads to higher energy efficiency but this gain saturates as the density of small cells increases. In a nutshell, our proposed framework provides an essential understanding on the deployment of future green heterogeneous networks.", "The last ten years have witnessed explosive growth in the number of subscribers for mobile telephony. The technology has evolved from early voice only services to today's mobile wireless broadband (Internet) data delivery. The increasing use of wireless connectivity via smartphones and laptops has led to an exponential surge in network traffic. Meeting traffic demands will cause a significant increase in operator energy cost as an enlarged network of radio base stations will be needed to support mobile broadband effectively and maintain operational competitiveness. This article explores approaches that will assist in delivering significant energy efficiency gains in future wireless networks, easing the burden on network operators. It investigates three approaches to saving energy in future wireless networks. These include sleep mode techniques to switch off radio transmissions whenever possible; femtocell and relay deployments; and multiple antenna wireless systems. The impact of these approaches on achieving energy-efficient wireless communication systems is discussed." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In @cite_11 , energy efficiency is defined as the amount of data delivered through a link divided by the consumed energy. The authors consider a terminal with limited energy and compare the energy efficiency of various Automatic Repeat reQuest (ARQ) protocols.
{ "cite_N": [ "@cite_11" ], "mid": [ "2963302028", "2060821751", "2106087555", "2093917343" ], "abstract": [ "High-fidelity, real-time interactive applications are envisioned with the emergence of the Internet of Things and tactile Internet by means of ultra-reliable low-latency communications (URLLC). Exploiting time diversity for fulfilling the URLLC requirements in an energy efficient manner is a challenging task due to the nontrivial interplay among packet size, retransmission rounds and delay, and transmit power. In this paper, we study the fundamental energy-latency tradeoff in URLLC systems employing incremental redundancy (IR) hybrid automatic repeat request (HARQ). We cast the average energy minimization problem with a finite blocklength (latency) constraint and feedback delay, which is non-convex. We propose a dynamic programming algorithm for energy efficient IR-HARQ optimization in terms of number of retransmissions, blocklength, and power per round. Numerical results show that our IR-HARQ approach could provide around 25 energy saving compared with one-shot transmission (no HARQ).", "This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular orthogonal frequency division multiple access system. Three definitions of energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides large savings in terms of dissipated energy. In addition, the performance gap among the considered resource allocation strategies is reduced as the out-of-cluster interference increases.", "Energy efficiency is a key issue in wireless ad hoc and sensor networks. Several directions have been explored to maximize network lifetime, among them energy efficient routing. In this paper, we show how to extend the standardized OLSR routing protocol, in order to make it energy efficient. To take into account residual node energy, three new selection algorithms of multipoint relays, based on the minimum residual energy are evaluated, the best one is chosen. This OLSR extension selects the path minimizing the energy consumed in the end-to-end transmission of a flow packet and avoids nodes with low residual energy. We compare this extension with a two-path source routing strategy (with different links or different nodes). An extensive performance evaluation shows that this energy efficient extension maximizes both network lifetime and user data delivered.", "Recently, energy efficiency in wireless networks has become an important objective. Aside from the growing proliferation of smartphones and other high-end devices in conventional human-to-human (H2H) communication, the introduction of machine-to-machine (M2M) communication or machine-type communication into cellular networks is another contributing factor. 
In this paper, we investigate quality-of-service (QoS)-driven energy-efficient design for the uplink of long term evolution (LTE) networks in M2M H2H co-existence scenarios. We formulate the resource allocation problem as a maximization of effective capacity-based bits-per-joule capacity under statistical QoS provisioning. The specific constraints of single carrier frequency division multiple access (uplink air interface in LTE networks) pertaining to power and resource block allocation not only complicate the resource allocation problem, but also render the standard Lagrangian duality techniques inapplicable. We overcome the analytical and computational intractability by first transforming the original problem into a mixed integer programming (MIP) problem and then formulating its dual problem using the canonical duality theory. The proposed energy-efficient design is compared with the spectral efficient design along with round robin (RR) and best channel quality indicator (BCQI) algorithms. Numerical results, which are obtained using the invasive weed optimization (IWO) algorithm, show that the proposed energy-efficient uplink design not only outperforms other algorithms in terms of energy efficiency while satisfying the QoS requirements, but also performs closer to the optimal design." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
When optimizing energy efficiency, it is essential to add the circuit power consumption @math to the transmit power: without taking this component into account, the maximum energy efficiency corresponds to the lowest transmission rate @cite_1 .
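To make the role of the circuit power concrete, consider the following minimal sketch for a single link, written with generic symbols (bandwidth W, transmit power P, noise power spectral density N_0, circuit power P_c) that are assumptions of this illustration and not taken from @cite_1 :

% Energy efficiency of a single link with Shannon-rate throughput.
\begin{align}
\eta(P) \;=\; \frac{W \log_2\!\left(1 + \dfrac{P}{N_0 W}\right)}{P + P_c}
\end{align}
% With P_c = 0 the ratio rate/power is monotonically decreasing in P, so its supremum is
% approached as P -> 0, i.e., at a vanishing transmission rate; with P_c > 0 the optimal
% transmit power becomes strictly positive and the maximizer corresponds to a nonzero rate.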
{ "cite_N": [ "@cite_1" ], "mid": [ "2152481521", "2060821751", "2050930635", "2147437672" ], "abstract": [ "Wireless systems where the nodes operate on batteries so that energy consumption must be minimized while satisfying given throughput and delay requirements are considered. In this context, the best modulation strategy to minimize the total energy consumption required to send a given number of bits is analyzed. The total energy consumption includes both the transmission energy and the circuit energy consumption. For uncoded systems, by optimizing the transmission time and the modulation parameters, it is shown that up to 80 energy savings is achievable over nonoptimized systems. For coded systems, it is shown that the benefit of coding varies with the transmission distance and the underlying modulation schemes.", "This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular orthogonal frequency division multiple access system. Three definitions of energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides large savings in terms of dissipated energy. In addition, the performance gap among the considered resource allocation strategies is reduced as the out-of-cluster interference increases.", "Characterizing the fundamental tradeoffs for maximizing energy efficiency (EE) versus spectrum efficiency (SE) is a key problem in wireless communication. In this paper, we address this problem for a point-to-point additive white Gaussian noise (AWGN) channel with the transmitter powered solely via energy harvesting from the environment. In addition, we assume a practical on-off transmitter model with non-ideal circuit power, i.e., when the transmitter is on, its consumed power is the sum of the transmit power and a constant circuit power. Under this setup, we study the optimal transmit power allocation to maximize the average throughput over a finite horizon, subject to the time-varying energy constraint and the non-ideal circuit power consumption. First, we consider the off-line optimization under the assumption that the energy arrival time and amount are a priori known at the transmitter. Although this problem is non-convex due to the non-ideal circuit power, we show an efficient optimal solution that in general corresponds to a two-phase transmission: the first phase with an EE-maximizing on-off power allocation, and the second phase with a SE-maximizing power allocation that is non-decreasing over time, thus revealing an interesting result that both the EE and SE optimizations are unified in an energy harvesting communication system. We then extend the optimal off-line algorithm to the case with multiple parallel AWGN channels, based on the principle of nested optimization. 
Finally, inspired by the off-line optimal solution, we propose a new online algorithm under the practical setup with only the past and present energy state information (ESI) known at the transmitter.", "Energy-efficient link adaptation is studied for transmission on a frequency-selective parallel AWGN channel. The total power dissipation model includes a circuit power that varies with the sum rate and a power amplifier efficiency that varies with the bandwidth used. The mathematical analysis provides insight into how the subcarrier rates should be chosen for optimal energy efficiency and suggests a simple fixed-point algorithm that finds the solution in few iterations. Moreover, ways of improving the energy efficiency are discussed based on the dependence on bandwidth and distance between transmitter and receiver." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
The papers mentioned above consider only a single wireless link, so the definition of energy efficiency has to be extended for systems with multiple transmitters and receivers. In @cite_10 , it is done in the following way: where @math is the overall utility function, @math is the link utility function of link @math , @math is the number of links, @math is the rate of link @math , and @math is the average transmit power at link @math . The major disadvantage of this approach is that the utility function represents the sum of the energy efficiencies of the individual links, whereas the network operator is interested in the total network energy consumption and energy efficiency, which is a different quantity.
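One plausible way to write the utility described above is sketched below; the exact expression used in @cite_10 may differ, and the symbols (L links, rate R_l, average transmit power P_l, per-link utility u_l) are only illustrative:

% Sum-of-link-energy-efficiencies utility: each link contributes its own efficiency term.
\begin{align}
U \;=\; \sum_{l=1}^{L} u_l\!\left(\frac{R_l}{P_l}\right)
\end{align}
% With u_l chosen as the identity, U reduces to the plain sum of link energy efficiencies,
% which is what the text above criticizes: maximizing this sum need not minimize the total
% energy drawn by the network.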
{ "cite_N": [ "@cite_10" ], "mid": [ "2093917343", "2147437672", "2060821751", "2110519532" ], "abstract": [ "Recently, energy efficiency in wireless networks has become an important objective. Aside from the growing proliferation of smartphones and other high-end devices in conventional human-to-human (H2H) communication, the introduction of machine-to-machine (M2M) communication or machine-type communication into cellular networks is another contributing factor. In this paper, we investigate quality-of-service (QoS)-driven energy-efficient design for the uplink of long term evolution (LTE) networks in M2M H2H co-existence scenarios. We formulate the resource allocation problem as a maximization of effective capacity-based bits-per-joule capacity under statistical QoS provisioning. The specific constraints of single carrier frequency division multiple access (uplink air interface in LTE networks) pertaining to power and resource block allocation not only complicate the resource allocation problem, but also render the standard Lagrangian duality techniques inapplicable. We overcome the analytical and computational intractability by first transforming the original problem into a mixed integer programming (MIP) problem and then formulating its dual problem using the canonical duality theory. The proposed energy-efficient design is compared with the spectral efficient design along with round robin (RR) and best channel quality indicator (BCQI) algorithms. Numerical results, which are obtained using the invasive weed optimization (IWO) algorithm, show that the proposed energy-efficient uplink design not only outperforms other algorithms in terms of energy efficiency while satisfying the QoS requirements, but also performs closer to the optimal design.", "Energy-efficient link adaptation is studied for transmission on a frequency-selective parallel AWGN channel. The total power dissipation model includes a circuit power that varies with the sum rate and a power amplifier efficiency that varies with the bandwidth used. The mathematical analysis provides insight into how the subcarrier rates should be chosen for optimal energy efficiency and suggests a simple fixed-point algorithm that finds the solution in few iterations. Moreover, ways of improving the energy efficiency are discussed based on the dependence on bandwidth and distance between transmitter and receiver.", "This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular orthogonal frequency division multiple access system. Three definitions of energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides large savings in terms of dissipated energy. 
In addition, the performance gap among the considered resource allocation strategies is reduced as the out-of-cluster interference increases.", "Energy-efficiency, one of the major design goals in wireless cellular networks, has received much attention lately, due to increased awareness of environmental and economic issues for network operators. In this paper, we develop a theoretical framework for BS energy saving that encompasses dynamic BS operation and the related problem of user association together. Specifically, we formulate a total cost minimization that allows for a flexible tradeoff between flow-level performance and energy consumption. For the user association problem, we propose an optimal energy-efficient user association policy and further present a distributed implementation with provable convergence. For the BS operation problem (i.e., BS switching on off), which is a challenging combinatorial problem, we propose simple greedy-on and greedy-off algorithms that are inspired by the mathematical background of submodularity maximization problem. Moreover, we propose other heuristic algorithms based on the distances between BSs or the utilizations of BSs that do not impose any additional signaling overhead and thus are easy to implement in practice. Extensive simulations under various practical configurations demonstrate that the proposed user association and BS operation algorithms can significantly reduce energy consumption." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In @cite_3 , the authors consider several other utility functions. In addition to the sum of energy efficiencies, an example of which is described above, they consider the product of energy efficiencies and the so-called Global Energy Efficiency (GEE). The GEE is defined as the sum of rates divided by the total power consumption of all devices (an illustrative formulation of these objectives is given below, after this record). Fast algorithms are proposed to solve the Sum-EE and Prod-EE maximization problems. For the GEE maximization problem, the optimal solution is found only when interference is negligible compared to the constant background noise.
{ "cite_N": [ "@cite_3" ], "mid": [ "2589615107", "2318872164", "2314118322", "2096380022" ], "abstract": [ "The characterization of the global maximum of energy efficiency (EE) problems in wireless networks is a challenging problem due to their nonconvex nature in interference channels. The aim of this paper is to develop a new and general framework to achieve globally optimal solutions. First, the hidden monotonic structure of the most common EE maximization problems is exploited jointly with fractional programming theory to obtain globally optimal solutions with exponential complexity in the number of network links. To overcome the high complexity, we also propose a framework to compute suboptimal power control strategies with affordable complexity. This is achieved by merging fractional programming and sequential optimization. The proposed monotonic framework is used to shed light on the ultimate performance of wireless networks in terms of EE and also to benchmark the performance of the lower-complexity framework based on sequential programming. Numerical evidence is provided to show that the sequential fractional programming framework achieves global optimality in several practical communication scenarios.", "In this paper, energy-efficient secure communications via untrusted two-way relaying are investigated considering physical layer security to prevent the relays from intercepting the confidential information of users. The performance metric of secure energy efficiency (EE), defined as the ratio of the secrecy sum rate to the total power consumption, is maximized by jointly optimizing power allocation for all nodes, with the constraints of the maximum allowed power and the minimum target secrecy rate. To deal with this intractable nonconvex optimization problem, some optimization methods termed as fractional programming, alternate optimization, penalty function method, and difference of convex functions programming, are jointly applied to address a solution scheme with comparatively lower complexity. With these above-mentioned optimization methods, the primal problem is transformed into simple subproblems hierarchically so as to adopt the corresponding optimization algorithm. By simulations, the achievable secure EE, the secrecy sum rate and the total transmission power of the proposed scheme are compared with those of secrecy sum rate maximization. It is demonstrated that the proposed scheme can improve secure EE remarkably yet at the cost of secrecy sum rate loss. This fact also reveals the inherent tradeoff between energy and security.", "Energy harvesting (EH) from background radio-frequency signals has recently started to be considered as a promising means of mitigating energy deficiencies in wireless sensors that have limited battery power. In this paper, to deal with the problem of energy shortage, we consider EH and energy efficiency, which is the ratio of the amount of data transferred to the energy dissipated within the system, at the same time. We formulate a nonconvex optimization problem for determining the training interval for channel estimation, user scheduling, and power allocation strategies to maximize the energy efficiency and transform the problem into a tractable convex problem. Based on the convexity of the problem, we propose an energy-efficient resource allocation strategy using an iterative method and provide the convergence of the proposed algorithm based on nonlinear fractional programming. 
The simulation results show that the proposed algorithm operates in an energy-efficient way, thereby achieving maximum energy efficiency while ensuring the data rate and harvested energy requirements.", "The dramatic increase of network infrastructure comes at the cost of rapidly increasing energy consumption, which makes optimization of energy efficiency (EE) an important topic. Since EE is often modeled as the ratio of rate to power, we present a mathematical framework called fractional programming that provides insight into this class of optimization problems, as well as algorithms for computing the solution. The main idea is that the objective function is transformed to a weighted sum of rate and power. A generic problem formulation for systems dissipating transmit-independent circuit power in addition to transmit-dependent power is presented. We show that a broad class of EE maximization problems can be solved efficiently, provided the rate is a concave function of the transmit power. We elaborate examples of various system models including time-varying parallel channels. Rate functions with an arbitrary discrete modulation scheme are also treated. The examples considered lead to water-filling solutions, but these are different from the dual problems of power minimization under rate constraints and rate maximization under power constraints, respectively, because the constraints need not be active. We also demonstrate that if the solution to a rate maximization problem is known, it can be utilized to reduce the EE problem into a one-dimensional convex problem." ] }
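The objectives mentioned above can be written compactly. The notation below (link rates $r_i(\mathbf{p})$, transmit powers $p_i$, per-device circuit power $P_{c,i}$) is generic and introduced only for illustration; the cited works may use more detailed models.

\[
\mathrm{EE}_i(\mathbf{p}) = \frac{r_i(\mathbf{p})}{p_i + P_{c,i}}, \qquad
\text{Sum-EE:}\; \max_{\mathbf{p}} \sum_i \mathrm{EE}_i(\mathbf{p}), \qquad
\text{Prod-EE:}\; \max_{\mathbf{p}} \prod_i \mathrm{EE}_i(\mathbf{p}),
\]
\[
\text{GEE:}\; \max_{\mathbf{p}} \; \frac{\sum_i r_i(\mathbf{p})}{\sum_i \bigl(p_i + P_{c,i}\bigr)} .
\]

The Sum-EE and Prod-EE objectives decouple across devices only in the absence of interference, whereas the GEE couples all devices through a single ratio and is, in general, a non-convex fractional program.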
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
The GEE maximization problem can be solved with existing mathematical methods based on the so-called polyblock algorithm @cite_7 . However, this approach is known to converge very slowly when one or more variables are close to zero. While modeling real deployments, we often observed such cases. That is why we use another approach, based on the branch-and-bound method, which avoids this slow convergence @cite_2 (a minimal branch-and-bound sketch for a GEE-type objective is given below, after this record). Although the branch-and-bound method has been applied to solve the GEE problem in LTE networks @cite_6 , its applicability to Wi-Fi networks is not straightforward.
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "1539042193", "2134058948", "1970830346", "2316126926" ], "abstract": [ "Carrier sense multiple access (CSMA), which resolves contentions over wireless networks in a fully distributed fashion, has recently gained a lot of attentions since it has been proved that appropriate control of CSMA parameters guarantees optimality in terms of stability (i.e., scheduling) and system-wide utility (i.e., scheduling and congestion control). Most CSMA-based algorithms rely on the popular Markov chain Monte Carlo technique, which enables one to find optimal CSMA parameters through iterative loops of simulation-and-update. However, such a simulation-based approach often becomes a major cause of exponentially slow convergence, being poorly adaptive to flow topology changes. In this paper, we develop distributed iterative algorithms which produce approximate solutions with convergence in polynomial time for both stability and utility maximization problems. In particular, for the stability problem, the proposed distributed algorithm requires, somewhat surprisingly, only one iteration among links. Our approach is motivated by the Bethe approximation (introduced by Yedidia, Freeman, and Weiss) allowing us to express approximate solutions via a certain nonlinear system with polynomial size. Our polynomial convergence guarantee comes from directly solving the nonlinear system in a distributed manner, rather than multiple simulation-and-update loops in existing algorithms. We provide numerical results to show that the algorithm produces highly accurate solutions and converges much faster than the prior ones.", "In a heterogeneous wireless cellular network, each user may be covered by multiple access points such as macro pico relay femto base stations (BS). An effective approach to maximize the sum utility (e.g., system throughput) in such a network is to jointly optimize users' linear procoders as well as their BS associations. In this paper, we first show that this joint optimization problem is NP-hard and thus is difficult to solve to global optimality. To find a locally optimal solution, we formulate the problem as a noncooperative game in which the users and the BSs both act as players. We introduce a set of new utility functions for the players and show that every Nash equilibrium (NE) of the resulting game is a stationary solution of the original sum utility maximization problem. Moreover, we develop a best-response type algorithm that allows the players to distributedly reach a NE of the game. Simulation results show that the proposed distributed algorithm can effectively relieve local BS congestion and simultaneously achieve high throughput and load balancing in a heterogeneous network.", "Consider the multiple-input multiple-output (MIMO) interfering broadcast channel whereby multiple base stations in a cellular network simultaneously transmit signals to a group of users in their own cells while causing interference to each other. The basic problem is to design linear beamformers that can maximize the system throughput. In this paper, we propose a linear transceiver design algorithm for weighted sum-rate maximization that is based on iterative minimization of weighted mean-square error (MSE). The proposed algorithm only needs local channel knowledge and converges to a stationary point of the weighted sum-rate maximization problem. Furthermore, the algorithm and its convergence can be extended to a general class of sum-utility maximization problem. 
The effectiveness of the proposed algorithm is validated by numerical experiments.", "We present a novel approach for distributed load balancing in heterogeneous networks that use cell range expansion (CRE) for user association and almost blank subframe (ABS) for interference management. First, we formulate the problem as a minimization of an @math -fairness objective function with load and outage constraints. Depending on @math , different objectives in terms of network performance or fairness can be achieved. Next, we model the interactions among the base stations for load balancing as a near-potential game, in which the potential function is the @math -fairness function. The optimal pure Nash equilibrium (PNE) of the game is found by using distributed learning algorithms. We propose log-linear and binary log-linear learning algorithms for complete and partial information settings, respectively. We give a detailed proof of convergence of learning algorithms for a near-potential game. We provide sufficient conditions under which the learning algorithms converge to the optimal PNE. By running extensive simulations, we show that the proposed algorithms converge within few hundreds of iterations. The convergence speed in the case of partial information setting is comparable to that of the complete information setting. Finally, we show that outage can be controlled and a better load balancing can be achieved by introducing ABS." ] }
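To make the branch-and-bound idea concrete, the following Python sketch maximizes a GEE-type ratio over a box of transmit powers for a two-link toy model. The channel gains, noise and circuit powers, the bounding rule, and all names are assumptions made only for this illustration; this is not the algorithm of the cited papers.

import heapq
import numpy as np

G = np.array([[1.0, 0.05],        # assumed direct/cross link gains (toy values)
              [0.08, 0.9]])
NOISE = 0.01                       # assumed background noise power
P_CIRCUIT = 0.5                    # assumed total circuit power

def rates(p):
    # Shannon-type rates with mutual interference (illustrative channel model).
    signal = np.diag(G) * p
    interference = G @ p - signal
    return np.log2(1.0 + signal / (NOISE + interference))

def gee(p):
    # Global energy efficiency: sum of rates over total consumed power.
    return rates(p).sum() / (p.sum() + P_CIRCUIT)

def upper_bound(lo, hi):
    # Optimistic bound on the GEE over the box [lo, hi]: each rate is maximised
    # (own power at hi, interfering powers at lo) while the power bill is minimised.
    best_rates = np.empty(len(lo))
    for i in range(len(lo)):
        p = lo.copy()
        p[i] = hi[i]
        best_rates[i] = rates(p)[i]
    return best_rates.sum() / (lo.sum() + P_CIRCUIT)

def branch_and_bound(p_max, eps=1e-3, max_iter=10000):
    lo, hi = np.zeros_like(p_max, dtype=float), p_max.astype(float)
    best_p = (lo + hi) / 2.0
    best_val = gee(best_p)
    heap = [(-upper_bound(lo, hi), lo.tolist(), hi.tolist())]   # max-heap via negation
    for _ in range(max_iter):
        if not heap:
            break
        neg_ub, lo_l, hi_l = heapq.heappop(heap)
        lo, hi, ub = np.array(lo_l), np.array(hi_l), -neg_ub
        if ub <= best_val + eps:
            break                                  # no remaining box can improve enough
        mid = (lo + hi) / 2.0
        val = gee(mid)                             # feasible point -> candidate incumbent
        if val > best_val:
            best_val, best_p = val, mid
        j = int(np.argmax(hi - lo))                # split the box along its longest edge
        upper_half_lo = lo.copy()
        upper_half_lo[j] = mid[j]
        lower_half_hi = hi.copy()
        lower_half_hi[j] = mid[j]
        for c_lo, c_hi in ((lo, lower_half_hi), (upper_half_lo, hi)):
            c_ub = upper_bound(c_lo, c_hi)
            if c_ub > best_val + eps:              # prune boxes that cannot beat the incumbent
                heapq.heappush(heap, (-c_ub, c_lo.tolist(), c_hi.tolist()))
    return best_p, best_val

p_opt, gee_opt = branch_and_bound(p_max=np.array([1.0, 1.0]))
print("power allocation:", p_opt, "GEE:", gee_opt)

The bound is valid because each rate is monotonically increasing in the device's own power and decreasing in the others' powers under this model, so the numerator and denominator of the ratio can be bounded separately over a box.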
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
Wi-Fi networks impose additional restrictions on solutions of the described problem. Specifically, since Wi-Fi implements CSMA/CA, regulatory bodies put limits on the sensitivity threshold. An example of a solution to the GEE problem is given in @cite_0 , where an algorithm based on the branch-and-bound technique was proposed to allocate power in Wi-Fi networks dynamically. Even with a constant traffic load, such an algorithm dynamically varies the transmit power and thus obtains higher efficiency. In this paper, we generalize the GEE metric to take both power consumption and fairness into account and to develop a global optimization algorithm for green Wi-Fi networks (a generic fairness-aware formulation is sketched below, after this record).
{ "cite_N": [ "@cite_0" ], "mid": [ "2292788438", "2166319816", "1540457649", "2540490069" ], "abstract": [ "In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and coordinator. The proposed scheme enables low interfering nodes to transmit their messages using base channel. Depending on suitable situations, high interfering nodes double their contention windows (CW) and probably use switched orthogonal channel. Simulation results show that proposed scheme has far better minimum SINR (12dB improvement) and longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove the outage probability can be effectively reduced to the minimal.", "Decentralized medium access control schemes for wireless networks based on CSMA CA, such as the IEEE 802.11 protocol, are known to be unfair. In multihop networks, they can even favor some links to such an extent that the others suffer from virtually complete starvation. This observation has been reported in quite a few works, but the factors causing it are still not well understood. We find that the capture effect and the relative values of the receive and carrier sensing ranges play a crucial role in the performance of these protocols. Using a simple Markovian model, we show that an idealized CSMA CA protocol suffers from starvation when the receiving and sensing ranges are equal, but quite surprisingly that this unfairness is reduced or even disappears when these two ranges are sufficiently different. We also show that starvation has a positive counterpart, namely organization. When its access intensity is large the protocol organizes the transmissions in space in such a way that it maximizes the number of concurrent successful transmissions. We obtain exact formula for the so-called spatial reuse of the protocol on large line networks.", "This paper analyzes the performance of a duty-cycled polling-based access mechanism that exploits the Transmission Opportunity Power Save Mode (TXOP PSM) defined in the IEEE 802.11ac to improve the energy efficiency of Wireless Local Area Networks (WLANs) based on the IEEE 802.11. The basic idea behind the proposed approach, named GreenPoll, is to enable contention free periods, based on polling with beacons, during which wireless stations can save energy by turning off their radio transceivers after exchanging data with the access point. The closed expression of energy efficiency of GreenPoll is formulated in this paper and is used to evaluate the performance of GreenPoll considering important parameters like the traffic load, packet length, data rate, and number of stations in the network. Both analytical and simulation results show the high energy efficiency of GreenPoll with gains of up to 330 and 110 when compared to the legacy Distributed Coordination Function (DCF) and the Point Coordination Function (PCF) defined in the IEEE 802.11, respectively.", "In this paper, we study resource allocation for a multicarrier-based cognitive radio (CR) network. 
More specifically, we investigate the problem of secondary users’ energy-efficiency (EE) maximization problem under secondary total power and primary interference constraints. First, assuming cooperation among the secondary base stations (BSs), a centralized approach is considered to solve the EE optimization problem for the CR network where the primary and secondary users are using either orthogonal frequency-division multiplexing (OFDM) or filter bank based multicarrier (FBMC) modulations. We propose an alternating-based approach to solve the joint power-subcarrier allocation problem. More importantly, in the first place, subcarriers are allocated using a heuristic method for a given feasible power allocation. Then, we conservatively approximate the nonconvex power control problem and propose a joint successive convex approximation-Dinkelbach algorithm (SCADA) to efficiently obtain a solution to the nonconvex power control problem. The proposed algorithm is shown to converge to a solution that coincides with the stationary point of the original nonconvex power control subproblem. Moreover, we propose a dual decomposition-based decentralized version of the SCADA. Second, under the assumption of no cooperation among the secondary BSs, we propose a fully distributed power control algorithm from the perspective of game theory. The proposed algorithm is shown to converge to a Nash-equilibrium (NE) point. Moreover, we propose a sufficient condition that guarantees the uniqueness of the achieved NE. Extensive simulation analyses are further provided to highlight the advantages and demonstrate the efficiency of our proposed schemes." ] }
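One generic way to combine per-client energy efficiency with fairness, shown here purely as an illustration (the objective actually optimized in this work may differ), is to pass the per-client efficiencies $\mathrm{EE}_i(\mathbf{p}) = r_i(\mathbf{p})/(p_i + P_{c,i})$ through an alpha-fair utility:

\[
\max_{\mathbf{p}} \; \sum_i u_\alpha\!\bigl(\mathrm{EE}_i(\mathbf{p})\bigr), \qquad
u_\alpha(x) = \begin{cases} \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \neq 1,\\[4pt] \ln x, & \alpha = 1,\end{cases}
\]

so that $\alpha = 0$ recovers Sum-EE, while larger $\alpha$ puts progressively more weight on the worst-served clients.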
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
There are different types of materials used in the manufacturing of optical sensors: various polymers, including silicone, polyurethane, and thermoplastic elastomers; POFs; and hydrogels. Liquid silicone rubber compounds (e.g. Smooth-on Sorta Clear 18 and Techsil RTV27905) are widely used in injection molding to create robust parts. The part quality mainly depends on how well the silicone compounds are mixed during molding. On the other hand, thermoplastic rubbers, as used in @cite_21 , are better able to return to their original shape after being stretched to moderate elongations. They can be processed by heating the granules of the thermoplastic elastomer, shaping them under pressure, and then cooling them to solidify. In contrast to silicone rubber and elastomers, polyurethanes can be synthesized by chemical reactions. Polyurethane parts are resistant to wear and tear.
{ "cite_N": [ "@cite_21" ], "mid": [ "2157459335", "2075459747", "1965890979", "2103654576" ], "abstract": [ "We present the development of a polyimide-based two-dimensional tactile sensing array realized using a novel inverted fabrication technique. Thermal silicon oxide or Pyrex® substrates are treated such that their surfaces are OH group terminated, allowing good adhesion between such substrates and a spun-on polyimide film during processing through what are suspected to be hydrogen bonds that can be selectively broken when release is desired. The release of the continuous polyimide film is rapidly accomplished by breaking these bonds. This process results in robust, low-cost and continuous polymer-film devices. The developed sensor skin contains an array of membrane-based tactile sensors (taxels). Micromachined thin-film met al strain gauges are positioned on the edges of polyimide membranes. The change in resistance from each strain gauge resulting from normal forces applied to tactile bumps on the top of the membranes is used to image force distribution. Response of an individual taxel is characterized. The effective gauge factor of the taxels is found to be approximately 1.3. Sensor array output is experimentally obtained. The demonstrated devices are robust enough for direct contact with humans, everyday objects and contaminants without undue care.", "Self-actuating materials capable of transforming between three-dimensional shapes have applications in areas as diverse as biomedicine, robotics, and tunable micro-optics. We introduce a method of photopatterning polymer films that yields temperature-responsive gel sheets that can transform between a flat state and a prescribed three-dimensional shape. Our approach is based on poly(N-isopropylacrylamide) copolymers containing pendent benzophenone units that allow cross-linking to be tuned by irradiation dose. We describe a simple method of halftone gel lithography using only two photomasks, wherein highly cross-linked dots embedded in a lightly cross-linked matrix provide access to nearly continuous, and fully two-dimensional, patterns of swelling. This method is used to fabricate surfaces with constant Gaussian curvature (spherical caps, saddles, and cones) or zero mean curvature (Enneper’s surfaces), as well as more complex and nearly closed shapes.", "We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.", "In this paper, we report a new flexible capacitive tactile sensor array with the capability of measuring both normal and shear force distribution using polydimethylsiloxane (PDMS) as a base material. 
A tactile cell consists of two thick PDMS layers with embedded electrodes, an air gap, and a pillar structure. The pillar structure is formed at the center of each tactile cell between the air gap under a large bump. There are four capacitors in a cell to decompose the contact force into normal and shear components. Four capacitors are arranged in a square form. If a normal force is applied on the bump, the PDMS layer on the pillar structure is compressed, and the air gap between the top and bottom electrodes decreases, resulting in the increase in all four capacitances. If a shear force is applied, a torque is induced around the pillar. Therefore, the capacitance of the two capacitors increases, whereas that of the other two decreases. The bump and the pillar structure play a critical role to generate a torque for shear force measurement. The sensor has been realized in an 8 x 8 array of unit sensors, and each unit sensor responds to normal and shear stresses in all three axes, respectively. Measurement of a single sensor shows that the full-scale range of detectable force is about 10 mN, which corresponds to 131 kPa in three directions. The sensitivities of a cell measured with a current setup are 2.5 mN, 3.0 mN, and 2.9 mN for the x-, y-, and z-directions, respectively. Normal and shear force images are also captured from a 4 x 4 array of the fabricated sensor. Distinctive characteristic patterns appear when a shear force is applied to the sensor." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
With the aim of making wearable and biocompatible parts, technological advances in bioengineering have led to the emergence of hydrogels @cite_3 . A hydrogel, a rubbery and transparent material composed mostly of water, can also be a good choice for safe physical HRI.
{ "cite_N": [ "@cite_3" ], "mid": [ "2049609871", "2266720690", "1970061687", "2006225923" ], "abstract": [ "Hydrogels have found wide application in biosensors due to their versatile nature. This family of materials is applied in biosensing either to increase the loading capacity compared to two-dimensional surfaces, or to support biospecific hydrogel swelling occurring subsequent to specific recognition of an analyte. This review focuses on various principles underpinning the design of biospecific hydrogels acting through various molecular mechanisms in transducing the recognition event of label-free analytes. Towards this end, we describe several promising hydrogel systems that when combined with the appropriate readout platform and quantitative approach could lead to future real-life applications.", "Printed hydrogel composites with plant-inspired architectures dynamically change shape on immersion in water to yield prescribed complex morphologies.", "Implantable and wearable medical devices are used for monitoring, diagnosis, and treatment of an ever-increasing range of medical conditions, leading to an improved quality of life for patients. The addition of wireless connectivity to medical devices has enabled post-deployment tuning of therapy and access to device data virtually anytime and anywhere but, at the same time, has led to the emergence of security attacks as a critical concern. While cryptography and secure communication protocols may be used to address most known attacks, the lack of a viable secure connection establishment and key exchange mechanism is a fundamental challenge that needs to be addressed. We propose a vibration-based secure side channel between an external device (medical programmer or smartphone) and a medical device. Vibration is an intrinsically short-range, user-perceptible channel that is suitable for realizing physically secure communication at low energy and size weight overheads. We identify and address key challenges associated with the vibration channel, and propose a vibration-based wakeup and key exchange scheme, named SecureVibe, that is resistant to battery drain attacks. We analyze the risk of acoustic eavesdropping attacks and propose an acoustic masking countermeasure. We demonstrate and evaluate vibration-based wakeup and key exchange between a smartphone and a prototype medical device in the context of a realistic human body model.", "Two complementary strategies can be used in the fabrication of molecular biomaterials. In the 'top-down' approach, biomaterials are generated by stripping down a complex entity into its component parts (for example, paring a virus particle down to its capsid to form a viral cage). This contrasts with the 'bottom-up' approach, in which materials are assembled molecule by molecule (and in some cases even atom by atom) to produce novel supramolecular architectures. The latter approach is likely to become an integral part of nanomaterials manufacture and requires a deep understanding of individual molecular building blocks and their structures, assembly properties and dynamic behaviors. Two key elements in molecular fabrication are chemical complementarity and structural compatibility, both of which confer the weak and noncovalent interactions that bind building blocks together during self-assembly. 
Using natural processes as a guide, substantial advances have been achieved at the interface of nanomaterials and biology, including the fabrication of nanofiber materials for three-dimensional cell culture and tissue engineering, the assembly of peptide or protein nanotubes and helical ribbons, the creation of living microlenses, the synthesis of met al nanowires on DNA templates, the fabrication of peptide, protein and lipid scaffolds, the assembly of electronic materials by bacterial phage selection, and the use of radiofrequency to regulate molecular behaviors." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Using the materials described in Sec. , a variety of optical tactile sensors have been presented in the literature. The general principle is based on optical reflection between media with different refractive indices. A conventional optical tactile sensor consists of an array of infrared light-emitting diodes (LEDs) and photodetectors. The intensity of the light is usually proportional to the magnitude of the pressure @cite_1 (a toy intensity-to-force calibration is sketched below, after this record).
{ "cite_N": [ "@cite_1" ], "mid": [ "2070970799", "1493041716", "2781493652", "1981625219" ], "abstract": [ "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept.", "This paper presents the construction and theory of operation of a large area, flexible, tactile sensor and its applications. The sensing mechanism is based on the novel contact piezoresistive effect. Furthermore, the sensor's resolution, size and shape can be easily tailored to the applications' requirements. This versatility facilitates the use of the sensor in smart applications where tactile information is used to create system intelligence. Future improvements in the tactile sensing arena are discussed along with the potential benefits of using polymer electronics.", "Abstract Tactile sensing is an essential component in human–robot interaction and object manipulation. Soft sensors allow for safe interaction and improved gripping performance. Here we present the TacTip family of sensors: a range of soft optical tactile sensors with various morphologies fabricated through dual-material 3D printing. All of these sensors are inspired by the same biomimetic design principle: transducing deformation of the sensing surface via movement of pins analogous to the function of intermediate ridges within the human fingertip. The performance of the TacTip, TacTip-GR2, TacTip-M2, and TacCylinder sensors is here evaluated and shown to attain submillimeter accuracy on a rolling cylinder task, representing greater than 10-fold super-resolved acuity. A version of the TacTip sensor has also been open-sourced, enabling other laboratories to adopt it as a platform for tactile sensing and manipulation research. These sensors are suitable for real-world applications in tactile perception, ex...", "This paper suggests a force sensor array measuring contact force based on intensity change of light transmitted throughout optical waveguide. For transparency and flexibility of the sensor, two soft prepolymers with different refractive index have been developed. The optical waveguide consists of two cladding layers and a core layer. 
The top cladding layer is designed to allow light scattering at the specific area in response to finger contact. The force sensor shows a distinct tendency that output intensity decreases with input force and measurement range is from 0 to −13.2 dB." ] }
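The proportionality between light intensity and pressure mentioned above can be exploited with a simple per-taxel calibration. The following Python sketch fits a linear intensity-to-force map; the data points, the linear model, and all names are invented for this illustration only.

import numpy as np

# Hypothetical calibration data: normalized intensity drop vs. reference force in newtons.
intensity_drop = np.array([0.00, 0.12, 0.25, 0.37, 0.51, 0.63])
force_newton = np.array([0.0, 2.1, 4.3, 6.2, 8.7, 10.9])

# Least-squares fit of force ~= a * intensity_drop + b.
A = np.vstack([intensity_drop, np.ones_like(intensity_drop)]).T
(a, b), *_ = np.linalg.lstsq(A, force_newton, rcond=None)

def estimate_force(drop):
    # Map a new intensity-drop reading to an estimated contact force.
    return a * drop + b

print("gain: %.2f N per unit drop, offset: %.2f N" % (a, b))
print("estimated force at a drop of 0.4: %.2f N" % estimate_force(0.4))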
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
The GelSight tactile sensor @cite_21 uses a thermoplastic elastomer coated with a reflective membrane and illuminated by an LED ring to capture surface textures with a camera. In @cite_4 , this sensor was benchmarked on a texture recognition problem. Similarly, researchers at the Bristol Robotics Laboratory developed a family of optical tactile sensors that are almost ready for small-scale mass production @cite_9 . Their TacTip sensor uses a commodity image tracker originally used in optical computer mice. It combines an image acquisition system and a digital signal processor capable of processing the images at 2000 Hz @cite_19 . Thanks to the high image processing rate, they can detect the slippage of a grasped object @cite_18 . In @cite_26 , a touch sensor consisting of 41 silicone rubber markers, a light source, and a camera estimates the tangential and normal forces by tracking these markers (a minimal marker-displacement sketch is given below, after this record). Markers with different colors are used in the GelForce sensor @cite_24 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_9", "@cite_21", "@cite_24", "@cite_19" ], "mid": [ "2775635818", "2743209907", "1965953026", "2962983231" ], "abstract": [ "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.", "A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures the 3D geometry and contact force information with high spacial resolution, and successfully helped many challenging robot tasks. A previous sensor [1], based on a semi-specular membrane, produces high resolution but with limited geometry accuracy. In this paper, we describe a new design of GelSight for robot gripper, using a Lambertian membrane and new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for the task of slip detection, using a combination of information about relative motions on the membrane surface and the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases, and can be used to improve the stability of the grasp.", "Sensing surface textures by touch is a valuable capability for robots. Until recently it was difficult to build a compliant sensor with high sensitivity and high resolution. The GelSight sensor is compliant and offers sensitivity and resolution exceeding that of the human fingertips. This opens the possibility of measuring and recognizing highly detailed surface textures. The GelSight sensor, when pressed against a surface, delivers a height map. This can be treated as an image, and processed using the tools of visual texture analysis. 
We have devised a simple yet effective texture recognition system based on local binary patterns, and enhanced it by the use of a multi-scale pyramid and a Hellinger distance metric. We built a database with 40 classes of tactile textures using materials such as fabric, wood, and sandpaper. Our system can correctly categorize materials from this database with high accuracy. This suggests that the GelSight sensor can be useful for material recognition by robots.", "Vision and touch are two of the important sensing modalities for humans and they offer complementary information for sensing the environment. Robots could also benefit from such multi-modal sensing ability. In this paper, addressing for the first time (to the best of our knowledge) texture recognition from tactile images and vision, we propose a new fusion method named Deep Maximum Covariance Analysis (DMCA) to learn a joint latent space for sharing features through vision and tactile sensing. The features of camera images and tactile data acquired from a GelSight sensor are learned by deep neural networks. But the learned features are of a high dimensionality and are redundant due to the differences between the two sensing modalities, which deteriorates the perception performance. To address this, the learned features are paired using maximum covariance analysis. Results of the algorithm on a newly collected dataset of paired visual and tactile data relating to cloth textures show that a good recognition performance of greater than 90 can be achieved by using the proposed DMCA framework. In addition, we find that the perception performance of either vision or tactile sensing can be improved by employing the shared representation space, compared to learning from unimodal data." ] }
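Marker-tracking sensors of the kind mentioned above infer force from how the markers move between frames. The Python sketch below matches marker centroids between a reference frame and a deformed frame and converts the mean shift into a shear-force estimate; the marker coordinates, the nearest-neighbour matching rule, and the pixels-to-newton gain are assumptions made only to illustrate the idea, not the method of the cited sensors.

import numpy as np

def match_markers(reference, deformed):
    # Greedy nearest-neighbour matching of marker centroids (N x 2 arrays, in pixels);
    # returns the displacement of each reference marker.
    displacements = np.zeros_like(reference, dtype=float)
    for i, ref_pt in enumerate(reference):
        j = int(np.argmin(np.linalg.norm(deformed - ref_pt, axis=1)))
        displacements[i] = deformed[j] - ref_pt
    return displacements

# Hypothetical marker centroids before and during contact.
reference = np.array([[10.0, 10.0], [30.0, 10.0], [10.0, 30.0], [30.0, 30.0]])
deformed = np.array([[11.5, 10.2], [31.2, 10.1], [10.9, 30.4], [31.0, 30.3]])

disp = match_markers(reference, deformed)
mean_shift = disp.mean(axis=0)       # mean lateral marker shift, in pixels
K_SHEAR = 0.8                        # assumed pixels-to-newton calibration gain
shear_force = K_SHEAR * np.linalg.norm(mean_shift)
print("mean marker shift (px):", mean_shift, "-> estimated shear force: %.2f N" % shear_force)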
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Researchers embedded an optical tactile sensor into the multi-modal tactile sensing system of an underwater robot gripper @cite_25 . As in @cite_10 and the Optoforce sensor, the sensing principle is based on the reflection of light delivered via POFs. The POFs can be used as force-sensing elements due to the stray light, which is considered a drawback in telecommunications @cite_1 . The deformation of a POF increases the losses of the light propagated inside it, as the attenuation coefficient increases. In addition, the elasto-optic metamaterial presented in @cite_2 can change its refractive index due to pure bending. Such POFs are fabricated by the chemical vapor deposition technique. Their design generally relies upon the phenomenon of optical interference @cite_0 .
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "2070970799", "2775635818", "2472910955", "2793447234" ], "abstract": [ "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept.", "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.", "This paper addresses 6-DOF (degree-of-freedom) tactile localization, i.e., the pose estimation of tridimensional objects using tactile measurements. This estimation problem is fundamental for the operation of autonomous robots that are often required to manipulate and grasp objects whose pose is a priori unknown. 
The nature of tactile measurements, the strict time requirements for real-time operation, and the multimodality of the involved probability distributions pose remarkable challenges and call for advanced nonlinear filtering techniques. Following a Bayesian approach, this paper proposes a novel and effective algorithm, named memory unscented particle filter (MUPF), which solves 6-DOF localization recursively in real time by only exploiting contact point measurements. The MUPF combines a modified particle filter that incorporates a sliding memory of past measurements to better handle multimodal distributions, along with the unscented Kalman filter that moves the particles toward regions of the search space that are more likely with the measurements. The performance of the proposed MUPF algorithm has been assessed both in simulation and on a real robotic system equipped with tactile sensors (i.e., the iCub humanoid robot). The experiments show that the algorithm provides accurate and reliable localization even with a low number of particles and, hence, is compatible with real-time requirements.", "Tactile sensing is required for human-like control with robotic manipulators. Multimodality is an essential component for these tactile sensors, for robots to achieve both the perceptual accuracy required for precise control, as well as the robustness to maintain a stable grasp without causing damage to the object or the robot itself. In this study, we present a cheap, 3D-printed, compliant, dual-modal, optical tactile sensor that is capable of both high (temporal) speed sensing, analogous to pain reception in humans and high (spatial) resolution sensing, analogous to the sensing provided by Merkel cell complexes in the human fingertip. We apply three tasks for testing the sensing capabilities in both modes; first, a depth modulation task, requiring the robot to follow a target trajectory using the high-speed mode; second, a high-resolution perception task, where the sensor perceives angle and radial position relative to an object edge; and third, a tactile exploration task, where the robot uses the high-resolution mode to perceive an edge and subsequently follow the object contour. The robot is capable of modulating contact depth using the high-speed mode, high accuracy in the perception task, and accurate control using the high-resolution mode." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Laboratory prototypes of image-based tactile sensors were reported in @cite_22 and @cite_24 . In these sensing panels, LEDs and photodiodes or a camera are placed against a reflecting planar surface. When the surface deforms, the reflected beams change. These sensors use light to detect the deformation of the contact surface, from which the contact force can be estimated.
{ "cite_N": [ "@cite_24", "@cite_22" ], "mid": [ "1965890979", "2070970799", "2775635818", "2157459335" ], "abstract": [ "We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.", "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept.", "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. 
This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.", "We present the development of a polyimide-based two-dimensional tactile sensing array realized using a novel inverted fabrication technique. Thermal silicon oxide or Pyrex® substrates are treated such that their surfaces are OH group terminated, allowing good adhesion between such substrates and a spun-on polyimide film during processing through what are suspected to be hydrogen bonds that can be selectively broken when release is desired. The release of the continuous polyimide film is rapidly accomplished by breaking these bonds. This process results in robust, low-cost and continuous polymer-film devices. The developed sensor skin contains an array of membrane-based tactile sensors (taxels). Micromachined thin-film met al strain gauges are positioned on the edges of polyimide membranes. The change in resistance from each strain gauge resulting from normal forces applied to tactile bumps on the top of the membranes is used to image force distribution. Response of an individual taxel is characterized. The effective gauge factor of the taxels is found to be approximately 1.3. Sensor array output is experimentally obtained. The demonstrated devices are robust enough for direct contact with humans, everyday objects and contaminants without undue care." ] }
1908.03645
2966981412
Qualitative relationships describe how increasing or decreasing one property (e.g. altitude) affects another (e.g. temperature). They are an important aspect of natural language question answering and are crucial for building chatbots or voice agents where one may enquire about qualitative relationships. Recently, a dataset about question answering involving qualitative relationships has been proposed, and a few approaches to answer such questions have been explored, at the heart of which lies a semantic parser that converts the natural language input to a suitable logical form. A problem with existing semantic parsers is that they try to directly convert the input sentences to a logical form. Since the output language varies with each application, it forces the semantic parser to learn almost everything from scratch. In this paper, we show that instead of using a semantic parser to produce the logical form, if we apply the generate-validate framework, i.e. generate a natural language description of the logical form and validate whether the natural language description follows from the input text, we get a better scope for transfer learning, and our method outperforms the state-of-the-art by a large margin of 7.93%.
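To make the generate-validate idea above concrete, the following sketch enumerates candidate logical forms, renders each into a natural-language description, and keeps the candidate whose description is best supported by the input text. The function names (`describe`, `generate_validate`) and the word-overlap entailment scorer are illustrative assumptions, not the paper's actual components; in practice the validation step would use a pretrained entailment model.

```python
# Illustrative sketch of the generate-validate idea (hypothetical names, not the paper's code).
from typing import Callable, Dict, List, Tuple

def describe(logical_form: Dict) -> str:
    """Render a candidate logical form as a natural-language description."""
    return f'The {logical_form["attribute"]} is {logical_form["direction"]} for the {logical_form["world"]}.'

def generate_validate(text: str,
                      candidates: List[Dict],
                      entails: Callable[[str, str], float]) -> Tuple[Dict, float]:
    """Pick the candidate whose description is best supported by the input text.

    `entails(premise, hypothesis)` is assumed to be any textual-entailment scorer
    returning a probability; it is a stand-in, not a specific library call.
    """
    scored = [(cand, entails(text, describe(cand))) for cand in candidates]
    return max(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    # Toy entailment scorer based on word overlap, only to make the sketch runnable.
    def toy_entails(premise: str, hypothesis: str) -> float:
        tok = lambda s: {w.strip(".,").lower() for w in s.split()}
        p, h = tok(premise), tok(hypothesis)
        return len(p & h) / max(len(h), 1)

    text = "Since the carpet friction is higher, the robot vacuum moves slower on the carpet."
    candidates = [
        {"attribute": "friction", "direction": "higher", "world": "carpet"},
        {"attribute": "friction", "direction": "lower", "world": "carpet"},
    ]
    best, score = generate_validate(text, candidates, toy_entails)
    print(best, round(score, 2))
```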
Our work is related to both the works in semantic parsing @cite_14 @cite_12 @cite_1 @cite_0 @cite_4 and question answering using semantic parsing @cite_11 @cite_5 @cite_13. The problem of QuaRel is quite similar to the word math problems @cite_16 @cite_6 in the sense that both are story problems and use semantic parsing to translate the input problem into a suitable representation. Our work is also related to the work in @cite_13 that uses the generate-validate framework to answer questions w.r.t. life-cycle text. @cite_13 uses the generate-validate framework to verify "given facts". In particular, it shows how rules can be used to infer new information over raw text without using a semantic parser to create a structured knowledge base. The work in @cite_13 uses a semantic parser to translate the question into one of the predefined forms. In our work, however, we use generate-validate for both question and "given fact" understanding. The work of @cite_9 is most closely related to ours. @cite_9 proposes two models for QuaRel. One uses a state-of-the-art semantic parser @cite_0 to convert the input problem to the desired logical representation. They call this model QUASP, which obtains an accuracy of 56.1%.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_5", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2250225488", "2963611534", "2901386711", "2473222270" ], "abstract": [ "A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves stateof-the-art accuracies on two recently released question-answering datasets.", "Despite the availability of a huge amount of video data accompanied by descriptive texts, it is not always easy to exploit the information contained in natural language in order to automatically recognize video concepts. Towards this goal, in this paper we use textual cues as means of supervision, introducing two weakly supervised techniques that extend the Multiple Instance Learning (MIL) framework: the Fuzzy Sets Multiple Instance Learning (FSMIL) and the Probabilistic Labels Multiple Instance Learning (PLMIL). The former encodes the spatio-temporal imprecision of the linguistic descriptions with Fuzzy Sets, while the latter models different interpretations of each description's semantics with Probabilistic Labels, both formulated through a convex optimization algorithm. In addition, we provide a novel technique to extract weak labels in the presence of complex semantics, that consists of semantic similarity computations. We evaluate our methods on two distinct problems, namely face and action recognition, in the challenging and realistic setting of movies accompanied by their screenplays, contained in the COGNIMUSE database. We show that, on both tasks, our method considerably outperforms a state-of-the-art weakly supervised approach, as well as other baselines.", "Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, \"Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?\" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. 
The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL", "Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting---these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task." ] }
1908.03405
2968351798
Early time series classification (eTSC) is the problem of classifying a time series after as few measurements as possible with the highest possible accuracy. The most critical issue of any eTSC method is to decide when enough data of a time series has been seen to take a decision: waiting for more data points usually makes the classification problem easier but delays the time at which a classification is made; in contrast, earlier classification has to cope with less input data, often leading to inferior accuracy. The state-of-the-art eTSC methods compute a fixed optimal decision time, assuming that every time series has the same defined start time (like turning on a machine). However, in many real-life applications measurements start at arbitrary times (like measuring the heartbeats of a patient), implying that the best time for taking a decision varies heavily between time series. We present TEASER, a novel algorithm that models eTSC as a two-tier classification problem: in the first tier, a classifier periodically assesses the incoming time series to compute class probabilities. However, these class probabilities are only used as the output label if a second-tier classifier decides that the predicted label is reliable enough, which can happen after a different number of measurements. In an evaluation using 45 benchmark datasets, TEASER makes its predictions two to three times earlier than its competitors while reaching the same or an even higher classification accuracy. We further show TEASER's superior performance using real-life use cases, namely energy monitoring and gait detection.
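A minimal sketch of the two-tier decision scheme described above: a first-tier classifier produces class probabilities from each growing prefix of the series, and a second-tier component decides whether that prediction is reliable enough to emit. The interfaces, the fixed step size, and the toy components are assumptions for illustration, not TEASER's actual implementation.

```python
# Sketch of a two-tier early time-series classification loop (hypothetical interfaces).
import numpy as np

class TwoTierETSC:
    def __init__(self, tier1, tier2, step: int = 10):
        self.tier1 = tier1      # maps a prefix to class probabilities
        self.tier2 = tier2      # decides whether those probabilities are reliable
        self.step = step        # how many new measurements to wait for between checks

    def classify_early(self, series: np.ndarray):
        """Return (label, length used) as soon as tier 2 accepts a tier-1 prediction."""
        for n in range(self.step, len(series) + 1, self.step):
            probs = self.tier1.predict_proba(series[:n])
            if self.tier2.is_reliable(probs, n):
                return int(np.argmax(probs)), n
        # Fall back to the full series if no earlier prediction was accepted.
        probs = self.tier1.predict_proba(series)
        return int(np.argmax(probs)), len(series)

# Toy components, only so the sketch runs end to end.
class ToyTier1:
    def predict_proba(self, prefix):
        mean = float(np.mean(prefix))
        p = 1.0 / (1.0 + np.exp(-mean))           # class 1 if the prefix trends positive
        return np.array([1.0 - p, p])

class ToyTier2:
    def is_reliable(self, probs, n):
        return probs.max() > 0.9                  # accept only confident predictions

if __name__ == "__main__":
    ts = np.concatenate([np.zeros(30), np.ones(70) * 5.0])
    label, used = TwoTierETSC(ToyTier1(), ToyTier2()).classify_early(ts)
    print(f"predicted class {label} after {used} of {len(ts)} measurements")
```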
The techniques used for time series classification (TSC) can be broadly categorized into two classes: whole series-based methods and feature-based methods. Whole series-based methods make use of a point-wise comparison of entire TS, such as 1-NN Dynamic Time Warping (DTW) @cite_12. In contrast, feature-based classifiers rely on comparing features generated from substructures of TS. These approaches can be grouped into those using shapelets and those using bag-of-patterns (BOP). Shapelets are defined as TS subsequences that are maximally representative of a class @cite_26 @cite_21. The bag-of-patterns (BOP) model @cite_10 @cite_19 @cite_25 @cite_39 breaks up a TS into a bag of substructures, represents these substructures as discrete features, and finally builds a histogram of feature counts as the basis for classification. The recent Word ExtrAction for time SEries cLassification (WEASEL) @cite_10 also conceptually builds on the BOP approach and is one of the fastest and most accurate classifiers. In @cite_1, deep learning networks are applied to TSC; their best-performing fully convolutional network (FCN) does not perform significantly differently from the state of the art. @cite_5 presents an overview of deep learning approaches.
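The bag-of-patterns idea mentioned above can be sketched in a few lines: slide a window over the series, discretize each window into a short symbolic word, and count word frequencies; a classifier then operates on the resulting histograms. The mean-based binning below is a simple stand-in for the SAX/SFA word extraction used by the cited methods.

```python
# Sketch of a bag-of-patterns histogram (mean binning stands in for SAX/SFA words).
from collections import Counter
import numpy as np

def to_word(window: np.ndarray, segments: int = 4, bins=(-0.5, 0.5)) -> str:
    """Discretize a z-normalized window into a short symbolic word."""
    window = (window - window.mean()) / (window.std() + 1e-8)
    means = [seg.mean() for seg in np.array_split(window, segments)]
    return "".join("abc"[np.digitize(m, bins)] for m in means)

def bag_of_patterns(series: np.ndarray, window: int = 16, stride: int = 1) -> Counter:
    """Histogram of symbolic words over all sliding windows of the series."""
    words = [to_word(series[i:i + window])
             for i in range(0, len(series) - window + 1, stride)]
    return Counter(words)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.sin(np.linspace(0, 8 * np.pi, 200))
    noisy = rng.normal(size=200)
    print("smooth:", bag_of_patterns(smooth).most_common(3))
    print("noisy: ", bag_of_patterns(noisy).most_common(3))
```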
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_1", "@cite_39", "@cite_19", "@cite_5", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2802962644", "1975257359", "2306394264", "2581867724" ], "abstract": [ "With the development of Fully Convolutional Neural Network (FCN), there have been progressive advances in the field of semantic segmentation in recent years. The FCN-based solutions are able to summarize features across training images and generate matching templates for the desired object classes, yet they overlook intra-class difference (ICD) among multiple instances in the same class. In this work, we present a novel fine-to-coarse learning (FCL) procedure, which first guides the network with designed 'finer' sub-class labels, whose decisions are mapped to the original 'coarse' object category through end-to-end learning. A sub-class labeling strategy is designed with unsupervised clustering upon deep convolutional features, and the proposed FCL procedure enables a balance between the fine-scale (i.e. sub-class) and the coarse-scale (i.e. class) knowledge. We conduct extensive experiments on several popular datasets, including PASCAL VOC, Context, Person-Part and NYUDepth-v2 to demonstrate the advantage of learning finer sub-classes and the potential to guide the learning of deep networks with unsupervised clustering.", "Time series classification is an important task with many challenging applications. A nearest neighbor (NN) classifier with dynamic time warping (DTW) distance is a strong solution in this context. On the other hand, feature-based approaches have been proposed as both classifiers and to provide insight into the series, but these approaches have problems handling translations and dilations in local patterns. Considering these shortcomings, we present a framework to classify time series based on a bag-of-features representation (TSBF). Multiple subsequences selected from random locations and of random lengths are partitioned into shorter intervals to capture the local information. Consequently, features computed from these subsequences measure properties at different locations and dilations when viewed from the original series. This provides a feature-based approach that can handle warping (although differently from DTW). Moreover, a supervised learner (that handles mixed data types, different units, etc.) integrates location information into a compact codebook through class probability estimates. Additionally, relevant global features can easily supplement the codebook. TSBF is compared to NN classifiers and other alternatives (bag-of-words strategies, sparse spatial sample kernels, shapelets). Our experimental results show that TSBF provides better results than competitive methods on benchmark datasets from the UCR time series database.", "Time series classification (TSC), the problem of predicting class labels of time series, has been around for decades within the community of data mining and machine learning, and found many important applications such as biomedical engineering and clinical prediction. However, it still remains challenging and falls short of classification accuracy and efficiency. Traditional approaches typically involve extracting discriminative features from the original time series using dynamic time warping (DTW) or shapelet transformation, based on which an off-the-shelf classifier can be applied. These methods are ad-hoc and separate the feature extraction part with the classification part, which limits their accuracy performance. 
Plus, most existing methods fail to take into account the fact that time series often have features at different time scales. To address these problems, we propose a novel end-to-end neural network model, Multi-Scale Convolutional Neural Networks (MCNN), which incorporates feature extraction and classification in a single framework. Leveraging a novel multi-branch layer and learnable convolutional layers, MCNN automatically extracts features at different scales and frequencies, leading to superior feature representation. MCNN is also computationally efficient, as it naturally leverages GPU computing. We conduct comprehensive empirical evaluation with various existing methods on a large number of benchmark datasets, and show that MCNN advances the state-of-the-art by achieving superior accuracy performance than other leading methods.", "Time series (TS) occur in many scientific and commercial applications, ranging from earth surveillance to industry automation to the smart grids. An important type of TS analysis is classification, which can, for instance, improve energy load forecasting in smart grids by detecting the types of electronic devices based on their energy consumption profiles recorded by automatic sensors. Such sensor-driven applications are very often characterized by (a) very long TS and (b) very large TS datasets needing classification. However, current methods to time series classification (TSC) cannot cope with such data volumes at acceptable accuracy; they are either scalable but offer only inferior classification quality, or they achieve state-of-the-art classification quality but cannot scale to large data volumes. In this paper, we present WEASEL (Word ExtrAction for time SEries cLassification), a novel TSC method which is both fast and accurate. Like other state-of-the-art TSC methods, WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more accurate than the best current non-ensemble algorithms at orders-of-magnitude lower classification and training times, and it is almost as accurate as ensemble classifiers, whose computational complexity makes them inapplicable even for mid-size datasets. The outstanding robustness of WEASEL is also confirmed by experiments on two real smart grid datasets, where it out-of-the-box achieves almost the same accuracy as highly tuned, domain-specific methods." ] }
1908.03295
2967131314
We introduce a novel single-shot object detector that eases the foreground-background class imbalance by suppressing easy negatives while increasing positives. To achieve this, we propose an Anchor Promotion Module (APM) which predicts the probability of each anchor being positive and adjusts its initial location and shape to promote both the quality and quantity of positive anchors. In addition, we design an efficient Feature Alignment Module (FAM) to extract aligned features fitting the promoted anchors with the help of both the location and shape transformation information from the APM. We assemble the two proposed modules onto VGG-16 and ResNet-101 backbones with an encoder-decoder architecture. Extensive experiments on MS COCO demonstrate that our model performs competitively with alternative methods (40.0 mAP) and runs faster (28.6).
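A rough sketch of what an anchor-promotion step might look like: a small head predicts, for every initial anchor, a probability of being positive plus a location/shape adjustment, and only the promoted anchors are kept. The layer sizes, channel layout, and promotion threshold are assumptions for illustration, not the architecture of the paper.

```python
# Sketch of an anchor promotion head (hypothetical sizes, not the paper's architecture).
import torch
import torch.nn as nn

class AnchorPromotionHead(nn.Module):
    def __init__(self, in_channels: int = 256, anchors_per_cell: int = 3):
        super().__init__()
        self.score = nn.Conv2d(in_channels, anchors_per_cell, kernel_size=3, padding=1)
        self.delta = nn.Conv2d(in_channels, anchors_per_cell * 4, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor):
        """feat: (B, C, H, W) -> positive probability and (dx, dy, dw, dh) per anchor."""
        prob = torch.sigmoid(self.score(feat))     # (B, A, H, W)
        delta = self.delta(feat)                   # (B, 4A, H, W)
        return prob, delta

def promote(anchors, prob, delta, thr: float = 0.5):
    """Adjust anchors (cx, cy, w, h) by their predicted deltas and keep likely positives."""
    cx = anchors[:, 0] + delta[:, 0] * anchors[:, 2]
    cy = anchors[:, 1] + delta[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * torch.exp(delta[:, 2])
    h = anchors[:, 3] * torch.exp(delta[:, 3])
    refined = torch.stack([cx, cy, w, h], dim=1)
    keep = prob > thr
    return refined[keep], prob[keep]

if __name__ == "__main__":
    head = AnchorPromotionHead()
    feat = torch.randn(1, 256, 8, 8)
    prob, delta = head(feat)
    # Flatten to one row per anchor (assumed channel layout: anchors grouped per cell).
    prob = prob.permute(0, 2, 3, 1).reshape(-1)
    delta = delta.permute(0, 2, 3, 1).reshape(-1, 4)
    anchors = torch.rand(prob.numel(), 4) * 8      # toy (cx, cy, w, h) anchors
    refined, scores = promote(anchors, prob, delta)
    print(refined.shape, scores.shape)
```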
Cascaded Architecture. Cascaded architectures have been explored extensively for improving classification and refining localization. Viola and Jones @cite_25 trained a series of cascaded weak classifiers to form a strong region classifier for face detection. MR-CNN @cite_2 introduced iterative bounding-box regression, feeding the boxes into the R-CNN several times during inference to improve localization accuracy. More recently, Cai et al. @cite_11 proposed Cascade R-CNN, which achieves more accurate boxes via a sequence of detectors trained with increasing IoU thresholds. Cheng et al. @cite_12 resampled hard positive detection boxes and applied an R-CNN to rescore them. In contrast to the above works, which focus on further improving the output detections of two-stage methods, our framework aims to recognize positive anchor boxes and promote the anchors for one-stage detection.
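The cascade idea with increasing IoU thresholds can be illustrated with a toy sketch: each stage relabels proposals as positive under a stricter IoU threshold and refines the boxes before passing them on. The "regressor" here simply moves boxes halfway toward the ground truth, standing in for the learned per-stage regressors of the cited works.

```python
# Sketch of cascaded box refinement with increasing IoU thresholds (placeholder regressor).
import numpy as np

def iou(box, gt):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box[0], gt[0]), max(box[1], gt[1])
    x2, y2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box) + area(gt) - inter + 1e-8)

def cascade_refine(proposals, gt, stages=(0.5, 0.6, 0.7)):
    """Each stage keeps proposals positive under a stricter IoU and nudges boxes toward gt."""
    boxes = np.asarray(proposals, dtype=float)
    for thr in stages:
        positives = np.array([iou(b, gt) >= thr for b in boxes])
        print(f"IoU >= {thr}: {positives.sum()} / {len(boxes)} positives")
        boxes = boxes + 0.5 * (np.asarray(gt) - boxes)   # toy stand-in for a learned regressor
    return boxes

if __name__ == "__main__":
    gt = (10, 10, 50, 50)
    proposals = [(0, 0, 40, 40), (20, 20, 70, 70), (12, 8, 52, 46)]
    cascade_refine(proposals, gt)
```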
{ "cite_N": [ "@cite_11", "@cite_25", "@cite_12", "@cite_2" ], "mid": [ "2473640056", "2912662889", "2559348937", "2951001760" ], "abstract": [ "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.", "Cascade is a classic yet powerful architecture that has boosted performance on various tasks. However, how to introduce cascade to instance segmentation remains an open question. A simple combination of Cascade R-CNN and Mask R-CNN only brings limited gain. In exploring a more effective approach, we find that the key to a successful instance segmentation cascade is to fully leverage the reciprocal relationship between detection and segmentation. In this work, we propose a new framework, Hybrid Task Cascade (HTC), which differs in two important aspects: (1) instead of performing cascaded refinement on these two tasks separately, it interweaves them for a joint multi-stage processing; (2) it adopts a fully convolutional branch to provide spatial context, which can help distinguishing hard foreground from cluttered background. Overall, this framework can learn more discriminative features progressively while integrating complementary features together in each stage. Without bells and whistles, a single HTC obtains 38.4 and 1.5 improvement over a strong Cascade Mask R-CNN baseline on MSCOCO dataset. Moreover, our overall system achieves 48.6 mask AP on the test-challenge split, ranking 1st in the COCO 2018 Challenge Object Detection Task. Code is available at: this https URL.", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). 
Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization." ] }
1908.03391
2967874181
Individual identification is essential to animal behavior and ecology research and is of significant importance for protecting endangered species. Red pandas, among the world's rarest animals, are currently identified mainly by visual inspection and microelectronic chips, which are costly and inefficient. Motivated by recent advances in computer-vision-based animal identification, in this paper we propose an automatic framework for identifying individual red pandas based on their face images. We implement the framework by exploring well-established deep learning models with the necessary adaptations for effectively dealing with red panda images. Based on a database of red panda images that we constructed ourselves, we evaluate the effectiveness of the proposed automatic individual red panda identification method. The evaluation results show the promising potential of automatically recognizing individual red pandas from their faces. We will release our database and model in the public domain to promote research on automatic animal identification and, in particular, on techniques for protecting red pandas.
As summarized in Table , automatic individual identification methods have been studied for a number of species, including African penguins @cite_5, northeast tigers @cite_12, cattle @cite_11, lemurs @cite_8, dairy cows @cite_7, great white sharks @cite_3, pandas @cite_0, primates @cite_18, pigs @cite_9, and ringed seals @cite_4. Different species usually differ greatly in appearance; however, individual animals of the same species may differ only slightly and can be distinguished only by fine-grained details. Almost all of the related studies rely on specific body parts of an animal to determine its identity. For species with salient characteristics in their appearance (e.g., the spots on the breasts of penguins @cite_5 and the rings on the bodies of ringed seals @cite_4), individual identification can be done by extracting and comparing these salient features. For species with only subtle appearance differences between individuals, such as pigs @cite_9, lemurs @cite_8, and pandas @cite_0, the most common solution is to focus on body parts with relatively rich textures and to extract discriminative features from those parts.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1580935273", "2765230784", "2791690647", "2963430954" ], "abstract": [ "Summary 1 The ability to identify individual animals is a critical aid in wildlife and conservation studies requiring information on behaviour, distribution, habitat use, population and life-history parameters. We present a computer-aided photo-identification technique that relies on natural marks to identify individuals of Carcharias taurus, a shark species that is critically endangered off the eastern Australian coast and considered globally vulnerable. The technique could potentially be applied to a range of species of similar form and bearing natural marks. 2 The use of natural marks for photo-identification is a non-invasive technique for identifying individual animals. As photo-identification databases grow larger, and their implementation spans several years, the historically used visual-matching processes lose accuracy and speed. A computerized pattern-matching system that requires initial user interaction to select the key features aids researchers by considerably reducing the time needed for identification of individuals. 3 Our method uses a two-dimensional affine transformation to compare two individuals in a commonly defined reference space. The methodology was developed using a database of 221 individually identifiable sharks that were photographically marked and rephotographed over 9 years, demonstrating both the efficacy of the technique and that the natural pigment marks of C. taurus are a reliable means of tracking individuals over several years. 4 Synthesis and applications. The identification of individual animals that are naturally marked with spots or similar patterns is achieved with an interactive pattern-matching system that uses an affine transformation to compare selected points in a single-user computer-aided interface. Our technique has been used successfully on C. taurus and we believe the methodology can be applied to other species of a similar form that have natural marks or patterns. The identification of individuals allows accurate tracking of their movements and distribution, and contributes to better population estimates for improved wildlife management and conservation planning.", "In order to monitor an animal population and to track individual animals in a non-invasive way, identification of individual animals based on certain distinctive characteristics is necessary. In this study, automatic image-based individual identification of the endangered Saimaa ringed seal (Phoca hispida saimensis) is considered. Ringed seals have a distinctive permanent pelage pattern that is unique to each individual. This can be used as a basis for the identification process. The authors propose a framework that starts with segmentation of the seal from the background and proceeds to various post-processing steps to make the pelage pattern more visible and the identification easier. Finally, two existing species independent individual identification methods are compared with a challenging data set of Saimaa ringed seal images. The results show that the segmentation and proposed post-processing steps increase the identification performance.", "Abstract Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise objective measurements are required (e.g. 
weight). Current best practice involves the use of RFID tags which are time-consuming for the farmer and distressing for the animal to fit. To overcome this, non-invasive biometrics are proposed by using the face of the animal. We test this in a farm environment, on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model and our own CNN model that we train using an artificially augmented data set. Our results show that accurate individual pig recognition is possible with accuracy rates of 96.7 on 1553 images. Class Activated Mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.", "We address the problem of identifying individual cetaceans from images showing the trailing edge of their fins. Given the trailing edge from an unknown individual, we produce a ranking of known individuals from a database. The nicks and notches along the trailing edge define an individual's unique signature. We define a representation based on integral curvature that is robust to changes in viewpoint and pose, and captures the pattern of nicks and notches in a local neighborhood at multiple scales. We explore two ranking methods that use this representation. The first uses a dynamic programming time-warping algorithm to align two representations, and interprets the alignment cost as a measure of similarity. This algorithm also exploits learned spatial weights to downweight matches from regions of unstable curvature. The second interprets the representation as a feature descriptor. Feature keypoints are defined at the local extrema of the representation. Descriptors for the set of known individuals are stored in a tree structure, which allows us to perform queries given the descriptors from an unknown trailing edge. We evaluate the top-k accuracy on two real-world datasets to demonstrate the effectiveness of the curvature representation, achieving top-1 accuracy scores of approximately 95 and 80 for bottlenose dolphins and humpback whales, respectively." ] }
1908.03391
2967874181
Individual identification is essential to animal behavior and ecology research and is of significant importance for protecting endangered species. Red pandas, among the world's rarest animals, are currently identified mainly by visual inspection and microelectronic chips, which are costly and inefficient. Motivated by recent advances in computer-vision-based animal identification, in this paper we propose an automatic framework for identifying individual red pandas based on their face images. We implement the framework by exploring well-established deep learning models with the necessary adaptations for effectively dealing with red panda images. Based on a database of red panda images that we constructed ourselves, we evaluate the effectiveness of the proposed automatic individual red panda identification method. The evaluation results show the promising potential of automatically recognizing individual red pandas from their faces. We will release our database and model in the public domain to promote research on automatic animal identification and, in particular, on techniques for protecting red pandas.
Red pandas clearly belong to those species whose individuals show only subtle appearance differences. Fortunately, their faces have relatively salient textures. According to Table , most methods for species without salient appearance differences are based on learned features. With learning-based models, researchers do not have to manually find out the exact parts that are helpful for identification. Inspired by these works, we build a deep neural network model for identifying individual red pandas based on their face images. Compared with existing animal identification methods, ours is fully automatic. Almost all existing methods are based on pre-cropped pictures of specific body parts, such as the tailhead images of dairy cows @cite_7 and the face images of pigs @cite_9. In contrast, our method takes the image of a red panda as input and automatically detects its face, extracts features, and matches these features against the ones enrolled in the gallery to determine its identity. In addition, to the best of our knowledge, the research in this paper is the first attempt at image-based automatic individual identification of red pandas.
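A minimal sketch of the fully automatic pipeline described above: detect the face, embed it, and match the embedding against a gallery by cosine similarity. The detector, the embedding network, and the matching threshold are hypothetical stand-ins, not the components used in this paper.

```python
# Sketch of a detect -> embed -> match identification pipeline (hypothetical components).
import numpy as np

def identify(image: np.ndarray, detect_face, embed, gallery: dict, threshold: float = 0.7):
    """Return the best-matching identity from the gallery, or None on no face / weak match.

    `detect_face(image)` is assumed to return a cropped face (or None),
    `embed(face)` a unit-length feature vector, and `gallery` maps identity -> embedding.
    """
    face = detect_face(image)
    if face is None:
        return None, 0.0
    query = embed(face)
    scores = {name: float(np.dot(query, ref)) for name, ref in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

if __name__ == "__main__":
    rng = np.random.default_rng(1)

    def unit(v):
        return v / np.linalg.norm(v)

    # Toy stand-ins: the "embedding" of an image is just a fixed random projection.
    proj = rng.normal(size=(128, 64 * 64))
    toy_detect = lambda img: img                   # assume the image is already a face crop
    toy_embed = lambda face: unit(proj @ face.ravel())

    panda_a = rng.normal(size=(64, 64))
    panda_b = rng.normal(size=(64, 64))
    gallery = {"panda_A": toy_embed(panda_a), "panda_B": toy_embed(panda_b)}

    probe = panda_a + 0.05 * rng.normal(size=(64, 64))   # noisy new photo of panda A
    print(identify(probe, toy_detect, toy_embed, gallery))
```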
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2791690647", "2125860240", "2059697919", "2118696714" ], "abstract": [ "Abstract Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise objective measurements are required (e.g. weight). Current best practice involves the use of RFID tags which are time-consuming for the farmer and distressing for the animal to fit. To overcome this, non-invasive biometrics are proposed by using the face of the animal. We test this in a farm environment, on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model and our own CNN model that we train using an artificially augmented data set. Our results show that accurate individual pig recognition is possible with accuracy rates of 96.7 on 1553 images. Class Activated Mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.", "The issue of recognizability of subjects in biometric identification is of particular interest to the designers of these systems. We have applied the concept of Doddington’s biometric menagerie to the area of facial recognition. We performed a series of tests for the presence of goats, lambs, and wolves on FRGC 2.0 color image data. The data for the subjects that appeared at the extreme end of these tests was then visually examined. Even a cursory comparison of images showed that for this set of data, some images fell into the defined menagerie categories. Our tests show the statistical existence of these animal classifications within the constraints of this set of FRGC 2.0 data using the baseline matching algorithm. Ultimately, these tests were limited by the image data set and matching algorithm used. For further confirmation of the existence of the menagerie, the analysis must be expanded to include different image sets and matching algorithms..", "In this work, we propose and experiment an original solution to 3D face recognition that supports face matching also in the case of probe scans with missing parts. In the proposed approach, distinguishing traits of the face are captured by first extracting 3D keypoints of the scan and then measuring how the face surface changes in the keypoints neighborhood using local shape descriptors. In particular: 3D keypoints detection relies on the adaptation to the case of 3D faces of the meshDOG algorithm that has been demonstrated to be effective for 3D keypoints extraction from generic objects; as 3D local descriptors we used the HOG descriptor and also proposed two alternative solutions that develop, respectively, on the histogram of orientations and the geometric histogram descriptors. Face similarity is evaluated by comparing local shape descriptors across inlier pairs of matching keypoints between probe and gallery scans. The face recognition accuracy of the approach has been first experimented on the difficult probes included in the new 2D 3D Florence face dataset that has been recently collected and released at the University of Firenze, and on the Binghamton University 3D facial expression dataset. Then, a comprehensive comparative evaluation has been performed on the Bosphorus, Gavab and UND FRGC v2.0 databases, where competitive results with respect to existing solutions for 3D face biometrics have been obtained. 
Graphical abstractDisplay Omitted Highlights3D face recognition approach deployable in real non-cooperative contexts of use.Fully-3D approach, based on keypoints detection, description and matching.MeshDOG keypoints detector combined with the multi-ring GH descriptor.RANSAC algorithm included for outlier removal from matching keypoints.State of the art accuracy for recognizing 3D scans with missing parts.", "From a set of images in a particular domain, labeled with part locations and class, we present a method to automatically learn a large and diverse set of highly discriminative intermediate features that we call Part-based One-vs.-One Features (POOFs). Each of these features specializes in discrimination between two particular classes based on the appearance at a particular part. We demonstrate the particular usefulness of these features for fine-grained visual categorization with new state-of-the-art results on bird species identification using the Caltech UCSD Birds (CUB) dataset and parity with the best existing results in face verification on the Labeled Faces in the Wild (LFW) dataset. Finally, we demonstrate the particular advantage of POOFs when training data is scarce." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasping tasks guided by depth-camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt at using only 2.5D images as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
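A minimal sketch of the kind of policy network the abstract implies: a small convolutional encoder over a single-channel depth image that regresses a six-dimensional tool configuration (position plus orientation). Layer sizes and the output parameterization are assumptions, not the network actually used in the paper.

```python
# Sketch of a depth-image policy network regressing a tool pose (hypothetical sizes).
import torch
import torch.nn as nn

class DepthGraspPolicy(nn.Module):
    """Maps a (1, H, W) depth image to a tool configuration: xyz position + 3 orientation angles."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 6))

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(depth))      # (B, 6): x, y, z, roll, pitch, yaw

if __name__ == "__main__":
    policy = DepthGraspPolicy()
    depth = torch.rand(4, 1, 84, 84)               # a batch of simulated depth frames
    print(policy(depth).shape)                     # torch.Size([4, 6])
```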
Deep reinforcement learning has been applied to several tasks, such as learning to play video games, and to robotics problems @cite_4. In particular, it has been applied to grasping tasks with manipulator robots equipped with grippers, to locomotion tasks, and also to humanoid robots @cite_12 @cite_3. These problems are typically solved in simulated environments. Several works obtained good results, such as @cite_10, which simulates a robot and its working environment to develop a deep-reinforcement-learning solution to the grasping problem. Another recent work is @cite_17, which simulates four complex dexterous-manipulation tasks using deep reinforcement learning. It uses a policy-gradient method, in particular @cite_18, in combination with an imitation learning algorithm, @cite_9, which learns a policy through supervised learning to mimic the demonstrations of an expert.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_3", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2767506186", "2575705757", "2953326790", "2902125520" ], "abstract": [ "Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep-Q-networks and Asynchronous actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.", "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. 
Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.", "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.", "Deep reinforcement learning (RL) algorithms can learn complex robotic skills from raw sensory inputs, but have yet to achieve the kind of broad generalization and applicability demonstrated by deep learning methods in supervised domains. We present a deep RL method that is practical for real-world robotics tasks, such as robotic manipulation, and generalizes effectively to never-before-seen tasks and objects. In these settings, ground truth reward signals are typically unavailable, and we therefore propose a self-supervised model-based approach, where a predictive model learns to directly predict the future from raw sensory readings, such as camera images. At test time, we explore three distinct goal specification methods: designated pixels, where a user specifies desired object manipulation tasks by selecting particular pixels in an image and corresponding goal positions, goal images, where the desired goal state is specified with an image, and image classifiers, which define spaces of goal states. Our deep predictive models are trained using data collected autonomously and continuously by a robot interacting with hundreds of objects, without human supervision. We demonstrate that visual MPC can generalize to never-before-seen objects---both rigid and deformable---and solve a range of user-defined object manipulation tasks using the same model." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasping tasks guided by depth-camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt at using only 2.5D images as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
@cite_3 is an open-source perceptual and physics simulator. This work can be seen as a bridge between learning in a simulator and transfer learning: its main goal is to facilitate transferring models trained in a simulated environment to the real world.
{ "cite_N": [ "@cite_3" ], "mid": [ "1884730573", "2869375357", "2312995908", "2586029550" ], "abstract": [ "This paper describes a 3D object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario. This system, developed within the scope of the European project RACE, integrates detection, tracking, learning and recognition of tabletop objects. Interaction capabilities were also developed to enable a human user to take the role of instructor and teach new object categories. Thus, the system learns in an incremental and open-ended way from user-mediated experiences. Based on the analysis of memory requirements for storing both semantic and perceptual data, a dual memory approach, comprising a semantic memory and a perceptual memory, was adopted. The perceptual memory is the central data structure of the described perception and learning system. The goal of this paper is twofold: on one hand, we provide a thorough description of the developed system, starting with motivations, cognitive considerations and architecture design, then providing details on the developed modules, and finally presenting a detailed evaluation of the system; on the other hand, we emphasize the crucial importance of the Point Cloud Library (PCL) for developing such system.11This paper is a revised and extended version of Oliveira et?al. (2014). We describe an object perception and perceptual learning system.The system is able to detect, track and recognize tabletop objects.The system learns novel object categories in an open-ended fashion.The Point Cloud Library is used in nearly all modules of the system.The system was developed and used in the European project RACE.", "We present research using the latest reinforcement learning algorithm for end-to-end driving without any mediated perception (object recognition, scene understanding). The newly proposed reward and learning strategies lead together to faster convergence and more robust driving using only RGB image from a forward facing camera. An Asynchronous Actor Critic (A3C) framework is used to learn the car control in a physically and graphically realistic rally game, with the agents evolving simultaneously on tracks with a variety of road structures (turns, hills), graphics (seasons, location) and physics (road adherence). A thorough evaluation is conducted and generalization is proven on unseen tracks and using legal speed limits. Open loop tests on real sequences of images show some domain adaption capability of our method.", "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. 
The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.", "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way — bypassing the need for an explicit simulation at run-time. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. We first evaluate the approach on synthetic data and compared the results to human judgments on the same stimuli. Further, we extend this approach to reason about future states of such towers that in return enables successful stacking." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasping tasks guided by depth-camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt at using only 2.5D images as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
Although the previous works obtain good results using deep reinforcement learning, many other works do not report such satisfactory results. A significant example is @cite_13, which shows good results mainly on tasks with vector observations as input, while most of the tasks with visual observations perform poorly.
{ "cite_N": [ "@cite_13" ], "mid": [ "2342662072", "2963641140", "2606433045", "2963286043" ], "abstract": [ "Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at this https URL in order to facilitate experimental reproducibility and to encourage adoption by other researchers.", "Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https: github.com rllab rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.", "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.", "Deep reinforcement learning has achieved many impressive results in recent years. 
However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt at using 2.5D images only as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
Another problem in deep reinforcement learning, and in deep learning in general, is hyper-parameter tuning. Deep reinforcement learning problems involve a huge number of hyper-parameters that affect the training process. It is necessary to tune hyper-parameters such as the learning rate, batch size, random seed and architecture of the policy networks, together with correct reward functions, which must be defined and normalized before feeding the network. In @cite_0 the authors present a large comparison of state-of-the-art problems, algorithms and implementations to highlight how sensitive deep reinforcement learning still is to hyper-parameter variation.
{ "cite_N": [ "@cite_0" ], "mid": [ "2591924527", "2963655672", "2617242334", "2963199420" ], "abstract": [ "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2 - 10 using a single model. Codes and models are available at https: github.com ZYYSzj Selective-Joint-Fine-tuning.", "Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the \"generalization gap\" phenomenon. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a \"random walk on a random landscape\" statistical model which is known to exhibit similar \"ultra-slow\" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the \"generalization gap\" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named \"Ghost Batch Normalization\" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.", "Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. 
It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the \"generalization gap\" phenomena. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a \"random walk on random landscape\" statistical model which is known to exhibit similar \"ultra-slow\" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the \"generalization gap\" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named \"Ghost Batch Normalization\" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.", "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a \"distilled\" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets; Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Super-resolution: The classical solution to image restoration and super-resolution was to combine multiple data sources (multiple images obtained at sub-pixel misalignments @cite_6 , or self-similar patches within a single image @cite_31 @cite_33 ), and then incorporate these within a regularisation constraint such as total variation @cite_46 . Microscopy has applied super-resolution to volumetric data via depth of field @cite_34 , and through multi-spectral sensing data @cite_48 via sparse coding, a machine learning-based super-resolution approach that learns the visual characteristics of the supplied training images and then applies the learnt model within an optimisation framework to enhance detail. More recently, as in other computer vision domains, convolutional neural network (CNN) autoencoders have been applied to image @cite_16 @cite_3 and video up-scaling @cite_32 . Symmetric autoencoders have effectively learnt an image transformation between clean and synthetically noisy images @cite_41 . Similarly, Dong @cite_12 trained end-to-end networks to model image up-scaling or super-resolution.
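To make the end-to-end up-scaling idea concrete, the following is a minimal sketch of an SRCNN-style network in the spirit of the approach of Dong @cite_12 ; it assumes PyTorch, and the layer widths and kernel sizes are illustrative choices rather than those of the cited work.

import torch
import torch.nn as nn

class TinySRCNN(nn.Module):
    """Refines an already (e.g. bicubically) up-scaled low-resolution image."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)

x = torch.rand(1, 3, 64, 64)      # an input patch after conventional up-scaling
print(TinySRCNN()(x).shape)       # torch.Size([1, 3, 64, 64])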
{ "cite_N": [ "@cite_31", "@cite_33", "@cite_41", "@cite_48", "@cite_32", "@cite_6", "@cite_16", "@cite_3", "@cite_46", "@cite_34", "@cite_12" ], "mid": [ "2914829730", "2534320940", "2523714292", "2963470893" ], "abstract": [ "Image Super-Resolution: Historical Overview and Future Challenges, J. Yang and T. Huang Introduction to Super-Resolution Notations Techniques for Super-Resolution Challenge issues for Super-Resolution Super-Resolution Using Adaptive Wiener Filters, R.C. Hardie Introduction Observation Model AWF SR Algorithms Experimental Results Conclusions Acknowledgments Locally Adaptive Kernel Regression for Space-Time Super-Resolution, H. Takeda and P. Milanfar Introduction Adaptive Kernel Regression Examples Conclusion AppendiX Super-Resolution With Probabilistic Motion Estimation, M. Protter and M. Elad Introduction Classic Super-Resolution: Background The Proposed Algorithm Experimental Validation Summary Spatially Adaptive Filtering as Regularization in Inverse Imaging, A. Danielyan, A. Foi, V. Katkovnik, and K. Egiazarian Introduction Iterative filtering as regularization Compressed sensing Super-resolution Conclusions Registration for Super-Resolution, P. Vandewalle, L. Sbaiz, and M. Vetterli Camera Model What Is Resolution? Super-Resolution as a Multichannel Sampling Problem Registration of Totally Aliased Signals Registration of Partially Aliased Signals Conclusions Towards Super-Resolution in the Presence of Spatially Varying Blur, M. Sorel, F. Sroubek and J. Flusser Introduction Defocus and Optical Aberrations Camera Motion Blur Scene Motion Algorithms Conclusion Acknowledgments Toward Robust Reconstruction-Based Super-Resolution, M. Tanaka and M. Okutomi Introduction Overviews Robust SR Reconstruction with Pixel Selection Robust Super-Resolution Using MPEG Motion Vectors Robust Registration for Super-Resolution Conclusions Multi-Frame Super-Resolution from a Bayesian Perspective, L. Pickup, S. Roberts, A. Zisserman and D. Capel The Generative Model Where Super-Resolution Algorithms Go Wrong Simultaneous Super-Resolution Bayesian Marginalization Concluding Remarks Variational Bayesian Super Resolution Reconstruction, S. Derin Babacan, R. Molina, and A.K. Katsaggelos Introduction Problem Formulation Bayesian Framework for Super Resolution Bayesian Inference Variational Bayesian Inference Using TV Image Priors Experiments Estimation of Motion and Blur Conclusions Acknowledgements Pattern Recognition Techniques for Image Super-Resolution, K. Ni and T.Q. Nguyen Introduction Nearest Neighbor Super-Resolution Markov Random Fields and Approximations Kernel Machines for Image Super-Resolution Multiple Learners and Multiple Regressions Design Considerations and Examples Remarks Glossary Super-Resolution Reconstruction of Multi-Channel Images, O.G. Sezer and Y. Altunbasak Introduction Notation Image Acquisition Model Subspace Representation Reconstruction Algorithm Experiments & Discussions Conclusion New Applications of Super-Resolution in Medical Imaging, M.D.Robinson, S.J. Chiu, C.A. Toth, J.A. Izatt, J.Y. Lo, and S. Farsiu Introduction The Super-Resolution Framework New Medical Imaging Applications Conclusion Acknowledgment Practicing Super-Resolution: What Have We Learned? N. 
Bozinovic Abstract Introduction MotionDSP: History and Concepts Markets and Applications Technology Results Lessons Learned Conclusions", "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. 
The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets; Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Bottom-up pose estimation is driven by image parsing to isolate components: Srinivasan @cite_42 used graph-cuts to parse a subset of salient shapes from an image and group these into a model of a person. Ren @cite_39 recursively split Canny edge contours into segments, classifying each as a putative body part using cues such as parallelism. Ren @cite_1 also used Bag of Visual Words for implicit pose estimation as part of a pose similarity system for dance video retrieval. More recently, studies have begun to leverage the power of convolutional neural networks, following in the wake of the eye-opening results of Krizhevsky @cite_15 on image recognition. In DeepPose, Toshev @cite_24 used a cascade of convolutional neural networks to estimate 2D pose in images. Descriptors learnt by a CNN have also been used for 2D pose estimation from very low-resolution images @cite_52 . Elhayek @cite_50 used MVV with a convnet to produce 2D pose estimates, while Rhodin @cite_55 minimised an edge energy inspired by volume ray casting to deduce the 3D pose.
{ "cite_N": [ "@cite_55", "@cite_42", "@cite_1", "@cite_52", "@cite_39", "@cite_24", "@cite_50", "@cite_15" ], "mid": [ "1896788142", "2890816492", "2561080715", "2792747672" ], "abstract": [ "In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results in datasets that span the range of high-resolution human robot interaction (close up faces plus depth information) data to challenging low resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment . Using this probabilistic model, we show that many higher level scene understanding like human-human scene interaction detection can be achieved. Our solution runs in real-time on commercial hardware.", "In this work we integrate ideas from surface-based modeling with neural synthesis: we propose a combination of surface-based pose estimation and deep generative models that allows us to perform accurate pose transfer, i.e. synthesize a new image of a person based on a single image of that person and the image of a pose donor. We use a dense pose estimation system that maps pixels from both images to a common surface-based coordinate system, allowing the two images to be brought in correspondence with each other. We inpaint and refine the source image intensities in the surface coordinate system, prior to warping them onto the target pose. These predictions are fused with those of a convolutional predictive module through a neural synthesis module allowing for training the whole pipeline jointly end-to-end, optimizing a combination of adversarial and perceptual losses. We show that dense pose estimation is a substantially more powerful conditioning input than landmark-, or mask-based alternatives, and report systematic improvements over state of the art generators on DeepFashion and MVC datasets.", "Human pose as a query modality is an alternative and rich experience for image and video retrieval. It has interesting retrieval applications in domains such as sports and dance databases. In this work we propose two novel ways for representing the image of a person striking a pose, one looking for parts and other looking at the whole image. These representations are then used for retrieval. Both the representations are obtained using deep learning methods.In the first method, we make the following contributions: (a) We introduce deep poselets' for pose-sensitive detection of various body parts, built on convolutional neural network (CNN) features. These deep poselets significantly outperform previous instantiations of Berkeley poselets [6], and (b) Using these detector responses, we construct a pose representation that is suitable for pose search, and show that pose retrieval performance is on par with the previous methods. 
In the second method, we make the following contributions: (a) We design an optimized neural network which maps the input image to a very low dimensional space where similar poses are close by and dissimilar poses are farther away, and (b) We show that pose retrieval system using these low dimensional representation is on par with the deep poselet representation and is on par with the previous methods.The previous works with which the above two methods are compared include bag of visual words [44], Berkeley poselets[6] and human pose estimation algorithms [52]. All the methods are quantitatively evaluated on a large dataset of images built from a number of standard benchmarks together with frames from Hollywood movies. For human pose search, two novel pose descriptors (D-I and D-II) are proposed.D-I pools scores of deep poselets (body part detectors in a specific pose).D-II, maps an image such that similar and dissimilar pose images are separated.Both descriptors use CNNs and do not need an explicit human pose estimation.These have been shown to outperform other pose descriptors by large margins.", "We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our Localization-Classification-Regression architecture, named LCR-Net, contains 3 main components: 1) the pose proposal generator that suggests candidate poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non maximum suppression algorithm. Our method recovers full-body 2D and 3D poses, hallucinating plausible body parts when the persons are partially occluded or truncated by the image boundary. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both single and multi-person subsets of the MPII 2D pose benchmark." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets; Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
More recently, given the success and accuracy of 2D joint estimation @cite_53 , several works lift 2D detections to 3D using learning or geometric reasoning, aiming to recover the missing depth dimension in the images. Sanzari @cite_7 estimates the location of 2D joints before predicting 3D pose, using the appearance and probable 3D pose of the discovered parts with a hierarchical Bayesian model. Zhou @cite_2 integrates 2D, 3D and temporal information to account for uncertainties in the data. The challenge of estimating 3D human pose from MVV is currently less explored, generally casting 3D pose estimation as a coordinate regression task, with the target output being the spatial @math coordinates of a joint with respect to a known root node such as the pelvis. Trumble @cite_43 used a flattened MVV-based spherical histogram with a 2D convnet to estimate pose, Pavlakos @cite_5 used a simple volumetric representation in a 3D convnet for pose estimation, and Wei @cite_18 performed related work in aligning pairs of joints to estimate 3D human pose. Differently, Huang @cite_51 constructed a 4-D mesh of the subject from video reconstruction to estimate the 3D pose.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_53", "@cite_43", "@cite_2", "@cite_5", "@cite_51" ], "mid": [ "2626687521", "2557698284", "2769237672", "2554247908" ], "abstract": [ "We propose a novel approach to 3D human pose estimation from a single depth map. Recently, convolutional neural network (CNN) has become a powerful paradigm in computer vision. Many of computer vision tasks have benefited from CNNs, however, the conventional approach to directly regress 3D body joint locations from an image does not yield a noticeably improved performance. In contrast, we formulate the problem as estimating per-voxel likelihood of key body joints from a 3D occupancy grid. We argue that learning a mapping from volumetric input to volumetric output with 3D convolution consistently improves the accuracy when compared to learning a regression from depth map to 3D joint coordinates. We propose a two-stage approach to reduce the computational overhead caused by volumetric representation and 3D convolution: Holistic 2D prediction and Local 3D prediction. In the first stage, Planimetric Network (P-Net) estimates per-pixel likelihood for each body joint in the holistic 2D space. In the second stage, Volumetric Network (V-Net) estimates the per-voxel likelihood of each body joints in the local 3D space around the 2D estimations of the first stage, effectively reducing the computational cost. Our model outperforms existing methods by a large margin in publicly available datasets.", "This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. 
Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30 on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets; Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Since detecting pose in each frame individually leads to incoherent and jittery predictions over a sequence, many approaches exploit temporal information. Andriluka @cite_28 used tracking-by-detection to associate 2D poses detected in each frame individually and used them to retrieve 3D pose. Tekin @cite_10 used a CNN to first align the bounding boxes of successive frames so that the person in the image is always at the centre of the box, and then extracted 3D HOG features over the spatiotemporal volume from which the 3D pose of the central frame is regressed. Lin @cite_19 performed a multi-stage sequential refinement using LSTMs @cite_8 to predict 3D pose sequences from previously predicted 2D pose representations and 3D pose, while Hossain @cite_49 learns the temporal context of a sequence using a form of sequence-to-sequence network.
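As a rough illustration of the sequence-based lifting described above, the sketch below maps a sequence of 2D joint detections to per-frame 3D joints with a single LSTM. It assumes PyTorch; the joint count and hidden size are arbitrary illustrative values and do not correspond to any cited method.

import torch
import torch.nn as nn

class TemporalLifter(nn.Module):
    """Lifts a 2D joint sequence (batch, time, 2*J) to 3D joints (batch, time, 3*J)."""
    def __init__(self, n_joints=17, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * n_joints, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * n_joints)

    def forward(self, joints_2d):
        feats, _ = self.lstm(joints_2d)   # temporal context accumulated over the sequence
        return self.head(feats)

seq = torch.rand(4, 30, 34)              # 4 clips, 30 frames, 17 joints in 2D
print(TemporalLifter()(seq).shape)       # torch.Size([4, 30, 51])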
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_19", "@cite_49", "@cite_10" ], "mid": [ "2604236302", "2769237672", "2079846689", "2952717317" ], "abstract": [ "We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy. 
The discriminative part-based pose detection method, implemented using Convolutional Networks (ConvNet), estimates unary potentials for each joint of a kinematic skeleton model. These unary potentials are used to probabilistically extract pose constraints for tracking by using weighted sampling from a pose posterior guided by the model. In the final energy, these constraints are combined with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, as ConvNet detection is fast, and our formulation yields a combined pose estimation energy with analytic derivatives. In combination, this enables to track full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras.", "We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task (, ICCV'17) that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by the YOLO network design that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of the existing methods, but at 10 fps or less, they are much slower than our method." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work. Source code is available at this https URL .
Semi-supervised learning for image classification is an active research topic @cite_29 ; this section focuses on reviewing work closely related to ours, discussing methods that use deep learning with mini-batch optimization over large image collections. Previous works on semi-supervised deep learning differ in whether they use consistency regularization or pseudo-labeling to learn from the unlabeled set @cite_33 , while they all share the use of a cross-entropy loss (or similar) on labeled data.
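For context on the pseudo-labeling branch, the snippet below is a minimal sketch of generating soft pseudo-labels from the current network predictions and training against them with a cross-entropy-style loss; it assumes PyTorch and is a simplification, not the exact procedure of any cited method.

import torch
import torch.nn.functional as F

@torch.no_grad()
def soft_pseudo_labels(model, unlabeled_images):
    """Use the network's own softmax outputs as soft targets for unlabeled images."""
    was_training = model.training
    model.eval()
    targets = F.softmax(model(unlabeled_images), dim=1)
    model.train(was_training)
    return targets

def pseudo_label_loss(logits, soft_targets):
    """Cross-entropy between the current predictions and the soft pseudo-labels."""
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()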
{ "cite_N": [ "@cite_29", "@cite_33" ], "mid": [ "1981613567", "2621015177", "2943152387", "2412035906" ], "abstract": [ "In image categorization the goal is to decide if an image belongs to a certain category or not. A binary classifier can be learned from manually labeled images; while using more labeled examples improves performance, obtaining the image labels is a time consuming process. We are interested in how other sources of information can aid the learning process given a fixed amount of labeled images. In particular, we consider a scenario where keywords are associated with the training images, e.g. as found on photo sharing websites. The goal is to learn a classifier for images alone, but we will use the keywords associated with labeled and unlabeled images to improve the classifier using semi-supervised learning. We first learn a strong Multiple Kernel Learning (MKL) classifier using both the image content and keywords, and use it to score unlabeled images. We then learn classifiers on visual features only, either support vector machines (SVM) or least-squares regression (LSR), from the MKL output values on both the labeled and unlabeled images. In our experiments on 20 classes from the PASCAL VOC'07 set and 38 from the MIR Flickr set, we demonstrate the benefit of our semi-supervised approach over only using the labeled images. We also present results for a scenario where we do not use any manual labeling but directly learn classifiers from the image tags. The semi-supervised approach also improves classification accuracy in this case.", "Aiming at improving the performance of visual classification in a cost-effective manner, this paper proposes an incremental semi-supervised learning paradigm called deep co-space (DCS). Unlike many conventional semi-supervised learning methods usually performed within a fixed feature space, our DCS gradually propagates information from labeled samples to unlabeled ones along with deep feature learning. We regard deep feature learning as a series of steps pursuing feature transformation, i.e., projecting the samples from a previous space into a new one, which tends to select the reliable unlabeled samples with respect to this setting. Specifically, for each unlabeled image instance, we measure its reliability by calculating the category variations of feature transformation from two different neighborhood variation perspectives and merged them into a unified sample mining criterion deriving from Hellinger distance. Then, those samples keeping stable correlation to their neighboring samples (i.e., having small category variation in distribution) across the successive feature space transformation are automatically received labels and incorporated into the model for incrementally training in terms of classification. Our extensive experiments on standard image classification benchmarks (e.g., Caltech-256 and SUN-397) demonstrate that the proposed framework is capable of effectively mining from large-scale unlabeled images, which boosts image classification performance and achieves promising results compared with other semi-supervised learning methods.", "This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher student paradigm, that leverages a large collection of unlabelled images (up to 1 billion). Our main goal is to improve the performance for a given target architecture, like ResNet-50 or ResNext. 
We provide an extensive analysis of the success factors of our approach, which leads us to formulate some recommendations to produce high-accuracy models for image classification with semi-supervised learning. As a result, our approach brings important gains to standard architectures for image, video and fine-grained classification. For instance, by leveraging one billion unlabelled images, our learned vanilla ResNet-50 achieves 81.2 top-1 accuracy on the ImageNet benchmark.", "In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated on the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. In this paper we propose an unsupervised regularization term that explicitly forces the classifier's prediction for multiple classes to be mutually-exclusive and effectively guides the decision boundary to lie on the low density space between the manifolds corresponding to different classes of data. Our proposed approach is general and can be used with any backpropagation-based learning method. We show through different experiments that our method can improve the object recognition performance of ConvNets using unlabeled data." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work. Source code is available at this https URL .
Co-training @cite_15 combines several ideas from the previous works, using two (or more) networks trained simultaneously to agree on their predictions (consistency regularization) and disagree on their errors. Here the errors are defined as making different predictions when exposed to adversarial attacks, thus forcing the networks to learn complementary representations of the same samples. Recently, @cite_10 measure the consistency between the current prediction and an additional prediction of the same sample given by an external memory module that keeps track of previous representations of that sample. They additionally introduce an uncertainty weighting of the consistency term to reduce the contribution of uncertain sample predictions given by the memory module. Consistency regularization methods such as the @math -model @cite_1 , mean teachers @cite_34 , and VAT @cite_4 have all been shown to benefit from the recent stochastic weight averaging (SWA) method @cite_18 @cite_38 . SWA averages network parameters from different training epochs to move the SGD solution from the borders of flat loss regions towards their center and improve generalization.
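The SWA step mentioned above amounts to keeping a running average of the network weights across epochs; a minimal sketch follows, assuming PyTorch (torch.optim.swa_utils.AveragedModel provides a ready-made equivalent). The names used in the usage comment are placeholders.

import copy
import torch

@torch.no_grad()
def update_swa(swa_model, model, n_averaged):
    """Fold the current weights into the running average, giving each snapshot equal weight."""
    for p_avg, p in zip(swa_model.parameters(), model.parameters()):
        p_avg.mul_(n_averaged / (n_averaged + 1.0)).add_(p / (n_averaged + 1.0))
    return n_averaged + 1

# Usage sketch (train_one_epoch and swa_start are placeholders):
# swa_model, n = copy.deepcopy(model), 0
# for epoch in range(num_epochs):
#     train_one_epoch(model)
#     if epoch >= swa_start:
#         n = update_swa(swa_model, model, n)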
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_1", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2609701267", "2963476860", "2909986471", "2792287754" ], "abstract": [ "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certainty-driven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.", "Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much broader optima than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. 
Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work. Source code is available at this https URL .
It is important to highlight a widely used practice @cite_27 @cite_1 @cite_34 @cite_15 @cite_39 @cite_33 : a warm-up in which labeled samples carry a higher (or the full) weight at the beginning of training to palliate the incorrect guidance from unlabeled samples early on. The authors in @cite_29 also reveal some limitations of current practices in semi-supervised learning, such as low-quality fully-supervised frameworks, the absence of comparisons with transfer learning baselines, and excessive hyperparameter tuning on large validation sets (which are not available in realistic semi-supervised scenarios).
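The warm-up is typically implemented as a ramp-up schedule on the weight of the unsupervised loss term; the sketch below uses a sigmoid-shaped ramp-up of the kind common in consistency-regularization methods, with purely illustrative constants.

import numpy as np

def unsupervised_weight(epoch, ramp_up_epochs=80, max_weight=1.0):
    """Weight of the unsupervised loss: close to 0 at the start, max_weight after ramp-up."""
    t = float(np.clip(epoch / ramp_up_epochs, 0.0, 1.0))
    return max_weight * float(np.exp(-5.0 * (1.0 - t) ** 2))

# total_loss = supervised_loss + unsupervised_weight(epoch) * unsupervised_loss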
{ "cite_N": [ "@cite_33", "@cite_29", "@cite_1", "@cite_39", "@cite_27", "@cite_15", "@cite_34" ], "mid": [ "2048679005", "2119991813", "2909986471", "2621925205" ], "abstract": [ "We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 [email protected]", "In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete. This includes, for example, (i) semi-supervised learning where labels are partially known; (ii) multi-instance learning where labels are implicitly known; and (iii) clustering where labels are completely unknown. Unlike supervised learning, learning with weak labels involves a difficult Mixed-Integer Programming (MIP) problem. Therefore, it can suffer from poor scalability and may also get stuck in local minimum. In this paper, we focus on SVMs and propose the WELLSVM via a novel label generation strategy. This leads to a convex relaxation of the original MIP, which is at least as tight as existing convex Semi-Definite Programming (SDP) relaxations. Moreover, the WELLSVM can be solved via a sequence of SVM subproblems that are much more scalable than previous convex SDP relaxations. 
Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised learning; (ii) multi-instance learning for locating regions of interest in content-based information retrieval; and (iii) clustering, clearly demonstrate improved performance, and WELLSVM is also readily applicable on large data sets.", "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certainty-driven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.", "In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. Associations are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
The key to success for the generation of an immersive and interactive telepresence experience is the real-time 3D reconstruction of the scene of interest. In particular due to the high computational burden and the huge memory requirements involved in processing and storing large scenes, seminal work on multi-camera telepresence systems @cite_22 @cite_34 @cite_14 @cite_25 @cite_26 @cite_41 , built on the less powerful hardware available at that time, was limited in its ability to capture high-quality 3D models in real time and to transmit them immediately to remote users. More recently, the emerging progress towards affordable commodity depth sensors such as the Microsoft Kinect has successfully been exploited for the development of 3D reconstruction approaches working at room scale @cite_24 @cite_11 @cite_15 @cite_19 . Yet the step towards high-quality reconstructions remained highly challenging due to the high sensor noise as well as the temporal inconsistency of the reconstructed data.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_22", "@cite_41", "@cite_24", "@cite_19", "@cite_15", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2801778672", "244217497", "2890415768", "353406226" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to the surfaces being occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.", "We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. 
The key innovation of the proposed ego-centric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g. cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of the ego-centric reconstruction, however, is the poor coverage of the near-body views – that is, the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome these challenges, we propose a parametric-model-based approach to user motion estimation. This approach utilizes convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time-point during capture, the intermediate model-based reconstructions from these systems are used to re-target a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system is capable of reconstructing the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be immersively experienced within a virtual reality system (e.g., the HTC Vive). We expect that the size of the proposed egocentric capture-and-reconstruction system will eventually be reduced to fit within future AR glasses, and will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.", "This paper describes an enhanced telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect^T^Mcolor-plus-depth cameras. Contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 200 million triangles s on a single PC and graphics board. Also presented is a Kinect-based markerless tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Enhancements in calibration, filtering, and data merger were made to improve image quality over a previous version of the system." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Recently, a huge step towards an immersive teleconferencing experience has been achieved with the development of the Holoportation system @cite_38 . This system is built on the Fusion4D framework @cite_6 , which allows accurate 3D reconstruction at real-time rates, as well as real-time data transmission and coupling to AR/VR technology. However, real-time performance comes at the cost of massive hardware requirements involving several high-end GPUs running on multiple desktop computers, and most of the hardware components have to be installed at the local user's side. Furthermore, only an area of limited size surrounded by the static cameras can be captured, which suits teleconferencing but prevents the framework from being used for interactive remote exploration of larger live-captured scenes.
{ "cite_N": [ "@cite_38", "@cite_6" ], "mid": [ "2801778672", "2532511219", "2584684405", "2539646095" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "We present an end-to-end system for augmented and virtual reality telepresence, called Holoportation. Our system demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras. These 3D models can also be transmitted in real-time to remote users. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication. This paper describes the Holoportation technical system in full, its key interactive capabilities, the application scenarios it enables, and an initial qualitative study of using this new communication medium.", "We introduce a novel framework that enables large-scale dense 3D scene reconstruction, data streaming over the network and immersive exploration of the reconstructed environment using virtual reality. The system is operated by two remote entities, where one entity – for instance an autonomous aerial vehicle – captures and reconstructs the environment as well as transmits the data to another entity – such as human observer – that can immersivly explore the 3D scene, decoupled from the view of the capturing entity. 
The performance evaluation revealed the framework’s capabilities to perform RGB-D data capturing, dense 3D reconstruction, streaming and dynamic scene updating in real time for indoor environments up to a size of 100m2, using either a state-of-the-art mobile computer or a workstation. Thereby, our work provides a foundation for enabling immersive exploration of remotely captured and incrementally reconstructed dense 3D scenes, which has not shown before and opens up new research aspects in future.", "Traditional set-top camera video-conferencing systems still fail to meet the 'telepresence challenge' of providing a viable alternative for physical business travel, which is nowadays characterized by unacceptable delays, costs, inconvenience, and an increasingly large ecological footprint. Even recent high-end commercial solutions, while partially removing some of these traditional shortcomings, still present the problems of not scaling easily, expensive implementations, not utilizing 3D life-sized representations of the remote participants and addressing only eye contact and gesture-based interactions in very limited ways. The European FP7 project 3DPresence will develop a multi-party, high-end 3D videoconferencing concept that will tackle the problem of transmitting the feeling of physical presence in real-time to multiple remote locations in a transparent and natural way. In this paper, we present an overall concept, which includes the geometrical design of the whole prototype demonstrator, the arrangement of the cameras and displays and the general multi-view video analysis chain. The driving force behind the design strategy is to fulfil the requirements of a novel 3D immersive videoconferencing system, including directional eye gaze and gesture awareness. (8 pages)" ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Towards the goal of exploring larger environments, as required for the inspection of contaminated scenes envisioned in this work, Mossel and Kröter @cite_5 presented a system that allows interactive VR-based exploration of the captured scene. Their system benefits from real-time reconstruction based on current voxel block hashing techniques @cite_20 ; however, it supports only a single exploration client, and its bandwidth requirements have been reported to reach up to 175 MBit/s. Furthermore, the system relies on the direct transmission of the captured data to the rendering client and is not designed to handle network interruptions: if the exploration client has to reconnect to the reconstruction client, scene parts reconstructed during the outage are lost.
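For context, voxel block hashing schemes of the kind cited above organise the truncated signed distance field into small fixed-size voxel blocks that are allocated sparsely and addressed through a spatial hash over their integer block coordinates. The Python sketch below illustrates only this general idea (the hash constants follow the commonly used spatial hash); the class layout and payload are assumptions, not the data structure of the cited systems, which use GPU hash tables rather than a Python dict.

def block_hash(bx, by, bz, table_size=2**21):
    # Classic prime-multiplication spatial hash over integer block coordinates,
    # as used to index a fixed-size table in GPU implementations.
    return ((bx * 73856093) ^ (by * 19349669) ^ (bz * 83492791)) % table_size

class VoxelBlockGrid:
    # Sparse map from an 8x8x8 voxel block coordinate to its stored payload
    # (e.g. truncated signed distance and weight values).
    def __init__(self, block_size=8, voxel_size=0.01):
        self.block_size = block_size
        self.voxel_size = voxel_size
        self.blocks = {}  # block coordinate -> block payload

    def block_of_point(self, x, y, z):
        extent = self.voxel_size * self.block_size
        return (int(x // extent), int(y // extent), int(z // extent))

    def get_or_allocate(self, point):
        key = self.block_of_point(*point)
        return self.blocks.setdefault(key, bytearray(self.block_size ** 3))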
{ "cite_N": [ "@cite_5", "@cite_20" ], "mid": [ "2801778672", "2584684405", "2188632634", "2417235025" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "We introduce a novel framework that enables large-scale dense 3D scene reconstruction, data streaming over the network and immersive exploration of the reconstructed environment using virtual reality. The system is operated by two remote entities, where one entity – for instance an autonomous aerial vehicle – captures and reconstructs the environment as well as transmits the data to another entity – such as human observer – that can immersivly explore the 3D scene, decoupled from the view of the capturing entity. The performance evaluation revealed the framework’s capabilities to perform RGB-D data capturing, dense 3D reconstruction, streaming and dynamic scene updating in real time for indoor environments up to a size of 100m2, using either a state-of-the-art mobile computer or a workstation. Thereby, our work provides a foundation for enabling immersive exploration of remotely captured and incrementally reconstructed dense 3D scenes, which has not shown before and opens up new research aspects in future.", "Our long-term vision is to provide a better every-day working environment, with high-fidelity scene reconstruction for life-sized 3D telecollaboration. In particular, we want to provide the user with a true sense of presence with our remote collaborator and their real surroundings, and the ability to share and interact with 3D documents. The challenges related to this vision are enormous and involve many technical tradeoffs, particularly in scene reconstruction. In this paper we present a significant step toward our ultimate goal. 
By assembling the best of available hardware and software technologies in scene reconstruction, rendering, and distributed scene graph software, members of the National Tele-Immersion Initiative (NTII) are able to demonstrate 3D collaborative, tele-presence over Internet2 between colleagues in remote offices.", "This paper presents an efficient system for simultaneous dense scene reconstruction and object labeling in real-world environments (captured with an RGB-D sensor). The proposed system starts with the generation of object proposals in the scene. It then tracks spatio-temporally consistent object proposals across multiple frames and produces a dense reconstruction of the scene. In parallel, the proposed system uses an efficient inference algorithm, where object class probabilities are computed at an object-level and fused into a voxel-based prediction hypothesis modeled on the voxels of the reconstructed scene. Our extensive experiments using challenging RGB-D object and scene datasets, and live video streams from Microsoft Kinect show that the proposed system achieved competitive 3D scene reconstruction and object labeling results compared to the state-of-the-art methods." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
The recent approach by Stotko et al. @cite_7 overcomes these problems and allows on-the-fly scene inspection and interaction by an arbitrary number of exploration clients, and hence represents a practical framework for interactive collaboration. Most notably, the system is based on a novel compact Marching Cubes (MC) based voxel block representation maintained on a server. Efficient streaming at low bandwidth is achieved by transmitting MC indices, while the models explored by the individual exploration clients are reconstructed and stored directly on their hardware. This makes the approach both scalable to many-client exploration and robust to network interruptions, since the consistent model is maintained on the server and updates are streamed once the connection is re-established.
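The bandwidth advantage of transmitting MC indices can be illustrated with a small sketch: each cube of eight signed-distance samples collapses into a single 8-bit index that already determines the local triangulation on the receiving side. The Python snippet below shows this general idea only; it is not the actual encoding of the cited system.

def mc_index(corner_sdf, iso=0.0):
    # Pack the inside/outside state of the 8 cube corners into one byte.
    index = 0
    for bit, value in enumerate(corner_sdf):   # corner_sdf: 8 floats
        if value < iso:
            index |= 1 << bit
    return index                               # value in 0..255

# Instead of eight floating-point TSDF samples per cube, a server can send a
# single byte (plus block coordinates); the exploration client looks the index
# up in the standard MC triangle table to rebuild the surface mesh locally.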
{ "cite_N": [ "@cite_7" ], "mid": [ "2801778672", "2417235025", "1625949922", "2141494606" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "This paper presents an efficient system for simultaneous dense scene reconstruction and object labeling in real-world environments (captured with an RGB-D sensor). The proposed system starts with the generation of object proposals in the scene. It then tracks spatio-temporally consistent object proposals across multiple frames and produces a dense reconstruction of the scene. In parallel, the proposed system uses an efficient inference algorithm, where object class probabilities are computed at an object-level and fused into a voxel-based prediction hypothesis modeled on the voxels of the reconstructed scene. Our extensive experiments using challenging RGB-D object and scene datasets, and live video streams from Microsoft Kinect show that the proposed system achieved competitive 3D scene reconstruction and object labeling results compared to the state-of-the-art methods.", "Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. To run such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, remains challenging however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. 
We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates up 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan XGPU, or even beyond 1.1 kHz without visualisation.", "We present a novel action recognition method based on space-time locally adaptive regression kernels and the matrix cosine similarity measure. The proposed method uses a single example of an action as a query to find similar matches. It does not require prior knowledge about actions, foreground background segmentation, or any motion estimation or tracking. Our method is based on the computation of novel space-time descriptors from the query video which measure the likeness of a voxel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target video. This comparison is done using a matrix generalization of the cosine similarity measure. The algorithm yields a scalar resemblance volume, with each voxel indicating the likelihood of similarity between the query video and all cubes in the target video. Using nonparametric significance tests by controlling the false discovery rate, we detect the presence and location of actions similar to the query video. High performance is demonstrated on challenging sets of action data containing fast motions, varied contexts, and complicated background. Further experiments on the Weizmann and KTH data sets demonstrate state-of-the-art performance in action categorization." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Schwarz et al. @cite_8 describe the rescue robot Momaro, which is equipped with interfaces for immersive teleoperation using an HMD device and 6D trackers. The immersive display greatly benefited the operators by increasing situational awareness. However, visualization was limited to registered 3D point clouds, which carry no color information. As a result, additional 2D camera images were displayed to the operator to visualize texture. Momaro served as a precursor to the Centauro robot @cite_27 , which extends the Momaro system in several directions, including immersive display of RGB-D data. However, the system is currently limited to displaying live data without aggregation.
{ "cite_N": [ "@cite_27", "@cite_8" ], "mid": [ "2530835184", "2410621027", "2891700736", "2554423287" ], "abstract": [ "Planetary exploration scenarios illustrate the need for autonomous robots that are capable to operate in unknown environments without direct human interaction. At the DARPA Robotics Challenge, we demonstrated that our Centaur-like mobile manipulation robot Momaro can solve complex tasks when teleoperated. Motivated by the DLR SpaceBot Cup 2015, where robots should explore a Mars-like environment, find and transport objects, take a soil sample, and perform assembly tasks, we developed autonomous capabilities for Momaro. Our robot perceives and maps previously unknown, uneven terrain using a 3D laser scanner. Based on the generated height map, we assess drivability, plan navigation paths, and execute them using the omnidirectional drive. Using its four legs, the robot adapts to the slope of the terrain. Momaro perceives objects with cameras, estimates their pose, and manipulates them with its two arms autonomously. For specifying missions, monitoring mission progress, on-the-fly reconfiguration, and teleoperation, we developed a ground station with suitable operator interfaces. To handle network communication interruptions and latencies between robot and ground station, we implemented a robust network layer for the ROS middleware. With the developed system, our team NimbRo Explorer solved all tasks of the DLR SpaceBot Camp 2015. We also discuss the lessons learned from this demonstration.", "Locomotion in uneven terrain is important for a wide range of robotic applications, including Search&Rescue operations. Our mobile manipulation robot Momaro features a unique locomotion design consisting of four legs ending in pairs of steerable wheels, allowing the robot to omnidirectionally drive on sufficiently even terrain, step over obstacles, and also to overcome height differences by climbing. We demonstrate the feasibility and usefulness of this design on the example of the DARPA Robotics Challenge, where our team NimbRo Rescue solved seven out of eight tasks in only 34 minutes. We also introduce a method for semi-autonomous execution of weight-shifting and stepping actions based on a 2D heightmap generated from 3D laser data.", "Navigating in search and rescue environments is challenging, since a variety of terrains has to be considered. Hybrid driving-stepping locomotion, as provided by our robot Momaro, is a promising approach. Similar to other locomotion methods, it incorporates many degrees of freedom—offering high flexibility but making planning computationally expensive for larger environments. We propose a navigation planning method, which unifies different levels of representation in a single planner. In the vicinity of the robot, it provides plans with a fine resolution and a high robot state dimensionality. With increasing distance from the robot, plans become coarser and the robot state dimensionality decreases. We compensate this loss of information by enriching coarser representations with additional semantics. Experiments show that the proposed planner provides plans for large, challenging scenarios in feasible time.", "Robots that solve complex tasks in environments too dangerous for humans to enter are desperately needed, e.g., for search and rescue applications. We describe our mobile manipulation robot Momaro, with which we participated successfully in the DARPA Robotics Challenge. 
It features a unique locomotion design with four legs ending in steerable wheels, which allows it both to drive omnidirectionally and to step over obstacles or climb. Furthermore, we present advanced communication and teleoperation approaches, which include immersive three-dimensional 3D visualization, and 6D tracking of operator head and arm motions. The proposed system is evaluated in the DARPA Robotics Challenge, the DLR SpaceBot Cup Qualification, and lab experiments. We also discuss the lessons learned from the competitions." ] }
1908.03020
2966062579
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR: Counterfactual Local Explanations via Regression, is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45 higher in this paper's four case studies.
Early work on explaining neural networks focused on the extraction of symbolic knowledge from trained networks @cite_18 , either as decision trees in the case of feedforward networks @cite_10 or as graphs in the case of recurrent networks @cite_5 @cite_3 . More recently, attention has shifted from global to local explanation models due to the very large scale of current deep networks, and has focused on explaining specific network architectures (such as the bottleneck in auto-encoders @cite_9 ) or domain-specific networks such as those used to solve computer vision problems @cite_6 , although some recent approaches continue to advocate rule-based knowledge extraction @cite_15 @cite_2 . The reader is referred to @cite_13 for a recent survey.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_9", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "2065204741", "2962949867", "2247148075", "1545139845" ], "abstract": [ "Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in the incapacity to provide an explanation for the underlying reasoning mechanisms. The “explanation capability” of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that such a method is sound. We start by discussing some of the main problems of knowledge extraction methods. We then discuss how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are then used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We proceed to extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks. As a result, the underlying extraction method for regular networks can be applied, but now in a decompositional fashion. In order to combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method whereby we are able to keep the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system, using traditional examples and real-world application problems. The results have shown that a very high fidelity between the extracted set of rules and the network can be achieved.", "It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network architectures. In this paper we show that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for example, because the local inversion is ill-conditioned, we overcome this by providing an explicit inverse. An analysis of i-RevNet’s learned representations suggests an explanation of the good accuracy by a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural images representations.", "Much of the recent success of neural networks can be attributed to the deeper architectures that have become prevalent. However, the deeper architectures often yield unintelligible solutions, require enormous amounts of labeled data, and still remain brittle and easily broken. 
In this paper, we present a method to efficiently and intuitively discover input instances that are misclassified by well-trained neural networks. As in previous studies, we can identify instances that are so similar to previously seen examples such that the transformation is visually imperceptible. Additionally, unlike in previous studies, we can also generate mistakes that are significantly different from any training sample, while, importantly, still remaining in the space of samples that the network should be able to classify correctly. This is achieved by training a basket of N \"peer networks\" rather than a single network. These are similarly trained networks that serve to provide consistency pressure on each other. When an example is found for which a single network, S, disagrees with all of the other @math networks, which are consistent in their prediction, that example is a potential mistake for S. We present a simple method to find such examples and demonstrate it on two visual tasks. The examples discovered yield realistic images that clearly illuminate the weaknesses of the trained models, as well as provide a source of numerous, diverse, labeled-training samples.", "1. Introduction and Overview.- 1.1 Why Integrate Neurons and Symbols?.- 1.2 Strategies of Neural-Symbolic Integration.- 1.3 Neural-Symbolic Learning Systems.- 1.4 A Simple Example.- 1.5 How to Read this Book.- 1.6 Summary.- 2. Background.- 2.1 General Preliminaries.- 2.2 Inductive Learning.- 2.3 Neural Networks.- 2.3.1 Architectures.- 2.3.2 Learning Strategy.- 2.3.3 Recurrent Networks.- 2.4 Logic Programming.- 2.4.1 What is Logic Programming?.- 2.4.2 Fixpoints and Definite Programs.- 2.5 Nonmonotonic Reasoning.- 2.5.1 Stable Models and Acceptable Programs.- 2.6 Belief Revision.- 2.6.1 Truth Maintenance Systems.- 2.6.2 Compromise Revision.- I. Knowledge Refinement in Neural Networks.- 3. Theory Refinement in Neural Networks.- 3.1 Inserting Background Knowledge.- 3.2 Massively Parallel Deduction.- 3.3 Performing Inductive Learning.- 3.4 Adding Classical Negation.- 3.5 Adding Met alevel Priorities.- 3.6 Summary and Further Reading.- 4. Experiments on Theory Refinement.- 4.1 DNA Sequence Analysis.- 4.2 Power Systems Fault Diagnosis.- 4.3.Discussion.- 4.4.Appendix.- II. Knowledge Extraction from Neural Networks.- 5. Knowledge Extraction from Trained Networks.- 5.1 The Extraction Problem.- 5.2 The Case of Regular Networks.- 5.2.1 Positive Networks.- 5.2.2 Regular Networks.- 5.3 The General Case Extraction.- 5.3.1 Regular Subnetworks.- 5.3.2 Knowledge Extraction from Subnetworks.- 5.3.3 Assembling the Final Rule Set.- 5.4 Knowledge Representation Issues.- 5.5 Summary and Further Reading.- 6. Experiments on Knowledge Extraction.- 6.1 Implementation.- 6.2 The Monk's Problems.- 6.3 DNA Sequence Analysis.- 6.4 Power Systems Fault Diagnosis.- 6.5 Discussion.- III. Knowledge Revision in Neural Networks.- 7. Handling Inconsistencies in Neural Networks.- 7.1 Theory Revision in Neural Networks.- 7.1.1The Equivalence with Truth Maintenance Systems.- 7.1.2Minimal Learning.- 7.2 Solving Inconsistencies in Neural Networks.- 7.2.1 Compromise Revision.- 7.2.2 Foundational Revision.- 7.2.3 Nonmonotonic Theory Revision.- 7.3 Summary of the Chapter.- 8. 
Experiments on Handling Inconsistencies.- 8.1 Requirements Specifications Evolution as Theory Refinement.- 8.1.1Analysing Specifications.- 8.1.2Revising Specifications.- 8.2 The Automobile Cruise Control System.- 8.2.1Knowledge Insertion.- 8.2.2Knowledge Revision: Handling Inconsistencies.- 8.2.3Knowledge Extraction.- 8.3 Discussion.- 8.4 Appendix.- 9. Neural-Symbolic Integration: The Road Ahead.- 9.1 Knowledge Extraction.- 9.2 Adding Disjunctive Information.- 9.3 Extension to the First-Order Case.- 9.4 Adding Modalities.- 9.5 New Preference Relations.- 9.6 A Proof Theoretical Approach.- 9.7 The \"Forbidden Zone\" [Amax, Amin].- 9.8 Acceptable Programs and Neural Networks.- 9.9 Epilogue." ] }
1908.03020
2966062579
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR: Counterfactual Local Explanations via Regression, is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45 higher in this paper's four case studies.
More specifically, @cite_14 have proposed LORE – Local Rule based Explanations, which provides local explanations for binary classification tasks using decision trees. It is model-agnostic, generates local models from synthetic data, and shares many similarities with LIME, but it also generates counterfactual explanations. The authors criticise LIME for producing neighbourhood datasets whose observations are too distant from each other and have too low a density around the instance being explained. By contrast, LORE uses a genetic algorithm to create neighbourhood datasets with a high density around the instance and the decision boundary. The authors claim that their system outperforms LIME and provide fidelity statistics comparing LORE and LIME, where fidelity is defined in terms of how well local models reproduce the classifications made by the underlying machine learning system. However, their fidelity statistics for LIME could be misconstrued: it does not follow from being able to mimic a system's classifications that a local model will also faithfully mimic its counterfactuals (see Section 4).
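The classification-based notion of fidelity used in that comparison can be written down in a few lines: it is simply the fraction of synthetic neighbourhood points on which the local surrogate agrees with the black-box classifier. The Python sketch below is illustrative; the object names and predict interface are assumptions rather than the API of LORE or LIME.

import numpy as np

def classification_fidelity(black_box, surrogate, neighbourhood):
    # neighbourhood: array of synthetic points generated around the instance.
    bb_labels = black_box.predict(neighbourhood)
    local_labels = surrogate.predict(neighbourhood)
    return float(np.mean(bb_labels == local_labels))

# High agreement on these labels does not imply that the surrogate also
# reproduces the black box's counterfactuals, which is the caveat raised above.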
{ "cite_N": [ "@cite_14" ], "mid": [ "2803532212", "2951501516", "2282821441", "2622028207" ], "abstract": [ "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveals the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first leans a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. 
We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "We propose a novel method called deep convolutional decision jungle (CDJ) and its learning algorithm for image classification. The CDJ maintains the structure of standard convolutional neural networks (CNNs), i.e. multiple layers of multiple response maps fully connected. Each response map-or node-in both the convolutional and fully-connected layers selectively respond to class labels s.t. each data sample travels via a specific soft route of those activated nodes. The proposed method CDJ automatically learns features, whereas decision forests and jungles require pre-defined feature sets. Compared to CNNs, the method embeds the benefits of using data-dependent discriminative functions, which better handles multi-modal heterogeneous data; further,the method offers more diverse sparse network responses, which in turn can be used for cost-effective learning classification. The network is learnt by combining conventional softmax and proposed entropy losses in each layer. The entropy loss,as used in decision tree growing, measures the purity of data activation according to the class label distribution. The back-propagation rule for the proposed loss function is derived from stochastic gradient descent (SGD) optimization of CNNs. We show that our proposed method outperforms state-of-the-art methods on three public image classification benchmarks and one face verification dataset. We also demonstrate the use of auxiliary data labels, when available, which helps our method to learn more discriminative routing and representations and leads to improved classification." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that (i) the output values are in the convex hull of the non-faulty processors' input values, and (ii) the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where the input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
The seminal result of @cite_0 showed that consensus cannot be reached in asynchronous systems in the presence of crash faults. @cite_5 showed that it is, however, possible to reach approximate agreement in an asynchronous system even with arbitrary faulty behavior when the values reside on the continuous real line. Subsequently, the one-dimensional approximate agreement problem has been extensively studied @cite_5 @cite_10 @cite_36 @cite_41 . Fekete @cite_36 showed that any algorithm reducing the distance of values from @math to @math requires @math asynchronous rounds when @math ; in the discrete setting this yields the bound @math for paths of length @math . Recently, @cite_13 introduced the natural multidimensional generalisation of approximate agreement and showed that the @math -dimensional problem is solvable in an asynchronous system with Byzantine faults if and only if @math holds for any given @math .
{ "cite_N": [ "@cite_13", "@cite_41", "@cite_36", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "2139686511", "1971773339", "2010107859", "2147056869" ], "abstract": [ "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.", "Consider a network of @math n processes, where each process inputs a @math d-dimensional vector of reals. All processes can communicate directly with others via reliable FIFO channels. We discuss two problems. The multidimensional Byzantine consensus problem, for synchronous systems, requires processes to decide on a single @math d-dimensional vector @math v?Rd, inside the convex hull of @math d-dimensional vectors that were input by the non-faulty processes. Also, the multidimensional Byzantine approximate agreement (MBAA) problem, for asynchronous systems, requires processes to decide on multiple @math d-dimensional vectors in @math Rd, all within a fixed Euclidean distance @math ∈ of each other, and inside the convex hull of @math d-dimensional vectors that were input by the non-faulty processes. We obtain the following results for the problems above, while tolerating up to @math f Byzantine failures in systems with complete communication graphs: (1) In synchronous systems, @math n>max 3f,(d+1)f is necessary and sufficient to solve the multidimensional consensus problem. (2) In asynchronous systems, @math n>(d+2)f is necessary and sufficient to solve the multidimensional approximate agreement problem. Our sufficiency proofs are constructive, giving explicit protocols for the problems. In particular, for the MBAA problem, we give two protocols with strictly different properties and applications.", "The problem of e-approximate agreement in Byzantine asynchronous systems is well-understood when all values lie on the real line. In this paper, we generalize the problem to consider values that lie in Rm, for m ≥ 1, and present an optimal protocol in regard to fault tolerance. Our scenario is the following. Processes start with values in Rm, for m ≥ 1, and communicate via message-passing. The system is asynchronous: there is no upper bound on processes' relative speeds or on message delay. 
Some faulty processes can display arbitrarily malicious (i.e. Byzantine) behavior. Non-faulty processes must decide on values that are: (1) in Rm; (2) within distance e of each other; and (3) in the convex hull of the non-faulty processes' inputs. We give an algorithm with a matching lower bound on fault tolerance: we require n > t(m+2), where n is the number of processes, t is the number of Byzantine processes, and input and output values reside in Rm. Non-faulty processes send O(n2 d log(m e max δ(d): 1 ≤ d ≤ m )) messages in total, where δ(d) is the range of non-faulty inputs projected at coordinate d. The Byzantine processes do not affect the algorithm's running time.", "Much of the past work on asynchronous approximate Byzantine consensus has assumed scalar inputs at the nodes [4, 8]. Recent work has yielded approximate Byzantine consensus algorithms for the case when the input at each node is a d-dimensional vector, and the nodes must reach consensus on a vector in the convex hull of the input vectors at the fault-free nodes [9, 13]. The d-dimensional vectors can be equivalently viewed as points in the d-dimensional Euclidean space. Thus, the algorithms in [9, 13] require the fault-free nodes to decide on a point in the d-dimensional space. In our recent work [12], we proposed a generalization of the consensus problem, namely Byzantine convex consensus (BCC), which allows the decision to be a convex polytope in the d-dimensional space, such that the decided polytope is within the convex hull of the input vectors at the fault-free nodes. We also presented an asynchronous approximate BCC algorithm. In this paper, we propose a new BCC algorithm with optimal fault-tolerance that also agrees on a convex polytope that is as large as possible under adversarial conditions. Our prior work [12] does not guarantee the optimality of the output polytope." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that (i) the output values are in the convex hull of the non-faulty processors' input values, and (ii) the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where the input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
The lattice agreement problem was originally introduced in the context of wait-free algorithms in shared memory models @cite_47 @cite_53 . The problem has recently resurfaced in the context of asynchronous message-passing models with crash faults @cite_28 @cite_56 . These papers consider the problem when the validity condition is given as @math , i.e., the output of a processor must satisfy @math and the feasible area is determined also by the inputs of faulty processors. However, it is not difficult to see that under Byzantine faults this validity condition is not reasonable, as the problem cannot be solved even with one faulty processor.
{ "cite_N": [ "@cite_28", "@cite_47", "@cite_56", "@cite_53" ], "mid": [ "1967858331", "2083306187", "2139686511", "2042928046" ], "abstract": [ "In the classical consensus problem, each of n processors receives a private input value and produces a decision value which is one of the original input values, with the requirement that all processors decide the same value. A central result in distributed computing is that, in several standard models including the asynchronous shared-memory model, this problem has no deterministic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri [ Inform. and Comput., 105 (1993), pp. 132--158], where the agreement condition is weakened so that the decision values produced may be different, as long as the number of distinct values is at most k. For @math it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper, we resolve this question by showing that for any k < n, there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "In the classical consensus problem,each of n processors receives a private input value and produces a decision value which is one of the original input values,with the requirement that all processors decide the same value. A central result in distributed computing is that,in several standard models including the asynchronous shared-memory model,this problem has no determinis- tic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri (Inform. and Comput.,105 (1993),pp. 132-158),where the agreement condition is weak- ened so that the decision values produced may be different,as long as the number of distinct values is at most k .F or n>k ≥ 2 it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper,we resolve this question by showing that for any k<n ,there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. 
This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.", "We show that no algorithm exists for deciding whether a finite task for three or more processors is wait-free solvable in the asynchronous read-write shared-memory model. This impossibility result implies that there is no constructive (recursive) characterization of wait-free solvable tasks. It also applies to other shared-memory models of distributed computing, such as the comparison-based model." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that (i) the output values are in the convex hull of the non-faulty processors' input values, and (ii) the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where the input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
Another class of structured agreement problems in the wait-free asynchronous setting are loop agreement tasks @cite_32 , which generalise @math -set agreement and approximate agreement (e.g., @math -set agreement and one-dimensional approximate agreement). In loop agreement, the set of inputs consists of three distinct vertices on a loop in a 2-dimensional simplicial complex and the outputs are vertices of the complex with certain constraints; a generalisation of loop agreement to higher dimensions has also been studied @cite_39 . These tasks are part of a large body of work exploring the deep connection between asynchronous computability and combinatorial topology, which has successfully been used to characterise the solvability of various distributed tasks @cite_11 . Gafni and Kuznetsov's @math -reconciliation task @cite_17 achieves geodesic approximate agreement on a graph of system configurations.
{ "cite_N": [ "@cite_17", "@cite_11", "@cite_32", "@cite_39" ], "mid": [ "1967858331", "2083306187", "2886192190", "2011451665" ], "abstract": [ "In the classical consensus problem, each of n processors receives a private input value and produces a decision value which is one of the original input values, with the requirement that all processors decide the same value. A central result in distributed computing is that, in several standard models including the asynchronous shared-memory model, this problem has no deterministic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri [ Inform. and Comput., 105 (1993), pp. 132--158], where the agreement condition is weakened so that the decision values produced may be different, as long as the number of distinct values is at most k. For @math it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper, we resolve this question by showing that for any k < n, there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "In the classical consensus problem,each of n processors receives a private input value and produces a decision value which is one of the original input values,with the requirement that all processors decide the same value. A central result in distributed computing is that,in several standard models including the asynchronous shared-memory model,this problem has no determinis- tic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri (Inform. and Comput.,105 (1993),pp. 132-158),where the agreement condition is weak- ened so that the decision values produced may be different,as long as the number of distinct values is at most k .F or n>k ≥ 2 it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper,we resolve this question by showing that for any k<n ,there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "This paper studies the lattice agreement problem and the generalized lattice agreement problem in distributed message passing systems. In the lattice agreement problem, given input values from a lattice, processes have to non-trivially decide output values that lie on a chain. We consider the lattice agreement problem in both synchronous and asynchronous systems. 
For synchronous lattice agreement, we present two algorithms which run in @math and @math rounds, respectively, where @math denotes the height of the input sublattice @math , @math is the number of crash failures the system can tolerate, and @math is the number of processes in the system. These algorithms have significant better round complexity than previously known algorithms. The algorithm by attiya1995atomic takes @math synchronous rounds, and the algorithm by Mavronicolasa mavronicolasabound takes @math rounds. For asynchronous lattice agreement, we propose an algorithm which has time complexity of @math message delays which improves on the previously known time complexity of @math message delays. The generalized lattice agreement problem defined by in faleiro2012generalized is a generalization of the lattice agreement problem where it is applied for the replicated state machine. We propose an algorithm which guarantees liveness when a majority of the processes are correct in asynchronous systems. Our algorithm requires @math units of time in the worst case which is better than @math units of time required by the algorithm of faleiro2012generalized .", "Loop agreement is a family of wait-free tasks that includes instances of set agreement and approximate agreement tasks. A task G implements task F if one can construct a solution to F from a solution to G, possibly followed by access to a read write memory. Loop agreement tasks form a lattice under this notion of implementation.This paper presents a classification of loop agreement tasks. Each loop agreement task can be assigned an algebraic signature consisting of a finitely presented group G and a distinguished element g in G. This signature characterizes the task's power to implement other tasks. If F and G are loop agreement tasks with respective signatures 〈F,f〉 and 〈G,g〉, then F implements G if and only if there exists a group homomorphism h : F → G carrying f to g." ] }
1908.02571
2964743966
Informing professionals about the latest research results in their field is a particularly important task in health care, since any development in this field directly improves the health status of patients. Meanwhile, social media is an infrastructure that allows instant public sharing of information; thus, it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as research results on Twitter. Our study shows that, using this method, physicians can be informed about new findings in their field, given that they have an account dedicated to their profession.
Classic link prediction methods on social media use graph properties of the social network or NLP features of the nodes to predict links between entities. For example, @cite_3 is based solely on graph features, and @cite_8 uses a similar technique for social networks in healthcare. Meanwhile, @cite_14 uses common words to cluster and rank nodes and, based on that, predicts that closely-ranked nodes are connected. Another study @cite_10 uses a combination of graph features and keyword matches to train classifiers (SVM, Naive Bayes, etc.) to predict whether a link exists between two nodes.
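As a reading aid only (not code from any of the papers cited in the paragraph above), the following is a minimal sketch of the graph-feature style of link prediction that paragraph describes. It assumes networkx and scikit-learn are available; the particular features (common-neighbor count, Jaccard coefficient), the toy graph, and the SVM classifier are illustrative assumptions, not choices taken from the cited works.

```python
# Minimal sketch: structural link-prediction features fed to a classifier (illustrative only).
import networkx as nx
import numpy as np
from sklearn.svm import SVC

G = nx.karate_club_graph()  # toy stand-in for an observed social network

def pair_features(G, u, v):
    """Two simple structural features for a candidate node pair."""
    common = len(list(nx.common_neighbors(G, u, v)))
    jaccard = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    return [common, jaccard]

# Positive examples: observed edges. Negative examples: sampled non-edges.
# (A real pipeline would hold the positive edges out of G before computing features
#  and would concatenate keyword-match features, as in the classifier-based works above.)
pos = list(G.edges())[:30]
neg = list(nx.non_edges(G))[:30]
X = np.array([pair_features(G, u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = SVC(probability=True).fit(X, y)
print(clf.predict_proba(np.array([pair_features(G, 0, 33)]))[0, 1])  # P(link between nodes 0 and 33)
```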
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_3", "@cite_8" ], "mid": [ "2137962371", "2542727820", "2097915776", "2136116685" ], "abstract": [ "In this paper we discuss a very simple approach of combining content and link information in graph structures for the purpose of community discovery, a fundamental task in network analysis. Our approach hinges on the basic intuition that many networks contain noise in the link structure and that content information can help strengthen the community signal. This enables ones to eliminate the impact of noise (false positives and false negatives), which is particularly prevalent in online social networks and Web-scale information networks. Specifically we introduce a measure of signal strength between two nodes in the network by fusing their link strength with content similarity. Link strength is estimated based on whether the link is likely (with high probability) to reside within a community. Content similarity is estimated through cosine similarity or Jaccard coefficient. We discuss a simple mechanism for fusing content and link similarity. We then present a biased edge sampling procedure which retains edges that are locally relevant for each graph node. The resulting backbone graph can be clustered using standard community discovery algorithms such as Metis and Markov clustering. Through extensive experiments on multiple real-world datasets (Flickr, Wikipedia and CiteSeer) with varying sizes and characteristics, we demonstrate the effectiveness and efficiency of our methods over state-of-the-art learning and mining approaches several of which also attempt to combine link and content analysis for the purposes of community discovery. Specifically we always find a qualitative benefit when combining content with link analysis. Additionally our biased graph sampling approach realizes a quantitative benefit in that it is typically several orders of magnitude faster than competing approaches.", "Online social networking sites have become increasingly popular over the last few years. As a result, new interdisciplinary research directions have emerged in which social network analysis methods are applied to networks containing hundreds millions of users. Unfortunately, links between individuals may be missing due to imperfect acquirement processes or because they are not yet reflected in the online network (i.e., friends in real world did not form a virtual connection.) Existing link prediction techniques lack the scalability required for full application on a continuously growing social network which may be adding everyday users with thousands of connections. The primary bottleneck in link prediction techniques is extracting structural features required for classifying links. In this paper we propose a set of simple, easy-to-compute structural features that can be analyzed to identify missing links. We show that a machine learning classifier trained using the proposed simple structural features can successfully identify missing links even when applied to a hard problem of classifying links between individuals who have at least one common friend. A new friends measure that we developed is shown to be a good predictor for missing links and an evaluation experiment was performed on five large social networks datasets: Face book, Flickr, You Tube, Academia and The Marker. Our methods can provide social network site operators with the capability of helping users to find known, offline contacts and to discover new friends online. 
They may also be used for exposing hidden links in an online social network.", "Link prediction is a complex, inherently relational, task. Be it in the domain of scientific citations, social networks or hypertext links, the underlying data are extremely noisy and the characteristics useful for prediction are not readily available in a “flat” file format, but rather involve complex relationships among objects. In this paper, we propose the application of our methodology for Statistical Relational Learning to building link prediction models. We propose an integrated approach to building regression models from data stored in relational databases in which potential predictors are generated by structured search of the space of queries to the database, and then tested for inclusion in a logistic regression. We present experimental results for the task of predicting citations made in scientific literature using relational data taken from CiteSeer. This data includes the citation graph, authorship and publication venues of papers, as well as their word content.", "Link prediction is a fundamental problem in social network analysis. The key technique in unsupervised link prediction is to find an appropriate similarity measure between nodes of a network. A class of wildly used similarity measures are based on random walk on graph. The traditional random walk (TRW) considers the link structures by treating all nodes in a network equivalently, and ignores the centrality of nodes of a network. However, in many real networks, nodes of a network not only prefer to link to the similar node, but also prefer to link to the central nodes of the network. To address this issue, we use maximal entropy random walk (MERW) for link prediction, which incorporates the centrality of nodes of the network. First, we study certain important properties of MERW on graph @math by constructing an eigen-weighted graph G. We show that the transition matrix and stationary distribution of MERW on G are identical to the ones of TRW on G. Based on G, we further give the maximal entropy graph Laplacians, and show how to fast compute the hitting time and commute time of MERW. Second, we propose four new graph kernels and two similarity measures based on MERW for link prediction. Finally, to exhibit the power of MERW in link prediction, we compare 27 various link prediction methods over 3 synthetic and 8 real networks. The results show that our newly proposed MERW based methods outperform the state-of-the-art method on most datasets." ] }
1908.02571
2964743966
Informing professionals about the latest research results in their field is a particularly important task in health care, since any development in this field directly improves the health status of patients. Meanwhile, social media is an infrastructure that allows instant public sharing of information; thus, it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as research results on Twitter. Our study shows that, using this method, physicians can be informed about new findings in their field, given that they have an account dedicated to their profession.
TransE @cite_13 is an embedding model that is popular because of its simplicity and efficiency. It represents each relation in a KG as a translation between the embedding vectors of the entities it connects, so that the head embedding translated by the relation embedding should lie close to the tail embedding. The score function for a triple in TransE is the distance @math between the translated head embedding and the tail embedding, computed with the L1 or L2 norm; this distance is small for plausible triples.
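The score-function sentence above is cut off in this record. As a hedged illustration of the standard TransE scoring rule from the literature (not text or code from the cited record), the sketch below assumes plain NumPy vectors and omits the margin-based training loss; the embedding dimension and example values are arbitrary.

```python
# Minimal sketch of TransE scoring: a triple (h, r, t) is plausible when h + r lies close to t.
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray, norm: int = 1) -> float:
    """Negated L1 (norm=1) or L2 (norm=2) distance; higher means more plausible."""
    return -float(np.linalg.norm(h + r - t, ord=norm))

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))   # toy embeddings for head, relation, tail
print(transe_score(h, r, t))         # plausibility of the random triple
print(transe_score(h, r, h + r))     # a tail matching the translation scores 0.0 (best possible)
```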
{ "cite_N": [ "@cite_13" ], "mid": [ "2127795553", "2073587810", "2283196293", "2250342289" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "The representation of a knowledge graph (KG) in a latent space recently has attracted more and more attention. To this end, some proposed models (e.g., TransE) embed entities and relations of a KG into a \"point\" vector space by optimizing a global loss function which ensures the scores of positive triplets are higher than negative ones. We notice that these models always regard all entities and relations in a same manner and ignore their (un)certainties. In fact, different entities and relations may contain different certainties, which makes identical certainty insufficient for modeling. Therefore, this paper switches to density-based embedding and propose KG2E for explicitly modeling the certainty of entities and relations, which learn the representations of KGs in the space of multi-dimensional Gaussian distributions. Each entity relation is represented by a Gaussian distribution, where the mean denotes its position and the covariance (currently with diagonal covariance) can properly represent its certainty. In addition, compared with the symmetric measures used in point-based methods, we employ the KL-divergence for scoring triplets, which is a natural asymmetry function for effectively modeling multiple types of relations. We have conducted extensive experiments on link prediction and triplet classification with multiple benchmark datasets (WordNet and Freebase). Our experimental results demonstrate that our method can effectively model the (un)certainties of entities and relations in a KG, and it significantly outperforms state-of-the-art methods (including TransH and TransR).", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. 
Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Knowledge graphs are useful resources for numerous AI applications, but they are far from completeness. Previous work such as TransE, TransH and TransR CTransR regard a relation as translation from head entity to tail entity and the CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of a(n) entity (relation), the other one is used to construct mapping matrix dynamically. Compared with TransR CTransR, TransD not only considers the diversity of relations, but also entities. TransD has less parameters and has no matrix-vector multiplication operations, which makes it can be applied on large scale graphs. In Experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms stateof-the-art methods." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication-based fault-tolerant systems and notably in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very lightweight crypto requirements.
The first work to explore consensus that terminates in a single communication round in the benign failure model is @cite_10 . The basic protocol in @cite_10 requires @math . That protocol is also extended to support a preferred value, which improves the resiliency requirement to @math , similar to our work. Compared to @cite_10 , the main contributions of this paper are the exploration of this problem under Byzantine failures and the fact that we present a single generic optimizer for both failure models.
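To make the single-round "fast path with a preferred value" pattern discussed above and in the abstract concrete, here is a schematic sketch. The threshold, message handling, and fallback interface are illustrative assumptions for exposition only; this is neither the algorithm of this paper nor that of @cite_10.

```python
# Schematic fast path for consensus biased toward a preferred value (illustrative only).
# Round 1: every node broadcasts its input and waits for n - f round-1 messages.
# A node decides immediately only if its own input and all collected values equal the
# preferred value; otherwise it falls back to the full base protocol. For safety, a
# real construction must also bias the fallback so that its decision stays consistent
# with any fast-path decision already taken by other nodes.
from typing import Callable, List

def one_round_fast_path(my_value: int,
                        received: List[int],
                        preferred: int,
                        n: int,
                        f: int,
                        fallback: Callable[[int], int]) -> int:
    assert len(received) >= n - f, "wait for at least n - f round-1 messages"
    if my_value == preferred and all(v == preferred for v in received):
        return preferred
    return fallback(my_value)

# Example: n = 4 nodes, f = 1 fault tolerated, base protocol stubbed out.
print(one_round_fast_path(my_value=1, received=[1, 1, 1], preferred=1,
                          n=4, f=1, fallback=lambda v: v))
```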
{ "cite_N": [ "@cite_10" ], "mid": [ "1976693492", "2058322902", "2591036807", "2030698973" ], "abstract": [ "This paper introduces some algorithms to solve crash-failure, failure-by-omission and Byzantine failure versions of the Byzantine Generals or consensus problem, where non-faulty processors need only arrive at values that are close together rather than identical. For each failure model and each value ofS, we give at-resilient algorithm usingS rounds of communication. IfS=t+1, exact agreement is obtained. In the algorithms for the failure-by-omission and Byzantine failure models, each processor attempts to identify the faulty processors and corrects values transmited by them to reduce the amount of disagreement. We also prove lower bounds for each model, to show that each of our algorithms has a convergence rate that is asymptotic to the best possible in that model as the number of processors increases.", "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes", "We consider distributed plurality consensus in a complete graph of size @math with @math initial opinions. We design an efficient and simple protocol in the asynchronous communication model that ensures that all nodes eventually agree on the initially most frequent opinion. In this model, each node is equipped with a random Poisson clock with parameter @math . Whenever a node's clock ticks, it samples some neighbors, uniformly at random and with replacement, and adjusts its opinion according to the sample. A prominent example is the so-called two-choices algorithm in the synchronous model, where in each round, every node chooses two neighbors uniformly at random, and if the two sampled opinions coincide, then that opinion is adopted. This protocol is very efficient and well-studied when @math . If @math for some small @math , we show that it converges to the initial plurality opinion within @math rounds, w.h.p., as long as the initial difference between the largest and second largest opinion is @math . On the other side, we show that there are cases in which @math rounds are needed, w.h.p. One can beat this lower bound in the synchronous model by combining the two-choices protocol with randomized broadcasting. Our main contribution is a non-trivial adaptation of this approach to the asynchronous model. If the support of the most frequent opinion is at least @math times that of the second-most frequent one and @math , then our protocol achieves the best possible run time of @math , w.h.p. We relax full synchronicity by allowing @math nodes to be poorly synchronized, and the well synchronized nodes are only required to be within a certain time difference from one another. 
We enforce this synchronicity by introducing a novel gadget into the protocol.", "This paper is on the consensus problem in asynchronous distributed systems where (up to f) processes (among n) can exhibit a Byzantine behavior, i.e., can deviate arbitrarily from their specification. One way to solve the consensus problem in such a context consists of enriching the system with additional oracles that are powerful enough to cope with the uncertainty and unpredictability created by the combined effect of Byzantine behavior and asynchrony. This paper presents two kinds of Byzantine asynchronous consensus protocols using two types of oracles, namely, a common coin that provides processes with random values and a failure detector oracle. Both allow the processes to decide in one communication step in favorable circumstances. The first is a randomized protocol for an oblivious scheduler model that assumes n > 6f. The second one is a failure detector-based protocol that assumes n > tif. These protocols are designed to be particularly simple and efficient in terms of communication steps, the number of messages they generate in each step, and the size of messages. So, although they are not optimal in the number of Byzantine processes that can be tolerated, they are particularly efficient when we consider the number of communication steps they require to decide and the number and size of the messages they use. In that sense, they are practically appealing." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication-based fault-tolerant systems and notably in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very lightweight crypto requirements.
The work of @cite_18 explored simple Byzantine consensus protocols that can terminate in a single communication round whenever all nodes start with the same value and certain failures do not manifest. Yet, the probabilistic protocol of @cite_18 required @math , while their deterministic protocol needed @math . In contrast, our optimizer, when instantiated for Byzantine failures, can withstand up to @math with the classical validity definition (and @math with external validity, which was not explored in @cite_18 ). This is achieved by biasing the consensus toward a preferred value. The price we pay compared to @cite_18 is that if all nodes start with the non-preferred value and the respective failures do not manifest, the protocol in @cite_18 terminates in a single communication step, while our optimizer has to invoke the full protocol.
{ "cite_N": [ "@cite_18" ], "mid": [ "2058322902", "2128634904", "2902905458", "2030698973" ], "abstract": [ "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes", "A team consisting of an unknown number of mobile agents, starting from different nodes of an unknown network, have to meet at the same node. Agents move in synchronous rounds. Each agent has a different label. Up to f of the agents are Byzantine. We consider two levels of Byzantine behavior. A strongly Byzantine agent can choose an arbitrary port when it moves and it can convey arbitrary information to other agents, while a weakly Byzantine agent can do the same, except changing its label. What is the minimum number of good agents that guarantees deterministic gathering of all of them, with terminationq We solve exactly this Byzantine gathering problem in arbitrary networks for weakly Byzantine agents and give approximate solutions for strongly Byzantine agents, both when the size of the network is known and when it is unknown. It turns out that both the strength versus the weakness of Byzantine behavior and the knowledge of network size significantly impact the results. For weakly Byzantine agents, we show that any number of good agents permits solving the problem for networks of known size. If the size is unknown, then this minimum number is fp2. More precisely, we show a deterministic polynomial algorithm that gathers all good agents in an arbitrary network, provided that there are at least fp2 of them. We also provide a matching lower bound: we prove that if the number of good agents is at most fp1, then they are not able to gather deterministically with termination in some networks. For strongly Byzantine agents, we give a lower bound of fp1, even when the graph is known: we show that f good agents cannot gather deterministically in the presence of f Byzantine agents even in a ring of known size. On the positive side, we give deterministic gathering algorithms for at least 2fp1 good agents when the size of the network is known and for at least 4fp2 good agents when it is unknown.", "This paper introduces a new leaderless Byzantine consensus called the Democratic Byzantine Fault Tolerance (DBFT) for blockchains. While most blockchain consensus protocols rely on a correct leader or coordinator to terminate, our algorithm can terminate even when its coordinator is faulty. The key idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. 
The resulting decentralization is particularly appealing for blockchains for two reasons: (i) each node plays a similar role in the execution of the consensus, hence making the decision inherently “democratic” (ii) decentralization avoids bottlenecks by balancing the load, making the solution scalable. DBFT is deterministic, assumes partial synchrony, is resilience optimal, time optimal and does not need signatures. We first present a simple safe binary Byzantine consensus algorithm, modify it to ensure termination, and finally present an optimized reduction from multivalue consensus to binary consensus whose fast path terminates in 4 message delays.", "This paper is on the consensus problem in asynchronous distributed systems where (up to f) processes (among n) can exhibit a Byzantine behavior, i.e., can deviate arbitrarily from their specification. One way to solve the consensus problem in such a context consists of enriching the system with additional oracles that are powerful enough to cope with the uncertainty and unpredictability created by the combined effect of Byzantine behavior and asynchrony. This paper presents two kinds of Byzantine asynchronous consensus protocols using two types of oracles, namely, a common coin that provides processes with random values and a failure detector oracle. Both allow the processes to decide in one communication step in favorable circumstances. The first is a randomized protocol for an oblivious scheduler model that assumes n > 6f. The second one is a failure detector-based protocol that assumes n > tif. These protocols are designed to be particularly simple and efficient in terms of communication steps, the number of messages they generate in each step, and the size of messages. So, although they are not optimal in the number of Byzantine processes that can be tolerated, they are particularly efficient when we consider the number of communication steps they require to decide and the number and size of the messages they use. In that sense, they are practically appealing." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication-based fault-tolerant systems and notably in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very lightweight crypto requirements.
Traditional deterministic Byzantine consensus protocols, most notably PBFT @cite_15 , require at least three communication rounds to terminate. Multiple works that reduce this number have been published, each presenting a unique optimization. The Q/U work presented a client-driven protocol @cite_17 that enables termination in two communication rounds when favorable conditions are met. Yet, its resiliency requirement is @math , compared to our @math for classical validity and @math for external validity. The HQ work improved the resiliency of Q/U to @math , yet it does not perform well under high network load @cite_2 . Also, our optimizer is generic, whereas Q/U and HQ are specialized solutions, each tailored to its intricate protocol.
{ "cite_N": [ "@cite_15", "@cite_2", "@cite_17" ], "mid": [ "2129467152", "2902905458", "2058322902", "1988655303" ], "abstract": [ "There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is un-necessary when there is no contention, and Q U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises. Furthermore, HQ uses only 3f + 1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q U analytically and experimentally. Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases. Our results show that both HQ and our new implementation of BFT scale as f increases; additionally our hybrid approach of using BFT to handle contention works well.", "This paper introduces a new leaderless Byzantine consensus called the Democratic Byzantine Fault Tolerance (DBFT) for blockchains. While most blockchain consensus protocols rely on a correct leader or coordinator to terminate, our algorithm can terminate even when its coordinator is faulty. The key idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. The resulting decentralization is particularly appealing for blockchains for two reasons: (i) each node plays a similar role in the execution of the consensus, hence making the decision inherently “democratic” (ii) decentralization avoids bottlenecks by balancing the load, making the solution scalable. DBFT is deterministic, assumes partial synchrony, is resilience optimal, time optimal and does not need signatures. We first present a simple safe binary Byzantine consensus algorithm, modify it to ensure termination, and finally present an optimized reduction from multivalue consensus to binary consensus whose fast path terminates in 4 message delays.", "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes", "Abstract A consensus protocol enables a system of n asynchronous processes, some of them faulty, to reach agreement. 
Both the processes and the message system are capable of cooperating to prevent the correct processes from reaching decision. A protocol is t -resilient if in the presence of up to t faulty processes it reaches agreement with probability 1. Byzantine processes are faulty processes that can deviate arbitrarily from the protocol; Fail-Stop processes can just stop participating in it. In a recent paper, t -resilient randomized consensus protocols were presented for t n 5 . We improve this to t n 3 , thus matching the known lower bound on the number of correct processes necessary for consensus. The protocol uses a general technique in which the behavior of the Byzantine processes is restricted by the use of a broadcast protocol that filters some of the messages. The apparent behavior of the Byzantine processes, filtered by the broadcast protocol, is similar to that of Fail-Stop processes. Plugging the broadcast protocol as a communicating primitive into an agreement protocol for Fail-Stop processes gives the result. This technique, of using broadcast protocols to reduce the power of the faulty processes and then using them as communication primitives in algorithms designed for weaker failure models, was used succesfully in other contexts." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication-based fault-tolerant systems and notably in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very lightweight crypto requirements.
The Fast Byzantine Consensus (FaB) protocol was the first Byzantine consensus protocol to terminate in two communication phases in the normal case, at the cost of requiring @math @cite_29 . The normal case in @cite_29 is defined as the situation in which there is a unique correct leader, all correct acceptors agree on its identity, and the system is in a period of synchrony. This protocol translates into a @math -phase state machine replication protocol. Another variant can accommodate @math , where @math is an upper bound on the number of non-leader replicas that suffer Byzantine failures.
{ "cite_N": [ "@cite_29" ], "mid": [ "2058322902", "2902905458", "1988655303", "2030698973" ], "abstract": [ "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes", "This paper introduces a new leaderless Byzantine consensus called the Democratic Byzantine Fault Tolerance (DBFT) for blockchains. While most blockchain consensus protocols rely on a correct leader or coordinator to terminate, our algorithm can terminate even when its coordinator is faulty. The key idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. The resulting decentralization is particularly appealing for blockchains for two reasons: (i) each node plays a similar role in the execution of the consensus, hence making the decision inherently “democratic” (ii) decentralization avoids bottlenecks by balancing the load, making the solution scalable. DBFT is deterministic, assumes partial synchrony, is resilience optimal, time optimal and does not need signatures. We first present a simple safe binary Byzantine consensus algorithm, modify it to ensure termination, and finally present an optimized reduction from multivalue consensus to binary consensus whose fast path terminates in 4 message delays.", "Abstract A consensus protocol enables a system of n asynchronous processes, some of them faulty, to reach agreement. Both the processes and the message system are capable of cooperating to prevent the correct processes from reaching decision. A protocol is t -resilient if in the presence of up to t faulty processes it reaches agreement with probability 1. Byzantine processes are faulty processes that can deviate arbitrarily from the protocol; Fail-Stop processes can just stop participating in it. In a recent paper, t -resilient randomized consensus protocols were presented for t n 5 . We improve this to t n 3 , thus matching the known lower bound on the number of correct processes necessary for consensus. The protocol uses a general technique in which the behavior of the Byzantine processes is restricted by the use of a broadcast protocol that filters some of the messages. The apparent behavior of the Byzantine processes, filtered by the broadcast protocol, is similar to that of Fail-Stop processes. Plugging the broadcast protocol as a communicating primitive into an agreement protocol for Fail-Stop processes gives the result. 
This technique, of using broadcast protocols to reduce the power of the faulty processes and then using them as communication primitives in algorithms designed for weaker failure models, was used succesfully in other contexts.", "This paper is on the consensus problem in asynchronous distributed systems where (up to f) processes (among n) can exhibit a Byzantine behavior, i.e., can deviate arbitrarily from their specification. One way to solve the consensus problem in such a context consists of enriching the system with additional oracles that are powerful enough to cope with the uncertainty and unpredictability created by the combined effect of Byzantine behavior and asynchrony. This paper presents two kinds of Byzantine asynchronous consensus protocols using two types of oracles, namely, a common coin that provides processes with random values and a failure detector oracle. Both allow the processes to decide in one communication step in favorable circumstances. The first is a randomized protocol for an oblivious scheduler model that assumes n > 6f. The second one is a failure detector-based protocol that assumes n > tif. These protocols are designed to be particularly simple and efficient in terms of communication steps, the number of messages they generate in each step, and the size of messages. So, although they are not optimal in the number of Byzantine processes that can be tolerated, they are particularly efficient when we consider the number of communication steps they require to decide and the number and size of the messages they use. In that sense, they are practically appealing." ] }
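To make the two-phase normal case discussed above more tangible, here is a minimal Python sketch of a leader-based fast path: acceptors echo the leader's proposal, and a value is decided once enough identical echoes arrive. This is not FaB itself; the quorum threshold, the message bookkeeping, and the fallback are placeholder assumptions, and FaB's exact quorum sizes and recovery protocol are given in the cited paper.

```python
# Minimal sketch of a two-round "fast path" for leader-based Byzantine consensus.
# This is NOT the FaB protocol: the quorum threshold below is a placeholder
# assumption; FaB's exact quorum sizes and its recovery path are in the cited paper.

def fast_path_decision(n, f, echoes):
    """Decide after round 1 (leader proposes) plus round 2 (acceptors echo).

    `echoes` maps acceptor id -> the value that acceptor claims the leader proposed.
    A value is decided only if enough acceptors echo the same value; otherwise the
    slower recovery path of the underlying protocol has to take over.
    """
    quorum = n - f  # placeholder threshold; a real protocol picks it to ensure safety
    counts = {}
    for value in echoes.values():
        counts[value] = counts.get(value, 0) + 1
    for value, count in counts.items():
        if count >= quorum:
            return value  # fast decision after two communication steps
    return None  # fall back to the full protocol


if __name__ == "__main__":
    # 4 of 5 acceptors echo "v1"; with f = 1 the fast path can decide immediately.
    print(fast_path_decision(5, 1, {1: "v1", 2: "v1", 3: "v1", 4: "v1", 5: "v2"}))
```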
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication-based fault-tolerant systems, notably in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
Zyzzyva is a client-driven protocol @cite_24 that terminates after @math communication rounds (including the communication between the client and the replicas) whenever the client receives identical replies from all @math replicas. Our optimizer achieves termination in a single communication round among the replicas even when up to @math of them may crash or be slow. It does so by relying on all-to-all communication and by ensuring fast termination only when the preferred value is included in the first @math replies. Moreover, our optimizer is generic, while Zyzzyva and FaB are specialized solutions.
{ "cite_N": [ "@cite_24" ], "mid": [ "2139359217", "2124037649", "1965159636", "2964000194" ], "abstract": [ "We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client's request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minimal.", "We present Abstract (ABortable STate mAChine replicaTion), a new abstraction for designing and reconfiguring generalized replicated state machines that are, unlike traditional state machines, allowed to abort executing a client’s request if “something goes wrong.” Abstract can be used to considerably simplify the incremental development of efficient Byzantine fault-tolerant state machine replication ( BFT ) protocols that are notorious for being difficult to develop. In short, we treat a BFT protocol as a composition of Abstract instances. Each instance is developed and analyzed independently and optimized for specific system conditions. We illustrate the power of Abstract through several interesting examples. We first show how Abstract can yield benefits of a state-of-the-art BFT protocol in a less painful and error-prone manner. Namely, we develop AZyzzyva, a new protocol that mimics the celebrated best-case behavior of Zyzzyva using less than 35p of the Zyzzyva code. To cover worst-case situations, our abstraction enables one to use in AZyzzyva any existing BFT protocol. We then present Aliph, a new BFT protocol that outperforms previous BFT protocols in terms of both latency (by up to 360p) and throughput (by up to 30p). Finally, we present R-Aliph, an implementation of Aliph that is robust, that is, whose performance degrades gracefully in the presence of Byzantine replicas and Byzantine clients.", "Modern Byzantine fault-tolerant state machine replication (BFT) protocols involve about 20,000 lines of challenging C++ code encompassing synchronization, networking and cryptography. They are notoriously difficult to develop, test and prove. We present a new abstraction to simplify these tasks. We treat a BFT protocol as a composition of instances of our abstraction. Each instance is developed and analyzed independently. To illustrate our approach, we first show how our abstraction can be used to obtain the benefits of a state-of-the-art BFT protocol with much less pain. Namely, we develop AZyzzyva, a new protocol that mimics the behavior of Zyzzyva in best-case situations (for which Zyzzyva was optimized) using less than 24 of the actual code of Zyzzyva. To cover worst-case situations, our abstraction enables to use in AZyzzyva any existing BFT protocol, typically, a classical one like PBFT which has been tested and proved correct. We then present Aliph, a new BFT protocol that outperforms previous BFT protocols both in terms of latency (by up to 30 ) and throughput (by up to 360 ). The development of Aliph required two new instances of our abstraction. 
Each instance contains less than 25 of the code needed to develop state-of-the-art BFT protocols.", "We consider the problem of learning an unknown Markov Decision Process (MDP) that is weakly communicating in the infinite horizon setting. We propose a Thompson Sampling-based reinforcement learning algorithm with dynamic episodes (TSDE). At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. The first stopping criterion controls the growth rate of episode length. The second stopping criterion happens when the number of visits to any state-action pair is doubled. We establish @math bounds on expected regret under a Bayesian setting, where @math and @math are the sizes of the state and action spaces, @math is time, and @math is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs. Numerical results show it to perform better than existing algorithms for infinite horizon MDPs." ] }
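The single-round optimization described above can be illustrated with a small sketch: every replica broadcasts its input, and a replica decides immediately only if the first n - f values it hears all equal the predetermined value; otherwise it falls back to the unmodified base protocol. The threshold and decision rule below are illustrative assumptions rather than the paper's exact construction.

```python
# Illustrative sketch of a one-round "optimistic" step in front of a base consensus
# protocol. The exact rules and thresholds of the optimizer discussed above are in
# the corresponding paper; this only shows the general shape of the fast path.

def optimistic_round(n, f, predetermined_value, received_values):
    """received_values: the first n - f inputs heard in an all-to-all exchange."""
    assert len(received_values) >= n - f, "wait until n - f messages have arrived"
    first = received_values[: n - f]
    if all(v == predetermined_value for v in first):
        return predetermined_value        # fast decision in a single round
    return None                           # hand over to the (unmodified) base protocol


if __name__ == "__main__":
    # All nodes proposed the predetermined value; one node is merely slow or crashed.
    print(optimistic_round(4, 1, "commit", ["commit", "commit", "commit"]))  # -> commit
    # A diverging value among the first n - f replies forces the base protocol.
    print(optimistic_round(4, 1, "commit", ["commit", "abort", "commit"]))   # -> None
```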
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication-based fault-tolerant systems, notably in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The condition-based approach to solving consensus identifies sets of input values that enable solving consensus fast @cite_7 . It does so by treating the set of input values held by all processes as an input vector to the problem. Specifically, the work in @cite_11 showed that when the possible input vectors correspond to error-correcting codes, consensus is solvable in a single communication round regardless of synchrony assumptions.
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2139686511", "1760770890", "2021482754", "2951735990" ], "abstract": [ "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.", "This work addresses Byzantine vector consensus, wherein the input at each process is a d-dimensional vector of reals, and each process is required to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [9,12]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space R d , where di¾?>i¾?0 is a finite integer. Recent work [9,12] has addressed Byzantine vector consensus, and presented algorithms with optimal fault tolerance in complete graphs. This paper considers Byzantine vector consensus in incomplete graphs using a restricted class of iterative algorithms that maintain only a small amount of memory across iterations. For such algorithms, we prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition. The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for di¾?>i¾?1; thus, a weaker condition may potentially suffice for Byzantine vector consensus.", "This article introduces and explores the condition-based approach to solve the consensus problem in asynchronous systems. The approach studies conditions that identify sets of input vectors for which it is possible to solve consensus despite the occurrence of up to f process crashes. The first main result defines acceptable conditions and shows that these are exactly the conditions for which a consensus protocol exists. Two examples of realistic acceptable conditions are presented, and proved to be maximal, in the sense that they cannot be extended and remain acceptable. The second main result is a generic consensus shared-memory protocol for any acceptable condition. 
The protocol always guarantees agreement and validity, and terminates (at least) when the inputs satisfy the condition with which the protocol has been instantiated, or when there are no crashes. An efficient version of the protocol is then designed for the message passing model that works when f < n/2, and it is shown that no such protocol exists when f ≥ n/2. It is also shown how the protocol's safety can be traded for its liveness.", "This work addresses Byzantine vector consensus (BVC), wherein the input at each process is a d-dimensional vector of reals, and each process is expected to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [3, 8]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space R^d, where d > 0 is a finite integer. Recent work [3, 8] has addressed Byzantine vector consensus in systems that can be modeled by a complete graph. This paper considers Byzantine vector consensus in incomplete graphs. In particular, we address a particular class of iterative algorithms in incomplete graphs, and prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition. The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for d > 1; thus, a weaker condition may potentially suffice for Byzantine vector consensus." ] }
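The link between conditions and error-correcting codes mentioned above can be illustrated concretely: if the allowed input vectors form a code with Hamming distance at least f + 1, then a view missing up to f entries (crashed processes) is consistent with at most one allowed vector, so all processes decode the same vector and can decide in one round. The toy condition and the decision rule (take the first coordinate) in the sketch below are assumptions made only for illustration.

```python
# Toy illustration of the condition-based approach: when the set of allowed input
# vectors (the "condition") is a code with minimum Hamming distance f + 1, a view
# with up to f missing entries matches at most one allowed vector. The view is
# assumed to originate from one of the allowed vectors, so exactly one matches.

from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def matches(view, vector):
    # `view` may contain None at positions whose value was not received.
    return all(seen is None or seen == val for seen, val in zip(view, vector))

def decode_and_decide(condition, view, f):
    assert all(hamming(u, v) >= f + 1 for u, v in combinations(condition, 2)), \
        "condition must be a code with minimum Hamming distance f + 1"
    assert sum(v is None for v in view) <= f, "at most f entries may be missing"
    candidates = [vec for vec in condition if matches(view, vec)]
    assert len(candidates) == 1          # guaranteed by the distance property
    return candidates[0][0]              # decide, e.g., on the first coordinate


if __name__ == "__main__":
    condition = [(0, 0, 0, 0), (1, 1, 1, 1)]   # distance 4 >= f + 1 for f = 3
    print(decode_and_decide(condition, (1, None, 1, None), f=3))   # -> 1
```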
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
In contrast, Mixture of Experts (MoE) @cite_40 employs a divide-and-conquer strategy in which each base-learner, called an expert, specializes in one part of the problem domain. An additional gating network assesses the relevance of each expert for a given input and predicts an associated weight. The ensemble prediction is a weighted average of the experts' outputs. MoE has been trained by minimizing the expected training loss @cite_40 , by maximizing the likelihood under a Gaussian mixture model interpretation @cite_40 , or by using the expectation-maximization (EM) algorithm @cite_41 .
{ "cite_N": [ "@cite_41", "@cite_40" ], "mid": [ "2963280294", "2787223504", "2066334462", "2082366447" ], "abstract": [ "Mixtures of Experts combine the outputs of several “expert” networks, each of which specializes in a different part of the input space. This is achieved by training a “gating” network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent (“where”) experts at the first layer, and class-specific (“what”) experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "Mixture of experts (ME) is one of the most popular and interesting combining methods, which has great potential to improve performance in machine learning. ME is established based on the divide-and-conquer principle in which the problem space is divided between a few neural network experts, supervised by a gating network. 
In earlier works on ME, different strategies were developed to divide the problem space between the experts. To survey and analyse these methods more clearly, we present a categorisation of the ME literature based on this difference. Various ME implementations were classified into two groups, according to the partitioning strategies used and both how and when the gating network is involved in the partitioning and combining procedures. In the first group, The conventional ME and the extensions of this method stochastically partition the problem space into a number of subspaces using a special employed error function, and experts become specialised in each subspace. In the second group, the problem space is explicitly partitioned by the clustering method before the experts' training process starts, and each expert is then assigned to one of these sub-spaces. Based on the implicit problem space partitioning using a tacit competitive process between the experts, we call the first group the mixture of implicitly localised experts (MILE), and the second group is called mixture of explicitly localised experts (MELE), as it uses pre-specified clusters. The properties of both groups are investigated in comparison with each other. Investigation of MILE versus MELE, discussing the advantages and disadvantages of each group, showed that the two approaches have complementary features. Moreover, the features of the ME method are compared with other popular combining methods, including boosting and negative correlation learning methods. As the investigated methods have complementary strengths and limitations, previous researches that attempted to combine their features in integrated approaches are reviewed and, moreover, some suggestions are proposed for future research directions.", "Previous research has shown that averaging ensemble can scale up learning over very large cost-sensitive datasets with linear speedup independent of the learning algorithms. At the same time, it achieves the same or even better accuracy than a single model computed from the entire dataset. However, one major drawback is its inefficiency in prediction since every base model in the ensemble has to be consulted in order to produce a final prediction. In this paper, we propose several approaches to reduce the number of base classifiers. Among various methods explored, our empirical studies have shown that the benefit-based greedy approach can safely remove more than 90 of the base models while maintaining or even exceeding the prediction accuracy of the original ensemble. Assuming that each base classifier consumes one unit of prediction time, the removal of 90 of base classifiers translates to a prediction speedup of 10 times. On top of pruning, we propose a novel dynamic scheduling approach to further reduce the \"expected\" number of classifiers employed in prediction. It measures the confidence of a prediction by a subset of classifiers in the pruned ensemble. This confidence is used to decide if more classifiers are needed in order to produce a prediction that is the same as the original unpruned ensemble. This approach reduces the \"expected\" number of classifiers by another 25 to 75 without loss of accuracy." ] }
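As a concrete reference for the MoE combination described above, the following is a minimal NumPy sketch in which a softmax gating network weights the outputs of a few linear experts; the tiny linear experts and random parameters are purely illustrative.

```python
# Minimal NumPy sketch of a Mixture-of-Experts forward pass: a gating network
# produces one weight per expert (softmax), and the ensemble prediction is the
# weighted average of the experts' outputs.

import numpy as np

rng = np.random.default_rng(0)
num_experts, in_dim, out_dim = 3, 4, 2

expert_weights = [rng.normal(size=(in_dim, out_dim)) for _ in range(num_experts)]
gating_weights = rng.normal(size=(in_dim, num_experts))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_predict(x):
    gate = softmax(x @ gating_weights)                   # relevance of each expert
    outputs = np.stack([x @ w for w in expert_weights])  # (num_experts, out_dim)
    return gate @ outputs                                # weighted average

x = rng.normal(size=in_dim)
print(moe_predict(x))
```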
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
Scene coordinate regression methods @cite_58 @cite_10 @cite_2 @cite_43 @cite_28 @cite_44 @cite_4 @cite_29 @cite_13 @cite_1 also estimate 2D-3D correspondences between the image and the environment but do so densely, for each pixel of the input image. This circumvents the need for a feature detector and thus avoids the aforementioned drawbacks of feature-based methods. Brachmann et al. @cite_33 combine a neural network for scene coordinate regression with a differentiable RANSAC to obtain an end-to-end trainable camera re-localization pipeline. Brachmann and Rother @cite_0 improve the pipeline's initialization and differentiable pose optimization to achieve state-of-the-art results for indoor camera re-localization from single RGB images. We build on and extend @cite_33 @cite_1 by combining them with our ESAC framework. Thereby, we are able to address two real-world problems: scalability and ambiguity in camera re-localization. Some scene coordinate regression methods use an ensemble of base learners, namely random forests @cite_58 @cite_2 @cite_43 @cite_28 @cite_44 @cite_29 @cite_13 . Guzman-Rivera et al. @cite_45 train the random forest in a boosting-like manner to diversify its predictions. Massiceti et al. @cite_59 map an ensemble of decision trees to an ensemble of neural networks. However, in none of these methods do the base-learners specialize in parts of the problem domain.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_33", "@cite_28", "@cite_29", "@cite_1", "@cite_44", "@cite_43", "@cite_0", "@cite_45", "@cite_59", "@cite_2", "@cite_58", "@cite_10" ], "mid": [ "2522883048", "2963053725", "2739492061", "2964175348" ], "abstract": [ "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step, Random Forests (RFs) are typically used. On the other hand, Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "We propose a new deep learning based approach for camera relocalization. 
Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods." ] }
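For reference, the robust-fitting step these pipelines build on can be summarized by a plain RANSAC loop: sample a minimal set, hypothesize a model, and score it by counting inliers. The sketch below fits a 2D line instead of a 6D pose to stay self-contained; the sample size, inlier threshold, and scoring rule are illustrative assumptions, and swapping in a minimal PnP solver over sampled 2D-3D correspondences would yield the pose variant.

```python
# Generic RANSAC loop of the kind used to fit a model to correspondences that
# contain outliers. For brevity it fits a 2D line instead of a 6D camera pose.

import numpy as np

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0, unit normal.
    a, b = q[1] - p[1], p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    norm = np.hypot(a, b)
    return np.array([a, b, c]) / norm

def ransac_line(points, iterations=100, threshold=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_model, best_inliers = None, 0
    for _ in range(iterations):
        i, j = rng.choice(len(points), size=2, replace=False)    # minimal sample
        if np.allclose(points[i], points[j]):                    # degenerate sample
            continue
        model = fit_line(points[i], points[j])                   # hypothesis
        dist = np.abs(points @ model[:2] + model[2])             # point-line distance
        inliers = int((dist < threshold).sum())                  # score hypothesis
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=80)
    inlier_pts = np.stack([x, 0.5 * x + 0.2 + rng.normal(0, 0.01, size=80)], axis=1)
    outlier_pts = rng.uniform(-1, 1, size=(40, 2))
    model, count = ransac_line(np.vstack([inlier_pts, outlier_pts]))
    print(model, count)
```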
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
In @cite_65 , Brachmann et al. train a joint classification-regression forest for camera re-localization. The forest classifies which part of the environment an input belongs to and regresses relative scene coordinates for that part. More recently, image retrieval and relative pose regression have been combined in one system to achieve good accuracy in @cite_51 . Both works, @cite_65 and @cite_51 , bear some resemblance to our strategy but utilize one large model without the benefit of efficient, conditional computation. Also, their models cannot be trained in an end-to-end fashion.
{ "cite_N": [ "@cite_51", "@cite_65" ], "mid": [ "2522883048", "2963053725", "1989476314", "2739492061" ], "abstract": [ "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step, Random Forests (RFs) are typically used. On the other hand, Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. 
The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods." ] }
1908.02402
2965998974
This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot. The policy engine and language generation tasks are modeled jointly following that. The copy-augmented sequential decoder deals with new or unknown values in the conversation, while the multi-label decoder combined with the sequential decoder ensures the explicit assignment of values to slots. On the generation part, slot binary classifiers are used to improve performance. This architecture is scalable to real-world scenarios and is shown through an empirical evaluation to achieve state-of-the-art performance on both the Cambridge Restaurant dataset and the Stanford in-car assistant dataset The code is available at this https URL
Our work is related to end-to-end task-oriented dialogue systems in general [among others, BingNAACL18, Jason17, Lowe18, msr_challenge, BingGoogle17, Pawel18, bordes2016learning, HoriWHWHRHKJZA16, wen2016network, serban2016building] and to those that extend the Seq2Seq @cite_8 architecture in particular. Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, some of these works adopt a copy mechanism that allows copying information retrieved from the KB into the generated response, while others adopt Memory Networks to memorize the retrieved KB entities and the words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.
{ "cite_N": [ "@cite_8" ], "mid": [ "2616122292", "2551884415", "2593751037", "2952798561" ], "abstract": [ "Neural task-oriented dialogue systems often struggle to smoothly interface with a knowledge base. In this work, we seek to address this problem by proposing a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism. The model is end-to-end differentiable and does not need to explicitly model dialogue state or belief trackers. We also release a new dataset of 3,031 dialogues that are grounded through underlying knowledge bases and span three distinct tasks in the in-car personal assistant space: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our architecture is simultaneously trained on data from all domains and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics.", "We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention on the history as the conversation continues, as opposed to standard attention (or alignment) models with a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several metrics, including complementary diversity-based metrics, human evaluation, and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We promote further coherence via topic modeling-based reranking.", "In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering, which can be both time consuming and expensive. We provide baselines in two different environments: one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation, and one where models are trained to select the correct next response from a list of candidate responses. These are both evaluated on a recall task that we call Next Utterance Classification (NUC), as well as other generation-specific metrics. Finally, we provide a qualitative error analysis to help determine the most promising directions for future research on the Ubuntu Dialogue Corpus, and for end-to-end dialogue systems in general.", "We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. 
We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed length representations using Recurrent Neural Networks (RNN). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need of an explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. We evaluate the proposed models on movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users." ] }
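The copy mechanism mentioned above can be summarized in a few lines: the decoder mixes a distribution over a fixed vocabulary with a distribution over tokens copied from the KB result or dialogue history, so out-of-vocabulary slot values remain reachable. The probabilities in the sketch below are hand-set placeholders; in an actual model they come from the decoder state and attention weights.

```python
# Minimal NumPy sketch of a copy mechanism: mix a generation distribution over the
# fixed vocabulary with a copy distribution over retrieved KB / history tokens.

import numpy as np

vocab = ["<unk>", "the", "restaurant", "serves", "food"]
copy_candidates = ["curry_prince", "01223-566388"]        # e.g. retrieved KB tokens

p_vocab = np.array([0.05, 0.30, 0.25, 0.25, 0.15])        # generation distribution
p_copy = np.array([0.8, 0.2])                             # attention over KB tokens
p_gen = 0.6                                               # probability of generating

extended = {tok: p_gen * p for tok, p in zip(vocab, p_vocab)}
for tok, p in zip(copy_candidates, p_copy):
    extended[tok] = extended.get(tok, 0.0) + (1.0 - p_gen) * p

best = max(extended, key=extended.get)
print(best, round(extended[best], 3))   # "curry_prince" can win even though it is OOV
```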
1908.02239
2964358797
A surge in artificial intelligence and autonomous technologies has increased the demand for enhanced edge-processing capabilities. The computational complexity and size of state-of-the-art Deep Neural Networks (DNNs) are rising exponentially with diverse network models and larger datasets. This growth limits the performance scaling and energy efficiency of both distributed and embedded inference platforms. Embedded designs at the edge are constrained by the energy and speed limitations of available processor substrates and by the processor-to-memory communication required to fetch the model coefficients. While many hardware accelerator and network deployment frameworks have been in development, a framework is needed to allow the variety of existing architectures, and those in development, to be expressed in critical parts of the flow that perform various optimization steps. Moreover, premature architecture-blind network selection and optimization diminish the effectiveness of schedule optimizations and hardware-specific mappings. In this paper, we address these issues by creating a cross-layer software-hardware design framework that encompasses network training and model compression, and that is aware of and tuned to the underlying hardware architecture. This approach leverages the available degrees of DNN structure and sparsity to create a converged network that can be partitioned and efficiently scheduled on the target hardware platform, minimizing data movement and improving the overall throughput and energy. To further streamline the design, we leverage the high-level, flexible SoC generator platform based on the RISC-V ROCC framework. This integration allows seamless extensions of the RISC-V instruction set and Chisel-based rapid generator design. Utilizing this approach, we implemented a silicon prototype in a 16 nm TSMC process node achieving a record processing efficiency of up to 18 TOPS/W.
The concept of pruning neural networks and exploiting the resulting sparsity has been explored extensively, either on general-purpose processors @cite_29 @cite_12 @cite_48 @cite_49 or on dedicated accelerators. Both static pruning, in which the layer weights are compressed, and dynamic pruning, with zero-detection of the input activation values, have been explored. In both approaches, the unstructured sparse matrix resulting from weight pruning limits the speedup and energy saving achieved by the compression technique because of the random memory accesses it requires. Although Scalpel @cite_48 takes the underlying hardware platform into account, it achieves only a 1.25x average speedup on GPU with the cuSPARSE library, while structured pruning achieves 4x on the same platform @cite_34 @cite_39 . Among customized ASIC designs, EIE @cite_36 achieves a 5.12x speedup with respect to the GPU, whereas the APU design we present reaches up to an 80x speedup for a typical fully connected layer.
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_29", "@cite_39", "@cite_49", "@cite_34", "@cite_12" ], "mid": [ "2884180697", "2657126969", "2757143157", "2540366045" ], "abstract": [ "Weight pruning methods of deep neural networks have been demonstrated to achieve a good model pruning ratio without loss of accuracy, thereby alleviating the significant computation storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and demonstrated actual GPU acceleration. However, the pruning ratio and GPU acceleration are limited when accuracy needs to be maintained. In this work, we overcome pruning ratio and GPU acceleration limitations by proposing a unified, systematic framework of structured weight pruning for DNNs, named ADAM-ADMM. It is a framework that can be used to induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well non-structured sparsity. The proposed framework incorporates stochastic gradient descent with ADMM, and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. A significant improvement in structured weight pruning ratio is achieved without loss of accuracy, along with fast convergence rate. With a small sparsity degree of 33.3 on the convolutional layers, we achieve 1.64 accuracy enhancement for the AlexNet model. This is obtained by mitigation of overfitting. Without loss of accuracy on the AlexNet model, we achieve 2.58x and 3.65x average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 2.77x and 7.5x when allowing a moderate accuracy loss of 2 . In this case the model compression for convolutional layers is 13.2x, corresponding to 10.5x CPU speedup. Our experiments on ResNet model and on other datasets like UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework. Our models and codes are released at this https URL", "As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network sparsity caused by weight pruning will actually hurt the overall performance despite large reductions in the model size and required multiply-accumulate operations. Also, encoding the sparse format of pruned networks incurs additional storage space overhead. To overcome these challenges, we propose Scalpel that customizes DNN pruning to the underlying hardware by matching the pruned network structure to the data-parallel hardware organization. Scalpel consists of two techniques: SIMD-aware weight pruning and node pruning. For low-parallelism hardware (e.g., microcontroller), SIMD-aware weight pruning maintains weights in aligned fixed-size groups to fully utilize the SIMD units. For high-parallelism hardware (e.g., GPU), node pruning removes redundant nodes, not redundant weights, thereby reducing computation without sacrificing the dense matrix format. For hardware with moderate parallelism (e.g., desktop CPU), SIMD-aware weight pruning and node pruning are synergistically applied together. 
Across the microcontroller, CPU and GPU, Scalpel achieves mean speedups of 3.54x, 2.61x, and 1.25x while reducing the model sizes by 88%, 82%, and 53%. In comparison, traditional weight pruning achieves mean speedups of 1.90x, 1.06x, 0.41x across the three platforms.", "Although deep Convolutional Neural Network (CNN) has shown better performance in various computer vision tasks, its application is restricted by a significant increase in storage and computation. Among CNN simplification techniques, parameter pruning is a promising approach which aims at reducing the number of weights of various layers without intensively reducing the original accuracy. In this paper, we propose a novel progressive parameter pruning method, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. Specifically, unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria for the training process. Experiments show that, with 4x speedup, SPP can accelerate AlexNet with only 0.3% loss of top-5 accuracy and VGG-16 with 0.8% loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2x speedup ResNet-50 only suffers 0.8% loss of top-5 accuracy on ImageNet. We further prove the effectiveness of our method on transfer learning task on Flower-102 dataset with AlexNet.", "The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel level pruning for reducing the computational complexity of a deep convolutional neural network. Pruning feature maps reduces the width of a layer and hence does not need any sparse representation. Further, kernel pruning converts the dense connectivity pattern into a sparse one. Due to coarse nature, these pruning granularities can be exploited by GPUs and VLSI based implementations. We propose a simple and generic strategy to choose the least adversarial pruning masks for both granularities. The pruned networks are retrained which compensates the loss in accuracy. We obtain the best pruning ratios when we prune a network with both granularities. Experiments with the CIFAR-10 dataset show that more than 85% sparsity can be induced in the convolution layers with less than 1% increase in the misclassification rate of the baseline network." ] }
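To make the contrast between unstructured and structured pruning discussed above concrete, the following NumPy sketch prunes the same weight matrix both ways; the 50% pruning ratio and the column-norm criterion are arbitrary illustrative choices.

```python
# NumPy sketch contrasting the two pruning styles: unstructured magnitude pruning
# zeroes individual weights and yields an irregular sparse matrix (good compression,
# irregular memory access), while structured pruning removes whole columns
# (e.g. neurons/channels), so the remaining matrix stays dense and small.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 8))          # a small fully connected layer's weights
ratio = 0.5                          # illustrative pruning ratio

# Unstructured: drop the 50% of individual weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), ratio)
W_unstructured = np.where(np.abs(W) >= threshold, W, 0.0)

# Structured: drop the 50% of columns with the smallest L2 norm, then compact.
col_norms = np.linalg.norm(W, axis=0)
keep = np.argsort(col_norms)[W.shape[1] // 2:]
W_structured = W[:, np.sort(keep)]

print("unstructured nonzeros:", np.count_nonzero(W_unstructured), "shape", W_unstructured.shape)
print("structured   nonzeros:", np.count_nonzero(W_structured), "shape", W_structured.shape)
```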
1908.01950
2966170251
Wild video-based image set recognition is becoming increasingly important. However, the contents of these collected videos are often complicated, and how to efficiently perform set modeling and feature extraction is a big challenge for set-based classification algorithms. In recent years, some proposed image set classification methods have made a considerable advance by modeling the original image set with a covariance matrix, linear subspace, or Gaussian distribution. As a matter of fact, most of them just adopt a single geometric model to describe each given image set, which may lose some other useful information for classification. To tackle this problem, we propose a novel algorithm to model each image set from a multi-geometric perspective. Specifically, the covariance matrix, linear subspace, and Gaussian distribution are applied for set representation simultaneously. In order to fuse these multiple heterogeneous Riemannian manifold-valued features, the well-equipped Riemannian kernel functions are first utilized to map them into high-dimensional Hilbert spaces. Then, a multi-kernel metric learning framework is devised to embed the learned hybrid kernels into a lower-dimensional common subspace for classification. We conduct experiments on four widely used datasets corresponding to four different classification tasks: video-based face recognition, set-based object categorization, video-based emotion recognition, and dynamic scene classification, to evaluate the classification performance of the proposed algorithm. Extensive experimental results justify its superiority over the state-of-the-art.
In image set classification, the covariance matrix, the linear subspace, and the Gaussian distribution are three commonly used Riemannian manifold-valued descriptors for image set description. The advantages of the covariance matrix are its simplicity and flexibility in capturing the variations within the set @cite_1 @cite_50 @cite_24 , while the strengths of the linear subspace stem both from its lower computational cost and from its ability to accommodate the effects of various intra-set variations @cite_37 @cite_19 . In comparison, the strength of the Gaussian distribution is that it can describe the variations of the set data by estimating their first-order and second-order statistics simultaneously @cite_44 @cite_3 . The increasing attention paid to image set classification based on these three descriptors stems mainly from three factors, which are presented as follows.
{ "cite_N": [ "@cite_37", "@cite_1", "@cite_3", "@cite_24", "@cite_19", "@cite_44", "@cite_50" ], "mid": [ "2144093206", "2585165747", "2099629511", "2433217581" ], "abstract": [ "We propose a novel discriminative learning approach to image set classification by modeling the image set with its natural second-order statistic, i.e. covariance matrix. Since nonsingular covariance matrices, a.k.a. symmetric positive definite (SPD) matrices, lie on a Riemannian manifold, classical learning algorithms cannot be directly utilized to classify points on the manifold. By exploring an efficient metric for the SPD matrices, i.e., Log-Euclidean Distance (LED), we derive a kernel function that explicitly maps the covariance matrix from the Riemannian manifold to a Euclidean space. With this explicit mapping, any learning method devoted to vector space can be exploited in either its linear or kernel formulation. Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) are considered in this paper for their feasibility for our specific problem. We further investigate the conventional linear subspace based set modeling technique and cast it in a unified framework with our covariance matrix based modeling. The proposed method is evaluated on two tasks: face recognition and object categorization. Extensive experimental results show not only the superiority of our method over state-of-the-art ones in both accuracy and efficiency, but also its stability to two real challenges: noisy set data and varying set size.", "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.", "We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data are typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data population. We show that replacing pixel intensities with gradient orientations and the l2 norm with a cosine-based distance measure offers, to some extend, a remedy to this problem. Within this framework, which we coin Image Gradient Orientations (IGO) subspace learning, we first formulate and study the properties of Principal Component Analysis of image gradient orientations (IGO-PCA). We then show its connection to previously proposed robust PCA techniques both theoretically and experimentally. 
Finally, we derive a number of other popular subspace learning techniques, namely, Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE). Experimental results show that our algorithms significantly outperform popular methods such as Gabor features and Local Binary Patterns and achieve state-of-the-art performance for difficult problems such as illumination and occlusion-robust face recognition. In addition to this, the proposed IGO-methods require the eigendecomposition of simple covariance matrices and are as computationally efficient as their corresponding l2 norm intensity-based counterparts. Matlab code for the methods presented in this paper can be found at http: ibug.doc.ic.ac.uk resources.", "Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification. In this paper, we present a novel descriptor based on a hierarchical distribution of pixel features. A hierarchical covariance descriptor has been successfully applied for image classification. However, the mean information of pixel features, which is absent in covariance, tends to be major discriminative information of person images. To solve this problem, we describe a local region in an image via hierarchical Gaussian distribution in which both means and covariances are included in their parameters. More specifically, we model the region as a set of multiple Gaussian distributions in which each Gaussian represents the appearance of a local patch. The characteristics of the set of Gaussians are again described by another Gaussian distribution. In both steps, unlike the hierarchical covariance descriptor, the proposed descriptor can model both the mean and the covariance information of pixel features properly. The results of experiments conducted on five databases indicate that the proposed descriptor exhibits re-markably high performance which outperforms the state-of-the-art descriptors for person re-identification." ] }
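The related-work passage in the preceding record describes three Riemannian set descriptors: the covariance matrix (a point on the SPD manifold), the linear subspace (a point on the Grassmann manifold), and the Gaussian distribution. The NumPy sketch below shows, under illustrative assumptions (an image set given as a d-by-n matrix of feature vectors, an arbitrary regularization constant and kernel bandwidth, and a simple SVD-based subspace), how such descriptors and two common Riemannian kernels can be computed; it is not the cited papers' exact formulation.

```python
import numpy as np
from scipy.linalg import logm

def set_descriptors(X, subspace_dim=5, reg=1e-3):
    """X: (d, n) matrix holding the n feature vectors of one image set."""
    mu = X.mean(axis=1, keepdims=True)                   # first-order statistics
    C = np.cov(X) + reg * np.eye(X.shape[0])             # SPD covariance descriptor
    U, _, _ = np.linalg.svd(X - mu, full_matrices=False)
    U = U[:, :subspace_dim]                              # linear-subspace descriptor (Grassmann point)
    return mu.ravel(), C, U                              # Gaussian ~ (mu, C), subspace ~ U

def log_euclidean_kernel(C1, C2, gamma=0.1):
    """Riemannian RBF kernel between two SPD matrices under the Log-Euclidean distance."""
    d = np.linalg.norm(logm(C1) - logm(C2), 'fro')
    return np.exp(-gamma * d ** 2)

def projection_kernel(U1, U2):
    """Projection-metric kernel between two linear subspaces."""
    return np.linalg.norm(U1.T @ U2, 'fro') ** 2

# toy usage: two random "image sets" with 20 frames of 10-dimensional features each
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(10, 20)), rng.normal(size=(10, 20))
(_, C1, U1), (_, C2, U2) = set_descriptors(X1), set_descriptors(X2)
print(log_euclidean_kernel(C1, C2), projection_kernel(U1, U2))
```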
1908.01950
2966170251
Wild video-based image set recognition is becoming increasingly important. However, the contents of these collected videos are often complicated, and how to efficiently perform set modeling and feature extraction is a big challenge for set-based classification algorithms. In recent years, some proposed image set classification methods have made a considerable advance by modeling the original image set with a covariance matrix, linear subspace, or Gaussian distribution. As a matter of fact, most of them just adopt a single geometric model to describe each given image set, which may lose some other useful information for classification. To tackle this problem, we propose a novel algorithm to model each image set from a multi-geometric perspective. Specifically, the covariance matrix, linear subspace, and Gaussian distribution are applied for set representation simultaneously. In order to fuse these multiple heterogeneous Riemannian manifold-valued features, the well-equipped Riemannian kernel functions are first utilized to map them into high-dimensional Hilbert spaces. Then, a multi-kernel metric learning framework is devised to embed the learned hybrid kernels into a lower-dimensional common subspace for classification. We conduct experiments on four widely used datasets corresponding to four different classification tasks: video-based face recognition, set-based object categorization, video-based emotion recognition, and dynamic scene classification, to evaluate the classification performance of the proposed algorithm. Extensive experimental results justify its superiority over the state-of-the-art.
Manifold Dimensionality Reduction Based Image Set Classification: To circumvent the above problem, some algorithms that jointly perform linear mapping and metric learning directly on the original Riemannian manifold have been suggested recently @cite_50 @cite_19 @cite_53 , so that a discriminative lower-dimensional manifold can be obtained. Harandi et al. @cite_50 produce a lower-dimensional SPD manifold with an orthogonal mapping obtained by devising a discriminative metric learning framework on the original high-dimensional data. To reduce the computational complexity, Huang et al. @cite_53 put forward a novel Log-Euclidean metric learning algorithm that forms a desirable SPD manifold by directly embedding the tangent space of the original SPD manifold into a lower-dimensional one. Similarly, Huang et al. @cite_19 learn lower-dimensional and more discriminative Grassmannian-valued feature representations for the original high-dimensional Grassmann manifold under a devised projection metric learning framework. Thanks to fully considering the manifold geometry, the above algorithms show good classification performance. Yet they also share an inherent design flaw: the mapping, although defined and learned on a non-linear Riemannian geometry, is itself linear, which seems unreasonable.
{ "cite_N": [ "@cite_19", "@cite_53", "@cite_50" ], "mid": [ "1862697533", "2962772276", "1922045146", "566612420" ], "abstract": [ "Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods.", "Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods.", "In video based face recognition, great success has been made by representing videos as linear subspaces, which typically lie in a special type of non-Euclidean space known as Grassmann manifold. To leverage the kernel-based methods developed for Euclidean space, several recent methods have been proposed to embed the Grassmann manifold into a high dimensional Hilbert space by exploiting the well established Project Metric, which can approximate the Riemannian geometry of Grassmann manifold. Nevertheless, they inevitably introduce the drawbacks from traditional kernel-based methods such as implicit map and high computational cost to the Grassmann manifold. To overcome such limitations, we propose a novel method to learn the Projection Metric directly on Grassmann manifold rather than in Hilbert space. From the perspective of manifold learning, our method can be regarded as performing a geometry-aware dimensionality reduction from the original Grassmann manifold to a lower-dimensional, more discriminative Grassmann manifold where more favorable classification can be achieved. 
Experiments on several real-world video face datasets demonstrate that the proposed method yields competitive performance compared with the state-of-the-art algorithms.", "The manifold of Symmetric Positive Definite (SPD) matrices has been successfully used for data representation in image set classification. By endowing the SPD manifold with Log-Euclidean Metric, existing methods typically work on vector-forms of SPD matrix logarithms. This however not only inevitably distorts the geometrical structure of the space of SPD matrix logarithms but also brings low efficiency especially when the dimensionality of SPD matrix is high. To overcome this limitation, we propose a novel metric learning approach to work directly on logarithms of SPD matrices. Specifically, our method aims to learn a tangent map that can directly transform the matrix logarithms from the original tangent space to a new tangent space of more discriminability. Under the tangent map framework, the novel metric learning can then be formulated as an optimization problem of seeking a Mahalanobis-like matrix, which can take the advantage of traditional metric learning techniques. Extensive evaluations on several image set classification tasks demonstrate the effectiveness of our proposed metric learning method." ] }
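The related-work passage in the preceding record describes mapping a high-dimensional SPD manifold to a lower-dimensional one through an orthonormal projection, i.e. sending an SPD matrix C to W^T C W. The sketch below illustrates only the mapping itself; the choice of W here (top eigenvectors of the mean SPD matrix) is a naive stand-in for the discriminative, Grassmann-manifold optimization used in the cited works.

```python
import numpy as np

def spd_projection(C, W):
    """Map a d x d SPD matrix to a p x p SPD matrix via a column-orthonormal W (d x p)."""
    return W.T @ C @ W

def fit_projection(spd_mats, p):
    """Naive choice of W: the top-p eigenvectors of the mean SPD matrix.
    (The cited works instead learn W discriminatively on a Grassmann manifold.)"""
    mean_C = sum(spd_mats) / len(spd_mats)
    eigvals, eigvecs = np.linalg.eigh(mean_C)
    order = np.argsort(eigvals)[::-1][:p]
    return eigvecs[:, order]                     # orthonormal columns

# toy usage: three random 50 x 50 SPD matrices reduced to 5 x 5
rng = np.random.default_rng(1)
mats = []
for _ in range(3):
    A = rng.normal(size=(50, 50))
    mats.append(A @ A.T + 1e-3 * np.eye(50))
W = fit_projection(mats, p=5)
low_dim = [spd_projection(C, W) for C in mats]
print(low_dim[0].shape, bool(np.all(np.linalg.eigvalsh(low_dim[0]) > 0)))   # (5, 5) True
```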
1908.01841
2966573247
Neural dialogue models, despite their successes, still suffer from a lack of relevance, diversity, and in many cases coherence in their generated responses. These issues have been attributed to reasons including (1) short-range model architectures that capture limited temporal dependencies, (2) limitations of the maximum likelihood training objective, (3) the concave entropy profile of dialogue datasets resulting in short and generic responses, and (4) the out-of-vocabulary problem leading to the generation of a large number of @math tokens. Autoregressive transformer models such as GPT-2, although trained with the maximum likelihood objective, do not suffer from the out-of-vocabulary problem and have demonstrated an excellent ability to capture long-range structures in language modeling tasks. In this paper, we examine the use of autoregressive transformer models for multi-turn dialogue response generation. In our experiments, we employ small and medium GPT-2 models (with publicly available pretrained language model parameters) on the open-domain Movie Triples dataset and the closed-domain Ubuntu Dialogue dataset. The models (with and without pretraining) achieve significant improvements over the baselines for multi-turn dialogue response generation. They also produce state-of-the-art performance on the two datasets based on several metrics, including BLEU, ROUGE, and distinct n-gram.
There has been an ongoing effort to drastically improve the performance of dialogue response generation models, especially in multi-turn scenarios. In particular, effort has been made to improve the performance of RNN-based models by exploring alternative frameworks such as variational auto-encoding @cite_15 and generative adversarial networks @cite_35 that simultaneously encourage response relevance and diversity. Despite the improvements provided by these models, the quality of model-generated responses is still much below the human level. Recent work on autoregressive transformer-based language models @cite_9 @cite_34 @cite_2 @cite_21 has, however, shown an impressive ability to exploit long temporal dependencies in textual data. In this work, we investigate the effectiveness of the long temporal memory capability of autoregressive transformer-based models for multi-turn dialogue modeling. For our experiments, we adopted the GPT-2 autoregressive transformer architecture @cite_2 due to its large sequence length (1024). To the best of our knowledge, there has been no previous work on using autoregressive transformer-based models for dialogue modeling.
{ "cite_N": [ "@cite_35", "@cite_9", "@cite_21", "@cite_2", "@cite_15", "@cite_34" ], "mid": [ "2806935606", "2551884415", "2952798561", "2593751037" ], "abstract": [ "We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.", "We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention on the history as the conversation continues, as opposed to standard attention (or alignment) models with a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several metrics, including complementary diversity-based metrics, human evaluation, and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We promote further coherence via topic modeling-based reranking.", "We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed length representations using Recurrent Neural Networks (RNN). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need of an explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. 
We evaluate the proposed models on movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users.", "In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering, which can be both time consuming and expensive. We provide baselines in two different environments: one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation, and one where models are trained to select the correct next response from a list of candidate responses. These are both evaluated on a recall task that we call Next Utterance Classification (NUC), as well as other generation-specific metrics. Finally, we provide a qualitative error analysis to help determine the most promising directions for future research on the Ubuntu Dialogue Corpus, and for end-to-end dialogue systems in general." ] }
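The preceding record motivates multi-turn response generation with the GPT-2 autoregressive transformer. Below is a minimal sketch using the Hugging Face transformers library and the publicly released (not dialogue-fine-tuned) gpt2 checkpoint; the turn-separator convention and decoding parameters are illustrative assumptions rather than the paper's training setup.

```python
# pip install torch transformers
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Concatenate the dialogue history into one context; the EOS token is used here
# as a turn separator, which is an assumption rather than the paper's scheme.
turns = ["i need to update the kernel on my machine .",
         "which release are you running ?",
         "the latest lts release ."]
context = tokenizer.eos_token.join(turns) + tokenizer.eos_token
input_ids = tokenizer.encode(context, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 40,    # generate up to 40 new tokens as the response
    do_sample=True, top_p=0.9, temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```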
1908.01714
2966789542
In their seminal work on systemic risk in financial markets, Eisenberg and Noe proposed and studied a model with @math firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation -- if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of @math away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
To our knowledge, strategic aspects are currently reflected only in models of network formation @cite_10 @cite_9 . A three-period economy is assumed in which firms can invest in risky assets. To do so, they strategically decide to borrow funds from outside investors as well as from other firms. Thereby a network of financial cross-holdings is endogenously formed as each firm maximizes its expected profit. The results show that risk-seeking firms tend to over-connect, leading to stronger contagion and systemic risk compared to the socially optimal risk-sharing allocation. Note that in this case strategic aspects only play a role in the formation of inter-bank relations, whereas the clearing mechanism is assumed to follow the same process as in @cite_3 .
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_3" ], "mid": [ "100186914", "2160164079", "2013403206", "2103720435" ], "abstract": [ "I develop a model of the financial sector in which endogenous intermediation among debt financed banks generates excessive systemic risk. Financial institutions have incentives to capture intermediation spreads through strategic borrowing and lending decisions. By doing so, they tilt the division of surplus along an intermediation chain in their favor, while at the same time reducing aggregate surplus. I show that a core-periphery network -- few highly interconnected and many sparsely connected banks -- endogenously emerges in my model. The network is inefficient relative to a constrained efficient benchmark since banks who make risky investments \"overconnect\", exposing themselves to excessive counterparty risk, while banks who mainly provide funding end up with too few connections. The predictions of the model are consistent with empirical evidence in the literature.", "We provide a framework to study the formation of financial networks and investigate the interplay between banks' lending incentives and the emergence of systemic risk. We show that under natural contracting assumptions, banks fail to internalize the implications of their lending decisions for the banks with whom they are not directly contracting, thus establishing the presence of a financial network externality in the process of network formation. We then illustrate how the presence of this externality can function as a channel for the emergence of systemic risk. In particular, we show that (i) banks may \"overlend\" in equilibrium, creating channels over which idiosyncratic shocks can translate into systemic crises via financial contagion; and (ii) they may not spread their lending sufficiently among the set of potential borrowers, creating insufficiently connected financial networks that are excessively prone to contagious defaults. Finally, we show that banks' private incentives may lead to the formation of financial networks that are overly susceptible to systemic meltdowns with some small probability.", "We propose a simple model of inter-bank borrowing and lending where the evolution of the log-monetary reserves of @math banks is described by a system of diffusion processes coupled through their drifts in such a way that stability of the system depends on the rate of inter-bank borrowing and lending. Systemic risk is characterized by a large number of banks reaching a default threshold by a given time horizon. Our model incorporates a game feature where each bank controls its rate of borrowing lending to a central bank. The optimization reflects the desire of each bank to borrow from the central bank when its monetary reserve falls below a critical level or lend if it rises above this critical level which is chosen here as the average monetary reserve. Borrowing from or lending to the central bank is also subject to a quadratic cost at a rate which can be fixed by the regulator. We solve explicitly for Nash equilibria with finitely many players, and we show that in this model the central bank acts as a clearing house, adding liquidity to the system without affecting its systemic risk. We also study the corresponding Mean Field Game in the limit of large number of banks in the presence of a common noise.", "This paper presents the first study of the endogenous formation of networks by strategic, self-interested agents who benefit from producing and disseminating information. 
This work departs from previous works on network formation (especially in the economics literature) which assume that agents benefit only by acquiring information produced by other agents. The strategic production and dissemination of information have striking consequences. We show first that the network structure that emerges (in equilibrium) typically displays a core-periphery structure, with the few agents at the core playing the role of \"connectors\", creating and maintaining links to the agents at the periphery. We then determine conditions under which the networks that emerge are minimally connected and have short network diameters (properties that are important for efficiency). Finally, we show that the number of agents who produce information and the total amount of information produced in the network grow at the same rate as the agent population; this is in stark contrast to the \"law of the few\" that had been established in previous works which do not consider information dissemination." ] }
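The abstracts in the preceding record build on the Eisenberg-Noe clearing model, in which insolvent firms pay their creditors pro rata and the clearing payment vector is a fixed point p = min(p_bar, e + Pi^T p). A small NumPy sketch of the standard monotone fixed-point iteration is given below; the three-firm liability network and asset values are made up for illustration.

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=10_000):
    """L[i, j]: nominal liability of firm i to firm j; e[i]: external assets of firm i.
    Returns the greatest clearing payment vector of the Eisenberg-Noe model."""
    p_bar = L.sum(axis=1)                                  # total obligations of each firm
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)   # relative liability matrix
    p = p_bar.copy()                                       # start from full payment
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)            # pay in full if possible, else pro rata
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# toy network: firm 0 owes 10 to firm 1, firm 1 owes 8 to firm 2, firm 2 owes 5 to firm 0
L = np.array([[0., 10., 0.],
              [0.,  0., 8.],
              [5.,  0., 0.]])
e = np.array([4., 1., 2.])
print(clearing_vector(L, e))    # -> [9. 8. 5.]
```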
1908.01714
2966789542
In their seminal work on systemic risk in financial markets, Eisenberg and Noe proposed and studied a model with @math firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation -- if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of @math away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
On a more technical level, our game-theoretic approach is related to a number of existing game-theoretic models based on flows in networks. In cooperative game theory, there are several notions of flow games based on a directed flow network. Existing variants include games in which edges are players @cite_2 @cite_8 @cite_22 @cite_0 @cite_7 @cite_20 , or in which each player owns a source-sink pair @cite_1 @cite_12 . The total value of a coalition @math is the profit from a maximum (multi-commodity) flow that can be routed through the network if only the players in @math are present. There is a rich set of results on structural characterizations and computability of solutions in the core, as well as other solution concepts for cooperative games. In contrast to our work, these games are non-strategic. Instead, here we consider each player as a single node with a strategic decision about flow allocation.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_0", "@cite_2", "@cite_12", "@cite_20" ], "mid": [ "2013391976", "2143039680", "2115537584", "2127813674" ], "abstract": [ "We study network games in which each player wishes to connect his source and sink, and the cost of each edge is shared among its users either equally (in Fair Connection Games--FCG's) or arbitrarily (in General Connection Games--GCG's). We study the existence and quality of strong equilibria (SE)--strategy profiles from which no coalition can improve the cost of each of its members--in these settings. We show that SE always exist in the following games: (1) Single source and sink FCG's and GCG's. (2) Single source multiple sinks FCG's and GCG's on series parallel graphs. (3) Multi source and sink FCG's on extension parallel graphs. As for the quality of the SE, in any FCG with n players, the cost of any SE is bounded by H(n) (i.e., the harmonic sum), contrasted with the [Theta](n) price of anarchy. For any GCG, any SE is optimal.", "We consider computational aspects of a game theoretic approach to network reliability. Consider a network where failure of one node may disrupt communication between two other nodes. We model this network as a simple coalitional game, called the vertex Connectivity Game (CG). In this game, each agent owns a vertex, and controls all the edges going to and from that vertex. A coalition of agents wins if it fully connects a certain subset of vertices in the graph, called the primary vertices. We show that power indices, which express an agent's ability to affect the outcome of the vertex connectivity game, can be used to identify significant possible points of failure in the communication network, and can thus be used to increase network reliability. We show that in general graphs, calculating the Banzhaf power index is #P-complete, but suggest a polynomial algorithm for calculating this index in trees. We also show a polynomial algorithm for computing the core of a CG, which allows a stable division of payments to coalition agents.", "A key question in cooperative game theory is that of coalitional stability, usually captured by the notion of the core --the set of outcomes such that no subgroup of players has an incentive to deviate. However, some coalitional games have empty cores, and any outcome in such a game is unstable. In this paper, we investigate the possibility of stabilizing a coalitional game by using external payments. We consider a scenario where an external party, which is interested in having the players work together, offers a supplemental payment to the grand coalition (or, more generally, a particular coalition structure). This payment is conditional on players not deviating from their coalition(s). The sum of this payment plus the actual gains of the coalition(s) may then be divided among the agents so as to promote stability. We define the cost of stability (CoS) as the minimal external payment that stabilizes the game. We provide general bounds on the cost of stability in several classes of games, and explore its algorithmic properties. To develop a better intuition for the concepts we introduce, we provide a detailed algorithmic study of the cost of stability in weighted voting games, a simple but expressive class of games which can model decision-making in political bodies, and cooperation in multiagent settings. 
Finally, we extend our model and results to games with coalition structures.", "Coalitional games allow subsets (coalitions) of players to cooperate to receive a collective payoff. This payoff is then distributed “fairly” among the members of that coalition according to some division scheme. Various solution concepts have been proposed as reasonable schemes for generating fair allocations. The Shapley value is one classic solution concept: player i’s share is precisely equal to i’s expected marginal contribution if the players join the coalition one at a time, in a uniformly random order. In this paper, we consider the class of supermodular games (sometimes called convex games), and give a fully polynomial-time randomized approximation scheme (FPRAS) to compute the Shapley value to within a (1 ±e) factor in monotone supermodular games. We show that this result is tight in several senses: no deterministic algorithm can approximate Shapley value as well, no randomized algorithm can do better, and both monotonicity and supermodularity are required for the existence of an efficient (1 ±e)-approximation algorithm. We also argue that, relative to supermodularity, monotonicity is a mild assumption, and we discuss how to transform supermodular games to be monotonic." ] }
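The related-work passage in the preceding record mentions cooperative flow games in which each edge is owned by a player and the value of a coalition is the maximum flow routable using only its members' edges. The sketch below computes such coalition values with networkx on a toy network; the graph, capacities, and player-to-edge assignment are illustrative.

```python
import networkx as nx
from itertools import combinations

# each edge of the flow network is owned by exactly one player
edges = {
    "a": ("s", "v", 4.0),
    "b": ("s", "w", 2.0),
    "c": ("v", "t", 3.0),
    "d": ("w", "t", 5.0),
    "e": ("v", "w", 1.0),
}

def coalition_value(coalition, source="s", sink="t"):
    """Value of a coalition = maximum s-t flow using only the edges its members own."""
    G = nx.DiGraph()
    G.add_nodes_from(["s", "v", "w", "t"])
    for player in coalition:
        u, v, cap = edges[player]
        G.add_edge(u, v, capacity=cap)
    if not nx.has_path(G, source, sink):
        return 0.0
    value, _ = nx.maximum_flow(G, source, sink)
    return value

print(coalition_value({"a", "c"}))     # 3.0
print(coalition_value(set(edges)))     # grand coalition: 6.0
for S in combinations(sorted(edges), 2):
    print(S, coalition_value(set(S)))
```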
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e. they embed the event history into a vector, not the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) leveraging the structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled respectively. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
First, the conventional varying-order Markov models @cite_12 treat this problem as a discrete-time sequence prediction task. Based on the observed history of states, the prediction of the event type is given by the most likely state that the state transition process will evolve into at the next step. An obvious limitation of the family of Markov models is that they assume the state transition process proceeds in unit time-steps; they can neither capture the temporal dependency of continuous time nor predict the exact time of the next event. Moreover, Markov models cannot handle long-range dependencies on history events when the event sequence is long, because the size of the state space grows exponentially with the number of time steps considered in the model. It is worth mentioning that semi-Markov models @cite_3 can, to some extent, model continuous time-intervals between two states by assuming the intervals follow some simple distributions, but they still suffer from the state space explosion problem when dealing with long time dependencies.
{ "cite_N": [ "@cite_3", "@cite_12" ], "mid": [ "2003095404", "1535517661", "2550431203", "2594789366" ], "abstract": [ "Abstract When the initial distribution and transition rates for a continuous time Markov chain are not known precisely, robust methods are needed to study the evolution of the process in time to avoid judgements based on unwarranted precision. We follow the ideas successfully applied in the study of discrete time model to build a framework of imprecise Markov chains in continuous time. The imprecision in the distributions over the set of states is modelled with upper and lower expectation functionals, which equivalently represent sets of probability distributions. Uncertainty in transitions is modelled with sets of transition rates compatible with available information. The Kolmogorov’s backward equation is then generalised into the form of a generalised differential equation, with generalised derivatives and set valued maps. The upper and lower expectation functionals corresponding to imprecise distributions at given times are determined by the maximal and minimal solutions of these equations. The second part of the paper is devoted to numerical methods for approximating the boundary solutions. The methods are based on discretisation of the time interval. A uniform and adaptive grid discretisations are examined. The latter is computationally much more efficient than the former one, but is not applicable on every interval. Therefore, to achieve maximal efficiency a combination of the methods is used.", "We present an on-the-fly abstraction technique for infinite-state continuous -time Markov chains. We consider Markov chains that are specified by a finite set of transition classes. Such models naturally represent biochemical reactions and therefore play an important role in the stochastic modeling of biological systems. We approximate the transient probability distributions at various time instances by solving a sequence of dynamically constructed abstract models, each depending on the previous one. Each abstract model is a finite Markov chain that represents the behavior of the original, infinite chain during a specific time interval. Our approach provides complete information about probability distributions, not just about individual parameters like the mean. The error of each abstraction can be computed, and the precision of the abstraction refined when desired. We implemented the algorithm and demonstrate its usefulness and efficiency on several case studies from systems biology.", "Abstract Continuous-time Markov chains are mathematical models that are used to describe the state-evolution of dynamical systems under stochastic uncertainty, and have found widespread applications in various fields. In order to make these models computationally tractable, they rely on a number of assumptions that—as is well known—may not be realistic for the domain of application; in particular, the ability to provide exact numerical parameter assessments, and the applicability of time-homogeneity and the eponymous Markov property. In this work, we extend these models to imprecise continuous-time Markov chains (ICTMC's), which are a robust generalisation that relaxes these assumptions while remaining computationally tractable. 
More technically, an ICTMC is a set of “precise” continuous-time finite-state stochastic processes, and rather than computing expected values of functions, we seek to compute lower expectations , which are tight lower bounds on the expectations that correspond to such a set of “precise” models. Note that, in contrast to e.g. Bayesian methods, all the elements of such a set are treated on equal grounds; we do not consider a distribution over this set. Together with the conjugate notion of upper expectation , the bounds that we provide can then be intuitively interpreted as providing best- and worst-case scenarios with respect to all the models in our set of stochastic processes. The first part of this paper develops a formalism for describing continuous-time finite-state stochastic processes that does not require the aforementioned simplifying assumptions. Next, this formalism is used to characterise ICTMC's and to investigate their properties. The concept of lower expectation is then given an alternative operator-theoretic characterisation, by means of a lower transition operator , and the properties of this operator are investigated as well. Finally, we use this lower transition operator to derive tractable algorithms (with polynomial runtime complexity w.r.t. the maximum numerical error) for computing the lower expectation of functions that depend on the state at any finite number of time points.", "We present automated techniques for the verification and control of partially observable, probabilistic systems for both discrete and dense models of time. For the discrete-time case, we formally model these systems using partially observable Markov decision processes; for dense time, we propose an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. We give probabilistic temporal logics that can express a range of quantitative properties of these models, relating to the probability of an event's occurrence or the expected value of a reward measure. We then propose techniques to either verify that such a property holds or synthesise a controller for the model which makes it true. Our approach is based on a grid-based abstraction of the uncountable belief space induced by partial observability and, for dense-time models, an integer discretisation of real-time behaviour. The former is necessarily approximate since the underlying problem is undecidable, however we show how both lower and upper bounds on numerical results can be generated. We illustrate the effectiveness of the approach by implementing it in the PRISM model checker and applying it to several case studies from the domains of task and network scheduling, computer security and planning." ] }
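The preceding related-work passage discusses discrete-time, varying-order Markov models and their state-space blow-up with the order k. A minimal sketch of a k-th order Markov next-event predictor over propagation sequences is given below; it predicts only the next node, not its timing, which is exactly the limitation the passage points out. The toy cascades and node ids are made up.

```python
from collections import Counter, defaultdict

def fit_markov(sequences, k=2):
    """Count transitions from each length-k context to the next event type.
    The number of possible contexts grows as |V|**k, which is the blow-up discussed above."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(k, len(seq)):
            counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def predict_next(counts, history, k=2):
    """Most likely next event type given the last k events (None if the context is unseen)."""
    ctx = tuple(history[-k:])
    if ctx not in counts:
        return None
    return counts[ctx].most_common(1)[0][0]

# toy propagation cascades over node ids: who becomes active next in a sequence
cascades = [["u1", "u3", "u2", "u4"],
            ["u1", "u3", "u2", "u5"],
            ["u2", "u3", "u2", "u4"]]
model = fit_markov(cascades, k=2)
print(predict_next(model, ["u1", "u3"], k=2))   # -> 'u2'
```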
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e. they embed the event history into a vector, not the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) leveraging the structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled respectively. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
Second, temporal point processes with conditional intensity functions provide a more general framework for sequential event data modeling. The temporal point process (TPP) is powerful for modeling time-stamped event sequences in continuous time. Early work dates back to the Hawkes process @cite_40 , which is appropriate for self-exciting and mutually-exciting processes such as earthquakes and their aftershocks @cite_15 @cite_26 . As an effective model for event sequences, the TPP has been widely used in various applications, including data mining tasks, e.g. social infectivity learning @cite_37 , conflict analysis @cite_7 , crime modeling @cite_31 , email network analytics @cite_39 and extremal behavior of stock prices @cite_23 , and event prediction tasks, e.g. failure prediction @cite_21 , sales outcome forecasting @cite_2 , and literature citation prediction @cite_11 .
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_31", "@cite_7", "@cite_21", "@cite_39", "@cite_40", "@cite_23", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "2509830164", "1938647246", "2605191235", "2086092403" ], "abstract": [ "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions.", "Determinantal point processes (DPP) serve as a practicable modeling for many applications of repulsive point processes. A known approach for simulation was proposed in Hough(2006) , which generate the desired distribution point wise through rejection sampling. Unfortunately, the size of rejection could be very large. In this paper, we investigate the application of perfect simulation via coupling from the past (CFTP) on DPP. We give a general framework for perfect simulation on DPP model. It is shown that the limiting sequence of the time-to-coalescence of the coupling is bounded by @math An application is given to the stationary models in DPP.", "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. 
In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.", "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach." ] }
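The preceding record surveys temporal point processes defined through a conditional intensity. The sketch below shows Ogata-style thinning for simulating one event sequence from a conditional intensity that admits a global upper bound; the toy intensity (a baseline plus a burst after the most recent event only) is an illustrative choice, and an unbounded intensity such as a full Hawkes process would need an adaptively updated bound instead.

```python
import numpy as np

def simulate_by_thinning(intensity, lam_max, T=20.0, seed=0):
    """Ogata-style thinning: simulate one event sequence on [0, T] from a conditional
    intensity lambda*(t | history) that is globally bounded by lam_max."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)        # candidate time from a dominating Poisson process
        if t > T:
            return events
        if rng.uniform() <= intensity(t, events) / lam_max:
            events.append(t)                       # accept the candidate as a real event

def toy_intensity(t, history, mu=0.5, alpha=1.5, beta=2.0):
    """Baseline rate plus a decaying burst of excitation after the most recent event only,
    so the intensity is bounded by mu + alpha."""
    if not history:
        return mu
    return mu + alpha * np.exp(-beta * (t - history[-1]))

events = simulate_by_thinning(toy_intensity, lam_max=0.5 + 1.5, T=20.0)
print(len(events), [round(s, 2) for s in events[:5]])
```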
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e. they embed the event history into a vector, not the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) leveraging the structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled respectively. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
Traditional TPP models rely on manually designed parametric forms of the conditional intensity function @math , which measures the instantaneous event occurrence rate at time @math . A few popular examples include: the Poisson process @cite_25 , whose basic form is history-independent, @math , and dates back to the 1900s; reinforced Poisson processes @cite_19 , which capture the 'rich-get-richer' mechanism by @math , where @math mimics the aging effect while @math is the accumulation of history events; the self-exciting process (Hawkes process) @cite_16 , an additive model that captures the self-exciting effect of history events, @math ; and the reactive point process @cite_21 , a generalization of the Hawkes process that adds a self-inhibiting term to account for the inhibiting effects of history, @math .
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_21", "@cite_25" ], "mid": [ "1938647246", "2090320383", "2509830164", "2605191235" ], "abstract": [ "Determinantal point processes (DPP) serve as a practicable modeling for many applications of repulsive point processes. A known approach for simulation was proposed in Hough(2006) , which generate the desired distribution point wise through rejection sampling. Unfortunately, the size of rejection could be very large. In this paper, we investigate the application of perfect simulation via coupling from the past (CFTP) on DPP. We give a general framework for perfect simulation on DPP model. It is shown that the limiting sequence of the time-to-coalescence of the coupling is bounded by @math An application is given to the stationary models in DPP.", "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs.", "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. 
The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions.", "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled respectively. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
One obvious limitation of the above TPP models is that they assume all samples obey a single parametric form, which is too idealistic for real-world data. By contrast, recurrent neural network (RNN) based models @cite_4 @cite_17 @cite_22 are devised for learning point processes from data. In these works, RNNs and their variants, e.g. long short-term memory (LSTM) networks, are used to model the conditional intensity function over time. More recently, attention mechanisms have been introduced to improve the interpretability of the neural model @cite_18.
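As a rough illustration of this RNN-based approach (a PyTorch sketch in the spirit of recurrent marked TPP models, not the exact architecture of @cite_4 or its attention-based successors; all layer sizes are assumed), a recurrent network can embed the event history and a small head can read a positive conditional intensity off its hidden state:

```python
import torch
import torch.nn as nn

class RNNPointProcess(nn.Module):
    """Minimal sketch: an RNN embeds the event history and a linear head maps
    the hidden state to a strictly positive conditional intensity."""

    def __init__(self, num_event_types, emb_dim=16, hidden_dim=32):
        super().__init__()
        self.type_emb = nn.Embedding(num_event_types, emb_dim)
        self.rnn = nn.GRU(emb_dim + 1, hidden_dim, batch_first=True)  # +1 for inter-event time
        self.intensity_head = nn.Linear(hidden_dim, 1)

    def forward(self, event_types, inter_event_times):
        # event_types: (batch, seq_len) long tensor; inter_event_times: (batch, seq_len)
        x = torch.cat([self.type_emb(event_types),
                       inter_event_times.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)  # history embedding at every step
        # softplus keeps the learned intensity positive
        return nn.functional.softplus(self.intensity_head(h)).squeeze(-1)

if __name__ == "__main__":
    model = RNNPointProcess(num_event_types=5)
    types = torch.randint(0, 5, (2, 7))
    gaps = torch.rand(2, 7)
    print(model(types, gaps).shape)  # torch.Size([2, 7])
```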
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_17" ], "mid": [ "2743945814", "2890832667", "2274880506", "1951216520" ], "abstract": [ "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture. Additionally, RCRN achieves state-of-the-art results on several well-established datasets.", "Recurrent neural network (RNN) has been broadly applied to natural language process (NLP) problems. This kind of neural network is designed for modeling sequential data and has been testified to be quite efficient in sequential tagging tasks. In this paper, we propose to use bi-directional RNN with long short-term memory (LSTM) units for Chinese word segmentation, which is a crucial task for modeling Chinese sentences and articles. Classical methods focus on designing and combining hand-craft features from context, whereas bi-directional LSTM network (BLSTM) does not need any prior knowledge or pre-designing, and is expert in creating hierarchical feature representation of contextual information from both directions. 
Experiment result shows that our approach gets state-of-the-art performance in word segmentation on both traditional Chinese datasets and simplified Chinese datasets.", "Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled respectively. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
When dealing with event propagation sequences, a major limitation of these existing studies is that the structural information of the latent graph @math is not utilized. Conventional TPP models, including the state-of-the-art method in @cite_4, treat event propagation modeling as general event sequence modeling and take input @math, while our GBTPP model leverages the structural information and node proximity of graph @math and takes input @math, where @math is the node embedding vector obtained by a graph representation learning method for @math.
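The following is a minimal sketch of this idea, not the authors' actual GBTPP formulation: it assumes node embeddings come from some off-the-shelf graph representation learner (e.g. DeepWalk or node2vec) and simply feeds them alongside the embedded event history when scoring which node the event propagates to next; all module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class GraphBiasedSketch(nn.Module):
    """Illustrative sketch only: the RNN history embedding is augmented with a
    pre-trained node embedding of the node where each event occurs, so graph
    structure can bias the prediction of the next propagation target."""

    def __init__(self, node_embeddings, hidden_dim=32):
        super().__init__()
        num_nodes, node_dim = node_embeddings.shape
        # frozen embeddings from any graph representation learning method
        self.node_emb = nn.Embedding.from_pretrained(node_embeddings, freeze=True)
        self.rnn = nn.GRU(node_dim + 1, hidden_dim, batch_first=True)
        self.next_node_head = nn.Linear(hidden_dim + node_dim, num_nodes)

    def forward(self, node_ids, inter_event_times):
        y = self.node_emb(node_ids)                              # (B, T, node_dim)
        x = torch.cat([y, inter_event_times.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)                                       # embedded event history
        # concatenate history embedding with the current node embedding (side information)
        return self.next_node_head(torch.cat([h, y], dim=-1))    # scores over next nodes

if __name__ == "__main__":
    emb = torch.randn(100, 16)                 # 100 nodes, 16-dim embeddings (assumed)
    model = GraphBiasedSketch(emb)
    nodes = torch.randint(0, 100, (2, 5))
    gaps = torch.rand(2, 5)
    print(model(nodes, gaps).shape)            # torch.Size([2, 5, 100])
```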
{ "cite_N": [ "@cite_4" ], "mid": [ "2169415915", "1961165281", "1938647246", "2950857600" ], "abstract": [ "Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a \"valid\" or \"maxent-normal\" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the \"Bethe method\", the \"junction graph method\", the \"cluster variation method\", and the \"region graph method\". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.", "This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labeled, streaming graph data. We introduce a generalization of the BTER model of by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into the graphs' structure and helps contextualized detected event. For evaluation, two new hierarchical detectors are tested against a baseline Gaussian method on a synthetic graph sequence with seeded anomalies. We demonstrate that in a labeled setting with community structure, our graph statistics-based approach outperforms both a distribution-based detector and the baseline, accurately detecting anomalies at the node, subgraph, and graph levels.", "Determinantal point processes (DPP) serve as a practicable modeling for many applications of repulsive point processes. A known approach for simulation was proposed in Hough(2006) , which generate the desired distribution point wise through rejection sampling. Unfortunately, the size of rejection could be very large. In this paper, we investigate the application of perfect simulation via coupling from the past (CFTP) on DPP. We give a general framework for perfect simulation on DPP model. It is shown that the limiting sequence of the time-to-coalescence of the coupling is bounded by @math An application is given to the stationary models in DPP.", "We present a Bayesian tensor factorization model for inferring latent group structures from dynamic pairwise interaction patterns. For decades, political scientists have collected and analyzed records of the form \"country @math took action @math toward country @math at time @math \"---known as dyadic events---in order to form and test theories of international relations. 
We represent these event data as a tensor of counts and develop Bayesian Poisson tensor factorization to infer a low-dimensional, interpretable representation of their salient patterns. We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods. We also provide a comparison of our variational updates to their maximum likelihood counterparts. In doing so, we identify a better way to form point estimates of the latent factors than that typically used in Bayesian Poisson matrix factorization. Finally, we showcase our model as an exploratory analysis tool for political scientists. We show that the inferred latent factor matrices capture interpretable multilateral relations that both conform to and inform our knowledge of international affairs." ] }