Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict)
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Multiple analyses have been performed for the CSMA/CD protocol (CSMA with collision detection), a predecessor of CSMA/CA that uses a constant backoff, i.e. the backoff time is not increased exponentially; see @cite_12 @cite_3 @cite_24 @cite_5 @cite_8. In all these approaches frame collisions have to be modelled explicitly, as part of the protocol description. In contrast, our approach handles collisions in the semantics, thereby achieving a clear separation between protocol specifications and link-layer behaviour.
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_24", "@cite_5", "@cite_12" ], "mid": [ "2001480730", "1600358513", "1988873236", "2001714096" ], "abstract": [ "CSMA with Enhanced Collision Avoidance (CSMA ECA) uses a deterministic backoff after successful transmissions to significantly reduce the number of collisions. This paper assesses by means of simulations the throughput and conditional collision probability obtained from a single-hop ad-hoc network using CSMA ECA. A comparison with the legacy CSMA CA reveals that the proposed protocol outperforms the legacy one in all considered scenarios. Specifically, it is shown that CSMA ECA presents advantages for both rigid and elastic flows.", "Carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The paper also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA ECA, specially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation.", "Carrier Sense Multiple Access Collision Detection (CSMA CD) is the protocol for carrier transmission access in Ethernet networks (international standard IEEE 802.3). On Ethernet, any Network Interface Card (NIC) can try to send a packet in a channel at any time. If another NIC tries to send a packet at the same time, a collision is said to occur and the packets are discarded. The CSMA CD protocol was designed to avoid this problem, more precisely to allow a NIC to send its packet without collision. This is done by way of a randomized exponential backoff process. In this paper, we analyse the correctness of the CSMA CD protocol, using techniques from probabilistic model checking and approximate probabilistic model checking. The tools that we use are PRISM and APMC. Moreover, we provide a quantitative analysis of some CSMA CD properties.", "Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a recently proposed modification to the well-known CSMA CA protocol. By using a deterministic backoff after successful transmissions, the number of collisions decreases. This article presents a model that captures the behaviour of CSMA ECA in both saturated and non-saturated scenarios. The results, which are validated by simulations, show that CSMA ECA effectively prevents collisions and, therefore, it can deliver a higher throughput than CSMA CA." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
@cite_12 @cite_3 use probabilistic timed automata (PTAs) to model the protocol, and use probabilistic model checking (PRISM) and approximate probabilistic model checking (APMC) for their analysis. The model presented in @cite_24 is based on PTAs as well, but uses the model checker Uppaal as verification tool. These approaches, although formal, have very little in common with our approach. On the one hand, it is not easy to change the model from CSMA/CD to CSMA/CA, as the latter requires unbounded data structures (or the like) to model the exponential backoff. On the other hand, as usual, model checking suffers from state-space explosion, so only small networks (usually fewer than ten nodes) can be analysed. This is sufficient and convenient when it comes to finding counterexamples, but these approaches cannot provide guarantees for arbitrary network topologies, as ours does.
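For intuition on why the exponential backoff of CSMA/CA resists finite-state modelling, the following sketch shows binary exponential backoff in Python; the contention-window constants are typical IEEE 802.11 values chosen purely for illustration and are not taken from any of the cited models.

```python
import random

# Schematic sketch (not taken from the cited models): binary exponential
# backoff as used by CSMA/CA.  The contention window doubles after every
# failed attempt, so a faithful model has to track a growing retry counter,
# which is what makes finite-state model checking of CSMA/CA awkward.
# CW_MIN / CW_MAX are typical IEEE 802.11 values, used only for illustration.
CW_MIN = 16
CW_MAX = 1024

def backoff_slots(retries: int) -> int:
    """Draw a uniform backoff delay (in slots) for the given retry count."""
    cw = min(CW_MIN * (2 ** retries), CW_MAX)   # window doubles per collision
    return random.randrange(cw)                  # uniform in [0, cw - 1]

if __name__ == "__main__":
    for k in range(6):
        cw = min(CW_MIN * 2 ** k, CW_MAX)
        print(f"after {k} collisions: window = {cw:4d}, "
              f"sampled backoff = {backoff_slots(k)} slots")
```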
{ "cite_N": [ "@cite_24", "@cite_3", "@cite_12" ], "mid": [ "2160106331", "2546429222", "2144812583", "2021255060" ], "abstract": [ "Abstract We present an approximation technique, that can render real-time model checking of safety and universal path properties more efficient. It is beneficial, when loops lead to repetition of control situations. Basically we augment a timed automata model with carefully selected extra transitions. This increases the size of the state-space, but potentially decreases the number of symbolic states to be explored by orders of magnitude. We give a formal definition of a timed automata formalism, enriched with basic data types, hand-shake synchronization, urgency, and committed locations. We prove by means of a trace semantics, that if a safety property can be established in the augmented model, it also holds for the original model. We extend our technique to a richer set of properties, that can be decided via a set of traces (universal path properties). In order for universal path properties to carry over to the original model, the semantics of the timed automata formalism is formulated relative to the applied augmentation. Our technique is particularly useful in systems, where a scheduler dictates repetition of control over elapsing time. As a typical example we mention translations of LEGO® RCX™ programs to U ppaal models, where the Round-Robin scheduler is a static entity. We allow scheduler and associated tasks to “park”, until some timing or environmental conditions are met. We apply our technique on a brick-sorter model for a safety property and report run-time data.", "Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.", "It is known that CSMA CA channel access schemes are not well suited to meet the high traffic demand of wireless mesh networks. One possible way to increase traffic carrying capacity is to use a spatial TDMA (STDMA) approach in conjunction with the physical interference model, which allows more aggressive scheduling than the protocol interference model on which CSMA CA is based. While an efficient centralized solution for STDMA with physical interference has been recently proposed, no satisfactory distributed approaches have been introduced so far. In this paper, we first prove that no localized distributed algorithm can solve the problem of building a feasible schedule under the physical interference model. 
Motivated by this, we design a global primitive, called SCREAM, which is used to verify the feasibility of a schedule during an iterative distributed scheduling procedure. Based on this primitive, we present two distributed protocols for efficient, distributed scheduling under the physical interference model, and we prove an approximation bound for one of the protocols. We also present extensive packet-level simulation results, which show that our protocols achieve schedule lengths very close to those of the centralized algorithm and have running times that are practical for mesh networks.", "Over the last years there has been an increasing research effort directed towards the automatic verification of infinite state systems, such as timed automata, hybrid automata, data-independent systems, relational automata, Petri nets, lossy channel systems, context-free and push-down processes. We present a method for deciding reachability properties of networks of timed processes. Such a network consists of an arbitrary set of identical timed automata, each with a single real-valued clock. Using a standard reduction from safety properties to reachability properties, we can use our algorithm to decide general safety properties of timed networks. To our knowledge, this is the first decidability result concerning verification of systems that are infinite-state in \"two dimensions\": they contain an arbitrary set of (identical) processes, and they use infinite data-structures, viz real-valued clocks. We illustrate our method by showing how it can be used to automatically verify Fischer's protocol, a timer-based protocol for enforcing mutual exclusion among an arbitrary number of processes.Finally, we show undecidability of the recurrent state problem: given a state in a timed network, check whether there is a computation of the network visiting the state infinitely often. This implies undecidability of model checking for any temporal logic which is sufficiently expressive to encode the recurrent state problem, such as PTL, CTL, etc." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
@cite_5 use models of CSMA/CD to compare the model checker SPIN with a second verification tool. Their models are much more abstract than ours. It is proven that no collisions will ever occur, but the exact conditions under which this statement holds are not stated.
{ "cite_N": [ "@cite_5" ], "mid": [ "2001480730", "2116394223", "1988873236", "1600358513" ], "abstract": [ "CSMA with Enhanced Collision Avoidance (CSMA ECA) uses a deterministic backoff after successful transmissions to significantly reduce the number of collisions. This paper assesses by means of simulations the throughput and conditional collision probability obtained from a single-hop ad-hoc network using CSMA ECA. A comparison with the legacy CSMA CA reveals that the proposed protocol outperforms the legacy one in all considered scenarios. Specifically, it is shown that CSMA ECA presents advantages for both rigid and elastic flows.", "We present a theory, based on statistical mechanics, to evaluate analytically the performance of uncoded, fully synchronous, randomly spread code-division multiple-access (CDMA) multiuser detectors with additive white Gaussian noise (AWGN) channel, under perfect power control, and in the large-system limit. Application of the replica method, a tool developed in the literature of statistical mechanics, allows us to derive analytical expressions for the bit-error rate, as well as the multiuser efficiency, of the individually optimum (IO) and jointly optimum (JO) multiuser detectors over the whole range of noise levels. The information-theoretic capacity of the randomly spread CDMA channel and the performance of decorrelating and linear minimum mean-square error (MMSE) detectors are also derived in the same replica formulation, thereby demonstrating validity of the statistical-mechanical approach.", "Carrier Sense Multiple Access Collision Detection (CSMA CD) is the protocol for carrier transmission access in Ethernet networks (international standard IEEE 802.3). On Ethernet, any Network Interface Card (NIC) can try to send a packet in a channel at any time. If another NIC tries to send a packet at the same time, a collision is said to occur and the packets are discarded. The CSMA CD protocol was designed to avoid this problem, more precisely to allow a NIC to send its packet without collision. This is done by way of a randomized exponential backoff process. In this paper, we analyse the correctness of the CSMA CD protocol, using techniques from probabilistic model checking and approximate probabilistic model checking. The tools that we use are PRISM and APMC. Moreover, we provide a quantitative analysis of some CSMA CD properties.", "Carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The paper also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA ECA, specially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. 
We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
There are far fewer formal analysis techniques available when it comes to CSMA/CA (with and without virtual carrier sensing). Traditional approaches to the analysis of network protocols are simulation and test-bed experiments; this is also the case for CSMA/CA (e.g. @cite_11). While these are important and valid methods for protocol evaluation, in particular for quantitative performance evaluation, they have limitations when it comes to establishing basic protocol-correctness properties.
{ "cite_N": [ "@cite_11" ], "mid": [ "2001480730", "2154233760", "2144812583", "2166319816" ], "abstract": [ "CSMA with Enhanced Collision Avoidance (CSMA ECA) uses a deterministic backoff after successful transmissions to significantly reduce the number of collisions. This paper assesses by means of simulations the throughput and conditional collision probability obtained from a single-hop ad-hoc network using CSMA ECA. A comparison with the legacy CSMA CA reveals that the proposed protocol outperforms the legacy one in all considered scenarios. Specifically, it is shown that CSMA ECA presents advantages for both rigid and elastic flows.", "In this paper, the performance of the ALOHA and CSMA MAC protocols are analyzed in spatially distributed wireless networks. The main system objective is correct reception of packets, and thus the analysis is performed in terms of outage probability. In our network model, packets belonging to specific transmitters arrive randomly in space and time according to a 3-D Poisson point process, and are then transmitted to their intended destinations using a fully-distributed MAC protocol. A packet transmission is considered successful if the received SINR is above a predefined threshold for the duration of the packet. Accurate bounds on the outage probabilities are derived as a function of the transmitter density, the number of backoffs and retransmissions, and in the case of CSMA, also the sensing threshold. The analytical expressions are validated with simulation results. For continuous-time transmissions, CSMA with receiver sensing (which involves adding a feedback channel to the conventional CSMA protocol) is shown to yield the best performance. Moreover, the sensing threshold of CSMA is optimized. It is shown that introducing sensing for lower densities (i.e., in sparse networks) is not beneficial, while for higher densities (i.e., in dense networks), using an optimized sensing threshold provides significant gain.", "It is known that CSMA CA channel access schemes are not well suited to meet the high traffic demand of wireless mesh networks. One possible way to increase traffic carrying capacity is to use a spatial TDMA (STDMA) approach in conjunction with the physical interference model, which allows more aggressive scheduling than the protocol interference model on which CSMA CA is based. While an efficient centralized solution for STDMA with physical interference has been recently proposed, no satisfactory distributed approaches have been introduced so far. In this paper, we first prove that no localized distributed algorithm can solve the problem of building a feasible schedule under the physical interference model. Motivated by this, we design a global primitive, called SCREAM, which is used to verify the feasibility of a schedule during an iterative distributed scheduling procedure. Based on this primitive, we present two distributed protocols for efficient, distributed scheduling under the physical interference model, and we prove an approximation bound for one of the protocols. We also present extensive packet-level simulation results, which show that our protocols achieve schedule lengths very close to those of the centralized algorithm and have running times that are practical for mesh networks.", "Decentralized medium access control schemes for wireless networks based on CSMA CA, such as the IEEE 802.11 protocol, are known to be unfair. 
In multihop networks, they can even favor some links to such an extent that the others suffer from virtually complete starvation. This observation has been reported in quite a few works, but the factors causing it are still not well understood. We find that the capture effect and the relative values of the receive and carrier sensing ranges play a crucial role in the performance of these protocols. Using a simple Markovian model, we show that an idealized CSMA CA protocol suffers from starvation when the receiving and sensing ranges are equal, but quite surprisingly that this unfairness is reduced or even disappears when these two ranges are sufficiently different. We also show that starvation has a positive counterpart, namely organization. When its access intensity is large the protocol organizes the transmissions in space in such a way that it maximizes the number of concurrent successful transmissions. We obtain exact formula for the so-called spatial reuse of the protocol on large line networks." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Following the spirit of the above-mentioned research on model checking CSMA, Fruth @cite_18 analyses CSMA/CA using PTAs and probabilistic model checking. He considers properties such as the minimum probability of two nodes successfully completing their transmissions, and the maximum expected number of collisions until two nodes have successfully completed their transmissions. As before, this analysis technique does not scale; in @cite_18 the experiments are limited to two contending nodes only.
{ "cite_N": [ "@cite_18" ], "mid": [ "2001480730", "2158384898", "2962815534", "1600358513" ], "abstract": [ "CSMA with Enhanced Collision Avoidance (CSMA ECA) uses a deterministic backoff after successful transmissions to significantly reduce the number of collisions. This paper assesses by means of simulations the throughput and conditional collision probability obtained from a single-hop ad-hoc network using CSMA ECA. A comparison with the legacy CSMA CA reveals that the proposed protocol outperforms the legacy one in all considered scenarios. Specifically, it is shown that CSMA ECA presents advantages for both rigid and elastic flows.", "It was shown recently that carrier sense multiple access (CSMA)-like distributed algorithms can achieve the maximal throughput in wireless networks (and task processing networks) under certain assumptions. One important but idealized assumption is that the sensing time is negligible, so that there is no collision. In this paper, we study more practical CSMA-based scheduling algorithms with collisions. First, we provide a Markov chain model and give an explicit throughput formula that takes into account the cost of collisions and overhead. The formula has a simple form since the Markov chain is \"almost\" time-reversible. Second, we propose transmission-length control algorithms to approach throughput-optimality in this case. Sufficient conditions are given to ensure the convergence and stability of the proposed algorithms. Finally, we characterize the relationship between the CSMA parameters (such as the maximum packet lengths) and the achievable capacity region.", "We develop an analytical framework to derive the meta distribution and moments of the conditional success probability (CSP), which is defined as success probability for a given realization of the transmitters, in large-scale co-channel uplink and downlink non-orthogonal multiple access (NOMA) networks with one NOMA cluster per cell. The moments of CSP translate to various network performance metrics such as the standard success or signal-to-interference ratio (SIR) coverage probability (which is the 1-st moment), the mean local delay (which is the −1st moment in a static network setting), and the meta distribution (which is the complementary cumulative distribution function of the success or SIR coverage probability and can be approximated by using the 1st and 2nd moments). For the uplink NOMA network, to make the framework tractable, we propose two point process models for the spatial locations of the inter-cell interferers by utilizing the base station (BS) user pair correlation function. We validate the proposed models by comparing the second moment measure of each model with that of the actual point process for the inter-cluster (or inter-cell) interferers obtained via simulations. For downlink NOMA, we derive closed-form solutions for the moments of the CSP, success (or coverage) probability, mean local delay, and meta distribution for the users. As an application of the developed analytical framework, we use the closed-form expressions to optimize the power allocations for downlink NOMA users in order to maximize the success probability of a given NOMA user with and without latency constraints. Closed-form optimal solutions for the transmit powers are obtained for two-user NOMA scenario. 
We note that maximizing the success probability with latency constraints can significantly impact the optimal power solutions for low SIR thresholds and favor orthogonal multiple access.", "Carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The paper also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA ECA, specially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Beyond model checking, simulation and test-bed experiments, we are only aware of two other formal approaches. In @cite_4 Markov chains are used to derive an accurate analytical model for computing the throughput of CSMA/CA. Calculating throughput, however, is a task orthogonal to our goal of proving (functional) correctness.
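As a rough illustration of how a throughput figure falls out of a Markov-chain model, the toy sketch below computes the stationary distribution of a made-up three-state channel chain; it is not the analytical model of @cite_4, and all transition probabilities are invented.

```python
import numpy as np

# Toy illustration, not the cited analytical model: a three-state
# discrete-time Markov chain over channel states (idle, successful
# transmission, collision).  Throughput is read off the stationary
# distribution as the long-run fraction of time spent in the "success"
# state.  The transition probabilities below are invented for illustration.
P = np.array([
    [0.5, 0.4, 0.1],   # from idle
    [0.6, 0.3, 0.1],   # from success
    [0.7, 0.2, 0.1],   # from collision
])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary distribution:", np.round(pi, 3))
print("normalised throughput:  ", round(float(pi[1]), 3))
```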
{ "cite_N": [ "@cite_4" ], "mid": [ "2130986870", "2158384898", "2032566227", "2054671542" ], "abstract": [ "In this paper, we consider the throughput modelling and fairness provisioning in CSMA CA based ad-hoc networks. The main contributions are: firstly, a throughput model based on Markovian analysis is proposed for the CSMA CA network with a general topology. Simulation investigations are presented to verify its performance. Secondly, fairness issues in CSMA CA networks are discussed based on the throughput model. The origin of unfairness is explained and the trade-off between throughput and fairness is illustrated. Thirdly, throughput approximations based on local topology information are proposed and their performances are investigated. Fourthly, three different fairness metrics are presented and their distributed implementations, based on the throughput approximation, are proposed.", "It was shown recently that carrier sense multiple access (CSMA)-like distributed algorithms can achieve the maximal throughput in wireless networks (and task processing networks) under certain assumptions. One important but idealized assumption is that the sensing time is negligible, so that there is no collision. In this paper, we study more practical CSMA-based scheduling algorithms with collisions. First, we provide a Markov chain model and give an explicit throughput formula that takes into account the cost of collisions and overhead. The formula has a simple form since the Markov chain is \"almost\" time-reversible. Second, we propose transmission-length control algorithms to approach throughput-optimality in this case. Sufficient conditions are given to ensure the convergence and stability of the proposed algorithms. Finally, we characterize the relationship between the CSMA parameters (such as the maximum packet lengths) and the achievable capacity region.", "We present a novel modeling approach to derive closed-form throughput expressions for CSMA networks with hidden terminals. The key modeling principle is to break the interdependence of events in a wireless network using conditional expressions that capture the effect of a specific factor each, yet preserve the required dependences when combined together. Different from existing models that use numerical aggregation techniques, our approach is the first to jointly characterize the three main critical factors affecting flow throughput (referred to as hidden terminals, information asymmetry and flow-in-the-middle) within a single analytical expression. We have developed a symbolic implementation of the model, that we use for validation against realistic simulations and experiments with real wireless hardware, observing high model accuracy in the evaluated scenarios. The derived closed-form expressions enable new analytical studies of capacity and protocol performance that would not be possible with prior models. We illustrate this through an application of network utility maximization in complex networks with collisions, hidden terminals, asymmetric interference and flow-in-the-middle instances. Despite that such problematic scenarios make utility maximization a challenging problem, the model-based optimization yields vast fairness gains and an average per-flow throughput gain higher than 500 with respect to 802.11 in the evaluated networks.", "Random-access algorithms such as CSMA provide a popular mechanism for distributed medium access control in large-scale wireless networks. 
In recent years, tractable stochastic models have been shown to yield accurate throughput estimates for CSMA networks. We consider a saturated random-access network on a general conflict graph, and prove that for every feasible combination of throughputs, there exists a unique vector of back-off rates that achieves this throughput vector. This result entails proving global invertibility of the non-linear function that describes the throughputs of all nodes in the network. We present several numerical procedures for calculating this inverse, based on fixed-point iteration and Newton's method. Finally, we provide closed-form results for several special conflict graphs using the theory of Markov random fields." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
An approach aiming at proving the correctness of CSMA/CA with virtual carrier sensing, and hence related to ours, is presented in @cite_7. Based on stochastic bigraphs with sharing, it uses rewrite rules to analyse quantitative properties. Although the approach is capable of analysing arbitrary topologies, applying the rewrite rules requires a particular topology to be modelled as a directed acyclic graph structure, which is part of the bigraph.
{ "cite_N": [ "@cite_7" ], "mid": [ "2121822142", "2168959209", "2144812583", "1539042193" ], "abstract": [ "Stochastic geometry proves to be a powerful tool for modeling dense wireless networks adopting random MAC protocols such as ALOHA and CSMA. The main strength of this methodology lies in its ability to account for the randomness in the nodes' location jointly with an accurate description at the physical layer, based on the SINR, that allows to consider also random fading on each link. Existing models of CSMA networks adopting the stochastic geometry approach suffer from two important weaknesses: 1) they permit to evaluate only spatial averages of the main performance measures, thus hiding possibly huge discrepancies in the performance achieved by individual nodes; 2) they are analytically tractable only when nodes are distributed over the area according to simple spatial processes (e.g., the Poisson point process). In this paper we show how the stochastic geometry approach can be extended to overcome the above limitations, allowing to obtain node throughput distributions as well as to analyze a significant class of topologies in which nodes are not independently placed.", "In this paper, a mathematical model for the beacon-enabled mode of the IEEE 802.15.4 medium-access control (MAC) protocol is provided. A personal area network (PAN) composed of multiple nodes, which transmit data to a PAN coordinator through direct links or multiple hops, is considered. The application is query based: Upon reception of the beacon transmitted by the PAN coordinator, each node tries to transmit its packet using the superframe structure defined by the IEEE 802.15.4 protocol. Those nodes that do not succeed in accessing the channel discard the packet; at the next superframe, a new packet is generated. The aim of the paper is to develop a flexible mathematical tool able to study beacon-enabled 802.15.4 networks organized in different topologies. Both the contention access period (CAP) and the contention-free period defined by the standard are considered. The slotted carrier-sense multiple access with collision avoidance (CSMA CA) algorithm used in the CAP portion of the superframe is analytically modeled. The model describes the probability of packet successful reception and access delay statistics. Moreover, both star and tree-based topologies are dealt with; a suitable comparison between these topologies is provided. The model is a useful tool for the design of MAC parameters and to select the better topology. The mathematical model is validated through simulation results. The model differs from those previously published by other authors in the literature as it precisely follows the MAC procedure defined by the standard in the context of the application scenario described.", "It is known that CSMA CA channel access schemes are not well suited to meet the high traffic demand of wireless mesh networks. One possible way to increase traffic carrying capacity is to use a spatial TDMA (STDMA) approach in conjunction with the physical interference model, which allows more aggressive scheduling than the protocol interference model on which CSMA CA is based. While an efficient centralized solution for STDMA with physical interference has been recently proposed, no satisfactory distributed approaches have been introduced so far. In this paper, we first prove that no localized distributed algorithm can solve the problem of building a feasible schedule under the physical interference model. 
Motivated by this, we design a global primitive, called SCREAM, which is used to verify the feasibility of a schedule during an iterative distributed scheduling procedure. Based on this primitive, we present two distributed protocols for efficient, distributed scheduling under the physical interference model, and we prove an approximation bound for one of the protocols. We also present extensive packet-level simulation results, which show that our protocols achieve schedule lengths very close to those of the centralized algorithm and have running times that are practical for mesh networks.", "Carrier sense multiple access (CSMA), which resolves contentions over wireless networks in a fully distributed fashion, has recently gained a lot of attentions since it has been proved that appropriate control of CSMA parameters guarantees optimality in terms of stability (i.e., scheduling) and system-wide utility (i.e., scheduling and congestion control). Most CSMA-based algorithms rely on the popular Markov chain Monte Carlo technique, which enables one to find optimal CSMA parameters through iterative loops of simulation-and-update. However, such a simulation-based approach often becomes a major cause of exponentially slow convergence, being poorly adaptive to flow topology changes. In this paper, we develop distributed iterative algorithms which produce approximate solutions with convergence in polynomial time for both stability and utility maximization problems. In particular, for the stability problem, the proposed distributed algorithm requires, somewhat surprisingly, only one iteration among links. Our approach is motivated by the Bethe approximation (introduced by Yedidia, Freeman, and Weiss) allowing us to express approximate solutions via a certain nonlinear system with polynomial size. Our polynomial convergence guarantee comes from directly solving the nonlinear system in a distributed manner, rather than multiple simulation-and-update loops in existing algorithms. We provide numerical results to show that the algorithm produces highly accurate solutions and converges much faster than the prior ones." ] }
1907.13359
2966761695
Deep learning algorithms have achieved excellent performance lately in a wide range of fields (e.g., computer version). However, a severe challenge faced by deep learning is the high dependency on hyper-parameters. The algorithm results may fluctuate dramatically under the different configuration of hyper-parameters. Addressing the above issue, this paper presents an efficient Orthogonal Array Tuning Method (OATM) for deep learning hyper-parameter tuning. We describe the OATM approach in five detailed steps and elaborate on it using two widely used deep neural network structures (Recurrent Neural Networks and Convolutional Neural Networks). The proposed method is compared to the state-of-the-art hyper-parameter tuning methods including manually (e.g., grid search and random search) and automatically (e.g., Bayesian Optimization) ones. The experiment results state that OATM can significantly save the tuning time compared to the state-of-the-art methods while preserving the satisfying performance.
Apart from the aforementioned methods, orthogonal-array-based hyper-parameter tuning has already been used in a range of research areas such as mechanical and electrical engineering. @cite_6 applied an orthogonal-array-based approach to optimize the cutting parameters in end milling, and @cite_10 optimized wire electrical discharge machining (WEDM) process parameters with the orthogonal array method. While these traditional methods are not directly suited to deep learning algorithms, the effectiveness of orthogonal-array tuning has been demonstrated across many research topics. Motivated by this, we adopt OATM for deep learning hyper-parameter tuning; to the best of our knowledge, our work is among the first studies in this area.
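The sketch below illustrates the flavour of orthogonal-array tuning using the standard L9(3^4) array; the hyper-parameter names, levels, and the `train_and_score` placeholder are hypothetical and not taken from the paper.

```python
import numpy as np

# Minimal sketch in the spirit of OATM (not the authors' code): four
# hyper-parameters at three levels each are covered by the standard L9(3^4)
# orthogonal array, i.e. 9 trials instead of the 3^4 = 81 of grid search.
# `train_and_score` is a hypothetical placeholder for a real training run.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

LEVELS = {
    "lr":      [1e-4, 1e-3, 1e-2],
    "batch":   [32, 64, 128],
    "hidden":  [64, 128, 256],
    "dropout": [0.0, 0.2, 0.5],
}

def train_and_score(cfg):            # placeholder: plug in real training here
    return -abs(np.log10(cfg["lr"]) + 3) - cfg["dropout"]   # fake objective

names = list(LEVELS)
scores = [train_and_score({n: LEVELS[n][lvl] for n, lvl in zip(names, row)})
          for row in L9]

# Taguchi-style range analysis: average score per level of each factor,
# then combine the best level of every factor into one configuration.
best = {}
for j, n in enumerate(names):
    level_means = [np.mean([s for s, row in zip(scores, L9) if row[j] == lvl])
                   for lvl in range(3)]
    best[n] = LEVELS[n][int(np.argmax(level_means))]
print("suggested configuration:", best)
```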
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2075314456", "2084841915", "1579064899", "2042992924" ], "abstract": [ "Wire electrical discharge machining (WEDM) is extensively used in machining of conductive materials when precision is of prime importance. Rough cutting operation in WEDM is treated as a challenging one because improvement of more than one machining performance measures viz. met al removal rate (MRR), surface finish (SF) and cutting width (kerf) are sought to obtain a precision work. Using Taguchi’s parameter design, significant machining parameters affecting the performance measures are identified as discharge current, pulse duration, pulse frequency, wire speed, wire tension, and dielectric flow. It has been observed that a combination of factors for optimization of each performance measure is different. In this study, the relationship between control factors and responses like MRR, SF and kerf are established by means of nonlinear regression analysis, resulting in a valid mathematical model. Finally, genetic algorithm, a popular evolutionary approach, is employed to optimize the wire electrical discharge machining process with multiple objectives. The study demonstrates that the WEDM process parameters can be adjusted to achieve better met al removal rate, surface finish and cutting width simultaneously.", "Abstract In this study, the Taguchi method is used to find the optimal cutting parameters for surface roughness in turning. The orthogonal array, the signal-to-noise ratio, and analysis of variance are employed to study the performance characteristics in turning operations of AISI 1030 steel bars using TiN coated tools. Three cutting parameters namely, insert radius, feed rate, and depth of cut, are optimized with considerations of surface roughness. Experimental results are provided to illustrate the effectiveness of this approach.", "We introduce a means of automating machine learning (ML) for big data tasks, by performing scalable stochastic Bayesian optimisation of ML algorithm parameters and hyper-parameters. More often than not, the critical tuning of ML algorithm parameters has relied on domain expertise from experts, along with laborious hand-tuning, brute search or lengthy sampling runs. Against this background, Bayesian optimisation is finding increasing use in automating parameter tuning, making ML algorithms accessible even to non-experts. However, the state of the art in Bayesian optimisation is incapable of scaling to the large number of evaluations of algorithm performance required to fit realistic models to complex, big data. We here describe a stochastic, sparse, Bayesian optimisation strategy to solve this problem, using many thousands of noisy evaluations of algorithm performance on subsets of data in order to effectively train algorithms for big data. We provide a comprehensive benchmarking of possible sparsification strategies for Bayesian optimisation, concluding that a Nystrom approximation offers the best scaling and performance for real tasks. Our proposed algorithm demonstrates substantial improvement over the state of the art in tuning the parameters of a Gaussian Process time series prediction task on real, big data.", "We present an auto-tuning system for optimizing I O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I O stack. 
The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. We consistently demonstrate I O write speedups between 2x and 100x for test configurations." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare with the floating point training. We evaluate on both MNIST and Fashion MNIST corpuses, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
As early as the 1980s, low-precision arithmetic was explored in shallow neural networks to decrease both compute and memory complexity for training and inference without deteriorating performance @cite_21 @cite_3 @cite_14 @cite_12. In some scenarios, this bit-precision constraint also improves DNN performance, with the quantization noise acting as a regularizer @cite_17 @cite_22. The outcome of these studies indicates that 16- and 8-bit DNN parameters are capable of satisfactorily maintaining performance for both training and inference in shallow networks @cite_3 @cite_14 @cite_17. The capability of low-precision arithmetic has been re-evaluated in the deep learning era to reduce the memory footprint and energy consumption of training and inference @cite_25 @cite_26 @cite_11 @cite_30 @cite_7 @cite_4 @cite_15 @cite_0 @cite_29 @cite_6 @cite_31 @cite_35 @cite_9 @cite_19.
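As a generic illustration of the kind of bit-precision constraint these studies impose, the following sketch simulates k-bit symmetric uniform quantization of a weight tensor; it does not reproduce any specific cited scheme.

```python
import numpy as np

# Generic illustration (not a specific cited scheme): simulate storing a
# weight tensor with k-bit symmetric uniform quantization, the kind of
# precision constraint the early low-precision studies imposed.
def quantize(w, bits=8):
    """Round w onto a symmetric k-bit grid scaled to its maximum magnitude."""
    qmax = 2 ** (bits - 1) - 1               # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax
    if scale == 0:
        return w.copy()
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                          # dequantized (simulated) values

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
for bits in (16, 8, 4):
    err = float(np.mean(np.abs(quantize(w, bits) - w)))
    print(f"{bits:2d}-bit weights: mean abs quantization error = {err:.2e}")
```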
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_22", "@cite_29", "@cite_3", "@cite_15", "@cite_4", "@cite_21", "@cite_17", "@cite_26", "@cite_7", "@cite_6", "@cite_19", "@cite_25", "@cite_12", "@cite_14", "@cite_9", "@cite_0", "@cite_31", "@cite_11" ], "mid": [ "2946955515", "2963521187", "2889797931", "2924943819" ], "abstract": [ "Reduced precision computation for deep neural networks is one of the key areas addressing the widening compute gap driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18 34 50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.", "This paper tackles the problem of training a deep convolutional neural network with both low-precision weights and low-bitwidth activations. Optimizing a low-precision network is very challenging since the training process can easily get trapped in a poor local minima, which results in substantial accuracy loss. To mitigate this problem, we propose three simple-yet-effective approaches to improve the network training. First, we propose to use a two-stage optimization strategy to progressively find good local minima. Specifically, we propose to first optimize a net with quantized weights and then quantized activations. This is in contrast to the traditional methods which optimize them simultaneously. Second, following a similar spirit of the first method, we propose another progressive optimization approach which progressively decreases the bit-width from high-precision to low-precision during the course of training. Third, we adopt a novel learning scheme to jointly train a full-precision model alongside the low-precision one. By doing so, the full-precision model provides hints to guide the low-precision model training. Extensive experiments on various datasets (i.e., CIFAR-100 and ImageNet) show the effectiveness of the proposed methods. To highlight, using our methods to train a 4-bit precision network leads to no performance decrease in comparison with its full-precision counterpart with standard network architectures (i.e., AlexNet and ResNet-50).", "The state-of-the-art hardware platforms for training deep neural networks are moving from traditional single precision (32-bit) computations towards 16 bits of precision - in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. 
However, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. Here we demonstrate, for the first time, the successful training of deep neural networks using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. In addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. The use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4 times improved throughput over today's systems.", "Deep neural networks (DNNs) have been demonstrated as effective prognostic models across various domains, e.g. natural language processing, computer vision, and genomics. However, modern-day DNNs demand high compute and memory storage for executing any reasonably complex task. To optimize the inference time and alleviate the power consumption of these networks, DNN accelerators with low-precision representations of data and DNN parameters are being actively studied. An interesting research question is in how low-precision networks can be ported to edge-devices with similar performance as high-precision networks. In this work, we employ the fixed-point, floating point, and posit numerical formats at ≤8-bit precision within a DNN accelerator, Deep Positron, with exact multiply-and-accumulate (EMAC) units for inference. A unified analysis quantifies the trade-offs between overall network efficiency and performance across five classification tasks. Our results indicate that posits are a natural fit for DNN inference, outperforming at ≤8-bit precision, and can be realized with competitive resource requirements relative to those of floating point." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare with the floating point training. We evaluate on both MNIST and Fashion MNIST corpuses, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
Aside from the BFP numerical format, Narang et al. explored mixed-precision floating point @cite_11, using 16-bit floating-point weights, activations, and gradients during both the forward and backward passes. To prevent accuracy loss caused by underflow in 16-bit floating point, weight updates are accumulated in a 32-bit floating-point master copy of the weights. Additionally, to prevent gradients of very small magnitude from becoming zero when represented in 16-bit floating point, a new loss-scaling approach is proposed.
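A minimal sketch of this recipe, assuming a toy linear model and an illustrative loss-scale constant, is given below; it simulates FP16 compute with an FP32 master copy of the weights in NumPy and is not the cited implementation.

```python
import numpy as np

# Schematic NumPy simulation of the mixed-precision recipe described above
# (FP16 compute, FP32 master weights, loss scaling) on a toy linear model;
# it is not the cited implementation, and LOSS_SCALE = 1024 is an
# illustrative constant.
LOSS_SCALE = 1024.0
LR = 1e-2

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 16)).astype(np.float16)
y = rng.normal(size=(64, 1)).astype(np.float16)

master_w = np.zeros((16, 1), dtype=np.float32)        # FP32 master copy

for step in range(200):
    w16 = master_w.astype(np.float16)                  # cast for FP16 compute
    pred = x @ w16                                     # FP16 forward pass
    err = pred - y
    # Gradient of the *scaled* loss (equivalent here to scaling the gradient)
    # keeps tiny values representable in FP16; unscale in FP32 before updating.
    grad16 = (x.T @ err) * np.float16(2.0 * LOSS_SCALE / len(x))
    grad32 = grad16.astype(np.float32) / LOSS_SCALE
    master_w -= LR * grad32                            # FP32 weight update

loss = float(np.mean((x @ master_w.astype(np.float16) - y).astype(np.float32) ** 2))
print("final mean-squared error:", round(loss, 4))
```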
{ "cite_N": [ "@cite_11" ], "mid": [ "2947629474", "2964228333", "2946955515", "2785961331" ], "abstract": [ "This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of IEEE 754 floating-point format (FP32) and conversion to from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.", "This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. 
The code will be made publicly available.", "Reduced precision computation for deep neural networks is one of the key areas addressing the widening compute gap driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18 34 50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.", "The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular, FP16 accumulating into FP32 (2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output.We propose a shared exponent representation of tensors and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel is examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for ImageNet-1K dataset using SOTA CNNs and achieve highest reported accuracy using half-precision" ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST corpora, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
Recently, Wang and Mellempudi proposed methods to reduce the bit-precision of weights, activations, and gradients to 8 bits by exhaustively analyzing DNN parameters during training @cite_7 @cite_4 . In @cite_4 , a new chunk-based addition is presented to solve the truncation issue caused by adding numbers of large and small magnitude, thereby reducing the number of bits for the accumulator and weight updates to 16. To avoid the loss scaling required by mixed-precision floating-point training, Kalamkar @cite_15 proposed the brain floating point (BFLOAT-16) half-precision format, which has a reduced 7-bit fraction but retains a dynamic range similar to 32-bit floating point thanks to its 8-bit exponent. A side effect of this representation is that conversion between BFLOAT-16 and IEEE single-precision floating point is simple, reducing conversion complexity during training. When training a ResNet model on the ImageNet dataset, BFLOAT-16 achieves the same performance as 32-bit floating point.
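As a rough illustration of the BFLOAT-16 layout (1 sign, 8 exponent, and 7 fraction bits), the sketch below converts 32-bit floats by simply truncating the low 16 mantissa bits; this is a simplification under our own assumptions, since @cite_15 discusses proper rounding modes such as round-to-nearest-even:

    import numpy as np

    def fp32_to_bfloat16_bits(x):
        # Reinterpret float32 as uint32 and keep only the top 16 bits:
        # sign (1) + exponent (8) + fraction (7), i.e. the BFLOAT-16 layout.
        bits = np.asarray(x, dtype=np.float32).view(np.uint32)
        return (bits >> 16).astype(np.uint16)      # truncation, no rounding

    def bfloat16_bits_to_fp32(b):
        # Re-expand by padding the dropped fraction bits with zeros.
        return (b.astype(np.uint32) << 16).view(np.float32)

    x = np.array([3.14159, -0.001, 65504.0], dtype=np.float32)
    print(bfloat16_bits_to_fp32(fp32_to_bfloat16_bits(x)))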
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_7" ], "mid": [ "2947629474", "2946955515", "2964228333", "2899063892" ], "abstract": [ "This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of IEEE 754 floating-point format (FP32) and conversion to from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.", "Reduced precision computation for deep neural networks is one of the key areas addressing the widening compute gap driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18 34 50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.", "This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. 
The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available.", "Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply linear add, Kulisch accumulation and tapered encodings from Gustafson's posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9 top-1 and 0.2 top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8 38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96x the power and 1.12x the area of 8 32-bit integer multiply-add. In 16 bits, our log float multiply-add is 0.59x the power and 0.68x the area of IEEE 754 float16 fused multiply-add, maintaining the same signficand precision and dynamic range, proving useful for training ASICs as well." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST corpora, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
This research builds on earlier studies @cite_31 @cite_35 @cite_9 @cite_19 @cite_33 and is the first to study feedforward neural network training with posits on the MNIST and Fashion MNIST datasets.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_9", "@cite_19", "@cite_31" ], "mid": [ "1570197553", "2750384547", "2950769435", "2526782364" ], "abstract": [ "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.", "We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at this https URL", "We simulate the training of a set of state of the art neural networks, the Maxout networks (, 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct arithmetics: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those arithmetics, we assess the impact of the precision of the computations on the final error of the training. We find that very low precision computation is sufficient not just for running trained networks but also for training them. For example, almost state-of-the-art results were obtained on most datasets with 10 bits for computing activations and gradients, and 12 bits for storing updated parameters.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method." ] }
1907.13314
2965544063
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
Multiple studies have surveyed existing evaluation practices. Lam @cite_59 suggest that it is reasonable to generate a taxonomy of evaluation studies by defining scenarios of evaluation practices that are common in the literature. Their extensive survey is unique and provides many insights for researchers. Specifically, seven scenarios of evaluation practices are discussed along with the goals of each, with exemplar studies and methods used in each scenario. Isenberg @cite_48 continue this effort by extending the number of surveyed studies and introducing an eighth scenario of evaluation practices. These studies helped us build the backbone of our taxonomy as explained in Section . The initial code used to group evaluation methods in our survey was derived from Lam and Isenberg. We then gradually modified the coding of evaluation methods according to the studies we surveyed. In contrast to the approach of previous surveys, which group by common evaluation practices, we group evaluation methods based on the similarities in each method's (sub)activities, with the ultimate goal of analyzing the potential risks associated with them rather than simply describing existing evaluation practices.
{ "cite_N": [ "@cite_48", "@cite_59" ], "mid": [ "2048679005", "2099883114", "179757531", "2963686848" ], "abstract": [ "We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 [email protected]", "Display Omitted Active learning is promising in the areas with complex topics in systematic reviews.Certainty criteria is promising to accelerate screening regardless of the topic.Certainty criteria performs as well as uncertainty criteria in classification.Weighting positive instances is promising to overcome the data imbalance.Unsupervised methods enhance the classification performance. In systematic reviews, the growing number of published studies imposes a significant screening workload on reviewers. Active learning is a promising approach to reduce the workload by automating some of the screening decisions, but it has been evaluated for a limited number of disciplines. The suitability of applying active learning to complex topics in disciplines such as social science has not been studied, and the selection of useful criteria and enhancements to address the data imbalance problem in systematic reviews remains an open problem. We applied active learning with two criteria (certainty and uncertainty) and several enhancements in both clinical medicine and social science (specifically, public health) areas, and compared the results in both. 
The results show that the certainty criterion is useful for finding relevant documents, and weighting positive instances is promising to overcome the data imbalance problem in both data sets. Latent dirichlet allocation (LDA) is also shown to be promising when little manually-assigned information is available. Active learning is effective in complex topics, although its efficiency is limited due to the difficulties in text classification. The most promising criterion and weighting method are the same regardless of the review topic, and unsupervised techniques like LDA have a possibility to boost the performance of active learning without manual annotation.", "Many methods, including supervised and unsupervised algorithms, have been developed for extractive document summarization. Most supervised methods consider the summarization task as a two-class classification problem and classify each sentence individually without leveraging the relationship among sentences. The unsupervised methods use heuristic rules to select the most informative sentences into a summary directly, which are hard to generalize. In this paper, we present a Conditional Random Fields (CRF) based framework to keep the merits of the above two kinds of approaches while avoiding their disadvantages. What is more, the proposed framework can take the outcomes of previous methods as features and seamlessly integrate them. The key idea of our approach is to treat the summarization task as a sequence labeling problem. In this view, each document is a sequence of sentences and the summarization procedure labels the sentences by 1 and 0. The label of a sentence depends on the assignment of labels of others. We compared our proposed approach with eight existing methods on an open benchmark data set. The results show that our approach can improve the performance by more than 7.1 and 12.1 over the best supervised baseline and unsupervised baseline respectively in terms of two popular metrics F1 and ROUGE-2. Detailed analysis of the improvement is presented as well.", "Abstract Context For many years, we have observed industry struggling in defining a high quality requirements engineering (RE) and researchers trying to understand industrial expectations and problems. Although we are investigating the discipline with a plethora of empirical studies, they still do not allow for empirical generalisations. Objective To lay an empirical and externally valid foundation about the state of the practice in RE, we aim at a series of open and reproducible surveys that allow us to steer future research in a problem-driven manner. Method We designed a globally distributed family of surveys in joint collaborations with different researchers and completed the first run in Germany. The instrument is based on a theory in the form of a set of hypotheses inferred from our experiences and available studies. We test each hypothesis in our theory and identify further candidates to extend the theory by correlation and Grounded Theory analysis. Results In this article, we report on the design of the family of surveys, its underlying theory, and the full results obtained from Germany with participants from 58 companies. The results reveal, for example, a tendency to improve RE via internally defined qualitative methods rather than relying on normative approaches like CMMI. We also discovered various RE problems that are statistically significant in practice. For instance, we could corroborate communication flaws or moving targets as problems in practice. 
Our results are not yet fully representative but already give first insights into current practices and problems in RE, and they allow us to draw lessons learnt for future replications. Conclusion Our results obtained from this first run in Germany make us confident that the survey design and instrument are well-suited to be replicated and, thereby, to create a generalisable empirical basis of RE in practice." ] }
1907.13314
2965544063
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
An early study that introduces McGrath's work to the information visualization evaluation context is that of Carpendale @cite_67 , who provides a summary of different quantitative, qualitative, and mixed methodologies along with a discussion of their limitations and challenges. A more recent work by Crisan and Elliott @cite_2 revisits quantitative, qualitative, and mixed methodologies and provides guidance on when and how to correctly apply them. Instead of taking a general view of behavioral methodologies, we use a unified lens to identify limitations in evaluation methods used to prove usefulness, which may follow different methodologies but are indeed used with summative intentions. Similar to Crisan and Elliott, we use validity and generalizability as our analysis criteria and add the feasibility criterion to determine the level of applicability of the methods.
{ "cite_N": [ "@cite_67", "@cite_2" ], "mid": [ "2170947705", "2109992782", "1992743299", "2963611534" ], "abstract": [ "This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model’s effectiveness, efficiency, and robustness.", "The growth of Internet commerce has stimulated the use of collaborative filtering (CF) algorithms as recommender systems. Such systems leverage knowledge about the known preferences of multiple users to recommend items of interest to other users. CF methods have been harnessed to make recommendations about such items as web pages, movies, books, and toys. Researchers have proposed and evaluated many approaches for generating recommendations. We describe and evaluate a new method called personality diagnosis (PD). Given a user's preferences for some items, we compute the probability that he or she is of the same \"personality type\" as other users, and, in turn, the probability that he or she will like new items. PD retains some of the advantages of traditional similarity-weighting techniques in that all data is brought to bear on each prediction and new data can be added easily and incrementally. Additionally, PD has a meaningful probabilistic interpretation, which may be leveraged to justify, explain, and augment results. We report empirical results on the EachMovie database of movie ratings, and on user profile data collected from the CiteSeer digital library of Computer Science research papers. The probabilistic framework naturally supports a variety of descriptive measurements--in particular, we consider the applicability of a value of information (VOI) computation.", "We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers using and extending a coding scheme previously established by [2012]. 
The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align to those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (with consistently 80-90 of all papers since 1997). However, especially over the last six years there is a steady increase in evaluation methods that include participants, either by evaluating their performances and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, also papers in IEEE Information Visualization show such an increase of evaluations of work practices and analysis as well as reasoning using visual tools. Further, we found that generally the studies reporting requirements analyses and domain-specific work practices are too informally reported which hinders cross-comparison and lowers external validity.", "Despite the availability of a huge amount of video data accompanied by descriptive texts, it is not always easy to exploit the information contained in natural language in order to automatically recognize video concepts. Towards this goal, in this paper we use textual cues as means of supervision, introducing two weakly supervised techniques that extend the Multiple Instance Learning (MIL) framework: the Fuzzy Sets Multiple Instance Learning (FSMIL) and the Probabilistic Labels Multiple Instance Learning (PLMIL). The former encodes the spatio-temporal imprecision of the linguistic descriptions with Fuzzy Sets, while the latter models different interpretations of each description's semantics with Probabilistic Labels, both formulated through a convex optimization algorithm. In addition, we provide a novel technique to extract weak labels in the presence of complex semantics, that consists of semantic similarity computations. We evaluate our methods on two distinct problems, namely face and action recognition, in the challenging and realistic setting of movies accompanied by their screenplays, contained in the COGNIMUSE database. We show that, on both tasks, our method considerably outperforms a state-of-the-art weakly supervised approach, as well as other baselines." ] }
1907.13314
2965544063
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
One argument made by Munzner @cite_15 was the necessity of summative evaluation during each stage of a design study to evaluate the outcome of that individual stage. Sedlmair @cite_49 and McKenna @cite_27 made similar arguments while describing the design-study process. They make the case for considering non-quantitative methods, such as heuristic evaluation, for summative purposes. While Munzner's nested model @cite_15 essentially prescribes evaluation methods based on the development stage, we base our analysis and prescription on the activities performed during evaluation, and judge the quality of evaluation findings (evidence of usefulness) by the amount of risk introduced by the involved activities. Further, our approach adapts to different evaluation instances and prescribes a relatively smaller number of potential evaluation methods compared to @cite_15 .
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_49" ], "mid": [ "2141927646", "2786674049", "2963686848", "1936155969" ], "abstract": [ "We present a fully automatic method for content selection evaluation in summarization that does not require the creation of human model summaries. Our work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other. Results on a large scale evaluation from the Text Analysis Conference show that input-summary comparisons are very effective for the evaluation of content selection. Our automatic methods rank participating systems similarly to manual model-based pyramid evaluation and to manual human judgments of responsiveness. The best feature, Jensen-Shannon divergence, leads to a correlation as high as 0.88 with manual pyramid and 0.73 with responsiveness evaluations.", "In this paper, we present the results of long-term research conducted in order to study the contribution made by software models based on the Unified Modeling Language (UML) to the comprehensibility of Java source-code deprived of comments. We have conducted 12 controlled experiments in different experimental contexts and on different sites with participants with different levels of expertise (i.e., Bachelor’s, Master’s, and PhD students and software practitioners from Italy and Spain). A total of 333 observations were obtained from these experiments. The UML models in our experiments were those produced in the analysis and design phases. The models produced in the analysis phase were created with the objective of abstracting the environment in which the software will work (i.e., the problem domain), while those produced in the design phase were created with the goal of abstracting implementation aspects of the software (i.e., the solution application domain). Source-code comprehensibility was assessed with regard to correctness of understanding, time taken to accomplish the comprehension tasks, and efficiency as regards accomplishing those tasks. In order to study the global effect of UML models on source-code comprehensibility, we aggregated results from the individual experiments using a meta-analysis. We made every effort to account for the heterogeneity of our experiments when aggregating the results obtained from them. The overall results suggest that the use of UML models affects the comprehensibility of source-code, when it is deprived of comments. Indeed, models produced in the analysis phase might reduce source-code comprehensibility, while increasing the time taken to complete comprehension tasks. That is, browsing source code and this kind of models together negatively impacts on the time taken to complete comprehension tasks without having a positive effect on the comprehensibility of source code. One plausible justification for this is that the UML models produced in the analysis phase focus on the problem domain. That is, models produced in the analysis phase say nothing about source code and there should be no expectation that they would, in any way, be beneficial to comprehensibility. On the other hand, UML models produced in the design phase improve source-code comprehensibility. One possible justification for this result is that models produced in the design phase are more focused on implementation details. 
Therefore, although the participants had more material to read and browse, this additional effort was paid back in the form of an improved comprehension of source code.", "Abstract Context For many years, we have observed industry struggling in defining a high quality requirements engineering (RE) and researchers trying to understand industrial expectations and problems. Although we are investigating the discipline with a plethora of empirical studies, they still do not allow for empirical generalisations. Objective To lay an empirical and externally valid foundation about the state of the practice in RE, we aim at a series of open and reproducible surveys that allow us to steer future research in a problem-driven manner. Method We designed a globally distributed family of surveys in joint collaborations with different researchers and completed the first run in Germany. The instrument is based on a theory in the form of a set of hypotheses inferred from our experiences and available studies. We test each hypothesis in our theory and identify further candidates to extend the theory by correlation and Grounded Theory analysis. Results In this article, we report on the design of the family of surveys, its underlying theory, and the full results obtained from Germany with participants from 58 companies. The results reveal, for example, a tendency to improve RE via internally defined qualitative methods rather than relying on normative approaches like CMMI. We also discovered various RE problems that are statistically significant in practice. For instance, we could corroborate communication flaws or moving targets as problems in practice. Our results are not yet fully representative but already give first insights into current practices and problems in RE, and they allow us to draw lessons learnt for future replications. Conclusion Our results obtained from this first run in Germany make us confident that the survey design and instrument are well-suited to be replicated and, thereby, to create a generalisable empirical basis of RE in practice.", "In many decision-making scenarios, people can benefit from knowing what other people's opinions are. As more and more evaluative documents are posted on the Web, summarizing these useful resources becomes a critical task for many organizations and individuals. This paper presents a framework for summarizing a corpus of evaluative documents about a single entity by a natural language summary. We propose two summarizers: an extractive summarizer and an abstractive one. As an additional contribution, we show how our abstractive summarizer can be modified to generate summaries tailored to a model of the user preferences that is solidly grounded in decision theory and can be effectively elicited from users. We have tested our framework in three user studies. In the first one, we compared the two summarizers. They performed equally well relative to each other quantitatively, while significantly outperforming a baseline standard approach to multidocument summarization. Trends in the results as well as qualitative comments from participants suggest that the summarizers have different strengths and weaknesses. After this initial user study, we realized that the diversity of opinions expressed in the corpus (i.e., its controversiality) might play a critical role in comparing abstraction versus extraction. 
To clearly pinpoint the role of controversiality, we ran a second user study in which we controlled for the degree of controversiality of the corpora that were summarized for the participants. The outcome of this study indicates that for evaluative text abstraction tends to be more effective than extraction, particularly when the corpus is controversial. In the third user study we assessed the effectiveness of our user tailoring strategy. The results of this experiment confirm that user tailored summaries are more informative than untailored ones." ] }
1907.13495
2965311640
We develop a novel hierarchy for zero-dimensional persistence pairs, i.e., connected components, which is capable of capturing more fine-grained spatial relations between persistence pairs. Our work is motivated by a lack of spatial relationships between features in persistence diagrams, leading to a limited expressive power. We build upon a recently-introduced hierarchy of pairs in persistence diagrams that augments the pairing stored in persistence diagrams with information about which components merge. Our proposed hierarchy captures differences in branching structure. Moreover, we show how to use our hierarchy to measure the spatial stability of a pairing and we define a rank function for persistence pairs and demonstrate different applications.
We refer the reader to Edelsbrunner and Harer @cite_8 for a detailed overview of persistence and related concepts. There are several related approaches for creating a hierarchy of persistence information. @cite_2 calculate a topological saliency of critical points in a scalar field based on their spatial arrangement. Critical points with low persistence that are isolated from other critical points have a higher saliency in this concept. These calculations yield saliency curves for different smoothing radii. While these curves permit a ranking of persistence pairs, they do not afford a description of their nesting behavior. Consequently, in contrast to our approach, the saliency approach is incapable of distinguishing some spatial rearrangements that leave persistence values and relative distances largely intact, such as moving all peaks towards each other. Bauer @cite_24 developed what we refer to in this paper as the regular persistence hierarchy. It is fully combinatorial and merely requires small changes to the pairing calculation of related critical points. This hierarchy was successfully used to determine cancellation sequences of critical points on surfaces. However, as shown in this paper, it cannot distinguish between certain nesting relations.
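To make the idea of a merge-based hierarchy concrete, the sketch below computes zero-dimensional persistence pairs of a scalar field on a graph with union-find and records which surviving component absorbs each dying one; this is our own simplified reading of such a construction, and all names are illustrative rather than taken from @cite_24 :

    import numpy as np

    def persistence_hierarchy(values, edges):
        """Zero-dimensional persistence pairs of a scalar field on a graph,
        plus a parent map recording which component absorbs each dying one."""
        order = np.argsort(values)          # sublevel-set filtration
        parent = {}                         # union-find forest
        root_birth = {}                     # component root -> birth value
        pairs, hierarchy = [], {}

        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]   # path halving
                u = parent[u]
            return u

        adj = {int(v): [] for v in order}
        for a, b in edges:
            adj[a].append(b)
            adj[b].append(a)

        for v in order:
            v = int(v)
            parent[v] = v
            root_birth[v] = values[v]
            for u in adj[v]:
                if u not in parent:
                    continue                # neighbor not yet in the sublevel set
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                # Elder rule: the younger component (larger birth value) dies here.
                young, old = (ru, rv) if root_birth[ru] > root_birth[rv] else (rv, ru)
                pairs.append((root_birth[young], values[v]))
                hierarchy[young] = old      # dying component becomes a child of the survivor
                parent[young] = old
        return pairs, hierarchy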
{ "cite_N": [ "@cite_24", "@cite_2", "@cite_8" ], "mid": [ "2056761334", "2964237352", "2788076228", "2907190463" ], "abstract": [ "Topological persistence has proven to be a key concept for the study of real-valued functions defined over topological spaces. Its validity relies on the fundamental property that the persistence diagrams of nearby functions are close. However, existing stability results are restricted to the case of continuous functions defined over triangulable spaces. In this paper, we present new stability results that do not suffer from the above restrictions. Furthermore, by working at an algebraic level directly, we make it possible to compare the persistence diagrams of functions defined over different spaces, thus enabling a variety of new applications of the concept of persistence. Along the way, we extend the definition of persistence diagram to a larger setting, introduce the notions of discretization of a persistence module and associated pixelization map, define a proximity measure between persistence modules, and show how to interpolate between persistence modules, thereby lending a more analytic character to this otherwise algebraic setting. We believe these new theoretical concepts and tools shed new light on the theory of persistence, in addition to simplifying proofs and enabling new applications.", "Many data sets can be viewed as a noisy sampling of an underlying space, and tools from topological data analysis can characterize this structure for the purpose of knowledge discovery. One such tool is persistent homology, which provides a multiscale description of the homological features within a data set. A useful representation of this homological information is a persistence diagram (PD). Efforts have been made to map PDs into spaces with additional structure valuable to machine learning tasks. We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs. The discriminatory power of PIs is compared against existing methods, showing significant performance gains. We explore the use of PIs with vector-based machine learning tools, such as linear sparse support vector machines, which identify features containing discriminating topological information. Finally, high accuracy inference of parameter values from the dynamic output of a discrete dynamical system (the linked twist map) and a partial differential equation (the anisotropic Kuramoto-Sivashinsky equation) provide a novel application of the discriminatory power of PIs.", "Persistence diagrams play a fundamental role in Topological Data Analysis where they are used as topological descriptors of filtrations built on top of data. They consist in discrete multisets of points in the plane @math that can equivalently be seen as discrete measures in @math . When the data come as a random point cloud, these discrete measures become random measures whose expectation is studied in this paper. First, we show that for a wide class of filtrations, including the C ech and Rips-Vietoris filtrations, the expected persistence diagram, that is a deterministic measure on @math , has a density with respect to the Lebesgue measure. Second, building on the previous result we show that the persistence surface recently introduced in [Adams & al., Persistence images: a stable vector representation of persistent homology] can be seen as a kernel estimator of this density. 
We propose a cross-validation scheme for selecting an optimal bandwidth, which is proven to be a consistent procedure to estimate the density.", "Persistence diagrams are important tools in the field of topological data analysis that describe the presence and magnitude of features in a filtered topological space. However, current approaches for comparing a persistence diagram to a set of other persistence diagrams is linear in the number of diagrams or do not offer performance guarantees. In this paper, we apply concepts from locality-sensitive hashing to support approximate nearest neighbor search in the space of persistence diagrams. Given a set @math of @math @math -bounded persistence diagrams, each with at most @math points, we snap-round the points of each diagram to points on a cubical lattice and produce a key for each possible snap-rounding. Specifically, we fix a grid over each diagram at several resolutions and consider the snap-roundings of each diagram to the four nearest lattice points. Then, we propose a data structure with @math levels @math that stores all snap-roundings of each persistence diagram in @math at each resolution. This data structure has size @math to account for varying lattice resolutions as well as snap-roundings and the deletion of points with low persistence. To search for a persistence diagram, we compute a key for a query diagram by snapping each point to a lattice and deleting points of low persistence. Furthermore, as the lattice parameter decreases, searching our data structure yields a six-approximation of the nearest diagram in @math in @math time and a constant factor approximation of the @math th nearest diagram in @math time." ] }
1907.13368
2965737826
The digital retina in smart cities selects what the City Eye tells the City Brain, converting the visual data acquired by front-end visual sensors into features in an intelligent-sensing manner. By deploying deep learning and/or handcrafted models in front-end devices, compact features can be extracted and subsequently delivered to the back-end cloud for search and advanced analytics. In this context, we propose a model generation, utilization, and communication paradigm, aiming to address a set of unique challenges for better artificial intelligence services in smart cities. In particular, we present an integrated reuse and prediction strategy for multiple deep learning models, which greatly increases the feasibility of the digital retina for processing and analyzing large-scale visual data in smart cities. The promise of the proposed paradigm is demonstrated through a set of experiments.
Deep neural network model transmission aims to deliver the knowledge concentrated in a network model so as to facilitate different intelligent applications. In @cite_5 , model compression is formulated from the perspective of transmission. As such, the redundancy among different models can be further exploited to facilitate many applications on front-end visual sensors. It is also shown that such a scheme can be elegantly combined with existing compression methods to form an integrated compression and communication framework.
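As a toy illustration of exploiting redundancy between models during transmission (our own reading of the general idea, not the actual scheme of @cite_5 ), one can transmit quantized weight residuals against a reference model that the receiver already holds:

    import numpy as np

    def encode_model_update(new_weights, reference_weights, step=1e-3):
        # Exploit inter-model redundancy: send only quantized residuals
        # relative to a reference model the receiver already has.
        residual = new_weights - reference_weights
        return np.round(residual / step).astype(np.int16)   # compact integer payload

    def decode_model_update(quantized, reference_weights, step=1e-3):
        return reference_weights + quantized.astype(np.float32) * step

    ref = np.random.randn(1000).astype(np.float32)
    new = ref + 0.01 * np.random.randn(1000).astype(np.float32)
    payload = encode_model_update(new, ref)
    recon = decode_model_update(payload, ref)
    print(float(np.max(np.abs(recon - new))))   # reconstruction error bounded by step / 2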
{ "cite_N": [ "@cite_5" ], "mid": [ "2775776326", "2766839578", "2963341152", "2895260923" ], "abstract": [ "Highly distributed training of Deep Neural Networks (DNNs) on future compute platforms (offering 100 of TeraOps s of computational capacity) is expected to be severely communication constrained. To overcome this limitation, new gradient compression techniques are needed that are computationally friendly, applicable to a wide variety of layers seen in Deep Neural Networks and adaptable to variations in network architectures as well as their hyper-parameters. In this paper we introduce a novel technique - the Adaptive Residual Gradient Compression (AdaComp) scheme. AdaComp is based on localized selection of gradient residues and automatically tunes the compression rate depending on local activity. We show excellent results on a wide spectrum of state of the art Deep Learning models in multiple domains (vision, speech, language), datasets (MNIST, CIFAR10, ImageNet, BN50, Shakespeare), optimizers (SGD with momentum, Adam) and network parameters (number of learners, minibatch-size etc.). Exploiting both sparsity and quantization, we demonstrate end-to-end compression rates of 200X for fully-connected and recurrent layers, and 40X for convolutional layers, without any noticeable degradation in model accuracies.", "Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey the recent advanced techniques for compacting and accelerating CNNs model developed. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing will be described at the beginning, after that the other techniques will be introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks etc. Then we will go through a few very recent additional successful methods, for example, dynamic capacity networks and stochastic depths networks. After that, we survey the evaluation matrix, the main datasets used for evaluating the model performance and recent benchmarking efforts. Finally, we conclude this paper, discuss remaining challenges and possible directions on this topic.", "Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. 
Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially-strong discriminator without reducing the network capacity and (iii) our approach generalizes on different teacher-student models.", "Deep neural network compression has the potential to bring modern resource-hungry deep networks to resource-limited devices. However, in many of the most compelling deployment scenarios of compressed deep networks, the operational constraints matter: for example, a pedestrian detection network on a self-driving car may have to satisfy a latency constraint for safe operation. We propose the first principled treatment of deep network compression under operational constraints. We formulate the compression learning problem from the perspective of constrained Bayesian optimization, and introduce a cooling (annealing) strategy to guide the network compression towards the target constraints. Experiments on ImageNet demonstrate the value of modelling constraints directly in network compression." ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments, hampering their application to the real world. This paper proposes WR @math L, a robust reinforcement learning algorithm with significant robust performance on low- and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJoCo environments.
There is a long-standing thread of research on robustness in the classical control community, and the literature in this area is vast, with the @math method being a standard approach [doyle2013feedback] . This approach was introduced into reinforcement learning by @cite_7 . In that paper, a continuous-time reinforcement learning setting was studied, for which a max-min problem was formulated involving a modified value function whose optimal solutions can be determined by solving the Hamilton-Jacobi-Isaacs (HJI) equation.
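For concreteness, one common statement of such a zero-sum formulation (notation ours; the exact objective and the order of the max and min used in @cite_7 may differ) couples a discounted max-min value function over dynamics with an HJI condition:

    V(x) = \max_{u(\cdot)} \min_{v(\cdot)} \int_{0}^{\infty} e^{-t/\tau}\, r\big(x(t), u(t), v(t)\big)\, dt,
    \qquad \dot{x} = f(x, u, v),

    \frac{1}{\tau}\, V(x) = \max_{u} \min_{v} \Big[ r(x, u, v) + \nabla V(x)^{\top} f(x, u, v) \Big].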
{ "cite_N": [ "@cite_7" ], "mid": [ "1970832114", "2147967768", "130805539", "1503631637" ], "abstract": [ "The concept of a robust control Lyapunov function ( rclf ) is introduced, and it is shown that the existence of an rclf for a control-affine system is equivalent to robust stabilizability via continuous state feedback. This extends Artstein's theorem on nonlinear stabilizability to systems with disturbances. It is then shown that every rclf satisfies the steady-state Hamilton--Jacobi--Isaacs (HJI) equation associated with a meaningful game and that every member of a class of pointwise min-norm control laws is optimal for such a game. These control laws have desirable properties of optimality and can be computed directly from the rclf without solving the HJI equation for the upper value function.", "We incorporate statistical confidence intervals in both the multi-armed bandit and the reinforcement learning problems. In the bandit problem we show that given n arms, it suffices to pull the arms a total of O((n e2)log(1 δ)) times to find an e-optimal arm with probability of at least 1-δ. This bound matches the lower bound of Mannor and Tsitsiklis (2004) up to constants. We also devise action elimination procedures in reinforcement learning algorithms. We describe a framework that is based on learning the confidence interval around the value function or the Q-function and eliminating actions that are not optimal (with high probability). We provide a model-based and a model-free variants of the elimination method. We further derive stopping conditions guaranteeing that the learned policy is approximately optimal with high probability. Simulations demonstrate a considerable speedup and added robustness over e-greedy Q-learning.", "Traditional Reinforcement Learning methods are insufficient for AGIs who must be able to learn to deal with Partially Observable Markov Decision Processes. We investigate a novel method for dealing with this problem: standard RL techniques using as input the hidden layer output of a Sequential Constant-Size Compressor (SCSC). The SCSC takes the form of a sequential Recurrent Auto-Associative Memory, trained through standard back-propagation. Results illustrate the feasibility of this approach -- this system learns to deal with highdimensional visual observations (up to 640 pixels) in partially observable environments where there are long time lags (up to 12 steps) between relevant sensory information and necessary action.", "In this paper, we introduce a min max approach for addressing the generalization problem in Reinforcement Learning. The min max approach works by determining a sequence of actions that maximizes the worst return that could possibly be obtained considering any dynamics and reward function compatible with the sample of trajectories and some prior knowledge on the environment. We consider the particular case of deterministic Lipschitz continuous environments over continuous state spaces, finite action spaces, and a finite optimization horizon. We discuss the non-triviality of computing an exact solution of the min max problem even after reformulating it so as to avoid search in function spaces. For addressing this problem, we propose to replace, inside this min max problem, the search for the worst environment given a sequence of actions by an expression that lower bounds the worst return that can be obtained for a given sequence of actions. This lower bound has a tightness that depends on the sample sparsity. 
From there, we propose an algorithm of polynomial complexity that returns a sequence of actions leading to the maximization of this lower bound. We give a condition on the sample sparsity ensuring that, for a given initial state, the proposed algorithm produces an optimal sequence of actions in open-loop. Our experiments show that this algorithm can lead to more cautious policies than algorithms combining dynamic programming with function approximators." ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments hampering their application to the real-world. This paper proposes WR @math L; a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments.
The CVaR criterion is also adopted in @cite_13 , in which, rather than sampling trajectories and finding a quantile in terms of performance, two policies are trained simultaneously: a ``protagonist'' which aims to optimise performance, and an adversary which aims to disrupt the protagonist. The protagonist and adversary train alternately, with one being fixed whilst the other adapts. The action space for the adversary in the tests documented in the paper includes forces on the entities (InvertedPendulum, HalfCheetah, Swimmer, Hopper, Walker2D) that aim to destabilise them. We made comparisons against this algorithm in our experiments.
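The alternating scheme can be pictured with the following minimal sketch (a toy stand-in for the cited method, not its implementation: the quadratic ``return'' and the random-search updates below are placeholders for real environment rollouts and policy-gradient steps):

    import numpy as np

    def episode_return(theta, phi):
        # Protagonist parameters theta try to reach a target while adversary
        # parameters phi act as a disturbance; higher is better for the protagonist.
        # The +2*|phi|^2 term penalises unbounded disturbances so the zero-sum
        # game has a finite solution (theta = 1, phi = 0 in each component).
        return float(-np.sum((theta + phi - 1.0) ** 2) + 2.0 * np.sum(phi ** 2))

    def hill_climb(params, objective, rng, steps=100, sigma=0.05):
        # Keep a random perturbation whenever it improves the objective.
        for _ in range(steps):
            cand = params + rng.normal(scale=sigma, size=params.shape)
            if objective(cand) > objective(params):
                params = cand
        return params

    rng = np.random.default_rng(0)
    theta, phi = np.zeros(3), np.zeros(3)
    for _ in range(10):
        # Phase 1: protagonist adapts (maximises return) while the adversary is frozen.
        theta = hill_climb(theta, lambda t: episode_return(t, phi), rng)
        # Phase 2: adversary adapts (minimises return) while the protagonist is frozen.
        phi = hill_climb(phi, lambda p: -episode_return(theta, p), rng)
    print("return after alternating training:", episode_return(theta, phi))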
{ "cite_N": [ "@cite_13" ], "mid": [ "2602963933", "2952765942", "2738938906", "2514333820" ], "abstract": [ "Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired from H∞ control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced - that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper, Walker2d and Ant) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training test conditions; and c) outperform the baseline even in the absence of the adversary.", "Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired from H-infinity control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced -- that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper and Walker2d) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training test conditions; and c) outperform the baseline even in the absence of the adversary.", "In adversarial training, a set of models learn together by pursuing competing goals, usually defined on single data instances. However, in relational learning and other non-i.i.d domains, goals can also be defined over sets of instances. For example, a link predictor for the is-a relation needs to be consistent with the transitivity property: if is-a(x_1, x_2) and is-a(x_2, x_3) hold, is-a(x_1, x_3) needs to hold as well. Here we use such assumptions for deriving an inconsistency loss, measuring the degree to which the model violates the assumptions on an adversarially-generated set of examples. 
The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples. This yields the first method that can use function-free Horn clauses (as in Datalog) to regularise any neural link predictor, with complexity independent of the domain size. We show that for several link prediction models, the optimisation problem faced by the adversary has efficient closed-form solutions. Experiments on link prediction benchmarks indicate that given suitable prior knowledge, our method can significantly improve neural link predictors on all relevant metrics.", "Abstract Several competing human behavior models have been proposed to model boundedly rational adversaries in repeated Stackelberg Security Games (SSG). However, these existing models fail to address three main issues which are detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries' past actions (“attacks on targets”), they fail to take into account adversaries' future adaptation based on successes or failures of these past actions. Second, existing algorithms fail to learn a reliable model of the adversary unless there exists sufficient data collected by exposing enough of the attack surface – a situation that often arises in initial rounds of the repeated SSG. Third, current leading models have failed to include probability weighting functions, even though it is well known that human beings' weighting of probability is typically nonlinear. To address these limitations of existing models, this article provides three main contributions. Our first contribution is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary's past actions on exposed portions of the attack surface to model adversary adaptivity; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary's lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary's true weighting of probability. Our second contribution is a first “repeated measures study” – at least in the context of SSGs – of competing human behavior models. This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects on the Amazon Mechanical Turk platform, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP. Our third major contribution is to demonstrate SHARP's superiority by conducting real-world human subjects experiments at the Bukit Barisan Seletan National Park in Indonesia against wildlife security experts." ] }
More recently, @cite_2 studies robustness with respect to action perturbations. Two forms of perturbation are addressed: (i) the Probabilistic Action Robust MDP (PR-MDP), and (ii) the Noisy Action Robust MDP (NR-MDP). In a PR-MDP, when an action is taken by an agent, with probability @math , a different, possibly adversarial action is taken instead. In an NR-MDP, when an action is taken, a perturbation is added to the action itself. Like @cite_11 and @cite_13 , the algorithm is amenable to the use of deep neural networks, and the paper reports experiments on InvertedPendulum, Hopper, Walker2d and Humanoid. We tested against PR-MDP in some of our experiments, and found it to be lacking in robustness (see Section , Figure and Figure ).
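The two perturbation models can be sketched as follows (illustrative only; the function names are ours, and the exact mixing used in the cited paper may differ from the simple forms below):

    import numpy as np

    rng = np.random.default_rng(0)

    def pr_mdp_action(agent_action, adversary_action, alpha):
        # Probabilistic Action Robust MDP: with probability alpha the adversary's
        # action is executed instead of the agent's action.
        if rng.random() < alpha:
            return adversary_action
        return agent_action

    def nr_mdp_action(agent_action, adversary_perturbation, alpha):
        # Noisy Action Robust MDP: an adversarial perturbation, scaled by alpha,
        # is added to the agent's action before execution.
        return agent_action + alpha * adversary_perturbation

    a = np.array([0.5, -0.2])       # agent's intended action
    a_adv = np.array([-1.0, 1.0])   # adversary's action / perturbation
    print(pr_mdp_action(a, a_adv, alpha=0.1))
    print(nr_mdp_action(a, a_adv, alpha=0.1))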
{ "cite_N": [ "@cite_11", "@cite_13", "@cite_2" ], "mid": [ "1554120378", "2793079679", "2217248474", "2911793117" ], "abstract": [ "Learning to act optimally in a complex, dynamic and noisy environment is a hard problem. Various threads of research from reinforcement learning, animal conditioning, operations research, machine learning, statistics and optimal control are beginning to come together to offer solutions to this problem. I present a thesis in which novel algorithms are presented for learning the dynamics, learning the value function, and selecting good actions for Markov decision processes. The problems considered have high-dimensional factored state and action spaces, and are either fully or partially observable. The approach I take is to recognize similarities between the problems being solved in the reinforcement learning and graphical models literature, and to use and combine techniques from the two fields in novel ways. In particular I present two new algorithms. First, the DBN algorithm learns a compact representation of the core process of a partially observable MDP. Because inference in the DBN is intractable, I use approximate inference to maintain the belief state. A belief state action-value function is learned using reinforcement learning. I show that this DBN algorithm can solve POMDPs with very large state spaces and useful hidden state. Second, the PoE algorithm learns an approximation to value functions over large factored state-action spaces. The algorithm approximates values as (negative) free energies in a product of experts model. The model parameters can be learned efficiently because inference is tractable in a product of experts. I show that good actions can be found even in large factored action spaces by the use of brief Gibbs sampling. These two new algorithms take techniques from the machine learning community and apply them in new ways to reinforcement learning problems. Simulation results show that these new methods can be used to solve very large problems. The DBN method is used to solve a POMDP with a hidden state space and an observation space of size greater than 2180. The DBN model of the core process has 232 states represented as 32 binary variables. The PoE method is used to find actions in action spaces of size 240 .", "Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions in videos are never explored. Compared with images, attacking a video needs to consider not only spatial cues but also temporal cues. Moreover, to improve the imperceptibility as well as reduce the computation cost, perturbations should be added on as fewer frames as possible, i.e., adversarial perturbations are temporally sparse. This further motivates the propagation of perturbations, which denotes that perturbations added on the current frame can transfer to the next frames via their temporal interactions. Thus, no (or few) extra perturbations are needed for these frames to misclassify them. To this end, we propose an l2,1-norm based optimization algorithm to compute the sparse adversarial perturbations for videos. We choose the action recognition as the targeted task, and networks with a CNN+RNN architecture as threat models to verify our method. Thanks to the propagation, we can compute perturbations on a shortened version video, and then adapt them to the long version video to fool DNNs. 
Experimental results on the UCF101 dataset demonstrate that even only one frame in a video is perturbed, the fooling rate can still reach 59.7 .", "We show that adversarial examples, i.e., the visually imperceptible perturbations that result in Convolutional Neural Networks (CNNs) fail, can be alleviated with a mechanism based on foveations---applying the CNN in different image regions. To see this, first, we report results in ImageNet that lead to a revision of the hypothesis that adversarial perturbations are a consequence of CNNs acting as a linear classifier: CNNs act locally linearly to changes in the image regions with objects recognized by the CNN, and in other regions the CNN may act non-linearly. Then, we corroborate that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation. This is because, hypothetically, the CNNs for ImageNet are robust to changes of scale and translation of the object produced by the foveation, but this property does not generalize to transformations of the perturbation. As a result, the accuracy after a foveation is almost the same as the accuracy of the CNN without the adversarial perturbation, even if the adversarial perturbation is calculated taking into account a foveation.", "Consider a Markov decision process (MDP) that admits a set of state-action features, which can linearly express the process's probabilistic transition model. We propose a parametric Q-learning algorithm that finds an approximate-optimal policy using a sample size proportional to the feature dimension @math and invariant with respect to the size of the state space. To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided the existence of anchor state-actions that imply implicit non-negativity in the feature space. We augment the algorithm using techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy which is @math -optimal from any initial state with high probability using @math sample transitions for arbitrarily large-scale MDP with a discount factor @math . A matching information-theoretical lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors)." ] }
In @cite_4 a non-stationary Markov Decision Process model is considered, where the dynamics can change from one time step to another. The constraint is based on the Wasserstein distance: specifically, the Wasserstein distance between the dynamics at times @math and @math is bounded by @math , i.e., the dynamics is @math -Lipschitz with respect to time, for some constant @math . They approach the problem by treating nature as an adversary and implement a minimax algorithm. The basis of their algorithm is that, because the dynamics changes slowly (due to the Lipschitz constraint), a planning algorithm can project into the future the scope of possible dynamics and plan for the worst. The resulting algorithm, known as RATS (Risk-Averse Tree-Search), is - as the name implies - a tree search algorithm. It operates on a sequence of ``snapshots'' of the evolving MDP, which are instances of the MDP at particular points in time. The algorithm is tested on a small grid world, and does not appear to be readily extensible to the continuous state and action scenarios our algorithm addresses.
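Written out in generic notation (the symbols below are ours, standing in for the @math placeholders of the text), the non-stationarity constraint bounds how far the transition kernel may drift between time steps:

    W\big( P_{t}(\cdot \mid s, a),\; P_{t'}(\cdot \mid s, a) \big) \;\le\; L \, |t - t'| \qquad \text{for all } s, a,

where W is the Wasserstein distance, P_t the transition kernel at time t and L the Lipschitz constant. A worst-case planner at time t can then restrict attention to the dynamics sequences whose step-to-step drift respects this bound and optimise against the least favourable member of that set.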
{ "cite_N": [ "@cite_4" ], "mid": [ "1512919909", "2964000194", "2942369252", "2765892966" ], "abstract": [ "A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (a natural and common type of simulator) for an arbitrary MDP, performs on-line, near-optimal planning with a per-state running time that has no dependence on the number of states. The running time is exponential in the horizon time (which depends only on the discount factor γ and the desired degree of approximation to the optimal policy). Our algorithm thus provides a different complexity trade-off than classical algorithms such as value iteration—rather than scaling linearly in both horizon time and state space size, our running time trades an exponential dependence on the former in exchange for no dependence on the latter. Our algorithm is based on the idea of sparse sampling. We prove that a randomly sampled look-ahead tree that covers only a vanishing fraction of the full look-ahead tree nevertheless suffices to compute near-optimal actions from any state of an MDP. Practical implementations of the algorithm are discussed, and we draw ties to our related recent results on finding a near-best strategy from a given class of strategies in very large partially observable MDPs (Kearns, Mansour, & Ng. Neural information processing systems 13, to appear).", "We consider the problem of learning an unknown Markov Decision Process (MDP) that is weakly communicating in the infinite horizon setting. We propose a Thompson Sampling-based reinforcement learning algorithm with dynamic episodes (TSDE). At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. The first stopping criterion controls the growth rate of episode length. The second stopping criterion happens when the number of visits to any state-action pair is doubled. We establish @math bounds on expected regret under a Bayesian setting, where @math and @math are the sizes of the state and action spaces, @math is time, and @math is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs. Numerical results show it to perform better than existing algorithms for infinite horizon MDPs.", "This work tackles the problem of robust zero-shot planning in non-stationary stochastic environments. We study Markov Decision Processes (MDPs) evolving over time and consider Model-Based Reinforcement Learning algorithms in this setting. We make two hypotheses: 1) the environment evolves continuously and its evolution rate is bounded, 2) a current model is known at each decision epoch but not its evolution. Our contribution can be presented in four points. First, we define this specific class of MDPs that we call Non-Stationary MDPs (NSMDPs). We introduce the notion of regular evolution by making an hypothesis of Lipschitz-Continuity on the transition and reward functions w.r.t. 
time. Secondly, we consider a planning agent using the current model of the environment, but unaware of its future evolution. This leads us to consider a worst-case method where the environment is seen as an adversarial agent. Third, following this approach, we propose the Risk-Averse Tree-Search (RATS) algorithm. This is a zero-shot Model-Based method similar to Minimax search. Finally, we illustrate the benefits brought by RATS empirically and compare its performance with reference Model-Based algorithms.", "Consider the problem of approximating the optimal policy of a Markov decision process (MDP) by sampling state transitions. In contrast to existing reinforcement learning methods that are based on successive approximations to the nonlinear Bellman equation, we propose a Primal-Dual @math Learning method in light of the linear duality between the value and policy. The @math learning method is model-free and makes primal-dual updates to the policy and value vectors as new data are revealed. For infinite-horizon undiscounted Markov decision process with finite state space @math and finite action space @math , the @math learning method finds an @math -optimal policy using the following number of sample transitions @math where @math is an upper bound of mixing times across all policies and @math is a parameter characterizing the range of stationary distributions across policies. The @math learning method also applies to the computational problem of MDP where the transition probabilities and rewards are explicitly given as the input. In the case where each state transition can be sampled in @math time, the @math learning method gives a sublinear-time algorithm for solving the averaged-reward MDP." ] }
To summarise, our paper uses the Wasserstein distance for addressing robustness, in common with @cite_4 , but is suited to applying deep neural networks for continuous state and action spaces. Our paper does not require the full dynamics to be available, merely a parameterisable dynamics. It competes well with the above approaches and operates well for high-dimensional problems, as evidenced by the experiments.
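Schematically, the Wasserstein-constrained min-max game described in the abstract above can be rendered as follows (a sketch in our own notation; the exact constraint set and solver used in the paper may differ):

    \max_{\theta} \;\; \min_{\phi \,:\, W(p_{\phi},\, p_{0}) \le \epsilon} \;\; \mathbb{E}_{\tau \sim (p_{\phi},\, \pi_{\theta})} \Big[ \sum_{t} \gamma^{t} \, r(s_{t}, a_{t}) \Big],

where \pi_{\theta} is the policy, p_{\phi} a parameterised dynamics model, p_{0} a reference dynamics, W the Wasserstein distance and \epsilon the radius of the uncertainty set.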
{ "cite_N": [ "@cite_4" ], "mid": [ "2962919088", "2614256707", "1629559917", "2892998444" ], "abstract": [ "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramer GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.", "We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric. This mean, known as the Wasserstein barycenter, is the measure that minimizes the sum of its Wasserstein distances to each element in that set. We propose two original algorithms to compute Wasserstein barycenters that build upon the subgradient method. A direct implementation of these algorithms is, however, too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients. Extending the work of Cuturi (2013), we propose to smooth the Wasserstein distance used in the definition of Wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed for a considerably cheaper computational cost using matrix scaling algorithms. 
We use these algorithms to visualize a large family of images and to solve a constrained clustering problem.", "The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which results in the representational capacity of the models being limited. On the other hand, as the depth of neural networks grows, the long-term information coming from preceding layers is easy to be weaken or lost in late layers, which is adverse to super-resolving image. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a sequence of feature-modulation memory (FMM) modules is cascaded with a densely connected structure to transform low-resolution features to high informative features. In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate multi-level features in a global-and-local manner. This feature modulation strategy enables the high contribution information to be enhanced and the redundant information to be suppressed. Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
First of all, gesture-based text-entry allows drawing-like typing @cite_17 @cite_26 . Drawing-like typing removes the need to localize each key position, and users can start drawing from any place on the screen in an eyes-free manner. Though gesture-based text-entry offers concise eyes-free typing interfaces, it requires gesture recognition algorithms, which struggle to achieve high accuracy @cite_4 . Gesture variability among users and similarities between the gestures for different keys cause ambiguity that increases the inherent difficulty of sequence classification @cite_21 . In addition, gesture-based text-entry takes longer than other text-entry methods since each key involves a gesture rather than a touch or a key press. The proposed I-Keyboard targets a more tractable decoding problem and utilizes deep learning techniques to successfully deal with the variability among users.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_21", "@cite_17" ], "mid": [ "2731419753", "2058049219", "2795166434", "2595328592" ], "abstract": [ "Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen. Our hypothesis is that users can transfer their typing ability obtained from visible keyboards to eyes-free use. We propose two statistical decoding algorithms to infer users’ eyes-free input: the absolute algorithm and the relative algorithm. The absolute algorithm infers user input based on the absolute position of touch endpoints, while the relative algorithm infers based on the vectors between successive touch endpoints. Evaluation results showed users could achieve satisfying performance with both algorithms. Text entry rate was 17-23 WPM (words per minute) depending on the algorithm used. In comparison, a baseline cursor-based text entry method yielded only 7.66 WPM. In conclusion, our research demonstrates for the first time the feasibility of thumb-based eyes-free typing, which provides a new possibility for text entry on ubiquitous computing platforms such as smart TVs and HMDs.", "Users often struggle to enter text accurately on touchscreen keyboards. To address this, we present a flexible decoder for touchscreen text entry that combines probabilistic touch models with a language model. We investigate two different touch models. The first touch model is based on a Gaussian Process regression approach and implicitly models the inherent uncertainty of the touching process. The second touch model allows users to explicitly control the uncertainty via touch pressure. Using the first model we show that the character error rate can be reduced by up to 7 over a baseline method, and by up to 1.3 over a leading commercial keyboard. Using the second model we demonstrate that providing users with control over input certainty reduces the amount of text users have to correct manually and increases the text entry rate.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. 
Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "Gesture recognition aims to recognize meaningful movements of human bodies, and is of utmost importance in intelligent human–computer robot interactions. In this paper, we present a multimodal gesture recognition method based on 3-D convolution and convolutional long-short-term-memory (LSTM) networks. The proposed method first learns short-term spatiotemporal features of gestures through the 3-D convolutional neural network, and then learns long-term spatiotemporal features by convolutional LSTM networks based on the extracted short-term spatiotemporal features. In addition, fine-tuning among multimodal data is evaluated, and we find that it can be considered as an optional skill to prevent overfitting when no pre-trained models exist. The proposed method is verified on the ChaLearn LAP large-scale isolated gesture data set (IsoGD) and the Sheffield Kinect gesture (SKIG) data set. The results show that our proposed method can obtain the state-of-the-art recognition accuracy (51.02 on the validation set of IsoGD and 98.89 on SKIG)." ] }
For the second point, optimized text-entry supplies accessible and comfortable typing interfaces by optimizing the size, shape, and position of keys @cite_8 @cite_5 . Current optimized text-entry methods require users to learn new typing interfaces @cite_1 , because knowledge transfer seldom occurs for novel typing interfaces. Furthermore, the optimization process frequently demands a calibration step, which complicates the use of optimized text-entry methods. The I-Keyboard proposed in this paper involves neither a learning nor a calibration process, since it operates with ten fingers and its decoding algorithm does not need any prior knowledge.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_8" ], "mid": [ "2134512786", "2128267659", "2010376411", "2795166434" ], "abstract": [ "Text entry user interfaces have been a bottleneck of non traditional computing devices. One of the promising methods is the virtual keyboard on touch screens. Various layouts have been manually designed to replace the dominant QWERTY layout. This paper presents two computerized quantitative design techniques to search for the optimal virtual keyboard. The first technique simulated the dynamics of a keyboard with “digraph springs” between keys, which produced a “Hooke’s” keyboard with 41.6 wpm performance. The second technique used a Metropolis random walk algorithm guided by a “Fitts energy” objective function, which produced a “Metropolis” keyboard with 43.1 wpm performance. The paper also models and evaluates the perfo rmance of four existing keyboard layouts. We corrected erroneous estimates in the literature and predicted the performance of QWERTY, CHUBON, FITALY, OPTI to be in the neighborhood of 30, 33, 36 and 38 wpm respectively. Our best design was 40 faster than QWERTY and 10 faster than OPTI, illustrating the advantage of quantitative user interface design techniques based on models of human performance over traditional trial and error designs guided by heuristics.", "Mobile devices gain increasing computational power and storage capabilities, and there are already mobile phones that can show movies, act as digital music players and offer full-scale web browsing. The bottleneck for information flow is however limited by the inefficient communication channel between the user and the small device. The small mobile phone form factor has proven to be surprisingly difficult to overcome and limited text entry capabilities are in effect crippling mobile devices’ use experience. The desktop keyboard is too large for mobile phones, and the keypad too limited. In recent years, advanced mobile phones have come equipped with touch-screens that enable new text entry solutions. This dissertation explores how software keyboards on touch-screens can be improved to provide an efficient and practical text and command entry experience on mobile devices. The central hypothesis is that it is possible to combine three elements: software keyboard, language redundancy and pattern recognition, and create new effective interfaces for text entry and control. These are collectively called “shape writing” interfaces. Words form shapes on the software keyboard layout. Users write words by articulating the shapes for words on the software keyboard. Two classes of shape writing interfaces are developed and analyzed: discrete and continuous shape writing. The former recognizes users’ pen or finger tapping motion as discrete patterns on the touch-screen. The latter recognizes users’ continuous motion patterns. Experimental results show that novice users can write text with an average entry rate of 25 wpm and an error rate of 1 after 35 minutes of practice. An accelerated novice learning experiment shows that users can exactly copy a single well-practiced phrase with an average entry rate of 46.5 wpm, with individual phrase entry rate measurements up to 99 wpm. When used as a control interface, users can send commands to applications 1.6 times faster than using de-facto standard linear pull-down menus. Visual command preview leads to significantly less errors and shorter gestures for unpracticed commands. 
Taken together, the quantitative results show that shape writing is among the fastest mobile interfaces for text entry and control, both initially and after practice, that are currently known.", "We study the performance and user experience of two popular mainstream mobile text entry methods: the Smart Touch Keyboard (STK) and the Smart Gesture Keyboard (SGK). Our first study is a lab-based ten-session text entry experiment. In our second study we use a new text entry evaluation methodology based on the experience sampling method (ESM). In the ESM study, participants installed an Android app on their own mobile phones that periodically sampled their text entry performance and user experience amid their everyday activities for four weeks. The studies show that text can be entered at an average speed of 28 to 39 WPM, depending on the method and the user's experience, with 1.0 to 3.6 character error rates remaining. Error rates of touchscreen input, particularly with SGK, are a major challenge; and reducing out-of-vocabulary errors is particularly important. Both SGK and STK have strengths, weaknesses, and different individual awareness and preferences. Two-thumb touch typing in a focused setting is particularly effective on STK, whereas one-handed SGK typing with the thumb is particularly effective in more mobile situations. When exposed to both, users tend to migrate from STK to SGK. We also conclude that studies in the lab and in the wild can both be informative to reveal different aspects of keyboard experience, but used in conjunction is more reliable in comprehensively assessing input technologies of current and future generations.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience." ] }
Last but not least, imaginary keyboards, which are invisible to users, save invaluable screen resources and enable multi-tasking in the context of mobile computing @cite_27 . Imaginary keyboards reduce constraints during interaction, and users can freely and comfortably deliver their messages. In addition, imaginary keyboards coincide with the potent vision for user interfaces (UI), which has evolved from mechanical, graphical and gestural UI to imaginary UI, achieving tighter embodiment and more directness @cite_18 . Conventional works on imaginary UI, however, have only shown feasibility without reaching a practical deployment level. Our I-Keyboard proposes a new concrete concept for imaginary UI and demonstrates a practical implementation of the concept deployed in a real-world environment.
{ "cite_N": [ "@cite_27", "@cite_18" ], "mid": [ "2795433822", "2128267659", "2134512786", "2008181407" ], "abstract": [ "A virtual keyboard takes a large portion of precious screen real estate. We have investigated whether an invisible keyboard is a feasible design option, how to support it, and how well it performs. Our study showed users could correctly recall relative key positions even when keys were invisible, although with greater absolute errors and overlaps between neighboring keys. Our research also showed adapting the spatial model in decoding improved the invisible keyboard performance. This method increased the input speed by 11.5 over simply hiding the keyboard and using the default spatial model. Our 3-day multi-session user study showed typing on an invisible keyboard could reach a practical level of performance after only a few sessions of practice: the input speed increased from 31.3 WPM to 37.9 WPM after 20 - 25 minutes practice on each day in 3 days, approaching that of a regular visible keyboard (41.6 WPM). Overall, our investigation shows an invisible keyboard with adapted spatial model is a practical and promising interface option for the mobile text entry systems.", "Mobile devices gain increasing computational power and storage capabilities, and there are already mobile phones that can show movies, act as digital music players and offer full-scale web browsing. The bottleneck for information flow is however limited by the inefficient communication channel between the user and the small device. The small mobile phone form factor has proven to be surprisingly difficult to overcome and limited text entry capabilities are in effect crippling mobile devices’ use experience. The desktop keyboard is too large for mobile phones, and the keypad too limited. In recent years, advanced mobile phones have come equipped with touch-screens that enable new text entry solutions. This dissertation explores how software keyboards on touch-screens can be improved to provide an efficient and practical text and command entry experience on mobile devices. The central hypothesis is that it is possible to combine three elements: software keyboard, language redundancy and pattern recognition, and create new effective interfaces for text entry and control. These are collectively called “shape writing” interfaces. Words form shapes on the software keyboard layout. Users write words by articulating the shapes for words on the software keyboard. Two classes of shape writing interfaces are developed and analyzed: discrete and continuous shape writing. The former recognizes users’ pen or finger tapping motion as discrete patterns on the touch-screen. The latter recognizes users’ continuous motion patterns. Experimental results show that novice users can write text with an average entry rate of 25 wpm and an error rate of 1 after 35 minutes of practice. An accelerated novice learning experiment shows that users can exactly copy a single well-practiced phrase with an average entry rate of 46.5 wpm, with individual phrase entry rate measurements up to 99 wpm. When used as a control interface, users can send commands to applications 1.6 times faster than using de-facto standard linear pull-down menus. Visual command preview leads to significantly less errors and shorter gestures for unpracticed commands. 
Taken together, the quantitative results show that shape writing is among the fastest mobile interfaces for text entry and control, both initially and after practice, that are currently known.", "Text entry user interfaces have been a bottleneck of non traditional computing devices. One of the promising methods is the virtual keyboard on touch screens. Various layouts have been manually designed to replace the dominant QWERTY layout. This paper presents two computerized quantitative design techniques to search for the optimal virtual keyboard. The first technique simulated the dynamics of a keyboard with “digraph springs” between keys, which produced a “Hooke’s” keyboard with 41.6 wpm performance. The second technique used a Metropolis random walk algorithm guided by a “Fitts energy” objective function, which produced a “Metropolis” keyboard with 43.1 wpm performance. The paper also models and evaluates the perfo rmance of four existing keyboard layouts. We corrected erroneous estimates in the literature and predicted the performance of QWERTY, CHUBON, FITALY, OPTI to be in the neighborhood of 30, 33, 36 and 38 wpm respectively. Our best design was 40 faster than QWERTY and 10 faster than OPTI, illustrating the advantage of quantitative user interface design techniques based on models of human performance over traditional trial and error designs guided by heuristics.", "We present PalmType, which uses palms as interactive keyboards for smart wearable displays, such as Google Glass. PalmType leverages users' innate ability to pinpoint specific areas of their palms and fingers without visual attention (i.e. proprioception), and provides visual feedback via the wearable displays. With wrist-worn sensors and wearable displays, PalmType enables typing without requiring users to hold any devices and does not require visual attention to their hands. We conducted design sessions with 6 participants to see how users map QWERTY layout to their hands based on their proprioception. To evaluate typing performance and preference, we conducted a 12-person user study using Google Glass and Vicon motion tracking system, which showed that PalmType with optimized QWERTY layout is 39 faster than current touchpad-based keyboards. In addition, PalmType is preferred by 92 of the participants. We demonstrate the feasibility of wearable PalmType by building a prototype that uses a wrist-worn array of 15 infrared sensors to detect users' finger position and taps, and provides visual feedback via Google Glass." ] }
Ten-finger typing is one of the most natural and common text-entry methods @cite_28 . Users can achieve typing speeds of 60-100 words per minute (WPM) with ten-finger typing on physical keyboards @cite_19 . Ten-finger typing experience stored in muscle memory, together with tactile feedback from mechanical keys, enables eyes-free typing @cite_13 . However, transferring ten-finger typing knowledge from mechanical keyboards to soft keyboards generally does not succeed, due to the lack of tactile feedback, though a few works have shown its viability in special use cases @cite_22 @cite_32 .
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_32", "@cite_19", "@cite_13" ], "mid": [ "2731419753", "2795166434", "2169710492", "2010376411" ], "abstract": [ "Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen. Our hypothesis is that users can transfer their typing ability obtained from visible keyboards to eyes-free use. We propose two statistical decoding algorithms to infer users’ eyes-free input: the absolute algorithm and the relative algorithm. The absolute algorithm infers user input based on the absolute position of touch endpoints, while the relative algorithm infers based on the vectors between successive touch endpoints. Evaluation results showed users could achieve satisfying performance with both algorithms. Text entry rate was 17-23 WPM (words per minute) depending on the algorithm used. In comparison, a baseline cursor-based text entry method yielded only 7.66 WPM. In conclusion, our research demonstrates for the first time the feasibility of thumb-based eyes-free typing, which provides a new possibility for text entry on ubiquitous computing platforms such as smart TVs and HMDs.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "Text entry rates are explored for several variations of soft keyboards. We present a model to predict novice and expert entry rates and present an empirical test with 24 subjects. Six keyboards were examined: the Qwerty, ABC, Dvorak, Fitaly, JustType, and telephone. At 8-10 wpm, novice predictions are low for all layouts because the dominant factor is the visual scan time, rather than the movement time. Expert predictions are in the range of 22-56 wpm, although these were not tested empirically. 
In a quick, novice test with a representative phrase of text, subjects achieved rates of 20.2 wpm (Qwerty), 10.7 wpm (ABC), 8.5 wpm (Dvorak), 8.0 wpm (Fitaly), 7.0 wpm (JustType), and 8.0 wpm (telephone). The Qwerty rate of 20.2 wpm is consistent with observations in other studies. The relatively high rate for Qwerty suggests that there is skill transfer from users' familiarity with desktop computers to the stylus tapping task.", "We study the performance and user experience of two popular mainstream mobile text entry methods: the Smart Touch Keyboard (STK) and the Smart Gesture Keyboard (SGK). Our first study is a lab-based ten-session text entry experiment. In our second study we use a new text entry evaluation methodology based on the experience sampling method (ESM). In the ESM study, participants installed an Android app on their own mobile phones that periodically sampled their text entry performance and user experience amid their everyday activities for four weeks. The studies show that text can be entered at an average speed of 28 to 39 WPM, depending on the method and the user's experience, with 1.0 to 3.6 character error rates remaining. Error rates of touchscreen input, particularly with SGK, are a major challenge; and reducing out-of-vocabulary errors is particularly important. Both SGK and STK have strengths, weaknesses, and different individual awareness and preferences. Two-thumb touch typing in a focused setting is particularly effective on STK, whereas one-handed SGK typing with the thumb is particularly effective in more mobile situations. When exposed to both, users tend to migrate from STK to SGK. We also conclude that studies in the lab and in the wild can both be informative to reveal different aspects of keyboard experience, but used in conjunction is more reliable in comprehensively assessing input technologies of current and future generations." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
A number of works have attempted to understand user behavior in ten-finger typing on soft keyboards. The major findings are as follows: (1) the typing speed with soft keyboards drops dramatically compared to the speed with physical keyboards @cite_12 ; (2) the distribution of touch points resembles the mechanical keyboard layout @cite_7 , though the distribution varies over time in shape and size @cite_19 ; (3) hand drift occurs over time and becomes stronger for invisible keyboards @cite_25 ; and (4) various factors, including finger volume, hand posture and mobility, cause tap variability among users @cite_20 .
{ "cite_N": [ "@cite_7", "@cite_19", "@cite_12", "@cite_25", "@cite_20" ], "mid": [ "2164067526", "1972648436", "2795166434", "2618581749" ], "abstract": [ "On a touchscreen keyboard, it can be difficult to continuously type without frequently looking at the keys. One factor contributing to this difficulty is called hand drift, where a user's hands gradually misalign with the touchscreen keyboard due to limited tactile feedback. Although intuitive, there remains a lack of empirical data to describe the effect of hand drift. A formal understanding of it can provide insights for improving soft keyboards. To formally quantify the degree (magnitude and direction) of hand drift, we conducted a 3-session study with 13 participants. We measured hand drift with two typing interfaces: a visible conventional keyboard and an invisible adaptive keyboard. To expose drift patterns, both keyboards used relaxed letter disambiguation to allow for unconstrained movement. Findings show that hand drift occurred in both interfaces, at an average rate of 0.25mm min on the conventional keyboard and 1.32mm min on the adaptive keyboard. Participants were also more likely to drift up and or left instead of down or right.", "Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . 
In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "Abstract Typing on tiny QWERTY keyboards on smartwatches is considered challenging or even impractical due to the limited screen space. In this paper, we describe three user studies undertaken to investigate users’ typing abilities and preferences on tiny QWERTY keyboards. The first two studies, using a smartphone as a substitute for a smartwatch, tested five different keyboard sizes (2, 2.5, 3, 3.5 and 4 cm). Study 1 collected typing data from participants using keyboards and given asterisk feedback. We analyzed both the distribution of touch points (e.g., the systematic offset and shape of the distribution) and the effect of keyboard size. Study 2 adopted a Bayesian algorithm based on a touch model derived from Study 1 and a unigram word language model to perform input prediction. We found that on the smart keyboard, participants could type between 26.8 and 33.6 words per minute (WPM) across the five keyboard sizes with an uncorrected character error rate ranging from 0.4 to 1.9 . Participants’ subjective feedback indicated that they felt most comfortable with keyboards larger than 2.5 cm. Study 3 replicated the 3.0 and 3.5 cm keyboard tests on a real smartwatch and verified that in terms of text entry speed, error rate and user preference, there was no significant difference between the results measured on a smartphone and that on a smartwatch with same sized keys. This study result indicated that the results of Study 1 and 2 are applicable to smartwatch devices. Finally, we conducted a simulation to investigate the performance of different touch language models based on our collected data. The results showed that using either a bigram language model or a detailed touch model can effectively correct imprecision in users’ input. Our results suggest that achieving satisfactory levels of text input on tiny QWERTY keyboards is possible." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
In summary, previous research results indicate that ten-finger eyes-free typing on virtual keyboards, the most natural style and the one that transfers knowledge most directly from physical keyboards @cite_19 , is feasible, though a couple of obstacles need to be resolved. The proposed DND handles hand drift, tap variability and automatic calibration with a deep neural architecture to improve typing speed and reduce the error rate.
{ "cite_N": [ "@cite_19" ], "mid": [ "2731419753", "2795166434", "2164067526", "1972648436" ], "abstract": [ "Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen. Our hypothesis is that users can transfer their typing ability obtained from visible keyboards to eyes-free use. We propose two statistical decoding algorithms to infer users’ eyes-free input: the absolute algorithm and the relative algorithm. The absolute algorithm infers user input based on the absolute position of touch endpoints, while the relative algorithm infers based on the vectors between successive touch endpoints. Evaluation results showed users could achieve satisfying performance with both algorithms. Text entry rate was 17-23 WPM (words per minute) depending on the algorithm used. In comparison, a baseline cursor-based text entry method yielded only 7.66 WPM. In conclusion, our research demonstrates for the first time the feasibility of thumb-based eyes-free typing, which provides a new possibility for text entry on ubiquitous computing platforms such as smart TVs and HMDs.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "On a touchscreen keyboard, it can be difficult to continuously type without frequently looking at the keys. One factor contributing to this difficulty is called hand drift, where a user's hands gradually misalign with the touchscreen keyboard due to limited tactile feedback. Although intuitive, there remains a lack of empirical data to describe the effect of hand drift. A formal understanding of it can provide insights for improving soft keyboards. To formally quantify the degree (magnitude and direction) of hand drift, we conducted a 3-session study with 13 participants. 
We measured hand drift with two typing interfaces: a visible conventional keyboard and an invisible adaptive keyboard. To expose drift patterns, both keyboards used relaxed letter disambiguation to allow for unconstrained movement. Findings show that hand drift occurred in both interfaces, at an average rate of 0.25mm min on the conventional keyboard and 1.32mm min on the adaptive keyboard. Participants were also more likely to drift up and or left instead of down or right.", "Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
Classical statistical decoding algorithms translate user inputs (key strokes) into characters or words using probabilistic models. These statistical decoding algorithms have proved their effectiveness in a few controlled environments @cite_24 . The goal of statistical decoding is to find the sequence of characters that maximizes the joint probability of the given user input sequence. Mathematically, the user typing pattern for the decoding process is formulated as the joint probability @math , where the @math 's are the positions of the key strokes, the @math 's are the characters, and @math is the length of the sequence. Since modeling the joint probability becomes intractable as the sequence length increases, the independence assumption is employed in most cases. Under this independence assumption, the joint probability factorizes into per-keystroke terms @math . In conventional approaches, the probability @math is approximated by a Gaussian distribution with the Markov-Bayesian algorithm @cite_19 or by a bivariate Gaussian distribution @cite_3 . In addition, the probability is modelled separately for the left and right hands.
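To make the independence assumption concrete, the following minimal Python sketch decodes each touch point independently by choosing the key whose bivariate Gaussian likelihood is highest. The key centers, the shared covariance and the implicit uniform character prior are illustrative assumptions, not the touch models fitted in the cited works.

```python
import numpy as np

# Hypothetical (x, y) centers of a few keys and a shared covariance; these are
# placeholder values for illustration only.
KEY_CENTERS = {
    'a': (0.0, 0.0), 's': (1.0, 0.0), 'd': (2.0, 0.0), 'f': (3.0, 0.0),
}
KEY_COV = np.array([[0.15, 0.0], [0.0, 0.20]])

def log_gaussian(s, mean, cov):
    """Log density of a bivariate Gaussian at touch point s."""
    diff = np.asarray(s, dtype=float) - np.asarray(mean, dtype=float)
    inv = np.linalg.inv(cov)
    return -0.5 * (diff @ inv @ diff + np.log((2 * np.pi) ** 2 * np.linalg.det(cov)))

def decode(touches):
    """Decode each touch independently: argmax_c P(s_i | c_i) with a uniform prior."""
    out = []
    for s in touches:
        scores = {c: log_gaussian(s, mu, KEY_COV) for c, mu in KEY_CENTERS.items()}
        out.append(max(scores, key=scores.get))
    return ''.join(out)

print(decode([(0.1, -0.2), (2.9, 0.3), (1.2, 0.1)]))  # e.g. -> "afs"
```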
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_3" ], "mid": [ "2103374230", "2514483739", "2100540720", "2951090971" ], "abstract": [ "We introduce a simple technique for analyzing the iterative decoder that is broadly applicable to different classes of codes defined over graphs in certain fading as well as additive white Gaussian noise (AWGN) channels. The technique is based on the observation that the extrinsic information from constituent maximum a posteriori (MAP) decoders is well approximated by Gaussian random variables when the inputs to the decoders are Gaussian. The independent Gaussian model implies the existence of an iterative decoder threshold that statistically characterizes the convergence of the iterative decoder. Specifically, the iterative decoder converges to zero probability of error as the number of iterations increases if and only if the channel E sub b N sub 0 exceeds the threshold. Despite the idealization of the model and the simplicity of the analysis technique, the predicted threshold values are in excellent agreement with the waterfall regions observed experimentally in the literature when the codeword lengths are large. Examples are given for parallel concatenated convolutional codes, serially concatenated convolutional codes, and the generalized low-density parity-check (LDPC) codes of Gallager and Cheng-McEliece (1996). Convergence-based design of asymmetric parallel concatenated convolutional codes (PCCC) is also discussed.", "This paper investigates the compress-and-forward scheme for an uplink cloud radio access network (C-RAN) model, where multi-antenna base stations (BSs) are connected to a cloud-computing-based central processor (CP) via capacity-limited fronthaul links. The BSs compress the received signals with Wyner-Ziv coding and send the representation bits to the CP; the CP performs the decoding of all the users’ messages. Under this setup, this paper makes progress toward the optimal structure of the fronthaul compression and CP decoding strategies for the compress-and-forward scheme in the C-RAN. On the CP decoding strategy design, this paper shows that under a sum fronthaul capacity constraint, a generalized successive decoding strategy of the quantization and user message codewords that allows arbitrary interleaved order at the CP achieves the same rate region as the optimal joint decoding. Furthermore, it is shown that a practical strategy of successively decoding the quantization codewords first, then the user messages, achieves the same maximum sum rate as joint decoding under individual fronthaul constraints. On the joint optimization of user transmission and BS quantization strategies, this paper shows that if the input distributions are assumed to be Gaussian, then under joint decoding, the optimal quantization scheme for maximizing the achievable rate region is Gaussian. Moreover, Gaussian input and Gaussian quantization with joint decoding achieve to within a constant gap of the capacity region of the Gaussian multiple-input multiple-output (MIMO) uplink C-RAN model. 
Finally, this paper addresses the computational aspect of optimizing uplink MIMO C-RAN by showing that under fixed Gaussian input, the sum rate maximization problem over the Gaussian quantization noise covariance matrices can be formulated as convex optimization problems, thereby facilitating its efficient solution.", "The fixed slope lossy algorithm derived from the kth-order adaptive arithmetic codeword length function is extended to finite-state decoders or trellis-structured decoders. When this algorithm is used to encode a stationary, ergodic source with a continuous alphabet, the Lagrangian performance converges with probability one to a quantity computable as the infimum of an information-theoretic functional over a set of auxiliary random variables and reproduction levels, where spl lambda >0 and - spl lambda are designated to be the slope of the rate distortion function R(D) of the source at some D; the quantity is close to R(D)+ spl lambda D when the order k used in the arithmetic coding or the number of states in the decoders is large enough, An alternating minimization algorithm for computing the quantity is presented; this algorithm is based on a training sequence and in turn gives rise to a design algorithm for variable-rate trellis source codes. The resulting variable-rate trellis source codes are very efficient in low-rate regions. With k=8, the mean-squared error encoding performance at the rate 1 2 bits sample for memoryless Gaussian sources is comparable to that afforded by trellis-coded quantizers; with k=8 and the number of states in the decoder=32, the mean-squared error encoding performance at the rate 1 2 bits sample for memoryless Laplacian sources is about 1 dB better than that afforded by the trellis-coded quantizers with 256 states, with k=8 and the number of states in the decoder=256, the mean-squared error encoding performance at the rates of a fraction of 1 bit sample for highly dependent Gauss-Markov sources with correlation coefficient 0.9 is within about 0.6 dB of the distortion rate function.", "We consider the estimation of an i.i.d. random vector observed through a linear transform followed by a componentwise, probabilistic (possibly nonlinear) measurement channel. A novel algorithm, called generalized approximate message passing (GAMP), is presented that provides computationally efficient approximate implementations of max-sum and sum-problem loopy belief propagation for such problems. The algorithm extends earlier approximate message passing methods to incorporate arbitrary distributions on both the input and output of the transform and can be applied to a wide range of problems in nonlinear compressed sensing and learning. Extending an analysis by Bayati and Montanari, we argue that the asymptotic componentwise behavior of the GAMP method under large, i.i.d. Gaussian transforms is described by a simple set of state evolution (SE) equations. From the SE equations, one can predict the asymptotic value of virtually any componentwise performance metric including mean-squared error or detection accuracy. Moreover, the analysis is valid for arbitrary input and output distributions, even when the corresponding optimization problems are non-convex. The results match predictions by Guo and Wang for relaxed belief propagation on large sparse matrices and, in certain instances, also agree with the optimal performance predicted by the replica method. 
The GAMP methodology thus provides a computationally efficient methodology, applicable to a large class of non-Gaussian estimation problems with precise asymptotic performance guarantees." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
Conventional statistical decoding algorithms, however, cannot fully deal with the complex dynamics of user inputs. The independence assumption applied in these methods cannot account for either long-term or short-term dependencies among the key strokes; it confines the conventional approaches to considering only the current input. Furthermore, previous research has proposed fixed models for statistical decoding, which cannot adaptively handle hand drift and tap variability that change over time. Although a few works have designed adaptive models, those models require either an additional calibration step or a controlled experimental environment @cite_19 .
{ "cite_N": [ "@cite_19" ], "mid": [ "2762596667", "2950541101", "2124486835", "1795258949" ], "abstract": [ "Abstract Nowadays fast-arriving information flows lay the basis of many data mining applications. Such data streams are usually affected by non-stationary events that eventually change their distribution (concept drift), causing that predictive models trained over these data become obsolete and do not adapt suitably to the new distribution. Specially in online learning scenarios, there is a pressing need for new algorithms that adapt to this change as fast as possible, while maintaining good performance scores. Recent studies have revealed that a good strategy is to construct highly diverse ensembles towards utilizing them shortly after the drift (independently from the type of drift) to obtain good performance scores. However, the existence of the so-called trade-off between stability (performance over stable data concepts) and plasticity (recovery and adaptation after drift events) implies that the construction of the ensemble model should account simultaneously for these two conflicting objectives. In this regard, this work presents a new approach to artificially generate an optimal diversity level when building prediction ensembles once shortly after a drift occurs. The approach uses a Kernel Density Estimation (KDE) method to generate synthetic data, which are subsequently labeled by means a multi-objective optimization method that allows training each model of the ensemble with a different subset of synthetic samples. Computational experiments reveal that the proposed approach can be hybridized with other traditional diversity generation approaches, yielding optimized levels of diversity that render an enhanced recovery from drifts.", "A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of @math adaptively chosen functions on an unknown distribution given @math random samples. We show that, surprisingly, there is a way to estimate an exponential in @math number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question.", "Olshausen and Field (1996) applied the principle of independence maximization by sparse coding to extract features from natural images. 
This leads to the emergence of oriented linear filters that have simultaneous localization in space and in frequency, thus resembling Gabor functions and simple cell receptive fields. In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such “independent feature subspaces” then indicate the values of invariant features.", "Stochastic variational inference makes it possible to approximate posterior distributions induced by large datasets quickly using stochastic optimization. The algorithm relies on the use of fully factorized variational distributions. However, this \"mean-field\" independence approximation limits the fidelity of the posterior approximation, and introduces local optima. We show how to relax the mean-field approximation to allow arbitrary dependencies between global parameters and local hidden variables, producing better parameter estimates by reducing bias, sensitivity to local optima, and sensitivity to hyperparameters." ] }
1907.13463
2966452173
Zeroth-order (gradient-free) method is a class of powerful optimization tool for many machine learning problems because it only needs function values (not gradient) in the optimization. In particular, zeroth-order method is very suitable for many complex problems such as black-box attacks and bandit feedback, whose explicit gradients are difficult or infeasible to obtain. Recently, although many zeroth-order methods have been developed, these approaches still exist two main drawbacks: 1) high function query complexity; 2) not being well suitable for solving the problems with complex penalties and constraints. To address these challenging drawbacks, in this paper, we propose a novel fast zeroth-order stochastic alternating direction method of multipliers (ADMM) method (, ZO-SPIDER-ADMM) with lower function query complexity for solving nonconvex problems with multiple nonsmooth penalties. Moreover, we prove that our ZO-SPIDER-ADMM has the optimal function query complexity of @math for finding an @math -approximate local solution, where @math and @math denote the sample size and dimension of data, respectively. In particular, the ZO-SPIDER-ADMM improves the existing best nonconvex zeroth-order ADMM methods by a factor of @math . Moreover, we propose a fast online ZO-SPIDER-ADMM ( ZOO-SPIDER-ADMM). Our theoretical analysis shows that the ZOO-SPIDER-ADMM has the function query complexity of @math , which improves the existing best result by a factor of @math . Finally, we utilize a task of structured adversarial attack on black-box deep neural networks to demonstrate the efficiency of our algorithms.
ADMM @cite_35 @cite_22 is a popular optimization method for solving composite and constrained problems in machine learning. Owing to its flexibility in splitting the objective function into a loss term and a complex penalty, ADMM can relatively easily solve problems with complicated structured penalties, such as the graph-guided fused lasso @cite_7 , which are too difficult for other popular optimization methods such as proximal gradient methods @cite_14 . Thus, ADMM has been widely studied in recent years @cite_1 @cite_12 . For large-scale optimization, several stochastic ADMM methods @cite_3 @cite_15 @cite_10 @cite_26 have been proposed. ADMM is also successful in solving many nonconvex machine learning problems such as training neural networks @cite_21 . Accordingly, nonconvex ADMM and its stochastic variants have been developed in @cite_34 @cite_33 @cite_9 @cite_20 , and nonconvex stochastic ADMM methods @cite_0 @cite_6 have also been studied.
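As a rough illustration of how ADMM splits a smooth loss from a nonsmooth penalty, the sketch below solves a plain lasso problem in scaled form. It is only a generic example, not the graph-guided fused lasso solver or any of the stochastic/nonconvex variants cited above; the penalty weight, the augmented-Lagrangian parameter and the iteration count are arbitrary choices.

```python
import numpy as np

# Scaled-form ADMM for the lasso: min_x 0.5*||A x - b||^2 + lam*||x||_1,
# using the splitting x - z = 0.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    inv_mat = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for the x-subproblem
    Atb = A.T @ b
    for _ in range(iters):
        x = inv_mat @ (Atb + rho * (z - u))    # smooth-loss subproblem (least squares)
        z = soft_threshold(x + u, lam / rho)   # proximal step on the l1 penalty
        u = u + x - z                          # dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b), 2))  # sparse estimate close to x_true
```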
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_33", "@cite_9", "@cite_21", "@cite_1", "@cite_20", "@cite_3", "@cite_6", "@cite_0", "@cite_15", "@cite_34", "@cite_10", "@cite_12" ], "mid": [ "25321933", "2962853966", "2946968605", "2510516734" ], "abstract": [ "We develop new stochastic optimization methods that are applicable to a wide range of structured regularizations. Basically our methods are combinations of basic stochastic optimization techniques and Alternating Direction Multiplier Method (ADMM). ADMM is a general framework for optimizing a composite function, and has a wide range of applications. We propose two types of online variants of ADMM, which correspond to online proximal gradient descent and regularized dual averaging respectively. The proposed algorithms are computationally efficient and easy to implement. Our methods yield O(1 √T) convergence of the expected risk. Moreover, the online proximal gradient descent type method yields O(log(T) T) convergence for a strongly convex loss. Numerical experiments show effectiveness of our methods in learning tasks with structured sparsity such as overlapped group lasso.", "In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ( (x_0, ,x_p,y) ), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables (x_0, ,x_p,y ), followed by updating the dual variable. We separate the variable y from (x_i )’s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, ( _q ) quasi-norm, Schatten-q quasi-norm ( (0<q<1 )), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the (x_0 )-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with bounded penalty parameter ( ). Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.", "Alternating direction method of multipliers (ADMM) is a popular optimization tool for the composite and constrained problems in machine learning. However, in many machine learning problems such as black-box attacks and bandit feedback, ADMM could fail because the explicit gradients of these problems are difficult or infeasible to obtain. Zeroth-order (gradient-free) methods can effectively solve these problems due to that the objective function values are only required in the optimization. Recently, though there exist a few zeroth-order ADMM methods, they build on the convexity of objective function. 
Clearly, these existing zeroth-order methods are limited in many applications. In the paper, thus, we propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) for solving nonconvex problems with multiple nonsmooth penalties, based on the coordinate smoothing gradient estimator. Moreover, we prove that both the ZO-SVRG-ADMM and ZO-SAGA-ADMM have convergence rate of @math , where @math denotes the number of iterations. In particular, our methods not only reach the best convergence rate @math for the nonconvex optimization, but also are able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints. Finally, we conduct the experiments of black-box binary classification and structured adversarial attack on black-box deep neural network to validate the efficiency of our algorithms.", "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1 √t)) for convex functions and O(log t t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named Graph-Guided SVM is proposed to demonstrate the usefulness of our algorithm." ] }
1907.13463
2966452173
Zeroth-order (gradient-free) method is a class of powerful optimization tool for many machine learning problems because it only needs function values (not gradient) in the optimization. In particular, zeroth-order method is very suitable for many complex problems such as black-box attacks and bandit feedback, whose explicit gradients are difficult or infeasible to obtain. Recently, although many zeroth-order methods have been developed, these approaches still exist two main drawbacks: 1) high function query complexity; 2) not being well suitable for solving the problems with complex penalties and constraints. To address these challenging drawbacks, in this paper, we propose a novel fast zeroth-order stochastic alternating direction method of multipliers (ADMM) method (, ZO-SPIDER-ADMM) with lower function query complexity for solving nonconvex problems with multiple nonsmooth penalties. Moreover, we prove that our ZO-SPIDER-ADMM has the optimal function query complexity of @math for finding an @math -approximate local solution, where @math and @math denote the sample size and dimension of data, respectively. In particular, the ZO-SPIDER-ADMM improves the existing best nonconvex zeroth-order ADMM methods by a factor of @math . Moreover, we propose a fast online ZO-SPIDER-ADMM ( ZOO-SPIDER-ADMM). Our theoretical analysis shows that the ZOO-SPIDER-ADMM has the function query complexity of @math , which improves the existing best result by a factor of @math . Finally, we utilize a task of structured adversarial attack on black-box deep neural networks to demonstrate the efficiency of our algorithms.
The above ADMM methods, however, need to repeatedly calculate gradients of the loss function over the iterations, while in many machine learning problems the gradients of the objective functions are difficult or infeasible to obtain. For example, in adversarial attacks on black-box DNNs @cite_4 @cite_23 , only evaluation values (i.e., function values) are provided. Thus, @cite_18 @cite_16 have proposed zeroth-order online and stochastic ADMM methods for solving some convex problems. More recently, @cite_13 has proposed the nonconvex ZO-SVRG-ADMM and ZO-SAGA-ADMM methods.
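The following sketch illustrates the zeroth-order idea underlying such methods: a gradient is estimated purely from function evaluations along random directions, with no access to explicit gradients. The Gaussian two-point estimator, the smoothing parameter and the direction count used here are illustrative; the cited ADMM methods rely on related (e.g., coordinate-wise smoothing) estimators embedded in the ADMM updates.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x using random directions."""
    rng = rng or np.random.default_rng()
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - fx) / mu * u   # finite-difference slope along u
    return g / num_dirs

# Gradient-free descent on a simple "black-box" quadratic as a usage check.
f = lambda x: np.sum((x - 3.0) ** 2)
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x)
print(np.round(x, 2))  # approaches the minimizer [3, 3, 3, 3, 3]
```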
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_23", "@cite_16", "@cite_13" ], "mid": [ "2946968605", "2531036984", "1523661875", "2798170643" ], "abstract": [ "Alternating direction method of multipliers (ADMM) is a popular optimization tool for the composite and constrained problems in machine learning. However, in many machine learning problems such as black-box attacks and bandit feedback, ADMM could fail because the explicit gradients of these problems are difficult or infeasible to obtain. Zeroth-order (gradient-free) methods can effectively solve these problems due to that the objective function values are only required in the optimization. Recently, though there exist a few zeroth-order ADMM methods, they build on the convexity of objective function. Clearly, these existing zeroth-order methods are limited in many applications. In the paper, thus, we propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) for solving nonconvex problems with multiple nonsmooth penalties, based on the coordinate smoothing gradient estimator. Moreover, we prove that both the ZO-SVRG-ADMM and ZO-SAGA-ADMM have convergence rate of @math , where @math denotes the number of iterations. In particular, our methods not only reach the best convergence rate @math for the nonconvex optimization, but also are able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints. Finally, we conduct the experiments of black-box binary classification and structured adversarial attack on black-box deep neural network to validate the efficiency of our algorithms.", "In the paper, we study the stochastic alternating direction method of multipliers (ADMM) for the nonconvex optimizations, and propose three classes of the nonconvex stochastic ADMM with variance reduction, based on different reduced variance stochastic gradients. Specifically, the first class called the nonconvex stochastic variance reduced gradient ADMM (SVRG-ADMM), uses a multi-stage scheme to progressively reduce the variance of stochastic gradients. The second is the nonconvex stochastic average gradient ADMM (SAG-ADMM), which additionally uses the old gradients estimated in the previous iteration. The third called SAGA-ADMM is an extension of the SAG-ADMM method. Moreover, under some mild conditions, we establish the iteration complexity bound of @math of the proposed methods to obtain an @math -stationary solution of the nonconvex optimizations. In particular, we provide a general framework to analyze the iteration complexity of these nonconvex stochastic ADMM methods with variance reduction. Finally, some numerical experiments demonstrate the effectiveness of our methods.", "In this paper we study the problem of minimizing the average of a large number ( @math ) of smooth convex loss functions. We propose a new method, S2GD (Semi-Stochastic Gradient Descent), which runs for one or several epochs in each of which a single full gradient and a random number of stochastic gradients is computed, following a geometric law. The total work needed for the method to output an @math -accurate solution in expectation, measured in the number of passes over data, or equivalently, in units equivalent to the computation of a single gradient of the loss, is @math , where @math is the condition number. This is achieved by running the method for @math epochs, with a single gradient evaluation and @math stochastic gradient evaluations in each. 
The SVRG method of Johnson and Zhang arises as a special case. If our method is limited to a single epoch only, it needs to evaluate at most @math stochastic gradients. In contrast, SVRG requires @math stochastic gradients. To illustrate our theoretical results, S2GD only needs the workload equivalent to about 2.1 full gradient evaluations to find an @math -accurate solution for a problem with @math and @math .", "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate." ] }
1907.13357
2965376428
We propose a new regularization technique, named Hybrid Spatio-Spectral Total Variation (HSSTV), for hyperspectral (HS) image denoising and compressed sensing. Regularization techniques based on total variation (TV) focus on local differences of an HS image to model its underlying smoothness and have been recognized as a popular approach to HS image restoration. However, existing TVs do not fully exploit underlying spectral correlation in their designs and or require a high computational cost in optimization. Our HSSTV is designed to simultaneously evaluates two types of local differences: direct local spatial differences and local spatio-spectral differences in a unified manner with a balancing weight. This design resolves the said drawbacks of existing TVs. Then, we formulate HS image restoration as a constrained convex optimization problem involving HSSTV and develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving it. In the experiments, we illustrate the advantages of HSSTV over several state-of-the-art methods.
Yuan proposed HTV @cite_13 for HS image denoising. HTV can be seen as a generalization of the standard color TV @cite_21 , and its formulation is given by @math , where @math and @math are the vertical and horizontal differences at the @math th pixel of the @math th band of an HS image, respectively. From this definition, one can see that HTV evaluates spatial piecewise smoothness but does not consider spectral correlation, resulting in spatial oversmoothing. This will be shown empirically in Sec.
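To make the "spatial differences only" character explicit, the sketch below computes a band-coupled spatial TV of this kind on an HS cube: per-band vertical and horizontal differences are grouped at each pixel with an l2 norm and summed over pixels. The exact grouping and norm used by HTV may differ; this is only an illustrative form.

```python
import numpy as np

def spatial_tv(u):
    """Band-coupled spatial TV of an HS cube u with shape (rows, cols, bands)."""
    dv = np.diff(u, axis=0, append=u[-1:, :, :])   # vertical differences D_v u
    dh = np.diff(u, axis=1, append=u[:, -1:, :])   # horizontal differences D_h u
    # l2 grouping over {vertical, horizontal} x bands at each pixel, summed over pixels
    return np.sum(np.sqrt(np.sum(dv**2 + dh**2, axis=2)))

cube = np.random.default_rng(1).random((8, 8, 5))
print(spatial_tv(cube))
```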
{ "cite_N": [ "@cite_21", "@cite_13" ], "mid": [ "2145268773", "2963315679", "1969128393", "2139529730" ], "abstract": [ "A novel color image histogram equalization approach is proposed that exploits the correlation between color components and it is enhanced by a multi-level smoothing technique borrowed from statistical language engineering. Multi-level smoothing aims at dealing efficiently with the problem of unseen color values, either considered independently or in combination with others. It is applied here to the HSI color space for the probability of intensity and the probability of saturation given the intensity, while the hue is left unchanged. Moreover, the proposed approach is extended by an empirical technique, which is based on a hue preserving non-linear transformation, in order to eliminate the gamut problem. This is the second method proposed in the paper. The equalized images by the two methods are compared to those produced by other well-known methods. The better quality of the images equalized by the proposed methods is judged in terms of their visual appeal and objective figures of merit, such as the entropy and the Kullback-Leibler divergence estimates between the resulting color histogram and the multivariate uniform probability density function.", "Most of the existing denoising algorithms are developed for grayscale images. It is not trivial to extend them for color image denoising since the noise statistics in R, G, and B channels can be very different for real noisy images. In this paper, we propose a multi-channel (MC) optimization model for real color image denoising under the weighted nuclear norm minimization (WNNM) framework. We concatenate the RGB patches to make use of the channel redundancy, and introduce a weight matrix to balance the data fidelity of the three channels in consideration of their different noise statistics. The proposed MC-WNNM model does not have an analytical solution. We reformulate it into a linear equality-constrained problem and solve it via alternating direction method of multipliers. Each alternative updating step has a closed-form solution and the convergence can be guaranteed. Experiments on both synthetic and real noisy image datasets demonstrate the superiority of the proposed MC-WNNM over state-of-the-art denoising methods.", "A novel architecture for performing hue-saturation-value (HSV) domain enhancement of digital color images with non-uniform lighting conditions is proposed in this paper for video streaming applications. The approach promotes log-domain computation to eliminate all multiplications, divisions and exponentiations utilizing the effective logarithmic estimation modules. An optimized quadrant symmetric architecture is incorporated into the design of homomorphic filter for the enhancement of intensity value. Efficient modules are also presented for conversion between RGB and HSV color spaces. The design is able to bring out details hidden in shadow regions of the image. It is capable of producing 187.86 million outputs per second (MOPs) on Xilinx's Virtex II XC2V2000-4ff896 field programmable gate array (FPGA) at a clock frequency of 187.86 MHz. 
It can process over 179.1 (1024 X 1024) frames per second and consumes approximately 70.7 and 76.8 less hardware resource with 127 and 280 performance boost when compared to the designs with machine learning algorithm [10], and with separated dynamic and contrast enhancements [11], respectively.", "This paper introduces a novel approach for evaluating the quality of pansharpened multispectral (MS) imagery without resorting to reference originals. Hence, evaluations are feasible at the highest spatial resolution of the panchromatic (PAN) sensor. Wang and Bovik’s image quality index (QI) provides a statistical similarity measurement between two monochrome images. The QI values between any couple of MS bands are calculated before and after fusion and used to define a measurement of spectral distortion. Analogously, QI values between each MS band and the PAN image are calculated before and after fusion to yield a measurement of spatial distortion. The rationale is that such QI values should be unchanged after fusion, i.e., when the spectral information is translated from the coarse scale of the MS data to the fine scale of the PAN image. Experimental results, carried out on very high-resolution Ikonos data and simulated Pleiades data, demonstrate that the results provided by the proposed approach are consistent and in trend with analysis performed on spatially degraded data. However, the proposed method requires no reference originals and is therefore usable in all practical cases." ] }
1907.13357
2965376428
We propose a new regularization technique, named Hybrid Spatio-Spectral Total Variation (HSSTV), for hyperspectral (HS) image denoising and compressed sensing. Regularization techniques based on total variation (TV) focus on local differences of an HS image to model its underlying smoothness and have been recognized as a popular approach to HS image restoration. However, existing TVs do not fully exploit underlying spectral correlation in their designs and or require a high computational cost in optimization. Our HSSTV is designed to simultaneously evaluates two types of local differences: direct local spatial differences and local spatio-spectral differences in a unified manner with a balancing weight. This design resolves the said drawbacks of existing TVs. Then, we formulate HS image restoration as a constrained convex optimization problem involving HSSTV and develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving it. In the experiments, we illustrate the advantages of HSSTV over several state-of-the-art methods.
Addesso proposed to use CTV @cite_57 for HS image inpainting @cite_61 . CTV is defined by @math and evaluates spatial piecewise smoothness using an @math norm. In addition, the method can also use the Schatten- @math norm as @math . CTV can be seen as a generalization of HTV and is equivalent to HTV when @math , @math , and @math . In @cite_61 , the authors experimentally show that CTV with the @math norm achieves the best performance, which means that the limitation of CTV in HS image restoration is the same as that of HTV.
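To illustrate the collaborative grouping idea, the sketch below stacks the spatial differences of all bands at each pixel into a small matrix and applies a Schatten norm (here the nuclear norm), then simply sums over pixels. These particular norm choices are illustrative and are not the best-performing configuration reported in @cite_61 .

```python
import numpy as np

def collaborative_tv(u):
    """Collaborative-TV-style penalty on an HS cube u with shape (rows, cols, bands)."""
    dv = np.diff(u, axis=0, append=u[-1:, :, :])
    dh = np.diff(u, axis=1, append=u[:, -1:, :])
    total = 0.0
    rows, cols, _ = u.shape
    for i in range(rows):
        for j in range(cols):
            G = np.stack([dv[i, j, :], dh[i, j, :]])   # 2 x bands gradient matrix
            total += np.linalg.norm(G, ord='nuc')      # Schatten-1 (nuclear) norm
    return total

cube = np.random.default_rng(2).random((6, 6, 4))
print(collaborative_tv(cube))
```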
{ "cite_N": [ "@cite_57", "@cite_61" ], "mid": [ "2759219183", "2119259145", "2050630265", "2416791088" ], "abstract": [ "Inpainting in hyperspectral imagery is a challenging research area and several methods have been recently developed to deal with this kind of data. In this paper we address missing data restoration via a convex optimization technique with regularization term based on Collaborative Total Variation (CTV). In particular we evaluate the effectiveness of several instances of CTV in conjunction with different dimensionality reduction algorithms.", "High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original TV norm. During the reconstruction process, the pixels at the edges would be gradually identified and given low penalty weight. Our iterative algorithm is implemented on graphics processing unit to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-troso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low-contrast structures and therefore maintain acceptable spatial resolution.", "This paper presents a new patch-based video restoration scheme. By grouping similar patches in the spatiotemporal domain, we formulate the video restoration problem as a joint sparse and low-rank matrix approximation problem. The resulting nuclear norm and @math norm related minimization problem can also be efficiently solved by many recently developed numerical methods. The effectiveness of the proposed video restoration scheme is illustrated on two applications: video denoising in the presence of random-valued noise, and video in-painting for archived films. The numerical experiments indicate that the proposed video restoration method compares favorably against many existing algorithms.", "In this paper, we explore various aspects of fusing LIDAR and color imagery for pedestrian detection in the context of convolutional neural networks (CNNs), which have recently become state-of-art for many vision problems. 
We incorporate LIDAR by up-sampling the point cloud to a dense depth map and then extracting three features representing different aspects of the 3D scene. We then use those features as extra image channels. Specifically, we leverage recent work on HHA [9] (horizontal disparity, height above ground, and angle) representations, adapting the code to work on up-sampled LIDAR rather than Microsoft Kinect depth maps. We show, for the first time, that such a representation is applicable to up-sampled LIDAR data, despite its sparsity. Since CNNs learn a deep hierarchy of feature representations, we then explore the question: At what level of representation should we fuse this additional information with the original RGB image channels? We use the KITTI pedestrian detection dataset for our exploration. We first replicate the finding that region-CNNs (R-CNNs) [8] can outperform the original proposal mechanism using only RGB images, but only if fine-tuning is employed. Then, we show that: 1) using HHA features and RGB images performs better than RGB-only, even without any fine-tuning using large RGB web data, 2) fusing RGB and HHA achieves the strongest results if done late, but, under a parameter or computational budget, is best done at the early to middle layers of the hierarchical representation, which tend to represent midlevel features rather than low (e.g. edges) or high (e.g. object class decision) level features, 3) some of the less successful methods have the most parameters, indicating that increased classification accuracy is not simply a function of increased capacity in the neural network." ] }
1907.13357
2965376428
We propose a new regularization technique, named Hybrid Spatio-Spectral Total Variation (HSSTV), for hyperspectral (HS) image denoising and compressed sensing. Regularization techniques based on total variation (TV) focus on local differences of an HS image to model its underlying smoothness and have been recognized as a popular approach to HS image restoration. However, existing TVs do not fully exploit underlying spectral correlation in their designs and or require a high computational cost in optimization. Our HSSTV is designed to simultaneously evaluates two types of local differences: direct local spatial differences and local spatio-spectral differences in a unified manner with a balancing weight. This design resolves the said drawbacks of existing TVs. Then, we formulate HS image restoration as a constrained convex optimization problem involving HSSTV and develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving it. In the experiments, we illustrate the advantages of HSSTV over several state-of-the-art methods.
He proposed ASSTV @cite_58 for HS image denoising. ASSTV simultaneously evaluates direct spatial and spectral differences and is defined by @math , where @math , @math , and @math are the vertical, horizontal, and spectral differences for the @math th pixel of an HS image, respectively, and @math , @math , and @math are balancing parameters for each difference (Fig. , blue lines). Although these parameters play a very important role, their suitable values change with each HS image and noise intensity, which makes them very difficult to set.
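As a rough illustration of the anisotropic spatio-spectral TV idea described above, the Python/NumPy sketch below sums weighted absolute first-order differences of an HS cube along the vertical, horizontal, and spectral axes; the weights tau_v, tau_h, tau_s play the role of the balancing parameters mentioned in the text. This is a simplified sketch under these assumptions, not the exact ASSTV formulation of @cite_58 .

```python
import numpy as np

def asstv(u, tau_v=1.0, tau_h=1.0, tau_s=1.0):
    """Anisotropic spatio-spectral TV sketch for an HS cube u of shape (H, W, B).

    Sums weighted absolute first-order differences along the vertical,
    horizontal, and spectral axes (an l1-type anisotropic penalty).
    """
    dv = np.diff(u, axis=0)  # vertical spatial differences
    dh = np.diff(u, axis=1)  # horizontal spatial differences
    ds = np.diff(u, axis=2)  # spectral differences
    return (tau_v * np.abs(dv).sum()
            + tau_h * np.abs(dh).sum()
            + tau_s * np.abs(ds).sum())

# Example: the value is sensitive to the balancing parameters, which is why
# their per-image tuning is described as difficult in the text.
u = np.random.rand(64, 64, 31)
print(asstv(u, tau_v=1.0, tau_h=1.0, tau_s=0.5))
```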
{ "cite_N": [ "@cite_58" ], "mid": [ "2021046129", "1964231221", "1969128393", "2039596145" ], "abstract": [ "Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images that combine the high spectral and high spatial resolutions of HSIs and MSIs, respectively, is a data fusion problem that has been the focus of recent active research due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector total variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the nonquadratic and nonsmooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally “live” in a low-dimensional subspace and by tailoring the split augmented Lagrangian shrinkage algorithm (SALSA), which is an instance of the alternating direction method of multipliers (ADMM), to this optimization problem, by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state of the art, as illustrated in a series of experiments with simulated and real-life data.", "Purpose: To investigate a novel locally adaptive projection space denoising algorithm for low-dose CT data. Methods: The denoising algorithm is based on bilateral filtering, which smooths values using a weighted average in a local neighborhood, with weights determined according to both spatial proximity and intensity similarity between the center pixel and the neighboring pixels. This filtering is locally adaptive and can preserve important edge information in the sinogram, thus maintaining high spatial resolution. A CTnoise model that takes into account the bowtie filter and patient-specific automatic exposure control effects is also incorporated into the denoising process. The authors evaluated the noise-resolution properties of bilateral filtering incorporating such a CTnoise model in phantom studies and preliminary patient studies with contrast-enhanced abdominal CT exams. Results: On a thin wire phantom, the noise-resolution properties were significantly improved with the denoising algorithm compared to commercial reconstruction kernels. The noise-resolution properties on low-dose (40 mA s) data after denoising approximated those of conventional reconstructions at twice the dose level. A separate contrast plate phantom showed improved depiction of low-contrast plates with the denoising algorithm over conventional reconstructions when noise levels were matched. Similar improvement in noise-resolution properties was found on CT colonography data and on five abdominal low-energy (80 kV) CT exams. 
In each abdominal case, a board-certified subspecialized radiologist rated the denoised 80 kV images markedly superior in image quality compared to the commercially available reconstructions, and denoising improved the image quality to the point where the 80 kV images alone were considered to be of diagnostic quality. Conclusions: The results demonstrate that bilateral filtering incorporating a CTnoise model can achieve a significantly better noise-resolution trade-off than a series of commercial reconstruction kernels. This improvement in noise-resolution properties can be used for improving image quality in CT and can be translated into substantial dose reduction.", "A novel architecture for performing hue-saturation-value (HSV) domain enhancement of digital color images with non-uniform lighting conditions is proposed in this paper for video streaming applications. The approach promotes log-domain computation to eliminate all multiplications, divisions and exponentiations utilizing the effective logarithmic estimation modules. An optimized quadrant symmetric architecture is incorporated into the design of homomorphic filter for the enhancement of intensity value. Efficient modules are also presented for conversion between RGB and HSV color spaces. The design is able to bring out details hidden in shadow regions of the image. It is capable of producing 187.86 million outputs per second (MOPs) on Xilinx's Virtex II XC2V2000-4ff896 field programmable gate array (FPGA) at a clock frequency of 187.86 MHz. It can process over 179.1 (1024 X 1024) frames per second and consumes approximately 70.7 and 76.8 less hardware resource with 127 and 280 performance boost when compared to the designs with machine learning algorithm [10], and with separated dynamic and contrast enhancements [11], respectively.", "The amount of noise included in a hyperspectral image limits its application and has a negative impact on hyperspectral image classification, unmixing, target detection, and so on. In hyperspectral images, because the noise intensity in different bands is different, to better suppress the noise in the high-noise-intensity bands and preserve the detailed information in the low-noise-intensity bands, the denoising strength should be adaptively adjusted with the noise intensity in the different bands. Meanwhile, in the same band, there exist different spatial property regions, such as homogeneous regions and edge or texture regions; to better reduce the noise in the homogeneous regions and preserve the edge and texture information, the denoising strength applied to pixels in different spatial property regions should also be different. Therefore, in this paper, we propose a hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction. To reduce the computational load in the denoising process, the split Bregman iteration algorithm is employed to optimize the spectral-spatial hyperspectral TV model and accelerate the speed of hyperspectral image denoising. A number of experiments illustrate that the proposed approach can satisfactorily realize the spectral-spatial adaptive mechanism in the denoising process, and superior denoising results are produced." ] }
1907.13432
2966528571
We propose two neural network based mixture models in this article. The proposed mixture models are explicit in nature. The explicit models have analytical forms with the advantages of computing likelihood and efficiency of generating samples. Computation of likelihood is an important aspect of our models. Expectation-maximization based algorithms are developed for learning parameters of the proposed models. We provide sufficient conditions to realize the expectation-maximization based learning. The main requirements are invertibility of neural networks that are used as generators and Jacobian computation of functional form of the neural networks. The requirements are practically realized using a flow-based neural network. In our first mixture model, we use multiple flow-based neural networks as generators. Naturally the model is complex. A single latent variable is used as the common input to all the neural networks. The second mixture model uses a single flow-based neural network as a generator to reduce complexity. The single generator has a latent variable input that follows a Gaussian mixture distribution. We demonstrate efficiency of proposed mixture models through extensive experiments for generating samples and maximum likelihood based classification.
While GANs have achieved great success in many applications, they are known to suffer from a mode-dropping problem, where the generator of a GAN is unable to capture all modes of the underlying probability distribution of the data @cite_13 . To address diversity in data and to model multiple modes of a distribution, variants of generative models have been developed and the use of multiple generators has been considered. For instance, methods based on minibatch discrimination @cite_27 and feature representations @cite_10 are used to construct new discriminators that encourage GANs to generate diverse samples. Multiple Wasserstein GANs @cite_18 are used in @cite_13 with an appropriate mutual-information-based regularization to encourage diversity among the samples generated by different GANs. A mixture GAN approach is proposed in @cite_26 , using multiple generators and a multi-class classification formulation to encourage sample diversity. Multi-agent diverse GAN @cite_12 similarly employs @math generators, but uses a @math -class discriminator instead of a typical binary discriminator to increase the diversity of generated samples. These works perform implicit probability distribution modeling, and thus the prior distribution over generators cannot be inferred when multiple generators are used.
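To make the multiple-generator idea above concrete, the sketch below draws a generator index from a categorical mixing distribution and then samples from the selected generator, in the spirit of the mixture-of-generators approaches cited; in those implicit approaches the mixing distribution is fixed or implicit, which is the point made at the end of the paragraph about the prior over generators not being inferred. All names, shapes, and the mixing distribution here are illustrative assumptions, not taken from any of the cited implementations.

```python
import torch

def sample_from_mixture(generators, mixing_probs, latent_dim=64, n=16):
    """Draw n samples from a mixture of GAN generators.

    generators   : list of K torch modules mapping a latent z to a sample
    mixing_probs : length-K tensor of mixture weights (the 'prior' over generators)
    """
    # Pick which generator produces each sample, then feed it Gaussian noise.
    k = torch.multinomial(mixing_probs, num_samples=n, replacement=True)
    z = torch.randn(n, latent_dim)
    out = [generators[int(ki)](z[i:i + 1]) for i, ki in enumerate(k)]
    return torch.cat(out, dim=0), k  # samples and the generator index for each
```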
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_26", "@cite_27", "@cite_10", "@cite_12" ], "mid": [ "2787223504", "2607448608", "2962900302", "2596763562" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "This paper describes an intuitive generalization to the Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high probability modes. Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator specific GAN objective function with a diversity enforcing term that encourage different generators to generate diverse samples using a user-defined similarity based function. (2) We modify the discriminator objective function where along with finding the real and fake samples, the discriminator has to predict the generator which generated the given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally we show that our framework is able to produce high quality diverse samples for the challenging tasks such as image face generation and image-to-image translation. 
We also show that it is capable of learning a better feature representation in an unsupervised setting.", "We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial network (GAN). Our idea is intuitive but proven to be very effective, especially in addressing some key limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus it exploits the complementary statistical properties from these divergences to effectively diversify the estimated density in capturing multi-modes. We term our method dual discriminator generative adversarial nets (D2GAN) which, unlike GAN, has two discriminators; and together with a generator, it also has the analogy of a minimax game, wherein a discriminator rewards high scores for samples from data distribution whilst another discriminator, conversely, favoring data from the generator, and the generator produces data to fool both two discriminators. We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapsing problem. We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN's variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good quality and diverse samples over baselines, and the capability of our method to scale up to ImageNet database.", "Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally." ] }
1907.13432
2966528571
We propose two neural network based mixture models in this article. The proposed mixture models are explicit in nature. The explicit models have analytical forms with the advantages of computing likelihood and efficiency of generating samples. Computation of likelihood is an important aspect of our models. Expectation-maximization based algorithms are developed for learning parameters of the proposed models. We provide sufficient conditions to realize the expectation-maximization based learning. The main requirements are invertibility of neural networks that are used as generators and Jacobian computation of functional form of the neural networks. The requirements are practically realized using a flow-based neural network. In our first mixture model, we use multiple flow-based neural networks as generators. Naturally the model is complex. A single latent variable is used as the common input to all the neural networks. The second mixture model uses a single flow-based neural network as a generator to reduce complexity. The single generator has a latent variable input that follows a Gaussian mixture distribution. We demonstrate efficiency of proposed mixture models through extensive experiments for generating samples and maximum likelihood based classification.
Typically, for a GAN, the latent variable is assumed to follow a known and fixed distribution, e.g., Gaussian. The latent signal for a given data sample cannot be obtained directly, since generators, which are usually based on neural networks, are non-invertible. The mapping from a data sample to its corresponding latent signal is therefore approximated by neural networks in different ways. @cite_15 and @cite_33 propose to train a generative model and an inverse mapping (also a neural network) from the data sample to the latent signal simultaneously, using the adversarial training method of GANs. Alternatively, @cite_8 proposes to approximately minimize a Kullback-Leibler divergence to estimate the mapping from the data sample to the latent variable, which leads to a nontrivial probability density ratio estimation problem.
{ "cite_N": [ "@cite_15", "@cite_33", "@cite_8" ], "mid": [ "2963105487", "2787223504", "2607491080", "2767375635" ], "abstract": [ "Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an “inverse model,” a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion , to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website ( https: github.com ToniCreswell InvertingGAN ).", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "Traditional generative adversarial networks (GAN) and many of its variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is from the true data distribution. 
A recent advance called the WGAN based on Wasserstein distance can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode collapse issues that are common in the GAN training. In this work, we aim at improving on the WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carrying out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We call this method Gang of GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks. We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over baseline WGAN.", "Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. Our experimental results validate that the proposed operations give higher quality samples compared to the original operations." ] }
1907.13418
2965317766
Deep learning (DL) has shown great potential in medical image enhancement problems, such as super-resolution or image synthesis. However, to date, little consideration has been given to uncertainty quantification over the output image. Here we introduce methods to characterise different components of uncertainty in such problems and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for @math uncertainty through a heteroscedastic noise model and for @math uncertainty through approximate Bayesian inference, and integrate the two to quantify @math uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and separately quantify the effects of intrinsic and parameter uncertainty therein. The methods are evaluated for super-resolution of two different signal representations of diffusion MR images---DTIs and Mean Apparent Propagator MRI---and their derived quantities such as MD and FA, on multiple datasets of both healthy and pathological human brains. Results highlight three key benefits of uncertainty modelling for improving the safety of DL-based image enhancement systems. Firstly, incorporating uncertainty improves the predictive performance even when test data departs from training data. Secondly, the predictive uncertainty highly correlates with errors, and is therefore capable of detecting predictive "failures". Results demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the output images. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level "explanations" for the performance by quantifying how much uncertainty arises from the inherent difficulty of the task or the limited training examples.
However, within the context of medical image enhancement, these lines of research performed only limited validation of the quality and utility of uncertainty modelling. In this work, we formalise and extend the preliminary ideas in Tanno @cite_81 and provide a comprehensive set of experiments to evaluate the proposed uncertainty modelling techniques in a diverse set of datasets, which vary in demographics, scanner types, acquisition protocols or pathology. Moreover, with the exception of @cite_81 , none of the previous methods model different components of uncertainty, namely intrinsic and parameter uncertainty. Our method accounts for both, and provides conclusive evidence that this improves performance thanks to different regularisation effects. In addition, we propose a method to decompose predictive uncertainty over an arbitrary function of the output image (e.g. morphological measurements) into its sources, in order to provide a high-level explanation of model performance on the given input.
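The decomposition of predictive uncertainty into its sources, as discussed above, can be illustrated with a standard variance split: assuming a heteroscedastic network that outputs a per-voxel mean and variance, and stochastic forward passes (e.g., MC-dropout-style sampling) that approximate the posterior over weights, the predictive variance separates into an intrinsic (aleatoric) term and a parameter (epistemic) term. The NumPy sketch below is a generic illustration of that split under these assumptions, not the exact procedure of @cite_81 .

```python
import numpy as np

def decompose_predictive_uncertainty(means, variances):
    """Split predictive variance into intrinsic and parameter components.

    means, variances : arrays of shape (T, ...) holding the predicted mean and
                       the predicted (heteroscedastic) variance from T
                       stochastic forward passes on the same input.
    """
    intrinsic = variances.mean(axis=0)   # average aleatoric variance
    parameter = means.var(axis=0)        # spread of the means across passes
    predictive = intrinsic + parameter   # total predictive variance
    return predictive, intrinsic, parameter

# Example with T = 20 stochastic passes over a 32x32 output patch.
T = 20
means = np.random.randn(T, 32, 32)
variances = np.abs(np.random.randn(T, 32, 32))
total, intr, par = decompose_predictive_uncertainty(means, variances)
```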
{ "cite_N": [ "@cite_81" ], "mid": [ "2610571781", "2886155082", "2884442510", "1568207135" ], "abstract": [ "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.", "Automated medical image segmentation, specifically using deep learning, has shown outstanding performance in semantic segmentation tasks. However, these methods rarely quantify their uncertainty, which may lead to errors in downstream analysis. In this work we propose to use Bayesian neural networks to quantify uncertainty within the domain of semantic segmentation. We also propose a method to convert voxel-wise segmentation uncertainty into volumetric uncertainty, and calibrate the accuracy and reliability of confidence intervals of derived measurements. When applied to a tumour volume estimation application, we demonstrate that by using such modelling of uncertainty, deep learning systems can be made to report volume estimates with well-calibrated error-bars, making them safer for clinical use. We also show that the uncertainty estimates extrapolate to unseen data, and that the confidence intervals are robust in the presence of artificial noise. This could be used to provide a form of quality control and quality assurance, and may permit further adoption of deep learning tools in the clinic.", "Accurate synthesis of a full 3D MR image containing tumours from available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician as well as downstream inference methods with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolution neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the tumour into subtypes (e.g. enhancement, core). The hypothesis is that this would focus the network to perform accurate synthesis in the area of the tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the proposed method gives better performance than state-of-the art methods in terms of established global evaluation metrics (e.g. PSNR), (2) replacing real MR volumes with the synthesized MRI does not lead to significant degradation in tumour and sub-structure segmentation accuracy. 
The system further provides uncertainty estimates based on Monte Carlo (MC) dropout [11] for the synthesized volume at each voxel, permitting quantification of the system’s confidence in the output at each location.", "This work addresses the challenging problem of simultaneously segmenting multiple anatomical structures in highly varied CT scans. We propose the entangled decision forest (EDF) as a new discriminative classifier which augments the state of the art decision forest, resulting in higher prediction accuracy and shortened decision time. Our main contribution is two-fold. First, we propose entangling the binary tests applied at each tree node in the forest, such that the test result can depend on the result of tests applied earlier in the same tree and at image points offset from the voxel to be classified. This is demonstrated to improve accuracy and capture long-range semantic context. Second, during training, we propose injecting randomness in a guided way, in which node feature types and parameters are randomly drawn from a learned (non-uniform) distribution. This further improves classification accuracy. We assess our probabilistic anatomy segmentation technique using a labeled database of CT image volumes of 250 different patients from various scan protocols and scanner vendors. In each volume, 12 anatomical structures have been manually segmented. The database comprises highly varied body shapes and sizes, a wide array of pathologies, scan resolutions, and diverse contrast agents. Quantitative comparisons with state of the art algorithms demonstrate both superior test accuracy and computational efficiency." ] }
1907.13496
2965224253
Techniques from computational topology, in particular persistent homology, are becoming increasingly relevant for data analysis. Their stable metrics permit the use of many distance-based data analysis methods, such as multidimensional scaling, while providing a firm theoretical ground. Many modern machine learning algorithms, however, are based on kernels. This paper presents persistence indicator functions (PIFs), which summarize persistence diagrams, i.e., feature descriptors in topological data analysis. PIFs can be calculated and compared in linear time and have many beneficial properties, such as the availability of a kernel-based similarity measure. We demonstrate their usage in common data analysis scenarios, such as confidence set estimation and classification of complex structured data.
Recognizing that persistence diagrams can also be analyzed at multiple scales in order to facilitate hierarchical comparisons, some approaches provide approximations of persistence diagrams based on, e.g., a smoothing parameter. Among these, the stable kernel of @cite_21 is particularly suited for topological machine learning. Another approach by @cite_9 transforms a persistence diagram into a finite-dimensional vector by means of a probability distribution. Both methods require choosing a set of parameters (for kernel computations), while PIFs are fully parameter-free. Moreover, PIFs also permit other applications, such as mean calculations and statistical hypothesis testing, which pure kernel methods cannot provide.
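A persistence indicator function of the kind summarized above can be illustrated as the step function that counts how many persistence intervals are active at a given threshold; the sketch below evaluates such a count for a diagram given as (birth, death) pairs. This is a simplified, parameter-free illustration consistent with the description in the text, not the reference implementation of the paper.

```python
import numpy as np

def persistence_indicator(diagram, thresholds):
    """Evaluate a persistence-indicator-style function at a set of thresholds.

    diagram    : array of shape (n, 2) with (birth, death) pairs
    thresholds : 1-D array of values at which to count active intervals
    Returns the number of intervals containing each threshold.
    """
    births, deaths = diagram[:, 0], diagram[:, 1]
    t = np.asarray(thresholds)[:, None]
    active = (births[None, :] <= t) & (t < deaths[None, :])
    return active.sum(axis=1)

# Example: a toy diagram with three intervals.
dgm = np.array([[0.0, 1.0], [0.2, 0.8], [0.5, 2.0]])
print(persistence_indicator(dgm, np.linspace(0.0, 2.0, 5)))  # [1 3 1 1 0]
```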
{ "cite_N": [ "@cite_9", "@cite_21" ], "mid": [ "2964237352", "2788076228", "1960384938", "2952711884" ], "abstract": [ "Many data sets can be viewed as a noisy sampling of an underlying space, and tools from topological data analysis can characterize this structure for the purpose of knowledge discovery. One such tool is persistent homology, which provides a multiscale description of the homological features within a data set. A useful representation of this homological information is a persistence diagram (PD). Efforts have been made to map PDs into spaces with additional structure valuable to machine learning tasks. We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs. The discriminatory power of PIs is compared against existing methods, showing significant performance gains. We explore the use of PIs with vector-based machine learning tools, such as linear sparse support vector machines, which identify features containing discriminating topological information. Finally, high accuracy inference of parameter values from the dynamic output of a discrete dynamical system (the linked twist map) and a partial differential equation (the anisotropic Kuramoto-Sivashinsky equation) provide a novel application of the discriminatory power of PIs.", "Persistence diagrams play a fundamental role in Topological Data Analysis where they are used as topological descriptors of filtrations built on top of data. They consist in discrete multisets of points in the plane @math that can equivalently be seen as discrete measures in @math . When the data come as a random point cloud, these discrete measures become random measures whose expectation is studied in this paper. First, we show that for a wide class of filtrations, including the C ech and Rips-Vietoris filtrations, the expected persistence diagram, that is a deterministic measure on @math , has a density with respect to the Lebesgue measure. Second, building on the previous result we show that the persistence surface recently introduced in [Adams & al., Persistence images: a stable vector representation of persistent homology] can be seen as a kernel estimator of this density. We propose a cross-validation scheme for selecting an optimal bandwidth, which is proven to be a consistent procedure to estimate the density.", "Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes.", "Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. 
In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes." ] }
1907.13496
2965224253
Techniques from computational topology, in particular persistent homology, are becoming increasingly relevant for data analysis. Their stable metrics permit the use of many distance-based data analysis methods, such as multidimensional scaling, while providing a firm theoretical ground. Many modern machine learning algorithms, however, are based on kernels. This paper presents persistence indicator functions (PIFs), which summarize persistence diagrams, i.e., feature descriptors in topological data analysis. PIFs can be calculated and compared in linear time and have many beneficial properties, such as the availability of a kernel-based similarity measure. We demonstrate their usage in common data analysis scenarios, such as confidence set estimation and classification of complex structured data.
Recently, Bubenik @cite_16 introduced persistence landscapes, a functional summary of persistence diagrams. Within his framework, PIFs can be considered to represent a summary (or projection) of the persistence landscape. Our definition of PIFs is more straightforward and easier to implement, however. Since PIFs share several properties of persistence landscapes---most importantly the existence of simple function-space distance measures---this paper uses experimental setups similar to those of Bubenik @cite_16 and @cite_5 .
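The "simple function-space distance measures" mentioned above can be illustrated with a numerical L^p distance between two summary functions sampled on a common grid, for example the indicator-function values produced by the earlier persistence_indicator sketch. The grid, the exponent, and the toy values below are arbitrary choices made purely for illustration.

```python
import numpy as np

def summary_distance(f_a, f_b, grid, p=1):
    """Approximate L^p distance between two summary functions on a common grid.

    f_a, f_b : function values (e.g., indicator counts) sampled on the grid
    grid     : 1-D array of strictly increasing evaluation points
    """
    diff = np.abs(np.asarray(f_a, dtype=float) - np.asarray(f_b, dtype=float))
    return np.trapz(diff ** p, grid) ** (1.0 / p)

# Example with two step-function summaries sampled on a coarse grid.
grid = np.linspace(0.0, 2.0, 5)
print(summary_distance([1, 3, 1, 1, 0], [2, 2, 1, 0, 0], grid, p=1))
```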
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "2056761334", "2964237352", "2788076228", "2408772186" ], "abstract": [ "Topological persistence has proven to be a key concept for the study of real-valued functions defined over topological spaces. Its validity relies on the fundamental property that the persistence diagrams of nearby functions are close. However, existing stability results are restricted to the case of continuous functions defined over triangulable spaces. In this paper, we present new stability results that do not suffer from the above restrictions. Furthermore, by working at an algebraic level directly, we make it possible to compare the persistence diagrams of functions defined over different spaces, thus enabling a variety of new applications of the concept of persistence. Along the way, we extend the definition of persistence diagram to a larger setting, introduce the notions of discretization of a persistence module and associated pixelization map, define a proximity measure between persistence modules, and show how to interpolate between persistence modules, thereby lending a more analytic character to this otherwise algebraic setting. We believe these new theoretical concepts and tools shed new light on the theory of persistence, in addition to simplifying proofs and enabling new applications.", "Many data sets can be viewed as a noisy sampling of an underlying space, and tools from topological data analysis can characterize this structure for the purpose of knowledge discovery. One such tool is persistent homology, which provides a multiscale description of the homological features within a data set. A useful representation of this homological information is a persistence diagram (PD). Efforts have been made to map PDs into spaces with additional structure valuable to machine learning tasks. We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs. The discriminatory power of PIs is compared against existing methods, showing significant performance gains. We explore the use of PIs with vector-based machine learning tools, such as linear sparse support vector machines, which identify features containing discriminating topological information. Finally, high accuracy inference of parameter values from the dynamic output of a discrete dynamical system (the linked twist map) and a partial differential equation (the anisotropic Kuramoto-Sivashinsky equation) provide a novel application of the discriminatory power of PIs.", "Persistence diagrams play a fundamental role in Topological Data Analysis where they are used as topological descriptors of filtrations built on top of data. They consist in discrete multisets of points in the plane @math that can equivalently be seen as discrete measures in @math . When the data come as a random point cloud, these discrete measures become random measures whose expectation is studied in this paper. First, we show that for a wide class of filtrations, including the C ech and Rips-Vietoris filtrations, the expected persistence diagram, that is a deterministic measure on @math , has a density with respect to the Lebesgue measure. Second, building on the previous result we show that the persistence surface recently introduced in [Adams & al., Persistence images: a stable vector representation of persistent homology] can be seen as a kernel estimator of this density. 
We propose a cross-validation scheme for selecting an optimal bandwidth, which is proven to be a consistent procedure to estimate the density.", "Persistent homology captures the evolution of topological features of a model as a parameter changes. The most commonly used summary statistics of persistent homology are the barcode and the persistence diagram. Another summary statistic, the persistence landscape, was recently introduced by Bubenik. It is a functional summary, so it is easy to calculate sample means and variances, and it is straightforward to construct various test statistics. Implementing a permutation test we detect conformational changes between closed and open forms of the maltose-binding protein, a large biomolecule consisting of 370 amino acid residues. Furthermore, persistence landscapes can be applied to machine learning methods. A hyperplane from a support vector machine shows the clear separation between the closed and open proteins conformations. Moreover, because our approach captures dynamical properties of the protein our results may help in identifying residues susceptible to ligand binding; we show that the majority of active site residues and allosteric pathway residues are located in the vicinity of the most persistent loop in the corresponding filtered Vietoris-Rips complex. This finding was not observed in the classical anisotropic network model." ] }
1907.13286
2964427276
Recommender systems are known to suffer from the popularity bias problem: popular (i.e. frequently rated) items get a lot of exposure while less popular ones are under-represented in the recommendations. Research in this area has been mainly focusing on finding ways to tackle this issue by increasing the number of recommended long-tail items or otherwise the overall catalog coverage. In this paper, however, we look at this problem from the users' perspective: we want to see how popularity bias causes the recommendations to deviate from what the user expects to get from the recommender system. We define three different groups of users according to their interest in popular items (Niche, Diverse and Blockbuster-focused) and show the impact of popularity bias on the users in each group. Our experimental results on a movie dataset show that in many recommendation algorithms the recommendations the users get are extremely concentrated on popular items even if a user is interested in long-tail and non-popular items showing an extreme bias disparity.
Finally, @cite_12 compared different recommendation algorithms in terms of accuracy and popularity bias. In that paper, the authors observed that some algorithms concentrate more on popular items than others. In our work, we are mainly interested in examining popularity bias from the perspective of users' expectations.
{ "cite_N": [ "@cite_12" ], "mid": [ "1224564842", "2748058847", "2732691751", "2962847184" ], "abstract": [ "Most real-world recommender systems are deployed in a commercial context or designed to represent a value-adding service, e.g., on shopping or Social Web platforms, and typical success indicators for such systems include conversion rates, customer loyalty or sales numbers. In academic research, in contrast, the evaluation and comparison of different recommendation algorithms is mostly based on offline experimental designs and accuracy or rank measures which are used as proxies to assess an algorithm's recommendation quality. In this paper, we show that popular recommendation techniques--despite often being similar when compared with the help of accuracy measures--can be quite different with respect to which items they recommend. We report the results of an in-depth analysis in which we compare several recommendations strategies from different perspectives, including accuracy, catalog coverage and their bias to recommend popular items. Our analyses reveal that some recent techniques that perform well with respect to accuracy measures focus their recommendations on a tiny fraction of the item spectrum or recommend mostly top sellers. We analyze the reasons for some of these biases in terms of algorithmic design and parameterization and show how the characteristics of the recommendations can be altered by hyperparameter tuning. Finally, we propose two novel algorithmic schemes to counter these popularity biases.", "Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently and less popular ones rarely, if at all. However, less popular, long-tail items are precisely those that are often desirable recommendations. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, the experimental results using two data sets show that it is possible to improve coverage of long tail items without substantial loss of ranking performance.", "Algorithms that favor popular items are used to help us select among many choices, from top-ranked search engine results to highly-cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, credible information sources, and important discoveries–in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content “bubble up” in practice. Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of a cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the trade-off between quality and popularity. Below and above a critical exploration cost, popularity bias is more likely to hinder quality. But we find a narrow intermediate regime of user attention where an optimal balance exists: choosing what is popular can help promote high-quality items to the top. 
These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.", "Most recommender systems recommend a list of items. The user examines the list, from the first item to the last, and often chooses the first attractive item and does not examine the rest. This type of user behavior can be modeled by the cascade model. In this work, we study cascading bandits, an online learning variant of the cascade model where the goal is to recommend K most attractive items from a large set of L candidate items. We propose two algorithms for solving this problem, which are based on the idea of linear generalization. The key idea in our solutions is that we learn a predictor of the attraction probabilities of items from their features, as opposing to learning the attraction probability of each item independently as in the existing work. This results in practical learning algorithms whose regret does not depend on the number of items L. We bound the regret of one algorithm and comprehensively evaluate the other on a range of recommendation problems. The algorithm performs well and outperforms all baselines." ] }
1907.13351
2964404395
Synthesizing geometrical shapes from human brain activities is an interesting and meaningful but very challenging topic. Recently, the advancements of deep generative models like Generative Adversarial Networks (GANs) have supported the object generation from neurological signals. However, the Electroencephalograph (EEG)-based shape generation still suffer from the low realism problem. In particular, the generated geometrical shapes lack clear edges and fail to contain necessary details. In light of this, we propose a novel multi-task generative adversarial network to convert the individual's EEG signals evoked by geometrical shapes to the original geometry. First, we adopt a Convolutional Neural Network (CNN) to learn highly informative latent representation for the raw EEG signals, which is vital for the subsequent shape reconstruction. Next, we build the discriminator based on multi-task learning to distinguish and classify fake samples simultaneously, where the mutual promotion between different tasks improves the quality of the recovered shapes. Then, we propose a semantic alignment constraint in order to force the synthesized samples to approach the real ones in pixel-level, thus producing more compelling shapes. The proposed approach is evaluated over a local dataset and the results show that our model outperforms the competitive state-of-the-art baselines.
Recent research in neuroscience and neuroimaging @cite_11 indicates that human perception of visual stimuli can be decoded with neuroimaging techniques. Specifically, several works give evidence that brain signals can be decoded into the perceived content by using functional Magnetic Resonance Imaging (fMRI) and EEG. Some works use fMRI signals to reconstruct the image seen by the individual and achieve acceptable performance @cite_2 @cite_10 . These studies show the potential of fMRI-based image reconstruction for brain signal decoding; however, fMRI faces a number of crucial issues, such as expensive acquisition equipment and low portability. Apart from fMRI-based methods, there are a few EEG-based image reconstruction methods, as EEG signals are less expensive to acquire @cite_3 @cite_4 . As a typical investigation, Brain2image @cite_3 encodes the raw EEG signals into a latent space that contains the distinctive information and then feeds them to a Conditional Generative Adversarial Network (CGAN) for image reconstruction. @cite_4 applies a very similar algorithmic framework.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2097267381", "1966461713", "2167237727", "2112180451" ], "abstract": [ "SUMMARY Perceptualexperienceconsistsofanenormousnumber of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as itisimpracticaltospecifybrainactivityforallpossible images.Inthisstudy,wereconstructedvisualimages by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binarycontrast, 10 3 10-patch images (2 100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. Theresultssuggestthatourapproachprovidesaneffective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.", "Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects’ brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publically available image and video databases demonstrate the effectiveness of the proposed framework.", "Neural encoding and decoding provide perspectives for understanding neural representations of sensory inputs. Recent functional magnetic resonance imaging fMRI studies have succeeded in building prediction models for encoding and decoding numerous stimuli by representing a complex stimulus as a combination of simple elements. While arbitrary visual images were reconstructed using a modular model that combined the outputs of decoder modules for multiscale local image bases elements, the shapes of the image bases were heuristically determined. In this work, we propose a method to establish mappings between the stimulus and the brain by automatically extracting modules from measured data. We develop a model based on Bayesian canonical correlation analysis, in which each module is modeled by a latent variable that relates a set of pixels in a visual image to a set of voxels in an fMRI activity pattern. 
The estimated mapping from a latent variable to pixels can be regarded as an image basis. We show that the model estimates a modular representation with spatially localized multiscale image bases. Further, using the estimated mappings, we derive encoding and decoding models that produce accurate predictions for brain activity and stimulus images. Our approach thus provides a novel means of revealing neural representations of stimuli by automatically extracting modules, which can be used to generate effective prediction models for encoding and decoding.", "Summary Recent studies have used fMRI signals from early visual areas to reconstruct simple geometric patterns. Here, we demonstrate a new Bayesian decoder that uses fMRI signals from early and anterior visual areas to reconstruct complex natural images. Our decoder combines three elements: a structural encoding model that characterizes responses in early visual areas, a semantic encoding model that characterizes responses in anterior visual areas, and prior information about the structure and semantic content of natural images. By combining all these elements, the decoder produces reconstructions that accurately reflect both the spatial structure and semantic category of the objects contained in the observed natural image. Our results show that prior information has a substantial effect on the quality of natural image reconstructions. We also demonstrate that much of the variance in the responses of anterior visual areas to complex natural images is explained by the semantic category of the image alone." ] }
1907.13351
2964404395
Synthesizing geometrical shapes from human brain activities is an interesting and meaningful but very challenging topic. Recently, the advancements of deep generative models like Generative Adversarial Networks (GANs) have supported the object generation from neurological signals. However, the Electroencephalograph (EEG)-based shape generation still suffer from the low realism problem. In particular, the generated geometrical shapes lack clear edges and fail to contain necessary details. In light of this, we propose a novel multi-task generative adversarial network to convert the individual's EEG signals evoked by geometrical shapes to the original geometry. First, we adopt a Convolutional Neural Network (CNN) to learn highly informative latent representation for the raw EEG signals, which is vital for the subsequent shape reconstruction. Next, we build the discriminator based on multi-task learning to distinguish and classify fake samples simultaneously, where the mutual promotion between different tasks improves the quality of the recovered shapes. Then, we propose a semantic alignment constraint in order to force the synthesized samples to approach the real ones in pixel-level, thus producing more compelling shapes. The proposed approach is evaluated over a local dataset and the results show that our model outperforms the competitive state-of-the-art baselines.
Most visual object reconstruction methods are based on Generative Adversarial Networks (GANs) and their variations. GANs @cite_5 , as typical deep learning frameworks, are widely used in image generation. A standard GAN is composed of a generator network, which generates images from randomly sampled noise, and a discriminator network, which tries to distinguish generated images from real ones. The original GAN, however, offers no control over the generation process. To mitigate this, the conditional GAN (CGAN) was proposed @cite_1 , which incorporates conditional information (e.g., labels) to control the generation process. The Auxiliary Classifier GAN (ACGAN) @cite_6 further improves GAN-based image synthesis: it demonstrates that adding more structure to the GAN latent space together with a specialized cost function yields higher-quality samples, and a task-specific classification branch in the discriminator enhances its discriminability.
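The ACGAN idea mentioned above can be summarised with a small, hedged sketch: the discriminator shares a trunk between an adversarial (real vs. fake) head and an auxiliary task-specific classification head, and the two losses are combined. The simple MLP trunk and layer sizes are assumptions for illustration, not the cited architecture.

import torch
import torch.nn as nn

class ACDiscriminator(nn.Module):
    def __init__(self, img_pixels=64 * 64 * 3, n_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(img_pixels, 512), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(512, 1)           # real vs. fake score
        self.cls_head = nn.Linear(512, n_classes)   # auxiliary class logits

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.cls_head(h)

disc = ACDiscriminator()
fake = torch.randn(8, 64 * 64 * 3)                          # hypothetical generated batch
adv_score, class_logits = disc(fake)
adv_loss = nn.BCEWithLogitsLoss()(adv_score, torch.zeros(8, 1))            # "fake" targets
cls_loss = nn.CrossEntropyLoss()(class_logits, torch.randint(0, 10, (8,)))
total_d_loss = adv_loss + cls_loss                          # ACGAN-style combined objective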
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_6" ], "mid": [ "2787223504", "2798844427", "2607448608", "2607491080" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "This paper proposes an unpaired learning method for image enhancement. Given a set of photographs with the desired characteristics, the proposed method learns a photo enhancer which transforms an input image into an enhanced image with those characteristics. The method is based on the framework of two-way generative adversarial networks (GANs) with several improvements. First, we augment the U-Net with global features and show that it is more effective. The global U-Net acts as the generator in our GAN model. Second, we improve Wasserstein GAN (WGAN) with an adaptive weighting scheme. With this scheme, training converges faster and better, and is less sensitive to parameters than WGAN-GP. Finally, we propose to use individual batch normalization layers for generators in two-way GANs. It helps generators better adapt to their own input distributions. All together, they significantly improve the stability of GAN training for our application. Both quantitative and visual results show that the proposed method is effective for enhancing images.", "This paper describes an intuitive generalization to the Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high probability modes. 
Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator specific GAN objective function with a diversity enforcing term that encourage different generators to generate diverse samples using a user-defined similarity based function. (2) We modify the discriminator objective function where along with finding the real and fake samples, the discriminator has to predict the generator which generated the given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally we show that our framework is able to produce high quality diverse samples for the challenging tasks such as image face generation and image-to-image translation. We also show that it is capable of learning a better feature representation in an unsupervised setting.", "Traditional generative adversarial networks (GAN) and many of its variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is from the true data distribution. A recent advance called the WGAN based on Wasserstein distance can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode collapse issues that are common in the GAN training. In this work, we aim at improving on the WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carrying out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We call this method Gang of GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks. We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over baseline WGAN." ] }
1907.13351
2964404395
Synthesizing geometrical shapes from human brain activities is an interesting and meaningful but very challenging topic. Recently, the advancements of deep generative models like Generative Adversarial Networks (GANs) have supported the object generation from neurological signals. However, the Electroencephalograph (EEG)-based shape generation still suffer from the low realism problem. In particular, the generated geometrical shapes lack clear edges and fail to contain necessary details. In light of this, we propose a novel multi-task generative adversarial network to convert the individual's EEG signals evoked by geometrical shapes to the original geometry. First, we adopt a Convolutional Neural Network (CNN) to learn highly informative latent representation for the raw EEG signals, which is vital for the subsequent shape reconstruction. Next, we build the discriminator based on multi-task learning to distinguish and classify fake samples simultaneously, where the mutual promotion between different tasks improves the quality of the recovered shapes. Then, we propose a semantic alignment constraint in order to force the synthesized samples to approach the real ones in pixel-level, thus producing more compelling shapes. The proposed approach is evaluated over a local dataset and the results show that our model outperforms the competitive state-of-the-art baselines.
Most brain-signal-based image reconstruction work relies on fMRI. Due to the drawbacks of fMRI (e.g., low temporal resolution, high cost, and low portability), we focus on EEG-based geometric shape reconstruction. Compared to typical EEG-based work such as Brain2image @cite_3 , our approach has several technical advantages: 1) we concentrate on the influence that geometric attributes have on the EEG signals, whereas @cite_3 focuses on images with a large number of attributes; 2) we adopt a CNN instead of an RNN to learn the latent EEG features, which reduces training time while achieving similar accuracy; 3) we add an auxiliary task-specific classifier to improve the discriminability of the discriminator; and 4) we propose a semantic alignment method to generate more realistic images.
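A minimal sketch of the semantic alignment idea in point 4), under the assumption that it can be expressed as a pixel-level term added to the generator's adversarial loss; the L1 distance and the weight lambda_align are illustrative choices, not the exact formulation.

import torch
import torch.nn.functional as F

def generator_loss(adv_score_fake, fake_imgs, real_imgs, lambda_align=10.0):
    # adversarial term: the generator tries to make the discriminator output "real"
    adv = F.binary_cross_entropy_with_logits(adv_score_fake, torch.ones_like(adv_score_fake))
    # semantic alignment term: pull each synthesized image towards its paired real image
    align = F.l1_loss(fake_imgs, real_imgs)
    return adv + lambda_align * align

fake, real = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)   # hypothetical batch
loss = generator_loss(torch.randn(8, 1), fake, real)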
{ "cite_N": [ "@cite_3" ], "mid": [ "2284198383", "2729876886", "2097267381", "2526782364" ], "abstract": [ "Abstract Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N = 53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N = 135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data set the convolutional neuronal network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.", "Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. 
Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73 and 97.62 , respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fet al brains in reconstructed fet al brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97 ), where the other methods performed poorly due to the non-standard orientation and geometry of the fet al brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.", "SUMMARY Perceptualexperienceconsistsofanenormousnumber of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as itisimpracticaltospecifybrainactivityforallpossible images.Inthisstudy,wereconstructedvisualimages by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binarycontrast, 10 3 10-patch images (2 100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. Theresultssuggestthatourapproachprovidesaneffective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method." ] }
1907.12924
2966152490
Service robots are expected to operate effectively in human-centric environments for long periods of time. In such realistic scenarios, fine-grained object categorization is as important as basic-level object categorization. We tackle this problem by proposing an open-ended object recognition approach which concurrently learns both the object categories and the local features for encoding objects. In this work, each object is represented using a set of general latent visual topics and category-specific dictionaries. The general topics encode the common patterns of all categories, while the category-specific dictionary describes the content of each category in details. The proposed approach discovers both sets of general and specific representations in an unsupervised fashion and updates them incrementally using new object views. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability and object classification performance. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. Regarding computation time, the best result was obtained with a Bag-of-Words method followed by a variant of the Latent Dirichlet Allocation approach.
In the last decade, various research groups have made substantial progress towards learning approaches that support online and incremental object category learning @cite_2 @cite_17 . In recent studies on object recognition, much attention has been given to deep Convolutional Neural Networks (CNNs). It is now clear that when a fixed set of categories and a large number of training examples per category are available, CNN-based approaches yield good results; notable recent works include @cite_0 @cite_15 . In open-ended scenarios, these assumptions are not satisfied, and the robot needs to learn new concepts on-site using very few training examples. While deep learning is a very powerful and useful tool, there are several limitations to applying CNNs in open-ended domains. In general, CNN approaches are incremental by nature but not open-ended, since the inclusion of new categories enforces a restructuring of the network topology. Furthermore, training a CNN-based approach requires long training times, and training with only a few examples per category poses a challenge for these methods. In contrast, @cite_13 @cite_16 allow for concurrent learning and recognition. Our approach falls into this category.
{ "cite_N": [ "@cite_0", "@cite_2", "@cite_15", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2787358858", "1929903369", "2774918944", "2963998559" ], "abstract": [ "In recent years, Convolutional Neural Networks (CNNs) have shown remarkable performance in many computer vision tasks such as object recognition and detection. However, complex training issues, such as \"catastrophic forgetting\" and hyper-parameter tuning, make incremental learning in CNNs a difficult challenge. In this paper, we propose a hierarchical deep neural network, with CNNs at multiple levels, and a corresponding training method for lifelong learning. The network grows in a tree-like manner to accommodate the new classes of data without losing the ability to identify the previously trained classes. The proposed network was tested on CIFAR-10 and CIFAR-100 datasets, and compared against the method of fine tuning specific layers of a conventional CNN. We obtained comparable accuracies and achieved 40 and 20 reduction in training effort in CIFAR-10 and CIFAR 100 respectively. The network was able to organize the incoming classes of data into feature-driven super-classes. Our model improves upon existing hierarchical CNN models by adding the capability of self-growth and also yields important observations on feature selective classification.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.", "Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. 
choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.", "During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem." ] }
1907.12924
2966152490
Service robots are expected to operate effectively in human-centric environments for long periods of time. In such realistic scenarios, fine-grained object categorization is as important as basic-level object categorization. We tackle this problem by proposing an open-ended object recognition approach which concurrently learns both the object categories and the local features for encoding objects. In this work, each object is represented using a set of general latent visual topics and category-specific dictionaries. The general topics encode the common patterns of all categories, while the category-specific dictionary describes the content of each category in details. The proposed approach discovers both sets of general and specific representations in an unsupervised fashion and updates them incrementally using new object views. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability and object classification performance. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. Regarding computation time, the best result was obtained with a Bag-of-Words method followed by a variant of the Latent Dirichlet Allocation approach.
Several lines of research have assessed the added value of structural information. @cite_14 extended an online version of Latent Dirichlet Allocation (LDA) and proposed an incremental Gibbs sampler for LDA (here referred to as I-LDA). In online LDA and I-LDA the number of categories is fixed, whereas in our approach the number of categories grows over time. @cite_13 proposed an open-ended object category learning approach that only learns specific topics per category, while our approach not only learns a set of general topics for basic-level categorization but also learns a category-specific dictionary for fine-grained categorization.
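The contrast drawn above (a fixed number of categories in online LDA and I-LDA versus a growing category set here) can be illustrated with a toy sketch; the class and data structures below are assumptions for illustration only.

class OpenEndedMemory:
    def __init__(self):
        self.category_dictionaries = {}       # category label -> list of feature vectors

    def teach(self, label, features):
        # a previously unseen label simply creates a new entry, so the category set grows
        self.category_dictionaries.setdefault(label, []).extend(features)

    def known_categories(self):
        return list(self.category_dictionaries)

memory = OpenEndedMemory()
memory.teach("mug", [[0.1, 0.9], [0.2, 0.8]])
memory.teach("bottle", [[0.7, 0.1]])          # a new category is added on the fly
print(memory.known_categories())              # ['mug', 'bottle']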
{ "cite_N": [ "@cite_14", "@cite_13" ], "mid": [ "2752711721", "2170246630", "1880262756", "2171706135" ], "abstract": [ "Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Standard LDA model suffers the problem that the topic assignment of each word is independent and lacks the mechanism to utilize the rich prior background knowledge to learn semantically coherent topics. To address this problem, in this paper, we propose a model called Entity Correlation Latent Dirichlet Allocation (EC-LDA) by incorporating constraints derived from entity correlations as the prior knowledge into LDA topic model. Different from other knowledge-based topic models which extract the knowledge information directly from the train dataset itself or even from the human judgements, for our work, we take advantage of the prior knowledge from the external knowledge base (Freebase 1, in our experiment). Hence, our approach is more suitable to widely kinds of text corpora in different scenarios. We fit our proposed model using Gibbs sampling. Experiment results demonstrate the effectiveness of our model compared with standard LDA.", "This paper introduces LDA-G, a scalable Bayesian approach to finding latent group structures in large real-world graph data. Existing Bayesian approaches for group discovery (such as Infinite Relational Models) have only been applied to small graphs with a couple of hundred nodes. LDA-G (short for Latent Dirichlet Allocation for Graphs) utilizes a well-known topic modeling algorithm to find latent group structure. Specifically, we modify Latent Dirichlet Allocation (LDA) to operate on graph data instead of text corpora. Our modifications reflect the differences between real-world graph data and text corpora (e.g., a node's neighbor count vs. a document's word count). In our empirical study, we apply LDA-G to several large graphs (with thousands of nodes) from PubMed (a scientific publication repository). We compare LDA-G's quantitative performance on link prediction with two existing approaches: one Bayesian (namely, Infinite Relational Model) and one non-Bayesian (namely, Cross-association). On average, LDA-G outperforms IRM by 15 and Cross-association by 25 (in terms of area under the ROC curve). Furthermore, we demonstrate that LDA-G can discover useful qualitative information.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "Given a set of images containing multiple object categories, we seek to discover those categories and their image locations without supervision. We achieve this using generative models from the statistical text literature: probabilistic Latent Semantic Analysis (pLSA), and Latent Dirichlet Allocation (LDA). 
In text analysis these are used to discover topics in a corpus using the bag-of-words document representation. Here we discover topics as object categories, so that an image containing instances of several categories is modelled as a mixture of topics. The models are applied to images by using a visual analogue of a word, formed by vector quantizing SIFT like region descriptors. We investigate a set of increasingly demanding scenarios, starting with image sets containing only two object categories through to sets containing multiple categories (including airplanes, cars, faces, motorbikes, spotted cats) and background clutter. The object categories sample both intra-class and scale variation, and both the categories and their approximate spatial layout are found without supervision. We also demonstrate classification of unseen images and images containing multiple objects. Performance of the proposed unsupervised method is compared to the semi-supervised approach of [7].1 1This work was sponsored in part by the EU Project CogViSys, the University of Oxford, Shell Oil, and the National Geospatial-Intelligence Agency." ] }
1907.12924
2966152490
Service robots are expected to operate effectively in human-centric environments for long periods of time. In such realistic scenarios, fine-grained object categorization is as important as basic-level object categorization. We tackle this problem by proposing an open-ended object recognition approach which concurrently learns both the object categories and the local features for encoding objects. In this work, each object is represented using a set of general latent visual topics and category-specific dictionaries. The general topics encode the common patterns of all categories, while the category-specific dictionary describes the content of each category in details. The proposed approach discovers both sets of general and specific representations in an unsupervised fashion and updates them incrementally using new object views. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability and object classification performance. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. Regarding computation time, the best result was obtained with a Bag-of-Words method followed by a variant of the Latent Dirichlet Allocation approach.
Assume that at time @math (i.e., the first teaching action) a dictionary is learned for category @math , denoted as @math , which represents the distribution of 3D shape features observed up to time @math . Later, at time @math , a new training instance, represented as a set of spin-images, is taught by a teacher for category @math (i.e., supervised learning). The teaching instruction triggers the robot to retrieve the current dictionary of the category as well as the representation of the new object view, and to update the relevant dictionary using an incremental K-means algorithm @cite_18 (i.e., unsupervised learning). Such a category-specific dictionary highlights the differences between objects from different categories and, as a consequence, improves object recognition performance.
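A hedged sketch of the kind of incremental K-means update described above: each feature of the new object view is assigned to its nearest dictionary word (cluster centre), which is then shifted by a running-mean step. The dictionary size, descriptor dimensionality and the exact update rule are assumptions, not the paper's code.

import numpy as np

def update_dictionary(centres, counts, new_features):
    # centres: (k, d) current dictionary words; counts: (k,) samples assigned to each word so far
    for f in new_features:                                    # new_features: (n, d)
        j = np.argmin(np.linalg.norm(centres - f, axis=1))    # nearest dictionary word
        counts[j] += 1
        centres[j] += (f - centres[j]) / counts[j]            # incremental mean update
    return centres, counts

centres = np.random.rand(50, 153)       # e.g. 50 words over 153-D spin-image descriptors (assumed sizes)
counts = np.ones(50)
new_view = np.random.rand(30, 153)      # features of the newly taught object view
centres, counts = update_dictionary(centres, counts, new_view)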
{ "cite_N": [ "@cite_18" ], "mid": [ "2162583786", "1982405594", "1963932623", "2027805700" ], "abstract": [ "For the task of visual categorization, the learning model is expected to be endowed with discriminative visual feature representation and flexibilities in processing many categories. Many existing approaches are designed based on a flat category structure, or rely on a set of pre-computed visual features, hence may not be appreciated for dealing with large numbers of categories. In this paper, we propose a novel dictionary learning method by taking advantage of hierarchical category correlation. For each internode of the hierarchical category structure, a discriminative dictionary and a set of classification models are learnt for visual categorization, and the dictionaries in different layers are learnt to exploit the discriminative visual properties of different granularity. Moreover, the dictionaries in lower levels also inherit the dictionary of ancestor nodes, so that categories in lower levels are described with multi-scale visual information using our dictionary learning approach. Experiments on Image Net object data subset and SUN397 scene dataset demonstrate that our approach achieves promising performance on data with large numbers of classes compared with some state-of-the-art methods, and is more efficient in processing large numbers of categories.", "The employed dictionary plays an important role in sparse representation or sparse coding based image reconstruction and classification, while learning dictionaries from the training data has led to state-of-the-art results in image classification tasks. However, many dictionary learning models exploit only the discriminative information in either the representation coefficients or the representation residual, which limits their performance. In this paper we present a novel dictionary learning method based on the Fisher discrimination criterion. A structured dictionary, whose atoms have correspondences to the subject class labels, is learned, with which not only the representation residual can be used to distinguish different classes, but also the representation coefficients have small within-class scatter and big between-class scatter. The classification scheme associated with the proposed Fisher discrimination dictionary learning (FDDL) model is consequently presented by exploiting the discriminative information in both the representation residual and the representation coefficients. The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.", "A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called \"discriminative sparse-code error\" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. 
It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.", "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition." ] }
1907.13025
2964650603
Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently called the attention of computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on spatial structure of the skeleton joints, in which the temporal dynamics of the sequence is encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate more temporal dynamics to the representation making it able to capture longrange joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition outperforming the state-of-the-art on NTU RGB+D 120 dataset.
@cite_33 @cite_49 present a skeleton representation, named Joint Trajectory Maps (JTMs), that encodes both the spatial configuration and the dynamics of joint trajectories into three texture images through color encoding. The authors apply rotations to the skeleton data to mimic multiple views and to enlarge the data, overcoming the drawback that CNNs are usually not view invariant. JTMs are generated by projecting the trajectories onto the three orthogonal planes. To encode motion direction in the JTM, a hue colormap is used to "color" the joint trajectories over the action period. Motion magnitude is encoded into saturation and brightness, based on the claim that changes in motion result in texture in the JTMs. Finally, the authors individually fine-tune three AlexNet @cite_26 CNNs (one for each JTM) to perform classification.
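The JTM construction can be sketched roughly as below, assuming normalised joint coordinates; the exact colormap, magnitude scaling and image size are illustrative guesses rather than the authors' settings. Hue varies over the action period and saturation/brightness follow motion magnitude, as described above.

import numpy as np
from matplotlib.colors import hsv_to_rgb

def joint_trajectory_map(joints, plane=(0, 1), size=128):
    # joints: (frames, n_joints, 3) skeleton sequence with coordinates assumed in [0, 1]
    img = np.zeros((size, size, 3))
    frames = joints.shape[0]
    for t in range(1, frames):
        motion = np.linalg.norm(joints[t] - joints[t - 1], axis=1)   # per-joint magnitude
        for j, p in enumerate(joints[t]):
            x, y = (np.clip(p[list(plane)], 0, 1) * (size - 1)).astype(int)
            hue = t / frames                                         # colour over the action period
            sat = val = float(np.clip(motion[j] * 10, 0, 1))         # magnitude into saturation/brightness
            img[y, x] = hsv_to_rgb([hue, sat, val])
    return img

# one map per orthogonal projection plane, each of which would feed its own fine-tuned CNN
planes = [(0, 1), (0, 2), (1, 2)]
maps = [joint_trajectory_map(np.random.rand(60, 25, 3), plane=pl) for pl in planes]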
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_49" ], "mid": [ "2769356789", "2593146028", "2797382244", "2761860076" ], "abstract": [ "Human skeleton joints are popular for action analysis since they can be easily extracted from videos to discard background noises. However, current skeleton representations do not fully benefit from machine learning with CNNs. We propose \"Skepxels\" a spatio-temporal representation for skeleton sequences to fully exploit the \"local\" correlations between joints using the 2D convolution kernels of CNN. We transform skeleton videos into images of flexible dimensions using Skepxels and develop a CNN-based framework for effective human action recognition using the resulting images. Skepxels encode rich spatio-temporal information about the skeleton joints in the frames by maximizing a unique distance metric, defined collaboratively over the distinct joint arrangements used in the skelet al image. Moreover, they are flexible in encoding compound semantic notions such as location and speed of the joints. The proposed action recognition exploits the representation in a hierarchical manner by first capturing the micro-temporal relations between the skeleton joints with the Skepxels and then exploiting their macro-temporal relations by computing the Fourier Temporal Pyramids over the CNN features of the skelet al images. We extend the Inception-ResNet CNN architecture with the proposed method and improve the state-of-the-art accuracy by 4.4 on the large scale NTU human activity dataset. On the medium-sized N-UCLA and UTH-MHAD datasets, our method outperforms the existing results by 5.7 and 9.3 respectively.", "Sequence-based view invariant transform can effectively cope with view variations.Enhanced skeleton visualization method encodes spatio-temporal skeletons as visual and motion enhanced color images in a compact yet distinctive manner.Multi-stream convolutional neural networks fusion model is able to explore complementary properties among different types of enhanced color images.Our method consistently achieves the highest accuracies on four datasets, including the largest and most challenging NTU RGB+D dataset for skeleton-based action recognition. Human action recognition based on skeletons has wide applications in humancomputer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. Whats more, it remains a problem to effectively represent spatio-temporal skeleton sequences. To solve these problems in one goal, this work presents an enhanced skeleton visualization method for view invariant human action recognition. Our method consists of three stages. First, a sequence-based view invariant transform is developed to eliminate the effect of view variations on spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied on color images to enhance their local patterns. Third, a convolutional neural networks-based model is adopted to extract robust and discriminative features from color images. The final action class scores are generated by decision level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method.", "Action recognition with 3D skeleton sequences became popular due to its speed and robustness. 
The recently proposed convolutional neural networks (CNNs)-based methods show a good performance in learning spatio–temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN-based methods, there existed two problems that potentially limit the performance. First, previous skeleton representations were generated by chaining joints with a fixed order. The corresponding semantic meaning was unclear and the structural information among the joints was lost. Second, previous models did not have an ability to focus on informative joints. The attention mechanism was important for skeleton-based action recognition because different joints contributed unequally toward the correct recognition. To solve these two problems, we proposed a novel CNN-based method for skeleton-based action recognition. We first redesigned the skeleton representations with a depth-first tree traversal order, which enhanced the semantic meaning of skeleton images and better preserved the associated structural information. We then proposed the general two-branch attention architecture that automatically focused on spatio–temporal key stages and filtered out unreliable joint predictions. Based on the proposed general architecture, we designed a global long-sequence attention network with refined branch structures. Furthermore, in order to adjust the kernel’s spatio–temporal aspect ratios and better capture long-term dependencies, we proposed a sub-sequence attention network (SSAN) that took sub-image sequences as inputs. We showed that the two-branch attention architecture could be combined with the SSAN to further improve the performance. Our experiment results on the NTU RGB+D data set and the SBU kinetic interaction data set outperformed the state of the art. The model was further validated on noisy estimated poses from the subsets of the UCF101 data set and the kinetics data set.", "Motivated by the promising performance achieved by deep learning, an effective yet simple method is proposed to encode the spatio-temporal information of skeleton sequences into color texture images, referred to as joint distance maps (JDMs), and convolutional neural networks are employed to exploit the discriminative features from the JDMs for human action and interaction recognition. The pair-wise distances between joints over a sequence of single or multiple person skeletons are encoded into color variations to capture temporal information. The efficacy of the proposed method has been verified by the state-of-the-art results on the large RGB+D Dataset and small UTD-MHAD Dataset in both single-view and cross-view settings." ] }
1907.13025
2964650603
Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently called the attention of computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on spatial structure of the skeleton joints, in which the temporal dynamics of the sequence is encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate more temporal dynamics to the representation making it able to capture longrange joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition outperforming the state-of-the-art on NTU RGB+D 120 dataset.
To overcome the sparsity of the data generated by a skeleton sequence, @cite_23 represent the temporal dynamics of the sequence by generating four skeleton representation images. Their approach is close to the method of @cite_47 ; however, they compute the relative positions of the joints with respect to four reference joints, arranging the joints as a chain and concatenating the joints of each body part to the reference joints, resulting in four different skeleton representations. According to the authors, such a structure incorporates different spatial relationships between the joints. Finally, the skeleton images are resized and each channel of the four representations is used as input to a pre-trained VGG19 @cite_30 architecture for feature extraction.
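A small, hedged sketch of building one such skeleton image, under the assumption that joint coordinates relative to a reference joint are stacked per frame into a matrix, normalised and resized for the pre-trained CNN; the reference joint indices, normalisation and sizes are illustrative.

import numpy as np
import cv2

def skeleton_image(joints, ref_joint=0, out_size=(224, 224)):
    # joints: (frames, n_joints, 3); express every joint relative to the chosen reference joint
    rel = joints - joints[:, ref_joint:ref_joint + 1, :]
    mat = rel.reshape(joints.shape[0], -1).T                   # rows: joint coordinates, columns: time
    mat = (mat - mat.min()) / (np.ptp(mat) + 1e-8)             # normalise to [0, 1]
    return cv2.resize((mat * 255).astype(np.uint8), out_size)  # resize to the CNN input size

seq = np.random.rand(80, 25, 3)                                      # hypothetical 80-frame, 25-joint sequence
images = [skeleton_image(seq, ref_joint=r) for r in (0, 4, 8, 12)]   # four assumed reference joints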
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_23" ], "mid": [ "2797382244", "2769356789", "2604321021", "2021150171" ], "abstract": [ "Action recognition with 3D skeleton sequences became popular due to its speed and robustness. The recently proposed convolutional neural networks (CNNs)-based methods show a good performance in learning spatio–temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN-based methods, there existed two problems that potentially limit the performance. First, previous skeleton representations were generated by chaining joints with a fixed order. The corresponding semantic meaning was unclear and the structural information among the joints was lost. Second, previous models did not have an ability to focus on informative joints. The attention mechanism was important for skeleton-based action recognition because different joints contributed unequally toward the correct recognition. To solve these two problems, we proposed a novel CNN-based method for skeleton-based action recognition. We first redesigned the skeleton representations with a depth-first tree traversal order, which enhanced the semantic meaning of skeleton images and better preserved the associated structural information. We then proposed the general two-branch attention architecture that automatically focused on spatio–temporal key stages and filtered out unreliable joint predictions. Based on the proposed general architecture, we designed a global long-sequence attention network with refined branch structures. Furthermore, in order to adjust the kernel’s spatio–temporal aspect ratios and better capture long-term dependencies, we proposed a sub-sequence attention network (SSAN) that took sub-image sequences as inputs. We showed that the two-branch attention architecture could be combined with the SSAN to further improve the performance. Our experiment results on the NTU RGB+D data set and the SBU kinetic interaction data set outperformed the state of the art. The model was further validated on noisy estimated poses from the subsets of the UCF101 data set and the kinetics data set.", "Human skeleton joints are popular for action analysis since they can be easily extracted from videos to discard background noises. However, current skeleton representations do not fully benefit from machine learning with CNNs. We propose \"Skepxels\" a spatio-temporal representation for skeleton sequences to fully exploit the \"local\" correlations between joints using the 2D convolution kernels of CNN. We transform skeleton videos into images of flexible dimensions using Skepxels and develop a CNN-based framework for effective human action recognition using the resulting images. Skepxels encode rich spatio-temporal information about the skeleton joints in the frames by maximizing a unique distance metric, defined collaboratively over the distinct joint arrangements used in the skelet al image. Moreover, they are flexible in encoding compound semantic notions such as location and speed of the joints. The proposed action recognition exploits the representation in a hierarchical manner by first capturing the micro-temporal relations between the skeleton joints with the Skepxels and then exploiting their macro-temporal relations by computing the Fourier Temporal Pyramids over the CNN features of the skelet al images. 
We extend the Inception-ResNet CNN architecture with the proposed method and improve the state-of-the-art accuracy by 4.4 on the large scale NTU human activity dataset. On the medium-sized N-UCLA and UTH-MHAD datasets, our method outperforms the existing results by 5.7 and 9.3 respectively.", "This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.", "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skelet al feature, referred to as skelet al quad. Further, the use of a Fisher kernel representation is suggested to describe the skelet al quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues." ] }
1907.13025
2964650603
Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently attracted the attention of the computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on the spatial structure of the skeleton joints, in which the temporal dynamics of the sequence are encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values, aggregating more temporal dynamics into the representation and making it able to capture long-range joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition, outperforming the state-of-the-art on the NTU RGB+D 120 dataset.
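To illustrate the kind of motion encoding described above, here is a rough numpy sketch of computing per-joint displacement magnitude and orientation at a given temporal scale. It is not the SkeleMotion implementation; the 2D joint layout, array shapes, and the two example scales are assumptions for illustration only (the actual method works on 3D skeletons and assembles the values into an image representation).

```python
import numpy as np

def motion_magnitude_orientation(joints, scale=1):
    """joints: (T, J, 2) array of 2D joint positions over T frames.
    Returns the magnitude and orientation of joint displacement at the given
    temporal scale (difference between frames t and t + scale)."""
    diff = joints[scale:] - joints[:-scale]                # (T - scale, J, 2)
    magnitude = np.linalg.norm(diff, axis=-1)              # (T - scale, J)
    orientation = np.arctan2(diff[..., 1], diff[..., 0])   # radians in [-pi, pi]
    return magnitude, orientation

# Toy usage: 30 frames, 25 joints, two temporal scales.
joints = np.random.rand(30, 25, 2)
for scale in (1, 5):
    mag, ori = motion_magnitude_orientation(joints, scale)
    print(scale, mag.shape, ori.shape)
```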
@cite_46 argue that chaining all joints in a fixed order lacks clear semantic meaning and loses skeleton structural information. To address this, @cite_46 proposed a representation named Tree Structure Skeleton Image (TSSI) that preserves spatial relations. Their method traverses a skeleton tree in depth-first order, under the premise that the fewer edges separate two joints, the more relevant the joint pair is. The generated representation is then quantized into an image and resized before being fed to a ResNet-50 @cite_39 CNN architecture.
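To make the traversal idea concrete, the following is a minimal sketch (not the authors' code) of how a depth-first traversal over a skeleton tree can produce the joint order used to build a TSSI-style image. The adjacency list, joint names, and the choice to revisit a joint while backtracking are illustrative assumptions; real datasets such as NTU RGB+D define 25 joints.

```python
import numpy as np

# Hypothetical skeleton tree (adjacency list) with a small subset of joints.
SKELETON_TREE = {
    "spine": ["neck", "left_hip", "right_hip"],
    "neck": ["head", "left_shoulder", "right_shoulder"],
    "left_shoulder": ["left_elbow"],
    "right_shoulder": ["right_elbow"],
    "left_hip": ["left_knee"],
    "right_hip": ["right_knee"],
    "head": [], "left_elbow": [], "right_elbow": [],
    "left_knee": [], "right_knee": [],
}

def depth_first_joint_order(root="spine"):
    """Return joints in depth-first order, re-emitting a joint when backtracking,
    so that spatially adjacent joints stay adjacent in the generated image rows."""
    order = []
    def visit(joint):
        order.append(joint)
        for child in SKELETON_TREE[joint]:
            visit(child)
            order.append(joint)  # emit the parent again on the way back
    visit(root)
    return order

def tssi_like_image(sequence, joint_index, order):
    """sequence: (T, J, 3) array of 3D joint coordinates; one column per frame."""
    rows = [sequence[:, joint_index[j], :] for j in order]
    return np.stack(rows, axis=0)  # shape (len(order), T, 3), later resized

if __name__ == "__main__":
    order = depth_first_joint_order()
    joint_index = {j: i for i, j in enumerate(SKELETON_TREE)}
    seq = np.random.rand(40, len(SKELETON_TREE), 3)  # 40 frames of toy data
    print(order)
    print(tssi_like_image(seq, joint_index, order).shape)
```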
{ "cite_N": [ "@cite_46", "@cite_39" ], "mid": [ "2797382244", "2021150171", "2769356789", "2761860076" ], "abstract": [ "Action recognition with 3D skeleton sequences became popular due to its speed and robustness. The recently proposed convolutional neural networks (CNNs)-based methods show a good performance in learning spatio–temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN-based methods, there existed two problems that potentially limit the performance. First, previous skeleton representations were generated by chaining joints with a fixed order. The corresponding semantic meaning was unclear and the structural information among the joints was lost. Second, previous models did not have an ability to focus on informative joints. The attention mechanism was important for skeleton-based action recognition because different joints contributed unequally toward the correct recognition. To solve these two problems, we proposed a novel CNN-based method for skeleton-based action recognition. We first redesigned the skeleton representations with a depth-first tree traversal order, which enhanced the semantic meaning of skeleton images and better preserved the associated structural information. We then proposed the general two-branch attention architecture that automatically focused on spatio–temporal key stages and filtered out unreliable joint predictions. Based on the proposed general architecture, we designed a global long-sequence attention network with refined branch structures. Furthermore, in order to adjust the kernel’s spatio–temporal aspect ratios and better capture long-term dependencies, we proposed a sub-sequence attention network (SSAN) that took sub-image sequences as inputs. We showed that the two-branch attention architecture could be combined with the SSAN to further improve the performance. Our experiment results on the NTU RGB+D data set and the SBU kinetic interaction data set outperformed the state of the art. The model was further validated on noisy estimated poses from the subsets of the UCF101 data set and the kinetics data set.", "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skelet al feature, referred to as skelet al quad. Further, the use of a Fisher kernel representation is suggested to describe the skelet al quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues.", "Human skeleton joints are popular for action analysis since they can be easily extracted from videos to discard background noises. 
However, current skeleton representations do not fully benefit from machine learning with CNNs. We propose \"Skepxels\" a spatio-temporal representation for skeleton sequences to fully exploit the \"local\" correlations between joints using the 2D convolution kernels of CNN. We transform skeleton videos into images of flexible dimensions using Skepxels and develop a CNN-based framework for effective human action recognition using the resulting images. Skepxels encode rich spatio-temporal information about the skeleton joints in the frames by maximizing a unique distance metric, defined collaboratively over the distinct joint arrangements used in the skelet al image. Moreover, they are flexible in encoding compound semantic notions such as location and speed of the joints. The proposed action recognition exploits the representation in a hierarchical manner by first capturing the micro-temporal relations between the skeleton joints with the Skepxels and then exploiting their macro-temporal relations by computing the Fourier Temporal Pyramids over the CNN features of the skelet al images. We extend the Inception-ResNet CNN architecture with the proposed method and improve the state-of-the-art accuracy by 4.4 on the large scale NTU human activity dataset. On the medium-sized N-UCLA and UTH-MHAD datasets, our method outperforms the existing results by 5.7 and 9.3 respectively.", "Motivated by the promising performance achieved by deep learning, an effective yet simple method is proposed to encode the spatio-temporal information of skeleton sequences into color texture images, referred to as joint distance maps (JDMs), and convolutional neural networks are employed to exploit the discriminative features from the JDMs for human action and interaction recognition. The pair-wise distances between joints over a sequence of single or multiple person skeletons are encoded into color variations to capture temporal information. The efficacy of the proposed method has been verified by the state-of-the-art results on the large RGB+D Dataset and small UTD-MHAD Dataset in both single-view and cross-view settings." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
Our ultimate goal is to formally ensure safety for applications that are based on Artificial Intelligence (AI), as described by @cite_4 . In particular, the growing impact of intelligent systems performing tasks in society, and the safety guarantees needed to prevent damage, are the central concerns of safety in ANNs.
{ "cite_N": [ "@cite_4" ], "mid": [ "2963600714", "2753457114", "2737838988", "2462906003" ], "abstract": [ "The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximum amount of input or sensor perturbation which is still tolerated. This problem of computing maximum perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number (up to a certain limit) of computing cores in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximum resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.", "We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards. We apply the concept in a concrete case study in designing a high-way ANN-based motion predictor to guarantee safety properties such as impossibility for the ego vehicle to suggest moving to the right lane if there exists another vehicle on its right.", "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.", "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
@cite_34 and @cite_32 have shown how fragile ANNs can be when small amounts of noise are present in their inputs. They described and evaluated testing and verification approaches based on covering methods and image proximity @cite_34 and on how adversarial cases are obtained @cite_32 . In particular, our study resembles @cite_32 and @cite_34 in how adversarial cases are obtained. Here, if any property is violated, a counterexample is provided; for safety properties, adversarial examples are generated from the counterexample using ESBMC-GPU. In contrast to @cite_32 , we do not focus on generating noise in specific regions, but in every image pixel. Our approach to image proximity is influenced by @cite_34 , but we use incremental BMC instead of concolic testing as our verification engine. Our symbolic verification method checks safety properties on non-deterministic images within a certain distance of a given image; both image and distance are determined by the user. @cite_2 also describe an approach to validate ANNs using symbolic execution by translating an NN into an imperative program. By contrast, we consider the actual implementation of the ANN in CUDA and apply incremental BMC using off-the-shelf SMT solvers.
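As an executable illustration only (not the ESBMC-GPU encoding, which reasons symbolically over the CUDA implementation), the property being checked can be phrased as: every image whose pixels stay within a given distance of the original must keep the original classification. The sketch below probes this property by random sampling; a bounded model checker would instead treat the perturbed pixels as non-deterministic values and search for a counterexample exhaustively up to the bound. The tiny MLP, shapes, and distance value are assumptions.

```python
import numpy as np

def classify(weights, biases, x):
    """Tiny MLP forward pass with ReLU hidden layers; weights/biases are lists."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(w @ x + b, 0.0)
    return int(np.argmax(weights[-1] @ x + biases[-1]))

def find_adversarial(weights, biases, image, distance, samples=1000, seed=0):
    """Look for a perturbed image within the given per-pixel distance that
    changes the predicted label. Returns a counterexample or None.
    Sampling only approximates what BMC does exhaustively up to a bound."""
    rng = np.random.default_rng(seed)
    original = classify(weights, biases, image)
    for _ in range(samples):
        noise = rng.uniform(-distance, distance, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)
        if classify(weights, biases, candidate) != original:
            return candidate  # adversarial case (counterexample)
    return None

# Toy network and image, for illustration only.
rng = np.random.default_rng(1)
weights = [rng.standard_normal((8, 16)), rng.standard_normal((3, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(3)]
image = rng.uniform(0.0, 1.0, size=16)
print(find_adversarial(weights, biases, image, distance=0.3) is not None)
```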
{ "cite_N": [ "@cite_34", "@cite_32", "@cite_2" ], "mid": [ "2798356176", "1523041988", "2799031185", "2884519271" ], "abstract": [ "Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties of DNNs, especially under different attacker capabilities, is becoming crucial. Most existing security testing techniques for DNNs try to find adversarial examples without providing any formal security guarantees about the non-existence of adversarial examples. Recently, several projects have used different types of Satisfiability Modulo Theory (SMT) solvers to formally check security properties of DNNs. However, all of these approaches are limited by the high overhead caused by the solver. In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers. Instead, we leverage interval arithmetic to formally check security properties by computing rigorous bounds on the DNN outputs. Our approach, unlike existing solver-based approaches, is easily parallelizable. We further present symbolic interval analysis along with several other optimizations to minimize overestimations. We design, implement, and evaluate our approach as part of ReluVal, a system for formally checking security properties of Relu-based DNNs. Our extensive empirical results show that ReluVal outperforms Reluplex, a state-of-the-art solver-based system, by 200 times on average for the same security properties. ReluVal is able to prove a security property within 4 hours on a single 8-core machine without GPUs, while Reluplex deemed inconclusive due to timeout (more than 5 days). Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs.", "A traditional counterexample to a linear-time safety property shows the values of all signals at all times prior to the error. However, some signals may not be critical to causing the failure. A succinct explanation may help human understanding as well as speed up algorithms that have to analyze many such traces. In Bounded Model Checking (BMC), a counterexample is constructed from a satisfying assignment to a Boolean formula, typically in CNF. Modern SAT solvers usually assign values to all variables when the input formula is satisfiable. Deriving minimal satisfying assignments from such complete assignments does not lead to concise explanations of counterexamples because of how CNF formulae are derived from the models. Hence, we formulate the extraction of a succinct counterexample as the problem of finding a minimal assignment that, together with the Boolean formula describing the model, implies an objective. We present a two-stage algorithm for this problem, such that the result of each stage contributes to identify the “interesting” events that cause the failure. We demonstrate the effectiveness of our approach with an example and with experimental results.", "In recent years, defending adversarial perturbations to natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field in the conjunction of deep learning and security. 
In particular, MagNet consisting of an adversary detector and a data reformer is by far one of the strongest defenses in the black-box oblivious attack setting, where the attacker aims to craft transferable adversarial examples from an undefended DNN model to bypass an unknown defense module deployed on the same DNN model. Under this setting, MagNet can successfully defend a variety of attacks in DNNs, including the high-confidence adversarial examples generated by the Carlini and Wagner's attack based on the @math distortion metric. However, in this paper, under the same attack setting we show that adversarial examples crafted based on the @math distortion metric can easily bypass MagNet and mislead the target DNN image classifiers on MNIST and CIFAR-10. We also provide explanations on why the considered approach can yield adversarial examples with superior attack performance and conduct extensive experiments on variants of MagNet to verify its lack of robustness to @math distortion based attacks. Notably, our results substantially weaken the assumption of effective threat models on MagNet that require knowing the deployed defense technique when attacking DNNs (i.e., the gray-box attack setting).", "Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to \"disappear\" according to the detector-either by covering thesign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85 of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5 and 63.5 of the video frames respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9 of the video frames in a controlled lab environment, and 40.2 of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, where in innocuous physical stickers fool a model into detecting nonexistent objects." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
@cite_0 presented formal techniques to extract invariants from the decision logic of ANNs. These invariants represent pre- and post-conditions that hold when transformations of a certain type are applied to ANNs. The authors proposed two techniques. The first, called iterative relaxation of decision patterns, uses Reluplex as the decision procedure @cite_9 . The second, called decision-tree based invariant generation, resembles covering methods @cite_34 . Robustness and explainability are the core properties of this study, and applying them to ANNs has shown impressive experimental results. Explainability proved to be an important property for evaluating safety in ANNs; the core idea is to explain why an adversarial case happened by observing the activation behavior of the subset of neurons described by the given invariant.
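The notion of a decision pattern can be pictured as the on/off state of the ReLU neurons for a given input. The sketch below is a simplification under assumed shapes, not the authors' iterative-relaxation or decision-tree procedures: it extracts such a pattern and checks whether a second input triggers the same pattern (for simplicity every layer is treated as a ReLU layer).

```python
import numpy as np

def relu_activation_pattern(weights, biases, x):
    """Return, for each layer, a boolean mask of which ReLU neurons fire."""
    pattern = []
    for w, b in zip(weights, biases):
        pre = w @ x + b
        pattern.append(pre > 0)
        x = np.maximum(pre, 0.0)
    return pattern

def same_decision_pattern(weights, biases, x1, x2):
    p1 = relu_activation_pattern(weights, biases, x1)
    p2 = relu_activation_pattern(weights, biases, x2)
    return all(np.array_equal(a, b) for a, b in zip(p1, p2))

# Toy two-layer network, for illustration only.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((6, 4)), rng.standard_normal((5, 6))]
biases = [rng.standard_normal(6), rng.standard_normal(5)]
x = rng.standard_normal(4)
print(relu_activation_pattern(weights, biases, x))
print(same_decision_pattern(weights, biases, x, x + 1e-3 * rng.standard_normal(4)))
```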
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_34" ], "mid": [ "2943722115", "1970168990", "2185676247", "2787382079" ], "abstract": [ "We present techniques for automatically inferring invariant properties of feed-forward neural networks. Our insight is that feed forward networks should be able to learn a decision logic that is captured in the activation patterns of its neurons. We propose to extract such decision patterns that can be considered as invariants of the network with respect to a certain output behavior. We present techniques to extract input invariants as convex predicates on the input space, and layer invariants that represent features captured in the hidden layers. We apply the techniques on the networks for the MNIST and ACASXU applications. Our experiments highlight the use of invariants in a variety of applications, such as explainability, providing robustness guarantees, detecting adversaries, simplifying proofs and network distillation.", "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.", "Inductive invariants can be robustly synthesized using a learning model where the teacher is a program verifier who instructs the learner through concrete program configurations, classified as positive, negative, and implications. We propose the first learning algorithms in this model with implication counter-examples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms in machine learning to handle implication samples, building new scalable ways to construct small decision trees using statistical measures. We also develop a decision-tree learning algorithm in this model that is guaranteed to converge to the right concept (invariant) if one exists. We implement the learners and an appropriate teacher, and show that the resulting invariant synthesis is efficient and convergent for a large suite of programs.", "This paper presents a method to learn a decision tree to quantitatively explain the logic of each prediction of a pre-trained convolutional neural networks (CNNs). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e., which filters (or object parts) are used for prediction and how much they contribute in the prediction. 
To conduct such a quantitative explanation of a CNN, our method learns explicit representations of object parts in high conv-layers of the CNN and mines potential decision modes memorized in fully-connected layers. The decision tree organizes these potential decision modes in a coarse-to-fine manner. Experiments have demonstrated the effectiveness of the proposed method." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
@cite_13 also proposed a novel approach for automatically identifying safe regions of the input space w.r.t. given labels. The core idea is to provide a specific safety guarantee that a region is robust against adversarial perturbations w.r.t. a target label. Since overall robustness is too strong a requirement for many ANNs, this targeted robustness is the main property considered. The technique combines clustering and verification: clustering splits the dataset into subsets of inputs with the same label, and each cluster is then verified by Reluplex @cite_9 to establish a safe region w.r.t. the target label. The proposed tool, called DeepSafe, is evaluated on ANNs trained on the MNIST and ACAS Xu datasets.
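To picture the clustering step, the following is a simplified sketch, not DeepSafe itself: it groups same-label inputs and derives a candidate region as a centroid plus radius, while the per-region verification with a solver such as Reluplex is only indicated in a comment. Forming one region per label (instead of several clusters per label) is an assumption made to keep the sketch short.

```python
import numpy as np

def candidate_regions(inputs, labels):
    """Group inputs by label and return one (centroid, radius, label) candidate
    region per label; a verifier would then check targeted robustness of each
    region, i.e. that no input inside it is mapped to the target label."""
    regions = []
    for label in np.unique(labels):
        members = inputs[labels == label]
        centroid = members.mean(axis=0)
        radius = np.max(np.linalg.norm(members - centroid, axis=1))
        regions.append((centroid, radius, int(label)))
        # A solver (e.g. Reluplex) would now prove or refute safety of the
        # ball of the given radius around the centroid w.r.t. the target label.
    return regions

# Toy data, for illustration only.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((100, 8))
labels = rng.integers(0, 3, size=100)
for centroid, radius, label in candidate_regions(inputs, labels):
    print(label, round(radius, 3))
```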
{ "cite_N": [ "@cite_9", "@cite_13" ], "mid": [ "2761709036", "2474152637", "2083651719", "2410641892" ], "abstract": [ "Deep neural networks have become widely used, obtaining remarkable results in domains such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, and bio-informatics, where they have produced results comparable to human experts. However, these networks can be easily fooled by adversarial perturbations: minimal changes to correctly-classified inputs, that cause the network to mis-classify them. This phenomenon represents a concern for both safety and security, but it is currently unclear how to measure a network's robustness against such perturbations. Existing techniques are limited to checking robustness around a few individual input points, providing only very limited guarantees. We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. The approach is data-guided, relying on clustering to identify well-defined geometric regions as candidate safe regions. We then utilize verification techniques to confirm that these regions are safe or to provide counter-examples showing that they are not safe. We also introduce the notion of targeted robustness which, for a given target label and region, ensures that a NN does not map any input in the region to the target label. We evaluated our technique on the MNIST dataset and on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our approach identified multiple regions which were completely safe as well as some which were only safe for specific labels. It also discovered several adversarial perturbations of interest.", "Wepropose using relaxed deep supervision (RDS) within convolutional neural networks for edge detection. The conventional deep supervision utilizes the general groundtruth to guide intermediate predictions. Instead, we build hierarchical supervisory signals with additional relaxed labels to consider the diversities in deep neural networks. We begin by capturing the relaxed labels from simple detectors (e.g. Canny). Then we merge them with the general groundtruth to generate the RDS. Finally we employ the RDS to supervise the edge network following a coarse-to-fine paradigm. These relaxed labels can be seen as some false positives that are difficult to be classified. Weconsider these false positives in the supervision, and are able to achieve high performance for better edge detection. Wecompensate for the lack of training images by capturing coarse edge annotations from a large dataset of image segmentations to pretrain the model. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on the well-known BSDS500 dataset (ODS F-score of .792) and obtains superior cross-dataset generalization results on NYUD dataset.", "This paper aims at verifying the accuracy of Artificial Neural Networks (ANN) in assessing the transient stability of a single machine infinite bus system. The fault critical clearing time obtained through ANN is compared to the results of the conventional equal area criterion method. The multilayer feedforward artificial neural network concept is applied to the system. 
The training of the ANN is achieved through the supervised learning; and the back propagation technique is used as a learning method in order to minimize the training error. The training data set is generated using two steps process. First, the equal area criterion is used to determine the critical angle. After that the swing equation is solved using the point-to-point method up to the critical angle to determine the critical clearing time. Then the stability of the system is verified. As a result we find that the critical clearing time is predicted with slightly less accuracy using ANN compared to the conventional methods for the same input data sets unless the ANN is well trained.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
In addition to ESBMC-GPU, there exist other tools able to verify CUDA programs; each uses its own approach and targets specific property violations. However, to the best of our knowledge, ESBMC-GPU is the first verifier to check for adversarial cases and coverage methods in ANNs implemented in CUDA. For instance, GPUVerify @cite_10 is based on synchronous, delayed visibility semantics; it focuses on detecting data races and barrier divergence, reducing kernel verification to the analysis of sequential programs. GPU+KLEE (GKLEE) @cite_23 , in turn, is a concrete and symbolic execution tool that considers both kernels and main functions, checking for deadlocks, memory coalescing, data races, warp divergence, and compilation-level issues. Also, the Concurrency Intermediate Verification Language (CIVL) @cite_1 , a framework for static analysis and concurrent program verification, uses abstract syntax trees and partial-order reduction to detect violations of user-specified assertions, deadlocks, memory leaks, invalid pointer dereferences, out-of-bounds array accesses, and division by zero.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_23" ], "mid": [ "2121717408", "2258083119", "2076960126", "1971463841" ], "abstract": [ "We present a technique for verifying race- and divergence-freedom of GPU kernels that are written in mainstream kernel programming languages such as OpenCL and CUDA. Our approach is founded on a novel formal operational semantics for GPU programming termed synchronous, delayed visibility (SDV) semantics. The SDV semantics provides a precise definition of barrier divergence in GPU kernels and allows kernel verification to be reduced to analysis of a sequential program, thereby completely avoiding the need to reason about thread interleavings, and allowing existing modular techniques for program verification to be leveraged. We describe an efficient encoding for data race detection and propose a method for automatically inferring loop invariants required for verification. We have implemented these techniques as a practical verification tool, GPUVerify, which can be applied directly to OpenCL and CUDA source code. We evaluate GPUVerify with respect to a set of 163 kernels drawn from public and commercial sources. Our evaluation demonstrates that GPUVerify is capable of efficient, automatic verification of a large number of real-world kernels.", "We present ESBMC-GPU, an extension to the ESBMC model checker that is aimed at verifying GPU programs written for the CUDA framework. ESBMC-GPU uses an operational model for the verification, i.e., an abstract representation of the standard CUDA libraries that conservatively approximates their semantics. ESBMC-GPU verifies CUDA programs, by explicitly exploring the possible interleavings (up to the given context bound), while treating each interleaving itself symbolically. Experimental results show that ESBMC-GPU is able to detect more properties violations, while keeping lower rates of false results.", "Programs written for GPUs often contain correctness errors such as races, deadlocks, or may compute the wrong result. Existing debugging tools often miss these errors because of their limited input-space and execution-space exploration. Existing tools based on conservative static analysis or conservative modeling of SIMD concurrency generate false alarms resulting in wasted bug-hunting. They also often do not target performance bugs (non-coalesced memory accesses, memory bank conflicts, and divergent warps). We provide a new framework called GKLEE that can analyze C++ GPU programs, locating the aforesaid correctness and performance bugs. For these programs, GKLEE can also automatically generate tests that provide high coverage. These tests serve as concrete witnesses for every reported bug. They can also be used for downstream debugging, for example to test the kernel on the actual hardware. We describe the architecture of GKLEE, its symbolic virtual machine model, and describe previously unknown bugs and performance issues that it detected on commercial SDK kernels. We describe GKLEE's test-case reduction heuristics, and the resulting scalability improvement for a given coverage target.", "Data-dependent GPU kernels, whose data or control flow are dependent on the input of the program, are difficult to verify because they require reasoning about shared state manipulated by many parallel threads. 
Existing verification techniques for GPU kernels achieve soundness and scalability by using a two-thread reduction and making the contents of the shared state nondeterministic each time threads synchronise at a barrier, to account for all possible thread interactions. This coarse abstraction prohibits verification of data-dependent kernels. We present barrier invariants, a novel abstraction technique which allows key properties about the shared state of a kernel to be preserved across barriers during formal reasoning. We have integrated barrier invariants with the GPUVerify tool, and present a detailed case study showing how they can be used to verify three prefix sum algorithms, allowing efficient modular verification of a stream compaction kernel, a key building block for GPU programming. This analysis goes significantly beyond what is possible using existing verification techniques for GPU kernels." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
Our approach, implemented on top of ESBMC-GPU, has some similarities with other techniques described here, e.g., the covering methods proposed by @cite_34 and the model checking of adversarial cases proposed by @cite_32 . However, the main difference lies in our requirements and in how we handle the actual implementations of ANNs. To run our safety verification, only the ANN's weight and bias descriptors and the desired dataset input are required. For tools such as DeepConcolic @cite_34 and DLV @cite_32 , obtaining adversarial cases or safety guarantees for different ANNs is not an easy task, due to their focus on well-known datasets such as MNIST @cite_17 and CIFAR-10 @cite_22 during tool development. In our approach, there is no need to provide a specific dataset, only the desired sample to be verified. Beyond these requirements, the user needs to know how cuDNN @cite_21 handles ANNs.
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_32", "@cite_34", "@cite_17" ], "mid": [ "2772024431", "2074208271", "2247194987", "2148461049" ], "abstract": [ "Recently proposed robust 3D face alignment methods establish either dense or sparse correspondence between a 3D face model and a 2D facial image. The use of these methods presents new challenges as well as opportunities for facial texture analysis. In particular, by sampling the image using the fitted model, a facial UV can be created. Unfortunately, due to self-occlusion, such a UV map is always incomplete. In this paper, we propose a framework for training Deep Convolutional Neural Network (DCNN) to complete the facial UV map extracted from in-the-wild images. To this end, we first gather complete UV maps by fitting a 3D Morphable Model (3DMM) to various multiview image and video datasets, as well as leveraging on a new 3D dataset with over 3,000 identities. Second, we devise a meticulously designed architecture that combines local and global adversarial DCNNs to learn an identity-preserving facial UV completion model. We demonstrate that by attaching the completed UV to the fitted mesh and generating instances of arbitrary poses, we can increase pose variations for training deep face recognition verification models, and minimise pose discrepancy during testing, which lead to better performance. Experiments on both controlled and in-the-wild UV datasets prove the effectiveness of our adversarial UV completion model. We achieve state-of-the-art verification accuracy, 94.05 , under the CFP frontal-profile protocol only by combining pose augmentation during training and pose discrepancy reduction during testing. We will release the first in-the-wild UV dataset (we refer as WildUV) that comprises of complete facial UV maps from 1,892 identities for research purposes.", "We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. 
The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993 in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535 success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993 success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.", "In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision must not favour a particular group. Alternatively it can be that that representation of data must not have identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary is trying to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.", "We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53 , 19.51 , 0.35 , respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42 , 0.97 and 0.48 after 1, 3 and 17 epochs, respectively." ] }
1907.12821
2966519285
Pseudo-Boolean monotone functions are unimodal functions which are trivial to optimize for some hillclimbers, but are challenging for a surprising number of evolutionary algorithms (EAs). A general trend is that EAs are efficient if parameters like the mutation rate are set conservatively, but may need exponential time otherwise. In particular, it was known that the @math -EA and the @math -EA can optimize every monotone function in pseudolinear time if the mutation rate is @math for some @math . The second part of the statement was also known for the @math -EA. In this paper we show that the first statement does not apply to the @math -EA. More precisely, we prove that for every constant @math there is a constant integer @math such that the @math -EA with mutation rate @math and population size @math needs superpolynomial time to optimize some monotone functions. Thus, increasing the population size by just a constant has devastating effects on the performance. This is in stark contrast to many other benchmark functions on which increasing the population size either increases the performance significantly, or affects performance mildly. The reason why larger populations are harmful lies in the fact that larger populations may temporarily decrease selective pressure on parts of the population. This allows unfavorable mutations to accumulate in single individuals and their descendants. If the population moves sufficiently fast through the search space, such unfavorable descendants can become ancestors of future generations, and the bad mutations are preserved. Remarkably, this effect only occurs if the population renews itself sufficiently fast, which can only happen far away from the optimum. This is counter-intuitive since usually optimization gets harder as we approach the optimum.
The analysis of EAs on monotone functions started in 2010 with the work of Doerr, Jansen, Sudholt, Winzen and Zarges @cite_15 @cite_10 . Their contribution was twofold: firstly, they showed that the (1+1)-EA, which flips each bit independently with static mutation rate @math , needs time @math on all monotone functions if the mutation parameter @math is a constant strictly smaller than one. This result was already implicit in @cite_6 .
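For concreteness, here is a minimal sketch of the (1+1)-EA with standard bit mutation at rate c/n, run on OneMax as the simplest monotone function. The hard monotone functions from the cited constructions are not reproduced here, so both mutation parameters below succeed; the sketch only illustrates the algorithm, not the separation between the regimes.

```python
import random

def one_max(x):
    """OneMax, the simplest monotone function: more one-bits is always better."""
    return sum(x)

def one_plus_one_ea(n, c=0.9, fitness=one_max, max_iters=100_000, seed=0):
    """(1+1)-EA: flip each bit independently with probability c/n and keep the
    offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        y = [1 - b if rng.random() < c / n else b for b in x]
        if fitness(y) >= fitness(x):
            x = y
        if fitness(x) == n:
            return t  # number of iterations until the optimum is found
    return None

print(one_plus_one_ea(n=100, c=0.9))  # conservative mutation parameter
print(one_plus_one_ea(n=100, c=1.5))  # also fine on OneMax; only specially
                                      # constructed monotone functions separate
                                      # the two regimes
```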
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_6" ], "mid": [ "2914741526", "1892716203", "2109000177", "2097801309" ], "abstract": [ "We study the well-known black-box optimisation algorithm (1+1)-EA on a novel type of noise model. In our noise model, the fitness function is linear with positive weights, but the absolute values of the weights may fluctuate in each round. Thus in every state, the fitness function indicates that one-bits are preferred over zero-bits. In particular, hillclimbing heuristics should be able to find the optimum fast. We show that the (1+1)-EA indeed finds the optimum in time @math if the mutation parameter is @math for a constant @math However, we also show that for @math the (1+1)-EA needs superpolynomial time to find the optimum. Thus the choice of mutation parameter is critical even for optimisation tasks in which there is a clear path to the the optimum. A similar threshold phenomenon has recently been shown for noise-free monotone fitness functions.", "Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotone. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c n can make a decisive difference. We show that if c > 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n3 2). However, for c > 33, we present a strictly monotone function such that the (1+1) EA with overwhelming probability does not find the optimum within 2Ω(n) iterations. This is the first time that we observe that a constant factor change of the mutation probability changes the run-time by more than constant factors.", "Extending previous analyses on function classes like linear functions, we analyze how the simple 1+1 evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotonic. These functions have the property that whenever only 0-bits are changed to 1, then the objective value strictly increases. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability pn=c n can make a decisive difference. We show that if c iterations. For c=1, we can still prove an upper bound of On3 2. However, for , we present a strictly monotonic function such that the 1+1 EA with overwhelming probability needs iterations to find the optimum. This is the first time that we observe that a constant factor change of the mutation probability changes the runtime by more than a constant factor.", "This paper analyzes monotone comparative statics predictions in several classes of stochastic optimization problems. The main results characterize necessary and sufficient conditions for comparative statics predictions to hold based on properties of primitive functions, that is, utility functions and probability distributions. The results apply when the primitives satisfy one of the following two properties: (i) a single-crossing property, which arises in applications such as portfolio investment problems and auctions, or (ii) log-supermodularity, which arises in the analysis of demand functions, affiliated random variables, stochastic orders, and orders over risk aversion." ] }
1907.12821
2966519285
Pseudo-Boolean monotone functions are unimodal functions which are trivial to optimize for some hillclimbers, but are challenging for a surprising number of evolutionary algorithms (EAs). A general trend is that EAs are efficient if parameters like the mutation rate are set conservatively, but may need exponential time otherwise. In particular, it was known that the @math -EA and the @math -EA can optimize every monotone function in pseudolinear time if the mutation rate is @math for some @math . The second part of the statement was also known for the @math -EA. In this paper we show that the first statement does not apply to the @math -EA. More precisely, we prove that for every constant @math there is a constant integer @math such that the @math -EA with mutation rate @math and population size @math needs superpolynomial time to optimize some monotone functions. Thus, increasing the population size by just a constant has devastating effects on the performance. This is in stark contrast to many other benchmark functions on which increasing the population size either increases the performance significantly, or affects performance mildly. The reason why larger populations are harmful lies in the fact that larger populations may temporarily decrease selective pressure on parts of the population. This allows unfavorable mutations to accumulate in single individuals and their descendants. If the population moves sufficiently fast through the search space, such unfavorable descendants can become ancestors of future generations, and the bad mutations are preserved. Remarkably, this effect only occurs if the population renews itself sufficiently fast, which can only happen far away from the optimum. This is counter-intuitive since usually optimization gets harder as we approach the optimum.
Most other work on population-based algorithms has shown benefits of larger population sizes, especially when crossover is used @cite_11 @cite_12 @cite_2 @cite_14 . The only exception in which a population has theoretically been proven to be severely disadvantageous is the Ignoble Trails function. This rather specific function has been carefully designed to lead crossover operators into a trap @cite_13 , and it is deceptive for @math if crossover is used, but not for @math . Arguably, the functions considered here are also rather artificial, although they were not specifically designed to be deceptive for populations. However, regarding the larger and more natural framework of monotone functions, our results imply that the (μ+1)-EA with mutation parameter @math does not optimize all monotone functions efficiently if the population size @math is too large, while the corresponding (1+1)-EA is efficient.
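For concreteness, the following is a minimal sketch of a (μ+1)-EA with standard bit mutation at rate c/n; setting μ = 1 gives the (1+1)-EA discussed above. The OneMax fitness used here is only a placeholder monotone function — the hard monotone instances referred to in the text are far more elaborate.

```python
# Minimal sketch of a (mu+1)-EA with standard bit mutation at rate c/n.
# "onemax" is just a stand-in monotone fitness function; the hard monotone
# instances discussed above are far more involved.
import random

def onemax(x):
    return sum(x)

def mu_plus_one_ea(n, mu, c, fitness=onemax, max_iters=100_000):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_iters):
        parent = random.choice(pop)                              # uniform parent selection
        child = [b ^ (random.random() < c / n) for b in parent]  # flip each bit w.p. c/n
        pop.append(child)
        pop.remove(min(pop, key=fitness))                        # drop a worst individual
        if max(fitness(x) for x in pop) == n:                    # optimum of OneMax reached
            break
    return max(pop, key=fitness)

# The (1+1)-EA is the special case mu = 1:
best = mu_plus_one_ea(n=50, mu=1, c=1.0)
```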
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2160241726", "2487146608", "2011559870", "1994258343" ], "abstract": [ "Due to experimental evidence it is incontestable that crossover is essential for some fitness functions. However, theoretical results without assumptions are difficult. So-called real royal road functions are known where crossover is proved to be essential, i.e., mutation-based algorithms have an exponential expected runtime while the expected runtime of a genetic algorithm is polynomially bounded. However, these functions are artificial and have been designed in such a way that crossover is essential only at the very end (or at other well-specified points) of the optimization process.Here, a more natural fitness function based on a generalized Ising model is presented where crossover is essential throughout the whole optimization process. Mutation-based algorithms such as (μ+λ) EAs with constant population size are proved to have an exponential expected runtime while the expected runtime of a simple genetic algorithm with population size 2 and fitness sharing is polynomially bounded.", "Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the (μ+1) GA using uniform crossover on the fitness function Jumpk. All previous results in this context only hold for unrealistically low crossover probability pc=O(k n), while we give analyses for the setting of constant pc 2 and constant pc, we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of μ: O (nk-1) for duplicate elimination minimisation, O (n2 log n) for maximising the convex hull, O(n log n) for det. crowding (assuming pc = k n), O(n log n) for maximising the Hamming distance, O(n log n) for fitness sharing, O(n log n) for the single-receiver island model. This proves a sizeable advantage of all variants of the (μ+1) GA compared to the (1+1) EA, which requires θ(nk). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.", "Understanding the impact of crossover on performance is a major problem in the theory of genetic algorithms (GAs). We present new insight on working principles of crossover by analyzing the performance of crossover-based GAs on the simple functions OneMax and Jump. First, we assess the potential speedup by crossover when combined with a fitness-invariant bit shuffling operator that simulates a lineage of independent evolution on a function of unitation. Theoretical and empirical results show drastic speedups for both functions. Second, we consider a simple GA without shuffling and investigate the interplay of mutation and crossover on Jump. If the crossover probability is small, subsequent mutations create sufficient diversity, even for very small populations. Contrarily, with high crossover probabilities crossover tends to lose diversity more quickly than mutation can create it. This has a drastic impact on the performance on Jump. We complement our theoretical findings by Monte Carlo simulations on the population diversity.", "Mutation and crossover are the main search operators of different variants of evolutionary algorithms. 
Despite the many discussions on the importance of crossover nobody has proved rigorously for some explicitly defined fitness functions f\"n: 0,1 ^n->R that a genetic algorithm with crossover can optimize f\"n in expected polynomial time while all evolution strategies based only on mutation (and selection) need expected exponential time. Here such functions and proofs are presented for a genetic algorithm without any idealization. For some functions one-point crossover is appropriate while for others uniform crossover is the right choice." ] }
1907.12821
2966519285
Pseudo-Boolean monotone functions are unimodal functions which are trivial to optimize for some hillclimbers, but are challenging for a surprising number of evolutionary algorithms (EAs). A general trend is that EAs are efficient if parameters like the mutation rate are set conservatively, but may need exponential time otherwise. In particular, it was known that the @math -EA and the @math -EA can optimize every monotone function in pseudolinear time if the mutation rate is @math for some @math . The second part of the statement was also known for the @math -EA. In this paper we show that the first statement does not apply to the @math -EA. More precisely, we prove that for every constant @math there is a constant integer @math such that the @math -EA with mutation rate @math and population size @math needs superpolynomial time to optimize some monotone functions. Thus, increasing the population size by just a constant has devastating effects on the performance. This is in stark contrast to many other benchmark functions on which increasing the population size either increases the performance significantly, or affects performance mildly. The reason why larger populations are harmful lies in the fact that larger populations may temporarily decrease selective pressure on parts of the population. This allows unfavorable mutations to accumulate in single individuals and their descendants. If the population moves sufficiently fast through the search space, such unfavorable descendants can become ancestors of future generations, and the bad mutations are preserved. Remarkably, this effect only occurs if the population renews itself sufficiently fast, which can only happen far away from the optimum. This is counter-intuitive since usually optimization gets harder as we approach the optimum.
Moreover, Lengler and Schaller pointed out an interesting connection between HotTopic functions and a dynamic optimization problem in @cite_19 , which is arguably more natural. In that paper, the algorithm should optimize a linear function with positive weights, but the weights of the objective function are re-drawn each round (independently and identically distributed). This setting is similar to monotone functions, since a one-bit is always preferable over a zero-bit, and the all-one string is always the global optimum. However, the weight of each bit changes from round to round, which somewhat resembles the way HotTopic functions switch between different hot topics as the algorithm progresses. In @cite_19 the (1+1)-EA was studied, and its behavior in the dynamic setting is very similar to its behavior on HotTopic functions. It remains open whether the effects observed in our paper carry over to this dynamic setting.
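As an illustration of this dynamic setting (a sketch under the stated assumptions, not the authors' code), the following runs a (1+1)-EA on a linear function whose positive weights are re-drawn independently in every round; one-bits are always preferred, and the all-one string remains the global optimum throughout.

```python
# Sketch of a (1+1)-EA on a dynamic linear function: positive weights are
# re-drawn i.i.d. in every round, so one-bits are always preferred but their
# relative importance changes from round to round.
import random

def one_plus_one_ea_dynamic(n, c, max_iters=100_000):
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        w = [random.expovariate(1.0) for _ in range(n)]          # fresh positive weights
        y = [b ^ (random.random() < c / n) for b in x]           # standard bit mutation
        f = lambda z: sum(wi * zi for wi, zi in zip(w, z))       # this round's fitness
        if f(y) >= f(x):                                         # parent and offspring are
            x = y                                                # judged by the same weights
        if sum(x) == n:                                          # all-one string found
            break
    return x
```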
{ "cite_N": [ "@cite_19" ], "mid": [ "2914741526", "2109000177", "2263717697", "2236345491" ], "abstract": [ "We study the well-known black-box optimisation algorithm (1+1)-EA on a novel type of noise model. In our noise model, the fitness function is linear with positive weights, but the absolute values of the weights may fluctuate in each round. Thus in every state, the fitness function indicates that one-bits are preferred over zero-bits. In particular, hillclimbing heuristics should be able to find the optimum fast. We show that the (1+1)-EA indeed finds the optimum in time @math if the mutation parameter is @math for a constant @math However, we also show that for @math the (1+1)-EA needs superpolynomial time to find the optimum. Thus the choice of mutation parameter is critical even for optimisation tasks in which there is a clear path to the the optimum. A similar threshold phenomenon has recently been shown for noise-free monotone fitness functions.", "Extending previous analyses on function classes like linear functions, we analyze how the simple 1+1 evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotonic. These functions have the property that whenever only 0-bits are changed to 1, then the objective value strictly increases. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability pn=c n can make a decisive difference. We show that if c iterations. For c=1, we can still prove an upper bound of On3 2. However, for , we present a strictly monotonic function such that the 1+1 EA with overwhelming probability needs iterations to find the optimum. This is the first time that we observe that a constant factor change of the mutation probability changes the runtime by more than a constant factor.", "We consider the problem of maximizing a (non-monotone) submodular function subject to a cardinality constraint. In addition to capturing well-known combinatorial optimization problems, e.g., Max-k-Coverage and Max-Bisection, this problem has applications in other more practical settings such as natural language processing, information retrieval, and machine learning. In this work we present improved approximations for two variants of the cardinality constraint for non-monotone functions. When at most k elements can be chosen, we improve the current best 1 e -- o(1) approximation to a factor that is in the range [1 e + 0.004, 1 2], achieving a tight approximation of 1 2 -- o(1) for k = n 2 and breaking the 1 e barrier for all values of k. When exactly k elements must be chosen, our algorithms improve the current best 1 4 -- o(1) approximation to a factor that is in the range [0.356, 1 2], again achieving a tight approximation of 1 2 -- o(1) for k = n 2. Additionally, some of the algorithms we provide are very fast with time complexities of O(nk), as opposed to previous known algorithms which are continuous in nature, and thus, too slow for applications in the practical settings mentioned above. Our algorithms are based on two new techniques. First, we present a simple randomized greedy approach where in each step a random element is chosen from a set of \"reasonably good\" elements. 
This approach might be considered a natural substitute for the greedy algorithm of Nemhauser, Wolsey and Fisher [45], as it retains the same tight guarantee of 1--1 e for monotone objectives and the same time complexity of O(nk), while giving an approximation of 1 e for general non-monotone objectives (while the greedy algorithm of Nemhauser et. al. fails to provide any constant guarantee). Second, we extend the double greedy technique, which achieves a tight 1 2 approximation for unconstrained submodular maximization, to the continuous setting. This allows us to manipulate the natural rates by which elements change, thus bounding the total number of elements chosen.", "A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies for this general class of adaptive optimization problems can be extremely hard. However, when the objective function is adaptive monotone and adaptive submodular, a simple greedy policy attains a 1 - 1 e approximation ratio in terms of expected utility. Unfortunately, many practical objective functions are naturally non-monotone; to our knowledge, no existing policy has provable performance guarantees when the assumption of adaptive monotonicity is lifted. We propose the adaptive random greedy policy for maximizing adaptive submodular functions, and prove that it retains the aforementioned 1 - 1 e approximation ratio for functions that are also adaptive monotone, while it additionally provides a 1 e approximation ratio for nonmonotone adaptive submodular functions. We showcase the benefits of adaptivity on three real-world network data sets using two non-monotone functions, representative of two classes of commonly encountered non-monotone objectives." ] }
1907.12736
2965903589
We present a simple yet effective prediction module for a one-stage detector. The main process is conducted in a coarse-to-fine manner. First, the module roughly adjusts the default boxes to well capture the extent of target objects in an image. Second, given the adjusted boxes, the module aligns the receptive field of the convolution filters accordingly, not requiring any embedding layers. Both steps build a propose-and-attend mechanism, mimicking two-stage detectors in a highly efficient manner. To verify its effectiveness, we apply the proposed module to a basic one-stage detector SSD. Our final model achieves an accuracy comparable to that of state-of-the-art detectors while using a fraction of their model parameters and computational overheads. Moreover, we found that the proposed module has two strong applications. 1) The module can be successfully integrated into a lightweight backbone, further pushing the efficiency of the one-stage detector. 2) The module also allows train-from-scratch without relying on any sophisticated base networks as previous methods do.
Two-stage detectors @cite_1 @cite_33 are composed of two parts. The first part generates a sparse set of region proposals, and the second part further classifies and regresses the proposals. These two-stage detectors have occupied top entries of challenging benchmarks @cite_33 @cite_32 @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_32", "@cite_1", "@cite_33" ], "mid": [ "2895403383", "2743473392", "2884561390", "2963351448" ], "abstract": [ "Current two-stage object detectors, which consists of a region proposal stage and a refinement stage, may produce unreliable results due to ill-localized proposed regions. To address this problem, we propose a context refinement algorithm that explores rich contextual information to better refine each proposed region. In particular, we first identify neighboring regions that may contain useful contexts and then perform refinement based on the extracted and unified contextual information. In practice, our method effectively improves the quality of the final detection results as well as region proposals. Empirical studies show that context refinement yields substantial and consistent improvements over different baseline detectors. Moreover, the proposed algorithm brings around 3 performance gain on PASCAL VOC benchmark and around 6 gain on MS COCO benchmark respectively.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. 
Code is at: https: github.com facebookresearch Detectron.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors." ] }
1907.12861
2965663461
We introduce LEAF-QA, a comprehensive dataset of @math densely annotated figures charts, constructed from real-world open data sources, along with 2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA), and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. LEAF-QA being constructed from real-world sources, requires a novel architecture to enable question answering. To this end, LEAF-Net, a deep architecture involving chart element localization, question and answer encoding in terms of chart elements, and an attention network is proposed. Different experiments are conducted to demonstrate the challenges of QA on LEAF-QA. The proposed architecture, LEAF-Net also considerably advances the current state-of-the-art on FigureQA and DVQA.
There has been recent interest in analyzing figures and charts, particularly to identify the type of visualization and to extract data from chart images. @cite_26 describe algorithms to extract data from pie and bar charts, particularly to re-visualize them. Further, interactive methods for bar chart extraction have been studied @cite_34 @cite_22 . @cite_14 describe an object detection framework for extracting scatter plot elements. Similarly, an analysis for line plot extraction has been presented by @cite_19 . There have also been attempts at indexing figures @cite_35 @cite_11 for search and classification. Böschen et al. @cite_15 and @cite_8 describe methods for improving text and symbol extraction from figures. @cite_7 describe a framework to restyle different kinds of visualizations by manipulating the data in the SVGs.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_19", "@cite_15", "@cite_34", "@cite_11" ], "mid": [ "2139488215", "2075058343", "2607825849", "2053604034" ], "abstract": [ "Information graphics, such as graphs and plots, are used in technical documents to convey information to humans and to facilitate greater understanding. Usually, graphics are a key component in a technical document, as they enable the author to convey complex ideas in a simplified visual format. However, in an automatic text recognition system, which are typically used to digitize documents, the ideas conveyed in a graphical format are lost. We contend that the message or extracted information can be used to help better understand the ideas conveyed in the document. In scientific papers, line plots are the most commonly used graphic to represent experimental results in the form of correlation present between values represented on the axes. The contribution of our work is in the series of image processing algorithms that are used to automatically extract relevant information, including text and plot from graphics found in technical documents. We validate the approach by performing the experiments on a dataset of line plots obtained from scientific documents from computer science conference papers and evaluate the variation of a reconstructed curve from the original curve. Our algorithm achieves a classification accuracy of 91 across the dataset and successfully extracts the axes from 92 of line plots. Axes label extraction and line curve tracing are performed successfully in about half the line plots as well.", "Existing research on analyzing information graphics assume to have a perfect text detection and extraction available. However, text extraction from information graphics is far from solved. To fill this gap, we propose a novel processing pipeline for multi-oriented text extraction from infographics. The pipeline applies a combination of data mining and computer vision techniques to identify text elements, cluster them into text lines, compute their orientation, and uses a state-of-the-art open source OCR engine to perform the text recognition. We evaluate our method on 121 infographics extracted from an open access corpus of scientific publications. The results show that our approach is effective and significantly outperforms a state-of-the-art baseline.", "Charts are an excellent way to convey patterns and trends in data, but they do not facilitate further modeling of the data or close inspection of individual data points. We present a fully automated system for extracting the numerical values of data points from images of scatter plots. We use deep learning techniques to identify the key components of the chart, and optical character recognition together with robust regression to map from pixels to the coordinate system of the chart. We focus on scatter plots with linear scales, which already have several interesting challenges. Previous work has done fully automatic extraction for other types of charts, but to our knowledge this is the first approach that is fully automatic for scatter plots. Our method performs well, achieving successful data extraction on 89 of the plots in our test set.", "Poorly designed charts are prevalent in reports, magazines, books and on the Web. Most of these charts are only available as bitmap images; without access to the underlying data it is prohibitively difficult for viewers to create more effective visual representations. 
In response we present ReVision, a system that automatically redesigns visualizations to improve graphical perception. Given a bitmap image of a chart as input, ReVision applies computer vision and machine learning techniques to identify the chart type (e.g., pie chart, bar chart, scatterplot, etc.). It then extracts the graphical marks and infers the underlying data. Using a corpus of images drawn from the web, ReVision achieves image classification accuracy of 96 across ten chart categories. It also accurately extracts marks from 79 of bar charts and 62 of pie charts, and from these charts it successfully extracts data from 71 of bar charts and 64 of pie charts. ReVision then applies perceptually-based design principles to populate an interactive gallery of redesigned charts. With this interface, users can view alternative chart designs and retarget content to different visual styles." ] }
1907.12861
2965663461
We introduce LEAF-QA, a comprehensive dataset of @math densely annotated figures charts, constructed from real-world open data sources, along with 2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA), and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. LEAF-QA being constructed from real-world sources, requires a novel architecture to enable question answering. To this end, LEAF-Net, a deep architecture involving chart element localization, question and answer encoding in terms of chart elements, and an attention network is proposed. Different experiments are conducted to demonstrate the challenges of QA on LEAF-QA. The proposed architecture, LEAF-Net also considerably advances the current state-of-the-art on FigureQA and DVQA.
Learning to answer questions based on natural images has been an area of extensive research in recent years. Several datasets including DAQUAR @cite_6 , COCO-QA @cite_16 , VQA @cite_2 , Visual7w @cite_9 and MovieQA @cite_33 have been proposed to explore different facets of question answering on natural images and videos. Correspondingly, methods using attention @cite_29 @cite_32 @cite_10 , neural modules @cite_24 and compositional modeling @cite_23 have been explored. There has also been related work on question answering on synthetic data @cite_27 @cite_30 . However, the current work is most related to recent work on multimodal question answering @cite_17 @cite_25 , which shows that current VQA models do not perform well when reasoning about text in natural images, and hence that images and scene text need to be modeled jointly for question answering.
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_9", "@cite_29", "@cite_32", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_2", "@cite_16", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2174492417", "2963398599", "2391839782", "2176212817" ], "abstract": [ "We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions.", "We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA [23] and VQA [1] and show that it produces the best reported results in both cases.", "We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. 
Moreover, we also extend our analysis to VQA, a large-scale question answering about images dataset, where we investigate some particular design choices and show the importance of stronger visual models. At the same time, we achieve strong performance of our model that still uses a global image representation. Finally, based on such analysis, we refine our Ask Your Neurons on DAQUAR, which also leads to a better performance on this challenging task.", "We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases." ] }
1907.12646
2964433578
In this paper, we propose a noise-aware exposure control algorithm for robust robot vision. Our method aims to capture the best-exposed image which can boost the performance of various computer vision and robotics tasks. For this purpose, we carefully design an image quality metric which captures complementary quality attributes and ensures light-weight computation. Specifically, our metric consists of a combination of image gradient, entropy, and noise metrics. The synergy of these measures allows preserving sharp edge and rich texture in the image while maintaining a low noise level. Using this novel metric, we propose a real-time and fully automatic exposure and gain control technique based on the Nelder-Mead method. To illustrate the effectiveness of our technique, a large set of experimental results demonstrates higher qualitative and quantitative performances when compared with conventional approaches.
Capturing a well-exposed image is an essential precondition for applying vision-based algorithms in challenging environments. In this paper, we define the term well-exposed image from a robotics point of view: an image containing texture details and sharp object boundaries, with low noise, saturation, and blur. In fact, these conditions are desirable for various tasks such as visual SLAM @cite_15 , which requires robust and repeatable keypoint detection, instance segmentation @cite_0 , which requires sharp object boundaries, and object classification, where even imperceptible noise may lead to misclassification @cite_3 .
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_3" ], "mid": [ "2541136605", "2581510845", "2563042993", "1761788141" ], "abstract": [ "This paper describes a visual feature detector and descriptor scheme designed to address the specific problems of humanoid robots in the tasks of visual odometry, localization, and SLAM (Simultaneous Localization And Mapping). During walking, turning, and squatting movements, the camera of a humanoid robot moves in jerky and sometimes unpredictable way. This causes an undesired motion blur in the images grabbed by the robot camera, that negatively affects the performance of the image processing algorithms. Indeed, the classical features detector and descriptor filtering techniques, that proved to work so well for wheeled robots, do not perform so reliably in humanoid robots. This paper presents a method to detect image interest points (invariant to scale transformation and rotations) robust to motion-blur introduced by the camera motion. Our approach is based on a preprocessing step to estimate the point spread function (PSF) of the motion blur. The PSF is used to deconvolve the image reducing the blur. Then, we apply a feature detector inspired by SURF approach and the feature descriptor from SIFT. Experiments performed on standard datasets corrupted with motion blur and on images taken by a camera mounted on a small humanoid robot show the effectiveness of the proposed technique. Our approach presents higher performances and higher reliability in matching features in the different images of a sequence affected by motion-blur.", "Real-world sensors suffer from noise, blur, and other imperfections that make high-level computer vision tasks like scene segmentation, tracking, and scene understanding difficult. Making high-level computer vision networks robust is imperative for real-world applications like autonomous driving, robotics, and surveillance. We propose a novel end-to-end differentiable architecture for joint denoising, deblurring, and classification that makes classification robust to realistic noise and blur. The proposed architecture dramatically improves the accuracy of a classification network in low light and other challenging conditions, outperforming alternative approaches such as retraining the network on noisy and blurry images and preprocessing raw sensor inputs with conventional denoising and deblurring algorithms. The architecture learns denoising and deblurring pipelines optimized for classification whose outputs differ markedly from those of state-of-the-art denoising and deblurring methods, preserving fine detail at the cost of more noise and artifacts. Our results suggest that the best low-level image processing for computer vision is different from existing algorithms designed to produce visually pleasing images. The principles used to design the proposed architecture easily extend to other high-level computer vision tasks and image formation models, providing a general framework for integrating low-level and high-level image processing.", "The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. 
The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.", "One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most of the work in object detection has been based on single image processing and its performance is limited by occlusions and ambiguity in appearance and geometry. This paper proposes an active approach to object detection by controlling the point of view of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of views, which balances the amount of energy used to move with the chance of identifying the correct hypothesis. We formulate an active hypothesis testing problem, which includes sensor mobility, and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and real-world experiments with the PR2 robot. The results suggest that our approach outperforms the widely-used greedy view point selection and provides a significant improvement over static object detection." ] }
1907.12646
2964433578
In this paper, we propose a noise-aware exposure control algorithm for robust robot vision. Our method aims to capture the best-exposed image which can boost the performance of various computer vision and robotics tasks. For this purpose, we carefully design an image quality metric which captures complementary quality attributes and ensures light-weight computation. Specifically, our metric consists of a combination of image gradient, entropy, and noise metrics. The synergy of these measures allows preserving sharp edge and rich texture in the image while maintaining a low noise level. Using this novel metric, we propose a real-time and fully automatic exposure and gain control technique based on the Nelder-Mead method. To illustrate the effectiveness of our technique, a large set of experimental results demonstrates higher qualitative and quantitative performances when compared with conventional approaches.
The main problem of gradient-based metrics is their tendency to favor high exposures, which in turn leads to over-exposed images. To avoid this problem, Kim et al. @cite_17 proposed a gradient weighting scheme based on local image entropy. The optimal exposure is estimated via a Bayesian optimization framework, which searches for the global solution by fitting surrogate models. However, the complexity of the Bayesian optimization and the weighting scheme does not allow real-time operation.
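As a rough illustration of such an entropy-weighted gradient metric, the sketch below (a simplified stand-in, not the formulation of @cite_17 or of the present paper; it assumes OpenCV and scikit-image are available) down-weights gradients in low-entropy regions, so that saturated, texture-less areas no longer reward ever higher exposures. In an exposure-control loop, the exposure/gain candidate maximizing such a score would be selected.

```python
# Sketch: entropy-weighted gradient score for a grayscale frame.
import numpy as np
import cv2
from skimage.filters.rank import entropy
from skimage.morphology import disk

def entropy_weighted_gradient_score(gray_u8, win_radius=5):
    """gray_u8: 8-bit grayscale frame; returns a scalar image-quality score."""
    gx = cv2.Sobel(gray_u8, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_u8, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = np.sqrt(gx * gx + gy * gy)
    # Local Shannon entropy (in bits) in a disk-shaped neighbourhood; flat or
    # saturated regions have low entropy and are therefore down-weighted.
    local_ent = entropy(gray_u8, disk(win_radius)).astype(np.float32)
    weights = local_ent / 8.0          # 8 bits is the maximum entropy of a uint8 image
    return float(np.sum(weights * grad_mag))
```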
{ "cite_N": [ "@cite_17" ], "mid": [ "2889607906", "2198155329", "49238428", "2949358371" ], "abstract": [ "Under- and oversaturation can cause severe image degradation in many vision-based robotic applications. To control camera exposure in dynamic lighting conditions, we introduce a novel metric for image information measure. Measuring an image gradient is typical when evaluating its level of image detail. However, emphasizing more informative pixels substantially improves the measure within an image. By using this entropy weighted image gradient, we introduce an optimal exposure value for vision-based approaches. Using this newly invented metric, we also propose an effective exposure control scheme that covers a wide range of light conditions. When evaluating the function (e.g., image frame grab) is expensive, the next best estimation needs to be carefully considered. Through Bayesian optimization, the algorithm can estimate the optimal exposure value with minimal cost. We validated the proposed image information measure and exposure control scheme via a series of thorough experiments using various exposure conditions.", "Patch-based low-rank models have shown effective in exploiting spatial redundancy of natural images especially for the application of image denoising. However, two-dimensional low-rank model can not fully exploit the spatio-temporal correlation in larger data sets such as multispectral images and 3D MRIs. In this work, we propose a novel low-rank tensor approximation framework with Laplacian Scale Mixture (LSM) modeling for multi-frame image denoising. First, similar 3D patches are grouped to form a tensor of d-order and high-order Singular Value Decomposition (HOSVD) is applied to the grouped tensor. Then the task of multiframe image denoising is formulated as a Maximum A Posterior (MAP) estimation problem with the LSM prior for tensor coefficients. Both unknown sparse coefficients and hidden LSM parameters can be efficiently estimated by the method of alternating optimization. Specifically, we have derived closed-form solutions for both subproblems. Experimental results on spectral and dynamic MRI images show that the proposed algorithm can better preserve the sharpness of important image structures and outperform several existing state-of-the-art multiframe denoising methods (e.g., BM4D and tensor dictionary learning).", "We consider unsupervised partitioning problems based explicitly or implicitly on the minimization of Euclidean distortions, such as clustering, image or video segmentation, and other change-point detection problems. We emphasize on cases with specific structure, which include many practical situations ranging from mean-based change-point detection to image segmentation problems. We aim at learning a Mahalanobis metric for these unsupervised problems, leading to feature weighting and or selection. This is done in a supervised way by assuming the availability of several (partially) labeled datasets that share the same metric. We cast the metric learning problem as a large-margin structured prediction problem, with proper definition of regularizers and losses, leading to a convex optimization problem which can be solved efficiently. Our experiments show how learning the metric can significantly improve performance on bioinformatics, video or image segmentation problems.", "Solving for adversarial examples with projected gradient descent has been demonstrated to be highly effective in fooling the neural network based classifiers. 
However, in the black-box setting, the attacker is limited only to the query access to the network and solving for a successful adversarial example becomes much more difficult. To this end, recent methods aim at estimating the true gradient signal based on the input queries but at the cost of excessive queries. We propose an efficient discrete surrogate to the optimization problem which does not require estimating the gradient and consequently becomes free of the first order update hyperparameters to tune. Our experiments on Cifar-10 and ImageNet show the state of the art black-box attack performance with significant reduction in the required queries compared to a number of recently proposed methods. The source code is available at this https URL." ] }
1907.12782
2964885899
Bluetooth Low Energy (BLE) has become an intrinsic wireless technology for the Internet of Things (IoT). With the proliferation of BLE-embedded IoT devices, it is important to study the security and privacy implications of BLE. The forefront attack to BLE devices is the wireless sniffing attack, which would lead to more detrimental threats like jamming, encryption cracking or system penetration. Existing sniffing attacks are based on the correct detection of BLE connection initiation state, but they become ineffective for BLE long-lived connections. In this paper, we focus on the adversary setting with a low-cost single radio and develop a suite of real-time algorithms to determine the key parameters necessary to follow and sniff a BLE connection in the connected state. We implement our algorithms in the open source platform -Ubertooth One and evaluate its performance in terms of sniffing overhead and accuracy. By comparing with state-of-the-art schemes, experimental results show that our sniffer achieves much higher sniffing accuracy (over 80 ) and better stability to BLE operational dynamics.
The aforementioned sniffing attacks are based on multi-radio platforms, which can be prohibitively expensive for less-capable adversaries. Currently, one popular platform, Ubertooth One, is an open-source, cheap, single-radio Bluetooth sniffer developed by Ryan @cite_17 . This powerful sniffer relies on observing advertisement packets and looking for the AFH parameters needed to follow a connection. As stated earlier, it is difficult to sniff a BLE connection after the connection has already been established. In @cite_17 , Ryan suggests how AFH parameters can be extracted from a long-lived BLE connection via jamming. Moreover, Ryan also demonstrates a proof-of-concept key re-negotiation and man-in-the-middle (MitM) attack on BLE devices.
{ "cite_N": [ "@cite_17" ], "mid": [ "2563417884", "2611258330", "2013405390", "58703277" ], "abstract": [ "This study investigated the impact of Bluetooth Low Energy devices in advertising beaconing mode on fingerprint-based indoor positioning schemes. Early experimentation demonstrated that the low bandwidth of BLE signals compared to WiFi is the cause of significant measurement error when coupled with the use of three BLE advertising channels. The physics underlying this behaviour is verified in simulation. A multipath mitigation scheme is proposed and tested. It is determined that the optimal positioning performance is provided by 10Hz beaconing and a 1 second multipath mitigation processing window size. It is determined that a steady increase in positioning performance with fingerprint size occurs up to 7 ±1, above this there is no clear benefit to extra beacon coverage.", "Infrastructure monitoring applications currently lack a cost-effective and reliable solution for supporting the last communication hop for low-power devices. The use of cellular infrastructure requires contracts and complex radios that are often too power hungry and cost prohibitive for sensing applications that require just a few bits of data each day. New low-power, sub-GHz, long-range radios are an ideal technology to help fill this communication void by providing access points that are able to cover multiple kilometers of urban space with thousands of end-point devices. These new Low-Power Wide-Area Networking (LPWAN) platforms provide a cost-effective and highly deployable option that could piggyback off of existing public and private wireless networks (WiFi, Cellular, etc). In this paper, we present OpenChirp, a prototype end-to-end LPWAN architecture built using LoRa Wide-Area Network (LoRaWAN) with the goal of simplifying the design and deployment of Internet-of-Things (IoT) devices across wide areas like campuses and cities. We present a software architecture that exposes an application layer allowing users to register devices, describe transducer properties, transfer data and retrieve historical values. We define a service model on top of LoRaWAN that acts as a session layer to provide basic encoding and syntax to raw data streams. At the device-level, we introduce and benchmark an open-source hardware platform that uses Bluetooth Low-Energy (BLE) to help provision LoRa clients that can be extended with custom transducers. We evaluate the system in terms of end-node energy consumption, radio penetration into buildings as well as coverage provided by a network currently deployed at Carnegie Mellon University.", "This paper presents a new pairing protocol that allows twoCPU-constrained wireless devices Alice and Bob to establish ashared secret at a very low cost. To our knowledge, this is thefirst software pairing scheme that does not rely on expensivepublic-key cryptography, out-of-band channels (such as a keyboardor a display) or specific hardware, making it inexpensive andsuitable for CPU-constrained devices such as sensors. In the described protocol, Alice can send the secret bit 1 toBob by broadcasting an (empty) packet with the source field set toAlice. Similarly, Alice can send the secret bit 0 to Bob bybroadcasting an (empty) packet with the source field set to Bob.Only Bob can identify the real source of the packet (since it didnot send it, the source is Alice), and can recover the secret bit(1 if the source is set to Alice or 0 otherwise). 
An eavesdroppercannot retrieve the secret bit since it cannot figure out whetherthe packet was actually sent by Alice or Bob. By randomlygenerating n such packets Alice and Bob can agree on ann-bit secret key. Our scheme requires that the devices being paired, Alice andBob, are shaken during the key exchange protocol. This is toguarantee that an eavesdropper cannot identify the packets sent byAlice from those sent by Bob using data from the RSSI (ReceivedSignal Strength Indicator) registers available in commercialwireless cards. The proposed protocol works with off-the-shelf802.11 wireless cards and is secure against eavesdropping attacksthat use power analysis. It requires, however, some firmwarechanges to protect against attacks that attempt to identify thesource of packets from their transmission frequency.", "We discuss our tools and techniques to monitor and inject packets in Bluetooth Low Energy. Also known as BTLE or Bluetooth Smart, it is found in recent high-end smartphones, sports devices, sensors, and will soon appear in many medical devices. We show that we can effectively render useless the encryption of any Bluetooth Low Energy link." ] }
1907.12648
2966026106
In multi-agent path finding (MAPF) the task is to navigate agents from their starting positions to given individual goals. The problem takes place in an undirected graph whose vertices represent positions and edges define the topology. Agents can move to neighbor vertices across edges. In the standard MAPF, space occupation by agents is modeled by a capacity constraint that permits at most one agent per vertex. We suggest an extension of MAPF in this paper that permits more than one agent per vertex. Propositional satisfiability (SAT) models for these extensions of MAPF are studied. We focus on modeling capacity constraints in SAT-based formulations of MAPF and evaluation of performance of these models. We extend two existing SAT-based formulations with vertex capacity constraints: MDD-SAT and SMT-CBS where the former is an approach that builds the model in an eager way while the latter relies on lazy construction of the model.
The idea behind the SAT-based approach is to construct a propositional formula @math such that it is satisfiable if and only if a solution of a given MAPF instance with sum-of-costs @math exists @cite_3 . Moreover, the approach is constructive; that is, @math exactly reflects the MAPF instance, and if the formula is satisfiable, a solution of the MAPF instance can be reconstructed from a satisfying assignment. We say that @math is a complete propositional model of MAPF.
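To make the shape of such a model concrete, here is a minimal sketch (an illustration only, not the MDD-SAT or SMT-CBS implementation) of how the vertex capacity constraints studied in this paper can be expressed as clauses over time-expanded variables. It assumes the python-sat package and a hypothetical variable numbering var(a, v, t) meaning "agent a occupies vertex v at time step t".

```python
# Sketch: vertex capacity constraints over a time-expanded MAPF encoding.
# Requires the 'python-sat' package (pip install python-sat).
from pysat.formula import CNF
from pysat.card import CardEnc, EncType

def build_capacity_clauses(num_agents, num_vertices, horizon, capacity):
    cnf = CNF()
    def var(a, v, t):  # 1-based DIMACS variable id for "agent a at vertex v at time t"
        return 1 + a + num_agents * (v + num_vertices * t)
    top = var(num_agents - 1, num_vertices - 1, horizon - 1)
    for t in range(horizon):
        for v in range(num_vertices):
            lits = [var(a, v, t) for a in range(num_agents)]
            # at most 'capacity' of these literals may be true simultaneously
            amk = CardEnc.atmost(lits=lits, bound=capacity,
                                 top_id=top, encoding=EncType.seqcounter)
            cnf.extend(amk.clauses)
            top = max(top, amk.nv)   # account for auxiliary variables of the encoding
    return cnf, var

# The remaining movement and goal constraints (omitted here) would be added to 'cnf';
# a satisfying assignment found by a solver from pysat.solvers is then decoded back
# into agent trajectories via 'var'.
```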
{ "cite_N": [ "@cite_3" ], "mid": [ "1515930456", "2951782995", "2152383085", "16743356" ], "abstract": [ "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental comparison.", "Propositional model counting (#SAT), i.e., counting the number of satisfying assignments of a propositional formula, is a problem of significant theoretical and practical interest. Due to the inherent complexity of the problem, approximate model counting, which counts the number of satisfying assignments to within given tolerance and confidence level, was proposed as a practical alternative to exact model counting. Yet, approximate model counting has been studied essentially only theoretically. The only reported implementation of approximate model counting, due to Karp and Luby, worked only for DNF formulas. A few existing tools for CNF formulas are bounding model counters; they can handle realistic problem sizes, but fall short of providing counts within given tolerance and confidence, and, thus, are not approximate model counters. We present here a novel algorithm, as well as a reference implementation, that is the first scalable approximate model counter for CNF formulas. The algorithm works by issuing a polynomial number of calls to a SAT solver. Our tool, ApproxMC, scales to formulas with tens of thousands of variables. Careful experimental comparisons show that ApproxMC reports, with high confidence, bounds that are close to the exact count, and also succeeds in reporting bounds with small tolerance and high confidence in cases that are too large for computing exact model counts.", "We consider the problem of computing optimal plans for propositional planning problems with action costs. In the spirit of leveraging advances in general-purpose automated reasoning for that setting, we develop an approach that operates by solving a sequence of partial weighted MaxSAT problems, each of which corresponds to a step-bounded variant of the problem at hand. Our approach is the first SAT-based system in which a proof of cost optimality is obtained using a MaxSAT procedure. It is also the first system of this kind to incorporate an admissible planning heuristic. 
We perform a detailed empirical evaluation of our work using benchmarks from a number of International Planning Competitions.", "From a computational perspective, there is a close connection between various probabilistic reasoning tasks and the problem of counting or sampling satisfying assignments of a propositional theory. We consider the question of whether state-of-the-art satisfiability procedures, based on random walk strategies, can be used to sample uniformly or nearuniformly from the space of satisfying assignments. We first show that random walk SAT procedures often do reach the full set of solutions of complex logical theories. Moreover, by interleaving random walk steps with Metropolis transitions, we also show how the sampling becomes near-uniform." ] }
1907.12648
2966026106
In multi-agent path finding (MAPF) the task is to navigate agents from their starting positions to given individual goals. The problem takes place in an undirected graph whose vertices represent positions and edges define the topology. Agents can move to neighbor vertices across edges. In the standard MAPF, space occupation by agents is modeled by a capacity constraint that permits at most one agent per vertex. We suggest an extension of MAPF in this paper that permits more than one agent per vertex. Propositional satisfiability (SAT) models for these extensions of MAPF are studied. We focus on modeling capacity constraints in SAT-based formulations of MAPF and evaluation of performance of these models. We extend two existing SAT-based formulations with vertex capacity constraints: MDD-SAT and SMT-CBS where the former is an approach that builds the model in an eager way while the latter relies on lazy construction of the model.
A common technique for reducing the number of decision variables derived from the time expansion is the use of multi-value decision diagrams (MDDs) @cite_14 . The basic observation that holds for MAPF is that an agent can reach vertices at distance @math from its current position (where the distance of a vertex is measured as the length of the shortest path) no earlier than in the @math -th time step. An analogous observation can be made with respect to the distance from the goal position.
{ "cite_N": [ "@cite_14" ], "mid": [ "2137108108", "1585861384", "2225272567", "2119218916" ], "abstract": [ "An investigation was made of the analogous graph structure for representing and manipulating discrete variable problems. The authors define the multi-valued decision diagram (MDD), analyze its properties (in particular prove a strong canonical form) and provide algorithms for combining and manipulating MDDs. They give a method for mapping an MDD into an equivalent BDD (binary decision diagram) which allows them to provide a highly efficient implementation using the previously developed BDD packages. A direct implementation of the MDD structure has also been carried out, but this initial implementation has not yet been tuned to the same extent as the BDDs to allow a reasonable comparison to be made. The authors have used the mapping to BDDs to provide an initial understanding of the limits on the sizes of real problems that can be executed. The results are encouraging. >", "Decision making usually involves choosing among different courses of action over a broad range of time scales. For instance, a person planning a trip to a distant location makes high-level decisions regarding what means of transportation to use, but also chooses low-level actions, such as the movements for getting into a car. The problem of picking an appropriate time scale for reasoning and learning has been explored in artificial intelligence, control theory and robotics. In this dissertation we develop a framework that allows novel solutions to this problem, in the context of Markov Decision Processes (MDPs) and reinforcement learning. In this dissertation, we present a general framework for prediction, control and learning at multiple temporal scales. In this framework, temporally extended actions are represented by a way of behaving (a policy) together with a termination condition. An action represented in this way is called an option. Options can be easily incorporated in MDPs, allowing an agent to use existing controllers, heuristics for picking actions, or learned courses of action. The effects of behaving according to an option can be predicted using multi-time models, learned by interacting with the environment. In this dissertation we develop multi-time models, and we illustrate the way in which they can be used to produce plans of behavior very quickly, using classical dynamic programming or reinforcement learning techniques. The most interesting feature of our framework is that it allows an agent to work simultaneously with high-level and low-level temporal representations. The interplay of these levels can be exploited in order to learn and plan more efficiently and more accurately. We develop new algorithms that take advantage of this structure to improve the quality of plans, and to learn in parallel about the effects of many different options.", "We study the complexity of motivating time-inconsistent agents to complete long term projects in a graph-based planning model as proposed by Kleinberg and Oreni¾?[5]. Given a task graph G with n nodes, our objective is to guide an agent towards a target node t under certain budget constraints. The crux is that the agent may change its strategy over time due to its present-bias. We consider two strategies to guide the agent. First, a single reward is placed at t and arbitrary edges can be removed from G. Secondly, rewards can be placed at arbitrary nodes of G but no edges must be deleted. 
In both cases we show that it is NP-complete to decide if a given budget is sufficient to guide the agent. For the first setting, we give complementing upper and lower bounds on the approximability of the minimum required budget. In particular, we devise a @math 1+n-approximation algorithm and prove NP-hardness for ratios greater than @math n 3. Finally, we argue that the second setting does not permit any efficient approximation unless @math P=NP.", "In this paper, we consider the problem of maximizing the spread of influence through a social network. Given a graph with a threshold value thr(v) attached to each vertex v, the spread of influence is modeled as follows: A vertex v becomes ''active'' (influenced) if at least thr(v) of its neighbors are active. In the corresponding optimization problem the objective is then to find a fixed number k of vertices to activate such that the number of activated vertices at the end of the propagation process is maximum. We show that this problem is strongly inapproximable in time f(k)@?n^O^(^1^), for some function f, even for very restrictive thresholds. In the case that the threshold of each vertex equals its degree, we prove that the problem is inapproximable in polynomial time and it becomes r(n)-approximable in time f(k)@?n^O^(^1^), for some function f, for any strictly increasing function r. Moreover, we show that the decision version parameterized by k is W[1]-hard but becomes fixed-parameter tractable on bounded degree graphs." ] }
1907.12743
2966860738
Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9 accuracy gain over "Source only" from 73.9 to 81.8 on "HMDB --> UCF", and 10.3 gain on "Kinetics --> Gameplay"). The code and data are released at this http URL
With the rise of deep convolutional neural networks (CNNs), recent work for video classification mainly aims to learn compact spatio-temporal representations by leveraging CNNs for spatial information and designing various architectures to exploit temporal dynamics @cite_55 . In addition to separating spatial and temporal learning, some works propose different architectures to encode spatio-temporal representations with consideration of the trade-off between performance and computational cost @cite_21 @cite_63 @cite_42 @cite_43 . Another branch of work utilizes optical flow to compensate for the lack of temporal information in raw RGB frames @cite_62 @cite_28 @cite_65 @cite_63 @cite_14 . Moreover, some works extract temporal dependencies between frames for video tasks by utilizing recurrent neural networks (RNNs) @cite_37 , attention @cite_22 @cite_17 and relation modules @cite_44 . Note that we focus on attending to the temporal dynamics to effectively align domains and we consider other modalities, e.g. optical flow, to be complementary to our method.
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_14", "@cite_22", "@cite_28", "@cite_55", "@cite_21", "@cite_42", "@cite_65", "@cite_44", "@cite_43", "@cite_63", "@cite_17" ], "mid": [ "2883429621", "2517503862", "2761659801", "2963820951" ], "abstract": [ "Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. Three main challenges exist including spatial (image) feature representation, temporal information representation, and model computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level “semantic” features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).", "This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets.", "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. 
Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 x 3 x 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques." ] }
1907.12743
2966860738
Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9 accuracy gain over "Source only" from 73.9 to 81.8 on "HMDB --> UCF", and 10.3 gain on "Kinetics --> Gameplay"). The code and data are released at this http URL
Most recent DA approaches are based on deep learning architectures designed to address the domain shift problem, given that deep CNN features without any DA method already outperform traditional DA methods using hand-crafted features @cite_29 . Most DA approaches follow the two-branch (source and target) architecture, and aim to find a common feature space between the source and target domains. The models are therefore optimized with a combination of and losses @cite_52 .
{ "cite_N": [ "@cite_29", "@cite_52" ], "mid": [ "2963864946", "2214409633", "2953226914", "2786906486" ], "abstract": [ "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Little studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). 
Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting." ] }
1907.12704
2964609265
Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep learning based methods depend on learning global geometry from self-reconstruction. However, these methods are still suffering from ineffective learning of local geometry, which significantly limits the discriminability of learned features. To resolve this issue, we propose MAP-VAE to enable the learning of global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then, train MAP-VAE to learn to predict a back half sequence from the corresponding front half sequence. MAP-VAE performs this half-to-half prediction using RNN to simultaneously learn each local geometry and the spatial relationship among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.
Deep learning models have led to significant progress in feature learning for 3D shapes @cite_27 @cite_18 @cite_33 @cite_4 @cite_17 @cite_37 @cite_32 @cite_12 @cite_3 @cite_25 . Here, we focus on reviewing studies on point clouds. For supervised methods, supervised information, such as shape class labels or segmentation labels, is required to train deep learning models in the feature learning process. In contrast, unsupervised methods are designed to mine self-supervision information from point clouds for training, which eliminates the need for supervised information that can be tedious to obtain. We briefly review the state-of-the-art methods in these two categories as follows.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_4", "@cite_33", "@cite_32", "@cite_3", "@cite_27", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2964609265", "2785325870", "2962742544", "2798314605" ], "abstract": [ "Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep learning based methods depend on learning global geometry from self-reconstruction. However, these methods are still suffering from ineffective learning of local geometry, which significantly limits the discriminability of learned features. To resolve this issue, we propose MAP-VAE to enable the learning of global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then, train MAP-VAE to learn to predict a back half sequence from the corresponding front half sequence. MAP-VAE performs this half-to-half prediction using RNN to simultaneously learn each local geometry and the spatial relationship among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. 
However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.", "3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet [5] and KITTI [18], we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet [49], we additionally show that the approach is able to generalize to other object categories as well." ] }
1907.12704
2964609265
Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep learning based methods depend on learning global geometry from self-reconstruction. However, these methods are still suffering from ineffective learning of local geometry, which significantly limits the discriminability of learned features. To resolve this issue, we propose MAP-VAE to enable the learning of global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then, train MAP-VAE to learn to predict a back half sequence from the corresponding front half sequence. MAP-VAE performs this half-to-half prediction using RNN to simultaneously learn each local geometry and the spatial relationship among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.
As a pioneering work, PointNet @cite_28 was proposed to directly learn features from point clouds with deep learning models. However, PointNet is limited in capturing contextual information among points. To resolve this issue, various techniques were proposed to establish a graph in a local region to capture the relationships among points in the region @cite_5 @cite_38 @cite_19 @cite_2 @cite_23 . Furthermore, multi-scale analysis @cite_20 was introduced to extract more semantic features from the local region by separating points into scales or bins, and then aggregating these features by concatenation @cite_15 or an RNN @cite_36 . These methods require supervised information in the feature learning process, which differs from the unsupervised approach of MAP-VAE.
{ "cite_N": [ "@cite_38", "@cite_28", "@cite_36", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2964609265", "2899746032", "2796040722", "2963053547" ], "abstract": [ "Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep learning based methods depend on learning global geometry from self-reconstruction. However, these methods are still suffering from ineffective learning of local geometry, which significantly limits the discriminability of learned features. To resolve this issue, we propose MAP-VAE to enable the learning of global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then, train MAP-VAE to learn to predict a back half sequence from the corresponding front half sequence. MAP-VAE performs this half-to-half prediction using RNN to simultaneously learn each local geometry and the spatial relationship among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.", "Exploring contextual information in the local region is important for shape understanding and analysis. Existing studies often employ hand-crafted or explicit ways to encode contextual information of local regions. However, it is hard to capture fine-grained contextual information in hand-crafted or explicit manners, such as the correlation between different areas in a local region, which limits the discriminative ability of learned features. To resolve this issue, we propose a novel deep learning model for 3D point clouds, named Point2Sequence, to learn 3D shape features by capturing fine-grained contextual information in a novel implicit way. Point2Sequence employs a novel sequence learning model for point clouds to capture the correlations by aggregating multi-scale areas of each local region with attention. Specifically, Point2Sequence first learns the feature of each area scale in a local region. Then, it captures the correlation between area scales in the process of aggregating all area scales using a recurrent neural network (RNN) based encoder-decoder structure, where an attention mechanism is proposed to highlight the importance of different area scales. Experimental results show that Point2Sequence achieves state-of-the-art performance in shape classification and segmentation tasks.", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. 
In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at this http URL", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at http: www.merl.com research license#KCNet" ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
The cross-depiction problem refers to the task of recognising visual objects regardless of their depiction, whether realistic or artistic. It is an under-researched area. Some work uses constellation models, e.g. Crowley and Zisserman use a DPM to learn figurative art on Greek vases @cite_27 . Others address the problem of searching a database of photographs based on a sketch query; edge-based HOG was explored in @cite_32 . Li @cite_11 developed rich sketch representations for sketch matching using both local features and global structures of sketches. Others have investigated sketch-based retrieval of video @cite_30 @cite_29 . Wu @cite_15 provide a non-neural fully-connected constellation model that is stable across depictions.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_32", "@cite_27", "@cite_15", "@cite_11" ], "mid": [ "2270586871", "1587734090", "2253323104", "2035652042" ], "abstract": [ "The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It introduces great challenge as the variance across photo and art domains is much larger than either alone. We extensively evaluate classification, domain adaptation and detection benchmarks for leading techniques, demonstrating that none perform consistently well given the cross-depiction problem. Finally we refine the DPM model, based on query expansion, enabling it to bridge the gap across depiction boundaries to some extent.", "The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It is a potentially significant yet under-researched problem. Emulating the remarkable human ability to recognise objects in an astonishingly wide variety of depictive forms is likely to advance both the foundations and the applications of Computer Vision. In this paper we benchmark classification, domain adaptation, and deep learning methods; demonstrating that none perform consistently well in the cross-depiction problem. Given the current interest in deep learning, the fact such methods exhibit the same behaviour as all but one other method: they show a significant fall in performance over inhomogeneous databases compared to their peak performance, which is always over data comprising photographs only. Rather, we find the methods that have strong models of spatial relations between parts tend to be more robust and therefore conclude that such information is important in modelling object classes regardless of appearance details.", "Cross-depiction is the recognition—and synthesis—of objects whether they are photographed, painted, drawn, etc. It is a significant yet underresearched problem. Emulating the remarkable human ability to recognise and depict objects in an astonishingly wide variety of depictive forms is likely to advance both the foundations and the applications of computer vision. In this paper we motivate the cross-depiction problem, explain why it is difficult, and discuss some current approaches. Our main conclusions are (i) appearance-based recognition systems tend to be over-fitted to one depiction, (ii) models that explicitly encode spatial relations between parts are more robust, and (iii) recognition and non-photorealistic synthesis are related tasks.", "The goal of this work is to find visually similar images even if they appear quite different at the raw pixel level. This task is particularly important for matching images across visual domains, such as photos taken over different seasons or lighting conditions, paintings, hand-drawn sketches, etc. We propose a surprisingly simple method that estimates the relative importance of different features in a query image based on the notion of \"data-driven uniqueness\". We employ standard tools from discriminative object detection in a novel way, yielding a generic approach that does not depend on a particular image representation or a specific visual domain. Our approach shows good performance on a number of difficult cross-domain visual tasks e.g., matching paintings or sketches to real photographs. The method also allows us to demonstrate novel applications such as Internet re-photography, and painting2gps. 
While at present the technique is too computationally intensive to be practical for interactive image retrieval, we hope that some of the ideas will eventually become applicable to that domain as well." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Deep learning has recently emerged as a truly significant development in Computer Vision. It has been successful on conventional databases, and over a wide range of tasks, with recognition rates in excess of @math . Other than this paper, we know of only two studies aimed at assessing the performance of well-established methods on the cross-depiction problem. Crowley and Zisserman @cite_18 use a subset of the 'Your Paintings' dataset @cite_2 , the subset being those that have been tagged with VOC categories @cite_4 . Using 11 classes, and objects that can only scale and translate, they report an overall drop in per-class Prec@k (at @math ) from 0.98 when trained and tested on paintings alone, to 0.66 when trained on photographs and tested on paintings. Hu and Collomosse @cite_32 use 33 shape categories in Flickr to compare a range of descriptors: SIFT, multi-resolution HOG, Self Similarity, Shape Context, Structure Tensor, and (their contribution) Gradient Field HOG. They test a collection of 8 distance measures, reporting low mean average precision rates in all cases. Our focus is on domain-shift via meta-learning, and we therefore concentrate our review on that area.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_32", "@cite_2" ], "mid": [ "1776042733", "2785325870", "2952809312", "2951498893" ], "abstract": [ "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision, but their use in graphics problems has been limited. 
In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches which consist of multiple complex stages of processing, each of which require careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. To verify our method we show that it can convincingly reproduce known test views from nearby imagery. Additionally we show images rendered from novel viewpoints. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of computer vision and natural language processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of News articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce new deep learning methods that address source detection, popularity prediction, article illustration and geolocation of articles. An adaptive CNN architecture is proposed, that shares most of the structure for all the tasks, and is suitable for multitask and transfer learning. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and popularity metrics). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Datasets exhibit bias, which can be problematic. In photographic image recognition, bias towards particular camera settings and other attributes can prevent models from generalising well @cite_12 . This motivated the collection of the multi-domain VLCS dataset: an aggregation of photos from Caltech, LabelMe, Pascal VOC 2007 and SUN09 @cite_12 . Until recently, domain adaptation and generalisation in image recognition focused on transfer across photo-only benchmarks. Now, more datasets are available that cover larger domain shifts across more varied depictive styles @cite_15 @cite_31 @cite_13 and better reflect the cross-depiction problem. We make use of the PACS dataset provided by Li et al. @cite_31 . As a domain generalisation benchmark, where one domain is an unseen target domain, PACS is a far more challenging task than photographic benchmarks. Li et al. @cite_31 measured an average KL-divergence @cite_14 of 0.85 between training and test domains across PACS, compared to 0.07 across VLCS.
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2157989183", "2065494927", "1943722231", "1560380655" ], "abstract": [ "In visual recognition problems, the common data distribution mismatches between training and testing make domain adaptation essential. However, image data is difficult to manually divide into the discrete domains required by adaptation algorithms, and the standard practice of equating datasets with domains is a weak proxy for all the real conditions that alter the statistics in complex ways (lighting, pose, background, resolution, etc.) We propose an approach to automatically discover latent domains in image or video datasets. Our formulation imposes two key properties on domains: maximum distinctiveness and maximum learnability. By maximum distinctiveness, we require the underlying distributions of the identified domains to be different from each other to the maximum extent; by maximum learnability, we ensure that a strong discriminative model can be learned from the domain. We devise a nonparametric formulation and efficient optimization procedure that can successfully discover domains among both training and test data. We extensively evaluate our approach on object recognition and human activity recognition tasks.", "We address the visual categorization problem and present a method that utilizes weakly labeled data from other visual domains as the auxiliary source data for enhancing the original learning system. The proposed method aims to expand the intra-class diversity of original training data through the collaboration with the source data. In order to bring the original target domain data and the auxiliary source domain data into the same feature space, we introduce a weakly-supervised cross-domain dictionary learning method, which learns a reconstructive, discriminative and domain-adaptive dictionary pair and the corresponding classifier parameters without using any prior information. Such a method operates at a high level, and it can be applied to different cross-domain applications. To build up the auxiliary domain data, we manually collect images from Web pages, and select human actions of specific categories from a different dataset. The proposed method is evaluated for human action recognition, image classification and event recognition tasks on the UCF YouTube dataset, the Caltech101 256 datasets and the Kodak dataset, respectively, achieving outstanding results.", "In this work, we formulate a new weakly supervised domain generalization approach for visual recognition by using loosely labeled web images videos as training data. Specifically, we aim to address two challenging issues when learning robust classifiers: 1) coping with noise in the labels of training web images videos in the source domain; and 2) enhancing generalization capability of learnt classifiers to any unseen target domain. To address the first issue, we partition the training samples in each class into multiple clusters. By treating each cluster as a “bag” and the samples in each cluster as “instances”, we formulate a multi-instance learning (MIL) problem by selecting a subset of training samples from each training bag and simultaneously learning the optimal classifiers based on the selected samples. To address the second issue, we assume the training web images videos may come from multiple hidden domains with different data distributions. 
We then extend our MIL formulation to learn one classifier for each class and each latent domain such that multiple classifiers from each class can be effectively integrated to achieve better generalization capability. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our new approach for visual recognition by learning from web data.", "Most current approaches to recognition aim to be scale-invariant. However, the cues available for recognizing a 300 pixel tall object are qualitatively different from those for recognizing a 3 pixel tall object. We argue that for sensors with finite resolution, one should instead use scale-variant, or multiresolution representations that adapt in complexity to the size of a putative detection window. We describe a multiresolution model that acts as a deformable part-based model when scoring large instances and a rigid template with scoring small instances. We also examine the interplay of resolution and context, and demonstrate that context is most helpful for detecting low-resolution instances when local models are limited in discriminative power. We demonstrate impressive results on the Caltech Pedestrian benchmark, which contains object instances at a wide range of scales. Whereas recent state-of-the-art methods demonstrate missed detection rates of 86 -37 at 1 false-positive-per-image, our multiresolution model reduces the rate to 29 ." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Domain Adaptation (DA) attempts to compensate for bias by adapting a model constructed on one domain to a target domain using examples from that new domain, e.g. @cite_8 . DA has been used in the cross-depiction problem with both non-neural @cite_17 and neural algorithms, such as the Domain Separation Network (DSN) @cite_22 .
{ "cite_N": [ "@cite_22", "@cite_17", "@cite_8" ], "mid": [ "2287612586", "2531958295", "2963864946", "2104068492" ], "abstract": [ "Domain Adaptation (DA) techniques aim at enabling machine learning methods learn effective classifiers for a \"target\" domain when the only available training data belongs to a different \"source\" domain. In this paper we present the Distributional Correspondence Indexing (DCI) method for domain adaptation in sentiment classification. DCI derives term representations in a vector space common to both domains where each dimension re ects its distributional correspondence to a pivot, i.e., to a highly predictive term that behaves similarly across domains. Term correspondence is quantified by means of a distributional correspondence function (DCF). We propose a number of efficient DCFs that are motivated by the distributional hypothesis, i.e., the hypothesis according to which terms with similar meaning tend to have similar distributions in text. Experiments show that DCI obtains better performance than current state-of-the-art techniques for cross-lingual and cross-domain sentiment classification. DCI also brings about a significantly reduced computational cost, and requires a smaller amount of human intervention. As a final contribution, we discuss a more challenging formulation of the domain adaptation problem, in which both the cross-domain and cross-lingual dimensions are tackled simultaneously.", "Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem occurring when the distributions of training (source domain) and test (target domain) data are similar but different. This kind of learning paradigm is of vital importance for future advances as it allows a learner to generalize the knowledge across different tasks. Current theoretical results show that the efficiency of DA algorithms depends on their capacity of minimizing the divergence between source and target probability distributions. In this paper, we provide a theoretical study on the advantages that concepts borrowed from optimal transportation theory [17] can bring to DA. In particular, we show that the Wasserstein metric can be used as a divergence measure between distributions to obtain generalization guarantees for three different learning settings: (i) classic DA with unsupervised target data (ii) DA combining source and target labeled data, (iii) multiple source DA. Based on the obtained results, we motivate the use of the regularized optimal transport and provide some algorithmic insights for multi-source domain adaptation. We also show when this theoretical analysis can lead to tighter inequalities than those of other existing frameworks. We believe that these results open the door to novel ideas and directions for DA.", "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. 
Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.", "In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Recently, Domain Generalisation (DG) approaches have gained attention. These differ from DA in that DG algorithms have no access to the target domain. General approaches include learning domain-invariant representations, or deriving domain-agnostic classifiers by assuming individual domains' classifiers consist of domain-specific and domain-agnostic components and then extracting the latter @cite_26 . Examples of relevance here are Domain Multi-Task Auto Encoders (D-MTAE) @cite_19 and the "Deeper-Broader-Artier" network (DBA-DG) @cite_31 . Most recently, MetaReg @cite_0 and MLDG @cite_7 exhibit state-of-the-art performance on the PACS dataset.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_0", "@cite_19", "@cite_31" ], "mid": [ "2763549966", "2951454049", "1920962657", "2963864946" ], "abstract": [ "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.", "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.", "The problem of domain generalization is to take knowledge acquired from a number of related domains, where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. The algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. 
It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization.", "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin." ] }
1907.12707
2966474827
Multiple-antenna backscatter is emerging as a promising approach to offer high communication performance for the data-intensive applications of ambient backscatter communications (AmBC). Although much has been understood about multiple-antenna backscatter in conventional backscatter communications (CoBC), existing analytical models cannot be directly applied to AmBC due to the structural differences in RF source and tag circuit designs. This paper takes the first step to fill the gap, by exploring the use of spatial modulation (SM) in AmBC whenever tags are equipped with multiple antennas. Specifically, we present a practical multiple-antenna backscatter design for AmBC that exempts tags from the inter-antenna synchronization and mutual coupling problems while ensuring high spectral efficiency and ultra-low power consumption. We obtain an optimal detector for the joint detection of both backscatter signal and source signal based on the maximum likelihood principle. We also design a two-step algorithm to derive bounds on the bit error rate (BER) of both signals. Simulation results validate the analysis and show that the proposed scheme can significantly improve the throughput compared with traditional systems.
Of the existing studies in AmBC, the majority has focused solely on multiple antennas at the reader, while ignoring the tags. To address the direct-link interference from the ambient RF source, multiple-antenna readers are proposed in @cite_9 @cite_1 for the joint detection of the legacy and backscatter systems. Previous work on multiple-antenna backscatter applied to AmBC systems is sparse. Although the achievable rate has been studied in AmBC with multiple-antenna tags @cite_1 , this work does not consider the time-synchronization and mutual-coupling problems between antennas. A blind detector for the reader to detect the backscatter signal is proposed in @cite_4 . That work focuses solely on the detection mechanism at the reader and does not improve the data rate. Departing from these studies, our work considers both the feasibility of real-world implementation and high spectral efficiency in AmBC with multiple-antenna tags.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_4" ], "mid": [ "2962732150", "2783633736", "2916694736", "2784331297" ], "abstract": [ "Ambient backscatter communication (AmBC) enables a passive backscatter device to transmit information to a reader using ambient RF signals, and has emerged as a promising solution to green Internet-of-Things (IoT). Conventional AmBC receivers are interested in recovering the information from the ambient backscatter device (A-BD) only. In this paper, we propose a cooperative AmBC (CABC) system in which the reader recovers information not only from the A-BD, but also from the RF source. We first establish the system model for the CABC system from spread spectrum and spectrum sharing perspectives. Then, for flat fading channels, we derive the optimal maximum-likelihood (ML) detector, suboptimal linear detectors as well as successive interference-cancellation (SIC) based detectors. For frequency-selective fading channels, the system model for the CABC system over ambient orthogonal frequency division multiplexing carriers is proposed, upon which a low-complexity optimal ML detector is derived. For both kinds of channels, the bit-error-rate expressions for the proposed detectors are derived in closed forms. Finally, extensive numerical results have shown that, when the A-BD signal and the RF-source signal have equal symbol period, the proposed SIC-based detectors can achieve near-ML detection performance for typical application scenarios, and when the A-BD symbol period is longer than the RF-source symbol period, the existence of backscattered signal in the CABC system can enhance the ML detection performance of the RF-source signal, thanks to the beneficial effect of the backscatter link when the A-BD transmits at a lower rate than the RF source.", "Ambient backscatter communication (AmBC) enables a tag to modulate its information bits over ambient RF carriers by intentionally changing its reflection coefficient, thus has emerged as a promising technique to achieve green communications for future Internet-of-Things. In this paper, we model a cooperative AmBC system from a spectrum- sharing perspective, where a cooperative receiver (C-RX) decodes the information from both a multi-antenna primary transmitter (PT) and a single-antenna secondary transmitter (i.e., tag). We consider two scenarios: first, the tag-symbol period equals the PT-symbol period; second, the tag-symbol period is an integer multiple of the PT-symbol period. For each scenario, we analyze the data rate via successive- interference-cancellation (SIC) based decoding, and formulate a problem to maximize the sum rate by optimizing the beamforming vector at the PT. The problems are transformed into semi-definite programming (SDP), and solved by using the technique of semi-definite relaxation (SDR). Furthermore, a novel transmit beamforming structure is proposed to reduce the computational complexity of beamforming optimization. Numerical results show that the cooperative AmBC system can achieve a higher sum rate than a conventional point-to-point system without a backscatter tag.", "Recently, ambient backscatter that utilizes surrounding radio frequency (RF) signals for both power and communications, has attracted vast interest since it can free sensors and tags from batteries and has extensive applications in Internet of Things (IoT), Existing studies about ambient backscatter often assume single antenna for each tag. 
Actually, as we show in this paper, equipping tags with multiple antennas can enlarge communication distance, enhance detection performance, and thus be practically useful. One key challenge of using multiple-antenna tags is the signal detection at the reader because the tag may have limited power and can transmit few training symbols. Therefore, in this paper, we design a blind detector based on F-test for the reader to recover tag signals without any knowledge of RF signals power, noise variance and all channel state information (CSI). Furthermore, we derive the lower and upper bounds of detection probabilities, and its exact expression in a special case. The optimal antenna selection scheme is also proposed to maximize the detection probability. Finally, simulation results are provided to corroborate our theoretical studies.", "This paper considers the massive connectivity application in which a large number of devices communicate with a base-station (BS) in a sporadic fashion. Device activity detection and channel estimation are central problems in such a scenario. Due to the large number of potential devices, the devices need to be assigned non-orthogonal signature sequences. The main objective of this paper is to show that by using random signature sequences and by exploiting sparsity in the user activity pattern, the joint user detection and channel estimation problem can be formulated as a compressed sensing single measurement vector (SMV) or multiple measurement vector (MMV) problem depending on whether the BS has a single antenna or multiple antennas and efficiently solved using an approximate message passing (AMP) algorithm. This paper proposes an AMP algorithm design that exploits the statistics of the wireless channel and provides an analytical characterization of the probabilities of false alarm and missed detection via state evolution. We consider two cases depending on whether or not the large-scale component of the channel fading is known at the BS and design the minimum mean squared error denoiser for AMP according to the channel statistics. Simulation results demonstrate the substantial advantage of exploiting the channel statistics in AMP design; however, knowing the large-scale fading component does not appear to offer tangible benefits. For the multiple-antenna case, we employ two different AMP algorithms, namely the AMP with vector denoiser and the parallel AMP-MMV, and quantify the benefit of deploying multiple antennas." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME and we propose a hypothesis test based on it.
@cite_14 introduced an index inspired by the misclassification error used in supervised learning. Consider that one of the two compared clusterings ( @math for instance) corresponds to the true labels of each observation and the other clustering ( @math ) to the predicted ones. The supervised classification error may be computed for all the possible permutations of the predicted labels (in @math ), and the maximum error over all the permutations may be taken. Thus the classification error for comparing both partitions may be written as
{ "cite_N": [ "@cite_14" ], "mid": [ "2013172302", "2126022166", "2407221801", "2963185791" ], "abstract": [ "Classifying a set of objects into clusters can be done in numerous ways, producing different results. They can be visually compared using contingency tables [27], mosaicplots [13], fluctuation diagrams [15], tableplots [20] , (modified) parallel coordinates plots [28], Parallel Sets plots [18] or circos diagrams [19]. Unfortunately the interpretability of all these graphical displays decreases rapidly with the numbers of categories and clusterings. In his famous book A Semiology of Graphics [5] Bertin writes “the discovery of an ordered concept appears as the ultimate point in logical simplification since it permits reducing to a single instant the assimilation of series which previously required many instants of study”. Or in more everyday language, if you use good orderings you can see results immediately that with other orderings might take a lot of effort. This is also related to the idea of effect ordering [12], that data should be organised to reflect the effect you want to observe. This paper presents an efficient algorithm based on Bertin's idea and concepts related to Kendall's t [17], which finds informative joint orders for two or more nominal classification variables. We also show how these orderings improve the various displays and how groups of corresponding categories can be detected using a top-down partitioning algorithm. Different clusterings based on data on the environmental performance of cars sold in Germany are used for illustration. All presented methods are available in the R package extracat which is used to compute the optimized orderings for the example dataset.", "This paper studies two-class (or binary) classification of elements X in R k that allows for a reject option. Based on n independent copies of the pair of random variables (X,Y ) with X 2 R k and Y 2 0,1 , we consider classifiers f(X) that render three possible outputs: 0, 1 and R. The option R expresses doubt and is to be used for few observations that are hard to classify in an automatic way. Chow (1970) derived the optimal rule minimizing the risk P f(X) 6 Y, f(X) 6 R + dP f(X) = R . This risk function subsumes that the cost of making a wrong decision equals 1 and that of utilizing the reject option is d. We show that the classification problem hinges on the behavior of the regression function (x) = E(Y |X = x) near d and 1 d. (Here d 2 [0,1 2] as the other cases turn out to be trivial.) Classification rules can be categorized into plug-in estimators and empirical risk minimizers. Both types are considered here and we prove that the rates of convergence of the risk of any estimate depends on P | (X) d| + P | (X) (1 d)| and on the quality of the estimate for or an appropriate measure of the size of the class of classifiers, in case of plug-in rules and empirical risk minimizers, respectively. We extend the mathematical framework even further by dierentiating between costs associated with the two possible errors: predicting f(X) = 0 whilst Y = 1 and predicting f(X) = 1 whilst Y = 0. Such situations are common in, for instance, medical studies where misclassifying a sick patient as healthy is worse than the opposite.", "We consider the problem of clustering partially labeled data from a minimal number of randomly chosen pairwise comparisons between the items. 
We introduce an efficient local algorithm based on a power iteration of the non-backtracking operator and study its performance on a simple model. For the case of two clusters, we give bounds on the classification error and show that a small error can be achieved from @math randomly chosen measurements, where @math is the number of items in the dataset. Our algorithm is therefore efficient both in terms of time and space complexities. We also investigate numerically the performance of the algorithm on synthetic and real world data.", "Multiclass classification problems such as image annotation can involve a large number of classes. In this context, confusion between classes can occur, and single label classification may be misleading. We provide in the present paper a general device that, given an unlabeled dataset and a score function defined as the minimizer of some empirical and convex risk, outputs a set of class labels, instead of a single one. Interestingly, this procedure does not require that the unlabeled dataset explores the whole classes. Even more, the method is calibrated to control the expected size of the output set while minimizing the classification risk. We show the statistical optimality of the procedure and establish rates of convergence under the Tsybakov margin condition. It turns out that these rates are linear on the number of labels. We apply our methodology to convex aggregation of confidence sets based on the V-fold cross validation principle also known as the superlearning principle. We illustrate the numerical performance of the procedure on real data and demonstrate in particular that with moderate expected size, w.r.t. the number of labels, the procedure provides significant improvement of the classification risk." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME and we propose a hypothesis test based on it.
where @math is an injective mapping of @math into @math ( @cite_4 ). The @math index may be costly to compute when the number of clusters is large. A polynomial-time algorithm has been proposed by @cite_10 to compute it efficiently. We will study the distributional properties of this index in the next section.
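As a rough illustration of this kind of matching computation (a minimal sketch, not the algorithm of @cite_10 itself), the Python snippet below builds the contingency table of two label vectors and finds the best cluster correspondence with a Hungarian-style solver. It assumes the common convention in which the error is minimised (agreement maximised) over injective mappings; the function name and the example data are purely illustrative.

```python
# Illustrative sketch: matching error between two partitions via the best
# injective mapping of the clusters of Q onto the clusters of P.
# Uses scipy's Hungarian-style solver as one polynomial-time option.
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_error(labels_p, labels_q):
    labels_p, labels_q = np.asarray(labels_p), np.asarray(labels_q)
    p_ids, p_inv = np.unique(labels_p, return_inverse=True)
    q_ids, q_inv = np.unique(labels_q, return_inverse=True)
    # Contingency table: points shared by cluster i of P and cluster j of Q.
    contingency = np.zeros((len(p_ids), len(q_ids)), dtype=int)
    np.add.at(contingency, (p_inv, q_inv), 1)
    # Maximise total agreement over injective cluster correspondences.
    rows, cols = linear_sum_assignment(-contingency)
    agreement = contingency[rows, cols].sum()
    return 1.0 - agreement / labels_p.size

# Two partitions of six points that agree up to a relabelling -> error 0.0
print(matching_error([0, 0, 1, 1, 2, 2], [2, 2, 0, 0, 1, 1]))
```

When the two partitions have different numbers of clusters, the solver simply matches as many clusters as possible, which is what an injective mapping from the smaller to the larger label set amounts to.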
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "1653737407", "2018165630", "2901801762", "2620416450" ], "abstract": [ "Index Coding has received considerable attention recently motivated in part by real-world applications and in part by its connection to Network Coding. The basic setting of Index Coding encodes the problem input as an undirected graph and the fundamental parameter is the broadcast rate @math , the average communication cost per bit for sufficiently long messages (i.e. the non-linear vector capacity). Recent nontrivial bounds on @math were derived from the study of other Index Coding capacities (e.g. the scalar capacity @math ) by Bar- (2006), Lubetzky and Stav (2007) and (2008). However, these indirect bounds shed little light on the behavior of @math : there was no known polynomial-time algorithm for approximating @math in a general network to within a nontrivial (i.e. @math ) factor, and the exact value of @math remained unknown for any graph where Index Coding is nontrivial. Our main contribution is a direct information-theoretic analysis of the broadcast rate @math using linear programs, in contrast to previous approaches that compared @math with graph-theoretic parameters. This allows us to resolve the aforementioned two open questions. We provide a polynomial-time algorithm with a nontrivial approximation ratio for computing @math in a general network along with a polynomial-time decision procedure for recognizing instances with @math . In addition, we pinpoint @math precisely for various classes of graphs (e.g. for various Cayley graphs of cyclic groups) thereby simultaneously improving the previously known upper and lower bounds for these graphs. Via this approach we construct graphs where the difference between @math and its trivial lower bound is linear in the number of vertices and ones where @math is uniformly bounded while its upper bound derived from the naive encoding scheme is polynomially worse.", "We consider @math -median clustering in finite metric spaces and @math -means clustering in Euclidean spaces, in the setting where @math is part of the input (not a constant). For the @math -means problem, show that if the optimal @math -means clustering of the input is more expensive than the optimal @math -means clustering by a factor of @math , then one can achieve a @math -approximation to the @math -means optimal in time polynomial in @math and @math by using a variant of Lloyd's algorithm. In this work we substantially improve this approximation guarantee. We show that given only the condition that the @math -means optimal is more expensive than the @math -means optimal by a factor @math for some constant @math , we can obtain a PTAS. In particular, under this assumption, for any @math we achieve a @math -approximation to the @math -means optimal in time polynomial in @math and @math , and exponential in @math and @math . We thus decouple the strength of the assumption from the quality of the approximation ratio. We also give a PTAS for the @math -median problem in finite metrics under the analogous assumption as well. For @math -means, we in addition give a randomized algorithm with improved running time of @math . Our technique also obtains a PTAS under the assumption of that all @math approximations are @math -close to a desired target clustering, in the case that all target clusters have size greater than @math and @math is constant. 
Note that the motivation of is that for many clustering problems, the objective function is only a proxy for the true goal of getting close to the target. From this perspective, our improvement is that for @math -means in Euclidean spaces we reduce the distance of the clustering found to the target from @math to @math when all target clusters are large, and for @math -median we improve the largeness'' condition needed in the work of to get exactly @math -close from @math to @math . Our results are based on a new notion of clustering stability.", "Clustering is a fundamental tool in data mining. It partitions points into groups (clusters) and may be used to make decisions for each point based on its group. However, this process may harm protected (minority) classes if the clustering algorithm does not adequately represent them in desirable clusters -- especially if the data is already biased. At NIPS 2017, proposed a model for fair clustering requiring the representation in each cluster to (approximately) preserve the global fraction of each protected class. Restricting to two protected classes, they developed both a 4-approximation for the fair @math -center problem and a @math -approximation for the fair @math -median problem, where @math is a parameter for the fairness model. For multiple protected classes, the best known result is a 14-approximation for fair @math -center. We extend and improve the known results. Firstly, we give a 5-approximation for the fair @math -center problem with multiple protected classes. Secondly, we propose a relaxed fairness notion under which we can give bicriteria constant-factor approximations for all of the classical clustering objectives @math -center, @math -supplier, @math -median, @math -means and facility location. The latter approximations are achieved by a framework that takes an arbitrary existing unfair (integral) solution and a fair (fractional) LP solution and combines them into an essentially fair clustering with a weakly supervised rounding scheme. In this way, a fair clustering can be established belatedly, in a situation where the centers are already fixed.", "Indexing highly repetitive texts --- such as genomic databases, software repositories and versioned text collections --- has become an important problem since the turn of the millennium. A relevant compressibility measure for repetitive texts is @math , the number of runs in their Burrows-Wheeler Transform (BWT). One of the earliest indexes for repetitive collections, the Run-Length FM-index, used @math space and was able to efficiently count the number of occurrences of a pattern of length @math in the text (in loglogarithmic time per pattern symbol, with current techniques). However, it was unable to locate the positions of those occurrences efficiently within a space bounded in terms of @math . Since then, a number of other indexes with space bounded by other measures of repetitiveness --- the number of phrases in the Lempel-Ziv parse, the size of the smallest grammar generating the text, the size of the smallest automaton recognizing the text factors --- have been proposed for efficiently locating, but not directly counting, the occurrences of a pattern. In this paper we close this long-standing problem, showing how to extend the Run-Length FM-index so that it can locate the @math occurrences efficiently within @math space (in loglogarithmic time each), and reaching optimal time @math within @math space, on a RAM machine of @math bits. 
Within @math space, our index can also count in optimal time @math . Raising the space to @math , we support count and locate in @math and @math time, which is optimal in the packed setting and had not been obtained before in compressed space. We also describe a structure using @math space that replaces the text and extracts any text substring of length @math in almost-optimal time @math . (...continues...)" ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME and we propose a hypothesis test based on it.
The entropy of a partition @math is defined by @math where @math is the estimate of the probability that an element is in cluster @math . The mutual information can be used to measure the independence of two partitions @math and @math . It is given by: @math , where @math is the estimate of the probability that an element belongs to cluster @math of @math and @math of @math . Mutual information is a metric over the space of all clusterings, but its value is not bounded, which makes it difficult to interpret. As @math , other bounded indices have been proposed, such as the normalized mutual information ( @cite_2 , @cite_5 ), where @math is divided either by the arithmetic or the geometric mean of the clustering entropies. Meila ( @cite_7 ) has also proposed an index based on mutual information, called the Variation of Information.
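The quantities above translate directly into code. The following minimal sketch (plug-in estimates only; variable names are illustrative) computes the partition entropy, the mutual information of two partitions, and a normalised variant obtained by dividing by the geometric mean of the entropies; scikit-learn's normalized_mutual_info_score provides a library implementation of the same idea.

```python
# Minimal plug-in estimates of partition entropy, mutual information and a
# normalised mutual information (geometric-mean normalisation).
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mutual_information(labels_p, labels_q):
    labels_p, labels_q = np.asarray(labels_p), np.asarray(labels_q)
    mi = 0.0
    for i in np.unique(labels_p):
        for j in np.unique(labels_q):
            p_ij = np.mean((labels_p == i) & (labels_q == j))
            if p_ij > 0:
                mi += p_ij * np.log(
                    p_ij / (np.mean(labels_p == i) * np.mean(labels_q == j)))
    return mi

def normalised_mutual_information(labels_p, labels_q):
    # Assumes neither partition is a single cluster (non-zero entropies).
    return mutual_information(labels_p, labels_q) / np.sqrt(
        entropy(labels_p) * entropy(labels_q))

p = [0, 0, 1, 1, 2, 2]
q = [0, 0, 0, 1, 1, 1]
print(mutual_information(p, q), normalised_mutual_information(p, q))
```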
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_2" ], "mid": [ "2205170224", "2002359780", "1825700702", "160196575" ], "abstract": [ "It was recently shown that estimating the Shannon entropy @math of a discrete @math -symbol distribution @math requires @math samples, a number that grows near-linearly in the support size. In many applications @math can be replaced by the more general R 'enyi entropy of order @math , @math . We determine the number of samples needed to estimate @math for all @math , showing that @math requires a near-linear @math samples, but, perhaps surprisingly, integer @math requires only @math samples. Furthermore, developing on a recently established connection between polynomial approximation and estimation of additive functions of the form @math , we reduce the sample complexity for noninteger values of @math by a factor of @math compared to the empirical estimator. The estimators achieving these bounds are simple and run in time linear in the number of samples. Our lower bounds provide explicit constructions of distributions with different R 'enyi entropies that are hard to distinguish.", "Given a metric space @math , @math , @math , and @math , a distribution over mappings @math is called a @math -sensitive hash family if any two points in @math at distance at most @math are mapped by @math to the same value with probability at least @math , and any two points at distance greater than @math are mapped by @math to the same value with probability at most @math . This notion was introduced by Indyk and Motwani in 1998 as the basis for an efficient approximate nearest neighbor search algorithm and has since been used extensively for this purpose. The performance of these algorithms is governed by the parameter @math , and constructing hash families with small @math automatically yields improved nearest neighbor algorithms. Here we show that for @math it is impossible to achieve @math . This almost matches the construction of Indyk and Motwani which achieves @math .", "The binary symmetric stochastic block model deals with a random graph of @math vertices partitioned into two equal-sized clusters, such that each pair of vertices is connected independently with probability @math within clusters and @math across clusters. In the asymptotic regime of @math and @math for fixed @math and @math , we show that the semidefinite programming relaxation of the maximum likelihood estimator achieves the optimal threshold for exactly recovering the partition from the graph with probability tending to one, resolving a conjecture of Abbe14 . Furthermore, we show that the semidefinite programming relaxation also achieves the optimal recovery threshold in the planted dense subgraph model containing a single cluster of size proportional to @math .", "We study theoretically and numerically the entanglement entropy of the @math -dimensional free fermions whose one body Hamiltonian is the Anderson model. Using basic facts of the exponential Anderson localization, we show first that the disorder averaged entanglement entropy @math of the @math dimension cube @math of side length @math admits the area law scaling @math even in the gapless case, thereby manifesting the area law in the mean for our model. For @math and @math we obtain then asymptotic bounds for the entanglement entropy of typical realizations of disorder and use them to show that the entanglement entropy is not selfaveraging, i.e., has non vanishing random fluctuations even if @math ." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME and we propose a hypothesis test based on it.
In @cite_19 and @cite_13 several indices are compared on artificially simulated partitions with various configurations: partitions that are balanced or unbalanced, dependent or independent, and with varying numbers of clusters. They show that the indices based on set overlaps perform better than those based on counting pairs and mutual information. Moreover, most indices are not appropriate when the clusters in the partitions are imbalanced.
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2103894895", "2148781711", "2090634555", "22730673" ], "abstract": [ "For highly imbalanced data sets, almost all the instances are labeled as one class, whereas far fewer examples are labeled as the other classes. In this paper, we present an empirical comparison of seven different clustering evaluation indices when used to assess partitions generated from highly imbalanced data sets. Some of the metrics are based on matching of sets (F-measure), information theory (normalized mutual information and adjusted mutual information), and pair of objects counting (Rand and adjusted Rand indices). We also investigate the BCubed metric, which takes into account the concepts of recall, precision, as well as counting pairs. Furthermore, in order to avoid the class size imbalance effect, we propose a modification to the Rand index, referred to as the normalized class size Rand (NCR) index. In terms of results, apart from NCR, our experiments indicate that all the other analyzed indices are not able to deal properly with the problem of class size imbalance.", "Cluster recovery indices are more important than ever, because of the necessity for comparing the large number of clustering procedures available today. Of the cluster recovery indices prominent in contemporary literature, the Hubert and Arabie (1985) adjustment to the Rand index (1971) has been demonstrated to have the most desirable properties (Milligan & Cooper, 1986). However, use of the Hubert and Arabie adjustment to the Rand index is limited to cluster solutions involving non-overlapping, or disjoint, clusters. The present paper introduces a generalization of the Hubert and Arabie adjusted Rand index. This generalization, called the Omega index, can be applied to situations where both, one, or neither of the solutions being compared is non-disjoint. In the special case where both solutions are disjoint, the Omega index is equivalent to the Hubert and Arabie adjusted Rand index.", "Five external criteria were used to evaluate the extent of recovery of the true structure in a hierarchical clustering solution. This was accomplished by comparing the partitions produced by the clustering algorithm with the partition that indicates the true cluster structure known to exist in the data. The five criteria examined were the Rand, the Morey and Agresti adjusted Rand, the Hubert and Arabie adjusted Rand, the Jaccard, and the Fowlkes and Mallows measures. The results of the study indicated that the Hubert and Arabie adjusted Rank index was best suited to the task of comparison across hierarchy levels. Deficiencies with the other measures are noted.", "In this paper, we introduce a fuzzy extension of the Rand index, a well-known measure for comparing two clustering structures. In contrast to an existing proposal, which is restricted to the comparison of a fuzzy partition with a non-fuzzy reference partition, our extension is able to compare two proper fuzzy partitions with each other. Elaborating on the formal properties of our fuzzy Rand index, we show that it exhibits desirable metrical properties." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME and we propose a hypothesis test based on it.
@cite_6 study the behavior of the Rand, Adjusted Rand, Jaccard and Fowlkes-Mallows indices. They compare the partitions produced by hierarchical algorithms with the true partitions, varying the number of groups on a sample of 50 observations, and conclude that the adjusted Rand index seems to be the most appropriate for clustering validation in this context. Similar simulations and results are given in @cite_3 and @cite_8 using @math -means.
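To make the kind of comparison described above concrete, here is a small example (synthetic data, arbitrary sample size and linkage; not a reproduction of the cited experiments) that clusters a toy sample hierarchically and scores the recovered partition against the true one with several external indices available in scikit-learn.

```python
# Illustrative comparison of external indices on a toy hierarchical clustering.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import (adjusted_rand_score,
                             fowlkes_mallows_score,
                             normalized_mutual_info_score)

# 50 points drawn from 3 well-separated blobs (arbitrary illustrative choice).
X, y_true = make_blobs(n_samples=50, centers=3, random_state=0)
y_pred = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)

print("Adjusted Rand  :", adjusted_rand_score(y_true, y_pred))
print("Fowlkes-Mallows:", fowlkes_mallows_score(y_true, y_pred))
print("NMI            :", normalized_mutual_info_score(y_true, y_pred))
```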
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_8" ], "mid": [ "2090634555", "2103894895", "22730673", "2148781711" ], "abstract": [ "Five external criteria were used to evaluate the extent of recovery of the true structure in a hierarchical clustering solution. This was accomplished by comparing the partitions produced by the clustering algorithm with the partition that indicates the true cluster structure known to exist in the data. The five criteria examined were the Rand, the Morey and Agresti adjusted Rand, the Hubert and Arabie adjusted Rand, the Jaccard, and the Fowlkes and Mallows measures. The results of the study indicated that the Hubert and Arabie adjusted Rank index was best suited to the task of comparison across hierarchy levels. Deficiencies with the other measures are noted.", "For highly imbalanced data sets, almost all the instances are labeled as one class, whereas far fewer examples are labeled as the other classes. In this paper, we present an empirical comparison of seven different clustering evaluation indices when used to assess partitions generated from highly imbalanced data sets. Some of the metrics are based on matching of sets (F-measure), information theory (normalized mutual information and adjusted mutual information), and pair of objects counting (Rand and adjusted Rand indices). We also investigate the BCubed metric, which takes into account the concepts of recall, precision, as well as counting pairs. Furthermore, in order to avoid the class size imbalance effect, we propose a modification to the Rand index, referred to as the normalized class size Rand (NCR) index. In terms of results, apart from NCR, our experiments indicate that all the other analyzed indices are not able to deal properly with the problem of class size imbalance.", "In this paper, we introduce a fuzzy extension of the Rand index, a well-known measure for comparing two clustering structures. In contrast to an existing proposal, which is restricted to the comparison of a fuzzy partition with a non-fuzzy reference partition, our extension is able to compare two proper fuzzy partitions with each other. Elaborating on the formal properties of our fuzzy Rand index, we show that it exhibits desirable metrical properties.", "Cluster recovery indices are more important than ever, because of the necessity for comparing the large number of clustering procedures available today. Of the cluster recovery indices prominent in contemporary literature, the Hubert and Arabie (1985) adjustment to the Rand index (1971) has been demonstrated to have the most desirable properties (Milligan & Cooper, 1986). However, use of the Hubert and Arabie adjustment to the Rand index is limited to cluster solutions involving non-overlapping, or disjoint, clusters. The present paper introduces a generalization of the Hubert and Arabie adjusted Rand index. This generalization, called the Omega index, can be applied to situations where both, one, or neither of the solutions being compared is non-disjoint. In the special case where both solutions are disjoint, the Omega index is equivalent to the Hubert and Arabie adjusted Rand index." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance by a user study. Furthermore, we compare a variety of 2D and 3D methods such as classical approaches like Fourier analysis with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D a 3D extension of Inception-ResNet-v2. In a 10 fold cross-validation our two stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human comparable accuracy.
Currently neural networks are state-of-the-art in the field of image data classification (e.g. ImageNet @cite_20 ). A variety of neural networks have emerged over the years @cite_3 @cite_0 @cite_23 @cite_11 @cite_15 @cite_14 . These networks started with simple architectures (e.g. VGG-16 @cite_0 ) and integrated new structural elements like residual @cite_23 and inception blocks @cite_15 as these were developed and proved their superior performance. This development led to an increase of the top-1 accuracy on the ImageNet test set from 71.3 % for VGG-16 to 80.3 % for Inception-ResNet-v2, while the depth, and thereby the complexity, increased from 23 to 572 layers (values are based on the reference implementations in Keras, https://keras.io/applications).
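As a minimal sketch (assuming a TensorFlow/Keras installation; the exact counts depend on the installed version and on how layers are counted), the reference implementations mentioned above can be loaded and inspected as follows.

```python
# Load two Keras reference implementations with ImageNet weights and compare
# their sizes; the reported counts vary with the Keras/TensorFlow version.
from tensorflow.keras.applications import VGG16, InceptionResNetV2

for ctor in (VGG16, InceptionResNetV2):
    model = ctor(weights="imagenet")
    print(ctor.__name__, "layers:", len(model.layers),
          "parameters:", model.count_params())
```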
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_0", "@cite_23", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "2964081807", "1845051632", "2336829997", "2963674932" ], "abstract": [ "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "We present a tree-structured network architecture for large-scale image classification. The trunk of the network contains convolutional layers optimized over all classes. 
At a given depth, the trunk splits into separate branches, each dedicated to discriminate a different subset of classes. Each branch acts as an expert classifying a set of categories that are difficult to tell apart, while the trunk provides common knowledge to all experts in the form of shared features. The training of our “network of experts” is completely end-to-end: the partition of categories into disjoint subsets is learned simultaneously with the parameters of the network trunk and the experts are trained jointly by minimizing a single learning objective over all classes. The proposed structure can be built from any existing convolutional neural network (CNN). We demonstrate its generality by adapting 4 popular CNNs for image categorization into the form of networks of experts. Our experiments on CIFAR100 and ImageNet show that in every case our method yields a substantial improvement in accuracy over the base CNN, and gives the best result achieved so far on CIFAR100. Finally, the improvement in accuracy comes at little additional cost: compared to the base network, the training time is only moderately increased and the number of parameters is comparable or in some cases even lower.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance by a user study. Furthermore, we compare a variety of 2D and 3D methods such as classical approaches like Fourier analysis with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D a 3D extension of Inception-ResNet-v2. In a 10 fold cross-validation our two stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human comparable accuracy.
Semantic segmentation assigns a class to every pixel in an image and thus extends the image-level classification problem. @cite_24 first proposed to use fully convolutional networks for semantic segmentation. U-Net @cite_25 is a semantic segmentation network designed for medical images. Semantic segmentation networks often consist of a downsampling and an upsampling part @cite_24 @cite_25 .
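The down-/upsampling structure mentioned above can be illustrated with a toy encoder-decoder (a minimal sketch in Keras, not the U-Net of @cite_25; input size, filter counts, depth and the single skip connection are arbitrary choices for the illustration).

```python
# Toy encoder-decoder for per-pixel prediction: one downsampling step,
# one upsampling step and a single skip connection.
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 1))
c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)   # encode
p1 = layers.MaxPooling2D(2)(c1)                                        # downsample
c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
u1 = layers.UpSampling2D(2)(c2)                                        # upsample
u1 = layers.concatenate([u1, c1])                                      # skip connection
c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)                # per-pixel output

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```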
{ "cite_N": [ "@cite_24", "@cite_25" ], "mid": [ "2557327399", "2789983685", "2963727650", "2124592697" ], "abstract": [ "Semantic segmentation requires a detailed labeling of image pixels by object category. Information derived from local image patches is necessary to describe the detailed shape of individual objects. However, this information is ambiguous and can result in noisy labels. Global inference of image content can instead capture the general semantic concepts present. We advocate that high-recall holistic inference of image concepts provides valuable information for detailed pixel labeling. We build a two-stream neural network architecture that facilitates information flow from holistic information to local pixels, while keeping common image features shared among the low-level layers of both the holistic analysis and segmentation branches. We empirically evaluate our network on four standard semantic segmentation datasets. Our network obtains state-of-the-art performance on PASCAL-Context and NYUDv2, and ablation studies verify its effectiveness on ADE20K and SIFT-Flow.", "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7 mIoU on PASCAL-Context, 85.9 mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpass the winning entry of COCO-Place Challenge in 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45 , which is comparable with state-of-the-art approaches with over 10 times more layers. The source code for the complete system are publicly available.", "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7 mIoU on PASCAL-Context, 85.9 mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. 
Our 14 layer network has achieved an error rate of 3.45 , which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available.", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance by a user study. Furthermore, we compare a variety of 2D and 3D methods such as classical approaches like Fourier analysis with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D a 3D extension of Inception-ResNet-v2. In a 10 fold cross-validation our two stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human comparable accuracy.
However, the current state-of-the-art approaches for image classification and semantic segmentation have two major drawbacks in the context of classifying uncertain local fiber orientations: our data are 3D, and the region borders are highly uncertain. Most research focuses on 2D data, although @cite_12 showed that it is beneficial to use the 3D information for organ segmentation. Networks like PointNet @cite_8 can classify 3D point clouds, yet they do not operate on dense 3D volumes such as ours. The network 3D-U-Net @cite_16 extends U-Net to 3D data and is typically used to segment 3D objects like organs @cite_16 . This addresses the first drawback, while the second one remains: objects with uncertain borders, like our fiber orientations, are not well represented.
{ "cite_N": [ "@cite_16", "@cite_12", "@cite_8" ], "mid": [ "2796040722", "2963053547", "2963336905", "2946747865" ], "abstract": [ "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at this http URL", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at http: www.merl.com research license#KCNet", "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. 
In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.", "This paper proposes RIU-Net (for Range-Image U-Net), the adaptation of a popular semantic segmentation network for the semantic segmentation of a 3D LiDAR point cloud. The point cloud is turned into a 2D range-image by exploiting the topology of the sensor. This image is then used as input to a U-net. This architecture has already proved its efficiency for the task of semantic segmentation of medical images. We demonstrate how it can also be used for the accurate semantic segmentation of a 3D LiDAR point cloud and how it represents a valid bridge between image processing and 3D point cloud processing. Our model is trained on range-images built from KITTI 3D object detection dataset. Experiments show that RIU-Net, despite being very simple, offers results that are comparable to the state-of-the-art of range-image based methods. Finally, we demonstrate that this architecture is able to operate at 90fps on a single GPU, which enables deployment for real-time segmentation." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance by a user study. Furthermore, we compare a variety of 2D and 3D methods such as classical approaches like Fourier analysis with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D a 3D extension of Inception-ResNet-v2. In a 10 fold cross-validation our two stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human comparable accuracy.
While 3D extensions of Inception-ResNet-v2 have been presented in @cite_2 @cite_22 , pretrained 2D weights are rarely used in such 3D networks. In parallel to our research, @cite_17 proposed a 2D-to-3D weight transfer strategy that is most similar to ours (see subsec:weight ).
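One common way to realize such a transfer (not necessarily the strategy of @cite_17 or of this paper) is I3D-style inflation: each pretrained 2D kernel is replicated along the new depth axis and rescaled so that a temporally constant input produces the same activation. A small NumPy sketch, with shapes and the function name chosen only for illustration:

```python
import numpy as np

def inflate_2d_to_3d(w2d, depth):
    """Inflate 2D conv weights (out_c, in_c, kH, kW) into 3D weights
    (out_c, in_c, depth, kH, kW) by replicating along the new axis and
    dividing by depth, so a constant-along-depth input gives the same response."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

w2d = np.random.randn(64, 3, 3, 3)     # e.g. first conv layer of a 2D ImageNet model
w3d = inflate_2d_to_3d(w2d, depth=3)   # -> shape (64, 3, 3, 3, 3)
```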
{ "cite_N": [ "@cite_17", "@cite_22", "@cite_2" ], "mid": [ "2964228333", "2962934715", "2770565591", "2178031510" ], "abstract": [ "This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available.", "The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine the architectures of various 3D CNNs from relatively shallow to very deep ones on current video datasets. Based on the results of those experiments, the following conclusions could be obtained: (i) ResNet-18 training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet but not for Kinetics. (ii) The Kinetics dataset has sufficient data for training of deep 3D CNNs, and enables training of up to 152 ResNets layers, interestingly similar to 2D ResNets on ImageNet. ResNeXt-101 achieved 78.4 average accuracy on the Kinetics test set. (iii) Kinetics pretrained simple 3D architectures outperforms complex 2D architectures, and the pretrained ResNeXt-101 achieved 94.5 and 70.2 on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various tasks in image. 
We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The codes and pretrained models used in this study are publicly available.", "The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine the architectures of various 3D CNNs from relatively shallow to very deep ones on current video datasets. Based on the results of those experiments, the following conclusions could be obtained: (i) ResNet-18 training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet but not for Kinetics. (ii) The Kinetics dataset has sufficient data for training of deep 3D CNNs, and enables training of up to 152 ResNets layers, interestingly similar to 2D ResNets on ImageNet. (iii) Kinetics pretrained simple 3D architectures outperforms complex 2D architectures, and the pretrained ResNeXt-101 achieved 94.5 and 70.2 on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various tasks in image. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The codes and pretrained models used in this study are publicly available.", "We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design process. This is a wasteful process in which each new model is trained from scratch. Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training that altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state of the art accuracy rating on the ImageNet dataset." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance by a user study. Furthermore, we compare a variety of 2D and 3D methods such as classical approaches like Fourier analysis with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D a 3D extension of Inception-ResNet-v2. In a 10 fold cross-validation our two stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human comparable accuracy.
Collagen structures in SHG images have been analyzed in several publications @cite_19 @cite_10 @cite_18 @cite_21 @cite_9 @cite_1 @cite_6 , covering both tissue @cite_19 and bones @cite_18 @cite_10 . @cite_1 presented how Fourier analysis can be used to investigate the orientation of collagen fibers. The Fourier analysis was extended from small regions to the whole scan in @cite_21 @cite_9 @cite_6 : small image parts are classified as anisotropic, isotropic or dark, and these classifications were used to calculate the distribution of the classes over an image. In @cite_9 these distributions were used to detect injured tendons. Moreover, @cite_21 showed that the change in distribution due to aging can be used to determine the age of pigs. @cite_6 used the 3D information of SHG data and could show an increase in performance.
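As a rough illustration of this kind of Fourier-based region classification, the sketch below labels a small patch as dark, anisotropic or isotropic from the angular energy distribution of its 2D power spectrum. The thresholds and the anisotropy criterion are placeholder assumptions, not those used in the cited FT-SHG works.

```python
import numpy as np

def classify_patch(patch, dark_thr=10.0, aniso_thr=2.0):
    """Label a 2D patch as 'dark', 'anisotropic' or 'isotropic' from the
    angular energy distribution of its Fourier power spectrum (illustrative)."""
    if patch.mean() < dark_thr:                              # barely any SHG signal
        return "dark"
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    h, w = power.shape
    yy, xx = np.mgrid[:h, :w]
    angle = np.arctan2(yy - h // 2, xx - w // 2) % np.pi     # orientation of each frequency
    hist, _ = np.histogram(angle, bins=18, weights=power)    # spectral energy per angular bin
    hist = hist / (hist.sum() + 1e-12)
    anisotropy = hist.max() * len(hist)                      # ~1.0 for a flat (isotropic) spectrum
    return "anisotropic" if anisotropy > aniso_thr else "isotropic"
```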
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_10" ], "mid": [ "1980068389", "2016266030", "2017323918", "2766462831" ], "abstract": [ "Abstract We propose the use of second-harmonic generation (SHG) microscopy for imaging collagen fibers in porcine femoral cortical bone. The technique is compared with scanning electron microscopy (SEM). SHG microscopy is shown to have excellent potential for bone imaging primarily due its intrinsic specificity to collagen fibers, which results in high contrast images without the need for specimen staining. Furthermore, this technique's ability to quantitatively assess collagen fiber organization is evaluated through an exploratory examination of bone structure as a function of age, from very young to mature bone. In particular, four different age groups: 1 month, 3.5 months, 6 months, and 30 months, were studied. Specifically, we employ the recently developed Fourier transform-second harmonic generation (FT-SHG) imaging technique for the quantification of the structural changes, and observe that as the bone develops, there is an overall reduction in porosity, the number of osteons increases, and the collagen fibers become comparatively more organized. It is also observed that the variations in structure across the whole cross-section of the bone increase with age. The results of this work show that quantitative SHG microscopy can serve as a valuable tool for evaluating the structural organization of collagen fibers in ex vivo bone studies.", "We present three-dimensional Fourier transform-second-harmonic generation (3D FT-SHG) imaging, a generalization of the previously reported two-dimensional FT-SHG, to quantify collagen fiber organization from 3D image stacks of biological tissues. The current implementation calculates 3D preferred orientation of a region of interest, and classifies regions of interest based on orientation anisotropy and average voxel intensity. Presented are some example applications of the technique which reveal the layered structure of collagen fibers in porcine sclera, and estimates the cut angle of porcine tendon tissues. This technique shows promising potential for studying biological tissues that contain fibrillar structures in 3D.", "Fourier transform-second harmonic generation (FT-SHG) imaging is used as a technique for evaluating collagenase-induced injury in horse tendons. The differences in collagen fiber organization between normal and injured tendon are quantified. Results indicate that the organization of collagen fibers is regularly oriented in normal tendons and randomly organized in injured tendons. This is further supported through the use of additional metrics, in particular, the number of dark (no minimal signal) and isotropic (no preferred fiber orientation) regions in the images, and the ratio of forward-to-backward second-harmonic intensity. FT-SHG microscopy is also compared with the conventional polarized light microscopy and is shown to be more sensitive to assessing injured tendons than the latter. Moreover, sample preparation artifacts that affect the quantitative evaluation of collagen fiber organization can be circumvented by using FT-SHG microscopy. The technique has potential as an assessment tool for evaluating the impact of various injuries that affect collagen fiber organization.", "Abstract Second-harmonic generation imaging (SHG) captures triple helical collagen molecules near tissue surfaces. 
Biomedical research routinely utilizes various imaging software packages to quantify SHG signals for collagen content and distribution estimates in modern tissue samples including bone. For the first time using SHG, samples of modern, medieval, and ice age bones were imaged to test the applicability of SHG to ancient bone from a variety of ages, settings, and taxa. Four independent techniques including Raman spectroscopy, FTIR spectroscopy, radiocarbon dating protocols, and mass spectrometry-based protein sequencing, confirm the presence of protein, consistent with the hypothesis that SHG imaging detects ancient bone collagen. These results suggest that future studies have the potential to use SHG imaging to provide new insights into the composition of ancient bone, to characterize ancient bone disorders, to investigate collagen preservation within and between various taxa, and to monitor collagen decay regimes in different depositional environments." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance by a user study. Furthermore, we compare a variety of 2D and 3D methods such as classical approaches like Fourier analysis with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D a 3D extension of Inception-ResNet-v2. In a 10 fold cross-validation our two stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human comparable accuracy.
@cite_13 claim to be the first to analyze SHG images with neural networks. They estimated the elastic properties of collagenous tissue; a classification or segmentation of fibers was not part of their investigation.
{ "cite_N": [ "@cite_13" ], "mid": [ "2016266030", "2755176028", "1980068389", "2766462831" ], "abstract": [ "We present three-dimensional Fourier transform-second-harmonic generation (3D FT-SHG) imaging, a generalization of the previously reported two-dimensional FT-SHG, to quantify collagen fiber organization from 3D image stacks of biological tissues. The current implementation calculates 3D preferred orientation of a region of interest, and classifies regions of interest based on orientation anisotropy and average voxel intensity. Presented are some example applications of the technique which reveal the layered structure of collagen fibers in porcine sclera, and estimates the cut angle of porcine tendon tissues. This technique shows promising potential for studying biological tissues that contain fibrillar structures in 3D.", "Abstract Biological collagenous tissues comprised of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples, and directly predict the tissue elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84 , and predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images. Statement of Significance In this study, we developed, to our best knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it also enables many potential applications such as serving as a quality control tool to select tissue for the manufacturing of medical devices (e.g. bioprosthetic heart valves).", "Abstract We propose the use of second-harmonic generation (SHG) microscopy for imaging collagen fibers in porcine femoral cortical bone. The technique is compared with scanning electron microscopy (SEM). SHG microscopy is shown to have excellent potential for bone imaging primarily due its intrinsic specificity to collagen fibers, which results in high contrast images without the need for specimen staining. Furthermore, this technique's ability to quantitatively assess collagen fiber organization is evaluated through an exploratory examination of bone structure as a function of age, from very young to mature bone. In particular, four different age groups: 1 month, 3.5 months, 6 months, and 30 months, were studied. 
Specifically, we employ the recently developed Fourier transform-second harmonic generation (FT-SHG) imaging technique for the quantification of the structural changes, and observe that as the bone develops, there is an overall reduction in porosity, the number of osteons increases, and the collagen fibers become comparatively more organized. It is also observed that the variations in structure across the whole cross-section of the bone increase with age. The results of this work show that quantitative SHG microscopy can serve as a valuable tool for evaluating the structural organization of collagen fibers in ex vivo bone studies.", "Abstract Second-harmonic generation imaging (SHG) captures triple helical collagen molecules near tissue surfaces. Biomedical research routinely utilizes various imaging software packages to quantify SHG signals for collagen content and distribution estimates in modern tissue samples including bone. For the first time using SHG, samples of modern, medieval, and ice age bones were imaged to test the applicability of SHG to ancient bone from a variety of ages, settings, and taxa. Four independent techniques including Raman spectroscopy, FTIR spectroscopy, radiocarbon dating protocols, and mass spectrometry-based protein sequencing, confirm the presence of protein, consistent with the hypothesis that SHG imaging detects ancient bone collagen. These results suggest that future studies have the potential to use SHG imaging to provide new insights into the composition of ancient bone, to characterize ancient bone disorders, to investigate collagen preservation within and between various taxa, and to monitor collagen decay regimes in different depositional environments." ] }
1907.12400
2965074212
In light of the rising demand for biometric-authentication systems, preventing face spoofing attacks is a critical issue for the safe deployment of face recognition systems. Here, we propose an efficient liveness detection algorithm that requires minimal hardware and only a small database, making it suitable for resource-constrained devices such as mobile phones. Utilizing one monocular visible light camera, the proposed algorithm takes two facial photos, one taken with a flash, the other without a flash. The proposed @math descriptor is constructed by leveraging two types of reflection: (i) specular reflections from the iris region that have a specific intensity distribution depending on liveness, and (ii) diffuse reflections from the entire face region that represents the 3D structure of a subject's face. Classifiers trained with @math descriptor outperforms other flash-based liveness detection algorithms on both an in-house database and on publicly available NUAA and Replay-Attack databases. Moreover, the proposed algorithm achieves comparable accuracy to that of an end-to-end, deep neural network classifier, while being approximately ten-times faster execution speed.
The current liveness detection technologies aimed against spoofing attacks are summarized below. Face spoofing attacks can be subdivided into two major categories: 2D attacks and 3D attacks. The former includes print-attacks and video-replay attacks, while the latter includes 3D spoofing mask attacks. Several publicly available face liveness databases simulate these attacks. To name a few, the NUAA @cite_20 and Print-Attack @cite_11 databases simulate photo attacks. The Replay-Attack @cite_28 and CASIA Face Anti-Spoofing @cite_26 datasets simulate replay attacks in addition to photo attacks. The 3D Mask Attack Database @cite_29 and HKBU-Mask Attack with Real World Variations @cite_14 simulate 3D mask attacks. Example countermeasures to each attack type are summarized below.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_28", "@cite_29", "@cite_20", "@cite_11" ], "mid": [ "2042883034", "2125320497", "2418633638", "2011016023" ], "abstract": [ "Spoofing attacks mainly include printing artifacts, electronic screens and ultra-realistic face masks or models. In this paper, we propose a component-based face coding approach for liveness detection. The proposed method consists of four steps: (1) locating the components of face; (2) coding the low-level features respectively for all the components; (3) deriving the high-level face representation by pooling the codes with weights derived from Fisher criterion; (4) concatenating the histograms from all components into a classifier for identification. The proposed framework makes good use of micro differences between genuine faces and fake faces. Meanwhile, the inherent appearance differences among different components are retained. Extensive experiments on three published standard databases demonstrate that the method can achieve the best liveness detection performance in three databases.", "The problem of detecting face spoofing attacks (presentation attacks) has recently gained a well-deserved popularity. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for 2D face recognition. Additionally, we analyze Local Binary Patterns based coun-termeasures using both color and depth data, obtained by Kinect. For this purpose, we introduce the 3D Mask Attack Database (3DMAD), the first publicly available 3D spoofing database, recorded with a low-cost depth camera. Extensive experiments on 3DMAD show that easily attainable facial masks can pose a serious threat to 2D face recognition systems and LBP is a powerful weapon to eliminate it.", "With the wide deployment of the face recognition systems in applications from deduplication to mobile device unlocking, security against the face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays, and 3D masks of a face. We address the problem of face spoof detection against the print (photo) and replay (photo or video) attacks based on the analysis of image distortion ( e.g. , surface reflection, moire pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is smartphone unlock, given that the growing number of smartphones have the face unlock and mobile payment capabilities. We build an unconstrained smartphone spoof attack database (MSU USSA) containing more than 1000 subjects. Both the print and replay attacks are captured using the front and rear cameras of a Nexus 5 smartphone. We analyze the image distortion of the print and replay attacks using different: 1) intensity channels (R, G, B, and grayscale); 2) image regions (entire image, detected face, and facial component between nose and chin); and 3) feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on the public-domain Idiap Replay-Attack, CASIA FASD, and MSU-MFSD databases, and the MSU USSA database show that the proposed approach is effective in face spoof detection for both the cross-database and intra-database testing scenarios. 
User studies of our Android face spoof detection system involving 20 participants show that the proposed approach works very well in real application scenarios.", "There are several types of spoofing attacks to face recognition systems such as photograph, video or mask attacks. Recent studies show that face recognition systems are vulnerable to these attacks. In this paper, a countermeasure technique is proposed to protect face recognition systems against mask attacks. To the best of our knowledge, this is the first time a countermeasure is proposed to detect mask attacks. The reason for this delay is mainly due to the unavailability of public mask attacks databases. In this study, a 2D+3D face mask attacks database is used which is prepared for a research project in which the authors are all involved. The performance of the countermeasure is evaluated on both the texture images and the depth maps, separately. The results show that the proposed countermeasure gives satisfactory results using both the texture images and the depth maps. The performance of the countermeasure is observed to be slight better when the technique is applied on texture images instead of depth maps, which proves that face texture provides more information than 3D face shape characteristics using the proposed approach." ] }
1907.12400
2965074212
In light of the rising demand for biometric-authentication systems, preventing face spoofing attacks is a critical issue for the safe deployment of face recognition systems. Here, we propose an efficient liveness detection algorithm that requires minimal hardware and only a small database, making it suitable for resource-constrained devices such as mobile phones. Utilizing one monocular visible light camera, the proposed algorithm takes two facial photos, one taken with a flash, the other without a flash. The proposed @math descriptor is constructed by leveraging two types of reflection: (i) specular reflections from the iris region that have a specific intensity distribution depending on liveness, and (ii) diffuse reflections from the entire face region that represents the 3D structure of a subject's face. Classifiers trained with @math descriptor outperforms other flash-based liveness detection algorithms on both an in-house database and on publicly available NUAA and Replay-Attack databases. Moreover, the proposed algorithm achieves comparable accuracy to that of an end-to-end, deep neural network classifier, while being approximately ten-times faster execution speed.
Recent 3D reconstruction and printing technologies have given malicious users the ability to produce realistic spoofing masks @cite_9 . One example countermeasure against such 3D attacks is multispectral imaging: @cite_24 reported the effectiveness of short-wave infrared (SWIR) imaging for detecting masks. Another approach is remote photoplethysmography (rPPG), which calculates pulse rhythms from periodic changes in face color @cite_10 . In this paper, however, we do not consider 3D attacks, because they are less likely to occur owing to the high cost of producing 3D masks. Our work focuses on preventing photo attacks and replay attacks.
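The rPPG principle mentioned above can be illustrated with a small sketch (this is only the basic idea, not the method of @cite_10 ): the mean green-channel intensity of the face region is tracked over time and the dominant frequency in the physiologically plausible band gives the pulse rate. The frame rate, band limits and input trace are assumptions of the example.

```python
import numpy as np

def estimate_pulse_bpm(green_trace, fps=30.0):
    """Estimate heart rate (beats per minute) from the mean green-channel
    intensity of the face region over time -- the basic rPPG principle."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)      # plausible 42-240 bpm range
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                          # a live face shows a clear peak here
```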
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_10" ], "mid": [ "2518976115", "2511115389", "2125320497", "2043167551" ], "abstract": [ "3D mask spoofing attack has been one of the main challenges in face recognition. Among existing methods, texture-based approaches show powerful abilities and achieve encouraging results on 3D mask face anti-spoofing. However, these approaches may not be robust enough in application scenarios and could fail to detect imposters with hyper-real masks. In this paper, we propose a novel approach to 3D mask face anti-spoofing from a new perspective, by analysing heartbeat signal through remote Photoplethysmography (rPPG). We develop a novel local rPPG correlation model to extract discriminative local heartbeat signal patterns so that an imposter can better be detected regardless of the material and quality of the mask. To further exploit the characteristic of rPPG distribution on real faces, we learn a confidence map through heartbeat signal strength to weight local rPPG correlation pattern for classification. Experiments on both public and self-collected datasets validate that the proposed method achieves promising results under intra and cross dataset scenario.", "Recent studies point out that spoofing attacks using facial masks still are a severe problem for current biometric face recognition (FR) systems. As such systems are becoming more frequently used, for example, for automated border crossing or access control to critical infrastructure, advanced anti-spoofing techniques are necessary to counter these attacks. This work presents a novel, cross-modal approach that enhances existing solutions for face verification and uses multispectral short wave infrared (SWIR) imaging to ensure the authenticity of a face even in the presence of partial disguises and masks. It is evaluated on a dataset containing 137 subjects and a variety of spoofing attacks. Using a commercial FR system, it successfully rejects all attempts to counterfeit a foreign face with a false acceptance rate FAR cf = 0 and most attempts to disguise the own identity with FAR dg = 1 at a false rejection rate of FRR < 5 using SWIR images for verification.", "The problem of detecting face spoofing attacks (presentation attacks) has recently gained a well-deserved popularity. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for 2D face recognition. Additionally, we analyze Local Binary Patterns based coun-termeasures using both color and depth data, obtained by Kinect. For this purpose, we introduce the 3D Mask Attack Database (3DMAD), the first publicly available 3D spoofing database, recorded with a low-cost depth camera. Extensive experiments on 3DMAD show that easily attainable facial masks can pose a serious threat to 2D face recognition systems and LBP is a powerful weapon to eliminate it.", "Short wave infrared (SWIR) is an emerging imaging modality in surveillance applications. It is able to capture clear long range images of a subject in harsh atmospheric conditions and at night time. However, matching SWIR images against a gallery of color images is a very challenging task. The photometric properties of images in these two spectral bands are highly distinct. 
This work presents a novel cross-spectral face recognition scheme that encodes images filtered with a bank of Gabor filters followed by three local operators: Simplified Weber Local Descriptor, Local Binary Pattern, and Generalized Local Binary Pattern. Both magnitude and phase of filtered images are encoded. Matching encoded face images is performed by using a symmetric I-divergence. We quantify the verification and identification performance of the cross-spectral matcher on two multispectral face datasets. In the first dataset (PRE-TINDERS), both SWIR and visible gallery images are captured at a close distance (about 2 meters). In the second dataset (TINDERS), the probe SWIR images are collected at longer ranges (50 and 106 meters). The results on PRE-TINDERS dataset form a baseline for matching long range data. We also demonstrate the capability of the proposed approach by comparing its performance with the performance of Faceit G8, a commercial face recognition engine distributed by L1. The results show that the designed method outperforms Faceit G8 in terms of verification and identification rates on both datasets." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Ordinal regression aims at classifying data with naturally ordered labels and plays an important role in many data-rich science domains. According to the commonly used taxonomy of ordinal regression @cite_62 , the existing methods are categorized into naive approaches, ordinal binary decomposition approaches and threshold models. The naive approaches are the earliest approaches to ordinal regression: they convert the ordinal labels into numeric values and then apply standard regression or support vector regression @cite_8 @cite_16 . Since the distances between classes are unknown in this type of method, the real values used for the labels may undermine regression performance. Moreover, these regression learners are sensitive to the label representation rather than to the order of the labels @cite_62 .
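A minimal sketch of such a naive approach, assuming scikit-learn and a plain linear regressor (the cited works also use support vector regression): the ordinal labels are treated as numbers, a regressor is fitted, and predictions are rounded back to the nearest rank. Re-spacing the numeric codes would change the fitted model, which is exactly the sensitivity to the label representation noted above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_naive_ordinal(X, y, num_classes):
    """Naive ordinal regression: regress on the numeric label codes 0..K-1
    and round predictions back to the nearest valid rank."""
    reg = LinearRegression().fit(X, y.astype(float))
    def predict(X_new):
        return np.clip(np.rint(reg.predict(X_new)), 0, num_classes - 1).astype(int)
    return predict
```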
{ "cite_N": [ "@cite_62", "@cite_16", "@cite_8" ], "mid": [ "2128186735", "2067001387", "2058475745", "1980896222" ], "abstract": [ "We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0 1 loss of the binary classifier would then bound the mislabeling cost of the ranking rule. Our framework allows not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework.", "Threshold models are one of the most common approaches for ordinal regression, based on projecting patterns to the real line and dividing this real line in consecutive intervals, one interval for each class. However, finding such one-dimensional projection can be too harsh an imposition for some datasets. This paper proposes a multidimensional latent space representation with the purpose of relaxing this projection, where the different classes are arranged based on concentric hyperspheres, each class containing the previous classes in the ordinal scale. The proposal is implemented through a neural network model, each dimension being a linear combination of a common set of basis functions. The model is compared to a nominal neural network, a neural network based on the proportional odds model and to other state-of-the-art ordinal regression methods for a total of 12 datasets. The proposed latent space shows an improvement on the two performance metrics considered, and the model based on the three-dimensional latent space obtains competitive performance when compared to the other methods.", "We investigate the problem of predicting variables of ordinal scale. This task is referred to as ordinal regression and is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models we present a distribution independent formulation of the problem together with uniform bounds of the risk functional. The approach presented is based on a mapping from objects to scalar utility values. Similar to support vector methods we derive a new learning algorithm for the task of ordinal regression based on large margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents with respect to an initial query. Experimental results indicate that the presented algorithm outperforms more naive approaches to ordinal regression such as support vector classification and support vector regression in the case of more than two ranks.", "In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. 
Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Ordinal binary decomposition approaches decompose the ordinal labels into several binary ones that are then estimated by multiple models @cite_50 @cite_54 . For example, @cite_50 transforms a @math -class ordinal problem into @math ordered binary classification problems, which are trained in conjunction with a decision tree learner to encode the ordering of the original ranks, i.e., @math binary classifiers are trained using the C4.5 algorithm. Threshold models are based on the idea of approximating a real-valued predictor and then partitioning the real line of ordinal values into segments. During the last decade, the two most popular families of threshold models have been support vector machine (SVM) models @cite_45 @cite_44 @cite_56 @cite_22 and generalized linear models for ordinal regression @cite_46 @cite_23 @cite_17 @cite_32 ; the former find the hyperplanes that separate the segments by maximizing the margin, while the latter predict the ordinal labels by maximizing the likelihood given the training data.
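The decomposition idea can be sketched as follows, assuming scikit-learn; a logistic regression stands in for the C4.5 learner of @cite_50 , and every binary split is assumed to contain examples of both classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal_decomposition(X, y, num_classes):
    """Train K-1 binary classifiers, the k-th one predicting whether y > k."""
    return [LogisticRegression().fit(X, (y > k).astype(int))
            for k in range(num_classes - 1)]

def predict_ordinal(models, X_new):
    # The predicted rank is the number of "y > k" thresholds that are passed.
    passed = np.stack([m.predict_proba(X_new)[:, 1] > 0.5 for m in models], axis=1)
    return passed.sum(axis=1)          # rank in {0, ..., K-1}
```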
{ "cite_N": [ "@cite_22", "@cite_54", "@cite_32", "@cite_56", "@cite_44", "@cite_45", "@cite_50", "@cite_23", "@cite_46", "@cite_17" ], "mid": [ "2128186735", "1980896222", "2794673239", "2296324964" ], "abstract": [ "We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0 1 loss of the binary classifier would then bound the mislabeling cost of the ranking rule. Our framework allows not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework.", "In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches.", "Binary code learning, a.k.a. hashing, has been successfully applied to the approximate nearest neighbor search in large-scale image collections. The key challenge lies in reducing the quantization error from the original real-valued feature space to a discrete Hamming space. Recent advances in unsupervised hashing advocate the preservation of ranking information, which is achieved by constraining the binary code learning to be correlated with pairwise similarity. However, few unsupervised methods consider the preservation of ordinal relations in the learning process, which serves as a more basic cue to learn optimal binary codes. In this paper, we propose a novel hashing scheme, termed Ordinal Constraint Hashing (OCH), which embeds the ordinal relation among data points to preserve ranking into binary codes. The core idea is to construct an ordinal graph via tensor product, and then train the hash function over this graph to preserve the permutation relations among data points in the Hamming space. Subsequently, an in-depth acceleration scheme, termed Ordinal Constraint Projection (OCP), is introduced, which approximates the @math -pair ordinal graph by @math -pair anchor-based ordinal graph, and reduce the corresponding complexity from @math to @math ( @math ). Finally, to make the optimization tractable, we further relax the discrete constrains and design a customized stochastic gradient decent algorithm on the Stiefel manifold. 
Experimental results on serval large-scale benchmarks demonstrate that the proposed OCH method can achieve superior performance over the state-of-the-art approaches.", "In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive due to the fact that optimization is directly performed on the triplet space, where the number of possible triplets for training is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm to reduce the high-order objective to a standard binary quadratic problem such that graph cuts can be used to efficiently infer the binary code which serve as the label of each training datum. In the second stage we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs) and the hashing function fitting can be solved by training binary CNN classifiers. An incremental interleaved optimization strategy is proffered to ensure that these two steps are interactive with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) as well as producing state-of-the-art hashing for various retrieval tasks." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
In @cite_45 , support vector ordinal regression (SVOR) is achieved by finding multiple thresholds that partition the real line of ordinal values into consecutive intervals representing the ordered segments; however, it does not consider the ordinal inequalities on the thresholds. In @cite_44 @cite_56 , the authors take the ordinal inequalities on the thresholds into account and propose two SVOR approaches, using two types of thresholds, by introducing explicit constraints. Because the complicated formulations of SVOR make incremental learning difficult, @cite_22 proposes a modified SVOR formulation based on a sum-of-margins strategy to address the computational scalability issue of SVOR.
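To make the threshold mechanism shared by these SVOR formulations explicit, the toy snippet below maps a real-valued score to an ordinal label through ordered thresholds. The scores and thresholds are invented for illustration only; an actual SVOR solver learns both jointly under explicit ordering constraints on the thresholds.

```python
import numpy as np

thresholds = np.array([-1.0, 0.5, 2.0])       # b_1 < b_2 < b_3, assumed already ordered
scores = np.array([-2.3, 0.1, 0.7, 3.5])      # f(x) for four hypothetical inputs

# The predicted ordinal label is the index of the interval the score falls into.
labels = np.searchsorted(thresholds, scores)
print(labels)                                  # -> [0 1 2 3]
```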
{ "cite_N": [ "@cite_44", "@cite_45", "@cite_22", "@cite_56" ], "mid": [ "2061879449", "2023508744", "1980896222", "2067001387" ], "abstract": [ "Support vector ordinal regression (SVOR) is a popular method to tackle ordinal regression problems. However, until now there were no effective algorithms proposed to address incremental SVOR learning due to the complicated formulations of SVOR. Recently, an interesting accurate on-line algorithm was proposed for training @math -support vector classification ( @math -SVC), which can handle a quadratic formulation with a pair of equality constraints. In this paper, we first present a modified SVOR formulation based on a sum-of-margins strategy. The formulation has multiple constraints, and each constraint includes a mixture of an equality and an inequality. Then, we extend the accurate on-line @math -SVC algorithm to the modified formulation, and propose an effective incremental SVOR algorithm. The algorithm can handle a quadratic formulation with multiple constraints, where each constraint is constituted of an equality and an inequality. More importantly, it tackles the conflicts between the equality and inequality constraints. We also provide the finite convergence analysis for the algorithm. Numerical experiments on the several benchmark and real-world data sets show that the incremental algorithm can converge to the optimal solution in a finite number of steps, and is faster than the existing batch and incremental SVOR algorithms. Meanwhile, the modified formulation has better accuracy than the existing incremental SVOR algorithm, and is as accurate as the sum-of-margins based formulation of Shashua and Levin.", "In this paper, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The SMO algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on benchmark datasets verify the usefulness of these approaches.", "In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches.", "Threshold models are one of the most common approaches for ordinal regression, based on projecting patterns to the real line and dividing this real line in consecutive intervals, one interval for each class. However, finding such one-dimensional projection can be too harsh an imposition for some datasets. 
This paper proposes a multidimensional latent space representation with the purpose of relaxing this projection, where the different classes are arranged based on concentric hyperspheres, each class containing the previous classes in the ordinal scale. The proposal is implemented through a neural network model, each dimension being a linear combination of a common set of basis functions. The model is compared to a nominal neural network, a neural network based on the proportional odds model and to other state-of-the-art ordinal regression methods for a total of 12 datasets. The proposed latent space shows an improvement on the two performance metrics considered, and the model based on the three-dimensional latent space obtains competitive performance when compared to the other methods." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Generalized linear models perform ordinal regression by fitting a coefficient vector and a set of thresholds, e.g., the ordered logit @cite_46 @cite_23 and ordered probit @cite_17 @cite_32 models. The margin functions are defined based on the cumulative probabilities of the training instances' ordinal labels. Different link functions are then chosen for different models, i.e., the logistic cumulative distribution function (CDF) for the ordered logit and the standard normal CDF for the ordered probit. Finally, the maximum likelihood principle is used for training.
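A minimal numerical sketch of the ordered logit model just described is given below: the cumulative probability P(y <= k | x) is the logistic CDF of b_k - w·x, the class probability is a difference of adjacent cumulative terms, and training would minimize the resulting negative log-likelihood. The weights, thresholds and data are toy values chosen only to make the computation concrete.

```python
import numpy as np

def ordered_logit_nll(w, b, X, y):
    """Negative log-likelihood of an ordered logit (cumulative link) model.

    b holds the K-1 ordered thresholds; P(y <= k | x) = sigma(b_k - x.w) and
    P(y = k | x) is the difference of adjacent cumulative probabilities.
    """
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))
    eta = X @ w                                              # latent score per instance
    F = sigma(b[None, :] - eta[:, None])                     # cumulative P(y <= k), shape (n, K-1)
    F = np.hstack([np.zeros((len(X), 1)), F, np.ones((len(X), 1))])
    p = F[np.arange(len(X)), y + 1] - F[np.arange(len(X)), y]   # P(y = observed label)
    return -np.sum(np.log(p + 1e-12))

# Toy example: 3 ordered classes, 2 features; an ordered probit would only swap
# the logistic CDF for the standard normal CDF.
X = np.array([[0.5, -1.0], [1.5, 0.2], [-0.3, 0.8]])
y = np.array([0, 2, 1])
print(ordered_logit_nll(w=np.array([1.0, -0.5]), b=np.array([-0.5, 1.0]), X=X, y=y))
```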
{ "cite_N": [ "@cite_46", "@cite_32", "@cite_23", "@cite_17" ], "mid": [ "1980896222", "2058475745", "2122634476", "2128186735" ], "abstract": [ "In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches.", "We investigate the problem of predicting variables of ordinal scale. This task is referred to as ordinal regression and is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models we present a distribution independent formulation of the problem together with uniform bounds of the risk functional. The approach presented is based on a mapping from objects to scalar utility values. Similar to support vector methods we derive a new learning algorithm for the task of ordinal regression based on large margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents with respect to an initial query. Experimental results indicate that the presented algorithm outperforms more naive approaches to ordinal regression such as support vector classification and support vector regression in the case of more than two ranks.", "In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by a recursive (or acyclic) system of linear structural equations. These linear models have a long tradition and appear in seemingly unrelated regressions, structural equation modelling, and approaches to causal inference. They are also related to Gaussian graphical models via a classical representation known as a path diagram. Despite the models' long history, a number of problems remain open. In this paper, we address the problem of computing maximum likelihood estimates in the subclass of 'bow-free' recursive linear models. The term 'bow-free' refers to the condition that the errors for variables i and j be uncorrelated if variable i occurs in the structural equation for variable j. We introduce a new algorithm, termed Residual Iterative Conditional Fitting (RICF), that can be implemented using only least squares computations. In contrast to existing algorithms, RICF has clear convergence properties and yields exact maximum likelihood estimates after the first iteration whenever the MLE is available in closed form.", "We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0 1 loss of the binary classifier would then bound the mislabeling cost of the ranking rule. 
Our framework allows not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Recently, MTL has been combined with many deep learning approaches @cite_10 . MTL can be implemented in DNN-based approaches in two ways, i.e., soft and hard parameter sharing of hidden layers. In soft parameter sharing, the tasks do not share representation layers; instead, the distance between their respective representation layers is constrained to encourage the parameters to be similar @cite_10 , e.g., @cite_20 and @cite_21 use the @math -norm and the trace norm, respectively.
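The soft parameter sharing scheme can be sketched as follows: every task keeps its own representation layers, and a penalty on the distance between corresponding parameters couples them. The cited works use an l2-norm or trace-norm coupling; this toy PyTorch version uses a plain squared-l2 distance, and all layer sizes, the task count and the penalty weight are arbitrary choices.

```python
import torch
import torch.nn as nn

class SoftSharingMTL(nn.Module):
    def __init__(self, in_dim=10, hidden=32, n_tasks=2):
        super().__init__()
        # One private encoder (representation layers) per task.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()) for _ in range(n_tasks)]
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x, task):
        return self.heads[task](self.encoders[task](x))

    def sharing_penalty(self):
        """Squared distance between corresponding encoder parameters across tasks."""
        ref = list(self.encoders[0].parameters())
        penalty = torch.tensor(0.0)
        for enc in self.encoders[1:]:
            for p_ref, p in zip(ref, enc.parameters()):
                penalty = penalty + ((p_ref - p) ** 2).sum()
        return penalty

model = SoftSharingMTL()
x, target = torch.randn(4, 10), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x, task=0), target)
loss = loss + 1e-3 * model.sharing_penalty()   # couple the task-specific parameters
loss.backward()
```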
{ "cite_N": [ "@cite_21", "@cite_10", "@cite_20" ], "mid": [ "2900964459", "2966182616", "2025198378", "2407277018" ], "abstract": [ "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "In the deep neural network (DNN), the hidden layers can be considered as increasingly complex feature transformations and the final softmax layer as a log-linear classifier making use of the most abstract features computed in the hidden layers. While the loglinear classifier should be different for different languages, the feature transformations can be shared across languages. In this paper we propose a shared-hidden-layer multilingual DNN (SHL-MDNN), in which the hidden layers are made common across many languages while the softmax layers are made language dependent. We demonstrate that the SHL-MDNN can reduce errors by 3-5 , relatively, for all the languages decodable with the SHL-MDNN, over the monolingual DNNs trained using only the language specific data. Further, we show that the learned hidden layers sharing across languages can be transferred to improve recognition accuracy of new languages, with relative error reductions ranging from 6 to 28 against DNNs trained without exploiting the transferred hidden layers. It is particularly interesting that the error reduction can be achieved for the target language that is in different families of the languages used to learn the hidden layers.", "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. 
Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Hard parameter sharing is the most commonly used approach in DNN-based MTL @cite_10 . In hard parameter sharing, all tasks share the representation layers to reduce the risk of overfitting @cite_24 while keeping some task-specific layers to preserve the characteristics of each task @cite_40 . In this paper, we use hard parameter sharing for DMTOR. In all the aforementioned methods and other related works, the learning tasks are either classification or standard regression tasks. Here, in this paper, the learning tasks are multiple ordinal regression problems. We propose a set of novel MTOR models in Section and Section to solve multiple multi-ordered classification problems simultaneously. Moreover, in Section , multi-stage disease diagnosis is handled in the experiments using the proposed MTOR models, i.e., the RMTOR and DMTOR models.
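For contrast with the previous sketch, a minimal hard parameter sharing example is shown below: one shared trunk of representation layers plus one task-specific head per task, in the spirit of the DMTOR architecture described above. The actual models use ordinal-regression-specific output layers; here plain linear heads and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    def __init__(self, in_dim=10, hidden=32, n_tasks=3):
        super().__init__()
        self.shared = nn.Sequential(                 # representation layers shared by all tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(                  # task-specific layers
            [nn.Linear(hidden, 1) for _ in range(n_tasks)]
        )

    def forward(self, x, task):
        return self.heads[task](self.shared(x))

model = HardSharingMTL()
out = model(torch.randn(4, 10), task=1)              # prediction for task 1
print(out.shape)                                      # torch.Size([4, 1])
```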
{ "cite_N": [ "@cite_24", "@cite_40", "@cite_10" ], "mid": [ "2900964459", "2131479143", "2966182616", "2407277018" ], "abstract": [ "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Most contemporary multi-task learning methods assume linear models. 
This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices." ] }
1907.12363
2966207284
Training models on highly unbalanced data is admitted to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or unbalanced data but with massive amount of samples available, like in speech recognition. However, the capacities of deep learning on imbalanced data with little samples is not deeply investigated in literature, while it is a very common application context, in numerous industries. To contribute to fill this gap, this paper compares the performances of several popular machine learning algorithms previously applied with success to unbalanced data set with deep learning algorithms. We conduct those experiments on an highly unbalanced data set, used for credit scoring. We evaluate various configuration including neural network optimisation techniques and try to determine their capacities when they operate with imbalanced corpora.
In their review of methods and applications for learning from unbalanced data @cite_2 , the authors cite taxonomy, chemical engineering, financial management, information technology, energy management and security management as domains in which the topic has been extensively explored. Most of the related experiments have tended to show, for years, that decision trees are the best-performing ML algorithms on unbalanced data sets.
{ "cite_N": [ "@cite_2" ], "mid": [ "1831729862", "1967148170", "2087787741", "2050806103" ], "abstract": [ "Learning from unbalanced datasets presents a convoluted problem in which traditional learning algorithms may perform poorly. The objective functions used for learning the classifiers typically tend to favor the larger, less important classes in such problems. This paper compares the performance of several popular decision tree splitting criteria --- information gain, Gini measure, and DKM --- and identifies a new skew insensitive measure in Hellinger distance. We outline the strengths of Hellinger distance in class imbalance, proposes its application in forming decision trees, and performs a comprehensive comparative analysis between each decision tree construction method. In addition, we consider the performance of each tree within a powerful sampling wrapper framework to capture the interaction of the splitting metric and sampling. We evaluate over this wide range of datasets and determine which operate best under class imbalance.", "Decision trees are probably the most popular and commonly used classification model. They are recursively built following a top-down approach (from general concepts to particular examples) by repeated splits of the training dataset. When this dataset contains numerical attributes, binary splits are usually performed by choosing the threshold value which minimizes the impurity measure used as splitting criterion (e.g. C4.5 gain ratio criterion or CART Gini's index). In this paper we propose the use of multi-way splits for continuous attributes in order to reduce the tree complexity without decreasing classification accuracy. This can be done by intertwining a hierarchical clustering algorithm with the usual greedy decision tree learning.", "Traditional learning algorithms applied to complex and highly imbalanced training sets may not give satisfactory results when distinguishing between examples of the classes. The tendency is to yield classification models that are biased towards the overrepresented (majority) class. This paper investigates this class imbalance problem in the context of multilayer perceptron (MLP) neural networks. The consequences of the equal cost (loss) assumption on imbalanced data are formally discussed from a statistical learning theory point of view. A new cost-sensitive algorithm (CSMLP) is presented to improve the discrimination ability of (two-class) MLPs. The CSMLP formulation is based on a joint objective function that uses a single cost parameter to distinguish the importance of class errors. The learning rule extends the Levenberg-Marquadt's rule, ensuring the computational efficiency of the algorithm. In addition, it is theoretically demonstrated that the incorporation of prior information via the cost parameter may lead to balanced decision boundaries in the feature space. Based on the statistical analysis of results on real data, our approach shows a significant improvement of the area under the receiver operating characteristic curve and G-mean measures of regular MLPs.", "Decision trees are the commonly applied tools in the task of data stream classification. The most critical point in decision tree construction algorithm is the choice of the splitting attribute. In majority of algorithms existing in literature the splitting criterion is based on statistical bounds derived for split measure functions. In this paper we propose a totally new kind of splitting criterion. 
We derive statistical bounds for arguments of split measure function instead of deriving it for split measure function itself. This approach allows us to properly use the Hoeffding's inequality to obtain the required bounds. Based on this theoretical results we propose the Decision Trees based on the Fractions Approximation algorithm (DTFA). The algorithm exhibits satisfactory results of classification accuracy in numerical experiments. It is also compared with other existing in literature methods, demonstrating noticeably better performance." ] }
1907.12363
2966207284
Training models on highly unbalanced data is admitted to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or unbalanced data but with massive amount of samples available, like in speech recognition. However, the capacities of deep learning on imbalanced data with little samples is not deeply investigated in literature, while it is a very common application context, in numerous industries. To contribute to fill this gap, this paper compares the performances of several popular machine learning algorithms previously applied with success to unbalanced data set with deep learning algorithms. We conduct those experiments on an highly unbalanced data set, used for credit scoring. We evaluate various configuration including neural network optimisation techniques and try to determine their capacities when they operate with imbalanced corpora.
In @cite_5 , the authors compare the performance of several popular decision tree splitting criteria – information gain, the Gini measure, and DKM – and show how they can be used to form decision trees and to improve the performance of tree construction methods applied to unbalanced data.
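To illustrate why the choice of splitting criterion matters under class imbalance, the small sketch below computes the impurity decrease of a candidate split under entropy (information gain) and the Gini measure; the class counts are invented, and skew-insensitive criteria such as DKM would follow the same pattern with a different impurity function.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def gini(counts):
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

parent = np.array([990, 10])                     # 990 majority vs. 10 minority examples
left, right = np.array([600, 1]), np.array([390, 9])
n = parent.sum()

for name, impurity in [("information gain", entropy), ("Gini decrease", gini)]:
    gain = impurity(parent) - (left.sum() / n) * impurity(left) - (right.sum() / n) * impurity(right)
    print(f"{name}: {gain:.4f}")                 # gains stay small under heavy imbalance
```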
{ "cite_N": [ "@cite_5" ], "mid": [ "1831729862", "2050806103", "1967148170", "2124357902" ], "abstract": [ "Learning from unbalanced datasets presents a convoluted problem in which traditional learning algorithms may perform poorly. The objective functions used for learning the classifiers typically tend to favor the larger, less important classes in such problems. This paper compares the performance of several popular decision tree splitting criteria --- information gain, Gini measure, and DKM --- and identifies a new skew insensitive measure in Hellinger distance. We outline the strengths of Hellinger distance in class imbalance, proposes its application in forming decision trees, and performs a comprehensive comparative analysis between each decision tree construction method. In addition, we consider the performance of each tree within a powerful sampling wrapper framework to capture the interaction of the splitting metric and sampling. We evaluate over this wide range of datasets and determine which operate best under class imbalance.", "Decision trees are the commonly applied tools in the task of data stream classification. The most critical point in decision tree construction algorithm is the choice of the splitting attribute. In majority of algorithms existing in literature the splitting criterion is based on statistical bounds derived for split measure functions. In this paper we propose a totally new kind of splitting criterion. We derive statistical bounds for arguments of split measure function instead of deriving it for split measure function itself. This approach allows us to properly use the Hoeffding's inequality to obtain the required bounds. Based on this theoretical results we propose the Decision Trees based on the Fractions Approximation algorithm (DTFA). The algorithm exhibits satisfactory results of classification accuracy in numerical experiments. It is also compared with other existing in literature methods, demonstrating noticeably better performance.", "Decision trees are probably the most popular and commonly used classification model. They are recursively built following a top-down approach (from general concepts to particular examples) by repeated splits of the training dataset. When this dataset contains numerical attributes, binary splits are usually performed by choosing the threshold value which minimizes the impurity measure used as splitting criterion (e.g. C4.5 gain ratio criterion or CART Gini's index). In this paper we propose the use of multi-way splits for continuous attributes in order to reduce the tree complexity without decreasing classification accuracy. This can be done by intertwining a hierarchical clustering algorithm with the usual greedy decision tree learning.", "In mining data streams the most popular tool is the Hoeffding tree algorithm. It uses the Hoeffding's bound to determine the smallest number of examples needed at a node to select a splitting attribute. In the literature the same Hoeffding's bound was used for any evaluation function (heuristic measure), e.g., information gain or Gini index. In this paper, it is shown that the Hoeffding's inequality is not appropriate to solve the underlying problem. We prove two theorems presenting the McDiarmid's bound for both the information gain, used in ID3 algorithm, and for Gini index, used in Classification and Regression Trees (CART) algorithm. 
The results of the paper guarantee that a decision tree learning system, applied to data streams and based on the McDiarmid's bound, has the property that its output is nearly identical to that of a conventional learner. The results of the paper have a great impact on the state of the art of mining data streams and various developed so far methods and algorithms should be reconsidered." ] }
1907.12363
2966207284
Training models on highly unbalanced data is admitted to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or unbalanced data but with massive amount of samples available, like in speech recognition. However, the capacities of deep learning on imbalanced data with little samples is not deeply investigated in literature, while it is a very common application context, in numerous industries. To contribute to fill this gap, this paper compares the performances of several popular machine learning algorithms previously applied with success to unbalanced data set with deep learning algorithms. We conduct those experiments on an highly unbalanced data set, used for credit scoring. We evaluate various configuration including neural network optimisation techniques and try to determine their capacities when they operate with imbalanced corpora.
The authors of @cite_6 discuss the use of 108 different classification models to determine the model best suited to dealing with the imbalanced data of a biomedical literature corpus; they achieve the most satisfactory results with an LMT decision tree classifier, previously used with success by @cite_1 in an unbalanced multi-label classification task related to taxonomy.
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2026905436", "2739996966", "2129414564", "2097553584" ], "abstract": [ "The vast majority of the literature evaluates the performance of classification models using only the criterion of predictive accuracy. This paper reviews the case for considering also the comprehensibility (interpretability) of classification models, and discusses the interpretability of five types of classification models, namely decision trees, classification rules, decision tables, nearest neighbors and Bayesian network classifiers. We discuss both interpretability issues which are specific to each of those model types and more generic interpretability issues, namely the drawbacks of using model size as the only criterion to evaluate the comprehensibility of a model, and the use of monotonicity constraints to improve the comprehensibility and acceptance of classification models by users.", "Extreme multi-label text classification (XMTC) refers to the problem of assigning to each document its most relevant subset of class labels from an extremely large label collection, where the number of labels could reach hundreds of thousands or millions. The huge label space raises research challenges such as data sparsity and scalability. Significant progress has been made in recent years by the development of new machine learning methods, such as tree induction with large-margin partitions of the instance spaces and label-vector embedding in the target space. However, deep learning has not been explored for XMTC, despite its big successes in other related areas. This paper presents the first attempt at applying deep learning to XMTC, with a family of new Convolutional Neural Network (CNN) models which are tailored for multi-label classification in particular. With a comparative evaluation of 7 state-of-the-art methods on 6 benchmark datasets where the number of labels is up to 670,000, we show that the proposed CNN approach successfully scaled to the largest datasets, and consistently produced the best or the second best results on all the datasets. On the Wikipedia dataset with over 2 million documents and 500,000 labels in particular, it outperformed the second best method by 11.7 15.3 in precision@K and by 11.5 11.7 in NDCG@K for K = 1,3,5.", "We propose probabilistic generative models, called parametric mixture models (PMMs), for multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages.", "The volume of biomedical literature has experienced explosive growth in recent years. This is reflected in the corresponding increase in the size of MEDLINE^(R), the largest bibliographic database of biomedical citations. Indexers at the US National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH^(R) terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM's Medical Text Indexer (MTI). 
Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading subheading pair recommendations were assessed independently and combined. The best combination achieves 48 precision and 30 recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice." ] }
1907.12363
2966207284
Training models on highly unbalanced data is admitted to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or unbalanced data but with massive amount of samples available, like in speech recognition. However, the capacities of deep learning on imbalanced data with little samples is not deeply investigated in literature, while it is a very common application context, in numerous industries. To contribute to fill this gap, this paper compares the performances of several popular machine learning algorithms previously applied with success to unbalanced data set with deep learning algorithms. We conduct those experiments on an highly unbalanced data set, used for credit scoring. We evaluate various configuration including neural network optimisation techniques and try to determine their capacities when they operate with imbalanced corpora.
In the domain of credit applications, the authors of @cite_11 compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. The results of this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with the pronounced class imbalances in these data sets.
{ "cite_N": [ "@cite_11" ], "mid": [ "2052611008", "2096945460", "2040010062", "1563938718" ], "abstract": [ "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction.Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly under-sampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman's statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques.The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers.", "Class imbalance is a problem that is common to many application domains. When examples of one class in a training data set vastly outnumber examples of the other class(es), traditional data mining algorithms tend to create suboptimal classification models. Several techniques have been used to alleviate the problem of class imbalance, including data sampling and boosting. In this paper, we present a new hybrid sampling boosting algorithm, called RUSBoost, for learning from skewed training data. This algorithm provides a simpler and faster alternative to SMOTEBoost, which is another algorithm that combines boosting and data sampling. This paper evaluates the performances of RUSBoost and SMOTEBoost, as well as their individual components (random undersampling, synthetic minority oversampling technique, and AdaBoost). We conduct experiments using 15 data sets from various application domains, four base learners, and four evaluation metrics. RUSBoost and SMOTEBoost both outperform the other procedures, and RUSBoost performs comparably to (and often better than) SMOTEBoost while being a simpler and faster technique. Given these experimental results, we highly recommend RUSBoost as an attractive alternative for improving the classification performance of learners built using imbalanced data.", "Learning from imbalanced data sets, where the number of examples of one (majority) class is much higher than the others, presents an important challenge to the machine learning community. Traditional machine learning algorithms may be biased towards the majority class, thus producing poor predictive accuracy over the minority class. 
In this paper, we describe a new approach that combines boosting, an ensemble-based learning algorithm, with data generation to improve the predictive power of classifiers against imbalanced data sets consisting of two classes. In the DataBoost-IM method, hard examples from both the majority and minority classes are identified during execution of the boosting algorithm. Subsequently, the hard examples are used to separately generate synthetic examples for the majority and minority classes. The synthetic data are then added to the original training set, and the class distribution and the total weights of the different classes in the new training set are rebalanced. The DataBoost-IM method was evaluated, in terms of the F-measures, G-mean and overall accuracy, against seventeen highly and moderately imbalanced data sets using decision trees as base classifiers. Our results are promising and show that the DataBoost-IM method compares well in comparison with a base classifier, a standard benchmarking boosting algorithm and three advanced boosting-based algorithms for imbalanced data set. Results indicate that our approach does not sacrifice one class in favor of the other, but produces high predictions against both minority and majority classes.", "Many real world data mining applications involve learning from imbalanced data sets. Learning from data sets that contain very few instances of the minority (or interesting) class usually produces biased classifiers that have a higher predictive accuracy over the majority class(es), but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling TEchnique) is specifically designed for learning from imbalanced data sets. This paper presents a novel approach for learning from imbalanced data sets, based on a combination of the SMOTE algorithm and the boosting procedure. Unlike standard boosting where all misclassified examples are given equal weights, SMOTEBoost creates synthetic examples from the rare or minority class, thus indirectly changing the updating weights and compensating for skewed distributions. SMOTEBoost applied to several highly and moderately imbalanced data sets shows improvement in prediction performance on the minority class and overall improved F-values." ] }