aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---|
1908.11315 | 2970031230 | Due to its hereditary nature, genomic data is not only linked to its owner but to that of close relatives as well. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to outlast the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible-looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences, and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequences than using state-of-the-art genomic sequence inference methods, obtaining up to 15% improvement in accuracy. | Long-term security. As the sensitivity of genomic data does not degrade over time, access to an individual's genome poses a threat to her descendants, even years after her death. To the best of our knowledge, GenoGuard @cite_27 is the only attempt to provide long-term security. GenoGuard relies on Honey Encryption @cite_35 , aiming to provide confidentiality in the presence of brute-force attacks; it only serves as a storage mechanism, i.e., it does not support selective retrieval or testing on encrypted data (as such, it is not "composable" with other techniques supporting privacy-preserving testing or data sharing). A toy sketch of the Honey Encryption mechanism follows this entry. In this paper, we provide a security analysis of GenoGuard. In parallel to our work, @cite_15 recently propose attacks against probability model transforming encoders, and also evaluate them on GenoGuard. Using machine learning, they train a classifier to distinguish between the real and the decoy sequences, and exclude all decoy data for approximately 48% of cases. | {
"cite_N": [
"@cite_35",
"@cite_27",
"@cite_15"
],
"mid": [
"1892454167",
"1714926069",
"2962983989"
],
"abstract": [
"We introduce honey encryption (HE), a simple, general approach to encrypting messages using low min-entropy keys such as passwords. HE is designed to produce a ciphertext which, when decrypted with any of a number of incorrect keys, yields plausible-looking but bogus plaintexts called honey messages. A key benefit of HE is that it provides security in cases where too little entropy is available to withstand brute-force attacks that try every key; in this sense, HE provides security beyond conventional brute-force bounds. HE can also provide a hedge against partial disclosure of high min-entropy keys.",
"Secure storage of genomic data is of great and increasing importance. The scientific community's improving ability to interpret individuals' genetic materials and the growing size of genetic database populations have been aggravating the potential consequences of data breaches. The prevalent use of passwords to generate encryption keys thus poses an especially serious problem when applied to genetic data. Weak passwords can jeopardize genetic data in the short term, but given the multi-decade lifespan of genetic data, even the use of strong passwords with conventional encryption can lead to compromise. We present a tool, called Geno Guard, for providing strong protection for genomic data both today and in the long term. Geno Guard incorporates a new theoretical framework for encryption called honey encryption (HE): it can provide information-theoretic confidentiality guarantees for encrypted data. Previously proposed HE schemes, however, can be applied to messages from, unfortunately, a very restricted set of probability distributions. Therefore, Geno Guard addresses the open problem of applying HE techniques to the highly non-uniform probability distributions that characterize sequences of genetic data. In Geno Guard, a potential adversary can attempt exhaustively to guess keys or passwords and decrypt via a brute-force attack. We prove that decryption under any key will yield a plausible genome sequence, and that Geno Guard offers an information-theoretic security guarantee against message-recovery attacks. We also explore attacks that use side information. Finally, we present an efficient and parallelized software implementation of Geno Guard.",
""
]
} |
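The Honey Encryption mechanism referenced in the entry above can be illustrated with a toy distribution-transforming encoder (DTE): a message is encoded as a uniform seed inside an interval proportional to its probability, and the seed is encrypted with a password-derived pad, so decryption under any wrong password decodes to a plausible decoy. This is a minimal sketch only; the message distribution, seed-space size, and SHA-256-based pad are illustrative assumptions, not GenoGuard's actual construction.

```python
import hashlib
import random

# Toy message distribution over short genome fragments. In GenoGuard this role is
# played by a much richer sequence model; the distribution here is illustrative only.
MESSAGES = {"AATG": 0.5, "AACG": 0.3, "GATG": 0.2}
SEED_SPACE = 2 ** 32

def intervals():
    # Distribution-transforming encoder (DTE): each message owns a slice of the
    # seed space proportional to its probability.
    out, lo = {}, 0
    for msg, p in MESSAGES.items():
        hi = lo + int(p * SEED_SPACE)
        out[msg] = (lo, hi)
        lo = hi
    return out

def encode(msg):
    lo, hi = intervals()[msg]
    return random.randrange(lo, hi)        # uniform seed inside the message's slice

def decode(seed):
    for msg, (lo, hi) in intervals().items():
        if lo <= seed < hi:
            return msg
    return next(reversed(MESSAGES))        # rounding slack falls to the last slice

def pad(password):
    # Password-derived one-time pad over the seed space (hypothetical KDF choice).
    return int.from_bytes(hashlib.sha256(password.encode()).digest()[:4], "big")

def encrypt(msg, password):
    return encode(msg) ^ pad(password)

def decrypt(ciphertext, password):
    return decode(ciphertext ^ pad(password))

ct = encrypt("AATG", "correct horse")
print(decrypt(ct, "correct horse"))        # -> AATG
print(decrypt(ct, "wrong guess"))          # -> a plausible decoy, never garbage
```

Decoys are sampled (approximately) from the modeled distribution, which is exactly the property the attacks discussed above exploit when side information rules decoys out.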
1908.10896 | 2970353712 | One of the biggest hurdles for customers when purchasing fashion online is the difficulty of finding products with the right fit. In order to provide a better online shopping experience, platforms need to find ways to recommend the right product sizes and the best fitting products to their customers. These recommendation systems, however, require customer feedback in order to estimate the most suitable sizing options. Such feedback is rare and often only available as natural text. In this paper, we examine the extraction of product fit feedback from customer reviews using natural language processing techniques. In particular, we compare traditional methods with more recent transfer learning techniques for text classification, and analyze their results. Our evaluation shows that the transfer learning approach ULMFit is not only comparatively fast to train, but also achieves the highest accuracy on this task. The integration of the extracted information with actual size recommendation systems is left for future work. | Product fit recommendation has only recently been researched. The main challenge is to estimate the true size of a product and the best fitting size for a customer, and match them accordingly. This has been handled in a number of different ways. In @cite_18 , the true sizes of customers and products are estimated using a latent factor model, and recommendations are made with a similarity-based approach; a minimal sketch of this latent-size idea follows this entry. In @cite_11 , an extension using a Bayesian model has been proposed. A hierarchical Bayesian approach can be found in @cite_9 . In @cite_6 , the size recommendation problem is tackled by learning embeddings for customers and products. The embeddings are combined in a joint space, where metric learning and prototyping are applied in order to derive good representations for the different size classes. The authors of @cite_6 also published two datasets with their paper, which we utilize in our experiments. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_6",
"@cite_11"
],
"mid": [
"2894397262",
"2749155890",
"2893160345",
"2788493241"
],
"abstract": [
"We introduce a hierarchical Bayesian approach to tackle the challenging problem of size recommendation in e-commerce fashion. Our approach jointly models a size purchased by a customer, and its possible return event: 1. no return, 2. returned too small 3. returned too big. Those events are drawn following a multinomial distribution parameterized on the joint probability of each event, built following a hierarchy combining priors. Such a model allows us to incorporate extended domain expertise and article characteristics as prior knowledge, which in turn makes it possible for the underlying parameters to emerge thanks to sufficient data. Experiments are presented on real (anonymized) data from millions of customers along with a detailed discussion on the efficiency of such an approach within a large scale production system.",
"We propose a novel latent factor model for recommending product size fits Small, Fit, Large to customers. Latent factors for customers and products in our model correspond to their physical true size, and are learnt from past product purchase and returns data. The outcome for a customer, product pair is predicted based on the difference between customer and product true sizes, and efficient algorithms are proposed for computing customer and product true size values that minimize two loss function variants. In experiments with Amazon shoe datasets, we show that our latent factor models incorporating personas, and leveraging return codes show a 17-21 AUC improvement compared to baselines. In an online A B test, our algorithms show an improvement of 0.49 in percentage of Fit transactions over control.",
"Product size recommendation and fit prediction are critical in order to improve customers' shopping experiences and to reduce product return rates. Modeling customers' fit feedback is challenging due to its subtle semantics, arising from the subjective evaluation of products, and imbalanced label distribution. In this paper, we propose a new predictive framework to tackle the product fit problem, which captures the semantics behind customers' fit feedback, and employs a metric learning technique to resolve label imbalance issues. We also contribute two public datasets collected from online clothing retailers.",
"Lack of calibrated product sizing in popular categories such as apparel and shoes leads to customers purchasing incorrect sizes, which in turn results in high return rates due to fi€t issues. We address the problem of product size recommendations based on customer purchase and return data. We propose a novel approach based on Bayesian logit and probit regression models with ordinal categories Small, Fit, Large to model size fits as a function of the difference between latent sizes of customers and products. We propose posterior computation based on mean-field variational inference, leveraging the Polya-Gamma augmentation for the logit prior, that results in simple updates, enabling our technique to efficiently handle large datasets. O„ur experiments with real-life shoe datasets show that our model outperforms the state of the art in 5 of 6 datasets and leads to an improvement of 17-26 in AUC over baselines when predicting size fit outcomes."
]
} |
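A minimal sketch of the latent-size idea in the entry above: latent true sizes for customers and products are learned so that an ordinal model on the gap s_c - t_p explains the Small/Fit/Large outcomes. The toy data, thresholds, and central-difference gradient below are illustrative assumptions, not the exact models of the cited papers.

```python
import numpy as np

# Purchases: (customer, product, outcome) with 0 = ran small, 1 = fit, 2 = ran large,
# loosely following the latent-size idea (true sizes inferred from purchase/return data).
data = [(0, 0, 1), (0, 1, 2), (1, 0, 0), (1, 2, 1), (2, 1, 1), (2, 2, 0)]
n_cust, n_prod = 3, 3

s = np.zeros(n_cust)        # latent customer true sizes
t = np.zeros(n_prod)        # latent product true sizes
b = np.array([-1.0, 1.0])   # ordinal thresholds on the gap s_c - t_p

def outcome_probs(gap):
    # Ordinal logistic link: P(outcome <= k) = sigmoid(b_k + gap); a large positive
    # gap (customer bigger than product) pushes mass toward "ran small".
    cdf = 1.0 / (1.0 + np.exp(-(b + gap)))
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

lr, eps = 0.1, 1e-4
for _ in range(500):
    for c, p, y in data:
        gap = s[c] - t[p]
        # Central-difference gradient of the negative log-likelihood w.r.t. the gap.
        g = (-np.log(outcome_probs(gap + eps)[y])
             + np.log(outcome_probs(gap - eps)[y])) / (2 * eps)
        s[c] -= lr * g       # gap depends on s with sign +1 ...
        t[p] += lr * g       # ... and on t with sign -1

print("customer sizes:", s.round(2))
print("product sizes: ", t.round(2))
```

Recommendation then amounts to matching a customer with products whose learned size is close to theirs, e.g. by a similarity rule on the latent scale.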
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | To extend this notion of distance between @math and @math , Kantorovich considered a relaxed version in @cite_4 @cite_20 . When an optimal transport map exists, the following second Wasserstein distance recovers it. Further, @math is well-defined even when an optimal transport map might not exist. In particular, it is defined as an infimum over @math , the set of all joint probability distributions (or equivalently, couplings) whose first and second marginals are @math and @math , respectively. Any coupling @math achieving the infimum is called the optimal coupling. eq:kantor_relax is also referred to as the primal formulation of the Wasserstein- @math distance; a small sketch of this primal linear program for discrete measures follows this entry. Kantorovich also provided a dual formulation for eq:kantor_relax , well known as the Kantorovich duality theorem [villani2003topics, Theorem 1.3], in which @math denotes the constrained space of functions, defined as @math . | {
"cite_N": [
"@cite_4",
"@cite_20"
],
"mid": [
"2134670261",
"2036476394"
],
"abstract": [
"The following paper is reproduced from a Russian journal of the character of our own Proceedings of the National Academy of Sciences, Comptes Rendus (Doklady) de I'Academie des Sciences de I'URSS, 1942, Volume XXXVII, No. 7-8. The author is one of the most distinguished of Russian mathematicians. He has made very important contributions in pure mathematics in the theory of functional analysis, and has made equally important contributions to applied mathematics in numerical analysis and the theory and practice of computation. Although his exposition in this paper is quite terse and couched in mathematical language which may be difficult for some readers of Management Science to follow, it is thought that this presentation will: (1) make available to American readers generally an important work in the field of linear programming, (2) provide an indication of the type of analytic work which has been done and is being done in connection with rational planning in Russia, (3) through the specific examples mentioned indicate the types of interpretation which the Russians have made of the abstract mathematics (for example, the potential and field interpretations adduced in this country recently by W. Prager were anticipated in this paper). It is to be noted, however, that the problem of determining an effective method of actually acquiring the solution to a specific problem is not solved in this paper. In the category of development of such methods we seem to be, currently, ahead of the Russians.--A. Charnes, Northwestern Technological Institute and The Transportation Center.",
"In 1942, I considered a general problem on the most profitable translocation of masses in a compact metric space. The problem is as follows: Assume that we are given two mass distributions determined by additive set functions Φ(e) and Φ′(e) with Φ(R) = Φ′(R) = 1. A translocation of masses is a function Ψ(e, e′) that determines the mass translocated from a set e to a set e′ with [Ψ(e, R) = Φ(e); Ψ(R, e) = Φ′(e)]. The translocation work is defined by the integral"
]
} |
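The primal (Kantorovich) formulation mentioned in the entry above reduces, for discrete measures, to a linear program over couplings with fixed marginals. The sketch below sets it up directly with SciPy's `linprog`; the point sets and weights are arbitrary examples, and dedicated OT solvers would be used in practice.

```python
import numpy as np
from scipy.optimize import linprog

# Two discrete measures: mu on points x, nu on points y (arbitrary example values).
x = np.array([[0.0], [1.0], [2.0]])
y = np.array([[0.5], [2.5]])
mu = np.array([0.4, 0.4, 0.2])
nu = np.array([0.6, 0.4])
n, m = len(mu), len(nu)

# Quadratic ground cost c_ij = |x_i - y_j|^2.
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

# Primal Kantorovich LP: minimize <C, pi> over pi >= 0 with fixed marginals.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums of pi equal mu
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # column sums of pi equal nu
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
              bounds=(0, None), method="highs")

pi = res.x.reshape(n, m)
print("optimal coupling:\n", pi.round(3))
print("squared W2 distance:", round(res.fun, 4))
```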
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | As there is no easy way to ensure the feasibility of the constraints along the gradient updates, a common approach is to translate the optimization into a tractable form, while sacrificing the original goal of finding the optimal transport @cite_25 . Concretely, an entropic or a quadratic regularizer is added to the objective. This makes the dual an unconstrained problem, which can be numerically solved using the Sinkhorn algorithm @cite_25 or stochastic gradient methods @cite_15 @cite_8 ; a minimal Sinkhorn sketch follows this entry. The optimal transport can then be obtained from @math and @math , using the first-order optimality conditions of the Fenchel-Rockafellar duality theorem. | {
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_8"
],
"mid": [
"2962970351",
"2158131535",
"2767358676"
],
"abstract": [
"Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale problems routinely encountered in machine learning applications. These methods are able to manipulate arbitrary distributions (either discrete or continuous) by simply requiring to be able to draw samples from them, which is the typical setup in high-dimensional learning problems. This alleviates the need to discretize these densities, while giving access to provably convergent methods that output the correct distance without discretization error. These algorithms rely on two main ideas: (a) the dual OT problem can be re-cast as the maximization of an expectation; (b) entropic regularization of the primal OT problem results in a smooth dual optimization optimization which can be addressed with algorithms that have a provably faster convergence. We instantiate these ideas in three different computational setups: (i) when comparing a discrete distribution to another, we show that incremental stochastic optimization schemes can beat the current state of the art finite dimensional OT solver (Sinkhorn's algorithm) ; (ii) when comparing a discrete distribution to a continuous density, a re-formulation (semi-discrete) of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization ; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, and is more efficient than discretizing beforehand the two densities. We backup these claims on a set of discrete, semi-discrete and continuous benchmark problems.",
"Optimal transport distances are a fundamental family of distances for probability measures and histograms of features. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cost can quickly become prohibitive whenever the size of the support of these measures or the histograms' dimension exceeds a few hundred. We propose in this work a new family of optimal transport distances that look at transport problems from a maximum-entropy perspective. We smooth the classic optimal transport problem with an entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transport solvers. We also show that this regularized distance improves upon classic optimal transport distances on the MNIST classification problem.",
"This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT plan and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling."
]
} |
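A minimal Sinkhorn sketch for the entropic-regularized problem described in the entry above: alternating diagonal scalings of the Gibbs kernel exp(-C/eps) match the two marginals. The regularization strength and iteration count are illustrative choices.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=500):
    # Entropic OT: min <C, pi> - eps * H(pi) subject to the marginal constraints,
    # solved by alternating scalings u, v of the Gibbs kernel K = exp(-C / eps).
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)    # enforce column marginal nu
        u = mu / (K @ v)      # enforce row marginal mu
    pi = u[:, None] * K * v[None, :]
    return pi, float((pi * C).sum())

mu = np.array([0.4, 0.4, 0.2])
nu = np.array([0.6, 0.4])
C = np.array([[0.25, 6.25],
              [0.25, 2.25],
              [2.25, 0.25]])   # squared distances between two small point sets
pi, cost = sinkhorn(mu, nu, C)
print("regularized coupling:\n", pi.round(3))
print("transport cost <C, pi>:", round(cost, 4))
```

As eps shrinks, the regularized coupling approaches the unregularized LP solution, at the price of slower and less stable scaling iterations.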
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | In this paper, we take a different approach and aim to solve the dual problem without introducing a regularization. This idea is also considered classically in @cite_12 and more recently in @cite_21 and @cite_11 . The classical approach relies on exact knowledge of the density, which is not available in practice. The approach in @cite_21 relies on discrete Brenier theory, which is computationally expensive and not scalable. The most related work to ours is @cite_11 , with which we provide a formal comparison. | {
"cite_N": [
"@cite_21",
"@cite_12",
"@cite_11"
],
"mid": [
"2911275792",
"2292026763",
"2917201408"
],
"abstract": [
"This work builds the connection between the regularity theory of optimal transportation map, Monge-Ampere equation and GANs, which gives a theoretic understanding of the major drawbacks of GANs: convergence difficulty and mode collapse. According to the regularity theory of Monge-Ampere equation, if the support of the target measure is disconnected or just non-convex, the optimal transportation mapping is discontinuous. General DNNs can only approximate continuous mappings. This intrinsic conflict leads to the convergence difficulty and mode collapse in GANs. We test our hypothesis that the supports of real data distribution are in general non-convex, therefore the discontinuity is unavoidable using an Autoencoder combined with discrete optimal transportation map (AE-OT framework) on the CelebA data set. The testing result is positive. Furthermore, we propose to approximate the continuous Brenier potential directly based on discrete Brenier theory to tackle mode collapse. Comparing with existing method, this method is more accurate and effective.",
"We present a new, simple, and elegant algorithm for computing the optimal mapping for the Monge-Kantorovich problem with quadratic cost. The method arises from a reformulation of the dual problem into an unconstrained minimization of a convex, continuous functional, for which the derivative can be explicitly found. The Monge-Kantorovich problem has applications in many fields; examples from image warping and medical imaging are shown.",
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | The idea of solving the semi-dual optimization problem is classically considered in @cite_12 , where the authors derive a formula for the functional derivative of the objective function with respect to @math and propose to solve the optimization problem with the gradient descent method. Their approach is based on discretization of the space and knowledge of the explicit form of the probability density functions, which is not applicable to real-world, high-dimensional problems. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2292026763"
],
"abstract": [
"We present a new, simple, and elegant algorithm for computing the optimal mapping for the Monge-Kantorovich problem with quadratic cost. The method arises from a reformulation of the dual problem into an unconstrained minimization of a convex, continuous functional, for which the derivative can be explicitly found. The Monge-Kantorovich problem has applications in many fields; examples from image warping and medical imaging are shown."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | More recently, the authors in @cite_29 @cite_21 propose to learn the function @math in a semi-discrete setting, where one of the marginals is assumed to be a discrete distribution supported on a set of @math points @math , and the other marginal is assumed to have a continuous density with compact convex support @math . They show that the problem of learning the function @math is similar to the variational formulation of the Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. Moreover, they show that, in the semi-discrete setting, the optimal @math is of the form @math , and simplify the problem of learning @math to the problem of learning @math real numbers @math ; a stochastic-gradient sketch of this semi-discrete parameterization follows this entry. However, the objective function involves computing a polygonal partition of @math into @math convex cells, induced by the function @math , which is computationally challenging. Moreover, the learned optimal transport map @math transports the probability distribution from each convex cell to a single point @math , which results in generalization issues. Additionally, the proposed approach is semi-discrete, and as a result, does not scale with the number of samples. | {
"cite_N": [
"@cite_29",
"@cite_21"
],
"mid": [
"2766665711",
"2911275792"
],
"abstract": [
"In this work, we show the intrinsic relations between optimal transportation and convex geometry, especially the variational approach to solve Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. This leads to a geometric interpretation to generative models, and leads to a novel framework for generative models. By using the optimal transportation view of GAN model, we show that the discriminator computes the Kantorovich potential, the generator calculates the transportation map. For a large class of transportation costs, the Kantorovich potential can give the optimal transportation map by a close-form formula. Therefore, it is sufficient to solely optimize the discriminator. This shows the adversarial competition can be avoided, and the computational architecture can be simplified. Preliminary experimental results show the geometric method outperforms WGAN for approximating probability measures with multiple clusters in low dimensional space.",
"This work builds the connection between the regularity theory of optimal transportation map, Monge-Ampere equation and GANs, which gives a theoretic understanding of the major drawbacks of GANs: convergence difficulty and mode collapse. According to the regularity theory of Monge-Ampere equation, if the support of the target measure is disconnected or just non-convex, the optimal transportation mapping is discontinuous. General DNNs can only approximate continuous mappings. This intrinsic conflict leads to the convergence difficulty and mode collapse in GANs. We test our hypothesis that the supports of real data distribution are in general non-convex, therefore the discontinuity is unavoidable using an Autoencoder combined with discrete optimal transportation map (AE-OT framework) on the CelebA data set. The testing result is positive. Furthermore, we propose to approximate the continuous Brenier potential directly based on discrete Brenier theory to tackle mode collapse. Comparing with existing method, this method is more accurate and effective."
]
} |
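A sketch of the semi-discrete parameterization described in the entry above: with @math of the max-affine form max_i (<x, y_i> + h_i), learning reduces to n real numbers h_i, and a stochastic subgradient of the semi-dual objective avoids computing the polygonal partition explicitly. Target points, weights, and step size below are illustrative; this follows the standard semi-discrete semi-dual, not necessarily the exact algorithm of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # support of the discrete target
nu = np.array([0.5, 0.3, 0.2])                        # target weights
h = np.zeros(len(Y))                                  # the n real numbers being learned

# phi(x) = max_i <x, y_i> + h_i induces a convex-cell partition of the source space;
# minimizing E_mu[phi(x)] - sum_i nu_i h_i matches each cell's mass to nu_i.
lr = 0.05
for _ in range(20000):
    x = rng.standard_normal(2)            # source mu = N(0, I), sampled not discretized
    i_star = int(np.argmax(Y @ x + h))    # cell containing x
    g = -nu.copy()
    g[i_star] += 1.0                      # stochastic subgradient: e_{i*} - nu
    h -= lr * g

# The induced map sends every x in cell i to the single point y_i (= grad phi there),
# which is the "each cell collapses to one point" behavior noted above.
X = rng.standard_normal((50000, 2))
cells = np.argmax(X @ Y.T + h, axis=1)
print("empirical cell masses:", np.bincount(cells, minlength=len(Y)) / len(X))
print("target weights nu:    ", nu)
```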
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | Statistical analysis of learning the optimal transport map through the semi-dual optimization problem is studied in @cite_17 @cite_23 , where the authors establish a minimax convergence rate with respect to the number of samples for certain classes of regular probability distributions. They also propose a procedure that achieves the optimal convergence rate, which involves representing the function @math with a span of wavelet basis functions up to a certain order, while also requiring the function @math to be convex. However, they do not provide a computational algorithm to implement the procedure. | {
"cite_N": [
"@cite_23",
"@cite_17"
],
"mid": [
"2810943841",
"2946680009"
],
"abstract": [
"Isotonic regression is a standard problem in shape-constrained estimation where the goal is to estimate an unknown nondecreasing regression function @math from independent pairs @math where @math . While this problem is well understood both statistically and computationally, much less is known about its uncoupled counterpart where one is given only the unordered sets @math and @math . In this work, we leverage tools from optimal transport theory to derive minimax rates under weak moments conditions on @math and to give an efficient algorithm achieving optimal rates. Both upper and lower bounds employ moment-matching arguments that are also pertinent to learning mixtures of distributions and deconvolution.",
"Brenier's theorem is a cornerstone of optimal transport that guarantees the existence of an optimal transport map @math between two probability distributions @math and @math over @math under certain regularity conditions. The main goal of this work is to establish the minimax rates estimation rates for such a transport map from data sampled from @math and @math under additional smoothness assumptions on @math . To achieve this goal, we develop an estimator based on the minimization of an empirical version of the semi-dual optimal transport problem, restricted to truncated wavelet expansions. This estimator is shown to achieve near minimax optimality using new stability arguments for the semi-dual and a complementary minimax lower bound. These are the first minimax estimation rates for transport maps in general dimension."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | The approach proposed in this paper builds upon the recent work @cite_11 , where the proposal to solve the semi-dual optimization problem by representing the function @math with an ICNN appeared for the first time. The procedure proposed in @cite_11 involves solving a convex optimization problem to compute the convex conjugate @math for each sample in the batch, at each optimization iteration; this becomes computationally challenging to scale to large datasets. In this paper, by contrast, we propose a minimax formulation to learn the convex conjugate function in a scalable fashion; a sketch of this ICNN-based minimax idea follows this entry. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2917201408"
],
"abstract": [
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator."
]
} |
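A sketch of the ICNN-based minimax idea: one input convex network f plays the potential, a second network g parameterizes the conjugate through its gradient map, and the two are trained adversarially. The loss below is written in the spirit of the semi-dual minimax described above; architecture sizes, the inner-loop count, and the exact loss arrangement are assumptions for illustration, not the authors' precise algorithm.

```python
import torch
import torch.nn as nn

class ICNN(nn.Module):
    # Minimal input-convex network: convex in its input as long as the z-path
    # weights stay non-negative and the activation is convex and non-decreasing.
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Ax = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)
        self.act = nn.ReLU()

    def forward(self, x):
        z = self.act(self.Ax[0](x))
        for Ax, Wz in zip(self.Ax[1:], self.Wz):
            z = self.act(Ax(x) + Wz(z))
        return self.out(z)

    def clamp_convex(self):
        for lin in list(self.Wz) + [self.out]:
            lin.weight.data.clamp_(min=0.0)

def grad_map(net, y):
    # Gradient of the scalar potential, used as a transport map candidate.
    y = y.clone().requires_grad_(True)
    return torch.autograd.grad(net(y).sum(), y, create_graph=True)[0]

f, g = ICNN(2), ICNN(2)
f.clamp_convex(); g.clamp_convex()
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

# Minimax training in the spirit of the semi-dual:
#   min_f max_g  E_x[f(x)] + E_y[<y, grad g(y)> - f(grad g(y))].
for step in range(2000):
    x = torch.randn(256, 2)                 # samples from the source distribution
    y = 0.5 * torch.randn(256, 2) + 2.0     # samples from the target distribution
    for _ in range(5):                      # inner maximization over g
        gy = grad_map(g, y)
        loss_g = -((y * gy).sum(dim=1) - f(gy).squeeze(-1)).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step(); g.clamp_convex()
    gy = grad_map(g, y).detach()
    loss_f = f(x).mean() - f(gy).mean()     # f-dependent terms of the outer objective
    opt_f.zero_grad(); loss_f.backward(); opt_f.step(); f.clamp_convex()

# After training, grad_map(g, .) acts as the (approximate) map from target to source,
# with grad_map(f, .) as its inverse candidate, per Brenier's theorem.
```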
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | There are also alternative approaches to approximate the optimal transport map that are not based on solving the semi-dual optimization problem. In @cite_24 , the authors propose to approximate the optimal transport map through an adversarial computational procedure, by considering the dual optimization problem and replacing the constraint with a quadratic penalty term; a minimal penalized-dual sketch follows this entry. However, in contrast to other regularization-based approaches such as @cite_8 , they consider a GAN architecture, and propose to take the generator, after training is finished, as the optimal transport map. They also provide a theoretical justification for their proposal; however, it is valid only in an ideal setting where the generator has infinite capacity, the discriminator is optimal at each update step, and the cost is equal to the exact Wasserstein distance. These ideal conditions are far from holding in a practical setting. | {
"cite_N": [
"@cite_24",
"@cite_8"
],
"mid": [
"2950516984",
"2767358676"
],
"abstract": [
"Computing optimal transport maps between high-dimensional and continuous distributions is a challenging problem in optimal transport (OT). Generative adversarial networks (GANs) are powerful generative models which have been successfully applied to learn maps across high-dimensional domains. However, little is known about the nature of the map learned with a GAN objective. To address this problem, we propose a generative adversarial model in which the discriminator's objective is the @math -Wasserstein metric. We show that during training, our generator follows the @math -geodesic between the initial and the target distributions. As a consequence, it reproduces an optimal map at the end of training. We validate our approach empirically in both low-dimensional and high-dimensional continuous settings, and show that it outperforms prior methods on image data.",
"This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT plan and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling."
]
} |
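A minimal sketch of the penalty idea described in the entry above: the dual constraint u(x) + v(y) <= c(x, y) is enforced softly by a one-sided quadratic penalty on sampled pairs, making the dual unconstrained. Network sizes, the penalty weight, and the toy Gaussians are illustrative assumptions, not the cited papers' exact setup.

```python
import torch
import torch.nn as nn

def mlp(dim):
    return nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

u, v = mlp(2), mlp(2)                 # the two dual potentials
opt = torch.optim.Adam(list(u.parameters()) + list(v.parameters()), lr=1e-3)
lam = 10.0                            # penalty weight (a tuning knob, not a cited value)

for step in range(3000):
    x = torch.randn(128, 2)           # source samples
    y = torch.randn(128, 2) + 3.0     # target samples
    c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # pairwise quadratic cost
    # Soft version of the dual constraint u(x) + v(y) <= c(x, y) over all pairs.
    violation = torch.relu(u(x) + v(y).T - c)
    loss = -(u(x).mean() + v(y).mean()) + lam * (violation ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    print("penalized dual value:", (u(x).mean() + v(y).mean()).item())
```

In the GAN variant discussed above, a generator is trained alongside such penalized potentials and is kept, after training, as the transport map.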
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | Another approach, proposed in @cite_22 , is based on a generative learning framework to approximate the optimal coupling, instead of the optimal transport map. The approach involves a low-dimensional latent random variable, two generators that take the latent variable as input and map it to the high-dimensional space where the real data resides, and two discriminators that respectively take as inputs the real data and the outputs of the generators. Although the proposed approach is attractive when an optimal transport map does not exist, it is computationally expensive because it involves learning four deep neural networks, and it suffers from the unused-capacity issues that the WGAN architecture suffers from @cite_33 . | {
"cite_N": [
"@cite_22",
"@cite_33"
],
"mid": [
"2942758405",
"2962879692"
],
"abstract": [
"Optimal Transport (OT) naturally arises in many machine learning applications, yet the heavy computational burden limits its wide-spread uses. To address the scalability issue, we propose an implicit generative learning-based framework called SPOT (Scalable Push-forward of Optimal Transport). Specifically, we approximate the optimal transport plan by a pushforward of a reference distribution, and cast the optimal transport problem into a minimax problem. We then can solve OT problems efficiently using primal dual stochastic gradient-type algorithms. We also show that we can recover the density of the optimal transport plan using neural ordinary differential equations. Numerical experiments on both synthetic and real datasets illustrate that SPOT is robust and has favorable convergence behavior. SPOT also allows us to efficiently sample from the optimal transport plan, which benefits downstream applications such as domain adaptation.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | Finally, a procedure has recently been proposed to approximate a transport map that is optimal only on a subspace projection instead of the entire space @cite_1 . This approach is inspired by the sliced Wasserstein distance method for approximating the Wasserstein distance @cite_16 @cite_27 ; a minimal sliced-Wasserstein sketch follows this entry. However, selecting the subspace to project onto is a non-trivial task, and optimally selecting the projection is an optimization over the Grassmann manifold, which is computationally challenging. | {
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_1"
],
"mid": [
"2963398989",
"1639961155",
"2970661343"
],
"abstract": [
"Generative Adversarial Nets (GANs) are very successful at modeling distributions from given samples, even in the high-dimensional case. However, their formulation is also known to be hard to optimize and often not stable. While this is particularly true for early GAN formulations, there has been significant empirically motivated and theoretically founded progress to improve stability, for instance, by using the Wasserstein distance rather than the Jenson-Shannon divergence. Here, we consider an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation. By augmenting this approach with a discriminator we improve its accuracy. We found our approach to be significantly more stable compared to even the improved Wasserstein GAN. Further, unlike the traditional GAN loss, the loss formulated in our method is a good measure of the actual distance between the distributions and, for the first time for GAN training, we are able to show estimates for the same.",
"This paper proposes a new definition of the averaging of discrete probability distributions as a barycenter over the Monge-Kantorovich optimal transport space. To overcome the time complexity involved by the numerical solving of such problem, the original Wasserstein metric is replaced by a sliced approximation over 1D distributions. This enables us to introduce a new fast gradient descent algorithm to compute Wasserstein barycenters of point clouds. This new notion of barycenter of probabilities is likely to find applications in computer vision where one wants to average features defined as distributions. We show an application to texture synthesis and mixing, where a texture is characterized by the distribution of the response to a multi-scale oriented filter bank. This leads to a simple way to navigate over a convex domain of color textures.",
"Sliced Wasserstein metrics between probability measures solve the optimal transport (OT) problem on univariate projections, and average such maps across projections. The recent interest for the SW distance shows that much can be gained by looking at optimal maps between measures in smaller subspaces, as opposed to the curse-of-dimensionality price one has to pay in higher dimensions. Any transport estimated in a subspace remains, however, an object that can only be used in that subspace. We propose in this work two methods to extrapolate, from an transport map that is optimal on a subspace, one that is nearly optimal in the entire space. We prove that the best optimal transport plan that takes such \"subspace detours\" is a generalization of the Knothe-Rosenblatt transport. We show that these plans can be explicitly formulated when comparing Gaussians measures (between which the Wasserstein distance is usually referred to as the Bures or Fr 'echet distance). Building from there, we provide an algorithm to select optimal subspaces given pairs of Gaussian measures, and study scenarios in which that mediating subspace can be selected using prior information. We consider applications to NLP and evaluation of image quality (FID scores)."
]
} |
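A minimal sliced-Wasserstein sketch: project both samples onto random directions, where 1D optimal transport reduces to sorting, and average the resulting 1D costs. The number of projections is an arbitrary choice, and this estimator assumes equally sized samples.

```python
import numpy as np

def sliced_w2(X, Y, n_proj=200, seed=0):
    # Monte-Carlo sliced (squared) W2: average the closed-form 1D OT cost over
    # random directions; sorting realizes the monotone (optimal) 1D matching.
    rng = np.random.default_rng(seed)
    d, total = X.shape[1], 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return total / n_proj

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
Y = X + 2.0                 # shift every coordinate by 2
print(sliced_w2(X, Y))      # close to 4, since E[(2 * sum_i theta_i)^2] = 4 on the sphere
```

Each slice only sees a projection, which is exactly the subspace-optimality limitation the entry above points out.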
1908.10422 | 2965761667 | Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aim to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only – without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency – which revealed that our proposed dialogue rewards strongly correlate with human judgements. | This article contributes to the literature of neural-based chatbots as follows. First, our methodology for training value-based DRL agents uses only unlabelled dialogue data. Previous work requires manual extensions to the dialogue data @cite_13 or expensive and time-consuming ratings for training a reward function @cite_1 . Second, our proposed reward function strongly correlates with human judgements. Previous work has only shown moderate positive correlations between target dialogue rewards and predicted ones @cite_1 , or relies on high-level annotations requiring external and language-dependent resources typically induced from labelled data @cite_0 . Third, while previous work on DRL chatbots trains a single agent @cite_1 @cite_13 , our study---confirmed by automatic and human evaluations---shows that an ensemble-based approach performs better than a counterpart single agent. The remainder of this article elaborates on these contributions. | {
"cite_N": [
"@cite_1",
"@cite_0",
"@cite_13"
],
"mid": [
"2784808670",
"2904468521",
"2963167310"
],
"abstract": [
"We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including neural network and template-based models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A B testing with real-world users, where it performed significantly better than other systems. The results highlight the potential of coupling ensemble systems with deep reinforcement learning as a fruitful path for developing real-world, open-domain conversational agents.",
"Abstract End-to-end dialog systems are gaining interest due to the recent advances of deep neural networks and the availability of large human–human dialog corpora. However, in spite of being of fundamental importance to systematically improve the performance of this kind of systems, automatic evaluation of the generated dialog utterances is still an unsolved problem. Indeed, most of the proposed objective metrics shown low correlation with human evaluations. In this paper, we evaluate a two-dimensional evaluation metric that is designed to operate at sentence level, which considers the syntactic and semantic information carried along the answers generated by an end-to-end dialog system with respect to a set of references. The proposed metric, when applied to outputs generated by the systems participating in track 2 of the DSTC-6 challenge, shows a higher correlation with human evaluations (up to 12.8 relative improvement at the system level) than the best of the alternative state-of-the-art automatic metrics currently available.",
""
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from a security breach. Much of the progress has been made by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects ( @math ) and modalities ( @math ), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos, and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | Most existing face anti-spoofing datasets contain only the RGB modality, including the two widely used PAD datasets Replay-Attack @cite_31 and CASIA-FASD @cite_50 . Even the recently released SiW @cite_26 dataset, collected at high image resolution, contains only RGB data. With the widespread application of face recognition on mobile phones, there are also some RGB datasets recorded by replaying face videos on smartphones, such as MSU-MFSD @cite_47 , Replay-Mobile @cite_21 and OULU-NPU @cite_46 . | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_46",
"@cite_21",
"@cite_50",
"@cite_47"
],
"mid": [
"",
"",
"2728977829",
"2552267233",
"1982209341",
"2003092530"
],
"abstract": [
"",
"",
"The vulnerabilities of face-based biometric systems to presentation attacks have been finally recognized but yet we lack generalized software-based face presentation attack detection (PAD) methods performing robustly in practical mobile authentication scenarios. This is mainly due to the fact that the existing public face PAD datasets are beginning to cover a variety of attack scenarios and acquisition conditions but their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations. In this present work, we introduce a new public face PAD database, OULU-NPU, aiming at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices and presentation attack instruments (PAI). This publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using high-resolution frontal cameras of six different smartphones. The high-quality print and videoreplay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison on the generalization capabilities between new and existing approaches. The baseline results using color texture analysis based face PAD method demonstrate the challenging nature of the database.",
"For face authentication to become widespread on mobile devices, robust countermeasures must be developed for face presentation-attack detection (PAD). Existing databases for evaluating face-PAD methods do not capture the specific characteristics of mobile devices. We introduce a new database, REPLAY-MOBILE, for this purpose.1 This publicly available database includes 1,200 videos corresponding to 40 clients. Besides the genuine videos, the database contains a variety of presentation-attacks. The database also provides three non- overlapping sets for training, validating and testing classifiers for the face-PAD problem. This will help researchers in comparing new approaches to existing algorithms in a standardized fashion. For this purpose, we also provide baseline results with state- of-the-art approaches based on image quality analysis and face texture analysis.",
"Face antispoofing has now attracted intensive attention, aiming to assure the reliability of face biometrics. We notice that currently most of face antispoofing databases focus on data with little variations, which may limit the generalization performance of trained models since potential attacks in real world are probably more complex. In this paper we release a face antispoofing database which covers a diverse range of potential attack variations. Specifically, the database contains 50 genuine subjects, and fake faces are made from the high quality records of the genuine faces. Three imaging qualities are considered, namely the low quality, normal quality and high quality. Three fake face attacks are implemented, which include warped photo attack, cut photo attack and video attack. Therefore each subject contains 12 videos (3 genuine and 9 fake), and the final database contains 600 video clips. Test protocol is provided, which consists of 7 scenarios for a thorough evaluation from all possible aspects. A baseline algorithm is also given for comparison, which explores the high frequency information in the facial region to determine the liveness. We hope such a database can serve as an evaluation platform for future researches in the literature.",
"Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios."
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from security breaches. Much of the progress has been made possible by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects ( @math ) and modalities ( @math ), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos, and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | As attack techniques are constantly upgraded, some new types of presentation attacks have emerged, e.g., 3D @cite_0 and silicone masks @cite_38 . These attacks are more realistic than traditional 2D attacks. Therefore, the drawbacks of visible cameras are revealed when facing these realistic face masks. Fortunately, some new sensors have been introduced to provide more possibilities for face PAD methods, such as depth cameras, multi-spectral cameras and infrared light cameras. Kim et al. @cite_4 introduce a new dataset to distinguish between the facial skin and mask materials by exploiting their reflectance. Kose et al. @cite_43 propose a 2D+3D face mask attack dataset to study the effects of mask attacks. However, the associated data has not been made public. 3DMAD @cite_0 is the first publicly available 3D mask dataset, which is recorded using a Microsoft Kinect sensor and consists of Depth and RGB modalities. Another multi-modal face PAD dataset is Msspoof @cite_6 , containing visible and near-infrared images of real accesses and printed spoofing attacks with @math objects. | {
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_6",
"@cite_0",
"@cite_43"
],
"mid": [
"2887396754",
"1983008792",
"2368383431",
"2125320497",
"2011016023"
],
"abstract": [
"We investigate the vulnerability of convolutional neural network (CNN) based face-recognition (FR) systems to presentation attacks (PA) performed using custom-made silicone masks. Previous works have studied the vulnerability of CNN-FR systems to 2D PAs such as print-attacks, or digital- video replay attacks, and to rigid 3D masks. This is the first study to consider PAs performed using custom-made flexible silicone masks. Before embarking on research on detecting a new variety of PA, it is important to estimate the seriousness of the threat posed by the type of PA. In this work we demonstrate that PAs using custom silicone masks do pose a serious threat to state-of-the-art FR systems. Using a new dataset based on six custom silicone masks, we show that the vulnerability of each FR system in this study is at least 10 times higher than its false match rate. We also propose a simple but effective presentation attack detection method, based on a low-cost thermal camera.",
"This research presents a novel 2D feature space where real faces and masked fake faces can be effectively discriminated. We exploit the reflectance disparity based on albedo between real faces and fake materials. The feature vector used consists of radiance measurements of the forehead region under 850 and 685 nm illuminations. Facial skin and mask material show linearly separable distributions in the feature space proposed. By simply applying Fisher's linear discriminant, we have achieved 97.78 accuracy in fake face detection. Our method can be easily implemented in commercial face verification systems.",
"In this chapter, we give an overview of spoofing attacks and spoofing countermeasures for face recognition systems , with a focus on visual spectrum systems (VIS) in 2D and 3D, as well as near-infrared (NIR) and multispectral systems . We cover the existing types of spoofing attacks and report on their success to bypass several state-of-the-art face recognition systems. The results on two different face spoofing databases in VIS and one newly developed face spoofing database in NIR show that spoofing attacks present a significant security risk for face recognition systems in any part of the spectrum. The risk is partially reduced when using multispectral systems. We also give a systematic overview of the existing anti-spoofing techniques, with an analysis of their advantages and limitations and prospective for future work.",
"The problem of detecting face spoofing attacks (presentation attacks) has recently gained a well-deserved popularity. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for 2D face recognition. Additionally, we analyze Local Binary Patterns based coun-termeasures using both color and depth data, obtained by Kinect. For this purpose, we introduce the 3D Mask Attack Database (3DMAD), the first publicly available 3D spoofing database, recorded with a low-cost depth camera. Extensive experiments on 3DMAD show that easily attainable facial masks can pose a serious threat to 2D face recognition systems and LBP is a powerful weapon to eliminate it.",
"There are several types of spoofing attacks to face recognition systems such as photograph, video or mask attacks. Recent studies show that face recognition systems are vulnerable to these attacks. In this paper, a countermeasure technique is proposed to protect face recognition systems against mask attacks. To the best of our knowledge, this is the first time a countermeasure is proposed to detect mask attacks. The reason for this delay is mainly due to the unavailability of public mask attacks databases. In this study, a 2D+3D face mask attacks database is used which is prepared for a research project in which the authors are all involved. The performance of the countermeasure is evaluated on both the texture images and the depth maps, separately. The results show that the proposed countermeasure gives satisfactory results using both the texture images and the depth maps. The performance of the countermeasure is observed to be slight better when the technique is applied on texture images instead of depth maps, which proves that face texture provides more information than 3D face shape characteristics using the proposed approach."
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from security breaches. Much of the progress has been made possible by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects ( @math ) and modalities ( @math ), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos, and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | Face anti-spoofing has been studied for decades. Some previous works @cite_34 @cite_51 @cite_1 @cite_53 attempt to detect the evidence of liveness (e.g., eye-blinking). Other works are based on contextual @cite_7 @cite_9 and motion @cite_22 @cite_44 @cite_57 information. To improve the robustness to illumination variation, some algorithms adopt the HSV and YCbCr color spaces @cite_11 @cite_39 , as well as the Fourier spectrum @cite_56 . All of these methods use handcrafted features, such as LBP @cite_15 @cite_13 @cite_23 @cite_24 , HoG @cite_23 @cite_24 @cite_19 and GLCM @cite_19 . They achieve relatively satisfactory performance on small public face anti-spoofing datasets. | {
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_7",
"@cite_53",
"@cite_9",
"@cite_1",
"@cite_39",
"@cite_44",
"@cite_57",
"@cite_56",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_51",
"@cite_11"
],
"mid": [
"2163487272",
"1971062533",
"2145426126",
"2145131129",
"",
"2140593870",
"2551249768",
"2106938474",
"2012612618",
"2591381994",
"2159270577",
"1996664229",
"2042883034",
"2163352848",
"2151343288",
"1994030682",
"2341318667"
],
"abstract": [
"Spoofing attacks are one of the security traits that biometric recognition systems are proven to be vulnerable to. When spoofed, a biometric recognition system is bypassed by presenting a copy of the biometric evidence of a valid user. Among all biometric modalities, spoofing a face recognition system is particularly easy to perform: all that is needed is a simple photograph of the user. In this paper, we address the problem of detecting face spoofing attacks. In particular, we inspect the potential of texture features based on Local Binary Patterns (LBP) and their variations on three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. For this purpose, we introduce REPLAY-ATTACK, a novel publicly available face spoofing database which contains all the mentioned types of attacks. We conclude that LBP, with ∼15 Half Total Error Rate, show moderate discriminability when confronted with a wide set of attack types.",
"Face recognition, which is security-critical, has been widely deployed in our daily life. However, traditional face recognition technologies in practice can be spoofed easily, for example, by using a simple printed photo. In this paper, we propose a novel face liveness detection approach to counter spoofing attacks by recovering sparse 3D facial structure. Given a face video or several images captured from more than two viewpoints, we detect facial landmarks and select key frames. Then, the sparse 3D facial structure can be recoveredfrom the selected key frames. Finally, an Support Vector Machine (SVM) classifier is trained to distinguish the genuine and fake faces. Compared with the previous works, the proposed method has the following advantages. First, it gives perfect liveness detection results, which meets the security requirement of face biometric systems. Second, it is independent on cameras or systems, which works well on different devices. Experiments with genuine faces versus planar photo faces and warped photo faces demonstrate the superiority of the proposed method over the state-of-the-art liveness detection methods.",
"This paper presents a face liveness detection system against spoofing with photographs, videos, and 3D models of a valid user in a face recognition system. Anti-spoofing clues inside and outside a face are both exploited in our system. The inside-face clues of spontaneous eyeblinks are employed for anti-spoofing of photographs and 3D models. The outside-face clues of scene context are used for anti-spoofing of video replays. The system does not need user collaborations, i.e. it runs in a non-intrusive manner. In our system, the eyeblink detection is formulated as an inference problem of an undirected conditional graphical framework which models contextual dependencies in blink image sequences. The scene context clue is found by comparing the difference of regions of interest between the reference scene image and the input one, which is based on the similarity computed by local binary pattern descriptors on a series of fiducial points extracted in scale space. Extensive experiments are carried out to show the effectiveness of our system.",
"For a robust face biometric system, a reliable anti-spoofing approach must be deployed to circumvent the print and replay attacks. Several techniques have been proposed to counter face spoofing, however a robust solution that is computationally efficient is still unavailable. This paper presents a new approach for spoofing detection in face videos using motion magnification. Eulerian motion magnification approach is used to enhance the facial expressions commonly exhibited by subjects in a captured video. Next, two types of feature extraction algorithms are proposed: (i) a configuration of LBP that provides improved performance compared to other computationally expensive texture based approaches and (ii) motion estimation approach using HOOF descriptor. On the Print Attack and Replay Attack spoofing datasets, the proposed framework improves the state-of-art performance, especially HOOF descriptor yielding a near perfect half total error rate of 0 and 1.25 respectively.",
"",
"Resisting spoofing attempts via photographs and video playbacks is a vital issue for the success of face biometrics. Yet, the ldquolivenessrdquo topic has only been partially studied in the past. In this paper we are suggesting a holistic liveness detection paradigm that collaborates with standard techniques in 2D face biometrics. The experiments show that many attacks are avertible via a combination of anti-spoofing measures. We have investigated the topic using real-time techniques and applied them to real-life spoofing scenarios in an indoor, yet uncontrolled environment.",
"The vulnerabilities of face biometric authentication systems to spoofing attacks have received a significant attention during the recent years. Some of the proposed countermeasures have achieved impressive results when evaluated on intratests, i.e., the system is trained and tested on the same database. Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g., when the system is trained on one database and then evaluated on another database. This is a major concern in biometric antispoofing research that is mostly overlooked. In this letter, we propose a novel solution based on describing the facial appearance by applying Fisher vector encoding on speeded-up robust features extracted from different color spaces. The evaluation of our countermeasure on three challenging benchmark face-spoofing databases, namely the CASIA face antispoofing database, the replay-attack database, and MSU mobile face spoof database, showed excellent and stable performance across all the three datasets. Most importantly, in interdatabase tests, our proposed approach outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used.",
"Face recognition provides many advantages compared with other available biometrics, but it is particularly subject to spoofing. The most accurate methods in literature addressing this problem, rely on the estimation of the three-dimensionality of faces, which heavily increase the whole cost of the system. This paper proposes an effective and efficient solution to problem of face spoofing. Starting from a set of automatically located facial points, we exploit geometric invariants for detecting replay attacks. The presented results demonstrate the effectiveness and efficiency of the proposed indices.",
"As Face Recognition(FR) technology becomes more mature and commercially available in the market, many different anti-spoofing techniques have been recently developed to enhance the security, reliability, and effectiveness of FR systems. As a part of anti-spoofing techniques, face liveness detection plays an important role to make FR systems be more secured from various attacks. In this paper, we propose a novel method for face liveness detection by using focus, which is one of camera functions. In order to identify fake faces (e.g. 2D pictures), our approach utilizes the variation of pixel values by focusing between two images sequentially taken in different focuses. The experimental result shows that our focus-based approach is a new method that can significantly increase the level of difficulty of spoof attacks, which is a way to improve the security of FR systems. The performance is evaluated and the proposed method achieves 100 fake detection in a given DoF(Depth of Field).",
"A detent apparatus for maintaining a draw bar in a centered position includes an indentation formed in a cylindrical surface of the pivotable taileye of the draw bar structure. A roller is arranged to ride along the surface and into the indentation, the roller being urged into the indentation by symmetrically arranged levers, pivoted as first class levers, with the inner, shorter arm of each lever urging the roller and the outer arm being urged by compression coil springs which are seated on the anchorage structure attached to a railroad car.",
"Current face biometric systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access. Inspired by image quality assessment, characterisation of printing artefacts and differences in light reflection, the authors propose to approach the problem of spoofing detection from texture analysis point of view. Indeed, face prints usually contain printing quality defects that can be well detected using texture and local shape features. Hence, the authors present a novel approach based on analysing facial image for detecting whether there is a live person in front of the camera or a face print. The proposed approach analyses the texture and gradient structures of the facial images using a set of low-level feature descriptors, fast linear classification scheme and score level fusion. Compared to many previous works, the authors proposed approach is robust and does not require user-cooperation. In addition, the texture features that are used for spoofing detection can also be used for face recognition. This provides a unique feature space for coupling spoofing detection and face recognition. Extensive experimental analysis on three publicly available databases showed excellent results compared to existing works.",
"Personal identity verification based on biometrics has received increasing attention since it allows reliable authentication through intrinsic characteristics, such as face, voice, iris, fingerprint, and gait. Particularly, face recognition techniques have been used in a number of applications, such as security surveillance, access control, crime solving, law enforcement, among others. To strengthen the results of verification, biometric systems must be robust against spoofing attempts with photographs or videos, which are two common ways of bypassing a face recognition system. In this paper, we describe an anti-spoofing solution based on a set of low-level feature descriptors capable of distinguishing between ‘live’ and ‘spoof’ images and videos. The proposed method explores both spatial and temporal information to learn distinctive characteristics between the two classes. Experiments conducted to validate our solution with datasets containing images and videos show results comparable to state-of-the-art approaches.",
"Spoofing attacks mainly include printing artifacts, electronic screens and ultra-realistic face masks or models. In this paper, we propose a component-based face coding approach for liveness detection. The proposed method consists of four steps: (1) locating the components of face; (2) coding the low-level features respectively for all the components; (3) deriving the high-level face representation by pooling the codes with weights derived from Fisher criterion; (4) concatenating the histograms from all components into a classifier for identification. The proposed framework makes good use of micro differences between genuine faces and fake faces. Meanwhile, the inherent appearance differences among different components are retained. Extensive experiments on three published standard databases demonstrate that the method can achieve the best liveness detection performance in three databases.",
"Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed \"uniform,\" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the \"uniform\" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.",
"We present a real-time liveness detection approach against photograph spoofing in face recognition, by recognizing spontaneous eyeblinks, which is a non-intrusive manner. The approach requires no extra hardware except for a generic webcamera. Eyeblink sequences often have a complex underlying structure. We formulate blink detection as inference in an undirected conditional graphical framework, and are able to learn a compact and efficient observation and transition potentials from data. For purpose of quick and accurate recognition of the blink behavior, eye closity, an easily-computed discriminative measure derived from the adaptive boosting algorithm, is developed, and then smoothly embedded into the conditional model. An extensive set of experiments are presented to show effectiveness of our approach and how it outperforms the cascaded Adaboost and HMM in task of eyeblink detection.",
"Abstract In recent years, face recognition has often been proposed for personal identification. However, there are many difficulties with face recognition systems. For example, an imposter could login the face recognition system by stealing the facial photograph of a person registered on the facial recognition system. The security of the face recognition system requires a live detection system to prevent system login using photographs of a human face. This paper describes an effective, efficient face live detection method which uses physiological motion detected by estimating the eye blinks from a captured video sequence and an eye contour extraction algorithm. This technique uses the conventional active shape model with a random forest classifier trained to recognize the local appearance around each landmark. This local match provides more robustness for optimizing the fitting procedure. Tests show that this face live detection approach successfully discriminates a live human face from a photograph of the registered person's face to increase the face recognition system reliability.",
"Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts."
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from security breaches. Much of the progress has been made possible by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects ( @math ) and modalities ( @math ), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos, and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | CNN-based methods @cite_36 @cite_40 @cite_20 @cite_5 @cite_52 @cite_3 have been presented recently in the face PAD community. They treat face PAD as a binary classification problem and achieve remarkable improvements in intra-dataset testing. Liu et al. @cite_26 design a network architecture to leverage two kinds of auxiliary information (depth maps and rPPG signals) as supervision. Amin et al. @cite_3 introduce a new perspective for solving face anti-spoofing by inversely decomposing a spoof face into the live face and the spoof noise pattern. However, these methods exhibit poor generalization ability in cross-dataset testing due to over-fitting to the training data. This problem remains open, although some works @cite_40 @cite_20 adopt transfer learning to train a CNN model from ImageNet @cite_18 . These works show the need for a larger PAD dataset. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_36",
"@cite_52",
"@cite_3",
"@cite_40",
"@cite_5",
"@cite_20"
],
"mid": [
"2108598243",
"",
"",
"",
"2950744208",
"2578178601",
"1704933117",
"2418633638"
],
"abstract": [
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"",
"",
"",
"Many prior face anti-spoofing works develop discriminative models for recognizing the subtle differences between live and spoof faces. Those approaches often regard the image as an indivisible unit, and process it holistically, without explicit modeling of the spoofing process. In this work, motivated by the noise modeling and denoising algorithms, we identify a new problem of face de-spoofing, for the purpose of anti-spoofing: inversely decomposing a spoof face into a spoof noise and a live face, and then utilizing the spoof noise for classification. A CNN architecture with proper constraints and supervisions is proposed to overcome the problem of having no ground truth for the decomposition. We evaluate the proposed method on multiple face anti-spoofing databases. The results show promising improvements due to our spoof noise modeling. Moreover, the estimated spoof noise provides a visualization which helps to understand the added spoof noise by each spoof medium.",
"Recently deep Convolutional Neural Networks have been successfully applied in many computer vision tasks and achieved promising results. So some works have introduced the deep learning into face anti-spoofing. However, most approaches just use the final fully-connected layer to distinguish the real and fake faces. Inspired by the idea of each convolutional kernel can be regarded as a part filter, we extract the deep partial features from the convolutional neural network (CNN) to distinguish the real and fake faces. In our prosed approach, the CNN is fine-tuned firstly on the face spoofing datasets. Then, the block principle component analysis (PCA) method is utilized to reduce the dimensionality of features that can avoid the over-fitting problem. Lastly, the support vector machine (SVM) is employed to distinguish the real the real and fake faces. The experiments evaluated on two public available databases, Replay-Attack and CASIA, show the proposed method can obtain satisfactory results compared to the state-of-the-art methods.",
"Though having achieved some progresses, the hand-crafted texture features, e.g., LBP [23], LBP-TOP [11] are still unable to capture the most discriminative cues between genuine and fake faces. In this paper, instead of designing feature by ourselves, we rely on the deep convolutional neural network (CNN) to learn features of high discriminative ability in a supervised manner. Combined with some data pre-processing, the face anti-spoofing performance improves drastically. In the experiments, over 70 relative decrease of Half Total Error Rate (HTER) is achieved on two challenging datasets, CASIA [36] and REPLAY-ATTACK [7] compared with the state-of-the-art. Meanwhile, the experimental results from inter-tests between two datasets indicates CNN can obtain features with better generalization ability. Moreover, the nets trained using combined data from two datasets have less biases between two datasets.",
"With the wide deployment of the face recognition systems in applications from deduplication to mobile device unlocking, security against the face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays, and 3D masks of a face. We address the problem of face spoof detection against the print (photo) and replay (photo or video) attacks based on the analysis of image distortion ( e.g. , surface reflection, moire pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is smartphone unlock, given that the growing number of smartphones have the face unlock and mobile payment capabilities. We build an unconstrained smartphone spoof attack database (MSU USSA) containing more than 1000 subjects. Both the print and replay attacks are captured using the front and rear cameras of a Nexus 5 smartphone. We analyze the image distortion of the print and replay attacks using different: 1) intensity channels (R, G, B, and grayscale); 2) image regions (entire image, detected face, and facial component between nose and chin); and 3) feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on the public-domain Idiap Replay-Attack, CASIA FASD, and MSU-MFSD databases, and the MSU USSA database show that the proposed approach is effective in face spoof detection for both the cross-database and intra-database testing scenarios. User studies of our Android face spoof detection system involving 20 participants show that the proposed approach works very well in real application scenarios."
]
} |
1908.10468 | 2971069003 | Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustworthiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor by creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide a visualization of how the image would appear at different stages of the disease. We analyze our method on a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference between two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at this https URL. | One way to visualize evidence of a class using deep learning is to perform backpropagation of the outputs of a trained classifier @cite_1 . In @cite_5 , for example, a model is trained to predict the presence of 14 diseases in chest x-rays, and class activation maps @cite_2 are used to show which regions of the x-rays have a larger influence on the classifier's decision. However, as shown in @cite_7 , these methods suffer from low resolution or from highlighting only limited regions of the original images. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_7",
"@cite_2"
],
"mid": [
"2770241596",
"2785760873",
"2963635991",
"2295107390"
],
"abstract": [
"We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest X-ray dataset, containing over 100,000 frontal-view X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases.",
"Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.",
"Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients the method produces compellingly realistic disease effect maps which are very close to the observed effects.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1."
]
} |
1908.10468 | 2971069003 | Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustworthiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor by creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide a visualization of how the image would appear at different stages of the disease. We analyze our method on a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference between two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at this https URL. | In @cite_7 , researchers visualize what brain MRIs of patients with mild cognitive impairment would look like if they developed Alzheimer's disease, generating disease effect maps. To solve problems with other visualization methods, they propose an adversarial setup. A generator is trained to modify an input image so that it fools a discriminator. The modifications the generator outputs are used as the visualization of evidence of one class. This setup inspires our method. However, instead of classification labels, we use regression values and a novel loss function. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2963635991"
],
"abstract": [
"Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients the method produces compellingly realistic disease effect maps which are very close to the observed effects."
]
} |
1908.10468 | 2971069003 | Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustworthiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor by creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide a visualization of how the image would appear at different stages of the disease. We analyze our method on a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference between two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at this https URL. | There have been other works on generating visual attribution for regression. In @cite_4 , the authors start by training a GAN on a large dataset of frontal x-rays, and then train an encoder that maps from an x-ray to its latent space vector. Finally, they train a small model for regression that receives the latent vector of the images from a smaller dataset and outputs a value which is used for diagnosing congestive heart failure. To interpret their model, they backpropagate through the small regression model, taking steps in the latent space to reach the threshold of diagnosis, and generate the image associated with the new diagnosis. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2900003150"
],
"abstract": [
"Generative Visual Rationales can identify imaging features learned by a model trained to predict congestive heart failure from chest radiographs, allowing radiologists to better identify faults and..."
]
} |
1908.10398 | 2912215636 | The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability to acquire skills autonomously. But it is still not clear how they can best be deployed in real-world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games—and use the game of ‘Noughts and Crosses’ with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play. | There is a similarly limited amount of previous work on humanoid robots playing games against human opponents. Notable exceptions include @cite_18 , where the DB humanoid robot learns to play air hockey using a Nearest Neighbour classifier; @cite_9 , where the Nico humanoid torso robot plays the game of rock-paper-scissors using a 'Wizard of Oz' setting; @cite_39 , where the Sky humanoid robot plays catch and juggling using inverse kinematics and induced parameters with least squares linear regression; @cite_28 , where the Nao robot plays a quiz game, an arm imitation game, and a dance game using tabular reinforcement learning; @cite_33 , where the Genie humanoid robot plays the poker game using a 'Wizard of Oz' setting; and @cite_7 , where the NAO robot plays Checkers using a MinMax search tree. Most of these robots exhibit only non-verbal abilities and are either teleoperated or based on heuristic methods, which suggests that verbal abilities in autonomous trainable robots playing games are underdeveloped. Apart from @cite_4 @cite_24 , we are not aware of any other previous work on humanoid robots playing social games against human opponents and trained with deep learning methods. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_39",
"@cite_24"
],
"mid": [
"1965218672",
"2559112319",
"2161062680",
"2769375497",
"2092252440",
"2084907907",
"1989972452",
"2771590356"
],
"abstract": [
"We present a method for humanoid robots to quickly learn new dynamic tasks from observing others and from practice. Ways in which the robot can adapt to initial and also changing conditions are described. Agents are given domain knowledge in the form of task primitives. A key element of our approach is to break learning problems up into as many simple learning problems as possible. We present a case study of a humanoid robot learning to play air hockey.",
"Training robots to perceive, act and communicate using multiple modalities still represents a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, where we teach a humanoid robot how to play the game of noughts and crosses. Given that multiple multimodal skills can be trained to play this game, we focus our attention to training the robot to perceive the game, and to interact in this game. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98 of the games. A pilot test of the proposed multimodal system for the targeted game---integrating speech, vision and gestures---reports that reasonable and fluent interactions can be achieved using the proposed approach.",
"This paper describes the study of human behaviors in a poker game with the game playing humanoid robot. Betting decision and nonverbal behaviors of human players were analyzed between human–human and the human–humanoid poker game. It was found that card hand strength is related to the betting strategy and nonverbal interaction. Moreover, engagement in the poker game with the humanoid was assessed through questionnaire and by measuring the nonverbal behaviors between playtime and breaktime.",
"In search for better technological solutions for education, we adapted a principle from economic game theory, namely that giving a help will promote collaboration and eventually long-term relations between a robot and a child. This principle has been shown to be effective in games between humans and between humans and computer agents. We compared the social and cognitive engagement of children when playing checkers game combined with a social strategy against a robot or against a computer. We found that by combining the social and game strategy the children (average age of 8.3 years) had more empathy and social engagement with the robot since the children did not want to necessarily win against it. This finding is promising for using social strategies for the creation of long-term relations between robots and children and making educational tasks more engaging. An additional outcome of the study was the significant difference in the perception of the children about the difficulty of the game – the game with the robot was seen as more challenging and the robot – as a smarter opponent. This finding might be due to the higher perceived or expected intelligence from the robot, or because of the higher complexity of seeing patterns in three-dimensional world.",
"For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation \"in the wild.\" The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.",
"Using a humanoid robot and a simple children's game, we examine the degree to which variations in behavior result in attributions of mental state and intentionality. Participants play the well-known children's game \"rock-paper-scissors\" against a robot that either plays fairly, or that cheats in one of two ways. In the \"verbal cheat\" condition, the robot announces the wrong outcome on several rounds which it loses, declaring itself the winner. In the \"action cheat\"' condition, the robot changes its gesture after seeing its opponent's play. We find that participants display a greater level of social engagement and make greater attributions of mental state when playing against the robot in the conditions in which it cheats.",
"Entertainment robots in theme park environments typically do not allow for physical interaction and contact with guests. However, catching and throwing back objects is one form of physical engagement that still maintains a safe distance between the robot and participants. Using a theme park type animatronic humanoid robot, we developed a test bed for a throwing and catching game scenario. We use an external camera system (ASUS Xtion PRO LIVE) to locate balls and a Kalman filter to predict ball destination and timing. The robot's hand and joint-space are calibrated to the vision coordinate system using a least-squares technique, such that the hand can be positioned to the predicted location. Successful catches are thrown back two and a half meters forward to the participant, and missed catches are detected to trigger suitable animations that indicate failure. Human to robot partner juggling (three ball cascade pattern, one hand for each partner) is also achieved by speeding up the catching throwing cycle. We tested the throwing catching system on six participants (one child and five adults, including one elderly), and the juggling system on three skilled jugglers.",
"Deep reinforcement learning for interactive multimodal robots is attractive for endowing machines with trainable skill acquisition. But this form of learning still represents several challenges. The challenge that we focus in this paper is effective policy learning. To address that, in this paper we compare the Deep Q-Networks (DQN) method against a variant that aims for stronger decisions than the original method by avoiding decisions with the lowest negative rewards. We evaluated our baseline and proposed algorithms in agents playing the game of Noughts and Crosses with two grid sizes (3×3 and 5×5). Experimental results show evidence that our proposed method can lead to more effective policies than the baseline DQN method, which can be used for training interactive social robots."
]
} |
1908.10398 | 2912215636 | Abstract The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability to acquire skills autonomously. But it is still not clear how they can best be deployed in real-world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games—and use the game of ‘Noughts and Crosses’ with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play. | In the remainder of the article we describe a deep learning-based approach for efficiently training a robot to behave with reasonable performance in a near-real-world deployment. In particular, we measure the effectiveness of neural-based game-move interpretation and of Deep Q-Networks (DQN) @cite_16 for interactive social robots. Field trial results show that the proposed approach can induce reasonable and competitive behaviours, especially when they are not affected by unseen noisy conditions. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2145339207"
],
"abstract": [
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action."
]
} |
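To make the DQN technique referenced in the record above (@cite_16) concrete, here is a minimal sketch of a DQN-style temporal-difference update in PyTorch, applied to a 3×3 Noughts and Crosses board. The tiny network, state encoding, reward, and all hyperparameters are illustrative assumptions rather than the configuration used in the cited work, and the target network would also be re-synchronized periodically in a full implementation.

```python
import torch
import torch.nn as nn

# Minimal DQN-style update for a 3x3 board (illustrative assumptions only).
# State: 9 cells encoded as -1 / 0 / +1; one Q-value per candidate move.
q_net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 9))
target_net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 9))
target_net.load_state_dict(q_net.state_dict())  # periodically re-synced in full DQN
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor (assumed)

def dqn_update(state, action, reward, next_state, done):
    """One temporal-difference step: Q(s,a) <- r + gamma * max_a' Q_target(s',a')."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + (1.0 - done) * gamma * target_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example transition: empty board, play the centre cell, small shaping reward.
s = torch.zeros(9)
s2 = s.clone(); s2[4] = 1.0
dqn_update(s, action=4, reward=0.1, next_state=s2, done=0.0)
```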
1908.10357 | 2971237743 | In this paper, we are interested in bottom-up multi-person human pose estimation. A typical bottom-up pipeline consists of two main steps: heatmap prediction and keypoint grouping. We mainly focus on the first step for improving heatmap prediction accuracy. We propose Higher-Resolution Network (HigherHRNet), which is a simple extension of the High-Resolution Network (HRNet). HigherHRNet generates higher-resolution feature maps by deconvolving the high-resolution feature maps outputted by HRNet, which are spatially more accurate for small and medium persons. Then, we build high-quality multi-level features and perform multi-scale pose prediction. The extra computation overhead is marginal and negligible in comparison to existing bottom-up methods that rely on multi-scale image pyramids or large input image size to generate accurate pose heatmaps. HigherHRNet surpasses all existing bottom-up methods on the COCO dataset without using multi-scale test. The code and models will be released. | Top-down methods @cite_19 @cite_26 @cite_28 @cite_0 @cite_21 @cite_16 @cite_7 @cite_30 detect the keypoints of a single person within a person bounding box. The person bounding boxes are usually generated by an object detector @cite_3 @cite_18 @cite_6. Mask R-CNN @cite_0 directly adds a keypoint detection branch on Faster R-CNN @cite_3 and reuses features after ROIPooling. G-RMI @cite_28 and subsequent methods further break top-down methods into two steps and use separate models for person detection and pose estimation. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_16"
],
"mid": [
"2307770531",
"2565639579",
"2916798096",
"2964221239",
"2578797046",
"",
"2613718673",
"2962731685",
"",
"2963402313",
"2963781481"
],
"abstract": [
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"",
"The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19 relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code1 and the detection results for person used will be publicly available for further research.",
"We propose a method for multi-person detection and 2-D pose estimation that achieves state-of-art results on the challenging COCO keypoints task. It is a simple, yet powerful, top-down approach consisting of two stages. In the first stage, we predict the location and scale of boxes which are likely to contain people, for this we use the Faster RCNN detector. In the second stage, we estimate the keypoints of the person potentially contained in each proposed bounding box. For each keypoint type we predict dense heatmaps and offsets using a fully convolutional ResNet. To combine these outputs we introduce a novel aggregation procedure to obtain highly localized keypoint predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression (NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based confidence score estimation, instead of box-level scoring. Trained on COCO data alone, our final system achieves average precision of 0.649 on the COCO test-dev set and the 0.643 test-standard sets, outperforming the winner of the 2016 COCO keypoints challenge and other recent state-of-art. Further, by using additional in-house labeled data we obtain an even higher average precision of 0.685 on the test-dev set and 0.673 on the test-standard set, more than 5 absolute improvement compared to the previous best performing method on the same dataset.",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-of-the-art detectors and observe that most hard false positives result from classification instead of localization. We conjecture that: (1) Shared feature representation is not optimal due to the mismatched goals of feature learning for classification and localization; (2) multi-task learning helps, yet optimization of the multi-task loss may result in sub-optimal for individual tasks; (3) large receptive field for different scales leads to redundant context information for small objects. We demonstrate the potential of detector classification power by a simple, effective, and widely-applicable Decoupled Classification Refinement (DCR) network. DCR samples hard false positives from the base classifier in Faster RCNN and trains a RCNN-styled strong classifier. Experiments show new state-of-the-art results on PASCAL VOC and COCO without any bells and whistles.",
"",
"There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at https: github.com leoxiaobin pose.pytorch.",
"Multi-person pose estimation in the wild is challenging. Although state-of-the-art human detectors have demonstrated good performance, small errors in localization and recognition are inevitable. These errors can cause failures for a single-person pose estimator (SPPE), especially for methods that solely depend on human detection results. In this paper, we propose a novel regional multi-person pose estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes. Our framework consists of three components: Symmetric Spatial Transformer Network (SSTN), Parametric Pose Non-Maximum-Suppression (NMS), and Pose-Guided Proposals Generator (PGPG). Our method is able to handle inaccurate bounding boxes and redundant detections, allowing it to achieve 76:7 mAP on the MPII (multi person) dataset[3]. Our model and source codes are made publicly available."
]
} |
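The two-step structure of the top-down pipelines described in the record above (detect person boxes, then run a single-person pose estimator on each crop) can be summarized in a short sketch. Here `person_detector` and `pose_net` are hypothetical placeholder callables and the crop size is an arbitrary choice, so this is a generic illustration rather than the exact G-RMI or Mask R-CNN pipeline.

```python
import torch

def top_down_pose(image, person_detector, pose_net, crop_size=(256, 192)):
    """Generic two-stage top-down pipeline sketch (placeholder components).

    image:           float tensor of shape (3, H, W)
    person_detector: image -> list of integer (x0, y0, x1, y1) person boxes
    pose_net:        (1, 3, h, w) crop -> (1, K, h', w') keypoint heatmaps
    """
    poses = []
    for (x0, y0, x1, y1) in person_detector(image):
        crop = image[:, y0:y1, x0:x1]                            # crop one person
        crop = torch.nn.functional.interpolate(
            crop[None], size=crop_size, mode="bilinear",
            align_corners=False)                                 # fixed input size
        heatmaps = pose_net(crop)[0]                             # (K, Hm, Wm)
        K, Hm, Wm = heatmaps.shape
        flat = heatmaps.view(K, -1).argmax(dim=1)                # peak per keypoint
        ys, xs = flat // Wm, flat % Wm
        # map heatmap peaks back to original image coordinates
        sx, sy = (x1 - x0) / Wm, (y1 - y0) / Hm
        poses.append(torch.stack([x0 + xs * sx, y0 + ys * sy], dim=1))  # (K, 2)
    return poses
```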
1908.10357 | 2971237743 | In this paper, we are interested in bottom-up multi-person human pose estimation. A typical bottom-up pipeline consists of two main steps: heatmap prediction and keypoint grouping. We mainly focus on the first step for improving heatmap prediction accuracy. We propose Higher-Resolution Network (HigherHRNet), which is a simple extension of the High-Resolution Network (HRNet). HigherHRNet generates higher-resolution feature maps by deconvolving the high-resolution feature maps outputted by HRNet, which are spatially more accurate for small and medium persons. Then, we build high-quality multi-level features and perform multi-scale pose prediction. The extra computation overhead is marginal and negligible in comparison to existing bottom-up methods that rely on multi-scale image pyramids or large input image size to generate accurate pose heatmaps. HigherHRNet surpasses all existing bottom-up methods on the COCO dataset without using multi-scale test. The code and models will be released. | Bottom-up methods @cite_11 @cite_27 @cite_8 @cite_17 @cite_12 detect identity-free body joints for all the persons in an image and then group them into individuals. OpenPose @cite_17 uses a two-branch multi-stage network with one branch for heatmap prediction and one branch for grouping. OpenPose uses a grouping method named part affinity fields, which learns a 2D vector field linking two keypoints. Grouping is done by calculating the line integral between two keypoints and grouping the pair with the largest integral. Newell et al. @cite_12 use a stacked hourglass network @cite_30 for both heatmap prediction and grouping. Grouping is done by a method named associative embedding, which assigns each keypoint a "tag" (a vector representation) and groups keypoints based on the @math distance between tag vectors. PersonLab @cite_1 uses a dilated ResNet @cite_5 and groups keypoints by directly learning a 2D offset field for each pair of keypoints. | {
"cite_N": [
"@cite_30",
"@cite_11",
"@cite_8",
"@cite_1",
"@cite_27",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2307770531",
"2175012183",
"2509865052",
"2962773068",
"2382036597",
"2194775991",
"2952819818",
"2559085405"
],
"abstract": [
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation1.",
"Despite of the recent success of neural networks for human pose estimation, current approaches are limited to pose estimation of a single person and cannot handle humans in groups or crowds. In this work, we propose a method that estimates the poses of multiple persons in an image in which a person can be occluded by another person or might be truncated. To this end, we consider multi-person pose estimation as a joint-to-person association problem. We construct a fully connected graph from a set of detected joint candidates in an image and resolve the joint-to-person association and outlier detection using integer linear programming. Since solving joint-to-person association jointly for all persons in an image is an NP-hard problem and even approximations are expensive, we solve the problem locally for each person. On the challenging MPII Human Pose Dataset for multiple persons, our approach achieves the accuracy of a state-of-the-art method, but it is 6,000 to 19,000 times faster.",
"We present a box-free bottom-up approach for the tasks of pose estimation and instance segmentation of people in multi-person images using an efficient single-shot model. The proposed PersonLab model tackles both semantic-level reasoning and object-part associations using part-based modeling. Our model employs a convolutional network which learns to detect individual keypoints and predict their relative displacements, allowing us to group keypoints into person pose instances. Further, we propose a part-induced geometric embedding descriptor which allows us to associate semantic person pixels with their corresponding person instance, delivering instance-level person segmentations. Our system is based on a fully-convolutional architecture and allows for efficient inference, with runtime essentially independent of the number of people present in the scene. Trained on COCO data alone, our system achieves COCO test-dev keypoint average precision of 0.665 using single-scale inference and 0.687 using multi-scale inference, significantly outperforming all previous bottom-up pose estimation systems. We are also the first bottom-up method to report competitive results for the person class in the COCO instance segmentation task, achieving a person category average precision of 0.417.",
"The goal of this paper is to advance the state-of-the-art of articulated pose estimation in scenes with multiple people. To that end we contribute on three fronts. We propose (1) improved body part detectors that generate effective bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms that allow to assemble the proposals into a variable number of consistent body part configurations; and (3) an incremental optimization strategy that explores the search space more efficiently thus leading both to better performance and significant speed-up factors. Evaluation is done on two single-person and two multi-person pose estimation benchmarks. The proposed approach significantly outperforms best known multi-person pose estimation results while demonstrating competitive performance on the task of single person pose estimation (Models and code available at http: pose.mpi-inf.mpg.de).",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to both multi-person pose estimation and instance segmentation and report state-of-the-art performance for multi-person pose on the MPII and MS-COCO datasets.",
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency."
]
} |
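As a rough illustration of the associative embedding grouping described in the record above (@cite_12), the sketch below greedily assigns detected keypoints to person instances by the Euclidean distance between their tag vectors. The tag threshold, the running-mean reference tag, and the single greedy pass are simplifying assumptions; the cited method's full matching procedure differs in its details.

```python
import numpy as np

def group_by_tags(detections, tag_threshold=1.0):
    """Greedy associative-embedding grouping (simplified sketch).

    detections: list over joint types; each entry is a list of
                (x, y, tag) tuples for that joint, tag: np.ndarray.
    Returns a list of person dicts mapping joint index -> (x, y).
    """
    people, ref_tags = [], []          # person instances and their mean tags
    for joint_id, candidates in enumerate(detections):
        for (x, y, tag) in candidates:
            best = None
            if ref_tags:               # distance to every existing person's tag
                dists = [np.linalg.norm(tag - t) for t in ref_tags]
                best = int(np.argmin(dists))
            if (best is not None and dists[best] < tag_threshold
                    and joint_id not in people[best]):
                people[best][joint_id] = (x, y)                # join this person
                ref_tags[best] = (ref_tags[best] + tag) / 2.0  # update mean tag
            else:                                              # start a new person
                people.append({joint_id: (x, y)})
                ref_tags.append(np.asarray(tag, dtype=float))
    return people
```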
1908.10357 | 2971237743 | In this paper, we are interested in bottom-up multi-person human pose estimation. A typical bottom-up pipeline consists of two main steps: heatmap prediction and keypoint grouping. We mainly focus on the first step for improving heatmap prediction accuracy. We propose Higher-Resolution Network (HigherHRNet), which is a simple extension of the High-Resolution Network (HRNet). HigherHRNet generates higher-resolution feature maps by deconvolving the high-resolution feature maps outputted by HRNet, which are spatially more accurate for small and medium persons. Then, we build high-quality multi-level features and perform multi-scale pose prediction. The extra computation overhead is marginal and negligible in comparison to existing bottom-up methods that rely on multi-scale image pyramids or large input image size to generate accurate pose heatmaps. HigherHRNet surpasses all existing bottom-up methods on the COCO dataset without using multi-scale test. The code and models will be released. | There are mainly four methods to generate high resolution feature maps. (1) Encoder-decoder @cite_30 @cite_0 @cite_7 @cite_32 @cite_13 @cite_23 @cite_29 captures the context information in the encoder path and recovers high resolution features in the decoder path. The decoder usually contains a sequence of bilinear upsample operations with skip connections from encoder features with the same resolution. (2) Dilated convolution @cite_14 @cite_31 @cite_2 @cite_15 @cite_33 @cite_24 @cite_34 @cite_9 ("atrous" convolution) is used to remove several stride convolutions/max poolings to preserve feature map resolution. Dilated convolution prevents losing spatial information but introduces more computational cost. (3) Deconvolution (transposed convolution) @cite_19 is used in sequence at the end of a network to efficiently increase feature map resolution. SimpleBaseline @cite_19 demonstrates that deconvolution can generate high quality feature maps for heatmap prediction. (4) Recently, a High-Resolution Network (HRNet) @cite_26 has been proposed as an efficient way to keep a high resolution path throughout the network. HRNet @cite_26 consists of multiple branches with different resolutions. Lower resolution branches capture contextual information and higher resolution branches preserve spatial information. With multi-scale fusions between branches, HRNet @cite_26 can generate high resolution feature maps with rich semantics. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_15",
"@cite_29",
"@cite_9",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"2307770531",
"2286929393",
"2916798096",
"2964309882",
"2964221239",
"2630837129",
"2738804062",
"2911831070",
"2952232639",
"",
"2891778567",
"2963402313",
"2563705555",
"2412782625",
"1923697677",
"2963136578",
"2963881378"
],
"abstract": [
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.",
"",
"Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: github.com tensorflow models tree master research deeplab.",
"The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19 relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code1 and the detection results for person used will be publicly available for further research.",
"In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"Many machine vision applications require predictions for every pixel of the input image (for example semantic segmentation, boundary detection). Models for such problems usually consist of encoders which decreases spatial resolution while learning a high-dimensional representation, followed by decoders who recover the original input resolution and result in low-dimensional predictions. While encoders have been studied rigorously, relatively few studies address the decoder side. Therefore this paper presents an extensive comparison of a variety of decoders for a variety of pixel-wise prediction tasks. Our contributions are: (1) Decoders matter: we observe significant variance in results between different types of decoders on various problems. (2) We introduce a novel decoder: bilinear additive upsampling. (3) We introduce new residual-like connections for decoders. (4) We identify two decoder types which give a consistently high performance.",
"We present a single-shot, bottom-up approach for whole image parsing. Whole image parsing, also known as Panoptic Segmentation, generalizes the tasks of semantic segmentation for 'stuff' classes and instance segmentation for 'thing' classes, assigning both semantic and instance labels to every pixel in an image. Recent approaches to whole image parsing typically employ separate standalone modules for the constituent semantic and instance segmentation tasks and require multiple passes of inference. Instead, the proposed DeeperLab image parser performs whole image parsing with a significantly simpler, fully convolutional approach that jointly addresses the semantic and instance segmentation tasks in a single-shot manner, resulting in a streamlined system that better lends itself to fast processing. For quantitative evaluation, we use both the instance-based Panoptic Quality (PQ) metric and the proposed region-based Parsing Covering (PC) metric, which better captures the image parsing quality on 'stuff' classes and larger object instances. We report experimental results on the challenging Mapillary Vistas dataset, in which our single model achieves 31.95 (val) 31.6 PQ (test) and 55.26 PC (val) with 3 frames per second (fps) on GPU or near real-time speed (22.6 fps on GPU) with reduced accuracy.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .",
"",
"The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that outperform human-invented architectures and achieve state-of-the-art performance on three dense prediction tasks including 82.7 on Cityscapes (street scene parsing), 71.3 on PASCAL-Person-Part (person-part segmentation), and 87.9 on PASCAL VOC 2012 (semantic image segmentation). Additionally, the resulting architecture is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems.",
"There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at https: github.com leoxiaobin pose.pytorch.",
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet ."
]
} |
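The four strategies above trade spatial resolution against receptive field in different ways. The PyTorch snippet below contrasts three of them on a toy feature map: a dilated convolution that keeps resolution while enlarging the receptive field, a stride-2 deconvolution that doubles resolution (as used in sequence in SimpleBaseline-style heads @cite_19), and a bilinear upsample with a skip connection in the spirit of encoder-decoder designs. Channel counts and kernel sizes are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 32, 32)  # a low-resolution feature map (assumed shape)

# Dilated ("atrous") convolution: same 32x32 output, larger receptive field.
# For a 3x3 kernel, padding = dilation preserves the spatial size.
dilated = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)
print(dilated(x).shape)   # torch.Size([1, 256, 32, 32])

# Deconvolution (transposed convolution): doubles resolution to 64x64.
deconv = nn.ConvTranspose2d(256, 256, kernel_size=4, stride=2, padding=1)
print(deconv(x).shape)    # torch.Size([1, 256, 64, 64])

# Encoder-decoder step: bilinear upsample plus a skip connection from an
# encoder feature map that is already at the target resolution.
skip = torch.randn(1, 256, 64, 64)
up = nn.functional.interpolate(x, scale_factor=2, mode="bilinear",
                               align_corners=False)
print((up + skip).shape)  # torch.Size([1, 256, 64, 64])
```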
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective feature fusion methods, which train the two-stream model in a separate way. However, it is hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | Before deep learning became popular, most traditional computer-vision approaches applied shallow hand-crafted features to action recognition. Improved Dense Trajectories (IDT) @cite_36, which uses densely sampled trajectory features, indicates that temporal information can be processed differently from spatial information. Instead of extending the Harris corner detector into 3D, it utilizes the warped optical flow field to obtain trajectories and eliminate the effects of camera motion in the video sequence. For each tracked corner, hand-crafted features such as HOG, HOF, and MBH are extracted along the trajectory. Despite their excellent performance, IDT and its improvements @cite_2, @cite_37, @cite_6 are still computationally expensive and become intractable on large-scale datasets. | {
"cite_N": [
"@cite_36",
"@cite_37",
"@cite_6",
"@cite_2"
],
"mid": [
"2105101328",
"2500355674",
"2766724094",
"1871385855"
],
"abstract": [
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"A fast reference frame selection algorithm for the HEVC encoder is proposed.A relationship between the content similarity and reference frame selection is derived.The content similarity is studied without any extra computational complexity.Experimental results show that the proposed algorithm efficiently removes the encoding complexity of the best reference frame decision process. The high efficiency video coding (HEVC) is the state-of-the-art video coding standard, which achieves about 50 bit rate saving while maintaining the same visual quality as compared to the H.264 AVC. This achieved coding efficiency benefits from a set of advanced coding tools, such as the multiple reference frames (MRF) based interframe prediction, which efficiently improves the coding efficiency of the HEVC encoder, while it also increases heavy computation into the HEVC encoder. The high encoding complexity becomes a bottleneck for the high definition videos and HEVC encoder to be widely used in real-time and low power multimedia applications. In this paper, we propose a content similarity based fast reference frame selection algorithm for reducing the computational complexity of the multiple reference frames based interframe prediction. Based the large content similarity between the parent prediction unit (Inter_2N2N) and the children prediction units (Inter_2NN, Inter_N2N, Inter_NN, Inter_2NnU, Inter_2NnD, Inter_nL2N, and Inter_nR2N), the reference frame selection information of the children prediction units are obtained by learning the results of their parent prediction unit. Experimental results show that the proposed algorithm can reduce about 54.29 and 43.46 MRF encoding time saving for the low-delay-main and random-access-main coding structures, respectively, while the rate distortion performance degradation is negligible.",
"Recently, deep neural networks based hashing methods have greatly improved the multimedia retrieval performance by simultaneously learning feature representations and binary hash functions. Inspired by the latest advance in the asymmetric hashing scheme, in this work, we propose a novel Deep Asymmetric Pairwise Hashing approach (DAPH) for supervised hashing. The core idea is that two deep convolutional models are jointly trained such that their output codes for a pair of images can well reveal the similarity indicated by their semantic labels. A pairwise loss is elaborately designed to preserve the pairwise similarities between images as well as incorporating the independence and balance hash code learning criteria. By taking advantage of the flexibility of asymmetric hash functions, we devise an efficient alternating algorithm to optimize the asymmetric deep hash functions and high-quality binary code jointly. Experiments on three image benchmarks show that DAPH achieves the state-of-the-art performance on large-scale image retrieval.",
"A Comprehensive study on the BoVW pipeline for action recognition task.An evaluation and a generic analysis on 13 encoding methods.Intra-normalization for supervector based encoding methods.An evaluation on three fusion methods.Several good practices of the BoVW pipeline for action recognition task. Video based action recognition is one of the important and challenging problems in computer vision research. Bag of visual words model (BoVW) with local features has been very popular for a long time and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from local features, which is mainly composed of five steps; (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Although many efforts have been made in each step independently in different scenarios, their effects on action recognition are still unknown. Meanwhile, video data exhibits different views of visual patterns , such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Fusing these descriptors is crucial for boosting the final performance of an action recognition system. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practices to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate and improper choice in one of the steps may counteract the performance improvement of other steps. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid supervector, by exploring the complementarity of different BoVW frameworks with improved dense trajectories. Using this representation, we obtain impressive results on the three challenging datasets; HMDB51 (61.9 ), UCF50 (92.3 ), and UCF101 (87.9 )."
]
} |
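As a rough sketch of the dense-trajectory idea behind IDT @cite_36 in the record above, the snippet below tracks a dense grid of points through consecutive dense optical-flow fields with OpenCV. The camera-motion compensation and the HOG/HOF/MBH descriptors of the full method are intentionally omitted, and the grid step and trajectory length are assumed values.

```python
import cv2
import numpy as np

def track_dense_points(frames, step=8, traj_len=15):
    """Track a dense grid of points through per-frame optical flow (sketch only).

    frames: list of same-size grayscale uint8 images. Returns an array of
    shape (num_points, <= traj_len + 1, 2) with (x, y) positions per step.
    """
    h, w = frames[0].shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)  # (N, 2)
    trajs = [pts.copy()]
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        rows = np.clip(pts[:, 1].astype(int), 0, h - 1)   # y -> row index
        cols = np.clip(pts[:, 0].astype(int), 0, w - 1)   # x -> col index
        pts = pts + flow[rows, cols]      # displace each point by the local flow
        trajs.append(pts.copy())
        if len(trajs) == traj_len + 1:    # cap the trajectory length
            break
    return np.stack(trajs, axis=1)
```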
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective feature fusion methods, which train the two-stream model in a separate way. However, it is hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | An active line of research devoted to deep networks for video representation learning has been trying to devise effective ConvNet architectures @cite_40 @cite_3 @cite_19 @cite_23. @cite_40 attempt to design a deep network which stacks CNN-based frame-level features of a fixed size and then conducts spatiotemporal convolutions for video-level feature learning. However, the unsatisfactory results implied the difficulty of CNNs in capturing the motion information of videos. Later, many works in this genre leverage ConvNets trained on frames to extract low-level features and then perform high-level temporal integration of those features using pooling @cite_28 @cite_30, high-dimensional feature encoding @cite_26 @cite_21, or recurrent neural networks @cite_23 @cite_22 @cite_3 @cite_32. Recently, the CNN-LSTM frameworks @cite_23 @cite_22, which use stacked LSTM networks to connect frame-level representations and explore long-term temporal relationships of the video for learning a more robust representation, have yielded an improvement for modeling the temporal dynamics of convolutional features in videos. However, this genre, which uses a CNN as an encoder and an RNN as a decoder of the video, loses the low-level temporal context that is essential for action recognition. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_19",
"@cite_40",
"@cite_23"
],
"mid": [
"2884883933",
"2608988379",
"",
"2962710043",
"2556024076",
"1923404803",
"2235034809",
"2745519816",
"2016053056",
"2951183276"
],
"abstract": [
"Adversarial perturbations are noise-like patterns that can subtly change the data, while failing an otherwise accurate classifier. In this paper, we propose to use such perturbations for improving the robustness of video representations. To this end, given a well-trained deep model for per-frame video recognition, we first generate adversarial noise adapted to this model. Using the original data features from the full video sequence and their perturbed counterparts, as two separate bags, we develop a binary classification problem that learns a set of discriminative hyperplanes – as a subspace – that will separate the two bags from each other. This subspace is then used as a descriptor for the video, dubbed discriminative subspace pooling. As the perturbed features belong to data classes that are likely to be confused with the original features, the discriminative subspace will characterize parts of the feature space that are more representative of the original data, and thus may provide robust video representations. To learn such descriptors, we formulate a subspace learning objective on the Stiefel manifold and resort to Riemannian optimization methods for solving it efficiently. We provide experiments on several video datasets and demonstrate state-of-the-art results.",
"In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13 relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.",
"",
"Popular deep models for action recognition in videos generate independent predictions for short clips, which are then pooled heuristically to assign an action label to the full video segment. As not all frames may characterize the underlying action-indeed, many are common across multiple actions-pooling schemes that impose equal importance on all frames might be unfavorable. In an attempt to tackle this problem, we propose discriminative pooling, based on the notion that among the deep features generated on all short clips, there is at least one that characterizes the action. To this end, we learn a (nonlinear) hyperplane that separates this unknown, yet discriminative, feature from the rest. Applying multiple instance learning in a large-margin setup, we use the parameters of this separating hyperplane as a descriptor for the full video segment. Since these parameters are directly related to the support vectors in a max-margin framework, they serve as robust representations for pooling of the features. We formulate a joint objective and an efficient solver that learns these hyperplanes per video and the corresponding action classifiers over the hyperplanes. Our pooling scheme is end-to-end trainable within a deep framework. We report results from experiments on three benchmark datasets spanning a variety of challenges and demonstrate state-of-the-art performance across these tasks.",
"The CNN-encoding of features from entire videos for the representation of human actions has rarely been addressed. Instead, CNN work has focused on approaches to fuse spatial and temporal networks, but these were typically limited to processing shorter sequences. We present a new video representation, called temporal linear encoding (TLE) and embedded inside of CNNs as a new layer, which captures the appearance and motion throughout entire videos. It encodes this aggregated information into a robust video feature representation, via end-to-end learning. Advantages of TLEs are: (a) they encode the entire video into a compact feature representation, learning the semantics and a discriminative feature space, (b) they are applicable to all kinds of networks like 2D and 3D CNNs for video classification, and (c) they model feature interactions in a more expressive way and without loss of information. We conduct experiments on two challenging human action datasets: HMDB51 and UCF101. The experiments show that TLE outperforms current state-of-the-art methods on both datasets.",
"Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 73.0 ).",
"Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition UCF101 (92.7 ) and HMDB51 (67.2 ).",
"Learning image representations with ConvNets by pre-training on ImageNet has proven useful across many visual understanding tasks including object detection, semantic segmentation, and image captioning. Although any image representation can be applied to video frames, a dedicated spatiotemporal representation is still vital in order to incorporate motion patterns that cannot be captured by appearance based models alone. This paper presents an empirical ConvNet architecture search for spatiotemporal feature learning, culminating in a deep 3-dimensional (3D) Residual ConvNet. Our proposed architecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51, THUMOS14, and ASLAN while being 2 times faster at inference time, 2 times smaller in model size, and having a more compact representation.",
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized."
]
} |
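A minimal CNN-LSTM sketch of the encoder-decoder genre discussed in the related-work passage above. This is an illustrative sketch only, assuming PyTorch; the tiny backbone, feature sizes, and clip shape are assumptions for readability, not the architecture of any cited work.

# A per-frame CNN encodes appearance; an LSTM integrates features over time.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes, feat_dim=512, hidden_dim=256):
        super().__init__()
        # Per-frame 2D CNN encoder (a small stand-in for a deeper trunk).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # LSTM decoder explores long-term temporal relationships.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, clip):                                 # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)  # (B, T, feat_dim)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                           # classify from last step

logits = CNNLSTM(num_classes=101)(torch.randn(2, 16, 3, 112, 112))  # (2, 101)

Note how the per-frame encoding discards intra-frame motion cues before the LSTM sees them, which is exactly the loss of low-level temporal context criticized above.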
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | These works implied the importance of temporal information for action recognition and the inability of 2D CNNs to capture it. To exploit the temporal information, some studies resort to 3D convolution kernels. @cite_12 @cite_19 apply 3D CNNs, in which appearance and motion features are learned with 3D convolutions that simultaneously encode spatial and temporal cues. Several works explored the effect of performing 3D convolutions over long-range temporal structure with ConvNets @cite_1 @cite_24 . Unfortunately, such networks accept a predefined number of frames as input, and the right choice of temporal span is unclear. Moreover, 3D convolution kernels inevitably introduce many more network parameters. Therefore, recent work has proposed variants that factorize a 3D filter into a combination of a 2D spatial filter and a 1D temporal filter, including "R(2+1)D" @cite_34 , the "Pseudo-3D network" @cite_4 , and "factorized spatiotemporal convolutional networks" @cite_46 (a sketch of such a factorized block follows this record's reference abstracts). | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_46",
"@cite_34",
"@cite_12"
],
"mid": [
"2963820951",
"2751445731",
"1586939924",
"2745519816",
"2964191259",
"2963155035",
"1522734439"
],
"abstract": [
"Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 x 3 x 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.",
"3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two “artificial” requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 @math 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion , which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets).",
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"Learning image representations with ConvNets by pre-training on ImageNet has proven useful across many visual understanding tasks including object detection, semantic segmentation, and image captioning. Although any image representation can be applied to video frames, a dedicated spatiotemporal representation is still vital in order to incorporate motion patterns that cannot be captured by appearance based models alone. This paper presents an empirical ConvNet architecture search for spatiotemporal feature learning, culminating in a deep 3-dimensional (3D) Residual ConvNet. Our proposed architecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51, THUMOS14, and ASLAN while being 2 times faster at inference time, 2 times smaller in model size, and having a more compact representation.",
"Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.",
"In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use."
]
} |
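An illustrative sketch of the 2D+1D factorization discussed above: a t × k × k 3D filter is approximated by a 1 × k × k spatial convolution followed by a t × 1 × 1 temporal convolution. PyTorch and the channel sizes are assumptions; this is a hedged sketch of the general idea, not the exact block of the cited works (R(2+1)D, for instance, additionally chooses the intermediate width so the parameter count matches a full 3D convolution).

import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    def __init__(self, in_ch, out_ch, mid_ch=None, k=3, t=3):
        super().__init__()
        mid_ch = mid_ch or out_ch
        # 2D spatial filtering applied frame-wise along the temporal axis.
        self.spatial = nn.Conv3d(in_ch, mid_ch, (1, k, k),
                                 padding=(0, k // 2, k // 2))
        self.relu = nn.ReLU(inplace=True)
        # 1D temporal filtering across adjacent frames at each spatial location.
        self.temporal = nn.Conv3d(mid_ch, out_ch, (t, 1, 1),
                                  padding=(t // 2, 0, 0))

    def forward(self, x):                                  # x: (B, C, T, H, W)
        return self.temporal(self.relu(self.spatial(x)))

y = Conv2Plus1D(3, 64)(torch.randn(1, 3, 16, 112, 112))   # (1, 64, 16, 112, 112)

The extra nonlinearity between the two factors is one reason such blocks can outperform a single 3D convolution with a similar parameter budget.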
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | Another efficient way to extract temporal features is to precompute the optical flow @cite_11 using traditional optical flow estimation methods and to train a separate CNN to encode it; this sidesteps explicit temporal modeling but is effective for motion feature extraction. The well-known two-stream architecture @cite_44 applies two CNNs separately to visual frames and stacked optical flow fields to extract spatiotemporal features and then fuses the classification scores. Further improvements based on this architecture include multi-granular structures @cite_14 @cite_13 , convolutional fusion @cite_25 @cite_1 , key-volume mining @cite_31 , temporal segment networks @cite_8 , and ActionVLAD @cite_26 for video representation learning. Remarkably, a recent work (I3D) @cite_35 , which combines two-stream processing with 3D convolutions, holds the state-of-the-art action recognition results, reflecting the power of ultra-deep architectures and pre-trained models (a minimal two-stream late-fusion sketch follows this record's reference abstracts). | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_26",
"@cite_8",
"@cite_1",
"@cite_44",
"@cite_31",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2963524571",
"",
"2608988379",
"2507009361",
"2751445731",
"2156303437",
"2472293097",
"2962852931",
"",
"2964094092"
],
"abstract": [
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2 on HMDB-51 and 97.9 on UCF-101.",
"",
"In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13 relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.",
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).",
"3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two “artificial” requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 @math 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion , which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets).",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Recently, deep learning approaches have demonstrated remarkable progresses for action recognition in videos. Most existing deep frameworks equally treat every volume i.e. spatial-temporal video clip, and directly assign a video label to all volumes sampled from it. However, within a video, discriminative actions may occur sparsely in a few key volumes, and most other volumes are irrelevant to the labeled action category. Training with a large proportion of irrelevant volumes will hurt performance. To address this issue, we propose a key volume mining deep framework to identify key volumes and conduct classification simultaneously. Specifically, our framework is trained is optimized in an alternative way integrated to the forward and backward stages of Stochastic Gradient Descent (SGD). In the forward pass, our network mines key volumes for each action class. In the backward pass, it updates network parameters with the help of these mined key volumes. In addition, we propose \"Stochastic out\" to model key volumes from multi-modalities, and an effective yet simple \"unsupervised key volume proposal\" method for high quality volume sampling. Our experiments show that action recognition performance can be significantly improved by mining key volumes, and we achieve state-of-the-art performance on HMDB51 and UCF101 (93.1 ).",
"From the frame clip-level feature learning to the video-level representation building, deep learning methods in action recognition have developed rapidly in recent years. However, current methods suffer from the confusion caused by partial observation training, or without end-to-end learning, or restricted to single temporal scale modeling and so on. In this paper, we build upon two-stream ConvNets and propose Deep networks with Temporal Pyramid Pooling (DTPP), an end-to-end video-level representation learning approach, to address these problems. Specifically, at first, RGB images and optical flow stacks are sparsely sampled across the whole video. Then a temporal pyramid pooling layer is used to aggregate the frame-level features which consist of spatial and temporal cues. Lastly, the trained model has compact video-level representation with multiple temporal scales, which is both global and sequence-aware. Experimental results show that DTPP achieves the state-of-the-art performance on two challenging video action datasets: UCF101 and HMDB51, either by ImageNet pre-training or Kinetics pre-training.",
"",
"Motion representation plays a vital role in human action recognition in videos. In this study, we introduce a novel compact motion representation for video action recognition, named Optical Flow guided Feature (OFF), which enables the network to distill temporal information through a fast and robust approach. The OFF is derived from the definition of optical flow and is orthogonal to the optical flow. The derivation also provides theoretical support for using the difference between two frames. By directly calculating pixel-wise spatio-temporal gradients of the deep feature maps, the OFF could be embedded in any existing CNN based video action recognition framework with only a slight additional cost. It enables the CNN to extract spatiotemporal information, especially the temporal information between frames simultaneously. This simple but powerful idea is validated by experimental results. The network with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3 on UCF-101, which is comparable with the result obtained by two streams (RGB and optical flow), but is 15 times faster in speed. Experimental results also show that OFF is complementary to other motion modalities such as optical flow. When the proposed method is plugged into the state-of-the-art video action recognition framework, it has 96.0 and 74.2 accuracy on UCF-101 and HMDB-51 respectively. The code for this project is available at: https: github.com kevin-ssy Optical-Flow-Guided-Feature"
]
} |
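A minimal two-stream late-fusion sketch of the architecture family discussed above. The tiny backbones and the flow stack depth (10 flow fields, i.e. 20 input channels) are illustrative assumptions rather than the exact configuration of the cited two-stream architecture; the point is the separate appearance and motion streams whose softmax scores are fused.

import torch
import torch.nn as nn

def make_stream(in_ch, num_classes):
    # Placeholder backbone; in practice each stream would be a full CNN.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 7, stride=2, padding=3), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

spatial = make_stream(3, 101)     # appearance stream: one RGB frame
temporal = make_stream(20, 101)   # motion stream: 10 stacked (x, y) flow fields

rgb = torch.randn(4, 3, 224, 224)
flow = torch.randn(4, 20, 224, 224)   # precomputed optical flow stack
score = (spatial(rgb).softmax(-1) + temporal(flow).softmax(-1)) / 2  # late fusion

Because the two streams are trained separately and only fused at the score level, no cross-stream correlations are learned, which is the gap the paper above targets.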
1908.10331 | 2953161934 | Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text—without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of ≥10 sentences. | Reinforcement Learning (RL) methods are typically based on value functions or policy search @cite_19 , and the same holds for deep RL methods. While value functions have mostly been applied to task-oriented dialogue systems @cite_12 @cite_31 @cite_6 @cite_22 @cite_3 @cite_29 , policy search has mostly been applied to open-ended dialogue systems such as (chitchat) chatbots @cite_7 @cite_33 @cite_0 @cite_23 @cite_11 . This is not surprising given that task-oriented dialogue systems use finite action sets, while chatbot systems use infinite action sets. So far policy search methods have been preferred for chatbots, but it is not clear that they should be, because they face problems such as convergence to local rather than global optima, sample inefficiency, and high variance. This paper therefore explores the feasibility of value function-based methods for chatbots, which has not been explored before---at least not from the perspective of deriving the action sets automatically, as attempted in this paper (a minimal Q-learning update over clustered actions is sketched after this record's reference abstracts). | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_33",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_12",
"@cite_11"
],
"mid": [
"2732273801",
"2963167310",
"2581637843",
"2737041661",
"2603550564",
"2594726847",
"2784808670",
"",
"",
"2294065713",
"2963306198",
""
],
"abstract": [
"Deep reinforcement learning dialogue systems are attractive because they can jointly learn their feature representations and policies without manual feature engineering. But its application is challenging due to slow learning. We propose a two-stage method for accelerating the induction of single or multi-domain dialogue policies. While the first stage reduces the amount of weight updates over time, the second stage uses very limited minibatches (of as much as two learning experiences) sampled from experience replay memories. The former frequently updates the weights of the neural nets at early stages of training, and decreases the amount of updates as training progresses by performing updates during exploration and by skipping updates during exploitation. The learning process is thus accelerated through less weight updates in both stages. An empirical evaluation in three domains (restaurants, hotels and tv guide) confirms that the proposed method trains policies 5 times faster than a baseline without the proposed method. Our findings are useful for training larger-scale neural-based spoken dialogue systems.",
"",
"",
"The majority of NLG evaluation relies on automatic metrics, such as BLEU . In this paper, we motivate the need for novel, system- and data-independent automatic evaluation methods: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is data- and system-specific. Nevertheless, our results also suggest that automatic metrics perform reliably at system-level and can support system development by finding cases where a system performs poorly.",
"Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning-termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstraping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.",
"",
"We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including neural network and template-based models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A B testing with real-world users, where it performed significantly better than other systems. The results highlight the potential of coupling ensemble systems with deep reinforcement learning as a fruitful path for developing real-world, open-domain conversational agents.",
"",
"",
"This article presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive systems and robots.",
"",
""
]
} |
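A minimal sketch of a value-function (Q-learning) update over clustered actions, in the spirit of the feasibility question raised above. The state hashing, the clustering of candidate responses into ids, and the reward are placeholder assumptions; only the epsilon-greedy TD(0) update itself is standard.

import numpy as np

n_states, n_clusters = 50, 10        # e.g. hashed dialogue contexts, action clusters
Q = np.zeros((n_states, n_clusters))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    # Placeholder environment: next context and a human-likeness-style reward.
    return rng.integers(n_states), rng.uniform(-1.0, 1.0)

s = rng.integers(n_states)
for _ in range(10000):
    # Epsilon-greedy action selection over the finite set of action clusters.
    a = rng.integers(n_clusters) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD(0) target
    s = s2

Clustering responses is what turns the infinite action space into the finite set that makes this kind of value-function update applicable at all.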
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method on a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect the events like severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | The outliers of interest in this work are those caused by extreme events. A related problem considers methods to detect outliers caused by data measurement errors, such as sensor malfunction, malicious tampering, or measurement noise @cite_13 @cite_2 @cite_38 ; such methods can be seen as part of a standard data cleaning or pre-processing step. Outliers caused by extreme traffic, on the other hand, carry valuable information for congestion management and can provide agencies with insights into the performance of the urban network. The works @cite_1 @cite_35 @cite_21 explore the detection of outliers caused by events, while @cite_6 @cite_3 @cite_50 @cite_4 focus on determining the root causes of the outliers. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_50",
"@cite_2",
"@cite_13"
],
"mid": [
"1974476933",
"2093855404",
"2042850276",
"2117618130",
"2743812350",
"2014211872",
"2963024417",
"2021002141",
"2088400370",
"1544613517"
],
"abstract": [
"Novel methods for implementation of detector-level multivariate screening methods are presented. The methods use present data and classify data as outliers on the basis of comparisons with empirical cutoff points derived from extensive archived data rather than from standard statistical tables. In addition, while many of the ideas of the classical Hotelling's T2-statistic are used, modern statistical trend removal and blocking are incorporated. The methods are applied to intelligent transportation system data from San Antonio and Austin, Texas. These examples show how the suggested new methods perform with high-quality traffic data and apparently lower-quality traffic data. All algorithms were implemented by using the SAS programming language.",
"The increasing availability of large-scale trajectory data provides us great opportunity to explore them for knowledge discovery in transportation systems using advanced data mining techniques. Nowadays, large number of taxicabs in major metropolitan cities are equipped with a GPS device. Since taxis are on the road nearly 24h a day (with drivers changing shifts), they can now act as reliable sensors to monitor the behavior of traffic. In this article, we use GPS data from taxis to monitor the emergence of unexpected behavior in the Beijing metropolitan area, which has the potential to estimate and improve traffic conditions in advance. We adapt likelihood ratio test statistic (LRT) which have previously been mostly used in epidemiological studies to describe traffic patterns. To the best of our knowledge the use of LRT in traffic domain is not only novel but results in accurate and rapid detection of anomalous behavior.",
"The recent availability of datasets on transportation networks with higher spatial and temporal resolution is enabling new research activities in the fields of Territorial Intelligence and Smart Cities. Among these, many research efforts are aimed at predicting traffic congestions to alleviate their negative effects on society, mainly by learning recurring mobility patterns. Within this field, in this paper we propose an integrated solution to predict and visualize non-recurring traffic congestion in urban environments caused by Planned Special Events (PSE), such as a soccer game or a concert. Predictions are done by means of two Machine Learning-based techniques. These have been proven to successfully outperform current state of the art predictions by 35 in an empirical assessment we conducted over a time frame of 7 months within the inner city of Cologne, Germany. The predicted congestions are fed into a specifically conceived visualization tool we designed to allow Decision Makers to evaluate the situation and take actions to improve mobility. HighlightsWe analyze in detail the impact of Planned Special Events (PSEs) on traffic.We propose two novel methods to predict upcoming congestions caused by PSEs.Results show that our prediction methods outperform the state of the art by 35 .We introduce a specifically designed tool for visualizing the prediction results.Visualization tool allows experts to evaluate upcoming situations ahead of time.",
"The detection of outliers in spatio-temporal traffic data is an important research problem in the data mining and knowledge discovery community. However to the best of our knowledge, the discovery of relationships, especially causal interactions, among detected traffic outliers has not been investigated before. In this paper we propose algorithms which construct outlier causality trees based on temporal and spatial properties of detected outliers. Frequent substructures of these causality trees reveal not only recurring interactions among spatio-temporal outliers, but potential flaws in the design of existing traffic networks. The effectiveness and strength of our algorithms are validated by experiments on a very large volume of real taxi trajectories in an urban road network.",
"As the development of crowdsourcing technique, acquiring amounts of data in urban cities becomes possible and reliable, which makes it possible to mine useful and significant information from data. Traffic anomaly detection is to find the traffic patterns which are not expected and it can be used to explore traffic problems accurately and efficiently. In this paper, we propose LoTAD to explore anomalous regions with long-term poor traffic situations. Specifically, we process crowdsourced bus data into TS-segments (Temporal and Spatial segments) to model the traffic condition. Later, we explore anomalous TS-segments in each bus line by calculating their AI (Anomaly Index). Then, we combine anomalous TS-segments detected in different lines to mine anomalous regions. The information of anomalous regions provides suggestions for future traffic planning. We conduct experiments with real crowdsourced bus trajectory datasets of October in 2014 and March in 2015 in Hangzhou. We analyze the varieties of the results and explain how they are consistent with the real urban traffic planning or social events happened between the time interval of the two datasets. At last we do a contrast experiment with the most ten congested roads in Hangzhou, which verifies the effectiveness of LoTAD.",
"Anomaly detection (a.k.a., outlier or burst detection) is a well-motivated problem and a major data mining and knowledge discovery task. In this article, we study the problem of population anomaly detection, one of the key issues related to event monitoring and population management within a city. Through studying detected population anomalies, we can trace and analyze these anomalies, which could help to model city traffic design and event impact analysis and prediction. Although a significant and interesting issue, it is very hard to detect population anomalies and retrieve anomaly trajectories, especially given that it is difficult to get actual and sufficient population data. To address the difficulties of a lack of real population data, we take advantage of mobile phone networks, which offer enormous spatial and temporal communication data on persons. More importantly, we claim that we can utilize these mobile phone data to infer and approximate population data. Thus, we can study the population anomaly detection problem by taking advantages of unique features hidden in mobile phone data. In this article, we present a system to conduct Population Anomaly Detection (PAD). First, we propose an effective clustering method, correlation-based clustering, to cluster the incomplete location information from mobile phone data (i.e., from mobile call volume distribution to population density distribution). Then, we design an adaptive parameter-free detection method, R-scan, to capture the distributed dynamic anomalies. Finally, we devise an efficient algorithm, BT-miner, to retrieve anomaly trajectories. The experimental results from real-life mobile phone data confirm the effectiveness and efficiency of the proposed algorithms. Finally, the proposed methods are realized as a pilot system in a city in China.",
"Non-recurring traffic congestion is caused by temporary disruptions, such as accidents, sports games, adverse weather, etc. We use data related to real-time traffic speed, jam factors (a traffic congestion indicator), and events collected over a year from Nashville, TN to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The network is thereafter used to classify the real-time data and identify anomalous operations. Compared with traditional approaches of using statistical or machine learning techniques, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. After that the image data from different timestamps is fused with event- and time-related data. Then a crossover operator is used as a data augmentation method to generate training datasets with more balanced classes. Finally, we use the receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present the analysis of the training time and the inference time separately.",
"With the increasing amount of traffic information collected through floating car data, it is highly desirable to find meaningful traffic patterns such as congestion patterns from the accumulated massive historical dataset. It is however challenging due to the huge size of the dataset and the complexity and dynamics of traffic phenomena. A novel floating car data analysis method based on data cube for congestion pattern exploration is proposed in this paper. This method is different from traditional methods that depend only on numerical statistics of traffic data. The view of the event or spatial-temporal progress is adapted to model and measure traffic congestions. According to a multi-dimensional analysis framework, the traffic congestion event is first identified based on spatial-temporal related relationship of slow-speed road segment. Then, it is aggregated by a cluster style to get the traffic pattern on a different level of detail of spatial-temporal dimension. Aggregated location, time period and duration time for recurrent and important congestions are used to represent the congestion pattern. The authors evaluate methods using a historical traffic dataset collected from about 12000 taxi-based floating cars for one week in a large urban area. Results show that the method can effectively identify and summarize the congestion pattern with efficient computation and reduced storage cost.",
"In order to improve the veracity and reliability of a traffic model built, or to extract important and valuable information from collected traffic data, the technique of outlier mining has been introduced into the traffic engineering domain for detecting and analyzing the outliers in traffic data sets. Three typical outlier algorithms, respectively the statistics-based approach, the distance-based approach, and the density-based local outlier approach, are described with respect to the principle, the characteristics and the time complexity of the algorithms. A comparison among the three algorithms is made through application to intelligent transportation systems (ITS). Two traffic data sets with different dimensions have been used in our experiments carried out, one is travel time data, and the other is traffic flow data. We conducted a number of experiments to recognize outliers hidden in the data sets before building the travel time prediction model and the traffic flow foundation diagram. In addition, some artificial generated outliers are introduced into the traffic flow data to see how well the different algorithms detect them. Three strategies-based on ensemble learning, partition and average LOF have been proposed to develop a better outlier recognizer. The experimental results reveal that these methods of outlier mining are feasible and valid to detect outliers in traffic data sets, and have a good potential for use in the domain of traffic engineering. The comparison and analysis presented in this paper are expected to provide some insights to practitioners who plan to use outlier mining for ITS data.",
"In their goal to effectively manage the use of existing infrastructures, intelligent transportation systems require precise forecasting of near-term traffic volumes to feed real-time analytical models and traffic surveillance tools that alert of network links reaching their capacity. This article proposes a new methodological approach for short-term predictions of time series of volume data at isolated cross sections. The originality in the computational modeling stems from the fit of threshold values used in the stationary wavelet-based denoising process applied on the time series, and from the determination of patterns that characterize the evolution of its samples over a fixed prediction horizon. A self-organizing fuzzy neural network is optimized in its configuration parameters for learning and recognition of these patterns. Four real-world data sets from 3 interstate roads are considered for evaluating the performance of the proposed model. A quantitative comparison made with the results obtained by 4 other relevant prediction models shows a favorable outcome."
]
} |
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method on a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect the events like severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | Low-rank matrix and tensor learning has been widely used to exploit the inner structure of data. Various applications have benefited from matrix- and tensor-based methods, including data completion @cite_43 @cite_47 , link prediction @cite_33 , and network structure clustering @cite_8 (a minimal Tucker decomposition sketch follows this record's reference abstracts). | {
"cite_N": [
"@cite_43",
"@cite_47",
"@cite_33",
"@cite_8"
],
"mid": [
"2963472624",
"2343462218",
"1864134408",
"40609341"
],
"abstract": [
"Tensor completion is a problem of filling the missing or unobserved entries of partially observed tensors. Due to the multidimensional character of tensors in describing complex datasets, tensor completion algorithms and their applications have received wide attention and achievement in areas like data mining, computer vision, signal processing, and neuroscience. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics characterized by diverse variety, large volume, and high velocity. We characterize these advances from the following four perspectives: general tensor completion algorithms, tensor completion with auxiliary information (variety), scalable tensor completion algorithms (volume), and dynamic tensor completion algorithms (velocity). Further, we identify several tensor completion applications on real-world data-driven problems and present some common experimental frameworks popularized in the literature along with several available software repositories. Our goal is to summarize these popular methods and introduce them to researchers and practitioners for promoting future research and applications. We conclude with a discussion of key challenges and promising research directions in this community for future exploration.",
"Intelligent transportation systems (ITSs) gather information about traffic conditions by collecting data from a wide range of on-ground sensors. The collected data usually suffer from irregular spatial and temporal resolution. Consequently, missing data is a common problem faced by ITSs. In this paper, we consider the problem of missing data in large and diverse road networks. We propose various matrix and tensor based methods to estimate these missing values by extracting common traffic patterns in large road networks. To obtain these traffic patterns in the presence of missing data, we apply fixed-point continuation with approximate singular value decomposition, canonical polyadic decomposition, least squares, and variational Bayesian principal component analysis. For analysis, we consider different road networks, each of which is composed of around 1500 road segments. We evaluate the performance of these methods in terms of estimation accuracy, variance of the data set, and the bias imparted by these methods.",
"The data in many disciplines such as social networks, Web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this article, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T + 1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T + 2, T + 3, etc.? In this article, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multiyear data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based techniques are particularly effective for temporal data with varying periodic patterns.",
"The traffic networks reflect the pulse and structure of a city and shows some dynamic characteristic. Previous research in mining structure from networks mostly focus on static networks and fail to exploit the temporal patterns. In this paper, we aim to solve the problem of discovering the urban spatio-temporal structure from time-evolving traffic networks. We model the time-evolving traffic networks into a 3-order tensor, each element of which indicates the volume of traffic from i-th origin area to j-th destination area in k-th time domain. Considering traffic data and urban contextual knowledge together, we propose a regularized Non-negative Tucker Decomposition (rNTD) method, which discovers the spatial clusters, temporal patterns and relations among them simultaneously. Abundant experiments are conducted in a large dataset collected from Beijing. Results show that our method outperforms the baseline method."
]
} |
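As background for the low-rank tensor methods cited above, the following sketch shows the basic machinery they build on: mode-n unfolding and a truncated higher-order SVD (HOSVD) approximation. It is plain NumPy, and the tensor sizes and Tucker ranks are illustrative choices, not values from any cited paper.

```python
# Mode-n unfolding, folding, and a truncated HOSVD low-rank approximation.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def hosvd_approx(T, ranks):
    """Project each mode onto its top singular vectors (truncated HOSVD)."""
    G = T.copy()
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Ur = U[:, :r]
        G = fold(Ur @ (Ur.T @ unfold(G, mode)), mode, G.shape)
    return G

rng = np.random.default_rng(1)
# A Tucker-rank-(2, 3, 2) tensor (think road x hour x day) plus mild noise.
core = rng.normal(size=(2, 3, 2))
A = rng.normal(size=(30, 2))
B = rng.normal(size=(24, 3))
C = rng.normal(size=(7, 2))
low_rank = np.einsum("abc,ia,jb,kc->ijk", core, A, B, C)
noisy = low_rank + 0.01 * rng.normal(size=low_rank.shape)

approx = hosvd_approx(noisy, ranks=(2, 3, 2))
print("relative error:", np.linalg.norm(approx - low_rank) / np.linalg.norm(low_rank))
```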
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method to a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect events such as severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | The works most relevant to ours are robust matrix and tensor PCA methods for outlier detection. @math norm regularized robust tensor recovery, as proposed by Goldfarb and Qin @cite_46, is useful when data is polluted with unstructured random noise. @cite_5 also used @math norm regularized tensor decomposition for traffic data recovery in the face of random noise corruption. But if outliers are structured, for example grouped in columns, @math norm regularization does not yield good results. In addition, although traffic is also modeled in tensor format in @cite_5, only a single road segment is considered, without taking network spatial structure into account. | {
"cite_N": [
"@cite_5",
"@cite_46"
],
"mid": [
"2083797062",
"1999136078"
],
"abstract": [
"Traffic volume data is already collected and used for a variety of purposes in intelligent transportation system (ITS). However, the collected data might be abnormal due to the problem of outlier data caused by malfunctions in data collection and record systems. To fully analyze and operate the collected data, it is necessary to develop a validate method for addressing the outlier data. Many existing algorithms have studied the problem of outlier recovery based on the time series methods. In this paper, a multiway tensor model is proposed for constructing the traffic volume data based on the intrinsic multilinear correlations, such as day to day and hour to hour. Then, a novel tensor recovery method, called ADMM-TR, is proposed for recovering outlier data of traffic volume data. The proposed method is evaluated on synthetic data and real world traffic volume data. Experimental results demonstrate the practicability, effectiveness, and advantage of the proposed method, especially for the real world traffic volume data.",
"Robust tensor recovery plays an instrumental role in robustifying tensor decompositions for multilinear data analysis against outliers, gross corruptions, and missing values and has a diverse array of applications. In this paper, we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust principal component analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number o..."
]
} |
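To make the l1-regularized recovery idea above concrete, the following sketch runs matrix robust PCA with the standard inexact augmented-Lagrangian (ADMM-style) iteration: singular-value thresholding for the low-rank part and entrywise soft-thresholding for the sparse corruptions. This is the matrix analogue of the cited tensor methods, and the penalty parameters are common defaults assumed for illustration, not values from the papers.

```python
# Robust PCA: decompose M into low-rank L plus entrywise-sparse S.
import numpy as np

def svt(X, tau):
    """Singular-value thresholding, the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding, the prox of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

rng = np.random.default_rng(2)
L0 = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 80))   # rank-5 ground truth
S0 = np.zeros_like(L0)
mask = rng.random(L0.shape) < 0.05                         # 5% random corruptions
S0[mask] = rng.normal(0, 10, size=mask.sum())
M = L0 + S0

lam = 1 / np.sqrt(max(M.shape))          # standard sparsity weight
mu = M.size / (4 * np.abs(M).sum())      # common choice of penalty parameter
L = np.zeros_like(M)
S = np.zeros_like(M)
Y = np.zeros_like(M)                     # dual variable
for _ in range(100):
    L = svt(M - S + Y / mu, 1 / mu)
    S = soft(M - L + Y / mu, lam / mu)
    Y += mu * (M - L - S)
print("low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```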
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method to a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect events such as severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | In the face of large events, outliers tend to group in columns or fibers of the dataset, as illustrated in section . @math norm regularized decomposition is suitable for group outlier detection, as shown in @cite_25 @cite_51 for matrices, and @cite_27 @cite_41 for tensors. In addition, @cite_9 introduced a multi-view low-rank analysis framework for outlier detection, and @cite_15 used discriminant tensor factorization for event analytics. Our methods differ from the existing tensor outlier pursuit of @cite_27 @cite_41 in that those works deal with slab outliers, i.e., outliers that form an entire slice rather than fibers of the tensor. Moreover, compared with existing works, we take one step further and deal with partial observations. As stated in Section , without an overall understanding of the underlying pattern, we can easily impute the missing entries incorrectly and influence our decision about outliers. We will show in the simulation section that our new algorithm can exactly detect the outliers even with 40% missing data. | {
"cite_N": [
"@cite_41",
"@cite_9",
"@cite_27",
"@cite_15",
"@cite_51",
"@cite_25"
],
"mid": [
"2594059414",
"2791255512",
"",
"2897753390",
"2120580172",
"2160813243"
],
"abstract": [
"In this paper, we study robust principal component analysis on tensors, in the setting where frame-wise outliers exist. We propose a convex formulation to decompose a tensor into a low rank component and a frame-wise sparse component. Theoretically, we guarantee that exact subspace recovery and outlier identification can be achieved under mild model assumptions. Compared with entry-wise outlier pursuit and naive matricization of tensors with frame-wise outliers, our approach can handle higher ranks and proportion of outliers. Extensive numerical evaluations are provided on both synthetic and real data to support our theory.",
"Detecting outliers or anomalies is a fundamental problem in various machine learning and data mining applications. Conventional outlier detection algorithms are mainly designed for single-view data. Nowadays, data can be easily collected from multiple views, and many learning tasks such as clustering and classification have benefited from multi-view data. However, outlier detection from multi-view data is still a very challenging problem, as the data in multiple views usually have more complicated distributions and exhibit inconsistent behaviors. To address this problem, we propose a multi-view low-rank analysis (MLRA) framework for outlier detection in this article. MLRA pursuits outliers from a new perspective, robust data representation. It contains two major components. First, the cross-view low-rank coding is performed to reveal the intrinsic structures of data. In particular, we formulate a regularized rank-minimization problem, which is solved by an efficient optimization algorithm. Second, the outliers are identified through an outlier score estimation procedure. Different from the existing multi-view outlier detection methods, MLRA is able to detect two different types of outliers from multiple views simultaneously. To this end, we design a criterion to estimate the outlier scores by analyzing the obtained representation coefficients. Moreover, we extend MLRA to tackle the multi-view group outlier detection problem. Extensive evaluations on seven UCI datasets, the MovieLens, the USPS-MNIST, and the WebKB datasets demon strate that our approach outperforms several state-of-the-art outlier detection methods.",
"",
"Analyzing the impact of disastrous events has been central to understanding and responding to crises. Traditionally, the assessment of disaster impact has primarily relied on the manual collection and analysis of surveys and questionnaires as well as the review of authority reports. This can be costly and time-consuming, whereas a timely assessment of an event’s impact is critical for crisis management and humanitarian operations. In this work, we formulate the impact discovery as the problem to identify the shared and discriminative subspace via tensor factorization due to the multi-dimensional nature of mobility data. Existing work in mining the shared and discriminative subspaces typically requires the predefined number of either type of them. In the context of event impact discovery, this could be impractical, especially for those unprecedented events. To overcome this, we propose a new framework, called “PairFac,” that jointly factorizes the multi-dimensional data to discover the latent mobility pattern along with its associated discriminative weight. This framework does not require splitting the shared and discriminative subspaces in advance and at the same time automatically captures the persistent and changing patterns from multi-dimensional behavioral data. Our work has important applications in crisis management and urban planning, which provides a timely assessment of impacts of major events in the urban environment.",
"Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted. We present an efficient convex optimization-based algorithm we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation, is of paramount interest in bioinformatics and financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization, however, our results, setup, and approach, necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.",
"In this paper, we propose a convex program for low-rank and block-sparse matrix decomposition. Potential applications include outlier detection when certain columns of the data matrix are outliers. We design an algorithm based on the augmented Lagrange multiplier method to solve the convex program. We solve the subproblems involved in the augmented Lagrange multiplier method using the Douglas Peaceman-Rachford (DR) monotone operator splitting method. Numerical simulations demonstrate the accuracy of our method compared with the robust principal component analysis based on low-rank and sparse matrix decomposition."
]
} |
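The group-outlier formulations above replace entrywise shrinkage with a column-wise proximal operator (the prox of the l2,1 norm), so that entire corrupted columns, or fibers in the tensor case, are zeroed or flagged together. A minimal NumPy sketch with illustrative sizes and threshold:

```python
# Column-wise shrinkage: columns with small l2 norm vanish entirely.
import numpy as np

def prox_l21(X, tau):
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)
    return X * scale

rng = np.random.default_rng(3)
R = rng.normal(0, 0.1, size=(20, 15))               # residual: mostly small noise
R[:, [3, 11]] += rng.normal(0, 3, size=(20, 2))     # two corrupted columns

S = prox_l21(R, tau=2.0)
print("flagged columns:", np.flatnonzero(np.linalg.norm(S, axis=0)))  # -> [ 3 11]
```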
1908.10193 | 2970619785 | In the field of information retrieval, query expansion (QE) has long been used as a technique to deal with the fundamental issue of word mismatch between a user's query and the target information. In the context of the relationship between the query and expanded terms, existing weighting techniques often fail to appropriately capture the term-term relationship and the relationship of terms to the whole query, resulting in low retrieval effectiveness. Our proposed QE approach addresses this by proposing three weighting models based on (1) tf-itf, (2) k-nearest neighbor (kNN) based cosine similarity, and (3) correlation score. Further, to extract the initial set of expanded terms, we use pseudo-relevant web knowledge consisting of the top N web pages returned by three popular search engines, namely Google, Bing, and DuckDuckGo, in response to the original query. Among the three weighting models, tf-itf scores each of the individual terms obtained from the web content, kNN-based cosine similarity scores the expansion terms to capture the term-term relationship, and the correlation score weighs the selected expansion terms with respect to the whole query. The proposed model, called web knowledge based query expansion (WKQE), achieves an improvement of 25.89% on the MAP score and 30.83% on the GMAP score over the unexpanded queries on the FIRE dataset. A comparative analysis of the WKQE techniques with other related approaches clearly shows significant improvement in retrieval performance. We have also analyzed the effect of varying the number of pseudo-relevant documents and expansion terms on the retrieval effectiveness of the proposed model. | Query expansion has a long history in the information retrieval literature. It was first introduced by @cite_29 in the 1960s for literature indexing and searching in a mechanized library system. In 1971, Rocchio @cite_21 brought QE into the spotlight through the relevance feedback method and its characterization in a vector space model. While this was the first use of the relevance feedback method, Rocchio's method is still used for QE in its original and modified forms. The availability of several standard text collections (e.g., the Text Retrieval Conference (TREC) http://trec.nist.gov and the Forum for Information Retrieval Evaluation (FIRE) http://fire.irsi.res.in) and IR platforms (e.g., Terrier http://terrier.org and Apache Lucene http://lucene.apache.org) has been very instrumental in evaluating progress in this area in a systematic way. Carpineto and Romano @cite_22 and Azad and Deepak @cite_2 present state-of-the-art comprehensive surveys on QE. This article focuses on web-based QE techniques. | {
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_22",
"@cite_2"
],
"mid": [
"2082729696",
"2164547069",
"1993692165",
"2963764152"
],
"abstract": [
"This paper reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval and a comparative concept of relevance is explicated in terms of the theory of probability. The resulting technique called “Probabilistic Indexing,” allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the “relevance number”) for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request ranked according to their probable relevance. The paper goes on to show that whereas in a conventional library system the cross-referencing (“see” and “see also”) is based solely on the “semantical closeness” between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected. Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user.",
"1332840 Primer compositions DOW CORNINGCORP 6 Oct 1971 [30 Dec 1970] 46462 71 Heading C3T [Also in Divisions B2 and C4] A primer composition comprises 1 pbw of tetra ethoxy or propoxy silane or poly ethyl or propyl silicate or any mixture thereof, 0A75-2A5 pbw of bis(acetylacetonyl) diisopropyl titanate, 0A75- 5 pbw of a compound CF 3 CH 2 CH 2 Si[OSi(CH 3 ) 2 - X] 3 wherein each X is H or -CH 2 CH 2 Si- (OOCCH 3 ) 3 , at least one being the latter, and 1-20 pbw of a ketone, hydrocarbon or halohydrocarbon solvent boiling not above 150‹ C. In the examples 1 pbw each of bis(acetylacetonyl)diisopropyl titanate, polyethyl silicate and are dissolved in 10 pbw of acetone or in 9 pbw of light naphtha and 1 of methylisobutylketone. The solutions are used to prime Ti panels, to which a Pt-catalysed room-temperature vulcanizable poly-trifluoropropylmethyl siloxanebased rubber is then applied.",
"The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?",
"Abstract With the ever increasing size of the web, relevant information extraction on the Internet with a query formed by a few keywords has become a big challenge. Query Expansion (QE) plays a crucial role in improving searches on the Internet. Here, the user’s initial query is reformulated by adding additional meaningful terms with similar significance. QE – as part of information retrieval (IR) – has long attracted researchers’ attention. It has become very influential in the field of personalized social document, question answering, cross-language IR, information filtering and multimedia IR. Research in QE has gained further prominence because of IR dedicated conferences such as TREC (Text Information Retrieval Conference) and CLEF (Conference and Labs of the Evaluation Forum). This paper surveys QE techniques in IR from 1960 to 2017 with respect to core techniques, data sources used, weighting and ranking methodologies, user participation and applications – bringing out similarities and differences."
]
} |
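For reference, Rocchio's relevance-feedback update mentioned above has a simple vector-space form, q' = alpha*q + beta*mean(relevant) - gamma*mean(non-relevant). The sketch below uses toy term vectors and the conventional default weights; both are illustrative assumptions rather than settings from any cited work.

```python
# Rocchio query update over a tiny vocabulary.
import numpy as np

def rocchio(q, rel, nonrel, alpha=1.0, beta=0.75, gamma=0.15):
    q_new = alpha * q
    if len(rel):
        q_new = q_new + beta * rel.mean(axis=0)
    if len(nonrel):
        q_new = q_new - gamma * nonrel.mean(axis=0)
    return np.maximum(q_new, 0)          # negative weights are usually clipped

vocab = ["traffic", "tensor", "query", "expansion", "wavelet"]
q = np.array([0, 0, 1.0, 1.0, 0])                          # "query expansion"
rel = np.array([[0, 0, 1, 1, 0], [0, 0, 1, 0.5, 0.5]], dtype=float)
nonrel = np.array([[1, 1, 0, 0, 0]], dtype=float)

for term, w in zip(vocab, rocchio(q, rel, nonrel)):
    print(f"{term:10s} {w:.3f}")
```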
1908.10193 | 2970619785 | In the field of information retrieval, query expansion (QE) has long been used as a technique to deal with the fundamental issue of word mismatch between a user's query and the target information. In the context of the relationship between the query and expanded terms, existing weighting techniques often fail to appropriately capture the term-term relationship and the relationship of terms to the whole query, resulting in low retrieval effectiveness. Our proposed QE approach addresses this by proposing three weighting models based on (1) tf-itf, (2) k-nearest neighbor (kNN) based cosine similarity, and (3) correlation score. Further, to extract the initial set of expanded terms, we use pseudo-relevant web knowledge consisting of the top N web pages returned by three popular search engines, namely Google, Bing, and DuckDuckGo, in response to the original query. Among the three weighting models, tf-itf scores each of the individual terms obtained from the web content, kNN-based cosine similarity scores the expansion terms to capture the term-term relationship, and the correlation score weighs the selected expansion terms with respect to the whole query. The proposed model, called web knowledge based query expansion (WKQE), achieves an improvement of 25.89% on the MAP score and 30.83% on the GMAP score over the unexpanded queries on the FIRE dataset. A comparative analysis of the WKQE techniques with other related approaches clearly shows significant improvement in retrieval performance. We have also analyzed the effect of varying the number of pseudo-relevant documents and expansion terms on the retrieval effectiveness of the proposed model. | Based on web search query logs, two types of QE approaches are commonly used. The first type extracts features from logged queries that are related to the user's original query, with or without making use of their respective retrieval results @cite_43 @cite_20. Among techniques of this first type, some use the combined retrieval results @cite_41, while others do not (e.g., @cite_43 @cite_20). In the second type of approach, features are extracted from the relational behavior of queries and their retrieval results. For example, @cite_26 represent queries in a graph-based vector space model (a query-click bipartite graph) and analyze the graph constructed from the query logs. Under the second approach, expansion terms are extracted in several ways: through user clicks @cite_13 @cite_20 @cite_36, directly from the clicked results @cite_9 @cite_44 @cite_40, from the top results of past queries entered by the user @cite_10 @cite_19, and from queries associated with the same documents @cite_4 @cite_6. Overall, the second type of approach is more widely used and has been shown to provide better results. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_41",
"@cite_36",
"@cite_9",
"@cite_10",
"@cite_6",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_19",
"@cite_13",
"@cite_20"
],
"mid": [
"2086378526",
"2039499764",
"2163987313",
"2077528174",
"2099548400",
"2000145992",
"2001306626",
"2105051853",
"1990387666",
"2171743956",
"1976719972",
"2129235726",
"1898200041"
],
"abstract": [
"In this paper we study a large query log of more than twenty million queries with the goal of extracting the semantic relations that are implicitly captured in the actions of users submitting queries and clicking answers. Previous query log analyses were mostly done with just the queries and not the actions that followed after them. We first propose a novel way to represent queries in a vector space based on a graph derived from the query-click bipartite graph. We then analyze the graph produced by our query log, showing that it is less sparse than previous results suggested, and that almost all the measures of these graphs follow power laws, shedding some light on the searching user behavior as well as on the distribution of topics that people want in the Web. The representation we introduce allows to infer interesting semantic relationships between queries. Second, we provide an experimental analysis on the quality of these relations, showing that most of them are relevant. Finally we sketch an application that detects multitopical URLs.",
"Hundreds of millions of users each day use web search engines to meet their information needs. Advances in web search effectiveness are therefore perhaps the most significant public outcomes of IR research. Query expansion is one such method for improving the effectiveness of ranked retrieval by adding additional terms to a query. In previous approaches to query expansion, the additional terms are selected from highly ranked documents returned from an initial retrieval run. We propose a new method of obtaining expansion terms, based on selecting terms from past user queries that are associated with documents in the collection. Our scheme is effective for query expansion for web retrieval: our results show relative improvements over unexpanded full text retrieval of 26 --29 , and 18 --20 over an optimised, conventional expansion approach.",
"Users frequently modify a previous search query in hope of retrieving better results. These modifications are called query reformulations or query refinements. Existing research has studied how web search engines can propose reformulations, but has given less attention to how people perform query reformulations. In this paper, we aim to better understand how web searchers refine queries and form a theoretical foundation for query reformulation. We study users' reformulation strategies in the context of the AOL query logs. We create a taxonomy of query refinement strategies and build a high precision rule-based classifier to detect each type of reformulation. Effectiveness of reformulations is measured using user click behavior. Most reformulation strategies result in some benefit to the user. Certain strategies like add remove words, word substitution, acronym expansion, and spelling correction are more likely to cause clicks, especially on higher ranked results. In contrast, users often click the same result as their previous query or select no results when forming acronyms and reordering words. Perhaps the most surprising finding is that some reformulations are better suited to helping users when the current results are already fruitful, while other reformulations are more effective when the results are lacking. Our findings inform the design of applications that can assist searchers; examples are described in this paper.",
"The semantic gap between low-level visual features and high-level semantics has been investigated for decades but still remains a big challenge in multimedia. When \"search\" became one of the most frequently used applications, \"intent gap\", the gap between query expressions and users' search intents, emerged. Researchers have been focusing on three approaches to bridge the semantic and intent gaps: 1) developing more representative features, 2) exploiting better learning approaches or statistical models to represent the semantics, and 3) collecting more training data with better quality. However, it remains a challenge to close the gaps. In this paper, we argue that the massive amount of click data from commercial search engines provides a data set that is unique in the bridging of the semantic and intent gap. Search engines generate millions of click data (a.k.a. image-query pairs), which provide almost \"unlimited\" yet strong connections between semantics and images, as well as connections between users' intents and queries. To study the intrinsic properties of click data and to investigate how to effectively leverage this huge amount of data to bridge semantic and intent gap is a promising direction to advance multimedia research. In the past, the primary obstacle is that there is no such dataset available to the public research community. This changes as Microsoft has released a new large-scale real-world image click data to public. This paper presents preliminary studies on the power of large-scale click data with a variety of experiments, such as building large-scale concept detectors, tag processing, search, definitive tag detection, intent analysis, etc., with the goal to inspire deeper researches based on this dataset.",
"Queries to search engines on the Web are usually short. They do not provide sufficient information for an effective selection of relevant documents. Previous research has proposed the utilization of query expansion to deal with this problem. However, expansion terms are usually determined on term co-occurrences within documents. In this study, we propose a new method for query expansion based on user interactions recorded in user logs. The central idea is to extract correlations between query terms and document terms by analyzing user logs. These correlations are then used to select high-quality expansion terms for new queries. Compared to previous query expansion methods, ours takes advantage of the user judgments implied in user logs. The experimental results show that the log-based query expansion method can produce much better results than both the classical search method and the other query expansion methods.",
"",
"Search engine logs are an emerging new type of data that offers interesting opportunities for data mining. Existing work on mining such data has mostly attempted to discover knowledge at the level of queries (e.g., query clusters). In this paper, we propose to mine search engine logs for patterns at the level of terms through analyzing the relations of terms inside a query. We define two novel term association patterns (i.e., context-sensitive term substitutions and term additions) and propose new methods for mining such patterns from search engine logs. These two patterns can be used to address the mis-specification and under-specification problems of ineffective queries. Experiment results on real search engine logs show that the mined context-sensitive term substitutions can be used to effectively reword queries and improve their accuracy, while the mined context-sensitive term addition patterns can be used to support query refinement in a more effective way.",
"We present an approach to query expansion in answer retrieval that uses Statistical Machine Translation (SMT) techniques to bridge the lexical gap between questions and answers. SMT-based query expansion is done by i) using a full-sentence paraphraser to introduce synonyms in context of the entire query, and ii) by translating query terms into answer terms using a full-sentence SMT model trained on question-answer pairs. We evaluate these global, context-aware query expansion techniques on tfidf retrieval from 10 million question-answer pairs extracted from FAQ pages. Experimental results show that SMTbased expansion improves retrieval performance over local expansion and over retrieval without expansion.",
"This paper proposes an effective term suggestion approach to interactive Web search. Conventional approaches to making term suggestions involve extracting co-occurring keyterms from highly ranked retrieved documents. Such approaches must deal with term extraction difficulties and interference from irrelevant documents, and, more importantly, have difficulty extracting terms that are conceptually related but do not frequently co-occur in documents. In this paper, we present a new, effective log-based approach to relevant term extraction and term suggestion. Using this approach, the relevant terms suggested for a user query are those that co-occur in similar query sessions from search engine logs, rather than in the retrieved documents. In addition, the suggested terms in each interactive search step can be organized according to its relevance to the entire query session, rather than to the most recent single query as in conventional approaches. The proposed approach was tested using a proxy server log containing about two million query transactions submitted to search engines in Taiwan. The obtained experimental results show that the proposed approach can provide organized and highly relevant terms, and can exploit the contextual information in a user's query session to make more effective suggestions.",
"Query suggestion plays an important role in improving the usability of search engines. Although some recently proposed methods can make meaningful query suggestions by mining query patterns from search logs, none of them are context-aware - they do not take into account the immediately preceding queries as context in query suggestion. In this paper, we propose a novel context-aware query suggestion approach which is in two steps. In the offine model-learning step, to address data sparseness, queries are summarized into concepts by clustering a click-through bipartite. Then, from session data a concept sequence suffix tree is constructed as the query suggestion model. In the online query suggestion step, a user's search context is captured by mapping the query sequence submitted by the user to a sequence of concepts. By looking up the context in the concept sequence sufix tree, our approach suggests queries to the user in a context-aware manner. We test our approach on a large-scale search log of a commercial search engine containing 1:8 billion search queries, 2:6 billion clicks, and 840 million query sessions. The experimental results clearly show that our approach outperforms two baseline methods in both coverage and quality of suggestions.",
"Effective organization of search results is critical for improving the utility of any search engine. Clustering search results is an effective way to organize search results, which allows a user to navigate into relevant documents quickly. However, two deficiencies of this approach make it not always work well: (1) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user's perspective; and (2) the cluster labels generated are not informative enough to allow a user to identify the right cluster. In this paper, we propose to address these two deficiencies by (1) learning \"interesting aspects\" of a topic from Web search logs and organizing search results accordingly; and (2) generating more meaningful cluster labels using past query words entered by users. We evaluate our proposed method on a commercial search engine log data. Compared with the traditional methods of clustering search results, our method can give better result organization and more meaningful labels.",
"The performance of web search engines may often deteriorate due to the diversity and noisy information contained within web pages. User click-through data can be used to introduce more accurate description (metadata) for web pages, and to improve the search performance. However, noise and incompleteness, sparseness, and the volatility of web pages and queries are three major challenges for research work on user click-through log mining. In this paper, we propose a novel iterative reinforced algorithm to utilize the user click-through data to improve search performance. The algorithm fully explores the interrelations between queries and web pages, and effectively finds \"virtual queries\" for web pages and overcomes the challenges discussed above. Experiment results on a large set of MSN click-through log data show a significant improvement on search performance over the naive query log mining algorithm as well as the baseline search engine.",
"Automatic query expansion may be used in document retrieval to improve search effectiveness. Traditional query expansion methods are based on the document collection itself. For example, pseudo-relevance feedback (PRF) assumes that the top retrieved documents are relevant, and uses the terms extracted from those documents for query expansion. However, there are other sources of evidence that can be used for expansion, some of which may give better search results with greater efficiency at query time. In this paper, we use the external evidence, especially the hints obtained from external web search engines to expand the original query. We explore 6 different methods using search engine query log, snippets and search result documents. We conduct extensive experiments, with state of the art PRF baselines and careful parameter tuning, on three TREC collections: AP, WT10g, GOV2. Log-based methods do not show consistent significant gains, despite being very efficient at query-time. Snippet-based expansion, using the summaries provided by an external search engine, provides significant effectiveness gains with good efficiency at query-time."
]
} |
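Several of the log-based approaches above operate on the query-click bipartite graph. The sketch below shows the core idea in a few lines, treating queries that share clicked URLs as related candidates for expansion; the click log is entirely hypothetical.

```python
# Related-query extraction from a (query, clicked_url) log.
from collections import defaultdict

click_log = [
    ("cheap flights", "travel.example/deals"),
    ("airline tickets", "travel.example/deals"),
    ("airline tickets", "air.example/fares"),
    ("flight deals", "travel.example/deals"),
    ("python tutorial", "docs.example/py"),
]

url_to_queries = defaultdict(set)
for q, url in click_log:
    url_to_queries[url].add(q)

related = defaultdict(set)
for queries in url_to_queries.values():
    for q in queries:
        related[q] |= queries - {q}      # co-clicked queries become candidates

print(sorted(related["cheap flights"]))  # ['airline tickets', 'flight deals']
```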
1908.10193 | 2970619785 | In the field of information retrieval, query expansion (QE) has long been used as a technique to deal with the fundamental issue of word mismatch between a user's query and the target information. In the context of the relationship between the query and expanded terms, existing weighting techniques often fail to appropriately capture the term-term relationship and the relationship of terms to the whole query, resulting in low retrieval effectiveness. Our proposed QE approach addresses this by proposing three weighting models based on (1) tf-itf, (2) k-nearest neighbor (kNN) based cosine similarity, and (3) correlation score. Further, to extract the initial set of expanded terms, we use pseudo-relevant web knowledge consisting of the top N web pages returned by three popular search engines, namely Google, Bing, and DuckDuckGo, in response to the original query. Among the three weighting models, tf-itf scores each of the individual terms obtained from the web content, kNN-based cosine similarity scores the expansion terms to capture the term-term relationship, and the correlation score weighs the selected expansion terms with respect to the whole query. The proposed model, called web knowledge based query expansion (WKQE), achieves an improvement of 25.89% on the MAP score and 30.83% on the GMAP score over the unexpanded queries on the FIRE dataset. A comparative analysis of the WKQE techniques with other related approaches clearly shows significant improvement in retrieval performance. We have also analyzed the effect of varying the number of pseudo-relevant documents and expansion terms on the retrieval effectiveness of the proposed model. | In the context of web-based knowledge, anchor texts can play a role similar to users' search queries, because the anchor text pointing to a page can serve as a brief summary of its content. Anchor texts were first used by McBryan @cite_28 for associating hyperlinks with linked pages as well as with the pages in which the anchor texts are found. Kraft and Zien @cite_56 also used anchor texts for QE; their experimental results suggest that anchor texts can be used to improve traditional QE based on query logs. Similarly, Dang and Croft @cite_3 suggested that anchor text could be an effective alternative to query logs. They demonstrated the effectiveness of QE techniques using log-based stemming through experiments on standard TREC collections. | {
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_56"
],
"mid": [
"2976375847",
"2080825533",
"2171161922"
],
"abstract": [
"",
"Query reformulation techniques based on query logs have been studied as a method of capturing user intent and improving retrieval effectiveness. The evaluation of these techniques has primarily, however, focused on proprietary query logs and selected samples of queries. In this paper, we suggest that anchor text, which is readily available, can be an effective substitute for a query log and study the effectiveness of a range of query reformulation techniques (including log-based stemming, substitution, and expansion) using standard TREC collections. Our results show that log-based query reformulation techniques are indeed effective with standard collections, but expansion is a much safer form of query modification than word substitution. We also show that using anchor text as a simulated query log is as least as effective as a real log for these techniques.",
"When searching large hypertext document collections, it is often possible that there are too many results available for ambiguous queries. Query refinement is an interactive process of query modification that can be used to narrow down the scope of search results. We propose a new method for automatically generating refinements or related terms to queries by mining anchor text for a large hypertext document collection. We show that the usage of anchor text as a basis for query refinement produces high quality refinement suggestions that are significantly better in terms of perceived usefulness compared to refinements that are derived using the document content. Furthermore, our study suggests that anchor text refinements can also be used to augment traditional query refinement algorithms based on query logs, since they typically differ in coverage and produce different refinements. Our results are based on experiments on an anchor text collection of a large corporate intranet."
]
} |
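A rough sketch of how candidate expansion terms can be scored from the top-N pseudo-relevant web pages, as in the first stage of the WKQE pipeline described in the abstract. The tf-itf formula used here (collection term frequency times a log-inverse page frequency) is an assumption by analogy with tf-idf, and the page snippets are hypothetical; the paper's exact definition may differ.

```python
# Scoring candidate expansion terms from pseudo-relevant page text.
import math
from collections import Counter

pages = [  # hypothetical snippets from the top-N web results
    "solar power panels convert sunlight into electricity",
    "solar energy panels and power inverters for homes",
    "history of the solar system and its planets",
]
docs = [p.split() for p in pages]
N = len(docs)

tf = Counter(t for d in docs for t in d)          # collection term frequency
df = Counter(t for d in docs for t in set(d))     # number of pages containing t
score = {t: tf[t] * math.log(1 + N / df[t]) for t in tf}

for term, s in sorted(score.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{term:12s} {s:.3f}")
```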
1908.09775 | 2969344335 | Despite the remarkable success of deep learning in pattern recognition, deep network models face the problem of training a large number of parameters. In this paper, we propose and evaluate a novel multi-path wavelet neural network architecture for image classification with far fewer trainable parameters. The model architecture consists of a multi-path layout with several levels of wavelet decompositions performed in parallel, followed by fully connected layers. These decomposition operations comprise wavelet neurons with learnable parameters, which are updated during the training phase using the back-propagation algorithm. We evaluate the performance of the introduced network using common image datasets without data augmentation (except for SVHN) and compare the results with influential deep learning models. Our findings support the possibility of reducing the number of parameters significantly in deep neural networks without compromising accuracy. | The wavelet transform is a powerful tool for processing data and developing time-frequency representations; a thorough theoretical background on wavelets is given in @cite_41 @cite_25. Applying the wavelet transform in the context of neural networks is not novel. Earlier work @cite_40 @cite_27 presented a theoretical approach for wavelet-based feed-forward neural networks. The use of wavelet-based interpolation for real-time approximation of unknown functions was researched by Bernard @cite_24; there, the results were achieved with relatively few coefficients owing to the high compression ability of wavelets. Alexandridis @cite_33 proposed a statistical model identification framework for applying wavelet networks, examining many aspects including architecture, initialization, variable selection, and model selection. The literature reports applications of wavelet-based neural networks in many different fields, such as signal classification and compression @cite_11 @cite_28 @cite_8, time series prediction @cite_35 @cite_19 @cite_44, electrical load forecasting @cite_45 @cite_5, and power disturbance recognition @cite_10. | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_8",
"@cite_41",
"@cite_28",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_44",
"@cite_45",
"@cite_5",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2734777338",
"2006447203",
"1970876195",
"",
"2290765693",
"346668388",
"2030032088",
"",
"2171506994",
"1980418485",
"2094887027",
"2288990565",
"2105693855",
"2115755118",
"2075621527"
],
"abstract": [
"The application of deep learning approaches to finance has received a great deal of attention from both investors and researchers. This study presents a novel deep learning framework where wavelet transforms (WT), stacked autoencoders (SAEs) and long-short term memory (LSTM) are combined for stock price forecasting. The SAEs for hierarchically extracted deep features is introduced into stock price forecasting for the first time. The deep learning framework comprises three stages. First, the stock price time series is decomposed by WT to eliminate noise. Second, SAEs is applied to generate deep high-level features for predicting the stock price. Third, high-level denoising features are fed into LSTM to forecast the next day’s closing price. Six market indices and their corresponding index futures are chosen to examine the performance of the proposed model. Results show that the proposed model outperforms other similar models in both predictive accuracy and profitability performance.",
"Wavelet networks (WNs) are a new class of networks which have been used with great success in a wide range of applications. However a general accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework in order to apply WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods and finally methods to construct confidence and prediction intervals. In addition the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey-Glass equation and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York and breast cancer classification. Our results have shown that the proposed algorithms produce stable and robust results indicating that our proposed framework can be applied in various applications.",
"Since EEG is one of the most important sources of information in therapy of epilepsy, several researchers tried to address the issue of decision support for such a data. In this paper, we introduce two fundamentally different approaches for designing classification models (classifiers); the traditional statistical method based on logistic regression and the emerging computationally powerful techniques based on artificial neural networks (ANNs). Logistic regression as well as feedforward error backpropagation artificial neural networks (FEBANN) and wavelet neural networks (WNN) based classifiers were developed and compared in relation to their accuracy in classification of EEG signals. In these methods we used FFT and autoregressive (AR) model by using maximum likelihood estimation (MLE) of EEG signals as an input to classification system with two discrete outputs: epileptic seizure or nonepileptic seizure. By identifying features in the signal we want to provide an automatic system that will support a physician in the diagnosing process. By applying AR with MLE in connection with WNN, we obtained novel and reliable classifier architecture. The network is constructed by the error backpropagation neural network using Morlet mother wavelet basic function as node activation function. The comparisons between the developed classifiers were primarily based on analysis of the receiver operating characteristic (ROC) curves as well as a number of scalar performance measures pertaining to the classification. The WNN-based classifier outperformed the FEBANN and logistic regression based counterpart. Within the same group, the WNN-based classifier was more accurate than the FEBANN-based classifier, and the logistic regression-based classifier.",
"",
"It is known that the force and vibration sensor signals in a turning process are sensitive to the gradually increasing flank wear. Based on this fact, this paper investigates a flank wear assessment technique in turning through force and vibration signals. Mainly to reduce the computational burden associated with the existing sensor-based methods for flank wear assessment, a so-called wavelet network is investigated. The basic idea in this new method is to optimize simultaneously the wavelet parameters (that represent signal features) and the signal-interpretation parameters (that are equivalent to neural network weights) to eliminate the feature extraction phase without increasing the computational complexity of the neural network. A neural network architecture similar to a standard one-hidden-layer feedforward neural network is used to relate sensor signal measurements to flank wear classes. A novel training algorithm for such a network is developed. The performance of this n ew method is compared with a previously developed flank wear assessment method which uses a separate feature extraction step. The proposed wavelet network can also be useful for developing signal interpretation schemes for manufacturing process monitoring, critical component monitoring, and product quality monitoring.",
"We describe a new approach to real time learning of unknown functions based on an interpolating wavelet estimation. We choose a subfamily of a wavelet basis relying on nested hierarchical allocation and update in real time our estimate of the unknown function. Such an interpolation process can be used for real time applications like neural network adaptive control, where learning an unknown function very fast is critical.",
"A new technique, wavelet network, is introduced to predict chaotic time series. By using this technique, firstly, we make accurate short-term predictions of the time series from chaotic attractors. Secondly, we make accurate predictions of the values and bifurcation structures of the time series from dynamical systems whose parameter values are changing with time. Finally we predict chaotic attractors by making long-term predictions based on remarkably few data points, where the correlation dimensions of predicted attractors are calculated and are found to be almost identical to those of actual attractors.",
"",
"A representation of a class of feedforward neural networks in terms of discrete affine wavelet transforms is developed. It is shown that by appropriate grouping of terms, feedforward neural networks with sigmoidal activation functions can be viewed as architectures which implement affine wavelet decompositions of mappings. It is shown that the wavelet transform formalism provides a mathematical framework within which it is possible to perform both analysis and synthesis of feedforward networks. For the purpose of analysis, the wavelet formulation characterizes a class of mappings which can be implemented by feedforward networks as well as reveals an exact implementation of a given mapping in this class. Spatio-spectral localization properties of wavelets can be exploited in synthesizing a feedforward network to perform a given approximation task. Two synthesis procedures based on spatio-spectral localization that reduce the training problem to one of convex optimization are outlined. >",
"A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network with conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method.",
"We propose a wavelet multiscale decomposition-based autoregressive approach for the prediction of 1-h ahead load based on historical electricity load data. This approach is based on a multiple resolution decomposition of the signal using the non-decimated or redundant Haar a trous wavelet transform whose advantage is taking into account the asymmetric nature of the time-varying data. There is an additional computational advantage in that there is no need to recompute the wavelet transform (wavelet coefficients) of the full signal if the electricity data (time series) is regularly updated. We assess results produced by this multiscale autoregressive (MAR) method, in both linear and non-linear variants, with single resolution autoregression (AR), multilayer perceptron (MLP), Elman recurrent neural network (ERN) and the general regression neural network (GRNN) models. Results are based on the New South Wales (Australia) electricity load data that is provided by the National Electricity Market Management Company (NEMMCO).",
"We consider a wavelet neural network approach for electricity load prediction. The wavelet transform is used to decompose the load into different frequency components that are predicted separately using neural networks. We firstly propose a new approach for signal extension which minimizes the border distortion when decomposing the data, outperforming three standard methods. We also compare the performance of the standard wavelet transform, which is shift variant, with a non-decimated transform, which is shift invariant. Our results show that the use of shift invariant transform considerably improves the prediction accuracy. In addition to wavelet neural network, we also present the results of wavelet linear regression, wavelet model trees and a number of baselines. Our evaluation uses two years of Australian electricity data.",
"In this paper, a prototype wavelet-based neural-network classifier for recognizing power-quality disturbances is implemented and tested under various transient events. The discrete wavelet transform (DWT) technique is integrated with the probabilistic neural-network (PNN) model to construct the classifier. First, the multiresolution-analysis technique of DWT and the Parseval's theorem are employed to extract the energy distribution features of the distorted signal at different resolution levels. Then, the PNN classifies these extracted features to identify the disturbance type according to the transient duration and the energy features. Since the proposed methodology can reduce a great quantity of the distorted signal features without losing its original property, less memory space and computing time are required. Various transient events tested, such as momentary interruption, capacitor switching, voltage sag swell, harmonic distortion, and flicker show that the classifier can detect and classify different power disturbance types efficiently.",
"Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.",
"Abstract This paper discusses the concept of adaptive wavelets and their application to signal compression and classification. When wavelet non-linear functions are used in the neurons of a neural network we can estimate the wavelet parameters—scale and translation and the relative importance (weight) of each basis function optimally with respect to approximating a given function (signal) in the minimum mean square error sense or with respect to classifying a function (signal) with minimum mean classification error. This estimation can be considered as adaptive (signal or function dependent) sampling. In this paper, we give theoretical proof to show that such an adaptive sampling scheme constitutes a frame which implies that the neural network-based estimation technique allows us to reconstruct the input signal from the adaptive wavelets such that the reconstruction is numerically stable. We apply the proposed representation and classification architectures for coding and classifying biological signals such as electrocardiogram (ECG). Experimental details of ECG coder, and classification of ECG waves such as P, QRS and T are provided. The experimental results reported in this paper are obtained by applying the proposed methodologies to standard ECG (American Heart Association) data base. The results indicate that the compression ratio of about twice better than the current techniques can be obtained and average classification accuracy of 94 can be obtained for abnormal P, QRS and T classification."
]
} |
1908.09826 | 2969846162 | In this paper, we investigate the secure connectivity of wireless sensor networks utilizing the heterogeneous random key predistribution scheme, where each sensor node is classified as class- @math with probability @math for @math with @math and @math . A class- @math sensor is given @math cryptographic keys selected uniformly at random from a key pool of size @math . After deployment, two nodes can communicate securely if they share at least one cryptographic key. We consider the wireless connectivity of the network using a heterogeneous on-off channel model, where the channel between a class- @math node and a class- @math node is on (respectively, off) with probability @math (respectively, @math ) for @math . Collectively, two sensor nodes are adjacent if they i) share a cryptographic key and ii) have a wireless channel in between that is on. We model the overall network using a composite random graph obtained by the intersection of inhomogeneous random key graphs (IRKG) @math with inhomogeneous Erdős–Rényi graphs (IERG) @math . The former graph is naturally induced by the heterogeneous random key predistribution scheme, while the latter is induced by the heterogeneous on-off channel model. More specifically, two nodes are adjacent in the composite graph if they are i) adjacent in the IRKG, i.e., share a cryptographic key and ii) adjacent in the IERG, i.e., have an available wireless channel. We investigate the connectivity of the composite random graph and present conditions (in the form of zero-one laws) on how to scale its parameters so that it i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime. | The connectivity (respectively, @math -connectivity) of wireless sensor networks secured by the classical scheme under a uniform on-off channel model was investigated in @cite_33 (respectively, @cite_11 ). The network was modeled by a composite random graph formed by the intersection of random key graphs @math (induced by the scheme) with graphs @math (induced by the uniform on-off channel model). Our paper generalizes this model to a heterogeneous setting where different nodes could be given different numbers of keys depending on their respective classes, and the availability of a wireless channel between two nodes depends on their respective classes. Hence, our model closely resembles emerging wireless sensor networks, which are essentially complex and heterogeneous. | {
"cite_N": [
"@cite_33",
"@cite_11"
],
"mid": [
"2165958223",
"2107866581"
],
"abstract": [
"We investigate the secure connectivity of wireless sensor networks under the random key distribution scheme of Eschenauer and Gligor. Unlike recent work which was carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as on off channels. We present conditions on how to scale the model parameters so that the network: 1) has no secure node which is isolated and 2) is securely connected, both with high probability when the number of sensor nodes becomes large. The results are given in the form of full zero-one laws, and constitute the first complete analysis of the EG scheme under non-full visibility. Through simulations, these zero-one laws are shown to be valid also under a more realistic communication model (i.e., the disk model). The relations to the Gupta and Kumar's conjecture on the connectivity of geometric random graphs with randomly deleted edges are also discussed.",
"Random key graphs form a class of random intersection graphs that are naturally induced by the random key predistribution scheme of Eschenauer and Gligor for securing wireless sensor network (WSN) communications. Random key graphs have received much attention recently, owing in part to their wide applicability in various domains, including recommender systems, social networks, secure sensor networks, clustering and classification analysis, and cryptanalysis to name a few. In this paper, we study connectivity properties of random key graphs in the presence of unreliable links. Unreliability of graph links is captured by independent Bernoulli random variables, rendering them to be on or off independently from each other. The resulting model is an intersection of a random key graph and an Erdős–Renyi graph, and is expected to be useful in capturing various real-world networks; e.g., with secure WSN applications in mind, link unreliability can be attributed to harsh environmental conditions severely impairing transmissions. We present conditions on how to scale this model’s parameters so that: 1) the minimum node degree in the graph is at least @math and 2) the graph is @math -connected, both with high probability as the number of nodes becomes large. The results are given in the form of zero-one laws with critical thresholds identified and shown to coincide for both graph properties. These findings improve the previous results by Rybarczyk on @math -connectivity of random key graphs (with reliable links), as well as the zero-one laws by Yagan on one-connectivity of random key graphs with unreliable links."
]
} |
1908.09826 | 2969846162 | In this paper, we investigate the secure connectivity of wireless sensor networks utilizing the heterogeneous random key predistribution scheme, where each sensor node is classified as class- @math with probability @math for @math with @math and @math . A class- @math sensor is given @math cryptographic keys selected uniformly at random from a key pool of size @math . After deployment, two nodes can communicate securely if they share at least one cryptographic key. We consider the wireless connectivity of the network using a heterogeneous on-off channel model, where the channel between a class- @math node and a class- @math node is on (respectively, off) with probability @math (respectively, @math ) for @math . Collectively, two sensor nodes are adjacent if they i) share a cryptographic key and ii) have a wireless channel in between that is on. We model the overall network using a composite random graph obtained by the intersection of inhomogeneous random key graphs (IRKG) @math with inhomogeneous Erdős–Rényi graphs (IERG) @math . The former graph is naturally induced by the heterogeneous random key predistribution scheme, while the latter is induced by the heterogeneous on-off channel model. More specifically, two nodes are adjacent in the composite graph if they are i) adjacent in the IRKG, i.e., share a cryptographic key and ii) adjacent in the IERG, i.e., have an available wireless channel. We investigate the connectivity of the composite random graph and present conditions (in the form of zero-one laws) on how to scale its parameters so that it i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime. | In @cite_7 , Yağan considered the connectivity of wireless sensor networks secured by the heterogeneous random key predistribution scheme under the full visibility assumption, i.e., all wireless channels are available and reliable; hence, the only condition for two nodes to be adjacent is to share a key. It is clear that the full visibility assumption is not likely to hold in most practical deployments of wireless sensor networks as the wireless medium is typically unreliable. Our paper extends the results given in @cite_7 to more practical scenarios where the wireless connectivity is taken into account through the heterogeneous on-off channel model. In fact, by setting @math for @math and each @math (i.e., by assuming that all wireless channels are on), our results reduce to those given in @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2196704028"
],
"abstract": [
"We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph."
]
} |
1908.09826 | 2969846162 | In this paper, we investigate the secure connectivity of wireless sensor networks utilizing the heterogeneous random key predistribution scheme, where each sensor node is classified as class- @math with probability @math for @math with @math and @math . A class- @math sensor is given @math cryptographic keys selected uniformly at random from a key pool of size @math . After deployment, two nodes can communicate securely if they share at least one cryptographic key. We consider the wireless connectivity of the network using a heterogeneous on-off channel model, where the channel between a class- @math node and a class- @math node is on (respectively, off) with probability @math (respectively, @math ) for @math . Collectively, two sensor nodes are adjacent if they i) share a cryptographic key and ii) have a wireless channel in between that is on. We model the overall network using a composite random graph obtained by the intersection of inhomogeneous random key graphs (IRKG) @math with inhomogeneous Erdős–Rényi graphs (IERG) @math . The former graph is naturally induced by the heterogeneous random key predistribution scheme, while the latter is induced by the heterogeneous on-off channel model. More specifically, two nodes are adjacent in the composite graph if they are i) adjacent in the IRKG, i.e., share a cryptographic key and ii) adjacent in the IERG, i.e., have an available wireless channel. We investigate the connectivity of the composite random graph and present conditions (in the form of zero-one laws) on how to scale its parameters so that it i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime. | In comparison with the existing literature on similar models, our result can be seen to extend the work by Eletreby and Yağan in @cite_8 (respectively, @cite_18 ). Therein, the authors established a zero-one law for the @math -connectivity (respectively, @math -connectivity) of @math , i.e., for a wireless sensor network under the heterogeneous key predistribution scheme and a uniform on-off channel model. Although these results form a crucial starting point towards the analysis of the heterogeneous key predistribution scheme under a wireless connectivity model, they are limited to a uniform on-off channel model where all channels are on (respectively, off) with the same probability @math (respectively, @math ). The heterogeneous on-off channel model accounts for the fact that different nodes could have different radio capabilities, or could be deployed in locations with different channel characteristics. In addition, it offers the flexibility of modeling several interesting scenarios, such as when nodes of the same type are more (or less) likely to be adjacent with one another than with nodes belonging to other classes. Indeed, by setting @math for @math and each @math , our results reduce to those given in @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_8"
],
"mid": [
"2555492793",
"2963495066"
],
"abstract": [
"We consider secure and reliable connectivity in wireless sensor networks that utilize the heterogeneous random key predistribution scheme. We model the unreliability of wireless links by an on off channel model that induces an Erdős-Renyi graph, while the heterogeneous scheme induces an inhomogeneous random key graph. The overall network can thus be modeled by the intersection of both graphs. We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model, so that with high probability: i) all of its nodes are connected to at least @math other nodes, i.e., the minimum node degree of the graph is no less than @math , and ii) the graph is @math -connected, i.e., the graph remains connected even if any @math nodes leave the network. These results are shown to complement and generalize several previous results in the literature. We also present numerical results to support our findings in the finite-node regime. Finally, we demonstrate via simulations that our results are also useful when the on off channel model is replaced with the more realistic disk communication model .",
"We investigate the connectivity of a wireless sensor network (WSN) secured by the heterogeneous key predistribution scheme under an independent on off channel model. The heterogeneous scheme induces an inhomogeneous random key graph, denoted by @math and the on off channel model induces an Erdős-Renyi graph, denoted by @math . Hence, the overall random graph modeling the WSN is obtained by the intersection of @math and @math . We present conditions on how to scale the parameters of the intersecting graph with respect to the network size @math such that the graph i) has no isolated nodes and ii) is connected, both with high probability (whp) as the number of nodes gets large. Our results are supported by a simulation study demonstrating that i) despite their asymptotic nature, our results can in fact be useful in designing finite -node WSNs so that they achieve secure connectivity whp; and ii) despite the simplicity of the on off communication model, the probability of connectivity in the resulting WSN approximates very well the case where the disk model is used."
]
} |
1908.10017 | 2970793270 | The state-of-the-art DNN structures involve intensive computation and high memory storage. To mitigate the challenges, the memristor crossbar array has emerged as an intrinsically suitable matrix computation and low-power acceleration framework for DNN applications. However, a high-accuracy solution for extreme model compression on the memristor crossbar array architecture is still waiting to be unraveled. In this paper, we propose a memristor-based DNN framework which combines both structured weight pruning and quantization by incorporating the alternating direction method of multipliers (ADMM) algorithm for better pruning and quantization performance. We also discover the non-optimality of the ADMM solution in weight pruning and the unused data path in a structured pruned model. Motivated by these discoveries, we design a software-hardware co-optimization framework which contains the first proposed Network Purification and Unused Path Removal algorithms targeting the post-processing of a structured pruned model after the ADMM steps. By taking memristor hardware constraints into our whole framework, we achieve an extremely high compression ratio on the state-of-the-art neural network structures with minimum accuracy loss. For quantizing the structured pruned model, our framework achieves nearly no accuracy loss after quantizing weights to an 8-bit memristor weight representation. We share our models at anonymous link this https URL. | Heuristic weight pruning methods @cite_4 are widely used in neuromorphic computing designs to reduce the weight storage and computing delay @cite_20 . @cite_20 implemented weight pruning techniques on a neuromorphic computing system, but the irregular pruning it used caused an unbalanced workload, greater circuit overheads, and an extra memory requirement for indices. To overcome these limitations, @cite_21 proposed group connection deletion, which structurally prunes connections to reduce routing congestion between memristor crossbar arrays. | {
"cite_N": [
"@cite_21",
"@cite_4",
"@cite_20"
],
"mid": [
"2593769476",
"2963674932",
"2752640714"
],
"abstract": [
"Synapse crossbar is an elementary structure in neuromorphic computing systems (NCS). However, the limited size of crossbars and heavy routing congestion impede the NCS implementation of large neural networks. In this paper, we propose a two-step framework (namely, group scissor) to scale NCS designs to large neural networks. The first step rank clipping integrates low-rank approximation into the training to reduce total crossbar area. The second step is group connection deletion, which structurally prunes connections to reduce routing congestion between crossbars. Tested on convolutional neural networks of LeNet on MNIST database and ConvNet on CIFAR-10 database, our experiments show significant reduction of crossbar and routing area in NCS designs. Without accuracy loss, rank clipping reduces the total crossbar area to 13.62 or 51.81 in the NCS design of LeNet or ConvNet, respectively. The following group connection deletion further decreases the routing area of LeNet or ConvNet to 8.1 or 52.06 , respectively.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"Implementation of Neuromorphic Systems using post Complementary Met al-Oxide-Semiconductor (CMOS) technology based Memristive Crossbar Array (MCA) has emerged as a promising solution to enable low-power acceleration of neural networks. However, the recent trend to design Deep Neural Networks (DNNs) for achieving human-like cognitive abilities poses significant challenges towards the scalable design of neuromorphic systems (due to the increase in computation storage demands). Network pruning [7] is a powerful technique to remove redundant connections for designing optimally connected (maximally sparse) DNNs. However, such pruning techniques induce irregular connections that are incoherent to the crossbar structure. Eventually they produce DNNs with highly inefficient hardware realizations (in terms of area and energy). In this work, we propose TraNNsformer - an integrated training framework that transforms DNNs to enable their efficient realization on MCA-based systems. TraNNsformer first prunes the connectivity matrix while forming clusters with the remaining connections. Subsequently, it retrains the network to fine tune the connections and reinforce the clusters. This is done iteratively to transform the original connectivity into an optimally pruned and maximally clustered mapping. We evaluated the proposed framework by transforming different Multi-Layer Perceptron (MLP) based Spiking Neural Networks (SNNs) on a wide range of datasets (MNIST, SVHN and CIFAR10) and executing them on MCA-based systems to analyze the area and energy benefits. Without accuracy loss, TraNNsformer reduces the area (energy) consumption by 28 - 55 (49 - 67 ) with respect to the original network. Compared to network pruning, TraNNsformer achieves 28 - 49 (15 - 29 ) area (energy) savings. Furthermore, TraNNsformer is a technology-aware framework that allows mapping a given DNN to any MCA size permissible by the memristive technology for reliable operations."
]
} |
1908.10017 | 2970793270 | The state-of-the-art DNN structures involve intensive computation and high memory storage. To mitigate the challenges, the memristor crossbar array has emerged as an intrinsically suitable matrix computation and low-power acceleration framework for DNN applications. However, a high-accuracy solution for extreme model compression on the memristor crossbar array architecture is still waiting to be unraveled. In this paper, we propose a memristor-based DNN framework which combines both structured weight pruning and quantization by incorporating the alternating direction method of multipliers (ADMM) algorithm for better pruning and quantization performance. We also discover the non-optimality of the ADMM solution in weight pruning and the unused data path in a structured pruned model. Motivated by these discoveries, we design a software-hardware co-optimization framework which contains the first proposed Network Purification and Unused Path Removal algorithms targeting the post-processing of a structured pruned model after the ADMM steps. By taking memristor hardware constraints into our whole framework, we achieve an extremely high compression ratio on the state-of-the-art neural network structures with minimum accuracy loss. For quantizing the structured pruned model, our framework achieves nearly no accuracy loss after quantizing weights to an 8-bit memristor weight representation. We share our models at anonymous link this https URL. | Weight quantization can mitigate the hardware imperfections of memristors, including state drift and process variations, caused by the imperfect fabrication process or by the device features themselves @cite_3 @cite_10 . @cite_1 presented a technique to reduce the overhead of Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs) in resistive random-access memory (ReRAM) neuromorphic computing systems. They first normalized the data, and then quantized the intermediary data to 1-bit values. This can be directly used as the analog input for the ReRAM crossbar and, hence, avoids the need for DACs. | {
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_3"
],
"mid": [
"2408724663",
"2233116163",
"2748818695"
],
"abstract": [
"Convolutional Neural Network (CNN) is a powerful technique widely used in computer vision area, which also demands much more computations and memory resources than traditional solutions. The emerging met al-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential on neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely Analog-to-Digital Converters (AD-Cs) and Digital-to-Analog Converters (DACs), consume most of the area and energy of RRAM-based CNN design due to the large amount of intermediate data in CNN. In this paper, we propose an energy efficient structure for RRAM-based CNN. Based on the analysis of data distribution, a quantization method is proposed to transfer the intermediate data into 1 bit and eliminate DACs. An energy efficient structure using input data as selection signals is proposed to reduce the ADC cost for merging results of multiple crossbars. The experimental results show that the proposed method and structure can save 80 area and more than 95 energy while maintaining the same or comparable classification accuracy of CNN on MNIST.",
"Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 6× speed-up and 15 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.",
"Quantization is considered as one of the most effective methods to optimize the inference cost of neural network models for their deployment to mobile and embedded systems, which have tight resource constraints. In such approaches, it is critical to provide low-cost quantization under a tight accuracy loss constraint (e.g., 1 ). In this paper, we propose a novel method for quantizing weights and activations based on the concept of weighted entropy. Unlike recent work on binary-weight neural networks, our approach is multi-bit quantization, in which weights and activations can be quantized by any number of bits depending on the target accuracy. This facilitates much more flexible exploitation of accuracy-performance trade-off provided by different levels of quantization. Moreover, our scheme provides an automated quantization flow based on conventional training algorithms, which greatly reduces the design-time effort to quantize the network. According to our extensive evaluations based on practical neural network models for image classification (AlexNet, GoogLeNet and ResNet-50 101), object detection (R-FCN with 50-layer ResNet), and language modeling (an LSTM network), our method achieves significant reductions in both the model size and the amount of computation with minimal accuracy loss. Also, compared to existing quantization schemes, ours provides higher accuracy with a similar resource constraint and requires much lower design effort."
]
} |
1908.09931 | 2971181831 | At present, object recognition studies are mostly conducted in a closed lab setting, with the classes in the test phase typically included in the training phase. However, the real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the \"independent and identically distributed\" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. | Open set recognition was first introduced in @cite_19 , which considers the problem of detecting unseen classes that are never seen in the training phase @cite_2 @cite_27 . Many open-set recognition methods based on SVM @cite_31 @cite_24 and NCM @cite_9 have since been proposed, but all are built on shallow models for classification. @cite_19 formulated the problem of open set recognition for a static one-vs-all learning scenario by balancing open space risk while minimizing empirical error, going on to extend the work to multi-class settings by introducing a compact abating probability model @cite_34 . For the scalability problem, @cite_7 proposed the use of a scalable Weibull-based calibration for hypothesis generation to model matching scores, but did not address its use for the general recognition problem. @cite_9 proposed a novel detection method dealing with a deep model architecture by introducing an OpenMax layer, while @cite_16 proposed a one-class classification based on the DCNN which can be used as a novelty detector and outlier detector for a single known class. However, none of these methods have addressed the problem of how to incrementally update their model after a new class has been recognized. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_16",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_34"
],
"mid": [
"2114759747",
"2963149653",
"2783748519",
"2039229101",
"2119880843",
"",
"2099064293",
"2015563892",
"2018459374"
],
"abstract": [
"Algorithms based on RANSAC that estimate models using feature correspondences between images can slow down tremendously when the percentage of correct correspondences (inliers) is small. In this paper, we present a probabilistic parametric model that allows us to assign confidence values for each matching correspondence and therefore accelerates the generation of hypothesis models for RANSAC under these conditions. Our framework leverages Extreme Value Theory to accurately model the statistics of matching scores produced by a nearest-neighbor feature matcher. Using a new algorithm based on this model, we are able to estimate accurate hypotheses with RANSAC at low inlier ratios significantly faster than previous state-of-the-art approaches, while still performing comparably when the number of inliers is large. We present results of homography and fundamental matrix estimation experiments for both SIFT and SURF matches that demonstrate that our method leads to accurate and fast model estimations.",
"Deep networks have produced significant gains for various visual recognition problems, leading to high impact academic and commercial applications. Recent work in deep networks highlighted that it is easy to generate images that humans would never classify as a particular object class, yet networks classify such images high confidence as that given class – deep network are easily fooled with images humans do not consider meaningful. The closed set nature of deep networks forces them to choose from one of the known classes leading to such artifacts. Recognition in the real world is open set, i.e. the recognition system should reject unknown unseen classes at test time. We present a methodology to adapt deep networks for open set recognition, by introducing a new model layer, OpenMax, which estimates the probability of an input being from an unknown class. A key element of estimating the unknown probability is adapting Meta-Recognition concepts to the activation patterns in the penultimate layer of the network. Open-Max allows rejection of \"fooling\" and unrelated open set images presented to the system, OpenMax greatly reduces the number of obvious errors made by a deep network. We prove that the OpenMax concept provides bounded open space risk, thereby formally providing an open set recognition solution. We evaluate the resulting open set deep networks using pre-trained networks from the Caffe Model-zoo on ImageNet 2012 validation data, and thousands of fooling and open set images. The proposed OpenMax model significantly outperforms open set recognition accuracy of basic deep networks as well as deep networks with thresholding of SoftMax probabilities.",
"We propose a deep learning-based solution for the problem of feature learning in one-class classification. The proposed method operates on top of a Convolutional Neural Network (CNN) of choice and produces descriptive features while maintaining a low intra-class variance in the feature space for the given class. For this purpose two loss functions, compactness loss and descriptiveness loss are proposed along with a parallel CNN architecture. A template matching-based framework is introduced to facilitate the testing process. Extensive experiments on publicly available anomaly detection, novelty detection and mobile active authentication datasets show that the proposed Deep One-Class (DOC) classification method achieves significant improvements over the state-of-the-art.",
"In this paper, we propose using local learning for multiclass novelty detection, a framework that we call local novelty detection. Estimating the novelty of a new sample is an extremely challenging task due to the large variability of known object categories. The features used to judge on the novelty are often very specific for the object in the image and therefore we argue that individual novelty models for each test sample are important. Similar to human experts, it seems intuitive to first look for the most related images thus filtering out unrelated data. Afterwards, the system focuses on discovering similarities and differences to those images only. Therefore, we claim that it is beneficial to solely consider training images most similar to a test sample when deciding about its novelty. Following the principle of local learning, for each test sample a local novelty detection model is learned and evaluated. Our local novelty score turns out to be a valuable indicator for deciding whether the sample belongs to a known category from the training set or to a new, unseen one. With our local novelty detection approach, we achieve state-of-the-art performance in multi-class novelty detection on two popular visual object recognition datasets, Caltech-256 and Image Net. We further show that our framework: (i) can be successfully applied to unknown face detection using the Labeled-Faces-in-the-Wild dataset and (ii) outperforms recent work on attribute-based unfamiliar class detection in fine-grained recognition of bird species on the challenging CUB-200-2011 dataset.",
"To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of “closed set” recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is “open set” recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel “1-vs-set machine,” which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.",
"",
"In this paper we propose a probabilistic model for online document clustering. We use non-parametric Dirichlet process prior to model the growing number of clusters, and use a prior of general English language model as the base distribution to handle the generation of novel clusters. Furthermore, cluster uncertainty is modeled with a Bayesian Dirichlet-multinomial distribution. We use empirical Bayes method to estimate hyperparameters based on a historical dataset. Our probabilistic model is applied to the novelty detection task in Topic Detection and Tracking (TDT) and compared with existing approaches in the literature.",
"We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new metric learning approach for the latter. We also introduce an extension of the NCM classifier to allow for richer class representations. Experiments on the ImageNet 2010 challenge dataset, which contains over 106 training images of 1,000 classes, show that, surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier. Moreover, the NCM performance is comparable to that of linear SVMs which obtain current state-of-the-art performance. Experimentally, we study the generalization performance to classes that were not used to learn the metrics. Using a metric learned on 1,000 classes, we show results for the ImageNet-10K dataset which contains 10,000 classes, and obtain performance that is competitive with the current state-of-the-art while being orders of magnitude faster. Furthermore, we show how a zero-shot class prior based on the ImageNet hierarchy can improve performance when few training images are available.",
"Real-world tasks in computer vision often touch upon open set recognition: multi-class recognition with incomplete knowledge of the world and many unknown inputs. Recent work on this problem has proposed a model incorporating an open space risk term to account for the space beyond the reasonable support of known classes. This paper extends the general idea of open space risk limiting classification to accommodate non-linear classifiers in a multiclass setting. We introduce a new open set recognition model called compact abating probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical extreme value theory for score calibration with one-class and binary support vector machines. Our experiments show that the W-SVM is significantly better for open set object detection and OCR problems when compared to the state-of-the-art for the same tasks."
]
} |
1908.09931 | 2971181831 | At present, object recognition studies are mostly conducted in a closed lab setting, with the classes in the test phase typically included in the training phase. However, the real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the \"independent and identically distributed\" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. | Different from the incremental learning problem, other researchers have proposed tree-based classification methods to address the scalability of object categories in large-scale visual recognition challenges @cite_6 @cite_10 @cite_18 @cite_4 . Recent advances in the deep learning domain @cite_1 @cite_13 of scalable learning have resulted in state-of-the-art performance, which is extremely useful when the goal is to maximize classification and recognition performance. These systems assume a priori availability of comprehensive training data containing both images and categories. However, adapting such methods to a dynamic learning scenario becomes extremely challenging. Adding object categories requires retraining the entire system, which could be infeasible for many applications. As a result, these methods are scalable but not incremental. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_10"
],
"mid": [
"1686810756",
"2157065343",
"2031489346",
"2618530766",
"1851597118",
"1967732418"
],
"abstract": [
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"Class hierarchies are commonly used to reduce the complexity of the classification problem. This is crucial when dealing with a large number of categories. In this work, we evaluate class hierarchies currently constructed for visual recognition. We show that top-down as well as bottom-up approaches, which are commonly used to automatically construct hierarchies, incorporate assumptions about the separability of classes. Those assumptions do not hold for visual recognition of a large number of object categories. We therefore propose a modification which is appropriate for most top-down approaches. It allows to construct class hierarchies that postpone decisions in the presence of uncertainty and thus provide higher recognition accuracy. We also compare our method to a one-against-all approach and show how to control the speed-for-accuracy trade-off with our method. For the experimental evaluation, we use the Caltech-256 visual object classes dataset and compare to state-of-the-art methods.",
"Large-scale recognition problems with thousands of classes pose a particular challenge because applying the classifier requires more computation as the number of classes grows. The label tree model integrates classification with the traversal of the tree so that complexity grows logarithmically. In this paper, we show how the parameters of the label tree can be found using maximum likelihood estimation. This new probabilistic learning technique produces a label tree with significantly improved recognition accuracy."
]
} |
1908.09931 | 2971181831 | At present, object recognition studies are mostly conducted in a closed lab setting, with the classes in the test phase typically included in the training phase. However, the real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the \"independent and identically distributed\" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. | Open world recognition considers both detecting and learning to distinguish the new classes. @cite_38 proposed an NCM learning algorithm that relies on the estimation of a determined threshold in conjunction with the threshold counts on some known new classes. For a more practical situation, @cite_28 proposed an online-learning approach that involves the NBC classifier instead of NCM, while @cite_8 proposed an online learning method for streaming data where new classes arrive continuously. It is worth noting that Bayesian non-parametric models @cite_32 @cite_12 are not related to our problem. Though they were originally proposed to identify mixed components or clusters in the test data that may cover unseen classes, their clusters are not themselves classes, and multiple clusters must be mapped to one class manually. | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_12"
],
"mid": [
"1917989004",
"2963183879",
"2340646384",
"2030962165",
"2951532892"
],
"abstract": [
"With the of advent rich classification models and high computational power visual recognition systems have found many operational applications. Recognition in the real world poses multiple challenges that are not apparent in controlled lab environments. The datasets are dynamic and novel categories must be continuously detected and then added. At prediction time, a trained system has to deal with myriad unseen categories. Operational systems require minimal downtime, even to learn. To handle these operational issues, we present the problem of Open World Recognition and formally define it. We prove that thresholding sums of monotonically decreasing functions of distances in linearly transformed feature space can balance “open space risk” and empirical risk. Our theory extends existing algorithms for open world recognition. We present a protocol for evaluation of open world recognition systems. We present the Nearest Non-Outlier (NNO) algorithm that evolves model efficiently, adding object categories incrementally while detecting outliers and managing open space risk. We perform experiments on the ImageNet dataset with 1.2M+ images to validate the effectiveness of our method on large scale visual recognition tasks. NNO consistently yields superior results on open world recognition.",
"This paper investigates an important problem in stream mining, i.e., classification under streaming emerging new classes or SENC . The SENC problem can be decomposed into three subproblems: detecting emerging new classes, classifying known classes, and updating models to integrate each new class as part of known classes. The common approach is to treat it as a classification problem and solve it using either a supervised learner or a semi-supervised learner. We propose an alternative approach by using unsupervised learning as the basis to solve this problem. The proposed method employs completely-random trees which have been shown to work well in unsupervised learning and supervised learning independently in the literature. The completely-random trees are used as a single common core to solve all three subproblems: unsupervised learning, supervised learning, and model update on data streams. We show that the proposed unsupervised-learning-focused method often achieves significantly better outcomes than existing classification-focused methods.",
"As we enter into the big data age and an avalanche of images have become readily available, recognition systems face the need to move from close, lab settings where the number of classes and training data are fixed, to dynamic scenarios where the number of categories to be recognized grows continuously over time, as well as new data providing useful information to update the system. Recent attempts, like the open world recognition framework, tried to inject dynamics into the system by detecting new unknown classes and adding them incrementally, while at the same time continuously updating the models for the known classes. incrementally adding new classes and detecting instances from unknown classes, while at the same time continuously updating the models for the known classes. In this paper we argue that to properly capture the intrinsic dynamic of open world recognition, it is necessary to add to these aspects (a) the incremental learning of the underlying metric, (b) the incremental estimate of confidence thresholds for the unknown classes, and (c) the use of local learning to precisely describe the space of classes. We extend three existing metric learning algorithms towards these goals by using online metric learning. Experimentally we validate our approach on two large-scale datasets in different learning scenarios. For all these scenarios our proposed methods outperform their non-online counterparts. We conclude that local and online learning is important to capture the full dynamics of open world recognition.",
"We present a new direction for semi-supervised learning where self-adjusting generative models replace fixed ones and unlabeled data can potentially improve learning even when labeled data is only partially-observed. We model each class data by a mixture model and use a hierarchical Dirichlet process (HDP) to model observed as well as unobserved classes. We extend the standard HDP model to accommodate unlabeled samples and introduce a new sharing strategy, within the context of Gaussian mixture models, that restricts sharing with covariance matrices while leaving the mean vectors free. Our research is mainly driven by real-world applications with evolving data-generating mechanisms where obtaining a fully-observed labeled data set is impractical. We demonstrate the feasibility of the proposed approach for semi-supervised learning in two such applications.",
"We present a framework for online inference in the presence of a nonexhaustively defined set of classes that incorporates supervised classification with class discovery and modeling. A Dirichlet process prior (DPP) model defined over class distributions ensures that both known and unknown class distributions originate according to a common base distribution. In an attempt to automatically discover potentially interesting class formations, the prior model is coupled with a suitably chosen data model, and sequential Monte Carlo sampling is used to perform online inference. Our research is driven by a biodetection application, where a new class of pathogen may suddenly appear, and the rapid increase in the number of samples originating from this class indicates the onset of an outbreak."
]
} |
1908.09648 | 2969329859 | This paper is motivated by real-life applications of bi-objective optimization. Having many non-dominated solutions, one wishes to cluster the Pareto front using Euclidean distances. The p-center problems, both in the discrete and continuous versions, are proven solvable in polynomial time with a common dynamic programming algorithm. Having @math points to partition in @math clusters, the complexity is proven in @math (resp @math ) time and @math memory space for the continuous (resp discrete) @math -center problem. @math -center problems have complexities in @math . To speed up the algorithm, parallelization issues are discussed. A posteriori, these results allow an application inside multi-objective heuristics to archive partial Pareto Fronts. | Selecting or clustering points in a PF has been studied with applications to MOO algorithms. Firstly, a motivation is to store representative elements of a large PF (PFs of exponential size are possible @cite_45 ) for exact methods or population meta-heuristics. Maximizing the quality of discrete representations of Pareto sets was studied with the hypervolume measure in the Hypervolume Subset Selection (HSS) problem @cite_31 @cite_43 . Secondly, a crucial issue in the design of population meta-heuristics for MOO problems is to select relevant solutions for operators such as the cross-over or mutation phases in evolutionary algorithms @cite_2 @cite_38 . Selecting knee-points is another known approach for such goals @cite_42 . | {
"cite_N": [
"@cite_38",
"@cite_42",
"@cite_43",
"@cite_45",
"@cite_2",
"@cite_31"
],
"mid": [
"1990421110",
"2729527533",
"2039191483",
"",
"1613273921",
"2002682954"
],
"abstract": [
"In many multiobjective optimization problems, the Pareto Fronts and Sets contain a large number of solutions and this makes it difficult for the decision maker to identify the preferred ones. A possible way to alleviate this difficulty is to present to the decision maker a subset of a small number of solutions representatives of the Pareto Front characteristics. In this paper, a two-steps procedure is presented, aimed at identifying a limited number of representative solutions to be presented to the decision maker. Pareto Front solutions are first clustered into \"families\", which are then synthetically represented by a \"head-of-the-family\" solution. Level Diagrams are then used to represent, analyse and interpret the Pareto Front reduced to its head-of-the-family solutions. The procedure is applied to a reliability allocation case study of literature, in decision-making contexts both without or with explicit preferences by the decision maker on the objectives to be optimized.",
"The current boom of Unmanned Aerial Vehicles (UAVs) is increasing the number of potential industrial and research applications. One of the most demanded topics in this area is related to the automated planning of a UAVs swarm, controlled by one or several Ground Control Stations (GCSs). In this context, there are several variables that influence the selection of the most appropriate plan, such as the makespan, the cost or the risk of the mission. This problem can be seen as a Multi-Objective Optimization Problem (MOP). On previous approaches, the problem was modelled as a Constraint Satisfaction Problem (CSP) and solved using a Multi-Objective Genetic Algorithm (MOGA), so a Pareto Optimal Frontier (POF) was obtained. The main problem with this approach is based on the large number of obtained solutions, which hinders the selection of the best solution. This paper presents a new algorithm that has been designed to obtain the most significant solutions in the POF. This approach is based on Knee Points applied to MOGA. The new algorithm has been proved in a real scenario with different number of optimization variables, the experimental results show a significant improvement of the algorithm performance.",
"One way of solving multiple objective mathematical programming problems is finding discrete representations of the efficient set. A modified goal of finding good discrete representations of the efficient set would contribute to the practicality of vector maximization algorithms. We define coverage, uniformity and cardinality as the three attributes of quality of discrete representations and introduce a framework that includes these attributes in which discrete representations can be evaluated, compared to each other, and judged satisfactory or unsatisfactory by a Decision Maker. We provide simple mathematical programming formulations that can be used to compute the coverage error of a given discrete representation. Our formulations are practically implementable when the problem under study is a multiobjective linear programming problem. We believe that the interactive algorithms along with the vector maximization methods can make use of our framework and its tools.",
"",
"A unified view of metaheuristics This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for: Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems Designing efficient metaheuristics for multi-objective optimization problems Designing hybrid, parallel, and distributed metaheuristics Implementing metaheuristics on sequential and parallel machines Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning; and graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.",
"Optimizing the hypervolume indicator within evolutionary multiobjective optimizers has become popular in the last years. Recently, the indicator has been generalized to the weighted case to incorporate various user preferences into hypervolume-based search algorithms. There are two main open questions in this context: (i) how does the specified weight influence the distribution of a fixed number of points that maximize the weighted hypervolume indicator? (ii) how can the user articulate her preferences easily without specifying a certain weight distribution function? In this paper, we tackle both questions. First, we theoretically investigate optimal distributions of μ points that maximize the weighted hypervolume indicator. Second, based on the obtained theoretical results, we propose a new approach to articulate user preferences within biobjective hypervolume-based optimization in terms of specifying a desired density of points on a predefined (imaginary) Pareto front. Within this approach, a new exact algorithm based on dynamic programming is proposed which selects the set of μ points that maximizes the (weighted) hypervolume indicator. Experiments on various test functions show the usefulness of this new preference articulation approach and the agreement between theory and practice."
]
} |
1908.09648 | 2969329859 | This paper is motivated by real-life applications of bi-objective optimization. Having many non-dominated solutions, one wishes to cluster the Pareto front using Euclidean distances. The p-center problems, both in the discrete and continuous versions, are proven solvable in polynomial time with a common dynamic programming algorithm. Having @math points to partition in @math clusters, the complexity is proven in @math (resp @math ) time and @math memory space for the continuous (resp discrete) @math -center problem. @math -center problems have complexities in @math . To speed up the algorithm, parallelization issues are discussed. A posteriori, these results allow an application inside multi-objective heuristics to archive partial Pareto Fronts. | The HSS problem, maximizing the representativity of @math solutions among a PF of size @math , is known to be NP-hard in dimension 3 (and greater) since @cite_10 . An exact algorithm in @math and a polynomial-time approximation scheme for any constant dimension @math are also provided in @cite_10 . The 2d case is solvable in polynomial time thanks to a DP algorithm running in @math time and @math space, provided in @cite_31 . The time complexity of the DP algorithm was improved to @math by @cite_26 and to @math by @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_31",
"@cite_26",
"@cite_10"
],
"mid": [
"2117068724",
"2002682954",
"2008049315",
"2624826130"
],
"abstract": [
"The hypervolume subset selection problem consists of finding a subset, with a given cardinality k, of a set of nondominated points that maximizes the hypervolume indicator. This problem arises in selection procedures of evolutionary algorithms for multiobjective optimization, for which practically efficient algorithms are required. In this article, two new formulations are provided for the two-dimensional variant of this problem. The first is a linear integer programming formulation that can be solved by solving its linear programming relaxation. The second formulation is a k-link shortest path formulation on a special digraph with the Monge property that can be solved by dynamic programming in time. This improves upon the result of in Bader 2009, and slightly improves upon the result of in Bringmann eti¾ al. 2014b, which was developed independently from this work using different techniques. Numerical results are shown for several values of n and k.",
"Optimizing the hypervolume indicator within evolutionary multiobjective optimizers has become popular in the last years. Recently, the indicator has been generalized to the weighted case to incorporate various user preferences into hypervolume-based search algorithms. There are two main open questions in this context: (i) how does the specified weight influence the distribution of a fixed number of points that maximize the weighted hypervolume indicator? (ii) how can the user articulate her preferences easily without specifying a certain weight distribution function? In this paper, we tackle both questions. First, we theoretically investigate optimal distributions of μ points that maximize the weighted hypervolume indicator. Second, based on the obtained theoretical results, we propose a new approach to articulate user preferences within biobjective hypervolume-based optimization in terms of specifying a desired density of points on a predefined (imaginary) Pareto front. Within this approach, a new exact algorithm based on dynamic programming is proposed which selects the set of μ points that maximizes the (weighted) hypervolume indicator. Experiments on various test functions show the usefulness of this new preference articulation approach and the agreement between theory and practice.",
"The goal of bi-objective optimization is to find a small set of good compromise solutions. A common problem for bi-objective evolutionary algorithms is the following subset selection problem (SSP): Given n solutions P ⊂ R2 in the objective space, select k solutions P* from P that optimize an indicator function. In the hypervolume SSP we want to select k points P* that maximize the hypervolume indicator IHYP(P*, r) for some reference point r ∈ R2. Similarly, the e-indicator SSP aims at selecting k points P* that minimize the e-indicator Ie(P*,R) for some reference set R ⊂ R2 of size m (which can be R=P). We first present a new algorithm for the hypervolume SSP with runtime O(n (k + log n)). Our second main result is a new algorithm for the e-indicator SSP with runtime O(n log n + m log m). Both results improve the current state of the art runtimes by a factor of (nearly) @math and make the problems tractable for new applications. Preliminary experiments confirm that the theoretical results translate into substantial empirical runtime improvements.",
"Let B be a set of n axis-parallel boxes in d-dimensions such that each box has a corner at the origin and the other corner in the positive quadrant, and let k be a positive integer. We study the problem of selecting k boxes in B that maximize the volume of the union of the selected boxes. The research is motivated by applications in skyline queries for databases and in multicriteria optimization, where the problem is known as the hypervolume subset selection problem. It is known that the problem can be solved in polynomial time in the plane, while the best known algorithms in any dimension d>2 enumerate all size-k subsets. We show that: * The problem is NP-hard already in 3 dimensions. * In 3 dimensions, we break the enumeration of all size-k subsets, by providing an n^O(sqrt(k)) algorithm. * For any constant dimension d, we give an efficient polynomial-time approximation scheme."
]
} |
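The 2d HSS dynamic program cited in the row above is compact enough to sketch. Below is a minimal, naive O(kn^2) Python version for a maximization front; the default reference point, the assumption that the input points are mutually non-dominated, and the function name are choices of this illustration, and the cited papers obtain strictly better bounds via Monge/k-link-path arguments that this sketch does not implement.

```python
def hss_2d(points, k, ref=(0.0, 0.0)):
    """Select k points of a 2d Pareto front (maximization) with maximum
    hypervolume w.r.t. a reference point dominated by every point."""
    rx, ry = ref
    pts = sorted(points)              # x ascending => y descending on a front
    n = len(pts)
    NEG = float("-inf")
    # dp[j][i]: best hypervolume using j points, with pts[i] selected last
    dp = [[NEG] * n for _ in range(k + 1)]
    for i, (x, y) in enumerate(pts):
        dp[1][i] = (x - rx) * (y - ry)
    for j in range(2, k + 1):
        for i in range(n):
            xi, yi = pts[i]
            for p in range(i):        # pts[p]: previously selected point
                if dp[j - 1][p] > NEG:
                    cand = dp[j - 1][p] + (xi - pts[p][0]) * (yi - ry)
                    if cand > dp[j][i]:
                        dp[j][i] = cand
    return max(dp[k])
```

Each selected point contributes the rectangular strip to the right of its predecessor, which is why a last-selected-point recurrence suffices in two dimensions.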
1908.09648 | 2969329859 | This paper is motivated by real-life applications of bi-objective optimization. Having many non-dominated solutions, one wishes to cluster the Pareto front using Euclidean distances. The p-center problems, both in the discrete and continuous versions, are proven solvable in polynomial time with a common dynamic programming algorithm. Having @math points to partition in @math clusters, the complexity is proven in @math (resp @math ) time and @math memory space for the continuous (resp discrete) @math -center problem. @math -center problems have complexities in @math . To speed up the algorithm, parallelization issues are discussed. A posteriori, these results allow an application inside multi-objective heuristics to archive partial Pareto Fronts. | We note that an affine 2d PF is a line in @math , so clustering there is equivalent to the one-dimensional case. One-dimensional K-means was proven solvable in polynomial time with a DP algorithm in @math time and @math space. This complexity was improved by a DP algorithm running in @math time and @math space in @cite_5 . This is thus the complexity of K-means in an affine 2d PF. The specific case, already mentioned in the previous section, of the continuous p-center problem with centers on a straight line is more general than the case of an affine 2d PF, with a complexity proven in @math time and @math space by @cite_34 . 2d cases of clustering problems can also be seen as specific cases of clustering in a three-dimensional (3d) PF, namely an affine 3d PF. Since NP-hardness has been proven for planar cases of clustering, which is the case for the k-means, p-median, k-medoids and p-center problems since @cite_44 @cite_1 , the considered clustering problems are also NP-hard for a 3d PF. | {
"cite_N": [
"@cite_44",
"@cite_5",
"@cite_34",
"@cite_1"
],
"mid": [
"",
"2582351091",
"2755780781",
"2032624334"
],
"abstract": [
"",
"The @math -Means clustering problem on @math points is NP-Hard for any dimension @math , however, for the 1D case there exists exact polynomial time algorithms. Previous literature reported an @math time dynamic programming algorithm that uses @math space. It turns out that the problem has been considered under a different name more than twenty years ago. We present all the existing work that had been overlooked and compare the various solutions theoretically. Moreover, we show how to reduce the space usage for some of them, as well as generalize them to data structures that can quickly report an optimal @math -Means clustering for any @math . Finally we also generalize all the algorithms to work for the absolute distance and to work for any Bregman Divergence. We complement our theoretical contributions by experiments that compare the practical performance of the various algorithms.",
"Given a set P of n points and a straight line L, we study three important variations of minimum enclosing circle problem as follows:",
"An @math algorithm for the continuous p-center problem on a tree is presented. Following a sequence of previous algorithms, ours is the first one whose time bound in uniform in p and less than quadratic in n. We also present an @math algorithm for a weighted discrete p-center problem."
]
} |
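The one-dimensional K-means DP referenced in the row above can be stated in a few lines. This is a hedged O(kn^2)-time, O(kn)-space sketch using prefix sums for constant-time cluster costs; the improved bounds quoted in the related work come from exploiting monotonicity of the optimal split points, which this naive version deliberately omits.

```python
def kmeans_1d(xs, k):
    """Optimal 1d k-means cost by dynamic programming over sorted points."""
    xs = sorted(xs)
    n = len(xs)
    pre = [0.0] * (n + 1)              # prefix sums of x
    pre2 = [0.0] * (n + 1)             # prefix sums of x^2
    for i, x in enumerate(xs):
        pre[i + 1] = pre[i] + x
        pre2[i + 1] = pre2[i] + x * x

    def cost(l, r):                    # squared deviation of xs[l:r] from its mean
        s, s2, m = pre[r] - pre[l], pre2[r] - pre2[l], r - l
        return s2 - s * s / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for l in range(j - 1, i):  # last cluster is xs[l:i]
                c = dp[j - 1][l] + cost(l, i)
                if c < dp[j][i]:
                    dp[j][i] = c
    return dp[k][n]
```

An affine 2d PF reduces to exactly this problem after projecting the points onto their supporting line.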
1908.09485 | 2969944405 | A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people to explore new locations and promote advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can raise location privacy breaches. Even worse, several privacy-preserving recommendation systems could not utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: transition patterns between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history. | The problem of successive POI recommendation has received much attention recently @cite_22 @cite_8 @cite_23 @cite_11 . To predict where a user will visit next, we need to consider the relationships between POIs. However, existing private recommendation methods @cite_1 @cite_7 @cite_13 only focus on learning the relationship between users and items. Our research direction is to incorporate the relationship between POIs by adapting the transfer learning approach @cite_18 @cite_12 @cite_16 . Most transfer learning methods in collaborative filtering utilize auxiliary domain data by sharing the latent matrix between two different domains. In our work, we use data from two domains of users' check-in history: visiting counts and POI transition patterns. We assume that the POI latent factors can bridge the user-POI and POI-POI relationships. To figure out the POI-POI relationship, we build a POI-POI matrix, which represents global preference transitions between two POIs. After that, in the learning process, users update their profile vectors based on the visiting counts, which describe the user-POI relationship. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"1502375784",
"1546409232",
"2896723315",
"2072609015",
"2296635479",
"2017921654",
"1994576156",
"2789607830",
"193545200",
"2807855639"
],
"abstract": [
"Data sparsity is a major problem for collaborative filtering (CF) techniques in recommender systems, especially for new users and items. We observe that, while our target data are sparse for CF systems, related and relatively dense auxiliary data may already exist in some other more mature application domains. In this paper, we address the data sparsity problem in a target domain by transferring knowledge about both users and items from auxiliary data sources. We observe that in different domains the user feedbacks are often heterogeneous such as ratings vs. clicks. Our solution is to integrate both user and item knowledge in auxiliary data sources through a principled matrix-based transfer learning framework that takes into account the data heterogeneity. In particular, we discover the principle coordinates of both users and items in the auxiliary data matrices, and transfer them to the target domain in order to reduce the effect of data sparsity. We describe our method, which is known as coordinate system transfer or CST, and demonstrate its effectiveness in alleviating the data sparsity problem in collaborative filtering. We show that our proposed method can significantly outperform several state-of-the-art solutions for this problem.",
"Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, pervious efforts mainly consider the \"check-ins\" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR.",
"Probabilistic matrix factorization (PMF) plays a crucial role in recommendation systems. It requires a large amount of user data (such as user shopping records and movie ratings) to predict personal preferences, and thereby provides users high-quality recommendation services, which expose the risk of leakage of user privacy. Differential privacy, as a provable privacy protection framework, has been applied widely to recommendation systems. It is common that different individuals have different levels of privacy requirements on items. However, traditional differential privacy can only provide a uniform level of privacy protection for all users. In this paper, we mainly propose a probabilistic matrix factorization recommendation scheme with personalized differential privacy (PDP-PMF). It aims to meet users' privacy requirements specified at the item-level instead of giving the same level of privacy guarantees for all. We then develop a modified sampling mechanism (with bounded differential privacy) for achieving PDP. We also perform a theoretical analysis of the PDP-PMF scheme and demonstrate the privacy of the PDP-PMF scheme. In addition, we implement the probabilistic matrix factorization schemes both with traditional and with personalized differential privacy (DP-PMF, PDP-PMF) and compare them through a series of experiments. The results show that the PDP-PMF scheme performs well on protecting the privacy of each user and its recommendation quality is much better than the DP-PMF scheme.",
"Location-based social networks (LBSNs) offer researchers rich data to study people's online activities and mobility patterns. One important application of such studies is to provide personalized point-of-interest (POI) recommendations to enhance user experience in LBSNs. Previous solutions directly predict users' preference on locations but fail to provide insights about users' preference transitions among locations. In this work, we propose a novel category-aware POI recommendation model, which exploits the transition patterns of users' preference over location categories to improve location recommendation accuracy. Our approach consists of two stages: (1) preference transition (over location categories) prediction, and (2) category-aware POI recommendation. Matrix factorization is employed to predict a user's preference transitions over categories and then her preference on locations in the corresponding categories. Real data based experiments demonstrate that our approach outperforms the state-of-the-art POI recommendation models by at least 39.75 in terms of recall.",
"Matrix factorization (MF) is a prevailing collaborative filtering method for building recommender systems. It requires users to upload their personal preferences to the recommender for performing MF, which raises serious privacy concerns. This paper proposes a differentially private MF mechanism that can prevent an untrusted recommender from learning any users' ratings or profiles. Our design decouples computations upon users' private data from the recommender to users, and makes the recommender aggregate local results in a privacy-preserving way. It uses the objective perturbation to make sure that the final item profiles satisfy differential privacy and solves the challenge to decompose the noise component for objective perturbation into small pieces that can be determined locally and independently by users. We also propose a third-party based mechanism to reduce noises added in each iteration and adapt our online algorithm to the dynamic setting that allows users to leave and join. The experiments show that our proposal is efficient and introduces acceptable side effects on the precision of results.",
"Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance.",
"A major challenge for collaborative filtering (CF) techniques in recommender systems is the data sparsity that is caused by missing and noisy ratings. This problem is even more serious for CF domains where the ratings are expressed numerically, e.g. as 5-star grades. We assume the 5-star ratings are unordered bins instead of ordinal relative preferences. We observe that, while we may lack the information in numerical ratings, we sometimes have additional auxiliary data in the form of binary ratings. This is especially true given that users can easily express themselves with their preferences expressed as likes or dislikes for items. In this paper, we explore how to use these binary auxiliary preference data to help reduce the impact of data sparsity for CF domains expressed in numerical ratings. We solve this problem by transferring the rating knowledge from some auxiliary data source in binary form (that is, likes or dislikes), to a target numerical rating matrix. In particular, our solution is to model both the numerical ratings and ratings expressed as like or dislike in a principled way. We present a novel framework of Transfer by Collective Factorization (TCF), in which we construct a shared latent space collectively and learn the data-dependent effect separately. A major advantage of the TCF approach over the previous bilinear method of collective matrix factorization is that we are able to capture the data-dependent effect when sharing the data-independent knowledge. This allows us to increase the overall quality of knowledge transfer. We present extensive experimental results to demonstrate the effectiveness of TCF at various sparsity levels, and show improvements of our approach as compared to several state-of-the-art methods.",
"Recommender systems are collecting and analyzing user data to provide better user experience. However, several privacy concerns have been raised when a recommender knows user's set of items or their ratings. A number of solutions have been suggested to improve privacy of legacy recommender systems, but the existing solutions in the literature can protect either items or ratings only. In this paper, we propose a recommender system that protects both user's items and ratings. For this, we develop novel matrix factorization algorithms under local differential privacy (LDP). In a recommender system with LDP, individual users randomize their data themselves to satisfy differential privacy and send the perturbed data to the recommender. Then, the recommender computes aggregates of the perturbed data. This framework ensures that both user's items and ratings remain private from the recommender. However, applying LDP to matrix factorization typically raises utility issues with i) high dimensionality due to a large number of items and ii) iterative estimation algorithms. To tackle these technical challenges, we adopt dimensionality reduction technique and a novel binary mechanism based on sampling. We additionally introduce a factor that stabilizes the perturbed gradients. With MovieLens and LibimSeTi datasets, we evaluate recommendation accuracy of our recommender system and demonstrate that our algorithm performs better than the existing differentially private gradient descent algorithm for matrix factorization under stronger privacy requirements.",
"Data sparsity due to missing ratings is a major challenge for collaborative filtering (CF) techniques in recommender systems. This is especially true for CF domains where the ratings are expressed numerically. We observe that, while we may lack the information in numerical ratings, we may have more data in the form of binary ratings. This is especially true when users can easily express themselves with their likes and dislikes for certain items. In this paper, we explore how to use the binary preference data expressed in the form of like dislike to help reduce the impact of data sparsity of more expressive numerical ratings. We do this by transferring the rating knowledge from some auxiliary data source in binary form (that is, likes or dislikes), to a target numerical rating matrix. Our solution is to model both numerical ratings and like dislike in a principled way, using a novel framework of Transfer by Collective Factorization (TCF). In particular, we construct the shared latent space collectively and learn the data-dependent effect separately. A major advantage of the TCF approach over previous collective matrix factorization (or bifactorization) methods is that we are able to capture the data-dependent effect when sharing the data-independent knowledge, so as to increase the over-all quality of knowledge transfer. Experimental results demonstrate the effectiveness of TCF at various sparsity levels as compared to several state-of-the-art methods.",
""
]
} |
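The row above couples user-POI visit counts and POI-POI transitions through shared POI factors. One plausible, purely illustrative form of such a joint objective (the squared losses, the trade-off weight α and the regularization weight λ are assumptions of this sketch, not SPIREL's exact formulation) is:

```latex
\min_{U,V} \;
\sum_{(u,j)} \bigl( c_{uj} - \mathbf{u}_u^{\top} \mathbf{v}_j \bigr)^2
\; + \; \alpha \sum_{(i,j)} \bigl( t_{ij} - \mathbf{v}_i^{\top} \mathbf{v}_j \bigr)^2
\; + \; \lambda \bigl( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \bigr)
```

Here c_{uj} is user u's visit count at POI j, t_{ij} the global transition frequency from POI i to POI j, and the shared POI factor matrix V is what bridges the two relationships.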
1908.09485 | 2969944405 | A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people to explore new locations and promote advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can raise location privacy breaches. Even worse, several privacy-preserving recommendation systems could not utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: transition patterns between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history. | Differential privacy @cite_15 is a rigorous privacy standard that requires that the output of a DP mechanism not reveal information specific to any individual. DP requires a trusted data curator who collects original data from users. Recently, a local version of DP has been proposed. In the local setting, each user perturbs his/her data and sends the perturbed data to the data curator. Since the original data never leave users' devices, LDP mechanisms have the benefit of not requiring a trusted data curator. Accordingly, many companies attempt to adopt LDP to collect data from clients privately @cite_5 @cite_20 @cite_2 @cite_10 . | {
"cite_N": [
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2963559079",
"1981029888",
"2517104773",
"2421389337",
""
],
"abstract": [
"The collection and analysis of telemetry data from user's devices is routinely performed by many software companies. Telemetry collection leads to improved user experience but poses significant risks to users' privacy. Locally differentially private (LDP) algorithms have recently emerged as the main tool that allows data collectors to estimate various population statistics, while preserving privacy. The guarantees provided by such algorithms are typically very strong for a single round of telemetry collection, but degrade rapidly when telemetry is collected regularly. In particular, existing LDP algorithms are not suitable for repeated collection of counter data such as daily app usage statistics. In this paper, we develop new LDP mechanisms geared towards repeated collection of counter data, with formal privacy guarantees even after being executed for an arbitrarily long period of time. For two basic analytical tasks, mean estimation and histogram estimation, our LDP mechanisms for repeated data collection provide estimates with comparable or even the same accuracy as existing single-round LDP collection mechanisms. We conduct empirical evaluation on real-world counter datasets to verify our theoretical results. Our mechanisms have been deployed by Microsoft to collect telemetry across millions of devices.",
"Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees. In short, RAPPORs allow the forest of client data to be studied, without permitting the possibility of looking at individual trees. By applying randomized response in a novel manner, RAPPOR provides the mechanisms for such collection as well as for efficient, high-utility analysis of the collected data. In particular, RAPPOR permits statistics to be collected on the population of client-side strings with strong privacy guarantees for each client, and without linkability of their reports. This paper describes and motivates RAPPOR, details its differential-privacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and, finally, gives results of its application to both synthetic and real-world data.",
"We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.",
"Organizations with a large user base, such as Samsung and Google, can potentially benefit from collecting and mining users' data. However, doing so raises privacy concerns, and risks accidental privacy breaches with serious consequences. Local differential privacy (LDP) techniques address this problem by only collecting randomized answers from each user, with guarantees of plausible deniability; meanwhile, the aggregator can still build accurate models and predictors by analyzing large amounts of such randomized data. So far, existing LDP solutions either have severely restricted functionality, or focus mainly on theoretical aspects such as asymptotical bounds rather than practical usability and performance. Motivated by this, we propose Harmony, a practical, accurate and efficient system for collecting and analyzing data from smart device users, while satisfying LDP. Harmony applies to multi-dimensional data containing both numerical and categorical attributes, and supports both basic statistics (e.g., mean and frequency estimates), and complex machine learning tasks (e.g., linear regression, logistic regression and SVM classification). Experiments using real data confirm Harmony's effectiveness.",
""
]
} |
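A concrete instance of the local model described in the row above is classic randomized response: each user flips a single bit before it leaves the device, and the server debiases the aggregate. This minimal sketch handles one bit per user; deployed systems such as RAPPOR layer richer encodings on the same primitive.

```python
import math
import random

def randomized_response(bit, eps):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    Satisfies eps-local differential privacy for a single binary value."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports, eps):
    """Unbiased estimate of the true fraction of 1-bits from noisy reports."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    mean = sum(reports) / len(reports)
    return (mean + p - 1.0) / (2.0 * p - 1.0)
```

Since E[report] = f(2p − 1) + (1 − p) when the true frequency is f, the inversion above recovers f without the server ever seeing a raw bit.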
1908.09485 | 2969944405 | A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people to explore new locations and promote advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can raise location privacy breaches. Even worse, several privacy-preserving recommendation systems could not utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: transition patterns between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history. | There are several works applying DP/LDP to recommendation systems @cite_1 @cite_7 @cite_13 . @cite_1 proposed an objective function perturbation method. In their work, a trusted data curator adds Laplace noise to the objective function so that the factorized item matrix satisfies DP. They also proposed a gradient perturbation method which can preserve the privacy of users' ratings from an untrusted data curator. @cite_7 proposed a probabilistic matrix factorization scheme with personalized differential privacy. They used a random sampling method to satisfy different users' privacy requirements. Then, they applied the objective function perturbation method to obtain the perturbed item matrix. Finally, @cite_13 proposed a new recommendation system under LDP. Specifically, users update their profile vectors locally and submit perturbed gradients in the iterative factorization process. Further, to reduce the error incurred by perturbation, they adopted random projection for dimensionality reduction. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_7"
],
"mid": [
"2789607830",
"2296635479",
"2896723315"
],
"abstract": [
"Recommender systems are collecting and analyzing user data to provide better user experience. However, several privacy concerns have been raised when a recommender knows user's set of items or their ratings. A number of solutions have been suggested to improve privacy of legacy recommender systems, but the existing solutions in the literature can protect either items or ratings only. In this paper, we propose a recommender system that protects both user's items and ratings. For this, we develop novel matrix factorization algorithms under local differential privacy (LDP). In a recommender system with LDP, individual users randomize their data themselves to satisfy differential privacy and send the perturbed data to the recommender. Then, the recommender computes aggregates of the perturbed data. This framework ensures that both user's items and ratings remain private from the recommender. However, applying LDP to matrix factorization typically raises utility issues with i) high dimensionality due to a large number of items and ii) iterative estimation algorithms. To tackle these technical challenges, we adopt dimensionality reduction technique and a novel binary mechanism based on sampling. We additionally introduce a factor that stabilizes the perturbed gradients. With MovieLens and LibimSeTi datasets, we evaluate recommendation accuracy of our recommender system and demonstrate that our algorithm performs better than the existing differentially private gradient descent algorithm for matrix factorization under stronger privacy requirements.",
"Matrix factorization (MF) is a prevailing collaborative filtering method for building recommender systems. It requires users to upload their personal preferences to the recommender for performing MF, which raises serious privacy concerns. This paper proposes a differentially private MF mechanism that can prevent an untrusted recommender from learning any users' ratings or profiles. Our design decouples computations upon users' private data from the recommender to users, and makes the recommender aggregate local results in a privacy-preserving way. It uses the objective perturbation to make sure that the final item profiles satisfy differential privacy and solves the challenge to decompose the noise component for objective perturbation into small pieces that can be determined locally and independently by users. We also propose a third-party based mechanism to reduce noises added in each iteration and adapt our online algorithm to the dynamic setting that allows users to leave and join. The experiments show that our proposal is efficient and introduces acceptable side effects on the precision of results.",
"Probabilistic matrix factorization (PMF) plays a crucial role in recommendation systems. It requires a large amount of user data (such as user shopping records and movie ratings) to predict personal preferences, and thereby provides users high-quality recommendation services, which expose the risk of leakage of user privacy. Differential privacy, as a provable privacy protection framework, has been applied widely to recommendation systems. It is common that different individuals have different levels of privacy requirements on items. However, traditional differential privacy can only provide a uniform level of privacy protection for all users. In this paper, we mainly propose a probabilistic matrix factorization recommendation scheme with personalized differential privacy (PDP-PMF). It aims to meet users' privacy requirements specified at the item-level instead of giving the same level of privacy guarantees for all. We then develop a modified sampling mechanism (with bounded differential privacy) for achieving PDP. We also perform a theoretical analysis of the PDP-PMF scheme and demonstrate the privacy of the PDP-PMF scheme. In addition, we implement the probabilistic matrix factorization schemes both with traditional and with personalized differential privacy (DP-PMF, PDP-PMF) and compare them through a series of experiments. The results show that the PDP-PMF scheme performs well on protecting the privacy of each user and its recommendation quality is much better than the DP-PMF scheme."
]
} |
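In the LDP factorization scheme summarized above, users keep their profile vectors local and submit only perturbed gradients for the item factors. The sketch below conveys the idea with simple L1 clipping plus Laplace noise; the clipping bound, the 2·clip/eps noise scale and the squared-loss gradient are assumptions of this illustration, and the cited work instead uses a sampling-based binary mechanism with random-projection dimensionality reduction.

```python
import numpy as np

def user_perturbed_gradient(ratings, u_vec, V, eps, clip=1.0):
    """One user's noisy gradient w.r.t. the item factor matrix V.

    ratings: {item_id: rating} held locally; u_vec: the user's local profile
    vector (never sent); V: current item factors broadcast by the server.
    """
    G = np.zeros_like(V)
    for i, r in ratings.items():
        err = r - u_vec @ V[i]
        G[i] = -2.0 * err * u_vec             # gradient of (r - u^T v_i)^2
    norm = np.abs(G).sum()
    if norm > clip:                           # bound each user's contribution
        G *= clip / norm
    # L1 sensitivity of the clipped report is 2*clip, hence this noise scale
    G += np.random.laplace(scale=2.0 * clip / eps, size=G.shape)
    return G
```

The server would average such reports over all users and take a descent step on V, while each u_vec is updated locally with the exact, un-noised gradient.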
1908.09550 | 2969760672 | In this paper, we propose a Customizable Architecture Search (CAS) approach to automatically generate a network architecture for semantic image segmentation. The generated network consists of a sequence of stacked computation cells. A computation cell is represented as a directed acyclic graph, in which each node is a hidden representation (i.e., feature map) and each edge is associated with an operation (e.g., convolution and pooling), which transforms data to a new layer. During the training, the CAS algorithm explores the search space for an optimized computation cell to build a network. The cells of the same type share one architecture but with different weights. In real applications, however, an optimization may need to be conducted under some constraints such as GPU time and model size. To this end, a cost corresponding to the constraint will be assigned to each operation. When an operation is selected during the search, its associated cost will be added to the objective. As a result, our CAS is able to search an optimized architecture with customized constraints. The approach has been thoroughly evaluated on Cityscapes and CamVid datasets, and demonstrates superior performance over several state-of-the-art techniques. More remarkably, our CAS achieves 72.3% mIoU on the Cityscapes dataset at a speed of 108 FPS on an Nvidia TitanXp GPU. | Our work is inspired by @cite_0 @cite_35 . Unlike these methods, however, our work attempts to achieve a good tradeoff between system performance and the availability of computational resources. In other words, our algorithm is optimized with some constraints from real applications. We notice that the recent DPC work @cite_25 is closely related to ours. It addresses the dense image prediction problem by searching for an efficient multi-scale architecture using performance-driven random search @cite_23 . Nevertheless, our work is different from @cite_25 . First of all, we have different objectives. Instead of targeting high-quality segmentation as in @cite_25 , our solution is customizable to search for an optimized architecture which is constrained by the requirements of real applications. The generated architecture tries to keep a balance between quality and the limited computational resources. Secondly, our solution optimizes the architecture of the whole network, including both the backbone and the multi-scale module, while @cite_25 focuses on multi-scale optimization. Finally, our method employs a lightweight network, which requires much less training time compared to that of @cite_25 . | {
"cite_N": [
"@cite_0",
"@cite_35",
"@cite_25",
"@cite_23"
],
"mid": [
"2810075754",
"2964081807",
"2891778567",
"2732547613"
],
"abstract": [
"",
"Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.",
"The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that outperform human-invented architectures and achieve state-of-the-art performance on three dense prediction tasks including 82.7 on Cityscapes (street scene parsing), 71.3 on PASCAL-Person-Part (person-part segmentation), and 87.9 on PASCAL VOC 2012 (semantic image segmentation). Additionally, the resulting architecture is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems.",
"Any sufficiently complex system acts as a black box when it becomes easier to experiment with than to understand. Hence, black-box optimization has become increasingly important as systems have become more complex. In this paper we describe Google Vizier, a Google-internal service for performing black-box optimization that has become the de facto parameter tuning engine at Google. Google Vizier is used to optimize many of our machine learning models and other systems, and also provides core capabilities to Google's Cloud Machine Learning HyperTune subsystem. We discuss our requirements, infrastructure design, underlying algorithms, and advanced features such as transfer learning and automated early stopping that the service provides."
]
} |
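The CAS row above attaches a resource cost to every candidate operation and adds it to the search objective. A minimal DARTS-style sketch of that idea follows; the softmax relaxation, the cost list `op_costs` (e.g., measured GPU time or parameter counts) and the penalty weight `lam` are illustration choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mixed_op_with_cost(alpha, op_outputs, op_costs):
    """Softmax-weighted mixture of candidate ops and its expected cost.

    alpha:      architecture logits for one edge, shape (num_ops,)
    op_outputs: list of tensors, one output per candidate operation
    op_costs:   list of floats, one resource cost per candidate operation
    """
    w = F.softmax(alpha, dim=0)
    out = sum(wi * oi for wi, oi in zip(w, op_outputs))
    expected_cost = (w * torch.tensor(op_costs)).sum()
    return out, expected_cost

# During search the cost term simply joins the task loss:
#   loss = task_loss + lam * sum(expected_cost over all edges)
```

Because the expected cost is differentiable in alpha, the constraint pressure steers the architecture parameters toward cheaper operations during the same gradient-based search.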
1908.09586 | 2969470403 | Given a hypergraph @math , the Minimum Connectivity Inference problem asks for a graph on the same vertex set as @math with the minimum number of edges such that the subgraph induced by every hyperedge of @math is connected. This problem has received a lot of attention in recent years, both from a theoretical and a practical perspective, leading to several implemented approximation, greedy and heuristic algorithms. Concerning exact algorithms, only Mixed Integer Linear Programming (MILP) formulations have been experimented with, all representing connectivity constraints by means of graph flows. In this work, we investigate the efficiency of a constraint generation algorithm, where we iteratively add cut constraints to a simple ILP until a feasible (and optimal) solution is found. It turns out that our method is faster than the previous best flow-based MILP algorithm on randomly generated instances, which suggests that a constraint generation approach might also be useful for other optimization problems dealing with connectivity constraints. Finally, we present the results of an enumeration algorithm for the problem. | This optimization problem is NP-hard @cite_9 , and was first introduced for the design of vacuum systems @cite_5 . It has since been studied independently in several different contexts, mainly dealing with network design: computer networks @cite_1 , social networks @cite_2 (more precisely, modeling the publish/subscribe communication paradigm @cite_6 @cite_16 @cite_4 ), but also other fields, such as auction systems @cite_3 and structural biology @cite_17 @cite_7 . Finally, we can mention the issue of hypergraph drawing, where, in addition to the connectivity constraints, one usually looks for graphs with additional properties (e.g., planarity or having a tree-like structure) @cite_18 @cite_10 @cite_8 @cite_0 . This plethora of applications explains why this problem is known under different names, such as Subset Interconnection Design, Minimum Topic-Connected Overlay, or Interconnection Graph Problem. For a comprehensive survey of the theoretical work done on this problem, see @cite_11 and the references therein. | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2013504716",
"2134774739",
"2130739699",
"381489330",
"2080022329",
"22994886",
"2049693545",
"1543455987",
"2050339304",
"1787635289",
"1969940463",
"2058702473",
"2153718046",
"125066187"
],
"abstract": [
"",
"The NP-hard Subset Interconnection Design problem, also known as Minimum Topic-Connected Overlay, is motivated by numerous applications including the design of scalable overlay networks and vacuum systems. It has as input a finite set @math and a collection of subsets @math , and asks for a minimum-cardinality edge set @math such that for the graph @math all induced subgraphs @math are connected. We study Subset Interconnection Design in the context of polynomial-time data reduction rules that preserve the possibility of constructing optimal solutions. Our contribution is threefold: First, we show the incorrectness of earlier polynomial-time data reduction rules. Second, we show linear-time solvability in case of a constant number @math of subsets, implying fixed-parameter tractability for the parameter @math . Third, we provide a fixed-parameter tractability result for small subset sizes and tree-like output graphs. To achieve our results, we elaborate o...",
"Designing an overlay network for publish subscribe communication in a system where nodes may subscribe to many different topics of interest is of fundamental importance. For scalability and efficiency, it is important to keep the degree of the nodes in the publish subscribe system low. It is only natural then to formalize the following problem: Given a collection of nodes and their topic subscriptions connect the nodes into a graph which has least possible maximum degree and in such a way that for each topic t, the graph induced by the nodes interested in t is connected. We present the first polynomial time logarithmic approximation algorithm for this problem and prove an almost tight lower bound on the approximation ratio. Our experimental results show that our algorithm drastically improves the maximum degree of publish subscribe overlay systems. We also propose a variation of the problem by enforcing that each topic-connected overlay network be of constant diameter, while keeping the average degree low. We present a heuristic for this problem which guarantees that each topic-connected overlay network will be of diameter 2 and which aims at keeping the overall average node degree low. Our experimental results validate our algorithm showing that our algorithm is able to achieve very low diameter without increasing the average degree by much.",
"Consider a set of oligomers listing the subunits involved in sub-complexes of a macro-molecular assembly, obtained e.g. using native mass spectrometry or affinity purification. Given these oligomers, connectivity inference (CI) consists of finding the most plausible contacts between these subunits, and minimum connectivity inference (MCI) is the variant consisting of finding a set of contacts of smallest cardinality. MCI problems avoid speculating on the total number of contacts, but yield a subset of all contacts and do not allow exploiting a priori information on the likelihood of individual contacts. In this context, we present two novel algorithms, MILP-W and MILP-WB. The former solves the minimum weight connectivity inference (MWCI), an optimization problem whose criterion mixes the number of contacts and their likelihood. The latter uses the former in a bootstrap fashion, to improve the sensitivity and the specificity of solution sets. Experiments on three systems (yeast exosome, yeast proteasome lid, human eiF3), for which reference contacts are known (crystal structure, cryo electron microscopy, cross-linking), show that our algorithms predict contacts with high specificity and sensitivity, yielding a very significant improvement over previous work, typically a twofold increase in sensitivity. The software accompanying this paper is made available, and should prove of ubiquitous interest whenever connectivity inference from oligomers is faced.",
"In this paper we present an O(n 2(m + logn))-time algorithm for computing a minimum-weight tree support (if one exists) of a hypergraph H = (V,S) with n vertices and m hyperedges. This improves the previously best known algorithm with running time O(n 4 m 2). A support of H is a graph G on V such that each hyperedge in S induces a connected subgraph in G. If G is a tree, it is called a tree support and it is a minimum tree support if its edge weight is minimum for a given edge weight function. Tree supports of hypergraphs have several applications, from social network analysis and network design problems to the visualization of hypergraphs and Euler diagrams. We show in particular how a minimum-weight tree support can be used to generate an area-proportional Euler diagram that satisfies typical well-formedness conditions and additionally minimizes the number of concurrent curves of the set boundaries in the Euler diagram.",
"A problem arising in the design of vacuum systems and having applications to some natural problems of interconnection design is described as follows. (1) Given a set X and subsets @math of @math , satisfying @math , find a graph G with vertex set X and the minimum number of edges such that for any i, the subgraph induced by @math has a connected component containing @math .Two other problems related to this one are the following ones. (2) Given a set X and subsets @math such that @math , find a graph G with vertex set X and the minimum number of edges such that for any i the subgraph @math induced by @math in G is connected. (3) Given a set X and subsets @math such that @math , find a graph G with vertex set X, find a graph G with vertex set X and the minimum number of edges such that for any subset I of @math , the subgraph induced by @math is co...",
"The Interconnection Graph Problem (IGP) is to compute for a given hypergraph H= (V, R) a graph G= (V, E) with the minimum number of edges |E| such that for all hyperedges N? Rthe subgraph of Ginduced by Nis connected. Computing feasible interconnection graphs is basically motivated by the design of reconfigurable interconnection networks. This paper proves that IGP is NP-complete and hard to approximate even when all hyperedges of Hhave at most three vertices. Afterwards it presents a search tree based parameterized algorithm showing that the problem is fixed-parameter tractable when the hyperedge size of His bounded. Moreover, the paper gives a reduction based greedy algorithm and closes with its experimental justification.",
"We investigate the problem of designing a scalable overlay network to support decentralized topic-based pub sub communication. We introduce a new optimization problem, called Minimum Topic-Connected Overlay (Min-TCO), that captures the tradeoff between the scalability of the overlay (in terms of the nodes' fanout) and the message forwarding overhead incurred by the communicating parties. Roughly, the Min-TCO problem is as follows: Given a collection of nodes and their subscriptions, connect the nodes using the minimum possible number of edges so that for each topic t, a message published on t could reach all the nodes interested in t by being forwarded by onlythe nodes interested in t. We show that the decision version of Min-TCO is NP-complete, and present a polynomial algorithm that approximates the optimal solution within a logarithmic factor with respect to the number of edges in theconstructed overlay. We further prove that this approximation ratio is almost tight by showing that no polynomial algorithm can approximate Min-TCO within a constant factor (unless P=NP). We show experimentally that on typical inputs, the fanout of the overlay constructed by our approximation algorithm is significantly lower thanthat of the overlays built by the existing algorithms, and that its running time is just a small fraction of the analytical worst case bound. As Min-TCO can be shown to capture several important aspects of most known overlay-based pub sub implementations, our study sheds light on the inherent limitations of the existing systems as well asprovides an insight into the best possible feasible solution. Finally, we introduce a flexible framework that generalizes Min-TCO and formalizes most similar overlay design problems that occur in scalable pub sub systems. We also briefly discuss several examples of such problems, and show some results with respect to their complexity.",
"Combinatorial auctions (CAs) are important mechanisms for allocating interrelated items. Unfortunately, winner determination is NP-complete unless there is special structure. We study the setting where there is a graph (with some desired property), with the items as vertices, and every bid bids on a connected set of items. Two computational problems arise: 1) clearing the auction when given the item graph, and 2) constructing an item graph (if one exists) with the desired property. 1 was previously solved for the case of a tree or a cycle, and 2 for the case of a line graph or a cycle. We generalize the first result by showing that given an item graph with bounded treewidth, the clearing problem can be solved in polynomial time (and every CA instance has some treewidth; the complexity is exponential in only that parameter). We then give an algorithm for constructing an item tree (treewidth 1) if such a tree exists, thus closing a recognized open problem. We show why this algorithm does not work for treewidth greater than 1, but leave open whether item graphs of (say) treewidth 2 can be constructed in polynomial time. We show that finding the item graph with the fewest edges is NP-complete (even when a graph of treewidth 2 exists). Finally, we study how the results change if a bid is allowed to have more than one connected component. Even for line graphs, we show that clearing is hard even with 2 components, and constructing the line graph is hard even with 5.",
"We consider the following problem: Given a complete graph G=(V,E) with a cost on every edge and a given collection of subsets of V, we have to find a minimum cost spanning tree T such that each subset of the vertices in the collection induces a subtree in T. One motivation for this problem is to construct a minimum cost communication tree network for a collection of non-disjoint groups of customers such that the network will provide group fault tolerance'' and group privacy''. We model this problem as a matroid. We extend it to general matroids and call the new matroids clustering matroids''. We define three variations of the clustering tree problem and show that from an algorithmic point of view they are polynomially equivalent. We present a polynomial algorithm for one of the three variations, which implies that all of them can be solved polynomially. For the case where the cardinality of the subsets in the collection does not exceed three, we provide a greedy algorithm, a linear algorithm and also a polyhedron description of the convex hull of all the feasible solutions.",
"We consider the problem of inferring the most likely social network given connectivity constraints imposed by observations of outbreaks within the network. Given a set of vertices (or agents) V and constraints (or observations) Si ⊆ V we seek to find a minimum log-likelihood cost (or maximum likelihood) set of edges (or connections) E such that each Si induces a connected subgraph of (V, E). For the offline version of the problem, we prove an Ω(log(n)) hardness of approximation result for uniform cost networks and give an algorithm that almost matches this bound, even for arbitrary costs. Then we consider the online problem, where the constraints are satisfied as they arrive. We give an O(n log(n))-competitive algorithm for the arbitrary cost online problem, which has an Ω(n)-competitive lower bound.We look at the uniform cost case as well and give an O(n2 3 log2 3(n))-competitive algorithm against an oblivious adversary, as well as an Ω(√n)-competitive lower bound against an adaptive adversary. We examine cases when the underlying network graph is known to be a star or a path, and prove matching upper and lower bounds of Θ(log(n)) on the competitive ratio for them.",
"Given a setX and subsetsX1,...,Xm, we consider the problem of finding a graphG with vertex setX and the minimum number of edges such that fori=1,...,m, the subgraphGi; induced byXi is connected. Suppose that for anyα pointsx1,...,xαe X, there are at mostβXi 's containing the set x1,...,xα . In the paper, we show that the problem is polynomial-time solvable for (α ⩽ 2,β ⩽ 2) and is NP-hard for (α⩾3,β=1), (α=l,β⩾6), and (α⩾2,β⩾3).",
"In the context of designing a scalable overlay network to support decentralized topic-based pub sub communication, the Minimum Topic-Connected Overlay problem (Min-TCO in short) has been investigated: given a set of t topics and a collection of n users together with the lists of topics they are interested in, the aim is to connect these users to a network by a minimum number of edges such that every graph induced by users interested in a common topic is connected. It is known that Min-TCO is NP-hard and approximable within O(logt) in polynomial time. In this paper, we further investigate the problem and some of its special instances. We give various hardness results for instances where the number of topics in which a user is interested in is bounded by a constant, and also for the instances where the number of users interested in a common topic is a constant. For the latter case, we present a first constant approximation algorithm. We also present some polynomial-time algorithms for very restricted instances of Min-TCO.",
"We introduce two new notions of planarity for hypergraphs based on dual generalizations of the standard Venn diagram. These definitions are illustrated by results concerning the existence and nonexistence of such diagrams for certain classes of hypergraphs. We conclude by showing that the general problem of determining whether such diagrams exist is NP-complete.",
"We consider the following Minimum Connectivity Inference problem (MCI), which arises in structural biology: given vertex sets V i ⊆ V, i ∈ I, find a graph G = (V,E) minimizing the size of the edge set E, such that the sub-graph of G induced by each V i is connected. This problem arises in structural biology, when one aims at finding the pairwise contacts between the proteins of a protein assembly, given the lists of proteins involved in sub-complexes. We present four contributions."
]
} |
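
A note on the feasibility condition underlying the MCI records above: a candidate graph is a valid solution exactly when every hyperedge induces a connected subgraph. A minimal check, sketched in Python with networkx on a made-up toy instance (not one from the cited experiments):

import networkx as nx

def is_feasible(n_vertices, hyperedges, edges):
    # A graph solves the MCI instance iff every hyperedge induces a connected subgraph.
    G = nx.Graph()
    G.add_nodes_from(range(n_vertices))
    G.add_edges_from(edges)
    return all(nx.is_connected(G.subgraph(h)) for h in hyperedges)

hyperedges = [{0, 1, 2}, {2, 3, 4}]
print(is_feasible(5, hyperedges, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # True
print(is_feasible(5, hyperedges, [(0, 1), (2, 3), (3, 4)]))          # False: {0, 1, 2} is split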
1908.09586 | 2969470403 | Given a hypergraph @math , the Minimum Connectivity Inference problem asks for a graph on the same vertex set as @math with the minimum number of edges such that the subgraph induced by every hyperedge of @math is connected. This problem has received a lot of attention in recent years, both from a theoretical and practical perspective, leading to several implemented approximation, greedy and heuristic algorithms. Concerning exact algorithms, only Mixed Integer Linear Programming (MILP) formulations have been experimented with, all representing connectivity constraints by means of graph flows. In this work, we investigate the efficiency of a constraint generation algorithm, where we iteratively add cut constraints to a simple ILP until a feasible (and optimal) solution is found. It turns out that our method is faster than the previous best flow-based MILP algorithm on randomly generated instances, which suggests that a constraint generation approach might also be useful for other optimization problems dealing with connectivity constraints. Finally, we present the results of an enumeration algorithm for the problem. | Concerning the implementation of algorithms, previous works mainly focused on approximation, greedy and other heuristic techniques @cite_4 . To the best of our knowledge, the first exact algorithm was designed by Agarwal et al. @cite_17 @cite_7 in the context of structural biology, where the sought graph represents the contact relations between proteins of a macro-molecule, which has to be inferred from a hypergraph constructed by chemical experiments and mass spectrometry. In this work, the authors define a Mixed Integer Linear Programming (MILP) formulation of the problem, representing the connectivity constraints by flows. They also provide an enumeration method using their algorithm as a black box, by iteratively adding constraints to the MILP in order to forbid already found solutions. Both their optimization and enumeration algorithms were tested on some real-life (from a structural biology perspective) instances for which the contact graph was already known. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_17"
],
"mid": [
"2134774739",
"2130739699",
"125066187"
],
"abstract": [
"Designing an overlay network for publish subscribe communication in a system where nodes may subscribe to many different topics of interest is of fundamental importance. For scalability and efficiency, it is important to keep the degree of the nodes in the publish subscribe system low. It is only natural then to formalize the following problem: Given a collection of nodes and their topic subscriptions connect the nodes into a graph which has least possible maximum degree and in such a way that for each topic t, the graph induced by the nodes interested in t is connected. We present the first polynomial time logarithmic approximation algorithm for this problem and prove an almost tight lower bound on the approximation ratio. Our experimental results show that our algorithm drastically improves the maximum degree of publish subscribe overlay systems. We also propose a variation of the problem by enforcing that each topic-connected overlay network be of constant diameter, while keeping the average degree low. We present a heuristic for this problem which guarantees that each topic-connected overlay network will be of diameter 2 and which aims at keeping the overall average node degree low. Our experimental results validate our algorithm showing that our algorithm is able to achieve very low diameter without increasing the average degree by much.",
"Consider a set of oligomers listing the subunits involved in sub-complexes of a macro-molecular assembly, obtained e.g. using native mass spectrometry or affinity purification. Given these oligomers, connectivity inference (CI) consists of finding the most plausible contacts between these subunits, and minimum connectivity inference (MCI) is the variant consisting of finding a set of contacts of smallest cardinality. MCI problems avoid speculating on the total number of contacts, but yield a subset of all contacts and do not allow exploiting a priori information on the likelihood of individual contacts. In this context, we present two novel algorithms, MILP-W and MILP-WB. The former solves the minimum weight connectivity inference (MWCI), an optimization problem whose criterion mixes the number of contacts and their likelihood. The latter uses the former in a bootstrap fashion, to improve the sensitivity and the specificity of solution sets. Experiments on three systems (yeast exosome, yeast proteasome lid, human eiF3), for which reference contacts are known (crystal structure, cryo electron microscopy, cross-linking), show that our algorithms predict contacts with high specificity and sensitivity, yielding a very significant improvement over previous work, typically a twofold increase in sensitivity. The software accompanying this paper is made available, and should prove of ubiquitous interest whenever connectivity inference from oligomers is faced.",
"We consider the following Minimum Connectivity Inference problem (MCI), which arises in structural biology: given vertex sets V i ⊆ V, i ∈ I, find a graph G = (V,E) minimizing the size of the edge set E, such that the sub-graph of G induced by each V i is connected. This problem arises in structural biology, when one aims at finding the pairwise contacts between the proteins of a protein assembly, given the lists of proteins involved in sub-complexes. We present four contributions."
]
} |
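
The enumeration idea described in the record above — solve, forbid the solution just found, and re-solve — can be mimicked end-to-end on toy instances. The sketch below substitutes a brute-force search for Agarwal et al.'s MILP black box; everything here is illustrative scaffolding, not their implementation:

from itertools import combinations
import networkx as nx

def feasible(hyperedges, nodes, edges):
    G = nx.Graph()
    G.add_nodes_from(nodes)
    G.add_edges_from(edges)
    return all(nx.is_connected(G.subgraph(h)) for h in hyperedges)

def enumerate_optima(nodes, hyperedges):
    # Re-solve with a brute-force "black box", forbidding each optimum already found.
    candidates = list(combinations(nodes, 2))
    best, forbidden = None, []
    while True:
        sol = None
        for k in range(best if best is not None else 0, len(candidates) + 1):
            for edges in combinations(candidates, k):
                s = set(edges)
                if s not in forbidden and feasible(hyperedges, nodes, edges):
                    sol = s
                    break
            if sol is not None:
                break
        if sol is None or (best is not None and len(sol) > best):
            return  # only strictly larger solutions remain: all optima were enumerated
        best = len(sol)
        forbidden.append(sol)
        yield sol

for sol in enumerate_optima(range(4), [{0, 1, 2}, {1, 2, 3}]):
    print(sorted(sol))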
1908.09586 | 2969470403 | Given a hypergraph @math , the Minimum Connectivity Inference problem asks for a graph on the same vertex set as @math with the minimum number of edges such that the subgraph induced by every hyperedge of @math is connected. This problem has received a lot of attention in recent years, both from a theoretical and practical perspective, leading to several implemented approximation, greedy and heuristic algorithms. Concerning exact algorithms, only Mixed Integer Linear Programming (MILP) formulations have been experimented with, all representing connectivity constraints by means of graph flows. In this work, we investigate the efficiency of a constraint generation algorithm, where we iteratively add cut constraints to a simple ILP until a feasible (and optimal) solution is found. It turns out that our method is faster than the previous best flow-based MILP algorithm on randomly generated instances, which suggests that a constraint generation approach might also be useful for other optimization problems dealing with connectivity constraints. Finally, we present the results of an enumeration algorithm for the problem. | This MILP model was then improved recently by Dar et al. @cite_12 , who mainly reduced the number of variables and constraints of the formulation, but still represented the connectivity constraints by means of flows. In addition, they also presented and implemented a number of (already known and new) reduction rules. This new MILP formulation, together with the reduction rules, was then compared to the algorithm of Agarwal et al. on randomly-generated instances. For every kind of tested hypergraph (different numbers and sizes of hyperedges), they observed a drastic improvement in both the execution time and the maximum size of instances that could be solved. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2800207446"
],
"abstract": [
"AbstractThe Minimum Connectivity Inference (MCI) problem represents an NP -hard generalization of the well-known minimum spanning tree problem and has been studied in different fields of research i..."
]
} |
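
For contrast with the flow formulations discussed in these records, the cut-based constraint generation that this paper itself proposes can be prototyped in a few lines: solve a relaxed ILP, find a hyperedge whose induced subgraph falls apart, and add a cut forcing an edge across the violated split. A minimal sketch with PuLP (CBC backend) and networkx on a toy instance; the paper's actual implementation surely differs in its cut selection and engineering details:

import itertools
import networkx as nx
import pulp

nodes = range(5)
hyperedges = [{0, 1, 2}, {1, 2, 3}, {0, 3, 4}]
pairs = list(itertools.combinations(nodes, 2))

prob = pulp.LpProblem("mci", pulp.LpMinimize)
x = {e: pulp.LpVariable(f"x_{e[0]}_{e[1]}", cat="Binary") for e in pairs}
prob += pulp.lpSum(x.values())  # minimize the number of selected edges

while True:
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    chosen = [e for e in pairs if x[e].varValue > 0.5]
    G = nx.Graph()
    G.add_nodes_from(nodes)
    G.add_edges_from(chosen)
    violated = False
    for h in hyperedges:
        comps = list(nx.connected_components(G.subgraph(h)))
        if len(comps) > 1:
            # Cut constraint: at least one edge must cross (comps[0], h \ comps[0]).
            side, rest = comps[0], h - comps[0]
            crossing = [x[tuple(sorted(p))] for p in itertools.product(side, rest)]
            prob += pulp.lpSum(crossing) >= 1
            violated = True
    if not violated:
        break

print("optimal edges:", sorted(chosen))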
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Shoulder surfing is a widely known attack in which the adversary tries to infer the victim's authentication secret by looking over his or her shoulder. There is a significant body of research into mitigating the impact of shoulder-surfing attacks. An in-depth survey conducted by @cite_8 considered the threat not only in the context of authentication, but also in the context of routine smartphone usage. The survey showed that 130 out of 174 participants indicated that shoulder-surfing attacks occurred on public transportation. Victims most commonly defended against such an attack by modifying their posture or cancelling the authentication. Furthermore, a study conducted by @cite_18 found the perceived risk of shoulder surfing to be high in only 11 of 3410 situations. This demonstrates that people are not actively defending themselves against shoulder surfing, and more work is needed to improve the shoulder-surfing resistance of authentication techniques. | {
"cite_N": [
"@cite_18",
"@cite_8"
],
"mid": [
"2181155974",
"2611149039"
],
"abstract": [
"A lot of research is being conducted into improving the usability and security of phone-unlocking. There is however a severe lack of scientic data on users’ current unlocking behavior and perceptions. We performed an online survey (n = 260) and a one-month eld study ( n = 52) to gain insights into real world (un)locking behavior of smartphone users. One of the main goals was to nd out how much overhead unlocking and authenticating adds to the overall phone usage and in how many unlock interactions security (i.e. authentication) was perceived as necessary. We also investigated why users do or do not use a lock screen and how they cope with smartphone-related risks, such as shouldersurng or unwanted accesses. Among other results, we found that on average, participants spent around 2.9 of their smartphone interaction time with authenticating (9 in the worst case). Participants that used a secure lock screen like PIN or Android unlock patterns considered it unnecessary in 24.1 of situations. Shoulder surng was perceived to be a relevant risk in only 11 of 3410 sampled situations.",
"Research has brought forth a variety of authentication systems to mitigate observation attacks. However, there is little work about shoulder surfing situations in the real world. We present the results of a user survey (N=174) in which we investigate actual stories about shoulder surfing on mobile devices from both users and observers. Our analysis indicates that shoulder surfing mainly occurs in an opportunistic, non-malicious way. It usually does not have serious consequences, but evokes negative feelings for both parties, resulting in a variety of coping strategies. Observed data was personal in most cases and ranged from information about interests and hobbies to login data and intimate details about third persons and relationships. Thus, our work contributes evidence for shoulder surfing in the real world and informs implications for the design of privacy protection mechanisms."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | PIN keypads and pattern locks are commonly used methods for phone authentication. Unfortunately, these techniques are vulnerable to smudge attacks, because the user leaves oily residues on the screen. Previous work has demonstrated that smudge attacks are especially effective on pattern locks as users drag their fingers over the screen. Smudge attacks can also be used to limit the input space for PIN locks. @cite_24 found that as long as the line of sight is not perpendicular, it is easy to observe entered patterns based on smudges. Under ideal conditions, partial or complete patterns were recoverable 92% of the time. These results demonstrate that even if the adversary is not able to actively observe the process of authentication, he or she can still recover the password with considerable success. In our work, we leverage pre-touch information to limit the number of touches the user makes on the screen, mitigating the effect of smudge attacks. | {
"cite_N": [
"@cite_24"
],
"mid": [
"1626992774"
],
"abstract": [
"Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred. In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern."
]
} |
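
The input-space reduction that smudges hand an attacker is easy to quantify: if residue reveals which distinct keys were touched, the surviving 4-digit PINs are exactly the strings over those keys that use each one at least once. A quick count in pure Python (illustrative):

from itertools import product

def candidate_pins(smudged_keys, length=4):
    # PINs of the given length whose set of digits is exactly the smudged keys.
    keys = set(smudged_keys)
    return [p for p in product(sorted(keys), repeat=length) if set(p) == keys]

for keys in [("1", "4", "7", "9"), ("2", "5", "8"), ("3", "6")]:
    n = len(candidate_pins(keys))
    print(f"{len(keys)} smudged keys -> {n} candidate PINs (out of 10000)")

Four distinct smudges leave only 24 orderings to try; even two smudges leave 14, so the 10,000-PIN space collapses almost entirely.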
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Now that smartphones are commonplace, traditional authentication techniques have been adapted to work on the small touchscreens of smartphones. @cite_25 compared speed and shoulder-surfing resistance of a scrambled PIN entry keypad and a normal PIN entry keypad. They found that the scrambled keypad was slower but more resistant to shoulder surfing. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2538700141"
],
"abstract": [
"PIN unlock is a popular screen locking mechanism used for protecting the sensitive private information on smart-phones. However, it is susceptible to a number of attacks such as guessing attacks, shoulder surfing attacks, smudge attacks, and side-channel attacks. Scramble keypad changes the keypad layout in each PIN-entry process to improve the security of PIN unlock. In this paper, the security and usability of scramble keypad for PIN unlock are studied. Our security analysis shows that scramble keypad can defend smudge attacks perfectly and greatly reduce the threats of side-channel attacks. A user study is conducted to demonstrate that scramble keypad has a better chance to defend shoulder surfing attacks than standard keypad. We also investigate how the usability of scramble keypad is compromised for improved security through a user study."
]
} |
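
The scrambled keypad compared in the record above amounts to drawing a fresh random permutation of the digit layout for every entry, which is what decouples smudge positions and observed hand motion from digit values; a toy layout generator:

import random

def scrambled_layout(rows=4, cols=3):
    # Place the digits 0-9 into random cells of a rows x cols PIN pad.
    digits = list("0123456789")
    random.shuffle(digits)
    cells = digits + [""] * (rows * cols - len(digits))  # two cells stay blank
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

for row in scrambled_layout():
    print(row)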
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Several works have examined the possibility of augmenting PIN keypads with gestures. SwiPIN, by von Zezschwitz et al. @cite_6 , divided the PIN keypad into two sections. Each number in each section corresponded to a different swipe gesture direction. Performing a swipe gesture on the correct section of the screen would insert the corresponding number. Their study demonstrated that this technique improved resistance against smudge attacks. The authors of @cite_11 introduced "ForcePINs", with which each PIN digit could be entered with different levels of finger pressure on the screen, to add an additional layer of challenge for shoulder surfers. However, results showed that there was no statistically significant difference in shoulder-surfing resistance between regular PINs and ForcePINs, because when users pressed harder, they also pressed for a noticeably longer time. | {
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2159837114",
"2795614125"
],
"abstract": [
"In this paper, we present SwiPIN, a novel authentication system that allows input of traditional PINs using simple touch gestures like up or down and makes it secure against human observers. We present two user studies which evaluated different designs of SwiPIN and compared it against traditional PIN. The results show that SwiPIN performs adequately fast (3.7 s) to serve as an alternative input method for risky situations. Furthermore, SwiPIN is easy to use, significantly more secure against shoulder surfing attacks and switching between PIN and SwiPIN feels natural.",
"We evaluate the efficacy of shoulder surfing defenses for PIN-based authentication systems. We find tilting the device away from the observer, a widely adopted defense strategy, provides limited protection. We also evaluate a recently proposed defense incorporating an \"invisible pressure component\" into PIN entry. Contrary to earlier claims, our results show this provides little defense against malicious insider attacks. Observations during the study uncover successful attacker strategies for reconstructing a victim's PIN when faced with a tilt defense. Our evaluations identify common misconceptions regarding shoulder surfing defenses, and highlight the need to educate users on how to safeguard their credentials from these attacks."
]
} |
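
SwiPIN's decoding step can be pictured as a lookup from (screen section, gesture) to a digit. The layout below is invented purely for illustration — the paper's actual assignment of gestures to digits differs:

# Hypothetical SwiPIN-style layout: two sections x five gestures cover 0-9.
GESTURES = ["tap", "up", "down", "left", "right"]
LAYOUT = {("left", g): d for d, g in zip("01234", GESTURES)}
LAYOUT.update({("right", g): d for d, g in zip("56789", GESTURES)})

def decode(section, gesture):
    return LAYOUT[(section, gesture)]

# Entering "07": a tap on the left section, then a down-swipe on the right one.
print(decode("left", "tap"), decode("right", "down"))  # 0 7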
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Other works have looked beyond purely visual representations of PINs by incorporating haptic and audio feedback. @cite_17 created an observation-resistant authentication technique by providing no visual cues to the user. The technique renders a wheel on the screen with identical sections. However, when users drag their fingers over the sections of the wheel, tactile feedback is presented with varying lengths and strengths. To select a section, users drag their fingers to the middle of the wheel. After each entry, the sections are shuffled to provide resistance against smudge attacks. Similarly, VibraInput @cite_1 used an on-screen, rotary wheel with two levels. The outer level contained the letters A through D, each corresponding to a fixed vibration pattern (which the user has to remember). The inner level corresponded to the PIN numbers 0 through 9. Upon starting PIN entry, the phone would vibrate the pattern of a letter. The user would then rotate the outer wheel to align the letter with the number to select on the inner wheel. By repeating this process, the technique could use a process of elimination to ascertain the PIN number. The overall technique would repeat until the entire PIN was entered. | {
"cite_N": [
"@cite_1",
"@cite_17"
],
"mid": [
"2036616308",
"2151027695"
],
"abstract": [
"Current standard PIN entry systems for mobile devices are not safe to shoulder surfing. In this paper, we present VibraInput, a two-step PIN entry system based on the combination of vibration and visual information for mobile devices. This system only uses four vibration patterns, with which users enter a digit by two distinct selections. We believe that this design secures PIN entry, and allows users to easily remember and recognize the patterns. Moreover, it can be implemented on current off-the-shelf mobile devices. We designed two kinds of prototypes of VibraInput. The experiment shows that the mean failure rate is 4.0 ; moreover, the system shows good security properties.",
"Tangible user interfaces are portals to digital information. In the future, securing access to such material will be an important concern. This paper describes the design, implementation and evaluation of a PIN entry system based on audio or haptic cues that is suitable for integration into such physical systems. The current implementation links movements on a mobile phone touch screen with the display of non-visual cues; selection of a sequence of these cues composes a password. Studies reveal the validity of this approach in terms of task times and error rates that improve over prior art. In sum, this paper demonstrates the potential of non-visual PINs as a mechanism for securing access to a range of systems, ultimately incorporating mobile, ubiquitous or tangible interfaces."
]
} |
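
One way to read VibraInput's two-level wheel is as modular alignment: the phone privately vibrates a letter, and the visible rotation only makes sense relative to that hidden letter. In the toy model below, both rings have ten slots and the letters A-D sit at slots 0-3 — an assumed geometry, not the paper's exact design — which already shows why an observer is left with four candidate digits per rotation:

# Toy VibraInput model (assumed geometry: letters A-D at outer slots 0-3).
SLOT = {"A": 0, "B": 1, "C": 2, "D": 3}

def user_rotation(vibrated_letter, target_digit):
    # Rotate the outer ring so the vibrated letter lines up with the target digit.
    return (target_digit - SLOT[vibrated_letter]) % 10

def observer_candidates(seen_rotation):
    # Digits consistent with an observed rotation when the letter is unknown.
    return sorted((s + seen_rotation) % 10 for s in SLOT.values())

k = user_rotation("C", 7)          # phone vibrated "C"; the user wants digit 7
print(k, observer_candidates(k))   # 5 [5, 6, 7, 8]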
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Two-Thumbs-Up (TTU) @cite_9 prevents shoulder-surfing attacks by requiring the user to cover the screen with their hands. This forms a "handshield" and enters a challenge mode. If users move their hands away from the screen, the authentication technique disappears. TTU randomly associates five "response" letters with two digits each, presenting the digits and letters on either side of the screen. The user then has to tap on the letter corresponding to the next PIN digit. After a certain number (dependent on PIN length) of correctly selected letters, the authentication process is complete. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2803549391"
],
"abstract": [
"Abstract We present a new Personal Identification Number (PIN) entry method for smartphones that can be used in security-critical applications, such as smartphone banking. The proposed “Two-Thumbs-Up” (TTU) scheme is resilient against observation attacks such as shoulder-surfing and camera recording, and guides users to protect their PIN information from eavesdropping by shielding the challenge area on the touch screen. To demonstrate the feasibility of TTU, we conducted a user study for TTU, and compared it with existing authentication methods (Normal PIN, Black and White PIN, and ColorPIN) in terms of usability and security. The study results demonstrate that TTU is more secure than other PIN entry methods in the presence of an observer recording multiple authentication sessions."
]
} |
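
TTU's challenge rounds are straightforward to simulate: each round randomly splits the ten digits over five response letters, two digits per letter, and the user taps the letter holding the next PIN digit (a sketch with made-up rounds):

import random

def ttu_round():
    # Randomly assign the ten digits to five response letters, two digits each.
    digits = list("0123456789")
    random.shuffle(digits)
    return {letter: set(digits[2 * i:2 * i + 2]) for i, letter in enumerate("ABCDE")}

def respond(mapping, pin_digit):
    return next(letter for letter, ds in mapping.items() if pin_digit in ds)

for digit in "4927":
    mapping = ttu_round()
    print(digit, "->", respond(mapping, digit))

Because each letter covers two digits, one tap only halves the uncertainty about that digit, which is why the scheme needs more correct selections than the PIN has digits — the "certain number (dependent on PIN length)" mentioned above.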
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Harbach et al. @cite_23 focused on comparing PIN locks and pattern locks. They were able to observe the behaviour of 134 smartphone users over one month, revealing differences between the two techniques. Results showed that although pattern locks are faster, users are six times as likely to make mistakes compared to PIN locks. When including failed attempts, there were no differences in authentication time between the two techniques. When a user made a mistake entering a PIN or pattern, subsequent successful attempts took more time, presumably because the user took more care when repeating the authentication. Visual feedback did not influence the error rate nor the entry time. Similarly, our 3D Pattern technique improves shoulder-surfing resistance by reducing visual feedback during authentication. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2315247372"
],
"abstract": [
"To prevent unauthorized parties from accessing data stored on their smartphones, users have the option of enabling a \"lock screen\" that requires a secret code (e.g., PIN, drawing a pattern, or biometric) to gain access to their devices. We present a detailed analysis of the smartphone locking mechanisms currently available to billions of smartphone users worldwide. Through a month-long field study, we logged events from a panel of users with instrumented smartphones (N=134). We are able to show how existing lock screen mechanisms provide users with distinct tradeoffs between usability (unlocking speed vs. unlocking frequency) and security. We find that PIN users take longer to enter their codes, but commit fewer errors than pattern users, who unlock more frequently and are very prone to errors. Overall, PIN and pattern users spent the same amount of time unlocking their devices on average. Additionally, unlock performance seemed unaffected for users enabling the stealth mode for patterns. Based on our results, we identify areas where device locking mechanisms can be improved to result in fewer human errors -- increasing usability -- while also maintaining security."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Another category of PIN entry techniques uses pictures or other graphics. In SemanticLock @cite_10 , users arrange icons on the screen in a memorable way. The user is authenticated based on correct placement of the icons. In a similar work, Awase-E @cite_5 , Takada and Koike leverage photos taken on a user's smartphone. The lock screen breaks a user-chosen photograph up into smaller chunks, and shows nine chunks of various photographs all at once. The user then has to select the tile from the correct photograph four times in a row to unlock the phone. | {
"cite_N": [
"@cite_5",
"@cite_10"
],
"mid": [
"135685467",
"2809689775"
],
"abstract": [
"There is a trade-off between security and usability in user authentication for mobile phones. Since such devices have a poor input interfaces, 4-digit number passwords are widely used at present. Therefore, a more secure and user friendly authentication is needed. This paper proposes a novel authentication method called “Awase-E”. The system uses image passwords. It, moreover, integrates image registration and notification interfaces. Image registration enables users to use their favorite image instead of a text password. Notification gives users a trigger to take action against a threat when it happens. Awase-E is implemented so that it has a higher usability even when it is used through a mobile phone.",
"We introduce SemanticLock, a single factor graphical authentication solution for mobile devices. SemanticLock uses a set of graphical images as password tokens that construct a semantically memorable story representing the user s password. A familiar and quick action of dragging or dropping the images into their respective positions either in a or in movements on the the touchscreen is what is required to use our solution. The authentication strength of the SemanticLock is based on the large number of possible semantic constructs derived from the positioning of the image tokens and the type of images selected. Semantic Lock has a high resistance to smudge attacks and it equally exhibits a higher level of memorability due to its graphical paradigm. In a three weeks user study with 21 participants comparing SemanticLock against other authentication systems, we discovered that SemanticLock outperformed the PIN and matched the PATTERN both on speed, memorability, user acceptance and usability. Furthermore, qualitative test also show that SemanticLock was rated more superior in like-ability. SemanticLock was also evaluated while participants walked unencumbered and walked encumbered carrying \"everyday\" items to analyze the effects of such activities on its usage."
]
} |
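
For Awase-E's four rounds of nine tiles, the odds facing a blind guesser are a one-liner:

rounds, tiles = 4, 9
print(f"blind-guess success: (1/{tiles})**{rounds} = {1 / tiles ** rounds:.6f}")
# about 0.00015 (1/6561), the same order of magnitude as a 4-digit PIN's 1/10000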
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | There is considerable research exploring whether or not lock screens are even necessary at all, by applying continuous authentication, also known as implicit authentication. Continuous authentication systems analyze an individual's regular patterns of touches on the screen, and build a model. A different user would have different patterns, and could be denied access by the system. With the Touchalytics project @cite_0 , Frank et al. were able to use continuous authentication to identify the user with an error rate below 4%. However, @cite_19 showed that an attacker, merely watching a video of the target using their phone, could bypass swipe-based continuous authentication at least 75% of the time. | {
"cite_N": [
"@cite_0",
"@cite_19"
],
"mid": [
"2151854612",
"2468988960"
],
"abstract": [
"We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0 for intrasession authentication, 2 -3 for intersession authentication, and below 4 when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.",
"Touch input implicit authentication ( touch IA'') employs behavioural biometrics like touch location and pressure to continuously and transparently authenticate smartphone users. We provide the first ever evaluation of targeted mimicry attacks on touch IA and show that it fails against shoulder surfing and offline training attacks. Based on experiments with three diverse touch IA schemes and 256 unique attacker-victim pairs, we show that shoulder surfing attacks have a bypass success rate of 84 with the majority of successful attackers observing the victim's behaviour for less than two minutes. Therefore, the accepted assumption that shoulder surfing attacks on touch IA are infeasible due to the hidden nature of some features is incorrect. For offline training attacks, we created an open-source training app for attackers to train on their victims' touch data. With this training, attackers achieved bypass success rates of 86 , even with only partial knowledge of the underlying features used by the IA scheme. Previous work failed to find these severe vulnerabilities due to its focus on random, non-targeted attacks. Our work demonstrates the importance of considering targeted mimicry attacks to evaluate the security of an implicit authentication scheme. Based on our results, we conclude that touch IA is unsuitable from a security standpoint."
]
} |
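
The equal error rate behind the Touchalytics figures can be reproduced conceptually: sweep a decision threshold over genuine and impostor similarity scores and locate where the false-accept and false-reject rates meet. A minimal numpy sketch with fabricated score distributions:

import numpy as np

def eer(genuine, impostor):
    # Equal error rate: threshold where false accepts and false rejects balance.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)   # same-user scores (made up)
impostor = rng.normal(0.5, 0.1, 500)  # other-user scores (made up)
print(f"EER ~ {eer(genuine, impostor):.3f}")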
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Some work has explored applying the principles of continuous authentication to augment traditional lock screen techniques. @cite_12 use spatial touch features in addition to previously used temporal touch features on keyboards to verify users based on their individual text entry behaviours. Examples of spatial touch features include touch offsets, angles, and pressures. By incorporating such spatial features, user recognition accuracy was improved. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2064376060"
],
"abstract": [
"Authentication methods can be improved by considering implicit, individual behavioural cues. In particular, verifying users based on typing behaviour has been widely studied with physical keyboards. On mobile touchscreens, the same concepts have been applied with little adaptations so far. This paper presents the first reported study on mobile keystroke biometrics which compares touch-specific features between three different hand postures and evaluation schemes. Based on 20.160 password entries from a study with 28 participants over two weeks, we show that including spatial touch features reduces implicit authentication equal error rates (EER) by 26.4 - 36.8 relative to the previously used temporal features. We also show that authentication works better for some hand postures than others. To improve applicability and usability, we further quantify the influence of common evaluation assumptions: known attacker data, training and testing on data from a single typing session, and fixed hand postures. We show that these practices can lead to overly optimistic evaluations. In consequence, we describe evaluation recommendations, a probabilistic framework to handle unknown hand postures, and ideas for further improvements."
]
} |
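
The spatial touch features named above — offsets from key centers, inter-touch angles, pressure — are plain geometry over the raw touch log. An illustrative extraction for two fabricated touch events:

import math

def spatial_features(touch, key_center):
    # Offset of a touch point from its key center, plus the reported pressure.
    (x, y, pressure), (cx, cy) = touch, key_center
    return {"dx": x - cx, "dy": y - cy, "pressure": pressure}

def touch_angle(t1, t2):
    # Direction of travel between two successive touches, in degrees.
    return math.degrees(math.atan2(t2[1] - t1[1], t2[0] - t1[0]))

t1, t2 = (103.0, 412.5, 0.41), (205.5, 418.0, 0.38)  # fabricated (x, y, pressure)
print(spatial_features(t1, (100.0, 410.0)), round(touch_angle(t1, t2), 1))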
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Many recent works on touchscreen interactions have started exploring pre-touch information; that is, positional information about the user's hands or fingers before making contact with the screen. For example, with TouchCuts and TouchZoom, @cite_21 used pre-touch finger distance to expand nearby targets on screen, facilitating easier target selection. This general approach has not yet been explored in the context of authentication techniques resistant to shoulder surfing. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2136328445"
],
"abstract": [
"Although touch-screen laptops are increasing in popularity, users still do not comfortably rely on touch in these environments, as current software interfaces were not designed for being used by the finger. In this paper, we first demonstrate the benefits of using touch as a complementary input modality along with the keyboard and mouse or touchpad in a laptop setting. To alleviate the frustration users experience with touch, we then design two techniques, TouchCuts, a single target expansion technique, and ,i>TouchZoom, i>, a multiple target expansion technique. Both techniques facilitate the selection of small icons, by detecting the finger proximity above the display surface, and expanding the target as the finger approaches. In a controlled evaluation, we show that our techniques improve performance in comparison to both the computer mouse and a baseline touch-based target acquisition technique. We conclude by discussing other application scenarios that our techniques support."
]
} |
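
TouchCuts/TouchZoom-style expansion reduces to scaling a target as the tracked finger height shrinks. A minimal linear-interpolation sketch (the 40 mm onset and 2x cap are invented thresholds):

def target_scale(finger_height_mm, start_mm=40.0, max_scale=2.0):
    # Grow the target linearly from 1x to max_scale as the finger descends.
    if finger_height_mm >= start_mm:
        return 1.0
    t = 1.0 - max(finger_height_mm, 0.0) / start_mm
    return 1.0 + t * (max_scale - 1.0)

for h in (60, 40, 20, 5, 0):
    print(h, "mm ->", round(target_scale(h), 2), "x")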
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Another common application of pre-touch information is for reducing the perceived latency of touchscreen interactions. Xia et al. employed this approach for tabletop displays @cite_4 , achieving a touch location prediction error of about 1 cm. The approach was implemented by tracking the user's index finger location using motion capture with fiducial markers, which are small retro-reflective spheres that can be precisely tracked by IR cameras. In the prototype of our 3D Pattern technique, we also use a motion capture system for finger position tracking. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2078073494"
],
"abstract": [
"A method of reducing the perceived latency of touch input by employing a model to predict touch events before the finger reaches the touch surface is proposed. A corpus of 3D finger movement data was collected, and used to develop a model capable of three granularities at different phases of movement: initial direction, final touch location, time of touchdown. The model is validated for target distances >= 25.5cm, and demonstrated to have a mean accuracy of 1.05cm 128ms before the user touches the screen. Preference study of different levels of latency reveals a strong preference for unperceived latency touchdown feedback. A form of 'soft' feedback, as well as other uses for this prediction to improve performance, is proposed."
]
} |
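
The touchdown-prediction idea can be caricatured as constant-velocity extrapolation of the last two tracked finger samples down to the screen plane z = 0 — the cited model is far richer, but the geometry is the same:

def predict_touchdown(p1, p2):
    # Extrapolate two (x, y, z, t) samples linearly to the z = 0 plane.
    (x1, y1, z1, t1), (x2, y2, z2, t2) = p1, p2
    if z2 >= z1:  # finger is not descending; no prediction
        return None
    s = z2 / (z1 - z2)  # how many more steps of the same motion reach z = 0
    return (x2 + s * (x2 - x1), y2 + s * (y2 - y1), t2 + s * (t2 - t1))

print(predict_touchdown((10.0, 20.0, 30.0, 0.00), (12.0, 21.0, 24.0, 0.02)))
# -> (20.0, 25.0, 0.1): predicted landing point and time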
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | We anticipate pre-touch sensing to become available on commodity smartphones in the near future. In 2016, @cite_2 explored how a smartphone with a self-capacitance touchscreen could enable pre-touch information to be sensed, and applied this information in various smartphone applications. We envision that our pre-touch PIN entry techniques will be usable on smartphones without additional motion tracking hardware. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2397886250"
],
"abstract": [
"Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an \"ad-lib interface\" that fades in a different UI--appropriate to the context--as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction."
]
} |
1908.09340 | 2969469636 | This paper mainly studies one-example and few-example video person re-identification. A multi-branch network, PAM, that jointly learns local and global features is proposed. PAM has high accuracy, few parameters, and converges fast, which makes it suitable for few-example person re-identification. We iteratively estimate labels for unlabeled samples, incorporate them into the training set, and train a more robust network. We propose the static relative distance sampling (SRD) strategy based on the relative distance between classes. Because SRD cannot use all unlabeled samples, we propose the adaptive relative distance sampling (ARD) strategy. For the one-example setting, we obtain 89.78% and 56.13% rank-1 accuracy on PRID2011 and iLIDS-VID, respectively, and 85.16% and 45.36% mAP on DukeMTMC and MARS, respectively, exceeding previous methods by a large margin. | In the first type, @cite_8 propose a framework for one-shot classification. They first build a fully convolutional siamese network based on a verification loss, and then use this network to compute the similarity between the image to be identified and the labeled samples. The image is then assigned to the category to which the most similar labeled sample belongs. @cite_1 propose the matching network. During training, some samples are selected to form a support set and the remaining samples are used as training images. They construct different encoders for the support set and the training images. The classifier's output is a weighted sum of the predicted values between the support set and the training images. During testing, the one-shot samples are used as the support set to predict the category of new images. @cite_14 use meta-learning to learn multiple similar tasks, and build two encoders, one for the gallery and one for the probe. Based on these encoders, each gallery image's embedding is conditioned on the characteristics of the remaining gallery images, and each probe image's embedding is conditioned on the characteristics of the gallery images. In this way they obtain a more discriminative feature representation. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_8"
],
"mid": [
"2886491726",
"2432717477",
""
],
"abstract": [
"In this paper, we investigate the challenging task of person re-identification from a new perspective and propose an end-to-end attention-based architecture for few-shot re-identification through meta-learning. The motivation for this task lies in the fact that humans, can usually identify another person after just seeing that given person a few times (or even once) by attending to their memory. On the other hand, the unique nature of the person re-identification problem, i.e., only few examples exist per identity and new identities always appearing during testing, calls for a few shot learning architecture with the capacity of handling new identities. Hence, we frame the problem within a meta-learning setting, where a neural network based meta-learner is trained to optimize a learner i.e., an attention-based matching function. Another challenge of the person re-identification problem is the small inter-class difference between different identities and large intra-class difference of the same identity. In order to increase the discriminative power of the model, we propose a new attention-based feature encoding scheme that takes into account the critical intra-view and cross-view relationship of images. We refer to the proposed Attention-based Re-identification Met alearning model as ARM. Extensive evaluations demonstrate the advantages of the ARM as compared to the state-of-the-art on the challenging PRID2011, CUHK01, CUHK03 and Market1501 datasets.",
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
""
]
} |
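The attention-weighted classification rule shared by these approaches is small enough to show concretely. A minimal NumPy sketch (ours, not code from the cited papers) that classifies a query by softmax attention over a labeled support set; the embeddings are assumed to come from an already-trained encoder:

```python
import numpy as np

def matching_classify(query_emb, support_embs, support_labels, num_classes):
    """Soft nearest-neighbour classification in the style of matching networks.

    query_emb: (d,) embedding of the image to classify.
    support_embs: (k, d) embeddings of the labeled support set.
    support_labels: (k,) integer labels of the support examples.
    Returns class probabilities as an attention-weighted sum over support labels.
    """
    # Cosine similarity between the query and each support embedding.
    sims = support_embs @ query_emb / (
        np.linalg.norm(support_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    attn = np.exp(sims - sims.max())
    attn /= attn.sum()  # softmax attention over the support set
    onehot = np.eye(num_classes)[support_labels]
    return attn @ onehot
```

Setting the support set to one labeled example per identity recovers the one-shot setting discussed above.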
1908.09340 | 2969469636 | This paper mainly studies one-example and few-example video person re-identification. A multi-branch network, PAM, that jointly learns local and global features is proposed. PAM has high accuracy, few parameters, and converges fast, which makes it suitable for few-example person re-identification. We iteratively estimate labels for unlabeled samples, incorporate them into the training set, and train a more robust network. We propose the static relative distance sampling (SRD) strategy based on the relative distance between classes. Because SRD cannot use all unlabeled samples, we propose the adaptive relative distance sampling (ARD) strategy. For the one-example setting, we obtain 89.78% and 56.13% rank-1 accuracy on PRID2011 and iLIDS-VID, respectively, and 85.16% and 45.36% mAP on DukeMTMC and MARS, respectively, exceeding previous methods by a large margin. | In the second type, @cite_9 build a graph for each camera. They treat labeled samples as nodes of the graph and the distances between video-sequence features as edge weights. Unlabeled samples are mapped into the different graphs (i.e., their labels are estimated) so as to minimize the objective function, and the graphs are updated dynamically. They repeatedly estimate labels and retrain models until the algorithm converges. @cite_10 first initialize the model with labeled samples. They then compute the k nearest neighbors of each probe in the gallery, remove suspect samples, and add the remaining samples to the training set. The procedure is iterated until the algorithm converges. @cite_2 first initialize a CNN with labeled data, and then linearly grow the number of pseudo-labeled samples incorporated into the training set according to their distance to the labeled samples, retraining the CNN with each new training set. Eventually all unlabeled samples carry estimated labels and are added to the training set, and a validation set is used to select the best model. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_2"
],
"mid": [
"2963989829",
"2778652957",
"2799185441"
],
"abstract": [
"Label estimation is an important component in an unsupervised person re-identification (re-ID) system. This paper focuses on cross-camera label estimation, which can be subsequently used in feature learning to learn robust re-ID models. Specifically, we propose to construct a graph for samples in each camera, and then graph matching scheme is introduced for cross-camera labeling association. While labels directly output from existing graph matching methods may be noisy and inaccurate due to significant cross-camera variations, this paper propose a dynamic graph matching (DGM) method. DGM iteratively updates the image graph and the label estimation process by learning a better feature space with intermediate estimated labels. DGM is advantageous in two aspects: 1) the accuracy of estimated labels is improved significantly with the iterations; 2) DGM is robust to noisy initial training data. Extensive experiments conducted on three benchmarks including the large-scale MARS dataset show that DGM yields competitive performance to fully supervised baselines, and outperforms competing unsupervised learning methods.1",
"The intensive annotation cost and the rich but unlabeled data contained in videos motivate us to propose an unsupervised video-based person re-identification (re-ID) method. We start from two assumptions: 1) different video tracklets typically contain different persons, given that the tracklets are taken at distinct places or with long intervals; 2) within each tracklet, the frames are mostly of the same person. Based on these assumptions, this paper propose a stepwise metric promotion approach to estimate the identities of training tracklets, which iterates between cross-camera tracklet association and feature learning. Specifically, We use each training tracklet as a query, and perform retrieval in the cross-camera training set. Our method is built on reciprocal nearest neighbor search and can eliminate the hard negative label matches, i.e., the cross-camera nearest neighbors of the false matches in the initial rank list. The tracklet that passes the reciprocal nearest neighbor check is considered to have the same ID with the query. Experimental results on the PRID 2011, ILIDS-VID, and MARS datasets show that the proposed method achieves very competitive re-ID accuracy compared with its supervised counterparts.",
"We focus on the one-shot learning for video-based person re-Identification (re-ID). Unlabeled tracklets for the person re-ID tasks can be easily obtained by preprocessing, such as pedestrian detection and tracking. In this paper, we propose an approach to exploiting unlabeled tracklets by gradually but steadily improving the discriminative capability of the Convolutional Neural Network (CNN) feature representation via stepwise learning. We first initialize a CNN model using one labeled tracklet for each identity. Then we update the CNN model by the following two steps iteratively: 1. sample a few candidates with most reliable pseudo labels from unlabeled tracklets; 2. update the CNN model according to the selected data. Instead of the static sampling strategy applied in existing works, we propose a progressive sampling method to increase the number of the selected pseudo-labeled candidates step by step. We systematically investigate the way how we should select pseudo-labeled tracklets into the training set to make the best use of them. Notably, the rank-1 accuracy of our method outperforms the state-of-the-art method by 21.46 points (absolute, i.e., 62.67 vs. 41.21 ) on the MARS dataset, and 16.53 points on the DukeMTMC-VideoReID dataset1."
]
} |
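All three methods share the same estimate-labels-then-retrain loop. An illustrative skeleton of the progressive variant (ours; `train_fn` and `extract_fn` are placeholder hooks for model training and feature extraction, and confidence is taken here as negative feature distance):

```python
import numpy as np

def progressive_pseudo_labeling(train_fn, extract_fn,
                                labeled_x, labeled_y, unlabeled_x, steps=5):
    """Iteratively pseudo-label unlabeled samples and retrain.

    train_fn(x, y) -> model;  extract_fn(model, x) -> (n, d) feature array.
    At each step, the unlabeled samples closest to a labeled sample in feature
    space receive its label, and the selection grows linearly with the step.
    """
    x, y = labeled_x, labeled_y
    model = None
    for step in range(1, steps + 1):
        model = train_fn(x, y)
        feats_l = extract_fn(model, labeled_x)
        feats_u = extract_fn(model, unlabeled_x)
        # Pairwise distances between unlabeled and labeled features.
        d = np.linalg.norm(feats_u[:, None, :] - feats_l[None, :, :], axis=-1)
        nearest = d.argmin(axis=1)
        conf = -d.min(axis=1)  # closer = more confident
        k = int(len(unlabeled_x) * step / steps)  # enlarge selection linearly
        chosen = np.argsort(conf)[-k:]
        x = np.concatenate([labeled_x, unlabeled_x[chosen]])
        y = np.concatenate([labeled_y, labeled_y[nearest[chosen]]])
    return model
```

The suspect-sample removal of @cite_10 and the validation-based model selection of @cite_2 would slot in after the `chosen` step.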
1908.09072 | 2969789183 | Accurate camera pose estimation is essential for visual SLAM (VSLAM). This paper presents a novel pose correction method to improve the accuracy of the VSLAM system. Firstly, the relationship between the camera pose estimation error and the bias values of map points is derived from the objective function optimized in VSLAM. Secondly, the bias value of each map point is calculated by a statistical method. Finally, the camera pose estimation error is compensated according to the derived relationship. After the pose correction, procedures of the original system, such as bundle adjustment (BA) optimization, can be executed as before. Compared with existing methods, our algorithm is compact and effective and can be easily generalized to different VSLAM systems. Additionally, our method is more robust to system noise than feature selection methods, because all original system information is preserved in our algorithm, whereas only a subset is employed in the latter. Experimental results on benchmark datasets show that our approach leads to considerable improvements over state-of-the-art algorithms for absolute pose estimation. | A monocular SLAM system that leverages structural regularity in the Manhattan world and contains three optimization strategies is proposed in @cite_17. However, to reduce the estimation error of the rotational motion, multiple orthogonal planes must be visible throughout the entire motion estimation process. Unlike @cite_17, which uses only planes, @cite_28 estimates the rotational motion jointly from lines and planes. Once the rotation is found, the translational motion can be recovered by minimizing the de-rotated reprojection error. In @cite_24, the accuracy of BA optimization is enhanced by incorporating feature scale constraints into it. Structural constraints between nearby planes (e.g., right angles) are added to the SLAM system in @cite_2 to further recover the drift and distortion. Since structural regularity does not exist in all environments, the application scope of this category is limited. | {
"cite_N": [
"@cite_28",
"@cite_2",
"@cite_24",
"@cite_17"
],
"mid": [
"2892177182",
"2892147056",
"2784319755",
"2889958683"
],
"abstract": [
"We present a low-drift visual odometry algorithm that separately estimates rotational and translational motion from lines, planes, and points found in RGB-D images. Previous methods estimate drift-free rotational motion from structural regularities to reduce drift in the rotation estimate, which is the primary source of positioning inaccuracy in visual odometry. However, multiple orthogonal planes are required to be visible throughout the entire motion estimation process; otherwise, these VO approaches fail. We propose a new approach to estimate drift-free rotational motion jointly from both lines and planes by exploiting environmental regularities. We track the spatial regularities with an efficient SO(3)-manifold constrained mean shift algorithm. Once the drift-free rotation is found, we recover the translational motion from all tracked points with and without depth by minimizing the de-rotated reprojection error. We compare the proposed algorithm to other state-of-the-art visual odometry methods on a variety of RGB-D datasets (including especially challenging pure rotations) and demonstrate improved accuracy and lower drift error.",
"In this work, we develop a novel dense planar-inertial SLAM (DPI-SLAM) system to reconstruct dense 3D models of large indoor environments using a hand-held RGB-D sensor and an inertial measurement unit (IMU). The preinte-grated IMU measurements are loosely-coupled with the dense visual odometry (VO) estimation and tightly-coupled with the planar measurements in a full SLAM framework. The poses, velocities, and IMU biases are optimized together with the planar landmarks in a global factor graph using incremental smoothing and mapping with the Bayes Tree (iSAM2). With odometry estimation using both RGB-D and IMU data, our system can keep track of the poses of the sensors even without sufficient planes or visual information (e.g. textureless walls) temporarily. Modeling planes and IMU states in the fully probabilistic global optimization reduces the drift that distorts the reconstruction results of other SLAM algorithms. Moreover, structural constraints between nearby planes (e.g. right angles) are added into the DPI-SLAM system, which further recovers the drift and distortion. We test our DPI-SLAM on large indoor datasets and demonstrate its state-of-the-art performance as the first planar-inertial SLAM system.",
"We propose to incorporate within bundle adjustment (BA) a new type of constraint that uses feature scale information, leveraging the scale invariance property of typical image feature detectors (e.g., SIFT). While feature scales play an important role in image matching, they have not been utilized thus far for estimation purposes in a BA framework. Our approach exploits the already-available feature scale information and uses it to enhance the accuracy of BA, especially along the optical axis of the camera in a monocular setup. Importantly, the mentioned feature scale constraints can be formulated on a frame to frame basis and do not require loop closures. We study our approach in synthetic environments and the real-imagery KITTI dataset, demonstrating significant improvement in positioning error.",
"The structural features in Manhattan world encode useful geometric information of parallelism, orthogonality and or coplanarity in the scene. By fully exploiting these structural features, we propose a novel monocular SLAM system which provides accurate estimation of camera poses and 3D map. The foremost contribution of the proposed system is a structural feature-based optimization module which contains three novel optimization strategies. First, a rotation optimization strategy using the parallelism and orthogonality of 3D lines is presented. We propose a global binding method to compute an accurate estimation of the absolute rotation of the camera. Then we propose an approach for calculating the relative rotation to further refine the absolute rotation. Second, a translation optimization strategy leveraging coplanarity is proposed. Coplanar features are effectively identified, and we leverage them by a unified model handling both points and lines to calculate the relative translation, and then the optimal absolute translation. Third, a 3D line optimization strategy utilizing parallelism, orthogonality and coplanarity simultaneously is proposed to obtain an accurate 3D map consisting of structural line segments with low computational complexity. Experiments in man-made environments have demonstrated that the proposed system outperforms existing state-of-the-art monocular SLAM systems in terms of accuracy and robustness."
]
} |
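The decoupling used in @cite_28 (estimate rotation first, then recover translation) takes a particularly simple form in the 3D-3D case. A minimal sketch (ours; the cited work actually minimizes a de-rotated image-space reprojection error over points with and without depth, whereas this toy version aligns matched 3D points):

```python
import numpy as np

def translation_given_rotation(R, pts_prev, pts_curr):
    """Closed-form translation once the rotation R is known (3D-3D case).

    With R fixed, minimizing sum_i ||p'_i - (R p_i + t)||^2 over t gives
    t = mean(p'_i - R p_i): the mean of the de-rotated residuals.
    pts_prev, pts_curr: (n, 3) arrays of matched 3D points in the two frames.
    """
    return (pts_curr - pts_prev @ R.T).mean(axis=0)
```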
1908.08972 | 2969766398 | Deep Neural Networks (DNNs) have achieved state-of-the-art accuracy in many tasks. However, recent works have pointed out that the outputs provided by these models are not well calibrated, seriously limiting their use in critical decision scenarios. In this work, we propose to use a decoupled Bayesian stage, implemented with a Bayesian Neural Network (BNN), to map the uncalibrated probabilities provided by a DNN to calibrated ones, consistently improving calibration. Our results evidence that incorporating uncertainty provides more reliable probabilistic models, a critical condition for achieving good calibration. We report a generous collection of experimental results using high-accuracy DNNs on standardized image classification benchmarks, showing the good performance, flexibility, and robust behavior of our approach with respect to several state-of-the-art calibration methods. Code for reproducibility is provided. | On the side of BNNs, @cite_18 connect Bernoulli dropout with BNNs, and @cite_29 formalize Gaussian dropout as a Bayesian approach. In @cite_5, novel BNNs are proposed, using RealNVP @cite_22 to implement a normalizing flow @cite_56, auxiliary variables, and local reparameterization. None of these approaches measure calibration performance explicitly on DNNs, as we do. For instance, @cite_5 and @cite_21 evaluate uncertainty by training on one dataset and testing on another, expecting a maximum-entropy output distribution. More recently, @cite_2 propose a scalable inference algorithm that is asymptotically as accurate as MCMC algorithms, and @cite_34 propose a deterministic way of computing the ELBO that reduces the variance of the estimator to zero, allowing for faster convergence. They also propose a hierarchical prior on the parameters. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_56",
"@cite_2",
"@cite_5",
"@cite_34"
],
"mid": [
"601603264",
"2962695743",
"1826234144",
"2963238274",
"2963090522",
"2886496274",
"2592505114",
"2897001865"
],
"abstract": [
"Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data -- as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN's kernels. We approximate our model's intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-the-art results for CIFAR-10.",
"Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.",
"We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.",
"Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.",
"The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.",
"Probabilistic modelling is a general and elegant framework to capture the uncertainty, ambiguity and diversity of data. Probabilistic inference is the core technique for developing training and simulation algorithms on probabilistic models. However, the classic inference methods, like Markov chain Monte Carlo (MCMC) methods and mean-field variational inference (VI), are not computationally scalable for the recent developed probabilistic models with neural networks (NNs). This motivates many recent works on improving classic inference methods using NNs, especially, NN empowered VI. However, even with powerful NNs, VI still suffers its fundamental limitations. In this work, we propose a novel computational scalable general inference framework. With the theoretical foundation in ergodic theory, the proposed methods are not only computationally scalable like NN-based VI methods but also asymptotically accurate like MCMC. We test our method on popular benchmark problems and the results suggest that our methods can outperform NN-based VI and MCMC on deep generative models and Bayesian neural networks.",
"We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks. We show that through this interpretation it is both efficient and straightforward to improve the approximation by employing normalizing flows (Rezende & Mohamed, 2015) while still allowing for local reparametrizations (, 2015) and a tractable lower bound (, 2015; , 2016). In experiments we show that with this new approximation we can significantly improve upon classical mean field for Bayesian neural networks on both predictive accuracy as well as predictive uncertainty.",
"Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal with uncertainty when learning from finite data. Among approaches to realize probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. With wide recognition of potential advantages, why is it that variational Bayes has seen very limited practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as controlling the variance of Monte Carlo gradient estimates. We fix VB and turn it into a robust inference tool for Bayesian neural networks. We achieve this with two innovations: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On the application of heteroscedastic regression we demonstrate strong predictive performance over alternative approaches."
]
} |
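The dropout-as-Bayes connection established by @cite_18 and @cite_29 has a very small practical core: keep dropout stochastic at test time and average the class probabilities over several forward passes. A minimal PyTorch sketch (ours; it assumes the model contains dropout layers and no layers, such as batch norm, whose behavior `train()` would also change):

```python
import torch

def mc_dropout_probs(model, x, num_samples=50):
    """Approximate the BNN predictive distribution via Monte Carlo dropout."""
    model.train()  # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(num_samples)])
    return probs.mean(dim=0)  # averaged predictive probabilities
```

The spread of the individual samples around this mean is one way to quantify the uncertainty that, as argued above, is a prerequisite for good calibration.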
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks, making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. Its basic component is a convolutional neural network that works in a fully convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor in reducing the complexity of the model was the utilization of depthwise separable convolutions, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Since deep learning became practical, text detection techniques have been based on neural networks. A deep-learning-based method @cite_2 uses a fully convolutional network (FCN) to estimate the probability that pixels belong to a text area. After applying maximally stable extremal regions (MSER), a shortened FCN is utilized to acquire the character centroids and, with the help of intensity and geometric criteria, remove false candidates. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2339589954"
],
"abstract": [
"In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013."
]
} |
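The MSER stage of such a pipeline is available off the shelf. A minimal sketch using OpenCV's MSER detector (ours; the area and aspect-ratio thresholds are illustrative placeholders, and the FCN-based text/non-text scoring of the cited method is not shown):

```python
import cv2

def mser_character_candidates(gray, min_area=30, max_area=5000):
    """Extract MSER regions as coarse character candidates from a grayscale image."""
    mser = cv2.MSER_create()
    _regions, bboxes = mser.detectRegions(gray)
    keep = []
    for (x, y, w, h) in bboxes:
        area, aspect = w * h, w / max(h, 1)
        # Simple geometric criteria; intensity-based filtering would follow.
        if min_area <= area <= max_area and 0.1 <= aspect <= 10.0:
            keep.append((x, y, w, h))
    return keep
```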
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks, making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. Its basic component is a convolutional neural network that works in a fully convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor in reducing the complexity of the model was the utilization of depthwise separable convolutions, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Shi et al. @cite_4 proposed detecting segments of words and the links between them. The whole detection of segments and links is done in a single pass of a CNN named SegLink, operating in a fully convolutional manner, with depth-first search (DFS) and bounding-box creation as postprocessing. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2605076167"
],
"abstract": [
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line, A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese."
]
} |
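The DFS postprocessing amounts to computing connected components over segments joined by predicted links. A minimal sketch (ours, not the SegLink reference code); fitting a rotated bounding box to each resulting group is omitted:

```python
def group_segments(num_segments, links):
    """Group linked segments into words via iterative depth-first search.

    links: iterable of (i, j) index pairs of linked segments.
    Returns a list of components, each a list of segment indices.
    """
    adj = [[] for _ in range(num_segments)]
    for i, j in links:
        adj[i].append(j)
        adj[j].append(i)
    seen, groups = set(), []
    for start in range(num_segments):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.append(node)
            stack.extend(adj[node])
        groups.append(comp)
    return groups
```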
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks, making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. Its basic component is a convolutional neural network that works in a fully convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor in reducing the complexity of the model was the utilization of depthwise separable convolutions, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Zhou et al. proposed a similar strategy in @cite_3, where a variety of postprocessing steps are eliminated by performing most of the computation in a single U-Net-like @cite_12 FCN named EAST, which outputs word-box parameters directly. The results are filtered by thresholding and non-maximum suppression (NMS). The length of detectable words is limited by the receptive field of the output pixels. | {
"cite_N": [
"@cite_12",
"@cite_3"
],
"mid": [
"1901129140",
"2605982830"
],
"abstract": [
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .",
"Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution."
]
} |
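EAST applies a locality-aware variant of NMS to rotated quadrilaterals; for reference, the plain axis-aligned form it builds on is shown below (a standard textbook implementation, not the EAST code):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Axis-aligned non-maximum suppression.

    boxes: (n, 4) array of (x1, y1, x2, y2); scores: (n,).
    Returns indices of kept boxes, highest score first.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection rectangle between box i and all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-8)
        order = rest[iou <= iou_thresh]
    return keep
```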
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks, making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. Its basic component is a convolutional neural network that works in a fully convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor in reducing the complexity of the model was the utilization of depthwise separable convolutions, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | The ArbiText network @cite_8, based on the Single Shot Detector (SSD), replaces rectangular anchors with circle anchors, which should be more robust to orientation variations. The authors also applied pyramid pooling to preserve low-level features in deeper layers. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2771082502"
],
"abstract": [
"Arbitrary-oriented text detection in the wild is a very challenging task, due to the aspect ratio, scale, orientation, and illumination variations. In this paper, we propose a novel method, namely Arbitrary-oriented Text (or ArbText for short) detector, for efficient text detection in unconstrained natural scene images. Specifically, we first adopt the circle anchors rather than the rectangular ones to represent bounding boxes, which is more robust to orientation variations. Subsequently, we incorporate a pyramid pooling module into the Single Shot MultiBox Detector framework, in order to simultaneously explore the local and global visual information, which can, therefore, generate more confidential detection results. Experiments on established scene-text datasets, such as the ICDAR 2015 and MSRA-TD500 datasets, have demonstrated the supe rior performance of the proposed method, compared to the state-of-the-art approaches."
]
} |
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks, making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. Its basic component is a convolutional neural network that works in a fully convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor in reducing the complexity of the model was the utilization of depthwise separable convolutions, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Liu et al. @cite_13 combined text detection and recognition in one end-to-end CNN. The backbone of the network is a Feature Pyramid Network that incorporates residual operations from ResNet-50 @cite_15. In the text detection part, the network outputs a text probability, the bounding-box distances in four directions, and the rotation angle of the bounding box. The smallest real-time version contains 29 million parameters. | {
"cite_N": [
"@cite_15",
"@cite_13"
],
"mid": [
"2194775991",
"2964018263"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Incidental scene text spotting is considered one of the most difficult and valuable challenges in the document analysis community. Most existing methods treat text detection and recognition as separate tasks. In this work, we propose a unified end-to-end trainable Fast Oriented Text Spotting (FOTS) network for simultaneous detection and recognition, sharing computation and visual information among the two complementary tasks. Specifically, RoIRotate is introduced to share convolutional features between detection and recognition. Benefiting from convolution sharing strategy, our FOTS has little computation overhead compared to baseline text detection network, and the joint training method makes our method perform better than these two-stage methods. Experiments on ICDAR 2015, ICDAR 2017 MLT, and ICDAR 2013 datasets demonstrate that the proposed method outperforms state-of-the-art methods significantly, which further allows us to develop the first real-time oriented text spotting system which surpasses all previous state-of-the-art results by more than 5 on ICDAR 2015 text spotting task while keeping 22.6 fps."
]
} |
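Decoding the per-pixel outputs just described into rotated boxes is mechanical. A minimal sketch (ours; the geometry-map channel order, the angle sign convention, and the absence of output-map downsampling are all simplifying assumptions):

```python
import numpy as np

def decode_rboxes(score_map, geo_map, score_thresh=0.8):
    """Turn per-pixel predictions into rotated box corners.

    score_map: (H, W) text probability.
    geo_map: (H, W, 5) distances to the top, right, bottom, left box edges
             plus a rotation angle in radians (assumed channel order).
    Returns a list of (4, 2) corner arrays; NMS would follow.
    """
    boxes = []
    ys, xs = np.nonzero(score_map > score_thresh)
    for y, x in zip(ys, xs):
        d_t, d_r, d_b, d_l, theta = geo_map[y, x]
        c, s = np.cos(theta), np.sin(theta)
        # Corners in the box's local frame, relative to the pixel location.
        local = np.array([[-d_l, -d_t], [d_r, -d_t], [d_r, d_b], [-d_l, d_b]])
        rot = np.array([[c, -s], [s, c]])
        boxes.append(local @ rot.T + np.array([x, y]))
    return boxes
```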
1908.08979 | 2969556441 | Various psychological factors affect how individuals express emotions. Yet, when we collect data intended for use in building emotion recognition systems, we often try to do so by creating paradigms that are designed just with a focus on eliciting emotional behavior. Algorithms trained with these types of data are unlikely to function outside of controlled environments because our emotions naturally change as a function of these other factors. In this work, we study how the multimodal expressions of emotion change when an individual is under varying levels of stress. We hypothesize that stress produces modulations that can hide the true underlying emotions of individuals and that we can make emotion recognition algorithms more generalizable by controlling for variations in stress. To this end, we use adversarial networks to decorrelate stress modulations from emotion representations. We study how stress alters acoustic and lexical emotional predictions, paying special attention to how modulations due to stress affect the transferability of learned emotion recognition models across domains. Our results show that stress is indeed encoded in trained emotion classifiers and that this encoding varies across levels of emotions and across the lexical and acoustic modalities. Our results also show that emotion recognition models that control for stress during training have better generalizability when applied to new domains, compared to models that do not control for stress during training. We conclude that it is necessary to consider the effect of extraneous psychological factors when building and testing emotion recognition models. | One group of methods has considered confounding factors that are either singularly labeled or cannot be labeled. Ben-David et al. @cite_49 showed that a classifier trained to predict the sentiment of reviews can implicitly learn to predict the category of the products. The authors used an adversarial multi-task classifier to learn domain-invariant sentiment representations. Shinohara @cite_38 used an adversarial approach to train noise-robust networks for automatic speech recognition. They used the domain (i.e., background noise) as the adversarial task while training the model, to obtain representations that are both senone-discriminative and domain-invariant. In emotion recognition, @cite_44 used domain adversarial networks to improve cross-corpus generalization. | {
"cite_N": [
"@cite_44",
"@cite_38",
"@cite_49"
],
"mid": [
"2963447013",
"2510867321",
"2104094955"
],
"abstract": [
"The performance of speech emotion recognition is affected by the differences in data distributions between train (source domain) and test (target domain) sets used to build and evaluate the models. This is a common problem, as multiple studies have shown that the performance of emotional classifiers drops when they are exposed to data that do not match the distribution used to build the emotion classifiers. The difference in data distributions becomes very clear when the training and testing data come from different domains, causing a large performance gap between development and testing performance. Due to the high cost of annotating new data and the abundance of unlabeled data, it is crucial to extract as much useful information as possible from the available unlabeled data. This study looks into the use of adversarial multitask training to extract a common representation between train and test domains. The primary task is to predict emotional-attribute-based descriptors for arousal, valence, or dominance. The secondary task is to learn a common representation, where the train and test domains cannot be distinguished. By using a gradient reversal layer, the gradients coming from the domain classifier are used to bring the source and target domain representations closer. We show that exploiting unlabeled data consistently leads to better emotion recognition performance across all emotional dimensions. We visualize the effect of adversarial training on the feature representation across the proposed deep learning architecture. The analysis shows that the data representations for the train and test domains converge as the data are passed to deeper layers of the network. We also evaluate the difference in performance when we use a shallow neural network versus a deep neural network and the effect of the number of shared layers used by the task and domain classifiers.",
"",
"Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors."
]
} |
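The gradient reversal layer at the heart of these adversarial setups is small enough to show in full. A standard PyTorch implementation (an illustration, not any of the cited authors' code; `lam` is the usual reversal-strength hyperparameter):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients backward."""

    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient flows back into the shared encoder.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```

Inserted between a shared encoder and the domain (or, in our setting, stress) classifier, it makes the encoder maximize the adversary's loss, pushing the representation toward invariance to the confounding factor.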
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These experiments highlight advantages compared to existing machine learning approaches. | The task of reconstructing a full classical description -- the density matrix @math -- of a @math -dimensional quantum system from experimental data is one of the most fundamental problems in quantum statistics; see, e.g., @cite_52 @cite_13 @cite_12 @cite_31 and references therein. Sample-optimal protocols, i.e., estimation techniques that get by with a minimal number of measurement repetitions, have only been developed recently. Information-theoretic bounds assert that an order of @math state copies are necessary to fully reconstruct @math @cite_26 . Constructive protocols @cite_29 @cite_26 saturate this bound, but require entangled circuits and measurements that act on all state copies simultaneously. More tractable measurement procedures, where each copy of the state is measured independently, require an order of @math measurements @cite_26 . This more stringent bound is saturated by low-rank matrix recovery @cite_4 @cite_44 @cite_40 and projected least squares estimation @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_52",
"@cite_44",
"@cite_40",
"@cite_31",
"@cite_13",
"@cite_12"
],
"mid": [
"2893608605",
"2649051464",
"",
"",
"1965471276",
"2963583445",
"2539873326",
"2063059850",
"2098271372",
"1529624360"
],
"abstract": [
"Projected least squares (PLS) is an intuitive and numerically cheap technique for quantum state tomography. The method first computes the least-squares estimator (or a linear inversion estimator) and then projects the initial estimate onto the space of states. The main result of this paper equips this point estimator with a rigorous, non-asymptotic confidence region expressed in terms of the trace distance. The analysis holds for a variety of measurements, including 2-designs and Pauli measurements. The sample complexity of the estimator is comparable to the strongest convergence guarantees available in the literature and---in the case of measuring the uniform POVM---saturates fundamental lower bounds.The results are derived by reinterpreting the least-squares estimator as a sum of random matrices and applying a matrix-valued concentration inequality. The theory is supported by numerical simulations for mutually unbiased bases, Pauli observables, and Pauli basis measurements.",
"It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. Previously, it was known only that estimating states to error @math in trace distance required @math copies for a @math -dimensional density matrix of rank @math . Here, we give a theoretical measurement scheme (POVM) that requires @math copies to estimate @math to error @math in infidelity, and a matching lower bound up to logarithmic factors. This implies @math copies suffice to achieve error @math in trace distance. We also prove that for independent (product) measurements, @math copies are necessary in order to achieve error @math in infidelity. For fixed @math , our measurement can be implemented on a quantum computer in time polynomial in @math .",
"",
"",
"An algorithm for quantum-state estimation based on the maximum-likelihood estimation is proposed. Existing techniques for state reconstruction based on the inversion of measured data are shown to be overestimated since they do not guarantee the positive definiteness of the reconstructed density matrix.",
"Abstract We study the recovery of Hermitian low rank matrices X ∈ C n × n from undersampled measurements via nuclear norm minimization. We consider the particular scenario where the measurements are Frobenius inner products with random rank-one matrices of the form a j a j ⁎ for some measurement vectors a 1 , … , a m , i.e., the measurements are given by b j = tr ( X a j a j ⁎ ) . The case where the matrix X = x x ⁎ to be recovered is of rank one reduces to the problem of phaseless estimation (from measurements b j = | 〈 x , a j 〉 | 2 ) via the PhaseLift approach, which has been introduced recently. We derive bounds for the number m of measurements that guarantee successful uniform recovery of Hermitian rank r matrices, either for the vectors a j , j = 1 , … , m , being chosen independently at random according to a standard Gaussian distribution, or a j being sampled independently from an (approximate) complex projective t-design with t = 4 . In the Gaussian case, we require m ≥ C r n measurements, while in the case of 4-designs we need m ≥ Cr n log ( n ) . Our results are uniform in the sense that one random choice of the measurement vectors a j guarantees recovery of all rank r-matrices simultaneously with high probability. Moreover, we prove robustness of recovery under perturbation of the measurements by noise. The result for approximate 4-designs generalizes and improves a recent bound on phase retrieval due to Gross, Krahmer and Kueng. In addition, it has applications in quantum state tomography. Our proofs employ the so-called bowling scheme which is based on recent ideas by Mendelson and Koltchinskii.",
"We prove that low-rank matrices can be recovered efficiently from a small number of measurements that are sampled from orbits of a certain matrix group. As a special case, our theory makes statements about the phase retrieval problem. Here, the task is to recover a vector given only the amplitudes of its inner product with a small number of vectors from an orbit. Variants of the group in question have appeared under different names in many areas of mathematics. In coding theory and quantum information, it is the complex Clifford group; in time-frequency analysis the oscillator group; and in mathematical physics the metaplectic group. It affords one particularly small and highly structured orbit that includes and generalizes the discrete Fourier basis: While the Fourier vectors have coefficients of constant modulus and phases that depend linearly on their index, the vectors in said orbit have phases with a quadratic dependence. In quantum information, the orbit is used extensively and is known as the set of stabilizer states. We argue that due to their rich geometric structure and their near-optimal recovery properties, stabilizer states form an ideal model for structured measurements for phase retrieval. Our results hold for @math measurements, where the oversampling factor k varies between @math and @math depending on the orbit. The reconstruction is stable towards both additive noise and deviations from the assumption of low rank. If the matrices of interest are in addition positive semidefinite, reconstruction may be performed by a simple constrained least squares regression. Our proof methods could be adapted to cover orbits of other groups.",
"Quantum tomography has come a long way from early reconstructions of Wigner functions from projections along quadratures to the full characterization of multipartite systems. Now, it is routinely carried out in a wide variety of systems. And yet, many fundamental questions remain unanswered. In recent years, a spate of radical new experimental, theoretical and mathematical developments have occurred. The appeal of the subject lies largely in the breadth of techniques that must be brought together in order to fully understand the problem. This ‘focus on’ collection provides a platform for facilitating the exchange of ideas between the different communities involved in this process.",
"Accurately inferring the state of a quantum device from the results of measurements is a crucial task in building quantum information processing hardware. The predominant state estimation procedure, maximum likelihood estimation (MLE), generally reports an estimate with zero eigenvalues. These cannot be justified. Furthermore, the MLE estimate is incompatible with error bars, so conclusions drawn from it are suspect. I propose an alternative procedure, Bayesian mean estimation (BME). BME never yields zero eigenvalues, its eigenvalues provide a bound on their own uncertainties, and under certain circumstances it is provably the most accurate procedure possible. I show how to implement BME numerically, and how to obtain natural error bars that are compatible with the estimate. Finally, I briefly discuss the differences between Bayesian and frequentist estimation techniques.",
"We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rdlog^2d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed."
]
} |
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | Restricting attention to highly structured subsets of quantum states sometimes allows for overcoming the exponential bottleneck that plagues general tomography. Matrix product state (MPS) tomography @cite_41 is the most prominent example of such an approach. It only requires a polynomial number of samples, provided that the underlying quantum state is well approximated by an MPS with low bond dimension. In quantum many-body physics this assumption is often justifiable @cite_54 . However, MPS representations of general states have exponentially large bond dimension. In this case, MPS tomography offers no advantage over general tomography. | {
"cite_N": [
"@cite_41",
"@cite_54"
],
"mid": [
"1971384536",
"2565722603"
],
"abstract": [
"Direct quantum state tomography—deducing the state of a system from measurements—is mostly unfeasible due to the exponential scaling of measurement number with system size. The authors present two new schemes, which scale linearly in this respect, and can be applied to a wide range of quantum states.",
"Traditionally quantum state tomography is used to characterize a quantum state, but it becomes exponentially hard with the system size. An alternative technique, matrix product state tomography, is shown to work well in practical situations."
]
} |
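The classical-shadow protocol summarized in the record above admits a compact numerical illustration. Below is a minimal sketch of the estimator for random single-qubit Pauli measurements, assuming the measurement data has already been collected as (bases, outcomes) pairs; the function names and data layout are illustrative choices, not an API from the paper.

```python
import numpy as np

# Single-qubit eigenvectors |v> of X, Y, Z; index b gives eigenvalue (-1)^b.
EIGVECS = {
    "X": [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)],
    "Y": [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)],
    "Z": [np.array([1, 0]), np.array([0, 1])],
}

def snapshot(bases, outcomes):
    """One classical-shadow snapshot from a single measurement round.

    For random single-qubit Pauli measurements the inverted measurement
    channel acts qubit-wise as M^{-1}(rho) = 3*rho - tr(rho)*I, so the
    snapshot is the tensor product of 3|v><v| - I over all qubits.
    """
    rho = np.array([[1.0]])
    for basis, b in zip(bases, outcomes):
        v = EIGVECS[basis][b]
        rho = np.kron(rho, 3 * np.outer(v, v.conj()) - np.eye(2))
    return rho

def predict_expectation(observable, shadow_data, n_batches=10):
    """Median-of-means estimate of tr(O rho); shadow_data is a list of
    (bases, outcomes) pairs, e.g. ("XZY", (0, 1, 1))."""
    estimates = [np.trace(observable @ snapshot(b, o)).real
                 for b, o in shadow_data]
    batches = np.array_split(np.array(estimates), n_batches)
    return np.median([batch.mean() for batch in batches])
```

The median-of-means step is what allows many linear features to be predicted from the same shadow with only a logarithmic overhead in their number.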
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | Direct fidelity estimation is a procedure that allows for predicting a single pure target fidelity @math up to accuracy @math . The best-known technique is based on a few Pauli measurements that are selected randomly using importance sampling @cite_49 . The required number of samples depends on the target: it can range from a dimension-independent order of @math (if @math is a stabilizer state) to roughly @math in the worst case. | {
"cite_N": [
"@cite_49"
],
"mid": [
"2090368878"
],
"abstract": [
"We describe a simple method for certifying that an experimental device prepares a desired quantum state ρ. Our method is applicable to any pure state ρ, and it provides an estimate of the fidelity between ρ and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels."
]
} |
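To make the importance-sampling idea behind direct fidelity estimation concrete, here is a hedged sketch for a pure target state. For brevity the "measured" Pauli expectations of the lab state are computed exactly from its density matrix; in an experiment they would come from repeated Pauli measurements. All names are illustrative.

```python
import numpy as np

PAULIS = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def pauli_basis(n):
    """All 4**n n-qubit Pauli matrices."""
    mats = [np.array([[1.0]])]
    for _ in range(n):
        mats = [np.kron(m, p) for m in mats for p in PAULIS]
    return mats

def direct_fidelity_estimate(target_rho, lab_rho, n_samples, rng):
    """Importance-sampled estimate of F = tr(target_rho @ lab_rho).

    Paulis P are drawn with probability chi_t(P)^2, where
    chi(P) = tr(P rho)/sqrt(d); for a pure target these weights sum to 1.
    The single-sample estimator chi_lab(P)/chi_t(P) is unbiased for F.
    """
    d = target_rho.shape[0]
    paulis = pauli_basis(int(np.log2(d)))
    chi_t = np.array([np.trace(p @ target_rho).real for p in paulis]) / np.sqrt(d)
    probs = chi_t ** 2
    probs = probs / probs.sum()          # guard against float round-off
    idx = rng.choice(len(paulis), size=n_samples, p=probs)
    chi_l = np.array([np.trace(paulis[i] @ lab_rho).real for i in idx]) / np.sqrt(d)
    return np.mean(chi_l / chi_t[idx])
```

For highly structured targets such as stabilizer states, few of the weights chi_t(P)^2 are nonzero, which is why the sample count can become dimension-independent.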
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | Shadow tomography aims at simultaneously estimating the probability associated with @math 2-outcome measurements up to accuracy @math : @math , where each @math is a positive semidefinite matrix with operator norm at most one @cite_15 @cite_5 @cite_47 . This may be viewed as a generalization of direct fidelity estimation. The best existing result is due to Aaronson @cite_47 , who showed that @math copies of the unknown state suffice to achieve this task (here, the scaling symbol @math suppresses logarithmic expressions in other problem-specific parameters). In a nutshell, his protocol is based on gently measuring the 2-outcome measurements one-by-one and subsequently (partially) reverting the perturbative effects a measurement exerts on quantum states. This task is achieved by explicit quantum circuits of exponential size that act on all copies of the unknown state simultaneously. This rather intricate procedure bypasses the no-go result advertised in Theorem and results in a sampling rate that is independent of the measurements in question -- only their cardinality @math matters. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_47"
],
"mid": [
"2797355014",
"2963956175",
"2939719762"
],
"abstract": [
"We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with @math constraint matrices, each of dimension @math , rank @math , and sparsity @math . The first algorithm assumes an input model where one is given access to entries of the matrices at unit cost. We show that it has run time @math , where @math is the error. This gives an optimal dependence in terms of @math and quadratic improvement over previous quantum algorithms when @math . The second algorithm assumes a fully quantum input model in which the matrices are given as quantum states. We show that its run time is @math , with @math an upper bound on the trace-norm of all input matrices. In particular the complexity depends only poly-logarithmically in @math and polynomially in @math . We apply the second SDP solver to the problem of learning a good description of a quantum state with respect to a set of measurements: Given @math measurements and copies of an unknown state @math , we show we can find in time @math a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as @math on the @math measurements, up to error @math . The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by \"quantizing\" classical SDP solvers based on the matrix multiplicative weight method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians with a poly-logarithmic dependence on its dimension, which could be of independent interest.",
"We introduce the problem of *shadow tomography*: given an unknown D-dimensional quantum mixed state ρ, as well as known two-outcome measurements E1,…,EM, estimate the probability that Ei accepts ρ, to within additive error e, for each of the M measurements. How many copies of ρ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only O( e−5·log4 M·logD) copies. This means, for example, that we can learn the behavior of an arbitrary n-qubit state, on *all* accepting rejecting circuits of some fixed polynomial size, by measuring only nO( 1) copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time.",
""
]
} |
1908.08474 | 2969551072 | The Shapley value has become a popular method to attribute the prediction of a machine-learning model on an input to its base features. The Shapley value [1] is known to be the unique method that satisfies certain desirable properties, and this motivates its use. Unfortunately, despite this uniqueness result, there are a multiplicity of Shapley values used in explaining a model's prediction. This is because there are many ways to apply the Shapley value that differ in how they reference the model, the training data, and the explanation context. In this paper, we study an approach that applies the Shapley value to conditional expectations (CES) of sets of features (cf. [2]) that subsumes several prior approaches within a common framework. We provide the first algorithm for the general version of CES. We show that CES can result in counterintuitive attributions in theory and in practice (we study a diabetes prediction task); for instance, CES can assign non-zero attributions to features that are not referenced by the model. In contrast, we show that an approach called the Baseline Shapley (BS) does not exhibit counterintuitive attributions; we support this claim with a uniqueness (axiomatic) result. We show that BS is a special case of CES, and CES with an independent feature distribution coincides with a randomized version of BS. Thus, BS fits into the CES framework, but does not suffer from many of CES's deficiencies. | The first and second approaches solve a different problem (of feature importance across all the training data), and we will ignore them for the most part. Notice that the rest are solving the attribution problem (@cite_13 unifies several of these methods under a common framework based on conditional expectations), and they all apply the Shapley value, but they differ in how they 'switch a feature off', and consequently give different results. In this paper, we attempt to pick between these methods using the lens of axiomatization. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2962862931"
],
"abstract": [
"Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and or better consistency with human intuition than previous approaches."
]
} |
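The Baseline Shapley (BS) attributions discussed in the record above can be estimated with a generic permutation-sampling scheme. The sketch below is a minimal Monte-Carlo estimator for a black-box model f, a single explicand x, and a fixed baseline; it illustrates the idea, not the authors' exact algorithm.

```python
import numpy as np

def baseline_shapley(f, x, baseline, n_samples, rng):
    """Monte-Carlo Baseline Shapley attributions.

    Features absent from a coalition are 'switched off' by replacing them
    with the baseline value; the attribution of feature i is its average
    marginal contribution over random feature orderings.
    """
    n = len(x)
    phi = np.zeros(n)
    for _ in range(n_samples):
        order = rng.permutation(n)
        z = baseline.copy()
        prev = f(z)
        for i in order:
            z[i] = x[i]               # switch feature i on
            cur = f(z)
            phi[i] += cur - prev      # marginal contribution of i
            prev = cur
    return phi / n_samples

# Example: attributions sum to f(x) - f(baseline) (the efficiency axiom).
rng = np.random.default_rng(0)
f = lambda v: 2.0 * v[0] + v[1] * v[2]
x, b = np.array([1.0, 2.0, 3.0]), np.zeros(3)
print(baseline_shapley(f, x, b, 2000, rng), f(x) - f(b))
```

Replacing the baseline substitution with a conditional-expectation oracle would turn the same loop into a CES-style estimator, which is exactly where the two methods diverge.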
1908.08692 | 2969835258 | Automatic estimation of the number of people in unconstrained crowded scenes is a challenging task and one major difficulty stems from the huge scale variation of people. In this paper, we propose a novel Deep Structured Scale Integration Network (DSSINet) for crowd counting, which addresses the scale variation of people by using structured feature representation learning and hierarchically structured loss function optimization. Unlike conventional methods which directly fuse multiple features with weighted average or concatenation, we first introduce a Structured Feature Enhancement Module based on conditional random fields (CRFs) to refine multiscale features mutually with a message passing mechanism. In this module, each scale-specific feature is considered as a continuous random variable and passes complementary information to refine the features at other scales. Second, we utilize a Dilated Multiscale Structural Similarity loss to enforce our DSSINet to learn the local correlation of people's scales within regions of various size, thus yielding high-quality density maps. Extensive experiments on four challenging benchmarks well demonstrate the effectiveness of our method. Specifically, our DSSINet achieves improvements of 9.5% error reduction on Shanghaitech dataset and 24.9% on UCF-QNRF dataset against the state-of-the-art methods. | Crowd Counting: Numerous deep learning based methods @cite_33 @cite_13 @cite_20 @cite_40 @cite_46 @cite_6 @cite_10 have been proposed for crowd counting. These methods have various network structures and the mainstream is a multiscale architecture, which extracts multiple features from different columns/branches of networks to handle the scale variation of people. For instance, @cite_21 combined a deep network and a shallow network to learn scale-robust features. @cite_0 developed a multi-column CNN to generate density maps. HydraCNN @cite_45 fed a pyramid of image patches into networks to estimate the count. CP-CNN @cite_3 proposed a Contextual Pyramid CNN to incorporate the global and local contextual information for crowd counting. @cite_4 built an encoder-decoder network with multiple scale aggregation modules. However, the issue of the huge variation of people's scales is still far from being fully solved. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_10",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_40",
"@cite_45",
"@cite_46",
"@cite_13",
"@cite_20"
],
"mid": [
"2895051362",
"2520826941",
"2951955299",
"2517615595",
"",
"",
"2463631526",
"2964203052",
"2519281173",
"",
"2741077351",
"2964018834"
],
"abstract": [
"In this paper, we propose a novel encoder-decoder network, called Scale Aggregation Network (SANet), for accurate and efficient crowd counting. The encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps by using a set of transposed convolutions. Moreover, we find that most existing works use only Euclidean loss which assumes independence among each pixel but ignores the local correlation in density maps. Therefore, we propose a novel training loss, combining of Euclidean loss and local pattern consistency loss, which improves the performance of the model in our experiments. In addition, we use normalization layers to ease the training process and apply a patch-based test scheme to reduce the impact of statistic shift problem. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on four major crowd counting datasets and our method achieves superior performance to state-of-the-art methods while with much less parameters.",
"In this paper, we address the task of object counting in images. We follow modern learning approaches in which a density map is estimated directly from the input image. We employ CNNs and incorporate two significant improvements to the state of the art methods: layered boosting and selective sampling. As a result, we manage both to increase the counting accuracy and to reduce processing time. Moreover, we show that the proposed method is effective, even in the presence of labeling errors. Extensive experiments on five different datasets demonstrate the efficacy and robustness of our approach. Mean Absolute error was reduced by 20 to 35 . At the same time, the training time of each CNN has been reduced by 50 .",
"Crowd counting from a single image is a challenging task due to high appearance similarity, perspective changes and severe congestion. Many methods only focus on the local appearance features and they cannot handle the aforementioned challenges. In order to tackle them, we propose a Perspective Crowd Counting Network (PCC Net), which consists of three parts: 1) Density Map Estimation (DME) focuses on learning very local features for density map estimation; 2) Random High-level Density Classification (R-HDC) extracts global features to predict the coarse density labels of random patches in images; 3) Fore- Background Segmentation (FBS) encodes mid-level features to segments the foreground and background. Besides, the DULR module is embedded in PCC Net to encode the perspective changes on four directions (Down, Up, Left and Right). The proposed PCC Net is verified on five mainstream datasets, which achieves the state-of-the-art performance on the one and attains the competitive results on the other four datasets. The source code is available at this https URL.",
"Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (",
"",
"",
"This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.",
"In real-world crowd counting applications, the crowd densities vary greatly in spatial and temporal domains. A detection based counting method will estimate crowds accurately in low density scenes, while its reliability in congested areas is downgraded. A regression based approach, on the other hand, captures the general density information in crowded regions. Without knowing the location of each person, it tends to overestimate the count in low density areas. Thus, exclusively using either one of them is not sufficient to handle all kinds of scenes with varying densities. To address this issue, a novel end-to-end crowd counting framework, named DecideNet (DEteCtIon and Density Estimation Network) is proposed. It can adaptively decide the appropriate counting mode for different locations on the image based on its real density conditions. DecideNet starts with estimating the crowd density by generating detection and regression based density maps separately. To capture inevitable variation in densities, it incorporates an attention module, meant to adaptively assess the reliability of the two types of estimations. The final crowd counts are obtained with the guidance of the attention module to adopt suitable estimations from the two kinds of density maps. Experimental results show that our method achieves state-of-the-art performance on three challenging crowd counting datasets.",
"In this paper we address the problem of counting objects instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.",
"",
"We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd.",
"Region of Interest (ROI) crowd counting can be formulated as a regression problem of learning a mapping from an image or a video frame to a crowd density map. Recently, convolutional neural network (CNN) models have achieved promising results for crowd counting. However, even when dealing with video data, CNN-based methods still consider each video frame independently, ignoring the strong temporal correlation between neighboring frames. To exploit the otherwise very useful temporal information in video sequences, we propose a variant of a recent deep learning model called convolutional LSTM (ConvLSTM) for crowd counting. Unlike the previous CNN-based methods, our method fully captures both spatial and temporal dependencies. Furthermore, we extend the ConvLSTM model to a bidirectional ConvLSTM model which can access long-range information in both directions. Extensive experiments using four publicly available datasets demonstrate the reliability of our approach and the effectiveness of incorporating temporal information to boost the accuracy of crowd counting. In addition, we also conduct some transfer learning experiments to show that once our model is trained on one dataset, its learning experience can be transferred easily to a new dataset which consists of only very few video frames for model adaptation."
]
} |
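Most of the density-map regressors cited in the record above are trained against ground truth built by placing a Gaussian kernel on every annotated head. A minimal sketch of that preprocessing step follows; the fixed-sigma kernel is a simplification of the geometry-adaptive kernels mentioned in the MCNN abstract, and all names are illustrative.

```python
import numpy as np

def density_map(head_points, height, width, sigma=4.0):
    """Ground-truth density map from head annotations.

    Each annotated head (x, y) becomes a normalized 2-D Gaussian, so the
    map sums to the number of people. Geometry-adaptive kernels would
    instead scale sigma with the local spacing between heads.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dmap = np.zeros((height, width))
    for (px, py) in head_points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        g /= g.sum()              # keep the count exact per person
        dmap += g
    return dmap

points = [(30, 40), (32, 44), (100, 20)]
dmap = density_map(points, 128, 128)
print(dmap.sum())  # ~3.0, the crowd count
```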
1908.08692 | 2969835258 | Automatic estimation of the number of people in unconstrained crowded scenes is a challenging task and one major difficulty stems from the huge scale variation of people. In this paper, we propose a novel Deep Structured Scale Integration Network (DSSINet) for crowd counting, which addresses the scale variation of people by using structured feature representation learning and hierarchically structured loss function optimization. Unlike conventional methods which directly fuse multiple features with weighted average or concatenation, we first introduce a Structured Feature Enhancement Module based on conditional random fields (CRFs) to refine multiscale features mutually with a message passing mechanism. In this module, each scale-specific feature is considered as a continuous random variable and passes complementary information to refine the features at other scales. Second, we utilize a Dilated Multiscale Structural Similarity loss to enforce our DSSINet to learn the local correlation of people's scales within regions of various size, thus yielding high-quality density maps. Extensive experiments on four challenging benchmarks well demonstrate the effectiveness of our method. Specifically, our DSSINet achieves improvements of 9.5% error reduction on Shanghaitech dataset and 24.9% on UCF-QNRF dataset against the state-of-the-art methods. | Conditional Random Fields: In the field of computer vision, CRFs have been exploited to refine the features and outputs of convolutional neural networks (CNNs) with a message passing mechanism @cite_25 . For instance, @cite_29 used CRFs to refine the semantic segmentation maps of CNNs by modeling the relationship among pixels. @cite_44 fused multiple features with Attention-Gated CRFs to produce richer representations for contour prediction. @cite_28 introduced an inter-view message passing module based on CRFs to enhance the view-specific features for action recognition. | {
"cite_N": [
"@cite_44",
"@cite_29",
"@cite_28",
"@cite_25"
],
"mid": [
"2964088293",
"2124592697",
"2894942405",
"2161236525"
],
"abstract": [
"Recent works have shown that exploiting multi-scale representations deeply learned via convolutional neural networks (CNN) is of tremendous importance for accurate contour detection. This paper presents a novel approach for predicting contours which advances the state of the art in two fundamental aspects, i.e. multi-scale feature generation and fusion. Different from previous works directly considering multi-scale feature maps obtained from the inner layers of a primary CNN architecture, we introduce a hierarchical deep model which produces more rich and complementary representations. Furthermore, to refine and robustly fuse the representations learned at different scales, the novel Attention-Gated Conditional Random Fields (AG-CRFs) are proposed. The experiments ran on two publicly available datasets (BSDS500 and NYUDv2) demonstrate the effectiveness of the latent AG-CRF model and of the overall hierarchical framework.",
"Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.",
"In this paper, we propose a new Dividing and Aggregating Network (DA-Net) for multi-view action recognition. In our DA-Net, we learn view-independent representations shared by all views at lower layers, while we learn one view-specific representation for each view at higher layers. We then train view-specific action classifiers based on the view-specific representation for each view and a view classifier based on the shared representation at lower layers. The view classifier is used to predict how likely each video belongs to each view. Finally, the predicted view probabilities from multiple views are used as the weights when fusing the prediction scores of view-specific action classifiers. We also propose a new approach based on the conditional random field (CRF) formulation to pass message among view-specific representations from different branches to help each other. Comprehensive experiments on two benchmark datasets clearly demonstrate the effectiveness of our proposed DA-Net for multi-view action recognition.",
"Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy."
]
} |
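The CRF-based feature refinement described in the record above can be caricatured as a mean-field message-passing loop over scale-specific feature maps. The sketch below only illustrates the update structure (a unary term plus weighted messages from the other scales); the weights, shapes, and iteration count are made-up assumptions, not the paper's learned parameters.

```python
import numpy as np

def crf_refine(features, weights, n_iters=3):
    """Mean-field-style refinement of multiscale features.

    features: list of S arrays with a common shape (one per scale).
    weights:  S x S matrix; weights[s, t] scales the message t -> s.
    Each iteration, scale s receives a weighted sum of the other scales'
    current estimates and blends it with its own unary feature.
    """
    S = len(features)
    refined = [f.copy() for f in features]
    for _ in range(n_iters):
        messages = [sum(weights[s, t] * refined[t] for t in range(S) if t != s)
                    for s in range(S)]
        # unary term (original feature) plus incoming messages
        refined = [features[s] + messages[s] for s in range(S)]
    return refined
```

In the actual network the message weights would be produced by learned convolutions rather than scalars, but the fixed-point iteration has the same shape.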
1908.08692 | 2969835258 | Automatic estimation of the number of people in unconstrained crowded scenes is a challenging task and one major difficulty stems from the huge scale variation of people. In this paper, we propose a novel Deep Structured Scale Integration Network (DSSINet) for crowd counting, which addresses the scale variation of people by using structured feature representation learning and hierarchically structured loss function optimization. Unlike conventional methods which directly fuse multiple features with weighted average or concatenation, we first introduce a Structured Feature Enhancement Module based on conditional random fields (CRFs) to refine multiscale features mutually with a message passing mechanism. In this module, each scale-specific feature is considered as a continuous random variable and passes complementary information to refine the features at other scales. Second, we utilize a Dilated Multiscale Structural Similarity loss to enforce our DSSINet to learn the local correlation of people's scales within regions of various size, thus yielding high-quality density maps. Extensive experiments on four challenging benchmarks well demonstrate the effectiveness of our method. Specifically, our DSSINet achieves improvements of 9.5% error reduction on Shanghaitech dataset and 24.9% on UCF-QNRF dataset against the state-of-the-art methods. | Multiscale Structural Similarity: MS-SSIM @cite_37 is a widely used metric for image quality assessment. Its formula is based on the luminance, contrast and structure comparisons between the multiscale regions of two images. In @cite_22 , MS-SSIM loss has been successfully applied in image restoration tasks (e.g., image denoising and super-resolution), but its effectiveness has not been verified in high-level tasks (e.g., crowd counting). Recently, @cite_4 combined Euclidean loss and SSIM loss @cite_30 to optimize their network for crowd counting, but they can only capture the local correlation in regions with a fixed size. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_22"
],
"mid": [
"2133665775",
"1580389772",
"2895051362",
"2562637781"
],
"abstract": [
"Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http: www.cns.nyu.edu spl sim lcv ssim .",
"The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.",
"In this paper, we propose a novel encoder-decoder network, called Scale Aggregation Network (SANet), for accurate and efficient crowd counting. The encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps by using a set of transposed convolutions. Moreover, we find that most existing works use only Euclidean loss which assumes independence among each pixel but ignores the local correlation in density maps. Therefore, we propose a novel training loss, combining of Euclidean loss and local pattern consistency loss, which improves the performance of the model in our experiments. In addition, we use normalization layers to ease the training process and apply a patch-based test scheme to reduce the impact of statistic shift problem. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on four major crowd counting datasets and our method achieves superior performance to state-of-the-art methods while with much less parameters.",
"Neural networks are becoming central in several areas of computer vision and image processing and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is @math . In this paper, we bring attention to alternative choices for image restoration. In particular, we show the importance of perceptually-motivated losses when the resulting image is to be evaluated by a human observer. We compare the performance of several losses, and propose a novel, differentiable error function. We show that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged."
]
} |
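For concreteness, here is a window-less sketch of an MS-SSIM-style loss built from global image statistics. Real implementations (including the metric of @cite_37) compute the statistics in sliding Gaussian windows and calibrate per-scale weights; the uniform weighting and strided downsampling below are simplifying assumptions.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM from global image statistics (window-less sketch).

    Combines the usual luminance, contrast, and structure comparisons;
    c1 and c2 are the standard stabilizing constants for [0, 1] images.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ms_ssim_loss(pred, target, n_scales=3):
    """1 - mean SSIM over a dyadic pyramid: each 2x downsampling enlarges
    the effective region over which local correlation is measured."""
    vals = []
    for _ in range(n_scales):
        vals.append(ssim(pred, target))
        pred, target = pred[::2, ::2], target[::2, ::2]  # crude downsampling
    return 1.0 - np.mean(vals)
```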
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | The whole concept of adversarial attacks is quite simple: let us slightly change the input to a classifying neural net so that the recognized class will change from correct to some other class (the first adversarial attacks were made only on classifiers). The pioneering work @cite_27 formulates the task as follows: | {
"cite_N": [
"@cite_27"
],
"mid": [
"1673923490"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."
]
} |
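The formula announced at the end of the paragraph above did not survive extraction. For reference, the optimization task of @cite_27 can be written as follows; this is a LaTeX sketch of the well-known formulation, reconstructed here rather than quoted verbatim:

```latex
% Find the smallest L2 perturbation r that forces the classifier f to
% assign the (incorrect) target label l, keeping x + r a valid image.
\begin{align*}
  \min_{r}\;      & \|r\|_2 \\
  \text{s.t.}\;\; & f(x + r) = l, \qquad x + r \in [0, 1]^m
\end{align*}
```

In practice this constrained problem is relaxed to a box-constrained penalty formulation solvable with L-BFGS-B, as the next record notes.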
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | In @cite_27 the authors propose to use a quasi-Newton L-BFGS-B method to solve the task formulated above. A simpler and more efficient method, called the Fast Gradient Sign Method (FGSM), is proposed in @cite_39 . This method suggests using the gradients with respect to the input and constructing an adversarial image using the following formula: @math (or @math in case of a targeted attack). Here @math is a loss function (e.g. cross-entropy) which depends on the weights of the model @math , input @math , and label @math . Note that usually one step is not enough, so the update described above is repeated a number of times, each time projecting the result back onto the allowed neighborhood of the initial input (e.g. @math ). This is called projected gradient descent (PGD) @cite_20 . | {
"cite_N": [
"@cite_27",
"@cite_20",
"@cite_39"
],
"mid": [
"1673923490",
"2640329709",
"1945616565"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.",
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."
]
} |
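A minimal NumPy sketch of the FGSM/PGD loop described above. It assumes a callable grad_loss(x, y) that returns the gradient of the classification loss with respect to the input; that helper is hypothetical and would come from autodiff in any deep learning framework.

```python
import numpy as np

def pgd_attack(x, y, grad_loss, eps=8 / 255, alpha=2 / 255, n_iters=20):
    """Untargeted PGD: ascend the loss, project back into the L-inf ball.

    eps is the maximum per-pixel perturbation, alpha the step size
    (typical image-attack defaults, chosen here for illustration).
    """
    x_adv = x.copy()
    for _ in range(n_iters):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y))  # FGSM step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv
```

A single iteration with alpha = eps recovers plain FGSM; flipping the sign of the gradient term gives the targeted variant.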
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | It turns out that using momentum in the iterative construction of adversarial examples is a good way to increase the robustness of the adversarial attack @cite_17 . | {
"cite_N": [
"@cite_17"
],
"mid": [
"2950906520"
],
"abstract": [
"Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of the existing adversarial attacks can only fool a black-box model with a low success rate because of the coupling of the attack ability and the transferability. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. We won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions."
]
} |
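The momentum variant (MI-FGSM of @cite_17) changes only the update direction of the PGD loop. A hedged sketch follows, with the same hypothetical grad_loss helper as before:

```python
import numpy as np

def mi_fgsm(x, y, grad_loss, eps=8 / 255, n_iters=10, mu=1.0):
    """Momentum iterative FGSM: the accumulated gradient g stabilizes the
    update direction across iterations, which in practice also improves
    transferability to other models."""
    alpha = eps / n_iters
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(n_iters):
        grad = grad_loss(x_adv, y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```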
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | All the aforementioned adversarial attacks suggest that we restrict the maximum per-pixel perturbation (in the case of an image as input), i.e., use the @math norm. Another interesting case is when we do not concentrate on the maximum perturbation but strive to attack the fewest possible number of pixels (the @math norm). One of the first examples of such an attack is the Jacobian-based Saliency Map Attack (JSMA) @cite_6 , where the saliency maps are constructed from the pixels that are the most prone to cause the misclassification. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2949152835"
],
"abstract": [
"Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97 adversarial success rate while only modifying on average 4.02 of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification."
]
} |
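The core of JSMA is a saliency score computed from the model's forward derivatives (its Jacobian). Below is a sketch of that scoring rule for "increasing" perturbations, assuming the Jacobian is provided as an (n_classes, n_features) array; the greedy pixel-selection loop around it is summarized in a comment rather than implemented.

```python
import numpy as np

def saliency_map(jacobian, target):
    """JSMA-style saliency scores for increasing-pixel perturbations.

    jacobian: (n_classes, n_features) array of forward derivatives
    dF_j / dx_i. A feature scores 0 unless increasing it raises the target
    class score while lowering the combined score of all other classes.
    """
    dt = jacobian[target]                       # dF_target / dx_i
    do = jacobian.sum(axis=0) - dt              # summed over other classes
    return np.where((dt > 0) & (do < 0), dt * np.abs(do), 0.0)

# The attacker perturbs the highest-scoring feature(s), recomputes the
# Jacobian, and repeats until the model outputs the target class or a
# distortion budget (number of modified pixels) is exhausted.
```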
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Another extreme case of attack for the @math norm is a one-pixel attack @cite_0 . The authors use differential evolution, an algorithm from the class of evolutionary algorithms, for this specific case. It should be mentioned that not only classification neural nets are prone to adversarial attacks. There are also attacks for detection and segmentation @cite_11 . | {
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2765424254",
"2950774971"
],
"abstract": [
"Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 70.97 of the natural images can be perturbed to at least one target class by modifying just one pixel with 97.47 confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks.",
"It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, generally exist for deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack."
]
} |
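Here is a hedged sketch of the one-pixel attack's differential-evolution search, assuming a black-box scoring function target_conf(img) that returns the confidence to be minimized (e.g. the true-class probability); only queries to that function are used. The DE variant shown (rand/1 mutation, greedy selection, no crossover) is a simplification of the paper's setup.

```python
import numpy as np

def one_pixel_attack(img, target_conf, n_pop=50, n_gens=30, rng=None):
    """Differential-evolution search for a single adversarial pixel.

    img: float image of shape (h, w, 3) with values in [0, 1].
    Candidates are (x, y, r, g, b) vectors describing where to place the
    pixel and which color to write there.
    """
    rng = rng or np.random.default_rng(0)
    h, w, _ = img.shape
    hi = np.array([w - 1, h - 1, 1.0, 1.0, 1.0])
    pop = rng.uniform(0, 1, (n_pop, 5)) * hi

    def apply(c):
        out = img.copy()
        out[int(c[1]), int(c[0])] = c[2:]      # overwrite one pixel
        return out

    fitness = np.array([target_conf(apply(c)) for c in pop])
    for _ in range(n_gens):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c), 0, hi)  # DE/rand/1 mutation
            f = target_conf(apply(trial))
            if f < fitness[i]:                         # greedy selection
                pop[i], fitness[i] = trial, f
    return apply(pop[np.argmin(fitness)])
```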
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Another interesting property of adversarial attacks is that they are transferable between different neural networks @cite_27 . An attack prepared using one model can successfully confuse another model with a different architecture and training dataset. | {
"cite_N": [
"@cite_27"
],
"mid": [
"1673923490"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Usually, the adversarial attacks which are constructed using the specific architecture and even the weights of the attacked model are called white-box attacks. If the attack has no access to model weights then it is called a black-box attack @cite_36 . | {
"cite_N": [
"@cite_36"
],
"mid": [
"2603766943"
],
"abstract": [
"Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24 of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19 and 88.94 . We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Usually, attacks are constructed for the specific input (e.g. photo of some object). This is called an input-aware attack. Adversarial attacks are called universal when one successful adversarial perturbation can be applied for any image @cite_23 . | {
"cite_N": [
"@cite_23"
],
"mid": [
"2953047670"
],
"abstract": [
"Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images."
]
} |
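The universal setting can be sketched as follows, assuming a PyTorch classifier and data loader: instead of optimizing a fresh perturbation per input, a single tensor `v` is trained over the whole dataset and projected back into a small norm ball. This is a simplified gradient-ascent variant for illustration only, not the DeepFool-based procedure of @cite_23; the input size and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=10 / 255, lr=0.05, epochs=5):
    """Learn ONE image-agnostic perturbation v that hurts accuracy on any input."""
    v = torch.zeros(3, 224, 224, requires_grad=True)  # assumes 224x224 RGB inputs
    opt = torch.optim.SGD([v], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            # Ascend the classification loss, i.e., minimize its negation.
            loss = -F.cross_entropy(model((x + v).clamp(0, 1)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                v.clamp_(-eps, eps)  # project v back into the l_inf ball
    return v.detach()
```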
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | Although adversarial attacks are quite successful in the digital domain (where we can change the image at the pixel level before feeding it to a classifier), in the physical (i.e., real) world the efficiency of adversarial attacks is still questionable. Kurakin et al. demonstrate the potential for further research in this domain @cite_5. They discovered that if an adversarial image is printed on paper and then shot by a camera phone, it can still successfully fool a classification network. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2460937040"
],
"abstract": [
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | It turns out that the most successful paradigm for constructing real-world adversarial examples is the Expectation Over Transformation (EOT) algorithm @cite_28. This approach takes into account that in the real world the object usually undergoes a set of transformations (scaling, jittering, brightness and contrast changes, etc.). The task is to find an adversarial example that is robust under this set of transformations @math and can be formulated as follows: | {
"cite_N": [
"@cite_28"
],
"mid": [
"2736899637"
],
"abstract": [
"Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world."
]
} |
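The formula announced at the end of the related-work paragraph above is truncated in this record. For reference, a standard statement of the EOT objective, following @cite_28, is sketched below in LaTeX; here T is the chosen distribution of transformations, y_target the attacked label, and d a distance in the transformed space. The exact notation of the source paragraph is not recoverable, so this is a reconstruction, not a quotation.

```latex
\hat{x} = \arg\max_{x'} \; \mathbb{E}_{t \sim T}\!\left[\log P\!\left(y_{\text{target}} \mid t(x')\right)\right]
\quad \text{subject to} \quad
\mathbb{E}_{t \sim T}\!\left[d\!\left(t(x'),\, t(x)\right)\right] < \epsilon
```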
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | Another work using @math -limited attacks proposes to attack facial recognition neural nets with adversarial eyeglasses @cite_21. The authors propose a method to print an adversarial perturbation on the eyeglasses frame with the help of a Total Variation (TV) loss and a non-printability score (NPS). The TV loss is designed to make the image smoother; this makes the attack more stable under the different image interpolation methods used on devices and makes it less conspicuous to humans. The NPS is designed to deal with the mismatch between digital RGB values and the ability of real printers to reproduce them. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2535873859"
],
"abstract": [
"Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection."
]
} |
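The two regularizers described above admit a short sketch in PyTorch, given below. The TV term follows the usual neighboring-pixel difference; for NPS this sketch scores each pixel by its distance to the closest printable color, a common reimplementation, whereas @cite_21 takes a product of distances over the printable set. The palette `printable_colors` is assumed to be measured from a printed color chart.

```python
import torch

def total_variation(patch):
    """TV loss: penalize differences between neighboring pixels. patch: (C, H, W)."""
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().sum()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().sum()
    return dh + dw

def non_printability_score(patch, printable_colors):
    """NPS: how far each pixel is from anything the printer can reproduce.

    patch: (3, H, W) in [0, 1]; printable_colors: (P, 3) RGB triplets.
    """
    pixels = patch.permute(1, 2, 0).reshape(-1, 1, 3)                # (H*W, 1, 3)
    dists = ((pixels - printable_colors.unsqueeze(0)) ** 2).sum(-1)  # (H*W, P)
    return dists.min(dim=1).values.sum()
```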
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | In general, most of the subsequent works on real-world attacks use the concepts of @math -limited perturbations, EOT, the TV loss, and NPS. Let us briefly list them. In @cite_40 the authors construct a physical attack on a traffic sign recognition model, using EOT and NPS to make either adversarial posters (attacking the whole traffic sign area) or adversarial stickers (black-and-white stickers on a real traffic sign). The works of @cite_1 @cite_15 also use some form of EOT to attack traffic sign recognition models. | {
"cite_N": [
"@cite_40",
"@cite_1",
"@cite_15"
],
"mid": [
"2759471388",
"2783882201",
""
],
"abstract": [
"Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations.Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm,Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. Witha perturbation in the form of only black and white stickers,we attack a real stop sign, causing targeted misclassification in 100 of the images obtained in lab settings, and in 84.8 of the captured video frames obtained on a moving vehicle(field test) for the target classifier.",
"We propose a new real-world attack against the computer vision based systems of autonomous vehicles (AVs). Our novel Sign Embedding attack exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary's desired traffic sign with high confidence. Our attack greatly expands the scope of the threat posed to AVs since adversaries are no longer restricted to just modifying existing traffic signs as in previous work. Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world. We ensure this by including a variety of possible image transformations in the optimization problem used to generate adversarial samples. We verify the robustness of the adversarial samples by printing them out and carrying out drive-by tests simulating the conditions under which image capture would occur in a real-world scenario. We experimented with physical attack samples for different distances, lighting conditions, and camera angles. In addition, extensive evaluations were carried out in the virtual setting for a variety of image transformations. The adversarial samples generated using our method have adversarial success rates in excess of 95 in the physical as well as virtual settings.",
""
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | A number of works are devoted to adversarial attacks on traffic sign detectors in the real world. One of the first works @cite_22 proposes an adversarial attack on a Faster R-CNN @cite_35 stop sign detector using a sort of EOT (a handcrafted estimation of a viewing map). Several works have used EOT, NPS, and the TV loss to attack traffic sign recognition models based on Faster R-CNN and YOLOv2 @cite_42 @cite_19 @cite_45 @cite_8. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_8",
"@cite_42",
"@cite_19",
"@cite_45"
],
"mid": [
"2953106684",
"2775467454",
"",
"2951433694",
"2890883923",
"2778115935"
],
"abstract": [
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.",
"",
"We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes; it predicts detections for more than 9000 different object categories. And it still runs in real-time.",
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems. Code related to this paper is available at: https: github.com shangtse robust-physical-attack.",
"Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given the fact that state-of-the-art objection detection algorithms are harder to be fooled by the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs, which can fool the YOLO object detector and can also attack Faster-RCNN with relatively high success rate based on transferability. Furthermore, our algorithm can compress the size of the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors, and will present the algorithm."
]
} |